TL;DR: we’ve enabled people to compile Zephyr programs from a computer with no toolchain installed, almost instantly.

Part of our charter at Golioth is to help people prototype and scale IoT devices faster. That’s why we offer an open source SDK built on top of Zephyr. We think this represents a “fast forward” or “cheat code” for quickly standing up an IoT device prototype. On the cloud side, our servers represent hundreds of hours of customization and testing; you can instantly connect and get access to resources that allow hardware and firmware developers to scale to thousands or millions of devices. But sometimes it can be scary to get started in a new ecosystem or Real Time Operating System (RTOS) like Zephyr, even if it will speed things up later. As such, we do public and private training for companies and individuals.

As part of the resources we offer, we maintain a Training site that walks people through how to get started using Zephyr, normally targeting remote training. You can follow along right now; you’ll need to purchase an Adafruit MagTag board and sign up for a free Dev Tier account, but everything else is covered on the training site. At the end of the training, you should understand how to interact with hardware in Zephyr and send data to and from the Golioth cloud over WiFi. It’s a short jump from there to re-target other hardware, including your custom designs.

The tripping points for the training often revolve around the installation process. This is multi-pronged:

  • The size of a Zephyr install is relatively large, even when you are only targeting a specific platform. Having multiple people in a room, even with good WiFi or network connectivity, means that the shared bandwidth will be a limiting factor. More trainees means slower downloads.
  • Everyone comes to training with a computer in a different state. They might have tried to install Zephyr tools in the past, or they might have a particularly rare Linux distro, or many other possible variations. It would be best if everyone showed up with a fresh OS install…but that is very unrealistic.
  • There are different expectations around how installations should go. Many embedded engineers are “Windows first” and expect a complete IDE for any new platform. Some silicon vendors, such as Nordic Semiconductor, help to support this in Zephyr. But Zephyr originally targeted Linux-based machines, and we have found that the smoothest install flow across all of the platforms Zephyr can target is a Linux-first one.

In this article, we’re going to talk about our attempt to normalize setups and have pre-installed tools using Kasm and Docker. These are not the only tools in this space; we have previously written about GitPod and are investigating GitHub Codespaces, but this is a look at one of the latest experiments we’re running at Golioth.

Kasm thin client

The concept of a browser-based client or “thin client” is nothing new. They were all the rage back in the days of time-share servers (really, those were “dumb terminals”) and then again in the 90s as computing became more ubiquitous throughout the office (with a centralized set of servers). The difference is that now things are much more graphical and run completely inside the browser.

Kasm was started in 2017 and includes an open source project run by Kasm Technologies. The company behind Kasm has a per-seat licensing model, or they will run the servers directly for you (once you’re past 5 trial seats). They specialize in streaming containerized desktops and apps to the browser. Once you log into a Kasm server, you are able to launch a range of containers, normally a desktop view or a single app that will load up in your browser. You can try this for yourself on the Kasm demo page.

The server that we’re running on is a pre-configured image that I pulled from the Digital Ocean marketplace. I was able to install all of the required software on a provisioned server running in some unknown datacenter. All I did was log in the first time to get my credentials for a user and an admin, and the rest of my interaction was on the web interface that the Kasm server presents to me as an admin.

Docker

As a hardware engineer, Docker is one of those things I heard about for a long time and never really “got”. I’m still not sure I do. But following the tutorial for customizing a Kasm container, I started to understand a bit more. In that set of tutorials I started from a base operating system image (Ubuntu Focal) that allowed visualization through the browser. Then I was able to start customizing, adding things like custom files on the desktop, custom icons to launch programs I installed, or background images pulled in from the web. It was in this customization section that I could add all of the commands from the Golioth Docs for installing Zephyr tools.

My layman’s explanation of Docker would be: “Creating a virtual computer where I can automatically install a bunch of software using shell scripts. Once I have built that virtual computer, I am able to use it over and over again, including different instances of that virtual computer (as in this Kasm scenario).” The analogy would be if I bought a bunch of laptops, had an install CD (remember those?) with all of the required software on it, and then mailed a freshly installed laptop to everyone taking our training. Sound crazy? That’s one of the best solutions we have seen: a trainer brings a Pelican case with 24 freshly imaged laptops to on-site training. Their training works flawlessly every time!

I don’t have much else to mention about Docker aside from the idea that it’s possible to script a bunch of install commands that match the install instructions we have on our Zephyr getting started guide. In fact, I used those very directions to build the container shown in the video above. So all I’m doing in this case is automating the install process, doing it once, and then deploying the container (with all of the software and dependencies installed) over and over again for different users.
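To make that concrete, here is a rough sketch of the kind of install script that ends up baked into the container image. It mirrors the commands from the Zephyr getting started guide at the time of writing; the exact package list, paths, and versions are illustrative and will drift as Zephyr evolves.

# Host dependencies for building Zephyr (illustrative package list)
sudo apt-get update && sudo apt-get install -y --no-install-recommends \
  git cmake ninja-build gperf ccache dfu-util device-tree-compiler wget \
  python3-dev python3-pip python3-venv xz-utils file make gcc

# west is Zephyr's meta-tool for fetching the tree and building
pip3 install west

# Pull Zephyr and its modules, then install the Python dependencies
west init ~/zephyrproject
cd ~/zephyrproject
west update
west zephyr-export
pip3 install -r ~/zephyrproject/zephyr/scripts/requirements.txt
# (the Zephyr SDK / toolchain install step would follow here)

Running commands like these once, during the container build, is what lets every trainee skip the long download and setup on their own machine.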

Challenges

We don’t think this is the ultimate solution for our training, so much as an experiment that showcases what we can do with containerized solutions. There are some remaining challenges, and we would love to have some help from our community.

Loading firmware onto the device

Currently our plan (as shown in the video) is to have our users/trainees pull the final built binary down to their local computer and load it onto a device like the MagTag. This echoes the way the mbed online compiler worked.

If there is a bootloader and a USB-to-serial connection, it’s possible to load firmware directly onto the embedded device. In the case of some Espressif boards, this would mean having something like esptool.py installed locally on your machine. There are an increasing number of tools that make this process easier, such as an ESP tool that allows you to load firmware using WebUSB. Certain specialized bootloaders, like the one that ships by default on the MagTag, load UF2 files. When the MagTag is plugged in over USB and a sequence of buttons is pressed, the device shows up as a mass storage drive. You drop a UF2-formatted binary (just an alternative packaging of the compiled code) onto the drive, and the device reboots and starts running the code.
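As a hedged example, flashing a binary pulled down from the container onto an ESP32-S2 board like the MagTag with a locally installed esptool.py might look something like this; the chip argument, serial port, and flash offset are placeholders that depend on the board and its bootloader setup:

# Assumes esptool.py is installed locally (pip3 install esptool) and that
# zephyr.bin was downloaded from the container build. Port and offset are
# illustrative and board-dependent.
esptool.py --chip esp32s2 --port /dev/ttyACM0 write_flash 0x0 zephyr.bin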

If it’s a board without a bootloader, the user would need a debugger and local tools to communicate with that debugger, such as a J-Link device and the J-Flash software. This means they would still need some OS-specific loader tools to get the binary into the embedded device. The user would not be able to take advantage of the built-in tools in west that allow direct loading onto the device.
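For comparison, this is roughly the command a full local install would use with a J-Link attached; it is exactly this step that the browser-based container cannot perform, because the debug probe is plugged into your laptop rather than the remote server:

# Runs on a machine with the Zephyr workspace, the J-Link tools, and the
# build directory present locally; not possible from inside the container.
west flash --runner jlink --build-dir build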

You steppin’?

If you would like to do step-through debugging instead of “printf/printk” debugging, you simply need to download a different file from the container. If you download the zephyr.elf file instead of the zephyr.bin file, you can load it into a 3rd party debugger like Segger Ozone (made by the same company as the J-Link). We have done some experiments with this in the past, including analyzing where the device is spending its time using SystemView. This would once again require installing local programs that can talk over the USB port to something like a J-Link.

Experimental port forwarding and WebUSB

Some GDB debuggers/servers host control of the debugger over a port on the machine’s localhost. We have some experiments going where we forward this port to the container, so we can drive the hardware debugger directly from software running inside the container.
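As a sketch of one such experiment, the idea is to run the GDB server next to the hardware and forward its port up to the container; the binary names, device name, and hostname below are placeholders rather than a tested recipe:

# On the laptop with the J-Link attached: start the GDB server (2331 is the
# default J-Link GDB server port) and forward that port to the Kasm host.
JLinkGDBServer -device nRF52840_xxAA -if SWD -port 2331 &
ssh -R 2331:localhost:2331 user@kasm-server

# Inside the container: attach GDB to the forwarded port.
gdb-multiarch build/zephyr/zephyr.elf -ex "target remote localhost:2331"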

We have also heard some whispers of a WebUSB implementation that can tunnel to the container. That way we could plug a board into our host machine (i.e. my laptop), connect to it over WebUSB, and then forward all information along to the container machine (i.e. the browser-based desktop running on the Kasm server).

We would love to hear about other projects that are trying this.

Shared resources

The final challenge we are dealing with is the fact that we’re basically “renting” a computer to do exactly what we could be doing with the host machine sitting right in front of us. Most developers have access to very powerful machines and we are instead using the resources on a remote machine (the Kasm server). The cost of standardization is the cost of renting server time for each person in the workshop. It might be worth it, but it is a constraint and a challenge.

Containers are another tool

Anyone reading this with a web background is likely thinking, “Yeah, containers, cool, 2010 called and wants its headline back.” But we are excited because these tools are finally making their way into the historically sluggish embedded industry. While our use of containers is mostly around zero-install-time training, others are using containers to automate testing and implement software engineering best practices across the range of devices they have on their desks or in the field.

We’d love to hear how you think we can improve our training and make it easier for you to learn more about Golioth, Zephyr, and building code instantly. Check out our forums, our Discord, ping us on Twitter, or send us an email at [email protected]

Hello from ZDS!

This week the Golioth team is at Zephyr Developer Summit. Previously we announced that we’ll be there and shared the talks we are presenting. We will post those shortly after the conference takes place. In the meantime, let’s recap how we got here in the first place and share a little bit more about what we’re showcasing.

Why Zephyr?

In short, because it helps our users. We are members of the Technical Steering Committee (TSC) and have been almost since the inception of the company. We built our first Device SDK on top of Zephyr because of the broad hardware support and high level integration of Golioth features into the Real Time Operating System (RTOS).

The assertion that “Zephyr helps our users” might be extra frustrating to beginners: Zephyr, and RTOSes more broadly, represents a tricky new set of skills that may be foreign to firmware or hardware engineers. For beginners coming from the hobby space, it can be an extra rude introduction to the world of command-line compilation and a large ecosystem. However, connecting to the internet is a difficult task, especially for custom hardware, and we think that Zephyr represents a great first step towards managing those devices over time. We are committed to pushing for more user-friendly code and methods from the Zephyr Project, and we will continue to publish best practices on our blog and our YouTube channel to help people get connected.

Showcase

One thing we’re excited about is showcasing how Golioth works to members of the community. We have been developing different “color coded” demos to make them a bit more memorable for folks who stop by our booth. Each of these demos features a hardware (device) component and a dashboard component, in order to visualize the data that is on the Golioth Cloud.

This is the first time we have showcased the “Aludel”, which is our internal platform for prototyping ideas and switching out different development boards and plug-in sensors. We will post more about this in the future, including our talk on the subject.

Red Demo

The Red Demo is our showcase of devices running OpenThread on Zephyr; this is part of our larger interest in Thread, which we see as a very interesting way to connect a large range of sensors to the internet securely. We have been excited to show how we can use low power devices like the Nordic nRF52840 to communicate directly with the Golioth Cloud.

The devices we are using in this case are off-the-shelf multi-sensor nodes from Laird called the BT510. This hardware has additional sensors on the board, which we integrated with LightDB Stream to send time-series data back to Golioth. This was fast work: thanks to Laird’s Zephyr support, it was as simple as calling out the board when we compiled the demo firmware, as shown below.
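A minimal sketch of what that build invocation might look like, assuming the in-tree bt510 board definition and a placeholder application directory:

# The bt510 board is supported in-tree in Zephyr; the app path is a placeholder.
west build -p -b bt510 path/to/demo_app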

We then capture the data from these on the Red Demo Dashboard, showing both historical and live data for the sensors.

 

Green Demo

The Green Demo showcases LightDB State, our real-time database that can be used to control a wide range of devices in a deployment. On the device side, it uses the Aludel platform to measure a light sensor, as would happen in a greenhouse. There is also a secondary Zephyr-based device inside a lamp, representing a grow light that might be inside a grow house. The lamp is set up to “listen” to commands from another node, in this case the Aludel.

LightDB State is used to control elements like “update rate” to regulate the flow of information. It also lets us monitor critical device variables on an ongoing basis and set up logic on the web to take action as a result. Command and control variables can be set from multiple places, including a custom mobile app, the Golioth Console, a visualization platform, a web page, or (as is the case here) even from another device!

Our Green Demo Dashboard (below) again showcases live and historical information, as well as the current status of the connected lamp.

As an added bonus, we run some of the back-end logic from a Node-RED instance, including the control logic that takes the light intensity sensor output and calculates how bright the lamp should be. Because this is written in Node-RED, we can include an additional input from a mobile app to control the “target intensity”. In this way, people at the booth can adjust the lamp output if the exhibition space is brighter or darker. Plus…it looks cool!

Blue Demo

The Blue Demo helps to showcase how data migrates into and out of Golioth. Using Output Streams, you can export all cloud events to 3rd party providers like AWS, Azure, and Google Cloud. Buttons on the Blue faceplate switch the output being sent back to the cloud. The sensor readings being exported to all 3 clouds can be turned on or off by changing which variables are exported from the device.

On the device side, we capture sensor readings using our Aludel platform. The sensor is a BME280 (an in-tree sensor in Zephyr) on a Feather form-factor dev board, talking to the network through an external WizNet W5500 Ethernet chip. The Blue Demo Dashboard showcases the live data, and of course the data is being exported simultaneously to the 3 cloud platforms in real time.

Orange Demo

Golioth is a “middleware” built on top of Zephyr RTOS, which means you can use it to implement new features on top of already-existing hardware. This demo uses the Nordic Semiconductor Thingy91 with custom firmware to send GPS data back over the cellular network to Golioth using LightDB Stream. This demo also has Golioth Logging and Device Firmware Update, which are easy to add to any project as an additional service for troubleshooting or in-field updates.

On the dashboard side, we wanted different ways to showcase this data, including “latest update”. Having access to the raw data is useful for anyone wanting to try asset tracking applications. We’re excited to be able to showcase this data as it dynamically flows into the Golioth Console and back out to the Grafana dashboard.

Future showcases

We’re excited to be showcasing our demos at the Zephyr Developer Summit, but these are moving targets! We will continue to update them and pull in new features for future events. We will be at Embedded World in two weeks (June 20-24th) and will have many of the same demos there.

The Zephyr Developer Summit (ZDS) is coming up June 7th-9th, 2022 in Mountain View, California at the Computer History Museum. Golioth will be there and we’re very excited to interact with fellow users, developers, and stakeholders in the open source real-time operating system (RTOS) known as Zephyr!

We love Zephyr

People reading this blog will not be surprised to know that we love Zephyr. We write about it quite often, and it is the basis of our Device SDK. As a result, many of our samples and demos are built using Zephyr. We often talk about Zephyr being an indicator that a hardware device will work with Golioth; all you need is a network connection, a board running Zephyr, and a little bit of storage overhead to hold the Golioth code. It’s the hardware interoperability of Zephyr that allows Golioth users to target a wide range of platforms, including microcontrollers from Espressif, Infineon, Intel, Microchip, NXP, Nordic Semiconductor…and more being added every day!

Our plans at ZDS

We’re excited to be returning to ZDS. Last year we officially announced Golioth to the world at ZDS, and talked about how our platform works within the Zephyr ecosystem. We hope to have another year of connection, this time in person and online. Let’s look at how we’ll be participating.

Sponsoring/Showcase

We are helping to sponsor ZDS this year. We believe in the mission of the project and the conference and wanted to be part of it. We will also be showcasing Golioth at a vendor table at the conference. If you would like to see Golioth in action, you can stop by at any time to ask questions and see demos. You can, of course, also try out Golioth at any time using our Dev Tier plan, which gives anyone up to 50 free devices on the platform.

Giving Talks

We will be presenting a range of talks at ZDS:

  • What chip shortage? How we use Zephyr for truly modular hardware
    • Chris and Mike from Developer Relations will highlight the Aludel, an internal hardware platform we’ve built as a customizable solution that can switch out hardware pieces without major redesign. This modular hardware showcases a path for hardware and firmware teams to unify their codebase using Zephyr while targeting a wide range of hardware. Being able to swap out a sensor, microcontroller, or radio but keep the main board, or go from outdoor air monitoring to indoor monitoring is really powerful. Zephyr makes it much easier to create alternate builds and manage firmware pipelines to hardware variants.
  • Connecting Zephyr Logging to the Cloud over Constrained Channels
    • Our resident Zephyr expert Marcin will cover an approach to preparing Zephyr logging messages for transmission through a constrained networking layer, such as a cellular connection. This includes CBOR compression on all logging messages, including special handling around binary payloads. There is also an interface to a CoAP library to take advantage of smaller payloads and standardized format to a cloud backend. Additional tooling is included for selectable acknowledgement of messages, to handle high priority and high traffic scenarios.
  • Zephyr <3 Internet: How Zephyr speeds implementation for new IoT devices
    • I (Jonathan, CEO) will make a case to people outside of the Zephyr ecosystem for why they should adopt the platform, and contrast the difficulties with other RTOS solutions. Networking concepts are so baked into Zephyr that it fundamentally changes the cost for anyone buying into the ecosystem. From vendors adding modems to developers building apps, the underlying framework saves time and engineering complexity.
  • End-to-end IoT development with Zephyr
    • Founding engineer Alvaro will cover the options for getting a Zephyr app connected (WiFi, Ethernet, Cellular), selecting the right data encoding (JSON/CBOR), securing the data transfer (DTLS/TLS), and choosing a protocol (HTTP/MQTT/CoAP). But that’s not the end of the story: the cloud needs to manage which devices are allowed to connect, consume the data being received, open up options for using that data, and stay aware of the continued state of the hardware. And once you have the data, you need to build a user-facing application on top of it.

Giving a workshop

Hands-on demos are a critical part of understanding a new system. This is true of both Zephyr and Golioth. We wanted to showcase how Golioth works to Zephyr users, while also helping people get a real piece of hardware talking to the cloud. We’re giving a workshop called “Hands-on with Zephyr-based IoT Hardware – Data Goes In, Data Comes Out, Data Goes Up”. This is a hands-on developer training showing how to get a finished piece of hardware utilizing the various features that Zephyr has to offer. The main thrust of the training is getting up and running with the Zephyr toolchain, implementing examples on a piece of hardware (provided), and interacting with cloud services. Attendees will learn about abstraction layers around things like CoAP and CBOR, and experience a real-world example of a smart device talking back to the Golioth Cloud. The workshop also exposes attendees to web-side technologies and shows how to export data to external commercial services like AWS, Azure, and GCP.

Meeting with users and partners

We love our community and are always looking to meet new people within it. Interested in setting up a time to discuss something? Email [email protected]

Should you attend?

If you’re someone already developing for Zephyr and pushing code upstream, this is the best opportunity to meet with others from the community and continue to build your skills. We think this is a perfect event for you!

If you’re new to Zephyr, the content can seem a bit intimidating…but fear not! The first half day of the conference (June 7th starting at 1 pm) is the “Intro to Zephyr” day, and this is a great introduction to the platform and how you can build your skills using Zephyr. There are also reduced cost tickets for students, if you’re still learning. We think if you’re looking to build a product with Zephyr in the future, or already are building with Zephyr, it’s a worthwhile experience to be there.

See you there!

We’re excited to meet more people and hear the other great talks that will be happening at the 2022 Zephyr Developer Summit. While we definitely plan to share the talks after the fact, and you can also participate in the virtual conference, we still hope to see you there!

This guest post is contributed by Asgeir Stavik Hustad, a Golioth community member who is active on the Golioth Discord. Reach him on Twitter at @AsgeirSH.

This tutorial was inspired by, and is a response to, the earlier post about building your Zephyr application in a standalone folder. I have done exactly that before, but I also wanted to include all my dependencies in that separate folder.

Background and motivation

I need to maintain different firmware with different Zephyr versions and trees. For example, I maintain the following directories:

  • Nordic’s Zephyr-variant (NCS) for the nRF9160
  • Base Zephyr for Atmel-MCUs
  • Base Zephyr, but locked to a particular version (i.e. “2.7.0”)

We also have several custom boards. These are currently maintained in each project, but could be moved to a separate dependency if we want to use the same board overlay files in multiple projects.

Instead of trying to swap a single Zephyr-installation between all of these, I did some research into using west and its manifest file to automatically set up my project folders to include all dependencies. I also wanted to ensure our build server didn’t require any manual work to build different projects. The Zephyr docs present this topic in depth, and are recommended reading if you want to set this up.

Let’s look at how we can set up a project to fit a wider range of needs.

Project structure

Most of my projects are kept in my “Dev” folder, so for this example we’ll be using ~/Dev/app_zephyr as the root directory of the project.

I put my application source in application, which is further split into at least boards and src (you can add any folder you like here). You’ll note this is the same structure as any of the Zephyr or Golioth samples you see; in fact, you can copy a sample as the starting point (such as <Zephyr SDK Install location>/zephyr/samples/basic/blinky). The other folders include deps for dependencies and build for the build output folder.

Inside the root folder, add .west/config. This is a plain text file describing to west where it should look for the manifest file and where Zephyr should be placed.

[manifest]
path = application
[zephyr]
base = deps/zephyr
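One way to arrive at this layout (a sketch, assuming west is already installed) is to let west register the manifest and then fetch everything; the zephyr base entry can also be set explicitly with west config:

cd ~/Dev/app_zephyr
west init -l application        # registers application/west.yml as the manifest
west update                     # clones Zephyr, mcuboot, and imported modules into deps/
west config zephyr.base deps/zephyr   # matches the [zephyr] section shown above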

Drawbacks

  • The initial clone and west update of a project set up like this takes some time.
  • This method uses quite a bit of disk space because each project carries around the Zephyr dependencies, as opposed to having your application live within the Zephyr SDK.
  • Ensuring you get updates to all your projects means you need to update the projects in your manifest file to a new revision manually (not really a drawback in my eyes – I want control!)

Let’s go through the manifest file itself step by step. It’s found in application/west.yml:

manifest:
  version: 0.7

  defaults:
    remote: zephyrproject
  remotes:
    - name: zephyrproject
      url-base: https://github.com/zephyrproject-rtos
    - name: mcutools
      url-base: https://github.com/mcu-tools
  projects:
    - name: zephyr
      repo-path: zephyr
      revision: v2.7.0
      import:
        path-prefix: deps
      path: zephyr
    - name: mcuboot
      remote: mcutools
      repo-path: mcuboot
      revision: v1.7.2
      path: deps/mcuboot

  self:
    path: application
  • The manifest version being set to 0.7 simply means west (the meta tool) must be at version 0.7 or higher to parse this correctly.
  • The defaults section is not required, but in this case it sets the main remote.
  • Remotes lists where west should look for project repos.
  • Projects lists the full range of repositories we’ll pull in as dependencies. This includes the revision, so we have control over upgrades when they are available. I want to prevent breaking changes from entering my project without my knowledge.
  • The self: path: application entry defines where the west.yml file lives in the project tree relative to the root of the project.

I feel that the projects key is the heart of this manifest. By adding to it, we can make west pull in any git repository we want and put it in our dependencies folder. We specify the project name, a remote (if different from the specified default), a repo-path on the remote (defaults to name), a revision (defaults to master), and a local path (with a slight footnote for this one).

There is a special key here as well that makes this work: import tells west to also import the projects from that project’s own manifest file. This means that when running west update on this manifest, west will first clone all projects in this manifest, then process the manifest file in the specified project and clone all of its projects too, applying the specified path-prefix to all of them. I’ve used this for the Zephyr include, but not for the mcuboot project.

Build

In practice, this means that my project structure for the manifest file above after running west update will look like this:

- app_zephyr/
    - .west/
        - config
    - application/
        - boards/
            - arm/
                - ah_1202a/
        - src/
        - CMakeLists.txt
        - prj.conf
        - west.yml
    - deps/
        - mcuboot/
        - modules/
        - tools/
        - zephyr/
    - .gitignore

Your custom board *.dts files can include all the root overlays from the Zephyr dependency or any other projects. (I’ve also set this up so VS Code can do IntelliSense on these DTS files; that’s just a matter of setting the correct includePaths.)

From here, you can run west build and have it use your custom board-files, source and everything. In my case:

cd application
west build -p -b ah_1202a -d ../build

Revision Control

One of the benefits of a method like this is the reduced amount of files going into revision control. You don’t need to index all of the Zephyr directory files in your project repo. This is a bad idea anyway, given the size of the project and the almost certain guarantee they will be out of date the next time you pull your project. Locking the Zephyr version in west.yml will ensure that your project is always pulling from the expected version of an SDK or Zephyr repo. Adding a .gitignore file as shown below to your main repository will reduce your total footprint and only capture the unique elements of your project–your application code.

deps/
build/
.vscode/

Build it your way

The first step to building an optimal workflow for your company or personal development process is understanding how your build system works. The above method is far from the only way of doing things, but helps to give more precise control over what is tracked and what is pulled in from external sources.

This guest post is contributed by Ben Mawbey, a community member who is active on the Golioth Discord and frequently takes part in Office Hours.

Data wants to be visualized. The impact of showing a customer a slick plot of the information their devices have been collecting is massive compared to pointing at a few hundred lines of text from a log file or database query.

I was looking for some sort of dashboard or charting application for a demo of a sensor system we’ve built. I figured this would appease the clients’ need for pretty pictures over boring reports. I found exactly what I needed, and it only took me about 30 minutes to get it up and running.

Grafana graphing data

This looks great, even if you have no idea what the graphs mean! Let’s dig into how to get from numbers in a database to pretty pictures in a dashboard.

Pieces of the Puzzle: Golioth WebSockets, Node-RED, InfluxDB, and Grafana

Golioth is brilliant at getting your device data from the real world up to the cloud (climbing the IoT beanstalk, some would say). While abstracting away some of the trickiest IoT problems, Golioth can present your time-series data as a convenient cloud resource using the LightDB Stream service. By leveraging the built-in WebSockets support and some open-source tools, we can rapidly store, manipulate, and display this data!

Grafana is the open-source blockbuster for this application and can be easily set up to graph sensor data directly from Golioth LightDB Stream using the REST API. This has two major drawbacks:

  • Grafana must periodically poll for new data
  • While LightDB Stream does provide convenient data retention, I prefer to use my own data storage

Enter InfluxDB, the time-series database powerhouse and an ideal companion to Grafana when talking about IoT real-time applications. This pairing is so popular that the InfluxDB data source integration is baked right into Grafana! By utilizing InfluxDB to store our sensor data, we can perform more complex queries much faster.

The remaining question is how to shuffle data from Golioth to InfluxDB. There are many potential solutions to this hurdle, but my favorite is Node-RED, which describes itself as a low-code programming tool for wiring up event-driven systems.

Node-RED editor window

Node-RED uses graphical flows to connect data sources and destinations, bending protocols and translating data formats in innumerable ways.

Node-RED has exploded in popularity and provides all sorts of integrations to connect your systems together. It provides simple blocks to perform actions and a slick graphical interface to wire up your data flows. Conceptually Node-RED acts as our rule engine to process and direct data.

Dashboards: Grafana and InfluxDB

System diagram

Grafana is immensely powerful at providing custom views, data transformations, and alerting. That said, it is only as good as the quality of data you can provide. Having a tightly coupled InfluxDB instance with carefully curated data via Node-RED allows you to quickly configure complex data queries on large datasets with low latency.

Before we can play with Grafana, the first step is setting up InfluxDB. After you’ve installed InfluxDB, create a new database on your InfluxDB instance:

> CREATE DATABASE golioth

Configure the InfluxDB Data Source in your Grafana instance by clicking the gear icon on the left sidebar, choosing and adding a data source, then searching for the InfluxDB plugin. Here’s how I’ve set up my data source:

Grafana InfluxDB source configuration

Assuming you have some data in your DB, we can quickly create a new time-series dashboard panel in Grafana and query the dataset using this integration:

Grafana panel configuration

This simple query shows how we structured the data in the DB, allowing us to select from a particular measurement, specifying a specific device identity tag and aggregating data points with specific time buckets. Adjusting the time range instantly updates the graph from our local InfluxDB instance.
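In text form, the same kind of panel query is plain InfluxQL; the measurement, field, and tag names here are hypothetical stand-ins for however you decide to structure your own data:

> SELECT MEAN("temperature") FROM "sensor_data" WHERE "device" = 'my-device-id' AND time > now() - 6h GROUP BY time(10m)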

Now how do we get the data from Golioth into InfluxDB?

Node-RED

Several networking integrations are provided with Node-RED. Presently the most relevant to Golioth are the HTTP nodes for REST API requests and the WebSockets node, which is the easiest to configure.

You can see your sensor data collecting in the Golioth LightDB Stream by using the Golioth Console web interface:

Golioth Console showing LightDB stream data

We can use Node-RED flows to connect to Golioth via WebSockets and store the resulting data in our local DB:

Node-RED editor window

The nodes in this flow were set up as follows, taking care to give them appropriate names that make their function obvious. All of the nodes I’m using should come as part of every Node-RED installation except for the InfluxDB nodes. But don’t worry, these are trivial to install. On Linux it looks something like this:

cd ~/.node-red
npm install node-red-contrib-influxdb
sudo systemctl restart nodered.service

WebSockets Node:
Node-RED websockets node

First set up the credentials to your Golioth project using your generated API key and connect to the WebSockets LightDB Stream endpoint.

Debug Node:
Node-RED debug node

Drop a few of these nodes along the way and click the small green button to turn the debug log on. This is super handy to check data coming through and make sense of it.

JSON Node:
Node-RED json node

The LightDB Stream endpoint provides us with a JSON object representation containing our sensor data as well as meta information such as the data timestamp and device identity. This node allows us to parse this JSON into a javascript object so that we can work with it more easily in subsequent nodes.

Change Node:
Node-RED change node

This node clearly shows the power of Node-RED as we can craft any sort of data manipulations or transformation.

We could do without this node and jump straight to InfluxDB; however, should any malformed data arrive, we risk polluting the DB with bad data. By selectively transforming the incoming data and mapping it into a new object, we can keep only well-formed packets, arrange the measurement names, and add tags to build a solid data representation in the DB, making our queries far more powerful.

InfluxDB Out Node:

Node-RED InfluxDB out node
Node-RED InfluxDB server settings window

Finally, we can configure the data connection to our InfluxDB instance, set appropriately for your server configuration and database created earlier.

Assuming your flow is set up correctly, you should be able to see data collecting in your database. We know it works, but as I mentioned before, this visual is not going to impress our customers.

InfluxDB data

Revisiting the Grafana panel we previously created, you can see InfluxDB data is now being plotted!

Grafana graphing data

Corner Cases

One of the downsides of WebSockets is their ethereal nature: should there be a temporary connectivity issue, any data packets sent during that window would be lost from the point of view of your InfluxDB database. A solution would be to set up another flow that executes periodically to sync with LightDB Stream using the REST API. Node-RED could then be configured to check this data, add any missing values into the InfluxDB instance, and prevent consistency issues.

Another concern with open-source self-hosted systems is security. It can be challenging to secure your server and services should they be public-facing. If you are handling sensitive data then it would be best to consult with an expert in this field. Fortunately, all of the tools discussed have subscription-based cloud services available that sort all of this out in the background.

Conclusion

Being able to set up a simple demo like this in less than 30 minutes demonstrates the power and flexibility of these modern open-source solutions. Coupled with the reliability and maintenance advantages of the Docker system, it’s a breeze to test locally on your desktop or Raspberry Pi and then deploy to production a moment later on your cloud server of choice. The rules engine and ease of wiring up blocks provided by Node-RED opens up a massive pool of possibilities, from countless other integrations to building intelligent processes. One such idea I would like to explore is integrating a device provisioning process into the flow such that we can link a device to a dataset or location during deployment or maintenance.