This guest post is contributed by Asgeir Stavik Hustad, a Golioth community member who is active on the Golioth Discord. Reach him on Twitter at @AsgeirSH.

This tutorial was inspired by, and is a response to, an earlier tutorial about how to build your Zephyr application in a standalone folder. I have done exactly that before, but I also wanted to include all of my dependencies in that separate folder.

Background and motivation

I need to maintain different firmware builds based on different Zephyr versions and trees. For example, I maintain the following directories:

  • Nordic’s Zephyr variant (NCS) for the nRF9160
  • Base Zephyr for Atmel MCUs
  • Base Zephyr, but locked to a particular version (i.e. “2.7.0”)

We also have several custom boards. These are currently maintained in each project, but could be moved to a separate dependency if we want to use the same board overlay files in multiple projects.

Instead of trying to swap a single Zephyr installation between all of these, I did some research into using west and its manifest file to automatically set up my project folders to include all dependencies. I also wanted to ensure our build server didn’t require any manual work to build different projects. The Zephyr docs cover this topic in depth, and are recommended reading if you want to set this up.

Let’s look at how we can set up a project to fit a wider range of needs.

Project structure

Most of my projects are kept in my “Dev” folder, so for this example we’ll be using ~/Dev/app_zephyr as the root directory of the project.

I put my application source in application, which is further split into at least boards and src (you can add any other folders you like here). You’ll note this is the same structure as the Zephyr and Golioth samples; in fact, you can copy a sample as the starting point (such as <Zephyr SDK Install location>/zephyr/samples/basic/blinky). The other folders include deps for dependencies and build for the build output.
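
The build files inside application don’t need anything special for this setup. As a point of reference, a minimal CMakeLists.txt for an out-of-tree Zephyr application looks roughly like this (a sketch that assumes your entry point is src/main.c; adapt it to your own sources):

cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(app_zephyr)

# List all application sources here
target_sources(app PRIVATE src/main.c)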

Inside the root folder, add .west/config. This is a plain text file that tells west where to look for the manifest file and where Zephyr should be placed.

[manifest]
path = application
[zephyr]
base = deps/zephyr

Drawbacks

  • The initial clone and west update of a project set up like this takes some time.
  • This method uses quite a bit of disk space because each project carries around the Zephyr dependencies, as opposed to having your application live within the Zephyr SDK.
  • Ensuring you get updates to all your projects means manually bumping each project in your manifest file to a new revision (not really a drawback in my eyes; I want control!)

Let’s go through the manifest file itself step by step. It’s found in application/west.yml:

manifest:
  version: 0.7

  defaults:
    remote: zephyrproject
  remotes:
    - name: zephyrproject
      url-base: https://github.com/zephyrproject-rtos
    - name: mcutools
      url-base: https://github.com/mcu-tools
  projects:
    - name: zephyr
      repo-path: zephyr
      revision: v2.7.0
      import:
        path-prefix: deps
      path: zephyr
    - name: mcuboot
      remote: mcutools
      repo-path: mcuboot
      revision: v1.7.2
      path: deps/mcuboot

  self:
    path: application

  • The manifest version being set to 0.7 simply means west (the meta tool) must be at version 0.7 or higher to parse this file correctly.
  • The defaults section is not required, but in this case it sets the main remote so that individual projects don’t each have to specify one.
  • Remotes lists where west should look for project repos.
  • Projects lists the full range of repositories we’ll pull in as dependencies. This includes the revision, so we have control over upgrades when they are available. I want to prevent breaking changes from entering my project without my knowledge.
  • The self: path: application entry defines where the west.yml file lives in the project tree relative to the root of the project.

I feel that the projects key is the real heart of this manifest. By adding to it we can make west pull any git repository we want and put it in our dependencies folder. We specify the project name, a remote (if different from the specified default), a repo-path on the remote (defaults to name), a revision (defaults to master) and a local path (with a slight footnote for this one).
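
For example, if we later move our custom board files into their own repository (as mentioned earlier), a hypothetical entry under projects could look like the following. The remote and repository names are made up for illustration and would need a matching entry under remotes:

    - name: custom-boards
      remote: mycompany          # hypothetical remote, defined under remotes
      repo-path: zephyr-boards   # hypothetical repository name
      revision: v1.0.0
      path: deps/custom-boards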

There is one special key that makes this work: import. It tells west to also import the projects listed in that project’s own manifest file. This means that when running west update on this manifest, west will first clone all projects listed here, then process the manifest file in the imported project and clone all of its projects as well, placing them under the specified path-prefix. I’ve used this for the Zephyr include, but not for mcutools.
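
With .west/config and application/west.yml in place, fetching everything is a matter of running west from the project root (this assumes west itself is already installed, for example with pip3 install west):

cd ~/Dev/app_zephyr
west update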

Build

In practice, this means that after running west update with the manifest file above, my project structure will look like this:

- app_zephyr/
    - .west/
        - config
    - application/
        - boards/
            - arm/
                - ah_1202a/
        - src/
        - CMakeLists.txt
        - prj.conf
        - west.yml
    - deps/
        - mcuboot/
        - modules/
        - tools/
        - zephyr/
    - .gitignore

Your custom board *.dts files can include files from the Zephyr dependency or any of the other projects. (I’ve also set this up so that VS Code can do IntelliSense on these DTS files; it’s just a matter of setting the correct includePaths.)

From here, you can run west build and have it use your custom board files, source, and everything else. In my case:

cd application
west build -p -b ah_1202a -d ../build

Revision Control

One of the benefits of a method like this is the reduced number of files going into revision control. You don’t need to index all of the Zephyr directory files in your project repo. That would be a bad idea anyway, given the size of the project and the near-certain guarantee that they will be out of date the next time you pull your project. Locking the Zephyr version in west.yml ensures that your project always pulls from the expected version of an SDK or Zephyr repo. Adding a .gitignore file as shown below to your main repository reduces your total footprint and captures only the unique elements of your project: your application code.

deps/
build/
.vscode/

Build it your way

The first step to building an optimal workflow for your company or personal development process is understanding how your build system works. The above method is far from the only way of doing things, but it gives you more precise control over what is tracked and what is pulled in from external sources.

This guest post is contributed by Ben Mawbey, a community member who is active on the Golioth Discord and frequently takes part in Office Hours.

Data wants to be visualized. The impact of showing a customer a slick plot of the information their devices have been collecting is massive compared to pointing at a few hundred lines of text from a log file or database query.

I was looking for some sort of dashboard or charting application for a demo for a sensor system we’ve built. I figured this would appease the clients’ need for pretty pictures over boring reports. I found exactly what I needed, and it only took me about 30 minutes to get it up and running.

Grafana graphing data

This looks great, even if you have no idea what the graphs mean! Let’s dig into how to get from numbers in a database to pretty pictures in a dashboard.

Pieces of the Puzzle: Golioth WebSockets, Node-RED, InfluxDB, and Grafana

Golioth is brilliant at getting your device data from the real world up to the cloud; climbing the IoT beanstalk, some would say. While abstracting some of the trickiest IoT problems, Golioth can present your time-series data as a convenient cloud resource using the LightDB Stream service. By leveraging the built-in WebSockets support and some open-source tools, we can rapidly store, manipulate, and display this data!

Grafana is the open-source blockbuster for this application and can be easily set up to graph sensor data directly from Golioth LightDB Stream using the REST API. This has two major drawbacks:

  • Grafana must periodically poll for new data
  • While LightDB Stream does provide convenient data retention, I prefer to use my own data storage

Enter InfluxDB, the time-series database powerhouse and an ideal companion to Grafana when talking about IoT real-time applications. This pairing is so popular that the InfluxDB data source integration is baked right into Grafana! By utilizing InfluxDB to store our sensor data, we can perform more complex queries much faster.

The remaining question is how to shuffle data from Golioth into InfluxDB. There are many potential solutions to this hurdle, but my favorite is Node-RED, which describes itself as a low-code programming tool for wiring up event-driven systems.

Node-RED editor window

Node-RED uses graphical flows to connect data sources and destinations, bending protocols and translating data formats in innumerable ways.

Node-RED has exploded in popularity and provides all sorts of integrations to connect your systems together. It offers simple blocks to perform actions and a slick graphical interface to wire up your data flows. Conceptually, Node-RED acts as our rules engine to process and direct data.

Dashboards: Grafana and InfluxDB

System diagram

Grafana is immensely powerful for building custom views, data transformations, and alerting. That said, it is only as good as the quality of the data you can provide. Having a tightly coupled InfluxDB instance, with data carefully curated via Node-RED, allows you to quickly configure complex queries on large datasets with low latency.

Before we can play with Grafana, the first step is setting up InfluxDB. After you’ve installed InfluxDB, create a new database on your instance:

> CREATE DATABASE golioth
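
If you would like something to query before the Node-RED flow is wired up, you can insert a test point by hand from the influx shell. The measurement, tag, and field names below are arbitrary examples; they only need to match whatever you later write and query:

> USE golioth
> INSERT sensors,device=my-device-01 temperature=23.4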

Configure the InfluxDB data source in your Grafana instance by clicking the gear icon on the left sidebar, adding a new data source, and then searching for the InfluxDB plugin. Here’s how I’ve set up my data source:

Grafana InfluxDB source configuration

Assuming you have some data in your DB, we can quickly create a new time-series dashboard panel in Grafana and query the dataset using this integration:

Grafana panel configuration

This simple query shows how we structured the data in the DB: we select from a particular measurement, filter on a device identity tag, and aggregate data points into time buckets. Adjusting the time range instantly updates the graph from our local InfluxDB instance.
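
In raw InfluxQL, a panel query of that shape might look something like the example below. The measurement, field, and tag names are illustrative (they must match what you write from Node-RED), while $timeFilter and $__interval are macros Grafana fills in from the dashboard’s time picker:

SELECT mean("temperature") FROM "sensors"
  WHERE ("device" = 'my-device-01') AND $timeFilter
  GROUP BY time($__interval) fill(null)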

Now how do we get the data from Golioth into InfluxDB?

Node-RED

Several networking integrations are provided with Node-RED. Presently, the most relevant to Golioth are the HTTP nodes for REST API requests and the WebSockets node, which is the easiest to configure.

You can see your sensor data collecting in the Golioth LightDB Stream by using the Golioth Console web interface:

Golioth Console showing LightDB stream data

We can use Node-RED flows to connect to Golioth via WebSockets and store the resulting data in our local DB:

NodeRED editor window

The nodes in this flow were set up as follows, taking care to give them descriptive names that make their function obvious. All of the nodes I’m using should come as part of every Node-RED installation except for the InfluxDB nodes. But don’t worry, these are trivial to install. On Linux it looks something like this:

cd ~/.node-red
npm install node-red-contrib-influxdb
sudo systemctl restart nodered.service

WebSockets Node:

First, set up the credentials for your Golioth project using your generated API key and connect to the LightDB Stream WebSockets endpoint.
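
As a rough guide, the LightDB Stream WebSockets URL follows a pattern along these lines; double-check the Golioth WebSockets documentation for the exact path, and substitute your own project ID and API key:

wss://api.golioth.io/v1/ws/projects/<your-project-id>/stream?x-api-key=<your-api-key>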

Debug Node:

Drop a few of these nodes along the way and click the small green button to turn the debug log on. This is super handy for checking the data coming through and making sense of it.

JSON Node:

The LightDB Stream endpoint provides us with a JSON object containing our sensor data as well as meta information such as the data timestamp and device identity. This node allows us to parse that JSON into a JavaScript object so that we can work with it more easily in subsequent nodes.

Change Node:

This node clearly shows the power of Node-RED, as we can craft any sort of data manipulation or transformation.

We could do without this node and jump straight to InfluxDB; however, should any malformed data arrive, we would risk polluting the DB. By selectively transforming the incoming data and mapping it into a new object, we not only filter out bad packets and arrange the measurement names, but also add tags that build a solid data representation in the DB, making our queries far more powerful.
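
If you prefer a function node over a Change node, a minimal sketch of the same idea is shown below. The exact shape of the incoming message and the field names are assumptions for illustration (use a debug node to inspect yours); the [fields, tags] array format is what the node-red-contrib-influxdb out node expects:

// Sketch of a Node-RED function node doing the same job as the Change node.
// The message layout and field names are assumptions; adjust them to match
// what your debug node actually shows.
const entry = msg.payload.result.data;        // assumed LightDB Stream payload layout

// Drop malformed packets instead of writing them to the DB
if (!entry || !entry.data || entry.data.temperature === undefined) {
    return null;
}

msg.measurement = "sensors";                  // measurement name used in the Grafana query
msg.payload = [
    { temperature: entry.data.temperature },  // fields written to InfluxDB
    { device: entry.deviceId }                // tags used for filtering
];
return msg;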

InfluxDB Out Node:

Node-RED InfluxDB out node and server settings windows

Finally, we can configure the connection to our InfluxDB instance, set appropriately for your server configuration and the database created earlier.

Assuming your flow is set up correctly, you should be able to see data collecting in your database. We know it works, but as I mentioned before, this visual is not going to impress our customers.

InfluxDB data

Revisiting the Grafana panel we previously created, you can see InfluxDB data is now being plotted!

Grafana graphing data

Corner Cases

One of the downsides of WebSockets is their ephemeral nature: should there be a temporary connectivity issue, any data packets sent in the meantime are lost from the point of view of your InfluxDB database. A workaround could be to set up another flow that executes periodically to sync with LightDB Stream using the REST API. Node-RED could then check this data, add any missing values to the InfluxDB instance, and prevent consistency issues.

Another concern with open-source self-hosted systems is security. It can be challenging to secure your server and services should they be public-facing. If you are handling sensitive data then it would be best to consult with an expert in this field. Fortunately, all of the tools discussed have subscription-based cloud services available that sort all of this out in the background.

Conclusion

Being able to set up a simple demo like this in less than 30 minutes demonstrates the power and flexibility of these modern open-source solutions. Coupled with the reliability and maintenance advantages of running everything in Docker, it’s a breeze to test locally on your desktop or Raspberry Pi and then deploy to production a moment later on your cloud server of choice. The rules engine and the ease of wiring up blocks provided by Node-RED open up a massive pool of possibilities, from countless other integrations to building intelligent processes. One such idea I would like to explore is integrating a device provisioning process into the flow, so that we can link a device to a dataset or location during deployment or maintenance.