Controlling 10 devices is easy; controlling 10,000 is a different story. The trick is to plan for scale, which is what we specialize in here at Golioth.

A few weeks ago we announced the Golioth Device Settings Service that enables you to change settings for your entire fleet at the click of a button. Of course, you can drill down to groups of devices and single devices too. While the announcement post showed the Settings Service using our ESP-IDF SDK, today we’ll look at this settings service from the device point of view with Zephyr RTOS.

Device-side building blocks

The good news is that the Golioth Zephyr SDK has done all the hard work for you. We just need to do three things to start using the settings service in any Zephyr firmware project:

  1. Enable the settings service in KConfig
  2. Register a callback function when the device connects to Golioth
  3. Validate the data and do something useful when a new setting arrives

Code walk-through

Golioth’s settings sample code is a great starting point. It watches for a settings key named LOOP_DELAY_S to remotely adjust the delay between sending messages back to the server. Let’s walk through the important parts of that code for a better understanding of what is involved.

1. Enable settings in KConfig

CONFIG_GOLIOTH_SETTINGS=y

To turn the settings service on, the CONFIG_GOLIOTH_SETTINGS symbol needs to be selected. Add the code above to your prj.conf file.

2. Register a callback for settings updates

static void golioth_on_connect(struct golioth_client *client)
{
    if (IS_ENABLED(CONFIG_GOLIOTH_SETTINGS)) {
        int err = golioth_settings_register_callback(client, on_setting);

        if (err) {
            LOG_ERR("Failed to register settings callback: %d", err);
        }
    }
}

To do anything useful with the settings service, we need to be able to execute a function when new settings are received. Register the callback function each time the device connects to Golioth. Your app may already have a golioth_on_connect() function declared; look for the following code at the beginning of main() and add it if it’s not already there.

	client->on_connect = golioth_on_connect;
	client->on_message = golioth_on_message;
	golioth_system_client_start();

Each time the Golioth Client detects a new connection, the golioth_on_connect() function will run, which in turn registers your on_setting() function to run.

3. Validate and process the received settings

enum golioth_settings_status on_setting(
        const char *key,
        const struct golioth_settings_value *value)
{
    LOG_DBG("Received setting: key = %s, type = %d", key, value->type);
    if (strcmp(key, "LOOP_DELAY_S") == 0) {
        /* This setting is expected to be numeric, return an error if it's not */
        if (value->type != GOLIOTH_SETTINGS_VALUE_TYPE_INT64) {
            return GOLIOTH_SETTINGS_VALUE_FORMAT_NOT_VALID;
        }

        /* This setting must be in range [1, 100], return an error if it's not */
        if (value->i64 < 1 || value->i64 > 100) {
            return GOLIOTH_SETTINGS_VALUE_OUTSIDE_RANGE;
        }

        /* Setting has passed all checks, so apply it to the loop delay */
        _loop_delay_s = (int32_t)value->i64;
        LOG_INF("Set loop delay to %d seconds", _loop_delay_s);

        return GOLIOTH_SETTINGS_SUCCESS;
    }

    /* If the setting is not recognized, we should return an error */
    return GOLIOTH_SETTINGS_KEY_NOT_RECOGNIZED;
}

The callback function will provide a key (the name of the setting) as a string, and a golioth_settings_value that includes the value as well as an indication of the value type (bool, float, string, or int64). The callback function should validate that the expected key was received, and test that the received value is the proper type and within your expected bounds before acting upon the data.

The final piece of the device settings puzzle is to return an enum value indicating the device status after receiving a new setting. This is used by the Golioth Cloud to indicate sync status. Our example demonstrates the GOLIOTH_SETTINGS_SUCCESS and GOLIOTH_SETTINGS_KEY_NOT_RECOGNIZED status values, but it’s worth checking out the doxygen reference for golioth_settings_status to see how different invalid key and value messages can be sent to Golioth for display in the web console.

Coordinating with the Golioth Cloud

The device settings key format is tightly specified by Golioth, allowing only “A-Z (upper case), 0-9 and underscore (_) characters”. We recommend that you add a key to the Golioth web console while writing your firmware to ensure the device knows what to expect.
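If your firmware or tooling ever generates key names, a quick character check that mirrors this rule might look like the following. This is a minimal sketch; the cloud enforces the rule regardless.

#include <ctype.h>
#include <stdbool.h>

/* Returns true if the key uses only the characters Golioth allows:
 * A-Z (upper case), 0-9, and underscore. */
static bool settings_key_is_valid(const char *key)
{
    if (key == NULL || *key == '\0') {
        return false;
    }
    for (const char *c = key; *c != '\0'; c++) {
        if (!isupper((unsigned char)*c) && !isdigit((unsigned char)*c) && *c != '_') {
            return false;
        }
    }
    return true;
}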

Each device page includes a status indicator to show if the settings are in sync with the Golioth cloud.

Additional information is available when a device is out of sync. Here you can see that hovering over the information icon indicates the key is not valid. For this example I created a VERY_IMPORTANT_BOOL_7 key but didn’t add support for it to the firmware. This is the result of our callback function returning GOLIOTH_SETTINGS_KEY_NOT_RECOGNIZED. In cases like this, Golioth’s Over-the-Air firmware update (OTA) service can help you push new firmware to your devices in the field that recognizes the newly created settings variable.

IoT Fleet control on day one

The pain of IoT often comes when you need to scale. Golioth’s mission is to make sure you avoid that pain and are ready to grow starting from day one.

Our device settings service gives you the power to remotely change settings on your entire fleet, on a group of devices, and all the way down to a single device. You are able to verify your devices are in sync, and receive helpful information when they are not. Take Golioth for a test drive today; our dev tier is free for the first 50 devices.

Today we’re announcing a new feature on the Golioth Console and on our Device SDKs that enables Remote Procedure Calls (RPCs) for all users on the platform. From the cloud, you can initiate a function on your constrained device in the field, ensure the device received and executed the command, and receive a response from the device back to the Cloud.

What is a Remote Procedure Call (RPC)?

A Remote Procedure Call allows you to call a function on a remote computing device and optionally receive a result. An easy way to think about it is you’re calling a function, like you would in any other program…you’re just doing it from another computer. In this case, you’re triggering actions from the Golioth Cloud.

RPC from the Golioth Cloud (Console)

Each device in your project has a page where you can view details about things like LightDB State, LightDB Stream, Settings, and now RPC. Our Console includes an interface to directly send RPCs to the remote device. The URL will look something like:

https://console.golioth.io/devices/<YOUR_DEVICE_ID>/management/rpc

In all of our examples, we are sending an RPC to a single device from the Console. However, RPCs can also be triggered from the REST API. As a reminder, any function you see on the Golioth Console is available via the REST API.

One critical function of RPCs is a confirmation that the remote function has actually run. The device firmware needs to send back a success message that the function has completed, and optionally a returned value. When there is a problem connecting with your device and the RPC does not complete successfully, you will see a screen that looks like this:

An RPC sent to a device that was disconnected from Wi-Fi

Also note that round trip time is measured for all RPCs, including successful ones. Transit times will depend upon your connectivity medium, in addition to the processing time of the function on the remote device.

When an RPC successfully completes, you can click the button with 3 dots to view the returned value. In the example and in the video, we were using a method called double that takes an integer input, multiplies it by two, and then returns the value to the Cloud. Below, you can see the result when we sent the “double” method with a parameter of “37”.

RPC from the Device SDK perspective

Any new feature on Golioth has Device SDK support, in addition to the new APIs and UIs on our Web Console. Earlier this week, Nick wrote about how we test hardware and firmware at Golioth, especially when a new feature is released across the platform. Now that we support 3 SDKs (Zephyr, NCS, ESP-IDF), the testing area has increased.

In the video, the focus is on ESP-IDF, which has a simple way to set up and respond to new RPCs. First, we register the new method, so we’ll recognize the command coming from the Golioth Cloud:
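The original post shows this step as a code screenshot. As a rough sketch only (the exact function names, callback signature, and parameter handling below are assumptions; check the RPC sample in the ESP-IDF SDK for the current API), registering a “double” method and its handler looks something like this:

/* Sketch only: names and signatures are assumptions based on the SDK's RPC sample */
static golioth_rpc_status_t on_double(const char *method,
                                      const cJSON *params,
                                      uint8_t *detail,
                                      size_t detail_size,
                                      void *callback_arg)
{
    /* Read the single integer parameter, double it, and write the result
     * back as the response detail that is returned to the cloud */
    int value = cJSON_GetArrayItem(params, 0)->valueint;
    snprintf((char *)detail, detail_size, "{ \"value\": %d }", value * 2);
    return RPC_OK;
}

/* Register the method once the client is running so the cloud command is recognized */
golioth_rpc_register(client, "double", on_double, NULL);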

The function that we tie to that newly registered RPC needs to return the RPC_OK status value for the Cloud to be alerted that the function has processed properly.

If you have satisfied these requirements in the ESP-IDF SDK and copied the format, you can customize logic to do whatever task you’d like on the remote device. Let’s look at some examples.

Use cases for an RPC

The double() example is a simple showcase of the minimum requirements to create an RPC in the ESP-IDF SDK: we send a command and a value, and a modified value is returned.

The remote_reset command we created and showcased in the video is more like a critical function you would want to add to your project. When you want to trigger a remote function like a reset, you want to ensure that the command was properly received, that the function executed, and that there was output data validating the reset has happened. For that last point, we infer the device has restarted from the log messages also being sent back to the Golioth Console. Put all together, it’s a reliable way to tell that the device has been reset.

Other use cases could be as simple as sending arbitrary text to a display. You would still want to know the text has been received and properly sent to the physical display. Or perhaps you have a valve and you want to be able to send an arbitrary value to the valve, but you also want to take a reading on an encoder that measures the distance the valve has moved.

Many of these functions could also be achieved with LightDB State (which the RPC service is built upon), but the context for creating an RPC is more targeted at situations like the examples above.

What will you build?

RPCs are another way for you to communicate with your constrained IoT devices from the Golioth Cloud and to get useful information back from your devices. You can start testing out this feature today.

For more questions or assistance, check out our Forums, our Discord, or drop us a note at [email protected].

 

Golioth just rolled out a new settings service that lets you control your growing fleet of IoT devices. You can specify settings for your entire fleet, and override those global settings by individual device or for multiple devices that share the same blueprint.

Every IoT project needs some type of settings feature, from adjusting log levels and configuring the delay between sensor readings, to adjusting how frequently a cellular connection is used in order to conserve power. With the new settings service, the work is already done for you. A single settings change on the Golioth web console is applied to all devices listening for changes!

As you grow from dozens of devices to hundreds (and beyond), the Golioth settings service makes sure you can change device settings and confirm that those changes were received.

Demonstrating the settings service

Golioth settings service

The settings service is ready for you to use right now. We have code samples for the Golioth Zephyr SDK and the Golioth ESP-IDF SDK. Let’s take it for a spin using the Zephyr samples.

I’ve compiled and flashed the Golioth Settings sample for an ESP32 device. It observes a LOOP_DELAY_S endpoint and uses that value to decide how long to wait before sending another “Hello” log message.

Project-wide settings

On the Golioth web console, I use the Device Settings option on the left sidebar to create the key/value pair for this setting. This is available to all devices in the project with firmware that is set up to observe the LOOP_DELAY_S settings endpoint.

Golioth device settings dialog

When viewing device logs, we can see the setting is observed as soon as the device connects to Golioth. The result is that the Hello messages are now issued ten seconds apart.

[00:00:19.930,000] <inf> golioth_system: Client connected!
[00:00:20.340,000] <inf> golioth: Payload
                                  a2 67 76 65 72 73 69 6f  6e 1a 62 e9 99 c4 68 73 |.gversio n.b...hs
                                  65 74 74 69 6e 67 73 a1  6c 4c 4f 4f 50 5f 44 45 |ettings. lLOOP_DE
                                  4c 41 59 5f 53 fb 40 24  00 00 00 00 00 00       |LAY_S.@$ ......  
[00:00:20.341,000] <dbg> golioth_hello: on_setting: Received setting: key = LOOP_DELAY_S, type = 2
[00:00:20.341,000] <inf> golioth_hello: Set loop delay to 10 seconds
[00:00:20.390,000] <inf> golioth_hello: Sending hello! 2
[00:00:30.391,000] <inf> golioth_hello: Sending hello! 3
[00:00:40.393,000] <inf> golioth_hello: Sending hello! 4

Settings by device or by blueprint

Of course, you don’t always want to have the same settings for all devices. Consider debugging a single device. It doesn’t make much sense to turn up the logging level or frequency of sensor reads for all devices. So with Golioth it’s easy to change the setting on just a single device.

settings change for a single device on the Golioth console

In the device view of the Golioth web console there is a settings tab. Here you can see the key, the value, and the level of the value. I have already changed the device-specific value in this screen so the level is being reported as “Device”.

[00:07:30.466,000] <inf> golioth_hello: Sending hello! 45
[00:07:40.468,000] <inf> golioth_hello: Sending hello! 46
[00:07:43.728,000] <inf> golioth: Payload
                                  a2 67 76 65 72 73 69 6f  6e 1a 62 e9 9c 17 68 73 |.gversio n.b...hs
                                  65 74 74 69 6e 67 73 a1  6c 4c 4f 4f 50 5f 44 45 |ettings. lLOOP_DE
                                  4c 41 59 5f 53 fb 40 00  00 00 00 00 00 00       |LAY_S.@. ......  
[00:07:43.729,000] <dbg> golioth_hello: on_setting: Received setting: key = LOOP_DELAY_S, type = 2
[00:07:43.729,000] <inf> golioth_hello: Set loop delay to 2 seconds
[00:07:50.469,000] <inf> golioth_hello: Sending hello! 47
[00:07:52.471,000] <inf> golioth_hello: Sending hello! 48

When I made the change, the device was immediately notified and you can see from the timestamps that it began logging at a two-second cadence as expected.

Golioth settings applied at the blueprint level

It is also possible to change settings for a group of devices that share a common blueprint. Here you will find this setting by selecting Blueprint from the left sidebar and choosing your desired blueprint.

Settings are applied based on specificity. The device level is the most specific and takes precedence, followed by the blueprint level, and finally the project level. Blueprints may be created and applied at any time, so if you later realize you need a more specific group you can change the blueprint for those devices.

Implementation: The two parts that make up the settings service

Fundamentally, there are two parts that make our device settings system work: the Golioth cloud services running on our servers and your firmware that is running on the devices. The Golioth device SDKs allow you to register a callback function that receives settings values each time a change is made to the settings on the cloud. You choose how the device should react to these settings, like updating a delay value, enabling/disabling features, changing log output levels, really anything you want to do.

Don’t worry if you already have devices in the field. You can add the settings service or make changes to how your device handles those settings, then use the Golioth OTA firmware update system to push out the new behavior.

Take control of your fleet

Scale is the hope for most IoT companies, but it’s also where the pain of IoT happens. You need to know you can control your devices, securely communicate with them, and perform updates as necessary. Golioth has you covered in all of these areas. The new settings service ensures that your ability to change how your fleet is performing doesn’t become outpaced by your growth.

This article is an introduction to the concept of “The Five Clouds of IoT.” It’s a mental model I’ve been developing for some time to help people understand the role of the cloud, and to explain service offerings to those interested in building fleets of IoT devices. It covers services developed in-house as well as services they choose to purchase from external providers.

Today I will discuss the genesis of these classifications and why there is more than one cloud type in the first place. In future articles, I’ll take a deep dive into each of these clouds to compare and contrast their features and look at example services.

I believe it’s important for us to focus our understanding of these broadly-general terms (like cloud and IoT). Doing so lets us define what problems need to be solved in the IoT space, and how best to approach them.

Giving advice about IoT Cloud Services

I have been building internet connected (IoT) devices for a long time. When someone comes to me for advice on their startup or business enhancement, they typically ask for guidance on selecting from different service offerings in the marketplace. There have been, and continue to be, many options out there.

The conversation usually starts out with, “should I use foo for software updates? Or is bar better because it includes more stuff?” They’re coming from the perspective of looking for solutions without understanding the problems they need to solve in the first place.

So as a first step I ask questions about their needs. Questions like:

  • How are you connecting to the internet?
  • What kind of device are you building on?
  • What’s the scale of your deployment?
  • What’s your cost and cost structure?
  • What is your business model?

But the most important questions to answer for picking cloud services are:

  • What are you willing to outsource?
  • What should you outsource?
  • What would get you to market faster if you went with a provider?

From there we can have a more grounded conversation on which aspects of the cloud are needed and which combination of providers they might want to evaluate. Very few service offerings will cover every aspect of an IoT deployment and business needs.

The more of these conversations I’ve had, the more I started to realize that there might be different types of IoT cloud solutions. In fact, I think there are five. And while in practice different cloud companies might offer overlapping features, describing them as separate clouds will help us better understand the core strength of each cloud type and why we might want to use them.

Why are there “Five Clouds of IoT”?

Here are The Five Clouds of IoT, as I define them:

  • The device cloud
  • The connectivity cloud
  • The data cloud
  • The application cloud
  • The development cloud

“You’re just making these up!”, you say? Yes and no. These are classifications based on the service offerings of companies throughout the ecosystem, including the company I founded a couple of years ago (Golioth).  But at the end of the day, these are my classifications and how I view the IoT ecosystem…so yeah, I’m making these up.

However, each cloud represents a business case being served. Each cloud type I listed has at least one prime example of a company serving that particular area of focus. That’s how big the market is and how much need exists for services.

Bringing IoT to the Masses

Each IoT cloud solution helps make deployments possible without massive internal cloud teams at a device maker or service company in the IoT space. Someone who wants to keep their core team small can do so by hiring out parts of the business to different service companies representing one or more of the Five Clouds. They focus on a particular part of the business and lean on service providers to help scale their application.

We’ve already mentioned Golioth as one example of a device cloud. We’re a device cloud because we focus on the management & security of devices and the data they produce. An example of a different cloud would be Soracom, which is a connectivity cloud that addresses different aspects of connecting devices to the internet that spans SIMs, data plans, VPNs, and more.

The key here is that the fundamental focuses of clouds like Golioth and Soracom are different: device offerings vs. connectivity offerings. You need to understand what problem you want to solve when choosing between clouds, for example, why you’d want to leverage Golioth, Soracom, or both.

Which Cloud is right for you?

In future articles in this series, I’ll take a deep dive into each individual cloud and look at example companies that exemplify the characteristics of each cloud. As you start to put together your business needs for your company or your next startup idea, you will be able to piece together which clouds you will need, and find some good companies that can help you achieve your goals.

Node-RED is a graphical programming tool that makes it easy to interact with web-based events. It’s called a “Low-code” solution because you can do a lot without writing much code. I love it because it strikes the right balance: it handles the hard stuff for me and is easy to use at the start, while still making it possible to drop in custom code when I need it. I’ve been using it to control my IoT devices on the Golioth Cloud, and today I’m sharing how I do it.

Note: the concept of a node will be highlighted with bold text throughout this article to help clarify we’re discussing one of the blobs shown on screen in Node-RED

Why use Node-RED with Golioth

Earlier this year Ben Mawbey showed us how he uses Node-RED to manipulate data as it arrives on the Golioth cloud. This approach listens for realtime data using WebSockets. But this has a couple of limitations:

  • We will only see the data when it changes
  • We aren’t able to send or update data using WebSockets.

On that second point, the Golioth REST API lets us send/update data and query stored data.

Node-RED is a good choice here, since it’s relatively easy to set up a connection to WebSockets and the REST API. It’s powerful enough to do any data manipulation we want. It does need to run on a server, but on the upside that means it is always running and can be accessible to multiple end users via any web browser, with no need to install an app. Let’s dive in!

Prerequisites

This example assumes you have already taken care of the following:

  • a Node-RED server
    • The Get Started page offers several solutions like running it locally on a Raspberry Pi, or on a cloud server
  • Golioth account, and a device added on the Golioth Console
    • As always, our getting started guide will walk you through all of these details
    • You can follow the demo below without having a hardware device; we just need to create a device on the Golioth Console to send data back and forth to the cloud

Connect Node-RED to WebSockets

The secret sauce in connecting a WebSocket is the URL which includes the Golioth API key. Go to the Golioth Console and choose API Keys from the left sidebar to create a new key:

The exact formatting of the WebSockets URL is detailed on our reference page, but I’ll show you how to do it here. Notice the three pieces of important data in the diagram below: project id, device id, and finally the API key.

I have chosen to connect to the data endpoint to monitor LightDB state data. (If you want to monitor LightDB stream you can change data to stream.)

wss://api.golioth.io/v1/ws/projects/node-red-demo/devices/62695497404e12cd12628117/data?x-api-key=ynhJQQjLautKYLxQESuqB8VJLWMEVpH7

Setting up the Node-RED flow begins by adding a WebSocket-in node and connecting it to a debug node. I have also connected a JSON node for convenient data access later.

The WebSocket-in node is configured as follows:

The WebSocket type is “Connect to” which allows us to add a new URL. Click the pencil icon on the right side of that field and paste the URL in the new window. Because we are passing the API key as part of the URL, we do not need to add a TLS configuration.

Data will only appear on this WebSocket-in node when it first arrives on the Golioth Cloud. So generate an update by going to the Golioth Console and adding the key/value pairs shown above. In the Node-RED flow shown in the setup step, notice that the debug output includes the raw payload as well as the parsed JSON object.

Connect Node-RED to REST API

Setting up the REST API connection requires a bit of code, but uses the same project, device, and API key info from the last step. Two inject nodes (that send a 1 or a 0), a function node, an http request node, and a debug node complete this flow:

The inject node and http request node need a quick settings adjustment:

Set the payload of the inject node to pass 0 or 1 as a number (not a string). In the http request node, change the method to “set by msg.method”. The magic will all happen in the function node:

Code for this function is listed below. The variables at the top are used to set up the API URL, so you will need to enter your project id, device id, API key, and the LightDB state key that makes up the endpoint (led0 in my example).

var proj_id = "node-red-demo"
var dev_id = "62695497404e12cd12628117";
var api_key = "ynhJQQjLautKYLxQESuqB8VJLWMEVpH7";
var endpoint_name = "led0";

var dev_url = "https://api.golioth.io/v1/projects/" + proj_id + "/devices/";
var endpoint = "/data/" + endpoint_name;

var data = msg.payload;

var msg = {
	"method" : "PUT",
	"url" : dev_url + dev_id + endpoint,
	"headers" : {
		"Content-Type": "application/json",
		"x-api-key": api_key
	},
	"payload" : JSON.stringify(data)
};

return msg;

This code intercepts the incoming data (which will be a 0 or 1 from the inject node), formats it as a PUT command, and sends it to the http request node which will submit it to Golioth. In this demo, clicking one of the inject nodes updates the led0 value in LightDB state.

Head over to the Golioth Console and look at the LightDB state data for your device to see the changes. In practice, you can implement the desired state versus actual state principles discussed in my recent article and use this flow to update an LED on an actual piece of hardware.

Further Exercises: Dashboard/Web App

I’ve covered a lot of ground in this article and unfortunately have run out of column inches. But before signing off, I want to mention the potential for turning Node-RED flows into web apps.

The Node-RED dashboard node adds a UI which can be loaded on any browser. It looks spectacular with almost no work from us. Here you can see I’ve set up a display that shows the state of the LED, displays the latest value of the counter, and adds two buttons to turn the LED on or off.

If you want to test this out, here is an export of this flow that you can import into your Node-RED.

A word of caution: your flow contains an API key for accessing device information on Golioth. This must be kept secure. Please make time to review how to secure Node-RED and ensure that you authenticate users if you decide to build and share a web interface.

Communicating with IoT devices over a network connection presents a few challenges you must consider when designing your firmware. The most obvious is how to deal with spotty connections. Did your device get the command you sent? Does the reported state in the cloud accurately reflect the actual device state?

At Golioth we’ve built a device cloud that eases the process of provisioning, connecting with, and controlling fleets of IoT devices. To get the most out of the platform consider using a few design patterns related to Digital Twin, a virtual representation of a device that is present in the cloud and provides a framework for interacting with the real hardware.

I’ll demonstrate this using the Golioth platform, but of course the concept is applicable to all connected devices.

Avoiding Confused Users and Confused Devices

While there are innumerable ways in the IoT realm to confuse your users and your devices (trust me, I’m trying to find them all), the most obvious is the challenge of keeping the device and the cloud in sync with each other. For ease of understanding, let’s consider a box with one button and one LED that is connected to the internet over a cellular connection. We want the button to toggle the LED but also allow the cloud side to change the state of the LED. This might represent the state of some running service, but to keep it simple we’ll leave that out of the example.

If we rely on the cloud to control the device LED, the user will push the button and have to wait for the button press to make it to the cloud, wait for the state of the LED on the cloud to be changed, and wait for this change to make its way back to the device to update the LED. The user is likely to think the button press was missed and press it again.

If the device directly controls the LED, what happens if the cloud misses the button press because of connectivity issues? When not carefully planned for, the state of the LED will be different from the state of the service in the cloud. What happens if the LED state on the cloud is changed at almost the same time the button is pressed? We have a race condition to see which update propagates the fastest.

One design pattern we should all be familiar with uses a separation of state from command/control so that the device and its digital twin will remain in sync even when there are latency or connection issues. A device just coming online should be able to quickly enter a desired state from the server, and report back the actual state.

Keeping Desired State and Actual State Separate

The issues outlined above can be avoided by using different endpoints for a device to communicate with its virtual equivalent.

  • On the cloud there will be one representation where the physical device reports its current state. For the cloud this data is read-only, only the device can make updates to it.
  • A separate representation is made for the desired state of the device. The cloud side can make changes to this and the physical device will watch for and react to these changes. The physical device usually has a mechanism that indicates the desired operations were received.

Digital twin uses both a desired state and an actual state

Our simple example only has one "actual state" variable that keeps track of the LED state. A server might watch this data via a WebSocket to toggle some service as the value changes.

The cloud can populate the “desired state” with endpoints that request changes to the LED status. This might happen when some service the cloud is watching changes state and the IoT device needs to be aware of that.

Demo Using Golioth

Golioth stores persistent data using LightDB state. We can use a simple data structure to represent the digital twin of our hypothetical IoT device:

{
  "state": {
    "led": 0
  },
  "cmd": {
    "led_on": 1649691282
  }
}

The state endpoint stores the last-reported state of the device’s LED. The cmd endpoint reflects the cloud’s desire for that LED to be on by setting the led_on key to the current timestamp. The cloud could also set led_off to a timestamp, and when the device acts upon the desired state, it can compare timestamps for all existing keys to establish which command is the most recent.

I like to have the IoT device delete the command keys once they are received (another approach would be to have the device set an acknowledged value to the timestamp the desired state was serviced). In the above example, since the led_on key exists, we assume the device has not yet seen it. Once it does, the device will turn on the LED, update the status key for that LED to 1, and delete the cmd/led_on endpoint.
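As a minimal sketch of that decision logic (set_led(), report_led_state(), and delete_cmd_keys() are hypothetical helpers standing in for your GPIO and LightDB calls, and the timestamps are assumed to have been parsed from the observed payload beforehand):

/* Act on the "cmd" object; ts_led_on and ts_led_off are the timestamps found
 * under cmd, or 0 if that key is absent. Helper functions here are hypothetical. */
static void handle_cmd(int64_t ts_led_on, int64_t ts_led_off)
{
    if (ts_led_on == 0 && ts_led_off == 0) {
        return; /* no pending commands */
    }

    /* When both keys are present, the most recent command wins */
    bool turn_on = (ts_led_on >= ts_led_off);
    set_led(turn_on);

    /* Report the actual state, then clear the serviced command keys so they are
     * not applied twice (or mark them with an acknowledged timestamp instead) */
    report_led_state(turn_on);
    delete_cmd_keys();
}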

Our app can be set up to watch the desired state by observing the cmd endpoint and running an on_update function when a change is detected.

err = golioth_lightdb_observe(client,
					GOLIOTH_LIGHTDB_PATH("cmd"),
					COAP_CONTENT_FORMAT_TEXT_PLAIN,
					observe_reply, on_update);

The IoT device will write a simple string of “0” or “1” as the actual state when the button is pressed, and each time the device acts upon a desired state request.

char sbuf[2] = "0";
if (led_state & 0x01) {
	sbuf[0] = '1';
}
int err = golioth_lightdb_set(client,
					GOLIOTH_LIGHTDB_PATH("state/led"),
					COAP_CONTENT_FORMAT_TEXT_PLAIN,
					sbuf, strlen(sbuf));

So let’s imagine that our IoT device is in power-saving mode. When it wakes up and connects to the network, it will immediately see the command endpoint is requesting an LED state change. Using this design pattern means your desired state changes are not missed; even when a network connection is unavailable the device will be able to update to the most recent commands the next time it connects.

You can try this out for yourself on the Golioth cloud right now. I’ve implemented the desired state pattern in sample code, and the Dev tier of Golioth lets you test 50 devices on our platform for free.

Further Learning

At its core, this design pattern is incredibly simple. When you put it into use, things may become more complex. Perhaps you want to have a command queue so that there is a history of changes for the device to review the next time it connects to a network. Instead of deleting the commands when acting upon them, a device may place an acknowledged timestamp next to them, or some other scheme that fits your needs. But at its core, the separation between a device reporting its state and the cloud requesting state changes is an excellent IoT approach, and a great way to start learning about the concept of digital twins.

Live data visualization shows you what is going on with your IoT fleet right now, and it makes for impressive customer demos. We previously wrote about how to visualize live data on the self-hosted version of Grafana using the WebSocket plugin we wrote. Today let’s walk through how to connect live Golioth IoT data to Grafana Cloud.

Overview

Golioth is the translation layer between your fleet of IoT devices and the cloud. Among our many ways of accessing the collected data, WebSockets are one way to get live updates as soon as the data arrives.

Grafana visualizing temperature, pressure, and humidity

We wrote a WebSockets plugin for Grafana, the monitoring and observability software. The plugin is open source (here’s the code repository) and as of just a few days ago it’s available in Grafana Cloud, the cloud-hosted version of the software. Today I’ll guide you through:

  1. Sending some temperature sensor data to Golioth
  2. Configuring the Golioth WebSocket API on the Grafana Cloud
  3. Graphing the temperature data in realtime

This example has everything you need to adapt it for your own needs.

1. Send Sensor Data to Golioth

Prerequisite: You should already have a Golioth account, and have the Zephyr toolchain installed. To do so, please follow the Golioth Getting Started guide and Setting Up Zephyr for ESP32.

For this guide I’m using the LightDB Stream example that is part of the Golioth SDK, and running it on an ESP32. This example is a good one, since it will either use a temperature sensor you have configured in the devicetree, or simulate a temperature sensor if there is not one available.

Get device credentials from console.golioth.io

First, find your device credentials in the Golioth Console by selecting Devices from the left sidebar menu and clicking the name of your device on the resulting list.

CONFIG_GOLIOTH_SYSTEM_CLIENT_PSK_ID="esp32-temperature-pskid@ttgo-demo"
CONFIG_GOLIOTH_SYSTEM_CLIENT_PSK="a_strong_password"

CONFIG_ESP32_WIFI_SSID="your_wifi_ssid"
CONFIG_ESP32_WIFI_PASSWORD="your_wifi_password"

Now add the device credentials (Identity/PSK-ID and Pre-Shared Key/PSK) and your WiFi access point credentials to the project’s prj.conf file. This file should be located at ~/zephyrproject/modules/lib/golioth/samples/lightdb_stream/prj.conf. Add the lines in the format shown above.

$ west build -b esp32 samples/lightdb_stream -p
$ west flash

Finally, use the commands above to build and flash the sample to your ESP32.

streaming temperature data in Golioth console

If you go to Monitor→LightDB Stream you will see new temperature data coming in every five seconds. In this window, choose the Last 4h setting from the time selector, your device from the device selector, and enable 5s auto-refresh in the upper right.

Each time one of these packets comes in, the data will be automatically updated in Grafana. We just need to set that up, which is next!

2. Configure the Golioth WebSocket API in Grafana Cloud

generating Golioth API keys

The prerequisite for this step is to get an API key from Golioth.

  1. Use the API Keys option on the left sidebar of the Golioth Console
  2. Click the Create button in the upper right of the main window
  3. In the resulting dialog (shown above), press the Create button

The final dialog displays the key and some useful path information. We will use this path along with the wss://api.golioth.io/v1/ws/ WebSocket endpoint which I looked up in the Golioth WebSocket docs.

Grafana Cloud login

Log into Grafana Cloud. There are a number of services offered; the one we want is simply called Grafana.

On the left sidebar, hover over the gear icon and select Plugins. Search for WebSocket on the next screen and click the “WebSocket API” tile. Follow this 3-step process to add the WebSocket plugin (shown in the screenshots above):

  1. Click the Install via grafana.com button
  2. On the next screen click the Install plugin button
  3. Go back to the previous screen and refresh that page. It may take a minute or two, but when the new button appears click Create a WebSocket API data source

Grafana WebSocket data source configuration

This brings us to setting up our data source.

  1. Name can be anything you want
  2. Host is the wss://api.golioth.io/v1/ws/ from the Golioth Docs
  3. Click the Add header button
    1. Enter x-api-key as the header name
    2. Paste your API key from the Golioth Console in the Value box
  4. Click the Save & test button

We have completed the WebSocket connection between Grafana and Golioth. Now it’s time to visualize the incoming data!

3. Graphing Temperature Data in Grafana

In the left sidebar of Grafana, click the big plus icon and choose Dashboard, then click Add Panel.

Grafana panel setup

  1. In the Data source box on the left, select the name of the data source you just set up.
  2. For now, enter just a $ into the Field box
  3. Click the Path tab
    1. Follow this format which was shown on the confirmation dialog when you created your API key: /projects/<name of Golioth project>/stream
    2. Notice I have switched from /devices to /stream.
  4. At the top of the window in the center, turn the Table view on

You should begin to see JSON data come in over the table view. Remember that WebSockets deliver realtime data, so JSON will only appear when new packets are sent.

Now that we’ve seen the JSON data coming in, we can use it to identify two types of data to use for the graph: temperature and timestamp.

More specific JSON fields

Going back to the Fields tab, we can change the $ character (which means to show all the JSON coming in) to a specific key/value pair.

  • Enter $.result.data.timestamp
    • You will see the data in the table-view change as you type; this is helpful in navigating the hierarchy
  • Click the + button all the way to the right of the Field box to add another field
  • Enter $.result.data.data.temp

Graphing simulated temperature data

When you click the Table view button at the top, the view will be replaced by a visualization of your data. Here you can see it is obvious that I’m using simulated temperature data. You can also change the way it is visualized by clicking on the Time series menu in the upper right.

Conclusion

This is one simple example that is graphing a single stream from one device. But the principle scales with your IoT fleet. Graphing large numbers of devices across multiple projects takes little more than changing how you use your Path and Field settings on the panel. With Grafana you can easily group together multiple panels into a dashboard, with visualization options tailored to almost any data type.

 

Golioth has just released a Grafana WebSocket Data Source plugin. This open source plugin allows you to create a graphic dashboard using data from any WebSocket URL that uses JSON formatting. This means you can directly connect to the Golioth Cloud API and have Grafana render incoming data in real time as graphs, charts, maps, etc.

Grafana Takes Care of Data Visualization

Temperature data display as graphs

Realtime data from Golioth devices looks great in Grafana!

Grafana is an open source visualization and analytics software from GrafanaLabs. It allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. We chose Grafana because we heard it mentioned multiple times by our community members and found that it streamlines the process of managing and visualizing device data. You can build from source and run it locally (the power of open source), or use the free tier of the hosted Grafana Cloud platform.

We’re Going to Need WebSockets

At Golioth, we recently added WebSocket support for some of our services like Logs, LightDB State, and LightDB Stream. Actually, Chris wrote a blog post all about it. You can now connect to a WebSocket URL and get real-time data updates from your devices without the need to poll for updates like you would with a REST API.

As awesome as it is, Grafana didn’t have a WebSockets plugin, so we created one! It allows you to connect to any WebSocket URL and visualize the data in Grafana. You can get a copy of the WebSockets plugin code in our repo. But we have also submitted it as an official plugin – it should be available in Grafana soon!

Let’s Try the Demo!

Setting Up a Device to Send Data

As a hardware beginner, I followed one of our easiest samples using PlatformIO and our Arduino SDK to get my ESP32 board connected to our platform; see our examples folder on our Arduino SDK repo.

Note: The arduino-sdk repository showcased in this post is deprecated. GoliothLabs has an experimental repository that may work as a replacement:

I’m writing data to both of our services in a 5-second loop, updating my device’s state with LightDB State and reporting time series data to our LightDB Stream service.

// Every 5 seconds loop
...
  int core_temp = randomNumber(40, 60);
  int room_temp = randomNumber(27, 33);
  int uptime = millis() / 1000;

  // Build the JSON payload as a single string (adjacent literals concatenate)
  String payload =
    "{"
      "\"core\":{"
        "\"temp\":" + String(core_temp) + ","
        "\"uptime\":" + String(uptime) +
      "},"
      "\"room\":{"
        "\"temp\":" + String(room_temp) +
      "}"
    "}";

  client->setLightDBStateAtPath("/", payload.c_str());
  client->sendLightDBStream("/", payload.c_str());
...

This small program sends random numbers as core/room temperatures, and the device’s uptime. We have data, now let’s graph it!
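For reference, each update published by this loop is a JSON document shaped like the following (the numbers are just one possible random sample):

{
  "core": {
    "temp": 52,
    "uptime": 120
  },
  "room": {
    "temp": 29
  }
}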

Golioth’s WebSockets API

LightDB state data

Devices > Your Device > LightDB State

LightDB Stream Data

Monitor > LightDB Stream

WebSockets “listen” for changes, so every 5 seconds those updates on State and Stream are also available through Golioth’s WebSocket API. We can follow the WebSockets docs to set up a connection to listen for them.

Below you can see how the WebSocket URLs are formatted. Use the Golioth Console to find the projectId, deviceId, API key, and any paths you need. (This information is also available using the goliothctl command line tool.)

LightDB State

// Websocket URL format: wss://api.golioth.io/v1/ws/projects/{projectId}/devices/{deviceId}/data{/path=**}?{x-api-key|jwt}={API_KEY|JWT}

wss://api.golioth.io/v1/ws/projects/smart-house/devices/61d315e441da400dd6934493/data?x-api-key={projectApiKey}

LightDB Stream

// Websocket URL format: wss://api.golioth.io/v1/ws/projects/{projectId}/stream?{x-api-key|jwt}={API_KEY|JWT}

wss://api.golioth.io/v1/ws/projects/smart-house/stream?x-api-key={projectApiKey}

With those I was able to test the endpoint using websocat to make sure that the data my device was sending created notifications through the WebSocket connection as expected.
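The invocation is a single command; using the stream URL format shown above it looks something like this (substitute your own project and API key):

websocat "wss://api.golioth.io/v1/ws/projects/smart-house/stream?x-api-key={projectApiKey}"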

websocat command line tool used to test WebSocket feed

Grafana WebSocket Plugin Setup

For this demo I’m running everything locally (both Grafana and the plugin itself), but we have submitted the signed plugin to Grafana Cloud, so soon you will be able to find it there by default.

To use the plugin, there are a few configuration steps we need to follow:

  • Once you’ve logged into Grafana, go to the Configuration page (gear icon) and then to Plugins
  • Search for WebSocket API, select and then click on Create a WebSocket API data source

Now we fill in the fields as the API requires and hit save. For Golioth’s WebSocket API you can see I’ve entered the URL, and I’ve specified x-api-key and stored the API key that I generated on the Golioth Console.

Grafana WebSocket data source

Notice that I’m not using the full path, but only through the project level (smart-house). This way we can select the specific path we want to listen to later on the Panel’s Query page. This allows more flexibility.

Building the Dashboard that Shows the Data

With the data source set up, it’s time for us to build the query on an actual panel.

Something to keep in mind is that, right now, it will only listen to future events, so if there’s any update on the Query Field, Path, Panel or Dashboard, the current stored data will be wiped.

Graphing LightDB Stream:

  • Hover on the plus sign at the sidebar and select Dashboard
  • Then click on Add a new panel
  • Select WebSocket API under the Query’s data source
  • Go to Query A’s Path tab and fill in your desired path; in my case, /stream

Grafana WebSockets path setting

  • Under Fields tab, use JSONPath reference to transform the JSON result into something that the dashboard can display. Here I’ve used $.result.data.timestamp, $..core.temp, and $..room.temp
  • I want to display the last 5 minutes of my device’s temperature events, both for the core and the room. So I changed the panel’s type to Time Series in the top right corner and filled in the required fields:
  • Hit Save, and it’s done.

Grafana WebSockets field configuration

As long as data is streaming in from a device, it will be graphed in real time. One tip as you get used to handling data: try starting with Table View on to better understand the JSON returned by your query.

Displaying LightDB State

To create another panel for the LightDB State we can just duplicate the first one and update its Path:

Grafana state data path configuration

There’s a small difference here: LightDB State can have deeper paths, e.g. .../data/core/temp, but they will only be notified via WebSockets when there’s an update on them. Because I want to listen to multiple path updates for this panel, I will leave the path set to the root level, that is .../data.

Grafana state field queries

  • Transform the query fields to use $..core.temp and $..room.temp
  • Hit Save, and it’s another one done.

Final Results

Grafana state field queries

Just like that, we have two panels ready to listen for updates coming from both LightDB Stream and State in real time. Feel free to add more data and to listen to different State paths.

Conclusion

Grafana is an awesome tool that allows us to connect and create professional visualizations. Because it is open source, a major part of its improvement over time is a result of its community. We’re building an amazing platform at Golioth and we also want to empower our community with the freedom to build things they love with us. With WebSockets now available on our platform, we built this plugin to share that excitement with our users and with the entire world.

Hopefully you’ll give this a try, and we want your feedback! Get in touch with us on the Golioth Discord server and please join us there for our Office Hours every Wednesday at 10 am Pacific Time.

This guest post is contributed by Ben Mawbey, a community member who is active on the Golioth Discord and frequently takes part in Office Hours.

Data wants to be visualized. The impact of showing a customer a slick plot of the information their devices have been collecting is massive compared to pointing at a few hundred lines of text from a log file or database query.

I was looking for some sort of dashboard or charting application for a demo for a sensor system we’ve built. I figured this will appease the clients’ need for pretty pictures over boring reports. I found exactly what I needed, and it only took me about 30 minutes to get it up and running.

Grafana graphing data

This looks great, even if you have no idea what the graphs mean! Let’s dig into how to get from numbers in a database to pretty pictures in a dashboard.

Pieces of the Puzzle: Golioth WebSockets, Node-RED, InfluxDB, and Grafana

Golioth is brilliant at getting your device data from the real world up to the cloud, climbing the IoT bean-stalk some would say. While abstracting some of the trickiest IoT problems, Golioth can present your time-series data as a convenient cloud resource using LightDB Stream service. By leveraging the built-in WebSockets support and some open-source tools we can rapidly store, manipulate and display this data!

Grafana is the open-source blockbuster for this application and can be easily set up to graph sensor data directly from Golioth LightDB Stream using the REST API. This has two major drawbacks:

  • Grafana must periodically poll for new data
  • While LightDB Stream does provide convenient data retention, I prefer to use my own data storage

Enter InfluxDB, the time-series database powerhouse and an ideal companion to Grafana when talking about IoT real-time applications. This pairing is so popular that the InfluxDB data source integration is baked right into Grafana! By utilizing InfluxDB to store our sensor data, we can perform more complex queries much faster.

The question remaining is how to shuffle data from Golioth to InfluxDB. There are many potential solutions to this hurdle, but my favorite is Node-RED, which defines itself as a low-code programming tool for wiring up event-driven systems.

Node-RED editor window

Node-RED uses graphical flows to connect data sources and destinations, bending protocols and translating data formats in innumerable ways

Node-RED has exploded in popularity and provides all sorts of integrations to connect your systems together. It provides simple blocks to perform actions and a slick graphical interface to wire up your data flows. Conceptually Node-RED acts as our rule engine to process and direct data.

Dashboards: Grafana and InfluxDB

System diagram

Grafana is immensely powerful at providing custom views, data transformations, and alerting. That said, it is only as good as the quality of data you can provide. Having a tightly coupled InfluxDB instance with carefully curated data via Node-RED allows you to quickly configure complex data queries on large datasets with low latency.

Before we can play with Grafana the first step is InfluxDB setup. After you’ve installed InfluxDB, create a new database on your InfluxDB instance:

> CREATE DATABASE golioth

Configure the InfluxDB Data Source in your Grafana instance by clicking the gear icon on the left sidebar, choosing and adding a data source, then searching for the InfluxDB plugin. Here’s how I’ve set up my data source:

Grafana InfluxDB source configuration

Assuming you have some data in your DB, we can quickly create a new time-series dashboard panel in Grafana and query the dataset using this integration:

Grafana panel configuration

This simple query shows how we structured the data in the DB, allowing us to select from a particular measurement, specifying a specific device identity tag and aggregating data points with specific time buckets. Adjusting the time range instantly updates the graph from our local InfluxDB instance.
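The query behind that panel is plain InfluxQL. With hypothetical measurement and tag names (yours will depend on how the Node-RED flow below writes the data), it reads something like this, where $timeFilter and $__interval are macros Grafana fills in from the selected time range:

SELECT mean("temperature") FROM "sensor_data"
  WHERE ("device" = 'my-device-id') AND $timeFilter
  GROUP BY time($__interval)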

Now how do we get the data from Golioth into InfluxDB?

Node-RED

Several networking integrations are provided with Node-RED. Presently the most relevant to Golioth would be HTTP nodes for REST API requests and the WebSockets node which is the easiest to configure.

You can see your sensor data collecting in the Golioth LightDB Stream by using the Golioth Console web interface:

Golioth Console showing LightDB stream data

We can use Node-RED flows to connect to Golioth via WebSockets and store the resulting data in our local DB:

NodeRED editor window

The nodes in this flow were set up as follows, taking care to give them appropriate names making it obvious to see their function. All of the nodes I’m using should come as part of every Node-RED installation except for the InfluxDB nodes. But don’t worry, these are trivial to install. On Linux it looks something like this:

cd ~/.node-red
npm install node-red-contrib-influxdb
sudo systemctl restart nodered.service

WebSockets Node:

First set up the credentials to your Golioth project using your generated API key and connect to the WebSockets LightDB Stream endpoint.

Debug Node:

Drop a few of these nodes along the way and click the small green button to turn the debug log on. This is super handy to check data coming through and make sense of it.

JSON Node:

The LightDB Stream endpoint provides us with a JSON object representation containing our sensor data as well as meta information such as the data timestamp and device identity. This node allows us to parse this JSON into a javascript object so that we can work with it more easily in subsequent nodes.

Change Node:

This node clearly shows the power of Node-RED as we can craft any sort of data manipulations or transformation.

We could do without this node and jump straight to InfluxDB. However, should any malformed data arrive, we risk polluting the DB with bad data. By selectively transforming the incoming data and mapping it into a new object, we can not only filter out bad packets and arrange the measurement names, but also add tags to build a solid data representation in the DB, making our queries far more powerful.

InfluxDB Out Node:

Node-RED InfluxDB input node
Node-RED InfluxDB server settings window

Finally, we can configure the data connection to our InfluxDB instance, set appropriately for your server configuration and database created earlier.

Assuming your flow is set up correctly, you should be able to see data collecting in your database. We know it works, but as I mentioned before, this visual is not going to impress our customers.

InfluxDB data

Revisiting the Grafana panel we previously created, you can see InfluxDB data is now being plotted!

Grafana graphing data

Corner Cases

One of the downsides of WebSockets is their ephemeral nature: should there be some temporary connectivity issue, any data packets sent in the meantime would be lost from the point of view of your InfluxDB database. A solution around this could be to set up another flow that executes periodically to sync with the LightDB Stream using the REST API. Node-RED could then be configured to check this data and add any missing values into the InfluxDB instance and prevent consistency issues.

Another concern with open-source self-hosted systems is security. It can be challenging to secure your server and services should they be public-facing. If you are handling sensitive data then it would be best to consult with an expert in this field. Fortunately, all of the tools discussed have subscription-based cloud services available that sort all of this out in the background.

Conclusion

Being able to set up a simple demo like this in less than 30 minutes demonstrates the power and flexibility of these modern open-source solutions. Coupled with the reliability and maintenance advantages of the Docker system, it’s a breeze to test locally on your desktop or Raspberry Pi and then deploy to production a moment later on your cloud server of choice. The rules engine and ease of wiring up blocks provided by Node-RED opens up a massive pool of possibilities, from countless other integrations to building intelligent processes. One such idea I would like to explore is integrating a device provisioning process into the flow such that we can link a device to a dataset or location during deployment or maintenance.

Every IoT project needs to provision devices that are going to be available in the field. Leveraging open standards, Golioth cuts down on the required time and hassle for IoT development teams.

Provisioning is a critical step in IoT projects when they go to production. Unfortunately, this process remains a mystery for many engineers due to lack of information about the process. At a high level, provisioning is passing configurations and credentials to an IoT device so it can connect securely to the cloud. Once provisioned, the device can send telemetry, receive commands, or be updated (by OTA DFU) when it’s out in the field. How you provision a device depends a lot on the use case. 


Example use cases

First, let’s examine a customer-facing product like a smart light bulb. In this scenario, the first step would be for the user to provide WiFi credentials to connect to the user’s home network. On the platform side, the device would obtain a new set of credentials to connect to the backend services. These credentials would be specific to that particular user and device. Later, the user might decide to clean up the device to sell it, so the ability to remove device configurations and delete a given set of credentials is important. This is a perfect example for using BLE provisioning, as shown in the video below. The user experience is seamless with any existing mobile app used for controlling the bulb and reporting data back from the end device.

Next, we’ll consider factory-level provisioning. An example device like a cellular asset tracker would be pre-provisioned at the factory before being used by your customer. Later the user will only associate that device with their account, but the credentials to talk to the cloud are already set on the device. This can be done as part of the manufacturing process: probing the device via Serial/UART to get the device hardware ID, provisioning it to the cloud, and sending credentials back to the device via the same transport. We can even use dedicated firmware that only handles provisioning in the factory. That firmware accepts the initial device configuration and saves the credentials to flash; the firmware shipped afterward doesn’t have that provisioning feature enabled, making sure external parties can’t change or reverse engineer the initial configuration.

There are myriad ways that provisioning can be done. Each instance will depend on the factory environment, the capabilities of the user, and on the end application. The video below is a setup similar to the first example explained above, using a Bluetooth application to read and then program the end device, all while working with the Golioth cloud.

Our demo application

As you can see in the video, we developed an end-to-end sample that shows a practical scenario of provisioning IoT devices with a native mobile app, talking with an IoT device over Bluetooth, and provisioning device/credentials in Golioth Cloud. We leverage different tools for doing so:

  • MCUmgr as the device management subsystem and protocol.
  • Zephyr as the real-time operating system, that implements MCUmgr.
  • Open-source mobile SDK to integrate MCUmgr on an app
  • Golioth’s API and the Device/Credentials Management capabilities. 

The MCUmgr community developed multiple types of transports to interact with devices, a benefit of MCUmgr being an open standard and having a vibrant community. One option is to communicate with the device over serial UART using the `mcumgr` cli or even integrate that into your own set of provisioning tools. Another option is to use a mobile SDK that implements MCUmgr protocols over BLE to talk with devices.

We took the Bluetooth approach and forked Nordic’s MCUmgr example application, adding communication with Golioth APIs to manage devices. Once we discover the name of the device, we assign credentials via the REST API and securely send them over Bluetooth to the end device. The device is running one of Golioth’s samples that accepts dynamic configuration for WiFi and DTLS Pre-Shared Keys to talk securely with our cloud. The device also uses a different Golioth service called LightDB. Using LightDB, we can publish the on/off state of the light bulb, show that data on a UI, and even send commands to change the state on the device.

Source code for the mobile app:

More details on how to use our REST API and how to generate API Keys can be found on our docs website.

References