DevCon22 was a two-day event that Espressif held last month to showcase upcoming Espressif products as well as external partner offerings. Mike and I (Chris) gave a talk about how the Espressif IoT Development Framework (ESP-IDF) makes it easy to add Golioth to your IoT projects.

The ESP32 line of Wi-Fi-based devices delivers the internet connection, while the Golioth ESP-IDF SDK provides access to Golioth's device services.

Golioth covers your hardware

Golioth supports multiple hardware ecosystems, and we expect to enable even more in the future! We’ve built everything firmware engineers need to seamlessly connect their devices to the Golioth Cloud. Our goal is to meet engineers where they work, and this often means working directly inside the ecosystems their projects are built upon.

We were excited to support the ESP-IDF, which is based on the popular FreeRTOS core. This open source Real Time Operating System (RTOS) is prevalent throughout the electronics industry, with ports to many different chip ecosystems. The Espressif team has taken the core capabilities and ported them to work across their growing catalog of parts. They have also bundled a range of meta-tools, including the build system (idf.py), the programmer (esptool.py), and more.

The Golioth ESP-IDF SDK pulls in the components needed to access the Golioth Cloud from Espressif devices right away. We use CoAP as a transport layer, so we include a CoAP library to allow your device to “talk” CoAP. Golioth is secure by default, so we also include an mbedtls library to encrypt your packets (using your PSK-ID and PSK). The net result is that firmware and hardware engineers don’t need to worry about the low-level details of connecting to our servers; instead, you can use the high-level APIs shown in the presentation video. For anyone using ESP32, Golioth just works!
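As a rough sketch of what that looks like in practice, here is how a device might authenticate with its PSK-ID and PSK using the high-level client API. The structure follows the SDK's golioth_basics example at the time of writing; field names may differ between SDK versions, and the credential values below are placeholders:

/* Sketch only: struct and field names follow the golioth_basics example and
 * may differ in your SDK version; the credentials below are placeholders. */
#include <string.h>
#include "golioth.h"

golioth_client_config_t config = {
    .credentials = {
        .auth_type = GOLIOTH_TLS_AUTH_TYPE_PSK,
        .psk = {
            .psk_id = "my-device@my-project",   /* PSK-ID from the Golioth Console */
            .psk_id_len = strlen("my-device@my-project"),
            .psk = "my-pre-shared-key",         /* PSK from the Golioth Console */
            .psk_len = strlen("my-pre-shared-key"),
        },
    },
};

/* Wi-Fi must already be connected; the client handles CoAP + DTLS for you */
golioth_client_t client = golioth_client_create(&config);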

The challenges of scale

The core of the talk showcases Cloud technologies that make it easier to scale your fleet. If you’re the classic example of a startup in a garage, you probably aren’t planning to scale out to millions of devices yet. Even if the future is uncertain, it’s good to understand the capabilities that will make it easier to control your fleet, from 10 devices to 10 million devices.

We also want to make sure that firmware and hardware engineers don’t need to become Cloud engineers to enable these capabilities. The features described above and in the video are available by default, without any additional Cloud-side code required. This means engineers can start sending data to the Cloud immediately and focus on the higher value tasks of pushing the data to those that need it, like their internal software or data science teams.

Mike shows many of these services in action during the talk, including how users can interact with the data that is sent to the Cloud using the ESP-IDF SDK. Each of these features enables users to grow their Cloud capabilities immediately.

Are you ready to scale?

Golioth is ready to take your fleet to the next level, using the ESP-IDF SDK or any of our other supported hardware ecosystems. You can immediately pull the SDK into your next project and start sending data to the Cloud. Reach out to us on our Forums or the Golioth Discord to discuss how we can help you to build out your fleet.

Slides

If you’d like to preview the slides used in the video presentation, check them out below. If you’d like a PDF copy of the slides, please contact [email protected].

The Internet of Things is all about transferring data between devices and the cloud. Of course not all data is equal, but with just a few core services you can cover all the bases. One example is the need for persistent data that is easily accessible.

With the Golioth LightDB State service, data can be stored and retrieved from both the Device and the Cloud side. This is useful for reporting status information, restoring configurations after a power cycle or network outage, and indicating desired changes.

How Golioth enables persistent data

Recording sensor readings might be the first use that comes to mind for IoT devices (our LightDB Stream service is built for that!) but many times we also need stateful data.

State data has no concept of time but instead answers the question: “what was the last status we received?” It acts as a single point of truth that’s quick and easy to access from both the device side and the cloud side. Every time you request a state endpoint you can be sure you’re getting the most recently updated value.

Devices in the field can use state data to report their actual state, things like:

  • What is the timestamp of the last reboot?
  • How full is that SD card used for offline storage?
  • What units has the user set for sensor data?

From the Cloud side, state data may be used for command and control. By setting a “desired state” in Golioth LightDB State, devices that connect infrequently will receive their marching orders the next time they reconnect. Of course, this use case is also solved by the Golioth Device Settings Service, which allows project-wide settings adjustments.

LightDB State data can be as simple as a single key/value pair, or as complex as a nested data structure. Devices registered to observe values will be notified every time there is new state data available.

LightDB State using Zephyr RTOS

Your device chooses a path to store and access the LightDB State data. The cloud doesn’t need to be prepared in advance: if your device sends data to a new state endpoint, it will automatically be created. There are a number of examples of the LightDB State API in the Golioth Zephyr SDK.

/* Setting a value on the "counter" endpoint */
err = golioth_lightdb_set_cb(client, "counter",
                 GOLIOTH_CONTENT_FORMAT_APP_JSON,
                 sbuf, strlen(sbuf),
                 counter_set_handler, NULL);

/* Observing data changes on the "desired/my_config" endpoint */
err = golioth_lightdb_observe_cb(client, "desired/my_config",
                 GOLIOTH_CONTENT_FORMAT_APP_JSON,
                 desired_state_handler, NULL);

Here we see asynchronous API calls to set a counter endpoint and to observe data changes on a desired/my_config endpoint. In both cases a “handler” callback function is registered to process the received data (in the case of the observe operation) or handle error messages received from the server.
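For completeness, here is a sketch of what those two handlers might look like. It is modeled on the Golioth Zephyr SDK samples; the exact struct fields and callback signature may differ between SDK versions:

/* Sketch based on the Golioth Zephyr SDK samples; check your SDK version
 * for the exact callback signature and struct fields. */
static int counter_set_handler(struct golioth_req_rsp *rsp)
{
    if (rsp->err) {
        LOG_WRN("Failed to set counter: %d", rsp->err);
        return rsp->err;
    }

    return 0;
}

static int desired_state_handler(struct golioth_req_rsp *rsp)
{
    if (rsp->err) {
        LOG_ERR("Failed to receive desired state: %d", rsp->err);
        return rsp->err;
    }

    /* rsp->data and rsp->len hold the payload received for the observed path */
    LOG_HEXDUMP_INF(rsp->data, rsp->len, "desired/my_config");

    return 0;
}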

The Golioth Zephyr SDK also includes the option to send your sensor data using CBOR encoding. This is an alternative to JSON which serializes the data stream to a more compact form to save battery (less radio-on time) and bandwidth (fewer bits being transmitted).
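As a rough sense of scale, here is a single key/value pair in both encodings. The byte counts apply to this tiny payload only; the savings grow with larger, nested data:

JSON:  {"counter":5}                    13 bytes
CBOR:  A1 67 63 6F 75 6E 74 65 72 05    10 bytes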

LightDB State using ESP-IDF

The ESP-IDF SDK shares a very similar set of API calls. The device again chooses the endpoint to use on the Golioth LightDB State service. The golioth_basics sample code demonstrates how to use the set/get/observe functions.

/* Setting a value on the "counter" endpoint */
golioth_lightdb_set_int_async(client, "counter", counter, NULL, NULL);

/* Observing data changes on the "desired/my_config" endpoint */
golioth_lightdb_observe_async(client, "desired/my_config", on_my_config, NULL);

Here we see asynchronous API calls used to set the counter endpoint and observe the desired/my_config endpoint.

LightDB State using your own solution

For the extra ambitious out there, you could roll your own device-side code to connect to Golioth. This is the kind of work our wonderful firmware team does whenever they spin up a new SDK. Some of the required elements include:

  • mbedtls in order to create a DTLS connection to the Golioth Cloud
  • A CoAP library
  • (optional) a CBOR library to encode the data

We hardly ever recommend doing this unless you are a very advanced user. Instead, our existing SDKs cover a wide swath of parts, ecosystems, and Real Time Operating Systems. If you are interested in one we don’t already cover, please let us know.

Formatting data and access from the cloud

The Golioth LightDB State service will accept and store any properly formatted data encoded in either JSON or CBOR. If the endpoint doesn’t yet exist, it will be created when data is received. Requesting a non-existent endpoint returns null.

As an example, your device might send the following JSON string to LightDB state:

{ "status": { "last_reboot": 1666586559, "location": {"lat":42.988814,"lon":-90.139254,"alt":369.7,"accuracy":2.3,"speed":0.0,"heading":0.0,"time":"2022-10-31T22:49:03.239Z"}, "debug_mode": 0 } }

This imaginary device is recording the last time it rebooted, saving a location from its GPS unit, and reporting on whether or not the debug mode is activated. Whether this data is sent as JSON (which is the case here) or CBOR, the Golioth Console will unpack the information and display it as human-readable JSON:

Golioth LightDB State

What’s really great is that once the data arrives on the Golioth Servers it’s incredibly easy to work with. You can get real-time updates using WebSockets, connect your data to your favorite cloud provider (e.g. AWS, Azure, or GCP) using our Output Streams, and access/update the data using the Golioth REST API.

Start using LightDB State Today!

All of our Device SDKs show how to use LightDB State with minimal code writing from the user. Just pass your state data up to Golioth and you’ll be on your way. Stop by our Forums or send us an email if you get stuck.

How can security improve when manufacturing large volumes of devices? That’s the question I ended on in my last article about Golioth Pre-Shared Keys (PSK).

Securing a large population of devices (10k or more) in a way that’s scalable and meets your project’s budget is not trivial, but it is solvable. It takes a combination of modern technologies and some not-so-modern ones.

Certificates have been around for decades. Today the most recognizable certificate-based security protocol is TLS, which started all the way back in the mid-90s as SSL. But certificates haven’t found their way to all IoT applications because of their complexity and infrastructure requirements.

Many connected devices work around the complexity by eschewing good security practices:

  • Completely ignoring security (authentication or encryption)
  • Relying on “security by obscurity” (proprietary solutions that are not secure, but would need to be intentionally exploited by an attacker)
  • Using product-wide passwords
  • Producing devices in small enough quantities to not feel the pain (and cost) of handling per-device keys

None of these are ideal at scale.

The Limits of Pre-Shared Key (PSK) Scaling

When using PSK, your device and your cloud need to pre-share a secret. That means you need to store and distribute a sensitive per-device key for every device (or a system to derive the key). For prototyping, this involves copying and pasting a key from the Cloud to your device over serial or similar. There are challenges getting those keys onto your devices on the production line, but the ultimate risk to your deployments comes when those devices are out in the field.

PSK symmetry

From a security perspective, the limitation of PSK is that it uses symmetric encryption. Both sides of the communication use the same key. That means that a potential attacker could extract the keys from the cloud (in bulk), and use them to impersonate authentic devices.

Instead, we could use an asymmetric key pair. With asymmetric encryption, there is no pre-shared secret, but rather a pair of keys: a public key and a private key. In our case, the public key is shared between the cloud and the device. The private key is kept secret by the device and can be used to decipher information encrypted with the public key. Only the holder of the private key can decipher information that is encrypted with the public key; you cannot use the public key to decipher information enciphered by it. That’s why it is called asymmetric. The burden is again on the device maker to load the keys onto the constrained devices at the time of manufacture.

If we want to upgrade the experience even further, certificates are more usable primitives built on asymmetric cryptography, and they offer even more trust.

Certificate signing

One added benefit of asymmetric cryptography (and public keys) is that public keys are well suited to chains of signatures. In other words, they can be used to prove the authenticity of a chain of identity claims. A device claiming a specific serial number can prove its authenticity, as opposed to someone who merely copied the serial number and pretends to be the device.

How is that possible? The signatures can be independently verified by anyone using the public keys. Even if you don’t have the public key for a specific device, you can decide that you trust the manufacturer that produced the device. By extension, you trust devices that can prove they were produced by that manufacturer.

But that also means that as an author (or producer) of a secure device, you have to choose wisely who you trust.

Chains of trust enable scaling of secure manufacturing

Let’s imagine a scenario where you design a super-secure device, powered by private / public keys in the form of certificates.

Next, you need to find someone to produce those devices, such as a contract manufacturer. Once the devices start to roll off the production line, they need to be loaded with the private-public key pairs. Someone needs to generate those pairs and help to get them on the device in a secure way.

From a simple perspective, you can use certificate chains to enable manufacturers to produce authentic devices on your behalf. When the devices first connect, they will be able to prove to the cloud that they were produced through a trust chain.

The advantage of that is that you no longer need to store all per-device keys or certificates. All you need to decide is what the trusted “root” certificates are for the devices that are authorized to interact with the cloud.

Downsides of certificates

If the security advantages of certificates are so clear, why don’t we just use them everywhere? If you come from the web world, you’ve likely never encountered PSK because certificates are everywhere. Your browser doesn’t ask for a password to access the secure version of Google.com; the server presents a certificate that can be traced back to a signing authority.

In the Internet of Things, not all use cases have the budget to make certificates commercially feasible.

Some of the major challenges you will need to consider:

  • You need to decide who you trust, and how much you trust them. If you can’t trust your manufacturing chain, it’s going to be very difficult (costly) to produce trustable devices
  • Secure storage of private keys for the root certificates can be costly, if you want to do it right. When compromised, private keys allow the attacker to produce certificates that are indistinguishable from authentic certificates
  • Setting up the end-to-end process of certificate provisioning can be costly, and the subsequent maintenance and rotation of certificates can carry a lot of cost, too, if not planned properly in advance

But mostly, when IoT projects are in the planning stage, security tends to be overlooked or taken for granted. Teams scramble to get something in place just before starting the first large batch of devices; it’s often at this point that companies discover the full extent of their security needs and the associated costs. By then, budgets have already been spent and deadlines are tight, and using proper certificate authentication becomes commercially unfeasible.

Certificate authentication for Internet of Things

Certificates offer the ideal combination of security, usability, and scalability. The manufacturing chain is what ultimately defines how trustable your devices actually are, so you will need to choose who you trust wisely. The overhead of building and maintaining certificate chains and large volumes of certificates is not free, but is cheaper than the alternatives, especially if you account for risk.

At Golioth, we only allow secure device connections, with PSK as the bare minimum. Our users have a choice in the degree of security that they need for their application and that is commercially feasible for their use case. If certificates are your choice, we have the supporting end-to-end tooling and infrastructure.

Ultimately, each company needs to consider the security sensitivity of their IoT application. Your challenge will be balancing the risks you are willing to take and the budget your application has for security overhead. Hardest of all is understanding where you should draw the line. If you’d like to talk to someone about your security needs at your company, please reach out to our DevRel team for a personalized conversation.

Golioth helps you get your data to and from your constrained IoT devices in the field. We make it really easy to ship data up to the Cloud, where you can do more interesting things than merely interacting with the device out in the field.

But what happens when the data gets to the Cloud?

Once your data is in an environment with many more resources, you probably want it to pass between different services. For the average device creator, we make this easy with our Output Streams feature. Output Streams immediately export data to 3rd party Cloud providers like AWS, Azure, and GCP. When we built that service (and our other “Cloud-to-Cloud” integrations), we wanted to build on top of an open standard that would reduce the friction of getting your data from the Golioth Cloud to your own systems. We decided to reuse a specification we had adopted for our internal infrastructure: CloudEvents.

What are CloudEvents?

First off, we need to ask “what are events?” It sounds like a generic term, but for cloud developers it means something quite specific: “event data” vs. “message data” is a pattern in Cloud-to-Cloud communication.

CloudEvents are a standardized way of formatting event data so that it’s easier for one Cloud provider to understand incoming data from another. The standard includes things like the Type of the event, the Source of the event, and the formatting of the event payload. If you’re trying to figure out where an event came from (e.g., a request to change the value of an LED from ‘true’ to ‘false’), you need to know whether that update request came from an app, the Golioth Console, a 3rd party source via the REST API, or some other previously validated entity talking to our Cloud.
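As a concrete illustration, a CloudEvent is a small envelope of metadata wrapped around your payload. The attribute names below come from the CloudEvents v1.0 specification; the type, source, and data values are made up for the sake of example:

{
  "specversion": "1.0",
  "type": "io.example.device.state.updated",
  "source": "/projects/example-project/devices/example-device",
  "id": "3f2a7c1e-8d41-4c2a-9a6b-111111111111",
  "time": "2022-10-31T22:49:03.239Z",
  "datacontenttype": "application/json",
  "data": { "led": false }
}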

The History of CloudEvents

People have been doing event-driven design (architecture) for a long time. But there was no standard, which resulted in a lot of reinventing of “templates” for events. When you receive an HTTP event carrying JSON, the payload is often not the important part of processing that event; the metadata around it is. Without a format that producers and consumers agree on, the result is wasted time and a lack of interoperability. Cloud engineers and members of the Cloud Native Computing Foundation (part of the Linux Foundation) started CloudEvents to develop a standard.

An Event Driven Architecture

Cloud software has gotten extremely complex. Services stream all sorts of data, often between internal and external teams. Event-driven designs have become increasingly popular among modern cloud solutions to manage this growing complexity. Think of a large system with lots of microservices and serverless functions talking to each other. Getting agreement up front on the message design would slow down innovation and be rather difficult. Events allow for the “componentization” of these API contracts; they decouple them. And as long as the agreed-upon format is written down somewhere, both the servers (producers) and clients (consumers) can move faster, and designing very complex systems becomes easier.

Golioth is modern, cloud-native software too, and our architecture has an event-driven design internally. We use CloudEvents internally. For example, when a device sends a LightDB Stream value over CoAP, it is processed by our CoAP API Gateway. The gateway then converts that message into a CloudEvent and publishes it to a message broker. The broker and that entire part of the cloud ensure that the CloudEvent data makes it to our time-series database securely and reliably. That piece in the middle is performant, scalable, and secure thanks to a design that leans on modern event broker software.

If we didn’t use the event pattern, we’d have to develop a bespoke system to handle CoAP messages in the middle of our stack, which would likely be far more difficult to build and more fragile to maintain.

Why should you care?

If you’re a device maker, do you need to care about CloudEvents? Generally, no. Golioth makes it really easy to connect to the Cloud and handles everything you’ll need to do in order to interact with the rest of the Internet.

But you’ll benefit from our use of CloudEvents because of how easy it is to connect to Golioth in a standard way. When you are working with your Cloud team (or a customer’s Cloud team, if you do design for hire), it will be really easy to interact with their Cloud. Even bespoke Cloud infrastructure will be able to easily connect to the Golioth Cloud. Thanks to the standardization, there are a range of libraries to make it easier to interact with CloudEvent data to match how your Cloud team is operating.

The clearest way to see this in action is to look at our WebHooks, which are primed to hook into and leverage the CloudEvents ecosystem. For example, you can use any of the dozen or so SDKs in your favorite language (we’re fans of Go!) or a bunch of integrations to process the data on your cloud. If you’re using one of our native Output Streams with AWS, GCP, or Azure, you don’t have to worry about developing the integration. Under the covers? It’s using CloudEvents too.

What is the future of CloudEvents?

The CloudEvents specification addresses the most common use cases, like HTTP + JSON or Kafka + Protobuf. But it’s also an extensible spec, so bindings for IoT protocols like MQTT have been added. Anyone can define their own event format and publish it.

At Golioth, we continue to add functionality to our Output Streams capability, and we’re able to integrate new ones with lower overhead because of CloudEvent standardization. If you’re interested in extending Golioth capability or need help passing data between our Cloud and another Cloud, please reach out on our Forum, our Discord, or by email.

It’s no secret that we like the Zephyr Real Time Operating System (RTOS) around here. Nor is it a secret that we’re very interested in the Internet of Things (IoT). Golioth Founder and CEO Jonathan Beri gave a talk at the 2022 Zephyr Developer Summit (ZDS) about why those two things are a perfect match, and how Zephyr can help you create an IoT product faster.

A high level overview

Since this talk was on the first day of ZDS, it was focused on people less familiar with the Zephyr ecosystem. So what should a beginner know about Zephyr and its relationship to IoT devices?

Batteries are included

Zephyr has a couple of key features that make it a “one stop shop” for building an IoT device. Some other RTOSes rely on the engineer to piece together libraries and implementations to get started, which either means more startup time or reliance on hardware vendors to make a ready-to-go solution for you. Instead, Zephyr provides individual elements already configured to work together:

  • Device drivers
  • An OS kernel (including scheduler)
  • Services
  • A build system

Tying together the ecosystem with these tools provides a consistent experience. This also leads to another very important aspect.

Chip vendor buy-in

Because Zephyr has a well-defined system, chip vendors develop code to make their parts work within the ecosystem. With other RTOSes, that script is flipped: the RTOS is made to work with abstraction layers within each vendor-specific ecosystem, meaning there is less likelihood of interoperability with other vendors’ solutions.

In addition to the chipsets from the vendors, a wide range of boards are supported. Targeting specific boards makes it easier to get dev boards working when getting started, and remapping pins to specific functions on a board doesn’t require a configurator tool like those available in many vendor IDEs.

Talking to the Internet

If you choose an RTOS that isn’t part of a broader ecosystem, you need to either define your own network layers or lean on a vendor implementation to do it for you. In Zephyr, the network layer and the protocol definitions are developed as part of the community. That means chip, module, and software vendors are all working toward a common implementation. Add in the fact that there are modems and abstraction layers all the way up the stack, and an engineer using Zephyr doesn’t need to think about all of the pieces it takes to connect to the internet. At Golioth, we are able to switch between various networking technologies easily when using Zephyr.

Protocol Variety

Golioth talks to the internet at a very high level, so network access “just works”. This comes into focus when a small embedded device can use the variety of internet protocols available. Most people understand HTTP, but also consider the more common IoT favorites like MQTT and CoAP. Golioth favors CoAP but also supports MQTT. This enables implementations in other parts of the network stack. A good example is devices running OpenThread, a Thread network protocol implementation. As we showed in our recent post about Thread, a small Zephyr-based device uses 6LoWPAN and sends CoAP over UDP packets back to the Golioth cloud.

Last but not least: Security

Zephyr enables a secure connection back to the internet through its network stack and features like DTLS, the basis of a secure connection over UDP. Security is a deeper topic in Zephyr though, going all the way down to secure bootloaders and working within things like TrustZone. Talking to secure elements requires drivers that understand how to communicate with particular chip features (internal or external to a microcontroller). At a very high level, Zephyr also focuses on things like the Software Bill of Materials (SBOM), so your security teams understand the various software you are pulling into the build.

Built for IoT

Zephyr not only ❤️ the internet…it’s purpose-built for enabling IoT devices. Paired with Golioth, a Zephyr device has a higher likelihood of getting to market and growing to a massive scale. We’d love to hear your thoughts about the video on our Forum, our Discord, or by email.


Alvaro Viebrantz, Founding Engineer at Golioth, went to the Zephyr Developer Summit…and said the hardware isn’t the only part of an IoT system that matters. That’s bold! This was a room full of hardware and firmware designers!

In reality, this talk is about everything else you should consider when building a device using the Zephyr Real Time Operating System (RTOS). Alvaro goes through the many layers and considerations involved in architecting a large-scale Internet of Things (IoT) deployment, from the servers, to the apps, to how devices communicate back to the cloud, and yes, all the way down to the code put onto the end device.

Top Down Design

Most IoT users don’t first experience a device by holding the hardware in their hands and seeing it do some particular action. Instead, users rely on things like dashboards and applications. The physical device is often hidden from the end user, at least inside an enclosure, and sometimes entirely (as in the case of something like a home monitor in your attic).

It’s important to start by mapping your data model, often before you even start your first schematic. What will your device do? How will it be accessed hierarchically? What other elements are part of the system or will be in the future?

Just as important, you should follow the ownership of both the physical device and that device’s output data all the way up the chain back to the user. How will a user claim that a device belongs to them? How will you organize the various levels of permission required? What kinds of questions will your users seek to answer with the end device or with the entire fleet? (e.g., “Do any vehicles in the fleet have a low battery?”)

Who is in charge here?

After you have considered how you will configure your database and what the end user will see when they interact with the data, you should move on to access levels and which parts of the system are accessible by different individuals. This may include access to the end application, but should also include access control for the configuration components of the system. You don’t want your end users to be able to access your device management console, even if they are the ultimate owners of the device. Instead, you will want to provision the devices so you know the unique identifier of each end device, and then assign that identifier, along with the appropriate access level, to the user.

Additional layers required for a realistic IoT deployment

As you continue to move “down the stack” from Applications all the way to the hardware, there is still a good chunk of Cloud in between. You’ll want to make sure you at least understand all of the elements you’ll likely want in your design, including:

  • Device management
  • Device state and configuration
  • Bidirectional command and control
  • Time-series data and events

Connecting to the network

We’re almost down at the device itself, but the connection to the network is another key consideration before we get there. Not just the method of connection (which ultimately relies on the hardware), but also the protocol and the pathways of each packet coming back. In Golioth’s case, we default to CoAP, which is a lower-overhead, UDP-based protocol. All of our device SDKs include support to make it easy to talk to CoAP endpoints on the Golioth servers. As Alvaro mentioned in his talk, and as our SDKs also feature, encoding can help to shrink messages even further, using things like protobuf, JSON, and CBOR.

Finally the hardware

By this point we were squarely back in the realm of expertise of the Zephyr developers in the room, though many now had a greater appreciation of all the other things required for IoT systems. Alvaro mentioned how some aspects of on-device connectivity in Zephyr could affect your decision making further up the stack. This talk to a packed room showed developers how to plot out a successful IoT deployment beyond the limits of the hardware, and to think about IoT systems from the hardware all the way up to the end user.


Controlling 10 devices is easy, controlling 10,000 is a different story. The trick is to plan for scale, which is what we specialize in here at Golioth.

A few weeks ago we announced the Golioth Device Settings Service, which enables you to change settings for your entire fleet at the click of a button. Of course, you can drill down to groups of devices and single devices too. While the announcement post showed the Settings Service using our ESP-IDF SDK, today we’ll look at the settings service from the device point of view with Zephyr RTOS.

Device-side building blocks

The good news is that the Golioth Zephyr SDK has done all the hard work for you. We just need to do three things to start using the settings service in any Zephyr firmware project:

  1. Enable the settings service in KConfig
  2. Register a callback function when the device connects to Golioth
  3. Validate the data and do something useful when a new setting arrives

Code walk-through

Golioth’s settings sample code is a great starting point. It watches for a settings key named LOOP_DELAY_S to remotely adjust the delay between sending messages back to the server. Let’s walk through the important parts of that code for a better understanding of what is involved.

1. Enable settings in KConfig

CONFIG_GOLIOTH_SETTINGS=y

To turn the settings service on, the CONFIG_GOLIOTH_SETTINGS symbol needs to be selected. Add the code above to your prj.conf file.

2. Register a callback for settings updates

static void golioth_on_connect(struct golioth_client *client)
{
    if (IS_ENABLED(CONFIG_GOLIOTH_SETTINGS)) {
        int err = golioth_settings_register_callback(client, on_setting);

        if (err) {
            LOG_ERR("Failed to register settings callback: %d", err);
        }
    }
}

To do anything useful with the settings service, we need to be able to execute a function when new settings are received. Register the callback function each time the device connects to Golioth. Your app may already have a golioth_on_connect() function declared; look for the following code at the beginning of main() and add it if it’s not already there.

	client->on_connect = golioth_on_connect;
	client->on_message = golioth_on_message;
	golioth_system_client_start();

Each time the Golioth Client detects a new connection, the golioth_on_connect() function will run, which in turn registers your on_setting() function to run.

3. Validate and process the received settings

enum golioth_settings_status on_setting(
        const char *key,
        const struct golioth_settings_value *value)
{
    LOG_DBG("Received setting: key = %s, type = %d", key, value->type);
    if (strcmp(key, "LOOP_DELAY_S") == 0) {
        /* This setting is expected to be numeric, return an error if it's not */
        if (value->type != GOLIOTH_SETTINGS_VALUE_TYPE_INT64) {
            return GOLIOTH_SETTINGS_VALUE_FORMAT_NOT_VALID;
        }

        /* This setting must be in range [1, 100], return an error if it's not */
        if (value->i64 < 1 || value->i64 > 100) {
            return GOLIOTH_SETTINGS_VALUE_OUTSIDE_RANGE;
        }

        /* Setting has passed all checks, so apply it to the loop delay */
        _loop_delay_s = (int32_t)value->i64;
        LOG_INF("Set loop delay to %d seconds", _loop_delay_s);

        return GOLIOTH_SETTINGS_SUCCESS;
    }

    /* If the setting is not recognized, we should return an error */
    return GOLIOTH_SETTINGS_KEY_NOT_RECOGNIZED;
}

The callback function will provide a key (the name of the setting), which is a string, and a golioth_settings_value that includes the value as well as an indication of the value type (bool, float, string, or int64). The callback function should validate that the expected key was received, and test that the received value is the proper type and within your expected bounds before acting upon the data.

The final piece of the device settings puzzle is to return an enum value indicating the device status after receiving a new setting. This is used by the Golioth Cloud to indicate sync status. Our example demonstrates the GOLIOTH_SETTINGS_SUCCESS and GOLIOTH_SETTINGS_KEY_NOT_RECOGNIZED status values, but it’s worth checking out the doxygen reference for golioth_settings_status to see how different invalid key and value messages can be sent to Golioth for display in the web console.

Coordinating with the Golioth Cloud

The device settings key format is tightly specified by Golioth, allowing only “A-Z (upper case), 0-9 and underscore (_) characters”. We recommend that you add a key to the Golioth web console while writing your firmware to ensure the device knows what to expect.
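For example, the following key names (made up for illustration) show what the format does and does not allow:

LOOP_DELAY_S          valid
SENSOR_2_THRESHOLD    valid
loop_delay_s          invalid (lower-case letters not allowed)
LOOP-DELAY-S          invalid (hyphens not allowed)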

Each device page includes a status indicator to show if the settings are in sync with the Golioth cloud.

Additional information is available when a device is out of sync. Here you can see that hovering over the information icon indicates the key is not valid. For this example I created a VERY_IMPORTANT_BOOL_7 key but didn’t add support for it to the firmware. This is the result of our callback function returning GOLIOTH_SETTINGS_KEY_NOT_RECOGNIZED. In cases like this, Golioth’s Over-the-Air firmware update (OTA) service can help you push new firmware to your devices in the field that recognizes the newly created settings variable.

IoT Fleet control on day one

The pain of IoT often comes when you need to scale. Golioth’s mission is to make sure you avoid that pain and are ready to grow starting from day one.

Our device settings service gives you the power to remotely change settings on your entire fleet, on a group of devices, and all the way down to a single device. You are able to verify your devices are in sync, and receive helpful information when they are not. Take Golioth for a test drive today, our dev tier is free for the first 50 devices.

Today we’re announcing a new feature on the Golioth Console and on our Device SDKs that enables Remote Procedure Calls (RPCs) for all users on the platform. From the cloud, you can initiate a function on your constrained device in the field, ensure the device received and executed the command, and receive a response from the device back to the Cloud.

What is a Remote Procedure Call (RPC)?

A Remote Procedure Call allows you to call a function on a remote computing device and optionally receive a result. An easy way to think about it is that you’re calling a function, like you would in any other program…you’re just doing it from another computer. In this case, you’re triggering actions from the Golioth Cloud.

RPC from the Golioth Cloud (Console)

Each device in your project has a page where you can view details about things like LightDB State, LightDB Stream, Settings, and now RPC. Our Console includes an interface to directly send RPCs to the remote device. The URL will look something like:

https://console.golioth.io/devices/<YOUR_DEVICE_ID>/management/rpc

In all of our examples, we are sending an RPC to a single device. However, RPCs can also be triggered from the REST API. As a reminder, any function you see on the Golioth Console is also available through the REST API.

One critical function of RPCs is a confirmation that the remote function has actually run. The device firmware needs to send back a success message that the function has completed, and optionally a returned value. When there is a problem connecting with your device and the RPC does not complete successfully, you will see a screen that looks like this:

An RPC sent to a device that was disconnected from Wi-Fi

Also note that round trip time is measured for all RPCs, including successful ones. Transit times will depend upon your connectivity medium, in addition to the processing time of the function on the remote device.

When an RPC successfully completes, you can click the button with 3 dots to see the returned value. In the example and in the video, we used a method called double that takes an integer input, multiplies it by two, and then returns the value to the Cloud. Below, you can see the result when we sent the “double” method with a parameter of “37”.

RPC from the Device SDK perspective

Any new feature on Golioth has Device SDK support, in addition to new APIs and UIs on our Web Console. Earlier this week, Nick wrote about how we test hardware and firmware at Golioth, especially when a new feature is released across the platform. Now that we support 3 SDKs (Zephyr, NCS, ESP-IDF), the testing surface has increased.

In the video, the focus is on ESP-IDF, which has a simple way to set up and respond to new RPCs. First, we register the new method, so we’ll recognize the command coming from the Golioth Cloud:

The function that we tie to that newly registered RPC needs to return the RPC_OK variable for the Cloud to be alerted that the function has processed properly.

If you have satisfied these requirements in the ESP-IDF SDK and copied the format, you can customize logic to do whatever task you’d like on the remote device. Let’s look at some examples.

Use cases for an RPC

The double() example is a simple showcase of the minimum requirements to create an RPC in the ESP-IDF SDK: we send a command and a value, and the device returns a modified value.
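As a sketch of that logic (the function signature here is an illustrative assumption, not the exact SDK callback type; see the SDK's RPC sample for the real definitions in your version), the device-side handler boils down to:

/* Illustrative sketch only: the signature below is an assumption, not the
 * exact SDK callback type; consult the ESP-IDF SDK's RPC sample. */
static int on_double(int32_t input, int32_t *output)
{
    *output = input * 2;  /* multiply the parameter sent from the Cloud by two */
    return RPC_OK;        /* report success so the Console marks the call complete */
}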

The remote_reset command we created and showcased in the video is more like a critical function you would want to add to your project. When you trigger a remote function like a reset, you want to ensure that the command was properly received, that the function executed, and that there was output data validating the reset happened. On that final point, this includes inferring that the device has restarted from the log messages also being sent back to the Golioth Console. Put all together, it’s a reliable way to tell that the device has been reset.

Other use cases could be as simple as sending arbitrary text to a display. You would still want to know the text has been received and properly sent to the physical display. Or perhaps you have a valve and you want to be able to send an arbitrary value to the valve, but you also want to take a reading on an encoder that measures the distance the valve has moved.

Many of these functions could also be achieved with LightDB State (which the RPC service is built upon), but the context for creating an RPC is more targeted at situations like the examples above.

What will you build?

RPCs are another way for you to communicate with your constrained IoT devices from the Golioth Cloud and to get useful information back from your devices. You can start testing out this feature today.

For more questions or assistance, check out our Forums, our Discord, or drop us a note at [email protected].


Golioth just rolled out a new settings service that lets you control your growing fleet of IoT devices. You can specify settings for your entire fleet, and override those global settings by individual device or for multiple devices that share the same blueprint.

Every IoT project needs some type of settings feature, from adjusting log levels and configuring the delay between sensor readings, to adjusting how frequently a cellular connection is used in order to conserve power. With the new settings service, the work is already done for you. A single settings change on the Golioth web console is applied to all devices listening for changes!

As you grow from dozens of devices to hundreds (and beyond), the Golioth settings service makes sure you can change device settings and confirm that those changes were received.

Demonstrating the settings service

Golioth settings service

The settings service is ready for you to use right now. We have code samples for the Golioth Zephyr SDK and the Golioth ESP-IDF SDK. Let’s take it for a spin using the Zephyr samples.

I’ve compiled and flashed the Golioth Settings sample for an ESP32 device. It observes a LOOP_DELAY_S endpoint and uses that value to decide how long to wait before sending another “Hello” log message.

Project-wide settings

On the Golioth web console, I use the Device Settings option on the left sidebar to create the key/value pair for this setting. This is available to all devices in the project with firmware that is set up to observe the LOOP_DELAY_S settings endpoint.

Golioth device settings dialog

When viewing device logs, we can see the setting is observed as soon as the device connects to Golioth. The result is that the Hello messages are now issued ten seconds apart.

[00:00:19.930,000] <inf> golioth_system: Client connected!
[00:00:20.340,000] <inf> golioth: Payload
                                  a2 67 76 65 72 73 69 6f  6e 1a 62 e9 99 c4 68 73 |.gversio n.b...hs
                                  65 74 74 69 6e 67 73 a1  6c 4c 4f 4f 50 5f 44 45 |ettings. lLOOP_DE
                                  4c 41 59 5f 53 fb 40 24  00 00 00 00 00 00       |LAY_S.@$ ......  
[00:00:20.341,000] <dbg> golioth_hello: on_setting: Received setting: key = LOOP_DELAY_S, type = 2
[00:00:20.341,000] <inf> golioth_hello: Set loop delay to 10 seconds
[00:00:20.390,000] <inf> golioth_hello: Sending hello! 2
[00:00:30.391,000] <inf> golioth_hello: Sending hello! 3
[00:00:40.393,000] <inf> golioth_hello: Sending hello! 4

Settings by device or by blueprint

Of course, you don’t always want the same settings for all devices. Consider debugging a single device: it doesn’t make much sense to turn up the logging level or the frequency of sensor reads for the whole fleet. So with Golioth it’s easy to change the setting on just a single device.

settings change for a single device on the Golioth console

In the device view of the Golioth web console there is a settings tab. Here you can see the key, the value, and the level of the value. I have already changed the device-specific value in this screen so the level is being reported as “Device”.

[00:07:30.466,000] <inf> golioth_hello: Sending hello! 45
[00:07:40.468,000] <inf> golioth_hello: Sending hello! 46
[00:07:43.728,000] <inf> golioth: Payload
                                  a2 67 76 65 72 73 69 6f  6e 1a 62 e9 9c 17 68 73 |.gversio n.b...hs
                                  65 74 74 69 6e 67 73 a1  6c 4c 4f 4f 50 5f 44 45 |ettings. lLOOP_DE
                                  4c 41 59 5f 53 fb 40 00  00 00 00 00 00 00       |LAY_S.@. ......  
[00:07:43.729,000] <dbg> golioth_hello: on_setting: Received setting: key = LOOP_DELAY_S, type = 2
[00:07:43.729,000] <inf> golioth_hello: Set loop delay to 2 seconds
[00:07:50.469,000] <inf> golioth_hello: Sending hello! 47
[00:07:52.471,000] <inf> golioth_hello: Sending hello! 48

When I made the change, the device was immediately notified, and you can see from the timestamps that it began logging at a two-second cadence as expected.

Golioth settings applied at the blueprint level

It is also possible to change settings for a group of devices that share a common blueprint. Here you will find this setting by selecting Blueprint from the left sidebar and choosing your desired blueprint.

Settings are applied based on specificity. The device level is the most specific and takes precedence, followed by the blueprint level, and finally the project level. Blueprints may be created and applied at any time, so if you later realize you need a more specific group, you can change the blueprint for those devices.
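As a hypothetical illustration using the LOOP_DELAY_S key from this example (the values are made up), a device with all three levels defined resolves to the most specific one:

Project level:    LOOP_DELAY_S = 60   (default for every device in the project)
Blueprint level:  LOOP_DELAY_S = 30   (overrides the project value for devices with this blueprint)
Device level:     LOOP_DELAY_S = 2    (overrides both; this is what this one device uses)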

Implementation: The two parts that make up the settings service

Fundamentally, there are two parts that make our device settings system work: the Golioth cloud services running on our servers and your firmware that is running on the devices. The Golioth device SDKs allow you to register a callback function that receives settings values each time a change is made to the settings on the cloud. You choose how the device should react to these settings, like updating a delay value, enabling/disabling features, changing log output levels, really anything you want to do.

Don’t worry if you already have devices in the field. You can add the settings service or make changes to how your device handles those settings, then use the Golioth OTA firmware update system to push out the new behavior.

Take control of your fleet

Scale is the hope for most IoT companies, but it’s also where the pain of IoT happens. You need to know you can control your devices, securely communicate with them, and perform updates as necessary. Golioth has you covered in all of these areas. The new settings service ensures that your ability to change how your fleet is performing doesn’t become outpaced by your growth.

This article is an introduction to the concept of “The Five Clouds of IoT.” It’s a mental model I’ve been developing for some time as a way to explain the role of the cloud, and the range of service offerings in it, to people interested in building fleets of IoT devices. Those services include ones developed in-house and ones purchased from external providers.

Today I will discuss the genesis of these classifications and why there is more than one cloud type in the first place. In future articles, I’ll take a deep dive into each of these clouds to compare and contrast their features and look at example services.

I believe it’s important for us to focus our understanding of these broadly general terms (like cloud and IoT). Doing so lets us define what problems need to be solved in the IoT space and how best to approach them.

Giving advice about IoT Cloud Services

I have been building internet connected (IoT) devices for a long time. When someone comes to me for advice on their startup or business enhancement, they typically ask for guidance on selecting from different service offerings in the marketplace. There have been, and continue to be, many options out there.

The conversation usually starts out with, “should I use foo for software updates? Or is bar better because it includes more stuff?” They’re coming from the perspective of looking for solutions without understanding the problems they need to solve in the first place.

So as a first step I ask questions about their needs. Questions like:

  • How are you connecting to the internet?
  • What kind of device are you building on?
  • What’s the scale of your deployment?
  • What’s your cost and cost structure?
  • What is your business model?

But the most important questions to answer for picking cloud services are:

  • What are you willing to outsource?
  • What should you outsource?
  • What would get you to market faster if you went with a provider?

From there we can have a more grounded conversation on which aspects of the cloud are needed and which combination of providers they might want to evaluate. Very few service offerings will cover every aspect of an IoT deployment and business needs.

The more of these conversations I’ve had, the more I’ve started to realize that there might be different types of IoT cloud solutions. In fact, I think there are five. And while in practice different cloud companies might offer overlapping features, describing them as separate clouds will help us better understand the core strength of each cloud type and why we might want to use it.

Why are there “Five Clouds of IoT”?

Here are The Five Clouds of IoT, as I define them:

  • The device cloud
  • The connectivity cloud
  • The data cloud
  • The application cloud
  • The development cloud

“You’re just making these up!”, you say? Yes and no. These are classifications based on the service offerings of companies throughout the ecosystem, including the company I founded a couple of years ago (Golioth).  But at the end of the day, these are my classifications and how I view the IoT ecosystem…so yeah, I’m making these up.

However, each cloud represents a business case being served. For each cloud type I listed, there is at least one prime example of a company focused on that particular area. That’s how big the market is and how much need exists for these services.

Bringing IoT to the Masses

Each IoT cloud solution helps make deployments possible without a massive internal cloud team at the device maker or IoT service company. Someone who wants to keep their core team small can do so by hiring out parts of the business to different service companies representing one or more of the Five Clouds. They focus on a particular part of the business and lean on service providers to help scale their application.

We’ve already mentioned Golioth as one example of a device cloud. We’re a device cloud because we focus on the management and security of devices and the data they produce. An example of a different cloud would be Soracom, a connectivity cloud that addresses the various aspects of connecting devices to the internet, spanning SIMs, data plans, VPNs, and more.

The key here is that the fundamental focus of a cloud like Golioth differs from that of a cloud like Soracom: device offerings vs. connectivity offerings. You need to understand what problem you want to solve when choosing between clouds; that’s what determines whether you’d want to leverage Golioth, Soracom, or both.

Which Cloud is right for you?

In future articles in this series, I’ll take a deep dive into each individual cloud and look at example companies that exemplify the characteristics of each cloud. As you start to put together your business needs for your company or your next startup idea, you will be able to piece together which clouds you will need, and find some good companies that can help you achieve your goals.