New Pipelines Data Destination: Kafka

A new Pipelines data destination for Kafka is now generally available for Golioth users. Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. There are many cloud-hosted services offering Kafka or Kafka-compatible APIs.

How It Works

Similar to the existing gcp-pubsub and azure-event-hubs destinations, the kafka data destination publishes events to the specified topic. Multiple brokers can be configured, and the PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512 SASL mechanisms are supported for authentication. All data in transit is encrypted with Transport Layer Security (TLS).

Data of any content type can be delivered to the kafka destination. Metadata, such as Golioth device ID and project ID, will be included in the event metadata, and the event timestamp will match that of the data message.

filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: kafka
      version: v1
      parameters:
        brokers:
          - my-kafka-cluster.com:9092
        username: kafka-user
        password: $KAFKA_PASSWORD
        topic: my-topic
        sasl_mechanism: SCRAM-SHA-256

Click here to use this pipeline in your Golioth project!
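To spot-check delivery, you can consume from the topic with a tool such as kcat (formerly kafkacat). This is a minimal sketch assuming the broker, topic, and credentials from the pipeline above:

kcat -C \
  -b my-kafka-cluster.com:9092 \
  -t my-topic \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=SCRAM-SHA-256 \
  -X sasl.username=kafka-user \
  -X sasl.password=<your-password>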

For more details on the kafka data destination, go to the documentation.

What’s Next

Kafka has been one of the most requested data destinations among Golioth users, and we are excited to see all of the new platforms that can be leveraged as part of the integration. Keep an eye on the Golioth blog for more upcoming Pipelines features, and reach out on the forum if you have a use-case that is not currently well-supported!

New Pipelines Transformer: Struct to JSON

A new Struct-to-JSON Pipelines transformer is now generally available to Golioth users. This transformer takes structured binary data, such as that created using packed C structs, and converts it into JSON according to a user-provided schema. To see this transformer in action, check out the example in our documentation or watch our recent Hackday recap stream.

Compact and Low Overhead

Historically, many IoT devices have sent structured binary data to cloud applications, which then needed to unpack that data and serialize it into a format that downstream services can understand. It’s easier and faster to populate a struct than it is to serialize data into a common format in C, and the data is often significantly more compact, which is especially important for devices using constrained transports such as cellular, Bluetooth, or LoRa. As embedded devices have gotten more powerful and efficient, many of these devices have started using serialization formats like CBOR, JSON, or Protocol Buffers, which are easier for cloud applications to work with. These serialization formats have significant advantages over packed C structs in terms of flexibility and interoperability, and for most cases that’s what we recommend our customers use.

But there are still cases where sending raw binary data could make sense. If the structure of your data is well-defined, a widely accepted standard, and/or unlikely to change, the overhead of a flexible serialization format may not yield any practical advantages. If it’s important to squeeze every single byte out of a low bandwidth link, it’s hard to beat a packed C struct. If you’re working with legacy systems where updating them to use a new serialization format involves all kinds of dependencies, it may simply be easier to stick with the status quo. Golioth now supports these use cases through the new Struct-to-JSON transformer.

To use the transformer, you need to describe the structure of the data in your Pipeline YAML. The transformer currently supports standard integer sizes from 8 to 64 bits (signed and unsigned), single and double precision floating point numbers, and fixed- and variable-length strings. Here’s an example of setting up the transformer for a float and a couple strings:

transformer:
  type: struct-to-json
  version: v1
  parameters:
    members:
      - name: temperature
        type: float
      - name: string1
        type: string
        length: 5
      - name: string2_len
        type: u8
      - name: string2
        type: string
        length: string2_len

We can create a packed struct in C that matches the schema:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct my_struct {
    float temperature;
    char string1[5];
    uint8_t string2_len;
    char string2[];   /* flexible array member, sized at runtime by string2_len */
} __attribute__((packed));

/* Allocate room for the fixed members plus the variable-length string */
struct my_struct *s = malloc(sizeof(struct my_struct) + strlen("Golioth!"));

s->temperature = 23.5f;
s->string2_len = strlen("Golioth!");
memcpy(s->string1, "hello", strlen("hello"));
memcpy(s->string2, "Golioth!", strlen("Golioth!"));

Sending that struct through the transformer gives us the following JSON:

{
  "temperature": 23.5,
  "string1": "Hello",
  "string2_len": 8,
  "string2": "Golioth!"
}

Monitoring Heap Usage with mallinfo()

Monitoring heap usage is a great way to gain insight into the performance of your embedded device. Many C standard library implementations provide a mallinfo() (or its newer analogue mallinfo2()) API that returns information about the current state of the heap. As this data is in a well-defined, standard format and unlikely to change, it’s a good candidate for sending directly to Golioth without any additional serialization. We can use the following pipeline to transform the struct returned by a call to mallinfo2() on a 64-bit Linux system to JSON and send it to LightDB Stream:

filter:
  path: "/mallinfo"
  content_type: application/octet-stream
steps:
  - name: step0
    transformer:
      type: struct-to-json
      version: v1
      parameters:
        members:
          - name: arena
            type: u64
          - name: ordblks
            type: u64
          - name: smblks
            type: u64
          - name: hblks
            type: u64
          - name: hblkhd
            type: u64
          - name: usmblks
            type: u64
          - name: fsmblks
            type: u64
          - name: uordblks
            type: u64
          - name: fordblks
            type: u64
          - name: keepcost
            type: u64
    destination:
      type: lightdb-stream
      version: v1

Click here to use this Pipeline.
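On the device side, a minimal sketch of producing this data might look like the following, assuming glibc’s mallinfo2() and the golioth_stream_set_sync API from the Golioth Firmware SDK (error handling omitted; on 64-bit Linux each size_t field is 8 bytes, matching the ten u64 members above):

#include <stdint.h>
#include <malloc.h>
#include <golioth/client.h>
#include <golioth/stream.h>

/* Stream the raw mallinfo2 struct to the /mallinfo path as binary data */
void stream_heap_stats(struct golioth_client *client)
{
    struct mallinfo2 info = mallinfo2();

    golioth_stream_set_sync(client,
                            "mallinfo",
                            GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                            (const uint8_t *) &info,
                            sizeof(info),
                            5 /* timeout in seconds */);
}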

What’s Next

Stay tuned for additional Pipelines transformers and destinations to be released in the coming weeks. If you have a use-case that is not currently well supported by Pipelines, or an idea for a new transformer or destination, please reach out on the forum!

New Pipelines Data Destination: AWS S3

A new Pipelines data destination for AWS S3 is now generally available for Golioth users. It represents the first object storage destination for Golioth Pipelines, and opens up a new set of data streaming use-cases, such as images and audio. It can also be useful for scenarios in which large volumes of data are collected and then batch processed at a later time.

How It Works

The aws-s3 data destination uploads events routed to it as objects in the specified bucket. The name of an object corresponds to its event ID, and objects are organized in directories for each device ID.

/
├─ 664b9e889a9590ccfcf822b3/
│  ├─ 28ebd981-80ae-467f-b700-ba00e7c1e3ee
│  ├─ e47e5b46-d4e3-4bf1-a413-9fc71ec9f6b0
│  ├─ ...
├─ 66632a45658c93af0895a70e/
├─ .../

Data of any content type, including the aforementioned media use-cases and more traditional structured sensor data, can be routed to the aws-s3 data destination. To authenticate, an IAM access key and secret key must be created as secrets, then referenced in the pipeline configuration. It is recommended to limit the permissions of the IAM user to PutObject for the specified bucket.

filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_SECRET_KEY
        region: us-east-1

Click here to use this pipeline in your Golioth project!
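For reference, a minimal IAM policy granting only PutObject on the bucket from the pipeline above might look like the following sketch (bucket name assumed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}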

For more details on the aws-s3 data destination, go to the documentation.

What’s Next

While any existing application built on the Golioth Firmware SDK that leverages data streaming can route data to the aws-s3 data destination, we’ll be introducing examples to demonstrate new use-cases in the coming weeks. Also, stay tuned for more object storage data destinations, and reach out on the forum if you have a use-case that is not currently well-supported by Pipelines!

New Pipelines Data Destination: Memfault

A new Pipelines data destination for Memfault is now generally available for Golioth users. It enables devices to leverage their existing secure connection to Golioth to deliver data containing coredumps, heartbeats, events, and more to Memfault. To see this integration in action, check out the example firmware application, read the documentation, and tune into this week’s Friday Afternoon Stream, where we will be joined by a special guest from Memfault.

Golioth + Memfault

We have long been impressed with the functionality offered by Memfault’s platform, as well as the embedded developer community they have cultivated with the Interrupt blog. In our mission to build a platform that makes connecting constrained devices to the cloud simple, secure, and efficient, we have continuously expanded the set of cloud services that devices can target. This goal has been furthered by the recent launch of Golioth Pipelines.

Memfault’s observability features are highly desired by embedded developers, but leveraging them has typically required establishing a separate HTTP connection from a device to Memfault’s cloud, building custom functionality to relay data from an existing device service to Memfault, or utilizing an intermediate gateway device to provide connectivity. With Golioth, devices already have a secure connection to the cloud for table-stakes device management services and flexible data routing. By adding a Memfault data destination to Golioth Pipelines, that same connection can be used to route a subset of streaming data to Memfault. Leveraging this existing connection saves power and bandwidth on the device, and removes the need to store fleet-wide secrets on deployed devices.

How It Works

The Memfault Firmware SDK provides observability data to an application serialized in the form of chunks. An application can periodically query the packetizer to see if there are new chunks available.

bool data_available = memfault_packetizer_begin(&cfg, &metadata);
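Here, the cfg and metadata arguments might be declared roughly as follows (a sketch using type and field names from the Memfault Firmware SDK’s data packetizer header):

#include "memfault/core/data_packetizer.h"

sPacketizerConfig cfg = {
    /* Allow a single chunk to span multiple packetizer reads (see below) */
    .enable_multi_packet_chunk = true,
};
sPacketizerMetadata metadata = { 0 };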

When data is available, it can be obtained from the packetizer either by obtaining a single chunk via memfault_packetizer_get_chunk, or by setting enable_multi_packet_chunk to true in the configuration and repeatedly invoking memfault_packetizer_get_next until a kMemfaultPacketizerStatus_EndOfChunk status is returned. The latter strategy allows for obtaining all data in a single chunk that would exceed the default size limitations. Golioth leverages this functionality to upload both large and small chunks using CoAP blockwise transfers, a feature that was enabled in our recent v0.14.0 Golioth Firmware SDK release.

golioth_stream_set_blockwise_sync(client,
                                  "mflt",
                                  GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                  read_memfault_chunk,
                                  NULL);

The read_memfault_chunk callback will be called repeatedly to populate blocks for upload until the entire chunk has been obtained from the packetizer.

static enum golioth_status read_memfault_chunk(uint32_t block_idx,
                                               uint8_t *block_buffer,
                                               size_t *block_size,
                                               bool *is_last,
                                               void *arg)
{
    eMemfaultPacketizerStatus mflt_status;
    mflt_status = memfault_packetizer_get_next(block_buffer, block_size);
    if (kMemfaultPacketizerStatus_NoMoreData == mflt_status)
    {
        LOG_WRN("Unexpected end of Memfault data");
        *block_size = 0;
        *is_last = true;
    }
    else if (kMemfaultPacketizerStatus_EndOfChunk == mflt_status)
    {
        /* Last block */
        *is_last = true;
    }
    else if (kMemfaultPacketizerStatus_MoreDataForChunk == mflt_status)
    {
        *is_last = false;
    }

    return GOLIOTH_OK;
}

Golioth views the data as any other stream data, which can be delivered to a path of the user’s choosing. In this case, the data is being streamed to the /mflt path, which can be used as a filter in a pipeline.

filter:
  path: "/mflt"
  content_type: application/octet-stream
steps:
  - name: step0
    destination:
      type: memfault
      version: v1
      parameters:
        project_key: $MEMFAULT_PROJECT_KEY

Click here to use this pipeline in your Golioth project!

Because the Memfault Firmware SDK is producing this data, it does not need to be transformed prior to delivery to Memfault’s cloud. Creating the pipeline shown above, as well as a secret with name MEMFAULT_PROJECT_KEY that contains a project key for the desired Memfault project, will result in all streaming data on the /mflt path being delivered to the Memfault platform.

Livestream demo with Memfault

Dan from Golioth did a livestream with Noah from Memfault showcasing how this integration works; check it out below:

What’s Next

We will be continuing to roll out more Pipelines data destinations and transformers in the coming weeks. If you have a use-case in mind, feel free to reach out to us on the forum!

New Pipelines Transformer: JSON Patch

A new JSON Patch Pipelines transformer is now generally available for Golioth users. It allows for modifying JSON payloads, or payloads that have been transformed into JSON in a previous pipeline step, in order to fit a specific structure or schema. To see this transformer in action, check out the documentation example or last week’s Friday Afternoon Stream.

Shaping Data for Different Destinations

Golioth frequently sits at the intersection of firmware and cloud platforms. One of our goals when launching Pipelines was to enable those two worlds to seamlessly interact, avoiding the need for one side to compromise to suit the other. For example, Pipelines can allow for devices to send data in a compact binary format, such as CBOR, then have it translated to a text-based representation, such as JSON, for delivery to a cloud service.

The json-patch transformer enhances this capability by not only changing the format, but also the structure and content. Furthermore, it allows for the structure required by the end destination to change over time, without requiring firmware updates. In the following example, fields are re-arranged to meet the requirements of the custom webhook data destination. Critically, if this destination changed, or a new one was added in the future, the pipeline could be updated, and the device could continue sending the same payloads.

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: change-format
    transformer:
      type: cbor-to-json
      version: v1
  - name: transform-and-deliver
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "add", "path": "/environment", "value": {}},
            {"op": "add", "path": "/location", "value": {}},
            {"op": "move", "from": "/temp", "path": "/environment/temp"},
            {"op": "move", "from": "/lat", "path": "/location/lat"},
            {"op": "move", "from": "/long", "path": "/location/long"}
          ]
    destination:
      type: webhook
      version: v1
      parameters:
        url: https://my-backend.example.com/data
        headers:
          x-api-key: $BACKEND_API_KEY

Click here to use this pipeline in your project.

Conditional Data Manipulation

In some cases it may be desirable to conditionally patch a JSON object payload based on the contents of the payload, or the metadata associated with the device that sent it. Combining the json-patch transformer with other transformers demonstrates the power of Pipelines. The test operation in a JSON Patch document applies the patch only if the specified criteria are met.

For example, in the following pipeline, the key-value pair demo: true is injected into the payload if the device ID matches 649998262fecb43eb2d39859. The device ID is made available when applying the patch using the inject-metadata transformer. The metadata is subsequently stripped from the payload to ensure extraneous information is not delivered to the final destination.

filter:
  path: "*"
  content_type: application/json
steps:
  - name: get-metadata
    transformer:
      type: inject-metadata
      version: v1
  - name: conditional-patch
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "test", "path": "/device_id", "value": "649998262fecb43eb2d39859"},
            {"op": "add", "path": "/demo", "value": true}
          ]
  - name: remove-metadata
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "remove", "path": "/device_id"},
            {"op": "remove", "path": "/project_id"},
            {"op": "remove", "path": "/timestamp"}
          ]
  - name: send-lightdb
    destination:
      type: lightdb-stream
      version: v1

Click here to use this pipeline in your project.

What’s Next

The json-patch transformer is the first of many new transformers and data destinations we’ll be adding over the next few weeks. If you have a use-case in mind, feel free to reach out to us on the forum!

Introducing Pipelines

Today, we are thrilled to announce the launch of Pipelines, a powerful new set of features that redefines how you manage and route your IoT data on Golioth. Pipelines is the successor to our now-deprecated Output Streams and represents a significant upgrade in functionality, scalability, and user control.

Two years ago, we introduced Output Streams to seamlessly connect IoT data to various cloud services like AWS SQS, Azure Event Hubs, and GCP PubSub. This enabled Golioth users to efficiently stream sensor data for real-time processing, analytics, and storage, integrating easily with their existing cloud infrastructure.

Since then, we’ve gathered extensive feedback, and now we’re excited to introduce a more versatile solution for data routing: Pipelines. Previously, all stream data had to flow into LightDB Stream and conform to its JSON formatting requirements, which was restrictive for those with regulatory and data residency needs. With Pipelines, you can direct your data to LightDB Stream, your own database, or any other destination, in any format you choose.

Pipelines also introduces filtering and transformation features that simplify or even eliminate backend services through low-code configuration. Configurations are stored and managed as simple YAML files, making them easily deployable across multiple projects without manual recreation. This approach allows you to version your data routing configurations alongside your application code.

Pipelines Example Screenshot

Internally, the Pipelines architecture is designed to support our future growth, enabling us to scale our data routing capabilities to billions of devices. This robust foundation allows us to iterate quickly and add new features, ensuring that our users always have access to the most powerful and flexible data management tools available.

All Golioth users can start taking advantage of Pipelines today. Projects that were previously only using LightDB Stream and did not have any Output Streams configured have been automatically migrated to Pipelines. Users in those projects will see two pipelines present, which together replicate the previous behavior of streaming to LightDB Stream. These pipelines can be modified or deleted, and new pipelines can be added to support additional data routing use-cases.

Projects with Output Streams configured will continue using the legacy system, but can be seamlessly migrated to Pipelines with no interruptions to data streaming. To do so, users in those projects must opt in to migration.

New projects created on Golioth will have a minimal default pipeline created that transforms CBOR data to JSON and delivers it to LightDB Stream. This pipeline is compatible with Golioth firmware examples and training, but may be modified or removed by a user if alternative routing behavior is desired.

Pipelines are especially advantageous for users with specific data compliance requirements and those transmitting sensitive information, such as medical device products. Removing the requirement of routing data through LightDB Stream, where it is persisted on the Golioth platform, provides two main benefits:

  1. Regulatory Compliance: Users can route data to their own compliant storage solutions, making it suitable for many sensitive applications that require data not be persisted on other third-party platforms.
  2. Cost Savings: For users who do not need data persistence, routing data directly to other destinations can avoid the costs associated with streaming data to LightDB Stream. This flexibility allows for more efficient and cost-effective data management.

Getting Started with Pipelines

Alongside the launch of Pipelines, we have also released a new version of the Golioth Firmware SDK, v0.13.0, which introduces new functionality to support streaming arbitrary binary data to destinations that support it. Previously, only CBOR and JSON data could be streamed to Golioth, as everything flowed through LightDB Stream, which only accepts JSON data. Now, rather than streaming data to LightDB Stream, data is sent to the Golioth platform and routed to its ultimate destination via the pipelines configured in a project. Devices using previous versions of the Golioth Firmware SDK will continue working as expected.

Pipelines can be configured in the Golioth Console using YAML, which defines filters and steps within your pipeline. Here’s an example:

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: step-0
    destination:
      type: gcp-pubsub
      version: v1
      parameters:
        topic: projects/my-project/topics/my-topic
        service_account: $GCP_SERVICE_ACCOUNT
  - name: step-1
    transformer:
      type: cbor-to-json
      version: v1
  - name: step-2
    transformer:
      type: inject-path
      version: v1
    destination:
      type: lightdb-stream
      version: v1
  - name: step-3
    destination:
      type: influxdb
      version: v1
      parameters:
        url: https://us-east-1-1.aws.cloud2.influxdata.com
        token: $INFLUXDB_TOKEN
        bucket: device_data
        measurement: sensor_data

This pipeline accepts CBOR data and delivers it to GCP PubSub before transforming it to JSON and delivering it to both LightDB Stream (with the path injected) and InfluxDB. This is accomplished via three core components of Pipelines.

Filters

Filters route all or a subset of data to a pipeline. Currently, data may be filtered based on path and content_type. If either is not supplied, data with any value for the attribute will be matched. In this example, CBOR data sent on any path will be matched to the pipeline.

filter:
  path: "*"
  content_type: application/cbor

Transformers

Transformers modify the structure of a data message payload as it passes through a pipeline. A single transformer may be specified per step, but multiple steps can be chained to perform a sequence of transformations. This transformer will convert data from CBOR to JSON, then pass it along to the next step.

- name: step-1
  transformer:
    type: cbor-to-json

Destinations

Destinations define where the transformed data should be sent. Each step in a pipeline can have its own destination, allowing for complex routing configurations. When a step includes a transformer and a destination, the transformed data is only delivered to the destination in that step. This destination sends JSON data to LightDB Stream after nesting the object using the message path. The next step receives the data as it was prior to the path injection.

- name: step-2
  transformer:
    type: inject-path
  destination:
    type: lightdb-stream
    version: v1

The full list of Output Stream destinations is now available as Pipelines destinations (with more to come):

  • Azure Event Hubs
  • AWS SQS
  • Datacake
  • GCP PubSub
  • InfluxDB
  • MongoDB Time Series
  • Ubidots
  • Webhooks

For detailed documentation, visit our Pipelines Documentation.

Updated Pricing Model

We’re keeping the same usage-based pricing as Output Streams while also introducing volume discounts. We want to emphasize transparent pricing optimized for MCUs, so we have revised the pricing structure for Pipelines to accommodate a wider range of data usage patterns. This ensures affordability and predictability in billing for both low and high bandwidth use cases, while customers with large fleets of devices enjoy the discounts that come with scale.

Data routed to External Pipelines Destinations

  Data Volume (per Month)    Price per MB
  0 – 1 GB                   $0.40
  1 – 10 GB                  $0.34
  10 – 50 GB                 $0.28
  50 – 150 GB                $0.22
  150 – 300 GB               $0.16
  300 – 600 GB               $0.10
  600 GB – 1 TB              $0.04
  1 TB+                      $0.01

Data routed to LightDB Stream

  Data Volume (per Month)    Price per MB
  0 – 1 TB+                  $0.001

The first 3MB of usage for Pipelines is free, allowing users who are prototyping to do so without needing to provide a credit card. This includes data routed to LightDB Stream through Pipelines.

For full details, visit Golioth Pricing.


Pipelines marks a significant step forward in Golioth’s IoT data routing capability, offering a new level of flexibility and control. We’re excited to see how you’ll use Pipelines to enhance your IoT projects. For more details and to get started, visit our Pipelines Documentation.

With our new infrastructure, we can rapidly add new destinations and transformations, so please let us know any you might use. Additionally, we’d love to hear about any logic you’re currently performing in backend services that we can help you streamline or even delete. If you have any questions or need assistance, don’t hesitate to reach out. Contact us at [email protected] or post in our community forum. We’re here to help!

Migration from Output Streams to Pipelines

As previously mentioned, projects currently using Output Streams will continue to leverage the legacy infrastructure until users opt in to migration to Pipelines. We encourage users to try out Pipelines in a new project and opt in existing projects when ready. Output Streams does not currently have an end-of-life date, but we will be announcing one soon.

Stay tuned for more updates and happy streaming!

Golioth works with Qualcomm

We’re excited to announce the latest update to the Golioth Firmware SDK, release 0.12.0, which now includes support for Zephyr’s newly introduced Modem Subsystem. This enhancement significantly increases the flexibility of our platform, enabling support for a broader array of cellular modem technologies, starting with Qualcomm. 0.12.0 adds support for the Quectel BG95, joining the Nordic Semiconductor nRF9160 (our go-to modem around here!) as a first-class cellular modem. We also introduced additional ways to securely store credentials.

Zephyr’s New Modem Subsystem

Introduced in Zephyr 3.5.0, the Modem Subsystem is a unified interface for modem drivers. This addition simplifies the integration of cellular modems (and others) into Zephyr-based projects, greatly expanding the range of devices and technologies that developers can utilize effectively. For a detailed overview of the modem subsystem, check out this summary from Zephyr’s Developer Advocate, Benjamin Cabé.

Integration in Golioth Firmware SDK

With the integration of this modem subsystem in the Golioth Firmware SDK, Golioth users can now more flexibly incorporate a wider array of modem technologies into their IoT projects. There are a lot of great modems and module vendors in the market and providing choice is at the heart of what we do at Golioth.

First Supported Modem and Board

The first modem we are supporting with this updated SDK is the BG95 from Quectel, based on Qualcomm technology. The BG95 is paired with the nRF52840 on the RAK5010 development board from RAKwireless. This combination highlights the flexibility of Qualcomm’s technology integrated into Quectel’s hardware, offering developers robust tools for deploying cellular IoT solutions efficiently.

Why Qualcomm?

We chose to support Qualcomm modems because our community asked for it! Developers have different design needs, want maximum flexibility, and need options that accommodate diverse business requirements. Qualcomm chipsets offer the latest in connectivity protocols and radio technology at competitive prices, and Qualcomm provides silicon and support for a wide ecosystem of module vendors, such as Quectel, u-blox, Telit, and more. Golioth customers have used Qualcomm modems in their products in the past, but needed to do much of the integration engineering themselves. Zephyr’s Modem Subsystem makes it easier to develop applications that integrate Qualcomm modems, and connecting this wider range of modems to Golioth is now more hands-off for the user, reducing complexity so developers can focus more on innovation and less on technical hurdles.

Also in This Release

In addition to new modem support, this release introduces another feature: DTLS socket offloading for Zephyr. This includes an example for the long-supported Nordic Semiconductor nRF9160.

DTLS socket offloading leverages a modem’s secure credential store, which allows for the use of secure, offloaded sockets. This means no private key material is held in RAM, which can be a significant advantage as it helps reduce RAM usage, code size, CPU load, and power consumption. Actual benefits will vary depending on the application and how the code is utilized.

This new feature enhances device security and efficiency, contributing further to the versatility and robustness of the Golioth Firmware SDK. Mike previously wrote about how to store credentials on the nRF9160 using TLS tags.

Getting Started

To get started with the latest SDK:

  1. Update to the newest release, 0.12.0, from the Golioth Firmware SDK repository.
  2. Explore the documentation and examples provided for integrating the RAK5010 board or try DTLS socket offloading with the nRF9160.
  3. Visit our community forums or support channels if you need help or want to discuss your projects.

Focused on Your Success

At Golioth, we’re committed to providing you with the tools and flexibility needed to succeed in the fast-evolving world of IoT. By adding support for new modems and enhancing the ways you can manage credentials, we aim to streamline your development process and empower your innovative projects. Whether you’re integrating the latest modem technology or implementing secure credential management, Golioth is here to support every step of your journey towards building smarter, more connected solutions.

Managing Multiple Board Revisions in Zephyr

If you ask any seasoned hardware engineer, they will tell you there are only two types of people:

  1. Those who have accidentally swapped TX and RX
  2. And those who will

There’s even a fail badge of honor!

Despite our best efforts, mistakes can creep into hardware designs. Design specifications can change over time. Taking a hardware design from concept to production is a journey that nearly always involves iterating through multiple revisions of the PCB assembly.

In this post, we’ll walk through some of the tools that Zephyr & Golioth provide for managing multiple board revisions.

Let’s dive in and look at how you can support Rev A, Rev B, all the way to Rev X in your Zephyr firmware without losing your sanity!

Aludel Elixir Board

Here at Golioth, Chris Gammell has been designing a new rapid prototyping board called the “Aludel Elixir”. We use this board internally for developing and testing our growing collection of reference designs, and we’re using it for live demos at Embedded World 2024.

Aludel Elixir Rev B

The image above shows the 2nd hardware revision of the board (Rev B), which fixes some of the hardware issues we found when testing the 1st revision (Rev A).

Supporting the Rev B hardware requires changes to the Zephyr firmware that runs on the internal MCU in the nRF9160 SIP. However, since we have Rev A and Rev B hardware “in the wild”, we want to support building firmware for all current and future board revisions in Golioth projects—like our Reference Design Template.

Multiple board revisions in Zephyr

Fortunately, Zephyr has support for multiple board revisions as a standard part of the build system.

Note: Shortly after Zephyr 3.6.0 was released, a new hardware model was introduced to Zephyr. This new model overhauls the way both SoCs and boards are named and defined and is not backwards compatible. This blog post assumes the old hardware model used in Zephyr 3.6.0 or earlier.

Building for multiple board revisions in Zephyr

Before jumping into the implementation details, it’s helpful to see how an end-user would build for a specific board revision.

We can build a Golioth Zephyr app for a specific revision of the Aludel Elixir board by simply appending a @<revision> specifier to the board name:

# Build firmware for Rev A
west build -b aludel_elixir_ns@A

# Build firmware for Rev B
west build -b aludel_elixir_ns@B

# Build firmware for the "default" revision (which is currently Rev B)
west build -b aludel_elixir_ns

Adding multiple board revisions in Zephyr

The Zephyr Board Porting Guide has a detailed section on how to add support for multiple board revisions.

When we build for a board with a revision specifier—e.g. aludel_elixir_ns@B—the build system looks for a revision.cmake file in the board directory:

boards/arm/aludel_elixir
├── ...
└── revision.cmake

Here’s the revision.cmake file for the aludel_elixir board:

board_check_revision(
  FORMAT LETTER
  EXACT
  DEFAULT_REVISION B
  VALID_REVISIONS A B
)

  • FORMAT LETTER tells the build system that the revision format is “Letter revision matching” (A, B, C, etc)
  • EXACT requires that the revision is an exact match
  • DEFAULT_REVISION sets the revision to be used when no revision is specified (e.g. west build -b aludel_elixir_ns)
  • VALID_REVISIONS defines the set of valid revisions that can be specified

Kconfig settings for specific revisions

It’s possible to specify Kconfig symbols that are specific to a particular board revision by adding optional <board>_<revision>.conf files in the board directory. These will be merged into the board’s default Kconfig configuration.

For example, the Elixir Rev A board was accidentally built with the NB-IoT-only variant of the nRF9160, which requires some Kconfig settings that only apply to the Rev A board revision. A sketch of such a file is shown after the listing below.

boards/arm/aludel_elixir 
├── ...
├── aludel_elixir_A.conf
└── aludel_elixir_ns_A.conf
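
The contents are ordinary Kconfig settings. As an illustrative sketch (the exact symbols depend on the modem library in use; CONFIG_LTE_NETWORK_MODE_NBIOT comes from the nRF Connect SDK’s LTE link control library), the Rev A file might force NB-IoT-only operation:

# aludel_elixir_ns_A.conf (illustrative)
CONFIG_LTE_NETWORK_MODE_NBIOT=y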

Devicetree overlays for specific revisions

It’s also possible to describe hardware changes in devicetree that are specific to a particular board revision by adding optional <board>_<revision>.overlay files in the board directory. These will be added to the common <board>.dts devicetree file.

For example, the Elixir Rev A board connects the spi2 peripheral to the mikroBUS socket headers, while the Rev B board connects the spi3 peripheral instead. We added devicetree overlay files for each board revision that specify the correct SPI peripheral to use:

boards/arm/aludel_elixir
├── ...
├── aludel_elixir_A.overlay
├── aludel_elixir_ns_A.overlay
├── aludel_elixir_B.overlay
└── aludel_elixir_ns_B.overlay
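
Each overlay only needs to describe what differs in that revision. A minimal sketch of the Rev B overlay (the actual peripheral nodes and pin assignments are elided) might disable spi2 and enable spi3 for the mikroBUS socket:

/* aludel_elixir_B.overlay (illustrative sketch) */
&spi2 {
    status = "disabled";
};

&spi3 {
    /* mikroBUS socket peripherals would be declared here */
    status = "okay";
};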

Distributing board definitions as a Zephyr Module

The golioth-zephyr-boards repo stores the Zephyr board definitions for the Aludel Elixir board revisions, allowing us to use them across multiple Zephyr projects as a Zephyr Module.

For example, here’s how it’s included in our Reference Design Template app via the west.yml manifest:

- name: golioth-zephyr-boards
  path: deps/modules/lib/golioth-boards
  revision: v1.1.1
  url: https://github.com/golioth/golioth-zephyr-boards

Note that it’s also possible to add application-specific Kconfig and devicetree overlay files for each board revision:

<app>/boards/
├── ...
├── aludel_elixir_ns_A.conf
├── aludel_elixir_ns_A.overlay
├── aludel_elixir_ns_B.conf
└── aludel_elixir_ns_B.overlay

If you leave off the @<revision> specifier, these will be applied to all revisions of the board:

<app>/boards/
├── ...
├── aludel_elixir_ns.conf
└── aludel_elixir_ns.overlay

Golioth Blueprints

In a real-world IoT deployment, it’s likely that a fleet of devices will have multiple hardware revisions deployed simultaneously. Golioth provides support for managing different hardware revisions through a concept called “Blueprints”.

Blueprints are a flexible way to segment devices based on variations in hardware characteristics, such as the board revision. For example, when creating a new over-the-air (OTA) firmware release in the Golioth console, a blueprint can be specified to limit the scope of the release to only Rev A hardware devices.

If you’d like a step-by-step introduction to deploying a fleet of IoT devices with Zephyr and Golioth, we’d love to have you join us for a free Zephyr developer training. Our next session is just two weeks away. Sign up now!

New Pricing with Golioth. Device Management should be free.

Today, we’re excited to roll out a major update to Golioth’s pricing. We are transitioning from per-device per-month fees to a usage-based model. This change allows individual developers to use Golioth’s device management features for free, with no limit on the number of devices connected, and significantly broadens accessibility for teams using our platform across a diverse array of IoT use cases. By doing so, we reinforce our belief that critical device management features, like Over-the-Air (OTA) updates, are a basic right for IoT developers and a necessity for creating secure products.

Introducing Our New Pricing Structure

Golioth’s new pricing model is designed to meet the varied requirements of our users, from individual developers to large enterprises:

  • Individual Developer Plan: Free
  • Teams Plan: $299/mo (user management and support)
  • Enterprise Plan: starting at $2,799/mo (features like SSO and private deployments)

This new model provides 1GB of OTA downloads and 200MB of log messages per month for free, sufficient for managing a moderately sized fleet in production, with no fees for device connections across all plans. For those who need more, additional usage is $0.35/MB for OTA downloads and $0.20/MB for logging. We continue to offer data ingestion and routing, with data streamed through Golioth at $1/GB to LightDB Stream and $0.40/MB for data streamed out to third-party services. Our new pricing model is designed to support developers at every stage of their journey, ensuring that Golioth remains the most accessible and developer-friendly IoT platform on the market.

Aligning Pricing with Device Behavior

Our updated pricing model is designed to accommodate the diverse behavior profiles of IoT devices. This is particularly advantageous for “sleepy devices” — those that wake up periodically to send sensor readings before going back to sleep. These types of devices, such as soil sensors, often operate with minimal data transmission, making them highly cost-sensitive under per-device fee structures. By eliminating these fees and focusing on actual usage, Golioth enables projects of every scale and complexity to leverage our advanced features without facing prohibitive costs.

Moreover, for projects with high bandwidth requirements, we offer volume discounts, ensuring that even large-scale deployments can be managed cost-effectively. This approach not only supports a broader range of IoT applications but also ensures that pricing scales sensibly with your project’s needs.

Why This Change Matters

Making device management free for individual developers emphasizes our commitment to the broader IoT community. OTA updates are crucial for the security, efficiency, and longevity of IoT products. We believe that every developer should have the capability to manage and update their devices remotely, without financial barriers.

This pricing update forces Golioth and our competitors to focus on innovation and creating unique value. We are moving away from the per-device fee model, recognizing it as outdated for supporting the wide range of IoT device use cases and behavior profiles. Instead, our focus is on creating a flexible developer experience, ensuring Golioth is compatible across numerous MCU platforms and can connect to any cloud destination, serving as the universal connector for IoT. This approach advances the IoT developer community and sets a new standard for what enterprises should expect from platform providers.

Looking Forward

We’re excited to see how these changes will enable developers to bring their visions to life with greater ease and less overhead. Our team is committed to continuously enhancing our platform and pricing to meet the evolving needs of the IoT industry.

For a detailed overview of our new pricing and to find the plan that’s right for you, visit our pricing page at Golioth Pricing.

We believe these changes will not only benefit our users but also encourage a more innovative and secure IoT ecosystem. Thank you for your continued support, and we look forward to seeing what you create with Golioth.

As always, never hesitate to reach out if you have any questions, comments or feedback: [email protected]

View the official press release here. 

Understanding Your Golioth Usage

Today we are launching a new feature in the Golioth console: usage visualization. All Golioth users can now access real-time usage data, broken down by project and service, on the settings page for organizations in which they are an admin.

Screenshot of Golioth console with usage metrics shown for an organization with three projects.

This feature follows a series of console updates, including the launch of the Golioth Simulator and a restructuring of project and organization navigation. With each new release, Golioth becomes better suited to managing large fleets with greater visibility, while also enabling more seamless collaboration between users in an organization.

Usage data is currently displayed for the following services:

By default, the current month’s usage is displayed, but the selector in the top right can be used to change the range. Additional usage metrics, as well as more advanced filtering and visualization functionality, will be introduced over the next few weeks.

Sign in to the Golioth console to view your usage data, and reach out on the forum with any feature requests or feedback!