We’re on a roll with our showcase of examples that came out of Golioth’s AI Summer. Today I’m discussing an example that records audio on an IoT device and uploads the audio to the cloud.

Why is this useful? This example is a great approach to sending sensor data from your edge devices back to the cloud to use in a machine learning (ML) training set, or just a great way to collect data samples from your network. Either way, we’ve designed Golioth to efficiently handle data transfer between constrained devices and the cloud.

The full example code is open source and ready to use.

Overview

The bones of this example are the same as the image upload example we showcased earlier. The main components include:

  1. An audio sample (or any other chunk of data you’d like to send).
  2. A callback function to fill a buffer with blocks from your data source.
  3. A function call to kick off the data upload.

That’s it for the device side of things. Sending large chunks of data is a snap for your firmware efforts.

The cloud side is very simple too, using Golioth Pipelines to route the data as desired. Today we’ll send the audio files to an Amazon S3 bucket.

1. An Audio Sample

The details of audio recording are not important for this example. WAV, MP3, FLAC…it’s all just 1’s and 0’s at the end of the day! The audio is stored in a buffer and all we need to know is the address of that buffer and its length.

If you really want to know more, this code is built to run on one of two M5Stack boards: the Core2 or the CoreS3. Both have a built-in I2S microphone and an SD card slot that is used to store the recording. SD card storage is a great choice for prototyping because you can easily pop out the card and access the file on your computer to confirm that what you uploaded is identical. Full details are found in the audio.c file.

2. Callback function

To use block upload with Golioth, you need to supply a callback function to fill the data buffer. The Golioth Firmware SDK will call this function when preparing to send each block.

uint8_t audio_data[MAX_BUF_SIZE];
size_t audio_data_len;

/* Run some function to record data to buffer and set the length variable */

static enum golioth_status block_upload_read_chunk(uint32_t block_idx,
                                                   uint8_t *block_buffer,
                                                   size_t *block_size,
                                                   bool *is_last,
                                                   void *arg)
{
    size_t bu_offset = block_idx * bu_max_block_size;
    size_t bu_size = audio_data_len - bu_offset;
    if (bu_size <= *block_size)
    {
        /* We run out of data to send after this block; mark as last block */
        *block_size = bu_size;
        *is_last = true;
    }
    /* Copy data to the block buffer */
    memcpy(block_buffer, audio_data + bu_offset, *block_size);
    return GOLIOTH_OK;
}

The above code is a very basic version of a callback. It assumes you have a global buffer audio_data[] where recorded audio is stored, and a variable audio_data_len that tracks how much data is stored there. Each time the callback runs, it reads from a different part of the source buffer, calculating the offset from the block index and the maximum block size (bu_max_block_size). The callback signals the final block by setting *is_last to true and updating *block_size to the actual number of bytes in the final block.

You can see the full callback function in the example app; it includes full error checking and uses the standard library's file handling APIs, with a pointer to the file on the SD card passed into the callback as the user argument.
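To illustrate the idea, here is a minimal sketch of a file-backed read-chunk callback using standard C file I/O. The status enum and the BLOCK_SIZE constant are simplified stand-ins for the SDK's definitions, and the real example app's error handling is more thorough.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative status codes standing in for the SDK's enum golioth_status */
enum golioth_status { GOLIOTH_OK = 0, GOLIOTH_ERR_FAIL = 1 };

enum { BLOCK_SIZE = 1024 };

/* File-backed read-chunk callback: seek to the block offset and read up to
 * *block_size bytes from the FILE * passed as the user argument. */
static enum golioth_status file_read_chunk(uint32_t block_idx,
                                           uint8_t *block_buffer,
                                           size_t *block_size,
                                           bool *is_last,
                                           void *arg)
{
    FILE *fp = (FILE *) arg;

    if (fseek(fp, (long) block_idx * BLOCK_SIZE, SEEK_SET) != 0)
    {
        return GOLIOTH_ERR_FAIL;
    }

    size_t bytes_read = fread(block_buffer, 1, *block_size, fp);
    if (bytes_read < *block_size)
    {
        if (ferror(fp))
        {
            return GOLIOTH_ERR_FAIL;
        }
        /* Short read means end of file: mark this as the last block */
        *block_size = bytes_read;
        *is_last = true;
    }
    return GOLIOTH_OK;
}
```

Because the callback derives its position from block_idx alone, it is safe for the SDK to retry a block without corrupting the transfer.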

3. API call to begin upload

Now we start the upload by using the Stream API call, part of the Golioth Firmware SDK. Just provide the important details for your data source and the path to use when uploading.

int err = golioth_stream_set_blockwise_sync(client,
                                            "file_upload",
                                            GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                            block_upload_read_chunk,
                                            NULL);

This API call includes four required parameters shown above:

  • client is a pointer to the Golioth client that holds info like credentials and server address
  • "file_upload" is the path at which the file should be uploaded (change this at will)
  • GOLIOTH_CONTENT_TYPE_OCTET_STREAM is the data type (binary in this case)
  • block_upload_read_chunk is the callback we wrote in the previous step

The final parameter is a user argument. In the audio sample app we use this to pass a pointer to read data from the file on the SD card.

Routing your data

The example includes a Golioth pipeline for routing your data.

filter:
  path: "/file_upload*"
  content_type: application/octet-stream
steps:
  - name: step0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: golioth-pipelines-test
        access_key: $AWS_S3_ACCESS_KEY
        access_secret: $AWS_S3_ACCESS_SECRET
        region: us-east-1

You can see the path in the pipeline matches the path we used in the API call of the previous step. This instructs Golioth to listen for binary data (octet-stream) on that path, and when found, route it to an Amazon S3 bucket. Once enabled, your audio file will automatically appear in your S3 bucket!

IoT data transfer shouldn’t be difficult

That’s worth saying twice: IoT data transfer shouldn’t be difficult. In fact, nothing in IoT should be difficult. And that’s why Golioth is here. It’s our mission to connect your fleet to the cloud, and make accessing, controlling, updating, and maintaining your fleet a great experience from day one. Take Golioth for a test drive now!

One of my favorite engineering processes at Golioth is our architecture design review. When building new systems, making consequential changes to existing systems, or selecting a third-party vendor, an individual on the engineering team authors an architecture design document using a predefined template. This process has been in place long enough (more than 18 months) that we have started to observe long-term benefits.

Some of the benefits are fairly obvious: more efficient implementation of large-scale functionality, better communication across engineering domains, smoother context sharing during new engineer on-boarding. Others are more subtle. One aspect of codifying a decision making process that I personally find extremely valuable is the ability to check the pulse of an organization over time. How thorough are design documents? How robust is the feedback provided? Are individuals providing push back regardless of any organizational hierarchy? Are discussions reaching resolution in an appropriate manner? Many of these questions center on how disagreements are resolved.

Disagreement is one of my favorite aspects of the engineering process. When done correctly, it drives a team towards an optimal solution, builds a stronger sense of trust between individuals, and results in more comprehensive exploration and documentation of a problem space. Through healthy disagreement, the Golioth engineering team typically arrives at one of three possible outcomes.

  1. Consensus is reached around one of the presented solutions.
  2. Consensus is reached around a new solution that incorporates aspects of each of the presented solutions.
  3. It is determined that more information is needed, or the decision does not have to be made to move forward.

However, reaching one of these outcomes does not necessarily mean that the process was effective. One failure mode is reaching perceived consensus around one solution, when in reality one individual doesn’t feel comfortable pushing back against the other. Another is abdicating responsibility by deferring a decision that actually does need to be made now. In the moment, it is not always clear whether the process is effective, but the beauty of codifying the interaction is that it can be evaluated in the future with the benefit of hindsight.

This week I opened up the review window for a design document I recently authored, and within 24 hours I had received high quality feedback from multiple members of the engineering organization. Furthermore, there were some key points of disagreement included in the feedback, which we resolved efficiently, with outcomes ranging from reaching consensus on a counter proposal to deferring a portion of the system to a future design document.

Compared to the early days of instituting the review process, more recent architecture design documents have involved more disagreement, but also more efficient resolution. While excess conflict can sow seeds of division, a mature engineering organization will turn differences of opinion into progress. Tackling any complex problem will involve some disagreement — for a strong team it will be the right amount.

It has been over three months since we announced Golioth Pipelines, and we have already seen many users reduce costs and unlock new use cases by migrating to Pipelines. As part of the announcement, we provided options for users who were currently leveraging Output Streams, which offered a much more constrained version of the same functionality, to seamlessly migrate their existing projects to Pipelines. Today, we are announcing December 12th, 2024 as the official end of life date for Output Streams.

For users operating projects that either started out with Pipelines, or have been transitioned to Pipelines as part of the opt-in migration process, there will be no change required. For the few projects that are still leveraging Output Streams, we encourage users to start the migration process now by submitting project information here, or to reach out to us at [email protected] with any questions or concerns. On December 12th, all projects that have not already been migrated to Pipelines will be automatically migrated with Pipelines configured to replicate the previous behavior of Output Streams. Output Stream configuration will no longer be accessible in the Golioth console.

The rapid adoption of Pipelines by the Golioth community has been exciting to witness, and we are looking forward to the ongoing growth of the platform via new Transformers and Destinations. If you are currently using Pipelines, or would like to see new functionality added, contact us on the forum!

Golioth is excited to announce Golioth Solutions. Two new capabilities will help businesses deploy IoT devices in a short period of time:

  • Golioth Solutions Services
  • Golioth Solutions Marketplace

A New Service Offering

Golioth Solutions Services solves many of the difficult problems at the beginning of developing an IoT product, namely pushing a fully formed idea out into the world. Golioth Solutions Engineers will help to identify how companies can best deploy the Golioth platform to solve their business needs and deliver a product that captures real-world data and provides consistent insight and control of your devices.

Our Solutions Engineers will work with you to formulate what is required for your particular business use case and what kind of solution will get you there fastest. This includes hardware, firmware, cloud capabilities, fleet management, and application development. Our Solutions Engineers fill in the gaps where your team needs help. Perhaps you are a cloud software company looking to deploy a hardware device? Solutions Engineers will utilize existing hardware and firmware Solutions to send data up to Golioth and out to the platform of your choosing using Pipelines. What if you're on the other end of the spectrum, a hardware company looking to connect custom hardware to the cloud? Our Solutions Engineers can set you up with known working hardware and firmware that you can use as a reference while you develop your own custom hardware, and consult on how data should reach the cloud and route to outside services.

A Marketplace of Solutions

We are also launching Golioth Solutions Marketplace where customers can view existing solutions. These form the basis of “starting points” for many custom projects that Golioth Solutions Services will deliver.

In order to deliver IoT solutions in a short amount of time, we want our Solutions Engineers to have an arsenal of ready-made designs that can be customized to customers' needs. This will include our internal Reference Designs as well as designs from Partners. We will continue to add to these designs and highlight them here on the blog when we have a new one available.

Designs from our Partners

The Golioth Solutions Marketplace includes production-grade hardware produced by our Design Partners. Each Solution also includes custom firmware and cloud capabilities targeted at a particular solution space and vertical application. Each of these designs is built on the Golioth platform and is customizable to specific business needs.

Many of these designs can also be repurposed towards a different vertical, based on the capability contained within the Solution. Our Solutions Engineers know how each of these technologies might fit a new, custom application. Since these solutions are developed by our Design Partners, the same creators of the hardware can also enhance and customize the product to your needs. As customers decide to scale, our Design Partners are well prepared to guide customers through production and productization.

Are you a product development shop interested in having your hardware listed in our Solutions Marketplace? Submit your designs now to start the process.

Introducing the Glassboard Shunt

One of our first Solutions comes from our design partner Glassboard, based out of Indianapolis in the US. The IoT Power Monitoring for Micromobility Solution includes a cellular-connected current shunt. This design is intended to measure battery currents on small vehicles. It works in both directions, measuring current sourced to the motors during motion as well as charging current flowing back into the battery. We recorded a video about this design and how it fits in with Golioth Solutions:

While this is initially targeted at micromobility applications, it’s easy to imagine how this device and starter firmware could be retargeted at a different vertical. One example could be monitoring a DC power source that powers LED lighting for a construction application.

How Golioth Solutions Engineers use designs

Solutions Engineers take input from customers and determine if any of our existing designs (like the Glassboard current shunt) are a good fit for the application at hand. Perhaps there is a new DC current measurement that could benefit from the existing hardware, but it needs to be tweaked to better fit the application space. Our Solutions Engineers first modify and test the firmware to fit the device needs, and then work with the customer to determine where the resulting data will go, and if there are additional needs around visualization or control of the fleet of devices. If the hardware requires some kind of modification, our Solutions Engineers will connect customers with the original designers to discuss the logistics of creating a custom version of the existing hardware.

Golioth Reference Designs

Another source of Golioth Solutions includes our range of Reference Designs, which can be customized and delivered by Golioth Solutions Services. We have been working on and refining Reference Designs for a few years now. These are end-to-end demonstrations of Golioth, built on top of custom hardware.

What about licensing? Well, all Golioth Reference Design Hardware is open source with a very permissive license. Customers can take the underlying hardware to one of our Design Partners and have them modify and extend the capabilities and refine it for production. You will be starting from a solution that is continually tested and can be easily extended using off-the-shelf sensor breakouts. Our Solutions Engineers can get you started extra quickly using the Follow Along Hardware version of these designs, which includes firmware that targets off-the-shelf development boards, in addition to the sensors. You can get started quickly, with no custom hardware required.

New Services + New Marketplace = Quicker time to market

Golioth Solutions and our associated marketplace exist to help users who need an IoT solution for their business but don't necessarily have the time or capabilities to build it themselves. We can bootstrap your solution from a sketch on a page to a working IoT device backed by a powerful IoT platform that handles all your data. Once the idea is proven out, we have a well-defined handoff to our Design Partners, who can assist in building that first device into a fleet of production-ready hardware that you can deploy to the field. You will be prototyping and testing using an IoT platform that is built for scale.

If you’d like to start building an IoT Solution that will serve your business, please get in touch! You can email [email protected] to find out more or fill out this form to directly schedule a meeting.

Recently, we teamed up with Qualcomm to showcase Golioth’s AI features. This demo stands out because we used the Qualcomm RB3 Gen 2 Development Kit running Linux. Staying true to our demo culture, we wanted to share how we pulled it off, what we learned about using Golioth with Linux, and where we might take this in the future.

Let’s dive in!

Wait, Golioth supports Linux??? 🐧

If you’ve been following us for a while, you probably know about our support for microcontrollers—from Zephyr tips to our cross-platform Firmware SDK. But you won’t find much mention of connecting Linux devices in our docs or blogs because we don’t officially support managing Linux-based devices. I say officially because we’ve actually had a Linux port for quite a while. It started as part of our CI testing infrastructure, helping us speed up tests on the Firmware SDK so that we can test more frequently than is possible with physical devices.

Over the years we’ve received many requests to support Linux-based applications with a few different configurations. Sometimes a company was developing a product that had both an MCU and a Linux gateway (like a Thread Border Router) and wanted to manage the entire fleet with Golioth. Other customers were building a complex system that had both an MPU and an MCU in the same device. And of course, many more are building Linux edge-type devices.

Since the scope of the Linux port was initially narrow, it was never designed to be a full “SDK”. Whenever a customer would ask if they could use the port for their embedded Linux device, we usually steered them away and pointed them to folks like Foundries.io or Balena.

Working with Foundries.io & AI Hub

We recently caught up with our friends at Foundries.io, who joined Qualcomm a few months ago, to see what they’ve been up to (we’ve collaborated in the past). They were excited about some of Golioth’s new model management capabilities and connected us with the team from AI Hub (formerly Tetra AI). We discussed doing a joint demo together, and Qualcomm wanted to highlight this new Linux-based device. Our mission is to be the universal connector for IoT, so we were up for the challenge. After some brainstorming we got our hands on their latest devkit and got to work.

Getting the Firmware SDK up and running on Linux

Building a Linux application that uses the Golioth Firmware SDK is as straightforward as building any other C program with CMake, and it requires minimal dependencies (see an example here). However, getting that application onto an embedded Linux device and managing its lifecycle requires additional infrastructure.

RB3 Gen2 Device on Foundries

Foundries.io is a perfect complement in this scenario, with existing support for the RB3 Gen2 devkit, and a simple container-based GitOps workflow for managing applications running on their Linux microPlatform (LmP) distro. Flashing the device with the LmP image, building an OCI image with a basic Golioth “Hello, World” application, and remotely deploying the application to the device only took minutes.

Leveraging the Hardware

The value of any embedded device is tied to how well the software is able to leverage the hardware, and the RB3 Gen 2 is an embarrassment of riches compared to the microcontrollers we usually interface with at Golioth. Based on the QCS6490 SoC, it includes a Kryo 670 CPU with 8 application processing cores, an Adreno 643L GPU, and a Hexagon DSP for accelerating AI workloads. Additionally, the RB3 Gen 2 Development Kit boasts a low and high resolution camera, as well as audio peripherals and an array of sensors.

RB3 Gen 2 Object Detection

AI Hub offers pre-tuned AI models optimized for Qualcomm hardware like the QCS6490, many of which leverage its robust image processing capabilities. Furthermore, Qualcomm provides the Intelligent Multimedia SDK (IM SDK), which includes a suite of GStreamer plugins that make it straightforward to access both peripherals and acceleration hardware. Combining these together with Golioth means that we can add connectivity to the equation, making it possible to stream data to the cloud, control processing remotely, and manage artifacts used in the processing pipeline.

Streaming Inference Results

We selected the YoloNAS model from AI Hub to perform object detection on the RB3 Gen2. The application constructed a GStreamer pipeline that pulled video from the high resolution camera, passed it to the model for inference, then composed the object detection results with the original video data in order to render a bounding box around objects before passing the final video to the display.

RB3 Gen 2 Inference Stream

We also injected Golioth into the GStreamer pipeline, such that messages could be streamed to the cloud to notify when certain classes of objects were detected. As with all data streamed to Golioth, these messages could subsequently be routed to any other destination via Golioth Pipelines.

Remotely Controlling Image Processing

Outside of the GStreamer pipelines, we set up Golioth Remote Procedure Call (RPC) handlers that allowed image processing and inference to be paused and resumed remotely. This functionality could be further extended to stream the current frame to an object storage destination via Golioth when processing is paused, all without requiring any physical intervention with the device.

RB3 Gen 2 RPC

Managing and Deploying AI Models

While Foundries.io handles application updates, being able to manage other types of artifacts used by applications, such as the AI models and labels, enables efficient updates without needing to rebuild and deploy. Integrating Golioth OTA into the application meant that the application was notified immediately when a new model was available, and was able to download and integrate it into the processing pipeline quickly.

RB3 Gen 2 Model Update

Lessons and future explorations

We set out to create a Golioth application that would be useful on Linux, and it was a success. We’ve proven to ourselves that Golioth’s services are useful for other IoT device types, especially embedded Linux, and that the Firmware SDK can work effectively in this context. Taking the code we developed, we’ve already identified how we might evolve it into more of an agent or daemon, and how we might better integrate with update mechanisms, especially on Yocto- and Buildroot-based distributions.

We’ll continue to explore the Linux for IoT space and see if and when it makes sense for us to do more here. Of course, you can count on us to continue to do more and more with MCUs. But we’re curious to hear from the community what they think – should Golioth invest in supporting Linux officially? What features or use cases would you like to see? Please share your thoughts on our forums!

Using the Hugging Face Inference API for Device Audio Analysis

Golioth Pipelines works with Hugging Face, as shown in our recent AI launch. This post will highlight how to use an audio classification model on Hugging Face that accepts data recorded on a microcontroller-based device, sent over a secure network connection to Golioth, and routed through Pipelines.

While most commonly known as the place where models and data sets are uploaded and shared, Hugging Face also provides a compute service in the form of its free serverless inference API and production-ready dedicated inference endpoints. Unlike other platforms that offer only proprietary models, Hugging Face allows access to over 150,000 open source models via its inference APIs. Additionally, private models can be hosted on Hugging Face, which is a common use case for Golioth users that have trained models on data collected from their device fleets.

Audio Analysis with Pipelines

Because the Hugging Face inference APIs use HTTP, they are easy to target with the webhook transformer. The structure of the request body will depend on the model being invoked, but for models that operate on media files, such as audio or video, the payload is typically raw binary data.

In the following pipeline, we target the serverless inference API with an audio sample streamed from a device. In this scenario, we want to perform sentiment analysis of the audio, then pass the results on to Golioth’s time-series database, LightDB Stream, so that changes in sentiment can be observed over time. An alternative destination, or multiple destinations, could easily be added.

Click here to use this pipeline in your project on Golioth.

filter:
  path: "/audio"
steps:
  - name: emotion-recognition
    transformer:
      type: webhook
      version: v1
      parameters:
        url: https://api-inference.huggingface.co/models/superb/hubert-large-superb-er
        headers:
          Authorization: $HUGGING_FACE_TOKEN
  - name: embed
    transformer:
      type: embed-in-json
      version: v1
      parameters:
        key: text
  - name: send-lightdb-stream
    destination:
      type: lightdb-stream
      version: v1

Note that though Hugging Face’s serverless inference API is free to use, it is rate-limited and subject to high latency and intermittent failures due to cold starts. For production use-cases, dedicated inference endpoints are recommended.

We can pick any supported model on Hugging Face for our audio analysis task. As shown in the URL, the Hubert-Large for Emotion Recognition model is targeted, and audio content arriving on the /audio path is passed directly to Hugging Face. An example of how to upload audio to Golioth using an ESP32 can be found here.

Results from the emotion recognition inference look as follows.

[
  {
    "score": 0.6310836672782898,
    "label": "neu"
  },
  {
    "score": 0.2573806643486023,
    "label": "sad"
  },
  {
    "score": 0.09393830597400665,
    "label": "hap"
  },
  {
    "score": 0.017597444355487823,
    "label": "ang"
  }
]
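The embed-in-json step then wraps those inference results under the configured key before they reach LightDB Stream. Assuming the transformer embeds the webhook response as a string value under that key (the exact escaping shown here is illustrative), the stored record would look roughly like:

```json
{
  "text": "[{\"score\":0.6310836672782898,\"label\":\"neu\"}, ...]"
}
```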

Expanding Capabilities

Countless models are uploaded to Hugging Face on a daily basis, and the inference API integration with Golioth Pipelines makes it simple to incorporate the latest new functionality into any connected device product. Let us know what models you are using on the Golioth Forum!

TensorFlow Lite is a machine learning (ML) platform that runs on microcontroller-based devices. It’s AI for IoT, which raises a few interesting challenges. Chief among them is figuring out how to update the ML model the device is currently using. Wonder no longer: Golioth has already figured this part out! Let’s walk through how to update a TensorFlow Lite model remotely.

Today’s example is based on the ESP-IDF ecosystem and uses an M5Stack CoreS3 board. The application uses a TensorFlow Lite learning model to recognize when you speak the words “yes” and “no”. After performing an over-the-air (OTA) update of the learning model, the words “stop” and “go” will also be recognized.

What is a TensorFlow Lite Model?

Applications that use TensorFlow Lite need a machine learning model that has been trained on a large data set. TensorFlow (TF) can run on microcontrollers because this learning model has already been trained using vastly greater processing power. The “edge” device can use this pre-trained model, but will not be able to directly improve the learning set. But that set can be updated in the field.

Golioth has a TensorFlow Model Update example application that updates the TF learning model whenever a new version is available on the cloud. In this way, you can train new models and deploy them to a fleet of devices. If you capture data on-device and send it up to Golioth, you can use your live captures to train new models, as Flox Robotics does.

Overview of the Model Update Process

The basic steps for updating a TensorFlow model are as follows:

  1. Upload a new learning model as an Artifact on Golioth.
  2. Roll out a release that includes your model as a non-firmware artifact.
    • You can update your Model all by itself without performing a full firmware update.
  3. Device recognizes and downloads the newly available version.
  4. Device re-initializes the TensorFlow application to use the new model.

The ability to upload the model separately from the device firmware delivers a few benefits. It saves bandwidth and power budget because the smaller download takes less time. You will also track fewer firmware versions, as the model is versioned separately.

Core Concepts from the Golioth Firmware SDK

There are two core concepts from the Golioth Firmware SDK used in this example. The first is that an IoT device can register to be notified whenever a new release is available from the cloud.

/* Register to receive notification of manifest updates */
enum golioth_status golioth_ota_observe_manifest_async (struct golioth_client *client, golioth_get_cb_fn callback, void *arg)

/* Convert the received payload into a manifest structure */
enum golioth_status golioth_ota_payload_as_manifest (const uint8_t *payload, size_t payload_size, struct golioth_ota_manifest *manifest)

The second concept is the ability to retrieve Artifacts (think binary files like a firmware update or a new TF Lite model) from Golioth.

/* Use blockwise download to retrieve an Artifact */
enum golioth_status golioth_ota_download_component (struct golioth_client *client, const struct golioth_ota_component *component, ota_component_block_write_cb cb, void *arg)

These two concepts are applied in the Golioth TF Lite example to detect when a new model is available, download it to local storage, and begin using it in the application. While this example uses ESP-IDF, the Golioth Firmware SDK also works with Zephyr and ModusToolbox.

Model Update Walk Through

1. Upload your new TensorFlow Model to Golioth

This step couldn’t be simpler: head over to the Golioth web console, navigate to Firmware Updates→Artifacts, and click the Create button.

Browser window showing the "Upload an Artifact" dialog on the Golioth web console

Give the artifact a Package ID that the device will use to recognize it as a new model. Here I’ve used the clever name: model.

Each file you upload requires an Artifact Version number that follows semantic versioning syntax (e.g. v1.2.3). Once you’ve filled in these fields, select the file you want to upload and click Upload Artifact.

2. Roll out a release of the new Model

Rolling out your new model to devices is even easier than the upload step. Navigate to Firmware Updates→Releases and click the Create button.

Golioth web console showing the Create Release dialog

Under the Artifacts dropdown menu, select the artifact created in the previous step (note the package name and version number). I have also enabled the Start rollout? toggle so that this release will be immediately available to devices once the Create Release button is clicked.

This will roll out the model to all devices in the fleet. However, the Blueprint and Tags fields may optionally be used to target a specific device or group of devices.

3. Device-side download and storage

Learning models tend to be large, so it’s a good idea to store the model locally so that it doesn’t need to be re-downloaded the next time the device goes through a power cycle. However, the process is the same no matter what approach you take. The model will be downloaded in blocks, using a callback function your app supplies to place the block data into some storage location.

There is a bit of a song and dance here to avoid deadlocking callbacks. The first step is to register a callback when a new release manifest is received from Golioth:

/* Listen for OTA manifest */
int err = golioth_ota_observe_manifest_async(client, on_manifest, NULL);

Here’s the on_manifest callback with all the error checking and most of the logging removed for brevity. Since this is running in a callback, I push the desired manifest component into a queue which will be read later from the main loop.

#define MODEL_PACKAGE_NAME "model"

static void on_manifest(struct golioth_client *client,
                        const struct golioth_response *response,
                        const char *path,
                        const uint8_t *payload,
                        size_t payload_size,
                        void *arg)
{
    struct golioth_ota_manifest man;

    golioth_ota_payload_as_manifest(payload, payload_size, &man);

    for (int i = 0; i < man.num_components; i++)
    {
        if (strcmp(MODEL_PACKAGE_NAME, man.components[i].package) == 0)
        {
            struct golioth_ota_component *stored_component =
                (struct golioth_ota_component *) malloc(sizeof(struct golioth_ota_component));
            memcpy(stored_component, &man.components[i], sizeof(struct golioth_ota_component));

            xQueueSendToBackFromISR(xQueue, &stored_component, NULL);
        }
    }
}

Next, we have a function to perform the download of the components in the queue. I’ve removed some housekeeping code to make this more readable. At its core, this function generates the path and filename, opens a file on the SD card for writing, then begins a block download using write_artifact_block as the callback for each block received.

static void download_packages_in_queue(struct golioth_client *client)
{
    while (uxQueueMessagesWaiting(xQueue))
    {
        struct golioth_ota_component *component = NULL;
        FILE *f = NULL;

        /* Pop the next component pointer queued by on_manifest */
        if (xQueueReceive(xQueue, &component, 0) != pdTRUE)
        {
            break;
        }

        /* Store components with name_version format: "componentname_1.2.3" */
        size_t path_len = sizeof(SD_MOUNT_POINT) + strlen("/") + strlen(component->package)
            + strlen("_xxx.xxx.xxx") + 1;

        char path[path_len];
        snprintf(path,
                 sizeof(path),
                 "%s/%s_%s",
                 SD_MOUNT_POINT,
                 component->package,
                 component->version);

        GLTH_LOGI(TAG, "Opening file for writing: %s", path);
        f = fopen(path, "a");

        /* Start the block download from Golioth */
        golioth_ota_download_component(client, component, write_artifact_block, (void *) f);

        fclose(f);
        free(component);
    }
}

Here’s the full block callback function. It’s quite straightforward. The Golioth SDK will repeatedly run the callback; each time it is called, your application needs to write the data from block_buffer to a storage location.

Normally the offset for each write is calculated by multiplying the block_idx by the block_size. However, since I’ve passed a file stream pointer in as the user argument, we simply make subsequent writes and the file pointer will increment automatically.

static enum golioth_status write_artifact_block(const struct golioth_ota_component *component,
                                                uint32_t block_idx,
                                                uint8_t *block_buffer,
                                                size_t block_size,
                                                bool is_last,
                                                void *arg)
{

    if (!arg)
    {
        GLTH_LOGE(TAG, "arg is NULL but should be a file stream");
        return GOLIOTH_ERR_INVALID_FORMAT;
    }
    FILE *f = (FILE *) arg;

    fwrite(block_buffer, block_size, 1, f);

    if (is_last)
    {
        GLTH_LOGI(TAG, "Block download complete!");
    }

    return GOLIOTH_OK;
}

The new Model is now stored as a file on the SD card, named to match the package name and version number. This is quite handy for troubleshooting as you can pop out the SD card and inspect it on a computer.

4. Switching to the new model on the device

Switching to the new model is where you will likely spend the most time making changes on your own application. I was working off of the TensorFlow Lite micro_speech example from Espressif which hardcodes several of the parameters relating to loading and using a learning model.

The approach that I took was to move the pertinent learning model settings to RAM and load them from a header that was added to the model. This header formatting is explained in the README for the Golioth example. In our example code, the bulk of this work is done in model_handler.c.

For your own application, keep in mind any variables necessary to load a new model and how those may change with future training updates.

Take Golioth for a Spin!

Golioth is free for individuals, with usage pricing that includes 1 GB per month of OTA data. So you can get a small test fleet up and running today before seeking budget approval.

Those interested in pushing sensor data back up to the cloud for training future models will find our examples on uploading audio and uploading images helpful. We’d love to hear your questions or just see what cool things you’re working on, so take a moment to post your progress to the Golioth forum.

Shortly after our Golioth for AI launch, which included integrations with platforms such as OpenAI, Anthropic, Hugging Face, and Replicate, OpenAI announced support for Structured Outputs. Structured Outputs allow callers of OpenAI’s APIs to provide a JSON schema to define the structure in which responses should be formatted.

Because OpenAI APIs are invoked via Pipeline Transformers (not to be confused with transformer models in this context) on Golioth, the responses are likely to be subsequently passed to a Pipelines Destination, or even another Transformer. It is helpful if these subsequent steps in a Pipeline can be certain of the structure of the payload they will receive.

The following Pipeline demonstrates the use of Structured Outputs.

filter:
  path: "/accel"
  content_type: application/cbor
steps:
  - name: convert
    transformer:
      type: cbor-to-json
  - name: embed
    transformer:
      type: embed-in-json
      version: v1
      parameters:
        key: readings
  - name: create-payload
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {
              "op": "add",
              "path": "/model",
              "value": "gpt-4o-2024-08-06"
            },
            {
              "op": "add",
              "path": "/messages",
              "value": [
                {
                  "role": "user",
                  "content": [
                    {
                      "type": "text",
                      "text": "Rank the top three values in the following sensor readings."
                    },
                    {
                      "type": "text",
                      "text": "PATCH"
                    }
                  ]
                }
              ]
            },
            {
              "op": "add",
              "path": "/response_format",
              "value": {
              "type": "json_schema",
              "json_schema": {
                "name": "math_response",
                "strict": true,
                "schema": {
                  "type": "object",
                  "properties": {
                    "readings": {
                      "type": "array",
                      "items": {
                        "type": "object",
                        "properties": {
                          "reading": {
                            "type": "number"
                          },
                          "rank": {
                            "type": "number"
                          }
                        },
                        "required": ["reading", "rank"],
                        "additionalProperties": false
                      }
                    }
                  },
                  "required": ["readings"],
                  "additionalProperties": false
                }
              }
              }
            },
            {
              "op": "move",
              "from": "/readings",
              "path": "/messages/0/content/1/text"
            },
            {
              "op": "remove",
              "path": "/readings"
            }
          ]
  - name: extract
    transformer:
      type: webhook
      version: v1
      parameters:
        url: https://api.openai.com/v1/chat/completions
        headers:
          Authorization: $OPENAI_TOKEN
  - name: parse-payload
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "add", "path": "/text", "value": ""},
            {"op": "move", "from": "/choices/0/message/content", "path": "/text"}
          ]  
  - name: send-webhook
    destination:
      type: webhook
      version: v1
      parameters:
        url: $SLACK_WEBHOOK

When looking at more complex Pipelines, it is helpful to break them down by each step. Before our first step, we have our Filter.

filter:
  path: "/accel"
  content_type: application/cbor

This restricts the data that will be passed to this Pipeline to messages from devices on the /accel path, indicating that they are accelerometer sensor readings, with content type of application/cbor. Being able to deliver data from devices in a binary encoded format such as CBOR reduces the amount of bandwidth each device consumes. However, because many cloud services operate on JSON data, our first step takes care of converting our CBOR payload to JSON.

- name: convert
  transformer:
    type: cbor-to-json

In order to be able to manipulate this data as part of a larger JSON object, we then embed it with key readings.

- name: embed
  transformer:
    type: embed-in-json
    version: v1
    parameters:
      key: readings

Now it’s time to create a request payload that we can deliver to OpenAI. This will include not only our accelerometer readings, but also information about what model we want to use, a prompt for what the model should do, and our JSON schema that defines how we want it to respond. With our readings embedded in a JSON object, we can operate on that object using the JSON Patch transformer.

- name: create-payload
  transformer:
    type: json-patch
    version: v1
    parameters:
      patch: |
         [
           {
             "op": "add",
             "path": "/model",
             "value": "gpt-4o-2024-08-06"
           },
           {
             "op": "add",
             "path": "/messages",
             "value": [
               {
                 "role": "user",
                 "content": [
                   {
                     "type": "text",
                     "text": "Rank the top three values in the following sensor readings."
                   },
                   {
                     "type": "text",
                     "text": "PATCH"
                   }
                 ]
               }
             ]
           },
           {
             "op": "add",
             "path": "/response_format",
             "value": {
             "type": "json_schema",
             "json_schema": {
               "name": "math_response",
               "strict": true,
               "schema": {
                 "type": "object",
                 "properties": {
                   "readings": {
                     "type": "array",
                     "items": {
                       "type": "object",
                       "properties": {
                         "reading": {
                           "type": "number"
                         },
                         "rank": {
                           "type": "number"
                         }
                       },
                       "required": ["reading", "rank"],
                       "additionalProperties": false
                     }
                   }
                 },
                 "required": ["readings"],
                 "additionalProperties": false
               }
             }
             }
           },
           {
             "op": "move",
             "from": "/readings",
             "path": "/messages/0/content/1/text"
           },
           {
             "op": "remove",
             "path": "/readings"
           }
         ]

Altogether, this sequence of patch operations will transform a data payload that looks like this:

{
    "readings": "[3.2, 4.78, 2.36, 5.99, 6.7]"
}

Into a request payload that looks like this:

{
    "model": "gpt-4o-2024-08-06",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Rank the top three values in the following sensor readings."
                },
                {
                    "type": "text",
                    "text": "[3.2, 4.78, 2.36, 5.99, 6.7]"
                }
            ]
        }
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "math_response",
            "strict": true,
            "schema": {
                "type": "object",
                "properties": {
                    "readings": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "reading": {
                                    "type": "number"
                                },
                                "rank": {
                                    "type": "number"
                                }
                            },
                            "required": [
                                "reading",
                                "rank"
                            ],
                            "additionalProperties": false
                        }
                    }
                },
                "required": [
                    "readings"
                ],
                "additionalProperties": false
            }
        }
    }
}

We have asked the model to rank the top three values from our sensor readings, and provide the rankings in an array of objects, each with the reading value and its ranking. We can deliver this payload to OpenAI, leveraging Pipeline Secrets to provide our API token.

- name: extract
  transformer:
    type: webhook
    version: v1
    parameters:
      url: https://api.openai.com/v1/chat/completions
      headers:
        Authorization: $OPENAI_TOKEN

We can once again leverage the JSON Patch transformer to pull the structured output response (/choices/0/message/content) out of the full JSON object returned by OpenAI. In this case we move it to the key /text, as that is what our final destination expects.

- name: parse-payload
  transformer:
    type: json-patch
    version: v1
    parameters:
      patch: |
        [
          {"op": "add", "path": "/text", "value": ""},
          {"op": "move", "from": "/choices/0/message/content", "path": "/text"}
        ]

Finally, we pass our modified JSON object to the Webhook Destination, which we use to deliver a message to Slack for demonstration purposes.

- name: send-webhook
  destination:
    type: webhook
    version: v1
    parameters:
      url: $SLACK_WEBHOOK

In Slack, we can see the ranked sensor readings from ChatGPT, just as we requested.

{"readings":[{"reading":6.7,"rank":1},{"reading":5.99,"rank":2},{"reading":4.78,"rank":3}]}

Where To Next?

One of the most interesting aspects of using an AI model for data processing in this context is that the structure of the data sent by the device could change, and the model could still rank the values and return them in a predictable format. This can be extremely valuable if you are supporting a device fleet in which the data payloads may differ, either due to multiple firmware versions or because payload structure is dependent on the device’s environment. And ranking sensor readings is one of the simplest tasks that these models can perform — we’re excited to see users try out more complex operations!

Golioth for AI

Today, we are thrilled to announce the launch of Golioth for AI, a comprehensive set of features designed to simplify and enhance the integration of AI into IoT products.

At Golioth, we envision a future where AI and IoT converge to create smarter, more efficient systems that can learn, adapt, and improve over time. The fusion of AI and IoT has the potential to unlock unprecedented levels of innovation and automation across various industries. However, integrating AI into IoT devices can be complex and challenging, requiring robust solutions for managing models, training data, and performing inference.

Today, at Golioth, we are addressing these challenges head-on. Our new set of features focuses on three core pillars: training data, model management, and inference. By streamlining these processes, we aim to empower developers and businesses to quickly add AI to their IoT projects, where it was not readily possible to do so before.

Training Data: Unlocking the Potential of IoT Data

At Golioth, we recognize that IoT devices generate rich, valuable data that can be used to train innovative AI models. However, this data is often inaccessible, in the wrong format, or difficult to stream to the cloud. We’re committed to helping teams extract this data and route it to the right destinations for training AI models that solve important physical world problems.

We’ve been building up to this with our launch of Pipelines, and new destinations and transformers have been added every week since. Learn more about Pipelines in our earlier announcement.

In v0.14.0 of our Firmware SDK, we added support for block-wise uploads. This new capability allows for streaming larger payloads, such as high-resolution images and audio, to the cloud. This unlocks incredible potential for new AI-enabled IoT applications, from connected cameras streaming images for security and quality inspection to audio-based applications for voice recognition and preventative maintenance for industrial machines.

For an example of uploading images or audio files to Golioth for training, see:

We’ve recently added three new object storage destinations for Pipelines:

These storage solutions are perfect for handling the rich media data essential for training AI models, ensuring your training set is always up to date with live data from in-field devices.

Partnership with Edge Impulse

Today, we’re also excited to announce our official partnership with Edge Impulse, a leading platform for developing and optimizing AI for the edge. This partnership allows streaming of IoT data from Golioth to Edge Impulse for advanced model training, fine-tuned for microcontroller class devices. Using Edge Impulse’s AWS S3 Data Acquisition, you can easily integrate with Golioth’s AWS S3 Pipelines destination by sharing a bucket for training data:

filter:
  path: "*"
  content_type: application/octet-stream
steps:
  - name: step-0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_ACCESS_SECRET
        region: us-east-1

This streamlined approach enables you to train cutting-edge AI models efficiently, leveraging the power of both the Golioth and Edge Impulse platforms. For a full demonstration of using Golioth with Edge Impulse see: https://github.com/edgeimpulse/example-golioth

Model Management: Flexible OTA Updates

Deploying AI models to devices is crucial for keeping them updated with the latest capabilities and adapting to new use cases and edge cases as they arise. However, deploying a full OTA firmware update every time you need to update your AI model is inefficient and costly.

To address this, Golioth’s Over-the-Air (OTA) update system has been enhanced to support a broader range of artifact types, including AI models and media files.

Golioth Console Display OTA AI Model Releases

Our updated OTA capabilities ensure that your AI models can be deployed and updated independently of firmware updates, making the process more efficient and streamlined. This allows models to be updated without having to perform a complete firmware update, saving bandwidth, reducing battery consumption, and minimizing downtime while ensuring your devices always have the latest AI capabilities.

We’ve put together an example demonstrating deploying a TensorFlow Lite model here: Deploying TensorFlow Lite Model.

Flox Robotics is already leveraging our model management capabilities to deploy AI models that detect wildlife and ensure their safety. Their AI deterrent systems prevent wildlife from entering dangerous areas, significantly reducing harm and preserving ecosystems. Read the case study.

Inference: On-Device and in the Cloud

Inference is the core of AI applications, and with Golioth, you can now perform inference both on devices and in the cloud. On-device inference is often preferred for applications like real-time monitoring, autonomous systems, and scenarios where immediate decision-making is critical due to its lower latency, reduced bandwidth usage, and ability to operate offline.

However, sometimes inference in the cloud is ideal or necessary for tasks requiring significant processing power, such as high-resolution image analysis, complex pattern recognition, and large-scale data aggregation, leveraging more powerful computational resources and larger models.

Golioth Pipelines now supports inference transformers and destinations, integrating with leading AI platforms including Replicate, Hugging Face, OpenAI, and Anthropic. Using our new webhook transformer, you can leverage these platforms to perform inference within your pipelines. The results of the inference are then returned back to your pipeline to be routed to any destination. Learn more about our new webhook transformer.

Here is an example of how you can configure a pipeline to send audio samples captured on a device to the Hugging Face Serverless Inference API, leveraging a fine-tuned HuBERT model for emotion recognition. The inference results are forwarded as timeseries data to LightDB Stream.

filter:
  path: "/audio"
steps:
  - name: emotion-recognition
    transformer:
      type: webhook
      version: v1
      parameters:
        url: https://api-inference.huggingface.co/models/superb/hubert-large-superb-er
        headers:
          Authorization: $HUGGING_FACE_TOKEN
  - name: embed
    transformer:
      type: embed-in-json
      version: v1
      parameters:
        key: text
  - name: send-lightdb-stream
    destination:
      type: lightdb-stream
      version: v1

Golioth is continually releasing new examples to highlight the applications of AI on device and in the cloud. Here’s an example of uploading an image and configuring a pipeline to describe the image with OpenAI and send the transcription result to Slack:

filter:
  path: "/image"
steps:
  - name: jpeg
    transformer:
      type: change-content-type
      version: v1
      parameters:
        content_type: image/jpeg
  - name: url
    transformer:
      type: data-url
      version: v1
  - name: embed
    transformer:
      type: embed-in-json
      version: v1
      parameters:
        key: image
  - name: create-payload
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {
              "op": "add",
              "path": "/model",
              "value": "gpt-4o-mini"
            },
            {
              "op": "add",
              "path": "/messages",
              "value": [
                {
                  "role": "user",
                  "content": [
                    {
                      "type": "text",
                      "text": "What's in this image?"
                    },
                    {
                      "type": "image_url",
                      "image_url": {
                        "url": ""
                      }
                    }
                  ]
                }
              ]
            },
            {
              "op": "move",
              "from": "/image",
              "path": "/messages/0/content/1/image_url/url"
            },
            {
              "op": "remove",
              "path": "/image"
            }
          ]
  - name: explain
    transformer:
      type: webhook
      version: v1
      parameters:
        url: https://api.openai.com/v1/chat/completions
        headers:
          Authorization: $OPENAI_TOKEN
  - name: parse-payload
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "add", "path": "/text", "value": ""},
            {"op": "move", "from": "/choices/0/message/content", "path": "/text"}
          ]  
  - name: send-webhook
    destination:
      type: webhook
      version: v1
      parameters:
        url: $SLACK_WEBHOOK

For a full list of inference examples see:

Golioth for AI marks a major step forward in integrating AI with IoT. This powerful collection of features is the culmination of our relentless innovation in device management and data routing, now unlocking advanced AI capabilities like never before. Whether you’re an AI expert or just starting your AI journey, our platform provides the infrastructure to seamlessly train, deploy, and manage AI models with IoT data.

We’ve assembled a set of exciting examples to showcase how these features work together, making it easier than ever to achieve advanced AI integration with IoT. We can’t wait to see the AI innovations you’ll create using Golioth.

For detailed information and to get started, visit our documentation and explore our examples on GitHub.

Thank you for joining us on this exciting journey. Stay tuned for more updates and build on!

You Might Not Need GNSS

Building hardware involves thinking far into the future to ensure that a device will continue to serve its intended purpose in any environment in which it may be deployed. However, simply adding more and more components to the bill of materials (BOM) can lead to hitting constraints around price, power, and availability. One common requirement for connected device products is the ability to identify the current location. Generally this functionality is provided via a global navigation satellite system (GNSS), which requires dedicated hardware, has a negative impact on battery life, and can be unreliable in some locations.

There are a number of GNSS constellations. You may be more familiar with a specific implementation, such as the Global Positioning System (GPS), which is operated by the United States government.

For some use cases, these attributes are not worth the precision offered by GNSS. For example, if it is only necessary to obtain the rough position of a device, it may not make sense to increase the BOM cost and decrease the battery life. Furthermore, in brownfield scenarios, it may be desirable to add location capabilities to a product when the existing hardware does not support it.

How to Know Where You Are

GNSS works by communicating with satellites that are orbiting the Earth. These satellites broadcast their position information, and after acquiring signals from four satellites, a device can use a process called trilateration to identify its own location. In short, devices can find their own location by talking to systems that know their own location.

Device obtaining location by communicating with 4 GNSS satellites.

Issues with GNSS occur when communication with these satellites is interrupted by physical obstacles or some other interference. After all, these satellites are very far away and communicate at a low data rate. To combat these limitations, some devices will employ an A-GNSS (Assisted GNSS) service, which supplies information about the satellites to help the device shorten its time to first fix (TTFF). This is referred to as Mobile Station Based (MSB) A-GNSS. Alternatively, the device can send the partial position data it has been able to obtain to the A-GNSS service, which the service will then use to calculate the position and return it to the device. This mode is referred to as Mobile Station Assisted (MSA).

Device obtaining location information by communicating with GNSS satellites and an A-GNSS service.

A-GNSS is a helpful solution when the behavior of the device is impacted by its location. However, in the event that the location of the device is only relevant to an external system, such as a human tracking a device, A-GNSS may not be necessary.

An alternative to talking to a system that knows its own location is talking to a system that knows the locations of other devices nearby. For example, a device with a WiFi radio can identify the networks that are advertising in its area, and if a third-party is aware of the location of that network, then it can determine the rough location of the device. The same is true for a cellular device connecting to a tower, or even Bluetooth devices surrounded by other devices that may have a known location. A major advantage of this option is that the device already needs to establish this communication for connectivity, so leveraging it for location removes the need to include additional hardware, credentials, or firmware networking stacks.

Who Do You Ask?

These third-parties that know where everything is sound extremely useful, but who are they? A few weeks ago, Chris Gammell wrote a blog post about using one such service, HERE, to build a geofence device with a WiFi capable ESP32-C3. Other services include those from popular smartphone makers, such as Google and Apple, each of which leverages its vast network of multi-radio devices to build out a robust database of wireless access point location information. If my phone has obtained a GNSS signal, it can survey the nearby WiFi access points and Bluetooth devices, and report their MAC addresses along with its current location back to the service. That data can then be used to help the next device identify its location just by sending one or more of those MAC addresses. If that sounds to you like it could be used for nefarious purposes, you’re right.

Many other location services you may encounter are actually just thin wrappers on the services listed above. In fact, Chris’ aforementioned post effectively built a wrapper service on HERE using the open source n8n project. However, when determining how to more seamlessly expose location services to Golioth users, we wanted to ensure that we preserved flexibility for moving between services. We also wanted to allow for devices that may already be sending network information to Golioth to be enhanced with location information without needing changes to be made to their firmware.

Device obtaining location information by sending nearby wireless network data to a positioning service.

Fortunately, our recently released webhook transformer enables leveraging external services to augment data sent from devices to Golioth. This feature is a perfect candidate for transforming network information into location data, then forwarding it along to its final destination. As with all Golioth Pipelines functionality, this can be introduced alongside existing streaming data routing without interruption.

HERE

In the following example, CBOR data streamed on the /netinfo path is transformed into JSON before being delivered to HERE’s positioning service.

filter:
  path: "/netinfo"
  content_type: application/cbor
steps:
  - name: convert
    transformer:
      type: cbor-to-json
  - name: get-location
    transformer:
      type: webhook
      parameters:
        url: $HERE_URL
  - name: send-lightdb
    destination:
      type: lightdb-stream
      version: v1

The $HERE_URL secret is in the format https://positioning.hereapi.com/v2/locate?api-key={YOUR_API_KEY}.
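Conceptually, the webhook transformer POSTs the in-flight payload to the configured URL and continues the pipeline with the service's response as the new payload. The Python sketch below illustrates that behavior with a stubbed HTTP call standing in for HERE; the stub's response mirrors the shape HERE returns (a location object containing lat, lng, and accuracy). The names `webhook_transform` and `fake_here_post` are illustrative, not part of any SDK.

```python
import json

def webhook_transform(payload, url, post):
    """Sketch of the webhook transformer: POST the in-flight payload
    and continue the pipeline with the response body as the new payload."""
    response_body = post(url, json.dumps(payload))
    return json.loads(response_body)

def fake_here_post(url, body):
    """Stub standing in for HERE's positioning service (no real HTTP)."""
    assert "wlan" in json.loads(body)  # the service expects the network survey
    return json.dumps({"location": {"lat": 37.42, "lng": -122.09, "accuracy": 25}})

netinfo = {"wlan": [{"mac": "00:18:39:59:8C:53", "rss": -87}]}
location = webhook_transform(
    netinfo,
    "https://positioning.hereapi.com/v2/locate?api-key=...",  # $HERE_URL
    fake_here_post,
)
# location now holds the resolved position and flows on to the destination step
```

In the real pipeline, the POST is performed by Golioth's infrastructure, so the device never needs HERE credentials or an HTTPS stack.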

For example, the following device payload (shown as JSON for readability), which includes information about nearby WiFi access points, could be used to identify the device's location.

{
  "wlan": [
    {
      "mac": "00:18:39:59:8C:53",
      "rss": -87
    },
    {
      "mac": "00:21:55:61:F3:0A",
      "rss": -86
    },
    {
      "mac": "00:11:5C:6B:9A:00",
      "rss": -71
    }
  ]
}

The location data in this case is delivered to LightDB Stream, creating a timeseries history of the device’s location.

Location data in LightDB Stream.

HERE supports supplying information about multiple types of networks, such as WiFi and cellular, to enable more precise positioning.

Google Geolocation API

The true power of Pipelines is on display with the ability to switch out the service that resolves device location, without needing to update the firmware or systems that consume the location data. In the following pipeline, we switch from using HERE to Google’s Geolocation API, which expects a different request body.

filter:
  path: "/netinfo"
  content_type: application/cbor
steps:
  - name: convert
    transformer:
      type: cbor-to-json
  - name: transform-google-geo
    transformer:
      type: json-patch
      parameters:
        patch: |
          [
            {"op": "add", "path": "/wifiAccessPoints", "value": [{"macAddress": "", "signalStrength": 0}, {"macAddress": "", "signalStrength": 0}, {"macAddress": "", "signalStrength": 0}]},
            {"op": "move", "from": "/wlan/0/mac", "path": "/wifiAccessPoints/0/macAddress"},
            {"op": "move", "from": "/wlan/0/rss", "path": "/wifiAccessPoints/0/signalStrength"},
            {"op": "move", "from": "/wlan/1/mac", "path": "/wifiAccessPoints/1/macAddress"},
            {"op": "move", "from": "/wlan/1/rss", "path": "/wifiAccessPoints/1/signalStrength"},
            {"op": "move", "from": "/wlan/2/mac", "path": "/wifiAccessPoints/2/macAddress"},
            {"op": "move", "from": "/wlan/2/rss", "path": "/wifiAccessPoints/2/signalStrength"},
            {"op": "remove", "path": "/wlan"}
          ]
  - name: get-location
    transformer:
      type: webhook
      parameters:
        url: $GOOGLE_GEO_API_URL
  - name: transform-standard-loc
    transformer:
      type: json-patch
      parameters:
        patch: |
          [
            {"op": "add", "path": "/location/accuracy", "value": 0},
            {"op": "move", "from": "/accuracy", "path": "/location/accuracy"}
          ]
  - name: send-lightdb
    destination:
      type: lightdb-stream
      version: v1

The $GOOGLE_GEO_API_URL secret is in the format https://www.googleapis.com/geolocation/v1/geolocate?key={YOUR_API_KEY}.

We can utilize the json-patch transformer to restructure the device payload after converting it to JSON. After patching, the same style of payload sent to the HERE pipeline now looks as follows.

{
  "wifiAccessPoints": [
    {
      "macAddress": "3c:37:86:5d:75:d4",
      "signalStrength": -35
    },
    {
      "macAddress": "30:86:2d:c4:29:d0",
      "signalStrength": -35
    },
    {
      "macAddress": "30:22:96:6B:9A:11",
      "signalStrength": -22
    }
  ]
}
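To see exactly what the json-patch step does, the sketch below applies the same operations using a minimal, hand-rolled subset of RFC 6902 (add, move, and remove on existing containers; real implementations handle more cases, such as list insertion). The helper names are ours, purely for illustration.

```python
def resolve(doc, pointer):
    """Walk a JSON Pointer, returning the parent container and final key."""
    parts = pointer.lstrip("/").split("/")
    parent = doc
    for p in parts[:-1]:
        parent = parent[int(p)] if isinstance(parent, list) else parent[p]
    key = parts[-1]
    return parent, (int(key) if isinstance(parent, list) else key)

def apply_patch(doc, ops):
    """Tiny RFC 6902 subset: enough to trace the pipeline's patch."""
    for op in ops:
        if op["op"] == "add":            # sketch: assign, no list insertion
            parent, key = resolve(doc, op["path"])
            parent[key] = op["value"]
        elif op["op"] == "move":         # remove from source, add at target
            src, src_key = resolve(doc, op["from"])
            value = src[src_key]
            del src[src_key]
            dst, dst_key = resolve(doc, op["path"])
            dst[dst_key] = value
        elif op["op"] == "remove":
            parent, key = resolve(doc, op["path"])
            del parent[key]
    return doc

netinfo = {"wlan": [{"mac": "00:18:39:59:8C:53", "rss": -87},
                    {"mac": "00:21:55:61:F3:0A", "rss": -86},
                    {"mac": "00:11:5C:6B:9A:00", "rss": -71}]}
patch = [
    {"op": "add", "path": "/wifiAccessPoints",
     "value": [{"macAddress": "", "signalStrength": 0} for _ in range(3)]},
    {"op": "move", "from": "/wlan/0/mac", "path": "/wifiAccessPoints/0/macAddress"},
    {"op": "move", "from": "/wlan/0/rss", "path": "/wifiAccessPoints/0/signalStrength"},
    {"op": "move", "from": "/wlan/1/mac", "path": "/wifiAccessPoints/1/macAddress"},
    {"op": "move", "from": "/wlan/1/rss", "path": "/wifiAccessPoints/1/signalStrength"},
    {"op": "move", "from": "/wlan/2/mac", "path": "/wifiAccessPoints/2/macAddress"},
    {"op": "move", "from": "/wlan/2/rss", "path": "/wifiAccessPoints/2/signalStrength"},
    {"op": "remove", "path": "/wlan"},
]
result = apply_patch(netinfo, patch)
```

Note why the patch first adds placeholder entries: a move into `/wifiAccessPoints/0/macAddress` needs that list and its elements to exist before values can land there.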

The Google Geolocation API returns the following response.

{
  "accuracy": 20,
  "location": {
    "lat": 37.4241224,
    "lng": -122.0915874
  }
}

This looks almost the same as the response from HERE, but the accuracy measure sits outside of the location object. We can apply a second json-patch transformer to make it match exactly. The result is location data appearing in LightDB Stream with no discernible difference when moving between services.

Location data in LightDB Stream.
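The second json-patch step is small enough to trace by hand. The sketch below interprets its two operations per RFC 6902 (add assigns into an existing object; move removes the value from its source and places it at the target); the helper is illustrative, not Golioth's implementation.

```python
def walk(doc, pointer):
    """Return the parent object and final key of a JSON Pointer (objects only)."""
    parts = pointer.lstrip("/").split("/")
    parent = doc
    for p in parts[:-1]:
        parent = parent[p]
    return parent, parts[-1]

def apply_patch(doc, ops):
    """Minimal RFC 6902 subset covering this patch: add and move."""
    for op in ops:
        parent, key = walk(doc, op["path"])
        if op["op"] == "add":
            parent[key] = op["value"]
        elif op["op"] == "move":          # remove from source, place at target
            src_parent, src_key = walk(doc, op["from"])
            parent[key] = src_parent.pop(src_key)
    return doc

google_response = {"accuracy": 20,
                   "location": {"lat": 37.4241224, "lng": -122.0915874}}
patch = [
    {"op": "add", "path": "/location/accuracy", "value": 0},
    {"op": "move", "from": "/accuracy", "path": "/location/accuracy"},
]
normalized = apply_patch(google_response, patch)
# normalized now matches the HERE-style shape: accuracy inside "location"
```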

Visualizing Device Location

With data flowing to a destination, we can then query that data source to expose location information in a helpful context. For example, we can set up a Grafana dashboard that live-plots location updates for the device, allowing us to observe its movement over time without equipping it with GNSS capabilities.

Device shown in Grafana map view.

Going Further

In all of the use cases shown above, device location data was sent to a destination for consumption, but it was not returned to the device. You can watch the demo from Chris' post to see how Golioth's LightDB State service can be used to return location information to a device when necessary. However, we'll soon be introducing functionality that makes this process even easier. In the meantime, reach out on the forum to let us know how you are using Golioth Pipelines and what features you would like to see added!