Golioth Location Private Access text above laptop screen showing Golioth console on new location tab.

Today we are launching Golioth Location in private access. This service offers network positioning functionality, allowing devices to leverage the same radios (e.g. cellular and Wi-Fi) used for communicating with Golioth to obtain location information. Doing so may extend battery life, reduce hardware costs, and enable more rapid acquisition of a device’s position. Device location data can optionally be returned to the device, stored on Golioth, or forwarded to external destinations via Pipelines.

Golioth customers with a Teams or Enterprise tier organization can request access using this form.

Screenshot of Golioth console showing map with location entries for devices.

Why Build a Location Service?

Golioth users have already been able to leverage third-party location services via the Pipelines webhook transformer. However, there are a few drawbacks to this approach:

  • Users must identify a third-party provider and manually integrate it into their Pipelines, or they must establish a costly second secure internet connection from the device to the provider.
  • Users must manage a separate billing relationship with the third-party provider, which may involve complex pricing that is not aligned with managing a large fleet of constrained devices.
  • Users are responsible for building location functionality into their firmware, rather than relying on functionality offered by the Golioth Firmware SDK.
  • Location data is not returned to devices and is not made available in the Golioth console when manually integrating via Pipelines.

Golioth Location alleviates these concerns by enabling location functionality with a single function call in the Golioth Firmware SDK. There is no need to implement additional protocols or initiate multiple network connections, both of which can bloat firmware image size and reduce battery life, as all communication takes place over Golioth’s secure CoAP transport. Furthermore, because location data is flowing through the Golioth platform, it can easily be stored for fleet visualization, while still allowing for forwarding to external locations via Pipelines.

Why Network Based Positioning?

Location is a key component of many connected device applications, ranging from asset tracking and geofencing to compliance and stolen device recovery. We have previously written about the benefits of using network based positioning relative to GNSS. While typically less accurate, network positioning offers a number of advantages:

  • Extends battery life by eliminating the need to operate power-hungry GNSS radios.
  • Reduces the cost of hardware by utilizing components that are already necessary for cloud communication.
  • Offers faster location resolution by eliminating time to first fix (TTFF) delays.

Many devices only need approximate location at a relatively infrequent interval. However, use-cases that require greater accuracy than that offered by network positioning techniques may combine GNSS and network solutions to quickly obtain general location before switching to more precise tracking. In some cases, such as indoor devices and those in dense urban areas, network positioning can offer superior accuracy to GNSS.

Golioth Location offers network positioning via Wi-Fi access point data, single-cell and multi-cell information, or a combination of both. Additionally, though network positioning has been identified as the top location feature request by existing Golioth customers, we anticipate expanding capabilities based on feedback, such as adding support for assisted GNSS (A-GNSS) and predicted GNSS (P-GNSS).

Accessing Golioth Location

As of today, Golioth Location is available in private access to organizations on Teams and Enterprise tiers. During the private access period, we will be working closely with customers to ensure that necessary functionality is made available both on the Golioth platform and in the Golioth Firmware SDK. Customers that participate in this program will receive specialized evaluation pricing. Public pricing will be finalized prior to general availability.

What’s Next?

In the coming weeks we’ll share more details about the benefits and tangible use-cases of network positioning. If you are interested in sharing your own experience, or have any questions or feature requests, feel free to reach out on the forum!

OTA Event Log visuals

At Golioth, we believe that the ability to perform Over-The-Air (OTA) updates is a crucial part of any IoT product: it not only lets manufacturers upgrade their products in the field, but also provides a safety net for recovering from unforeseen software issues. And precisely because OTA is that safety net, control and visibility matter even more in the update mechanism than in any other aspect of the product.

A few months ago, we introduced Cohorts as a new way of managing OTA updates for devices connected to Golioth. Cohorts provided a safer user experience for OTA by explicitly grouping devices that were receiving the same OTA updates into cohorts and presenting the update history for each cohort separately.

Introducing the OTA Event Log

Since the release of Cohorts, we have been working on improving your ability to observe the OTA update process, and we’re excited to reveal our first OTA observability improvement: The OTA Event Log.

Screenshot of Golioth Console displaying OTA Event Log

The OTA Event Log is a timeline of events recorded in a cohort. The event log is available in a new tab on the cohort management page, and is presented as an interactive timeline with an accompanying log of events.

The timeline view contains an hourly summary of the reported events in the last week, as well as markers for any deployments to the cohort.

The OTA Events are based on the OTA state reports that are reported by the device as it moves through the update process. These are the same reports that power the firmware status tab of the device management page, but with the new timeline view, you’ll be able to review the device’s progress through each OTA deployment after the fact, and catch any reported errors or unexpected delays in the upgrade process for each device in the cohort.

The OTA Event Log is available for each cohort, as well as for each device, and can be found in the Event Log tab of the Cohort management page and the Firmware tab of the Device management page in the Golioth Console.

Event log data retention

For developers on the Free Tier, OTA Event Logs will be available for the past 7 days. Developers on the Teams Tier will be able to access logs from the last 30 days, and developers on the Enterprise Tier will be able to define custom retention policies.

Note that we started recording events on December 5th 2024, and earlier events are unavailable.

What’s next?

The OTA Event Log is just the first step towards expanding the observability of OTA deployments. The new event collection mechanism powering the OTA Event Log enables us to build more complex heuristics and statistics for OTA updates, so stay tuned for incoming improvements to the Cohort pages, which are about to become a lot more powerful.

Golioth Firmware SDK v0.16.0

Yesterday we released v0.16.0 of the Golioth Firmware SDK. This release includes a number of improvements, which are described in the following sections. Importantly, this release also introduces the Golioth Root X1 root CA certificate, which Golioth device services will start using 1 year from today. Golioth users are encouraged to update their firmware to the v0.16.0 release at their earliest convenience.

For the full set of changes in this release, see the changelog.

Golioth Root X1

Note: this update does not impact devices that are using pre-shared keys (PSKs) for authentication. Golioth does not recommend the use of PSKs in production.

Golioth device services have always relied on Let’s Encrypt for server certificates. Let’s Encrypt is a fantastic service that has contributed to making a more secure internet for all. We plan to continue using Let’s Encrypt for the foreseeable future for the Golioth Management API and all of our web properties.

Let’s Encrypt is a public Certificate Authority (CA). The benefit of using certificates issued by a public CA is that its root certificates are widely trusted. That trust is what allows your browser to securely connect to this website. Operating systems, browsers, and other applications include a set of root CA certificates in their root store (see Chromium’s list for example), meaning that they will trust leaf certificates that have a chain of trust rooted in one of those root CA certificates. When someone wants to serve content on the public internet, they can request a leaf certificate from a service like Let’s Encrypt after completing an ACME challenge verifying ownership of their domain. Because Let’s Encrypt’s root CA certificates are included in almost all root stores, clients will automatically trust that site without requiring any manual intervention.

While this system underpins the security of the entire internet, the value of widely trusted certificates is less applicable to embedded devices. Typically, root CA certificates are loaded into a secure element or baked into firmware by the manufacturer of the device or the developer of its firmware. This is in stark contrast to buying a PC with a pre-installed operating system or downloading a web browser.

Embedded devices also differ in their constraints. With limited flash and memory, every root CA certificate and supported cipher suite cuts into precious resources. Furthermore, many of the devices are deployed in hard-to-access locations, meaning that losing remote connectivity can be fatal. Extreme care must be taken to ensure that devices are designed to continue communicating securely in perpetuity. One key aspect of developing a strategy is ensuring robust support for Over-the-Air (OTA) updates, which Golioth makes simple and straightforward.

As part of our ongoing commitment to our users we have evaluated all the ways in which changes in the trust chain for certificates used by Golioth device services could negatively impact devices in the field. For example, a slight change in the cipher suite leveraged by an intermediate certificate could render a device unable to securely connect to the platform if the new cipher suite was not enabled in its firmware. While public CAs like Let’s Encrypt typically communicate changes with ample lead time, by using their certificates we, and in turn our users, are subject to their policies and procedures (as well as any future changes to them). This constitutes a level of risk that we are not comfortable with.

For this reason, the v0.16.0 release of the Golioth Firmware SDK includes the Golioth Root X1 root CA certificate alongside Let’s Encrypt’s ISRG Root X1 root. It also formally starts the transition period for devices connecting to the Golioth platform to move to including Golioth Root X1 in their root store. The simplest way to accomplish this is by upgrading to the v0.16.0 release for your next firmware update. We have set the transition period to 1 year from today, but as always, we will work with all current and future users to ensure that the transition process is seamless.

If you have any questions please do not hesitate to reach out to us on the forum.

Asynchronous Callback Status Handling

All asynchronous callbacks now have both a status member and a coap_rsp_code member, replacing the response member. All of the same information remains accessible, but the updated structure supports more granular error handling in callbacks, such as performing operations when a request is unable to be sent. These callbacks are now also invoked when requests are canceled prior to receiving a response, which may be the case if the Golioth client is manually stopped or the connection is lost.

Callback function signatures must be updated, and accesses to response->status should be changed to use the new status parameter.
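As an illustration of the new status-first pattern, here is a minimal sketch. The enum values and struct below are simplified stand-ins, not the SDK's actual definitions; consult the v0.16.0 headers for the real types and signatures.

```c
#include <stdint.h>

/* Stand-in types for illustration only -- see the SDK headers for the
 * real definitions. The key change: callbacks receive a status and a
 * separate CoAP response code instead of a single response struct. */
enum golioth_status {
    GOLIOTH_OK = 0,
    GOLIOTH_ERR_TIMEOUT,
    GOLIOTH_ERR_QUEUE_FULL, /* request could not be sent */
};

struct golioth_coap_rsp_code {
    uint8_t code_class;  /* e.g. 2 in 2.05 */
    uint8_t code_detail; /* e.g. 5 in 2.05 */
};

/* Example callback body: granular handling based on status. Returns a
 * simple disposition code so the branches are easy to follow. */
static int on_get(enum golioth_status status,
                  const struct golioth_coap_rsp_code *coap_rsp_code)
{
    if (status != GOLIOTH_OK) {
        /* Request failed before any response arrived (e.g. client was
         * stopped, connection lost, or the request could not be sent),
         * so there is no CoAP response code to inspect. */
        return -1;
    }
    if (coap_rsp_code->code_class != 2) {
        return -2; /* server returned an error response */
    }
    return 0; /* success */
}
```

The important design point is that a non-OK status means the request never completed, so the CoAP response code should not be consulted in that branch.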

OTA Download Resume

golioth_ota_download_component() has a new uint32_t *next_block_idx parameter. This parameter can be used to specify an offset when downloading an artifact. This is particularly useful if an artifact download was interrupted and it is desirable to resume from the last successful transfer.

Set next_block_idx to NULL to retain the previous behavior in existing code.
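As a sketch of how a resume offset might be derived, the block index can be computed from the number of bytes already persisted before the interruption. The 1024-byte block size and the helper name below are our assumptions for illustration; use the block size of your actual transfer.

```c
#include <stdint.h>

/* Assumed block size for illustration; match the block size your
 * download actually used. */
#define BLOCK_SIZE 1024u

/* Hypothetical helper: given how many bytes of the artifact were fully
 * written to storage before the interruption, return the block index to
 * resume from (the first block that was not completely received). */
static uint32_t resume_block_idx(uint32_t bytes_persisted)
{
    return bytes_persisted / BLOCK_SIZE;
}
```

A partially received block is downloaded again, since integer division rounds down to the last complete block boundary.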

Zephyr and NCS Version Updates

This release adds support for Zephyr’s new major version release, v4.0.0. There is also corresponding support for v2.8.0 of the nRF Connect SDK (NCS).

What’s on the Horizon?

Recent updates in the Golioth Firmware SDK have focused on improved OTA stability, and that focus will continue into the coming releases. We are also excited about coming support for new Golioth device services, which will greatly expand the capabilities of devices communicating with the platform.

NAT is the Enemy of Low Power Devices

If you have ever tried communicating with a device on a private network, you may have encountered Network Address Translation (NAT). Fundamentally, when one device needs to send data to another device, it needs to know how to address it. On IP-based networks, devices are addressed using an IP address. Unfortunately, the number of connected devices has long outpaced the number of unique addresses in the IPv4 address space. Because of this, public IP addresses have to be shared between devices, which causes a few problems.

How to Share an IP Address

You probably accessed this blog post from a machine that does not have a public IP address. Rather, it has been assigned a private IP address on a network, perhaps via the Dynamic Host Configuration Protocol (DHCP), and it talks to a router that is responsible for sending data to and from the device. To access this post, your device first had to use the Domain Name System (DNS) to resolve blog.golioth.io to a public IP address, then had to send a request to that IP address for the content of this page.

NAT - Device to Cloud

When that request arrives at a router or some other intermediary, it knows where to deliver the request because the IP address of the server hosting blog.golioth.io is specified. It forwards the request along, and the server responds with the requested content. However, the server does not know that your device sent the request. The router has replaced the private IP address and port from your device with its own public IP address and port, and it has made an entry in a translation table noting that incoming data for that port should be directed to your device. The server sends the content back to the router, which replaces its own public IP address and port with your device’s IP address and port, then forwards it along. The content arrives at your device, appearing as though the server sent it directly to you. Meanwhile, the router is doing the same song and dance for many other devices, maintaining all of the mappings from its own IP address and ports to internal IP addresses and ports. This is known as Network Address Translation (NAT).
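The router's translation table can be sketched as a small lookup structure. This toy model (all names and sizes are ours, purely illustrative) captures the two operations described above: allocating a public port for outgoing traffic, and reverse-mapping incoming traffic back to a private address.

```c
#include <stdint.h>

/* One row of the NAT translation table: a device's private (address,
 * port) pair mapped to a public port on the router. */
struct nat_entry {
    uint32_t private_addr;
    uint16_t private_port;
    uint16_t public_port;
    int in_use;
};

#define NAT_TABLE_SIZE 16
static struct nat_entry table[NAT_TABLE_SIZE];
static uint16_t next_public_port = 40000;

/* Outgoing packet: reuse or create a mapping and return the public port
 * the server will see as the packet's source. Returns 0 if the table is
 * full. */
static uint16_t nat_outgoing(uint32_t priv_addr, uint16_t priv_port)
{
    for (int i = 0; i < NAT_TABLE_SIZE; i++) {
        if (table[i].in_use && table[i].private_addr == priv_addr &&
            table[i].private_port == priv_port) {
            return table[i].public_port;
        }
    }
    for (int i = 0; i < NAT_TABLE_SIZE; i++) {
        if (!table[i].in_use) {
            table[i] = (struct nat_entry){priv_addr, priv_port,
                                          next_public_port++, 1};
            return table[i].public_port;
        }
    }
    return 0;
}

/* Incoming packet: look up which device the public port maps to.
 * Returns 0 on success, -1 if no mapping exists (the packet would be
 * dropped -- exactly what happens to unsolicited inbound traffic). */
static int nat_incoming(uint16_t public_port, uint32_t *priv_addr,
                        uint16_t *priv_port)
{
    for (int i = 0; i < NAT_TABLE_SIZE; i++) {
        if (table[i].in_use && table[i].public_port == public_port) {
            *priv_addr = table[i].private_addr;
            *priv_port = table[i].private_port;
            return 0;
        }
    }
    return -1;
}
```

Note that a mapping only ever comes into existence via nat_outgoing(): the router cannot route inbound traffic for a device that has not recently sent something, which is the core problem discussed next.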

NAT - Cloud to Device

What Could Go Wrong?

This works great in simple request-response scenarios like fetching a blog post from a server with a public IP address. However, what if the server wants to say something to the device before the device talks to it? The server may know the public IP address of the router, but the router has no way of knowing which device the message is actually intended for. There is no entry in the NAT translation table until an outgoing message creates one. This also becomes a problem in peer-to-peer scenarios, where both devices are on a private network, making it such that neither device can talk to the other (this is solved using a public rendezvous point, such as a STUN server, but that’s a story for another post).

NAT - Cloud to Device, Cloud Initiates

Another problem is that routers don’t want to maintain mappings forever. At some point, if no outgoing messages have been observed, the entry will be removed from the translation table and any subsequent incoming traffic will be dropped. In many cases, this timeout is quite aggressive (e.g. 5 minutes or less). Typically this is resolved by sending “keep alive” messages, ensuring that entries are not removed and data can flow freely in both directions. For your laptop or a server in a data center, that might work fine. For highly constrained devices, it can quickly drain the battery or consume precious, limited bandwidth.

NAT - Cloud to Device Timeout

Maybe you decide that it’s okay for incoming traffic to be dropped after some period of time, as long as when you next contact the server you are able to re-establish a mapping and fetch any data that you need. Unfortunately, there is no guarantee that the router, or any other layer in the hierarchy of intermediaries performing NAT (it’s actually much more complicated, with Carrier-Grade NAT adding even more translation steps), will assign you the same public IP address and port. Therefore, when you try to continue talking to the server over a previously established session, it will not recognize you. This means you’ll have to re-establish the session, which typically involves expensive cryptographic operations and sending a handful of messages back and forth before actually delivering the data you were interested in sending originally.

NAT - Device to Cloud, New Session

The worst case scenario is that your device needs to send data somewhat frequently, but not frequently enough that NAT mappings are maintained. For example, if a device needs to send a tiny sensor reading every 30 minutes, and the NAT timeout is 5 minutes, it will either need to send a keep alive message every 5 minutes (that’s 5x the messages you actually need to send!), or it will need to re-establish the session every time it delivers a reading. In both cases, you are going to be using much more power than if you were just able to send your sensor reading alone every 30 minutes.
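The arithmetic behind that worst case can be captured in a small helper (the function name and model are ours, for illustration):

```c
/* Extra keep-alive messages needed between two data messages, given the
 * reporting interval and the NAT timeout (same time unit for both). The
 * data message itself refreshes the mapping, so only the gap between
 * reports needs keep-alives. */
static int extra_keepalives(int report_interval, int nat_timeout)
{
    if (report_interval <= nat_timeout) {
        return 0; /* data messages alone keep the mapping alive */
    }
    /* One message per nat_timeout window (rounding up), minus the slot
     * occupied by the data message itself. */
    return (report_interval + nat_timeout - 1) / nat_timeout - 1;
}
```

For a reading every 30 minutes against a 5-minute timeout this yields 5 keep-alives per reading, matching the 5x overhead described above.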

Solving the Problem

Unfortunately, the distributed nature of the internet means that we aren’t going to be able to address the issue by nicely asking carriers and ISPs to extend their NAT timeouts. However, we can make it such that being issued a new IP address and port doesn’t force us to re-establish a session.

More than a year ago, we announced support for DTLS 1.2 Connection IDs. DTLS provides a secure transport over UDP, which many devices, especially those that are power constrained, use to communicate with Golioth’s CoAP device APIs. Typically, DTLS sessions are established based on a “five tuple”: source address, source port, transport protocol, destination address, destination port. If any of these change, a handshake must be performed to establish a new session. To mitigate this overhead, a Connection ID can be negotiated during the initial handshake, and can be used in subsequent records to continue to associate messages even after changes in source IP or port.

NAT - DTLS Connection ID

Going back to our previous example of a device that sends a single sensor reading every 30 minutes, enabling Connection ID means that a new handshake does not have to be performed after a NAT timeout: the single message can be sent, and the device can go back to sleep. In fact, depending on how long the server is willing to store connection state, the device could sleep for much longer, sending once a day or even less frequently. This doesn’t solve the issue of cloud-to-device traffic being dropped after NAT timeout (check back for another post on that topic), but for many low power use cases, being able to immediately push data to devices is less important than being able to sleep for an extended period of time.
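The difference between five-tuple and Connection ID session lookup can be sketched as follows (types and names here are ours, for illustration only, not mbed TLS or SDK definitions):

```c
#include <stdint.h>
#include <string.h>

/* The "five tuple" a server traditionally uses to identify a session. */
struct five_tuple {
    uint32_t src_addr;
    uint16_t src_port;
    uint8_t  proto;      /* 17 = UDP */
    uint32_t dst_addr;
    uint16_t dst_port;
};

struct session {
    struct five_tuple tuple; /* as seen at handshake time */
    uint8_t cid[4];          /* Connection ID negotiated in the handshake */
};

/* Classic lookup: every field of the five tuple must match. A NAT
 * rebinding changes the source address/port and breaks the match. */
static int match_by_tuple(const struct session *s, const struct five_tuple *t)
{
    return s->tuple.src_addr == t->src_addr &&
           s->tuple.src_port == t->src_port &&
           s->tuple.proto == t->proto &&
           s->tuple.dst_addr == t->dst_addr &&
           s->tuple.dst_port == t->dst_port;
}

/* Connection ID lookup: the identifier travels inside each DTLS record,
 * so it is unaffected by changes to the packet's source address. */
static int match_by_cid(const struct session *s, const uint8_t cid[4])
{
    return memcmp(s->cid, cid, sizeof(s->cid)) == 0;
}
```

This is why a device using Connection IDs can be assigned a fresh public address and port after a NAT timeout and still continue its existing session without a new handshake.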

Configuring the Golioth Firmware SDK for Sleepy Devices

By default, the Golioth Firmware SDK will send keep alive messages to ensure that an entry is preserved in the NAT translation table. However, this functionality can be disabled by setting CONFIG_GOLIOTH_COAP_KEEPALIVE_INTERVAL_S to 0, or limited by setting it to a suitably large interval.

CONFIG_GOLIOTH_COAP_KEEPALIVE_INTERVAL_S=0

If using Zephyr, we’ll also need to set the receive timeout to a value greater than the interval at which we will be sending data. Otherwise, the client will attempt to reconnect after 30 seconds by default if it has not received any messages. In this example we’ll send data every 130 seconds, so setting the receive timeout to 200 ensures that we won’t attempt to reconnect between sending.

CONFIG_GOLIOTH_COAP_CLIENT_RX_TIMEOUT_SEC=200

To demonstrate the impact of NAT timeouts, we’ll initially build the hello example without enabling Connection IDs. To ensure that we wait long enough for a NAT timeout, we need to update the loop to send every 130 seconds instead of every 5 seconds.

This example is using a Hologram SIM and connecting via the AT&T network. NAT timeouts may vary from one carrier to another. AT&T currently documents UDP inactivity timeouts as 30 seconds.

while (true)
{
    LOG_INF("Sending hello! %d", counter);

    ++counter;
    k_sleep(K_SECONDS(130));
}

Building and flashing the hello sample on a Nordic Thingy91 results in the following behavior.

*** Booting nRF Connect SDK v2.7.0-5cb85570ca43 ***
*** Using Zephyr OS v3.6.99-100befc70c74 ***
[00:00:00.506,378] <dbg> hello_zephyr: main: start hello sample
[00:00:00.506,378] <inf> golioth_samples: Bringing up network interface
[00:00:00.506,408] <inf> golioth_samples: Waiting to obtain IP address
[00:00:13.236,877] <inf> lte_monitor: Network: Searching
[00:00:17.593,994] <inf> lte_monitor: Network: Registered (roaming)
[00:00:17.594,696] <inf> golioth_mbox: Mbox created, bufsize: 1232, num_items: 10, item_size: 112
[00:00:18.839,904] <inf> golioth_coap_client_zephyr: Golioth CoAP client connected
[00:00:18.840,118] <inf> hello_zephyr: Sending hello! 0
[00:00:18.840,179] <inf> hello_zephyr: Golioth client connected
[00:00:18.840,270] <inf> golioth_coap_client_zephyr: Entering CoAP I/O loop
[00:02:28.840,209] <inf> hello_zephyr: Sending hello! 1
[00:02:32.194,396] <wrn> golioth_coap_client: 1 resends in last 10 seconds
[00:02:46.252,868] <wrn> golioth_coap_client: 4 resends in last 10 seconds
[00:03:03.419,219] <wrn> golioth_coap_client: 3 resends in last 10 seconds
[00:03:04.986,389] <wrn> golioth_coap_client: Packet 0x2001e848 (reply 0x2001e890) was not replied to
[00:03:06.045,715] <wrn> golioth_coap_client: Packet 0x2001e638 (reply 0x2001e680) was not replied to
[00:03:15.213,592] <wrn> golioth_coap_client: 6 resends in last 10 seconds
[00:03:21.874,298] <wrn> golioth_coap_client: Packet 0x2001ec90 (reply 0x2001ecd8) was not replied to
[00:03:25.419,921] <wrn> golioth_coap_client: 5 resends in last 10 seconds
[00:03:36.565,765] <wrn> golioth_coap_client: 5 resends in last 10 seconds
[00:03:40.356,933] <wrn> golioth_coap_client_zephyr: Receive timeout
[00:03:40.356,964] <inf> golioth_coap_client_zephyr: Ending session
[00:03:40.356,994] <inf> hello_zephyr: Golioth client disconnected
[00:03:47.035,675] <inf> golioth_coap_client_zephyr: Golioth CoAP client connected
[00:03:47.035,705] <inf> hello_zephyr: Golioth client connected
[00:03:47.035,827] <inf> golioth_coap_client_zephyr: Entering CoAP I/O loop

After initially connecting and successfully sending Sending hello! 0, we are inactive for 130 seconds (00:18 to 02:28), then when we attempt to send Sending hello! 1, we see that the server never responds, eventually causing us to reach the Receive timeout and reconnect. This is because when we send Sending hello! 1, our entry has been removed from the NAT translation table, and when we are assigned a new public IP address and port the server is unable to associate messages with the existing DTLS session.

Because using Connection IDs does involve sending extra data in every message, it is disabled in the Golioth Firmware SDK by default. In scenarios such as this one where the few extra bytes clearly outweigh more frequent handshakes, Connection IDs can be enabled with CONFIG_GOLIOTH_USE_CONNECTION_ID.

CONFIG_GOLIOTH_USE_CONNECTION_ID=y

Now when we build and flash the hello example on a Thingy91, we can see our 130 second delay, but then the successful delivery of Sending hello! 1. 130 seconds later, we see another successful delivery of Sending hello! 2.

*** Booting nRF Connect SDK v2.7.0-5cb85570ca43 ***
*** Using Zephyr OS v3.6.99-100befc70c74 ***
[00:00:00.508,636] <dbg> hello_zephyr: main: start hello sample
[00:00:00.508,666] <inf> golioth_samples: Bringing up network interface
[00:00:00.508,666] <inf> golioth_samples: Waiting to obtain IP address
[00:00:13.220,001] <inf> lte_monitor: Network: Searching
[00:00:16.318,908] <inf> lte_monitor: Network: Registered (roaming)
[00:00:16.319,641] <inf> golioth_mbox: Mbox created, bufsize: 1232, num_items: 10, item_size: 112
[00:00:21.435,180] <inf> golioth_coap_client_zephyr: Golioth CoAP client connected
[00:00:21.435,394] <inf> hello_zephyr: Sending hello! 0
[00:00:21.435,424] <inf> hello_zephyr: Golioth client connected
[00:00:21.435,546] <inf> golioth_coap_client_zephyr: Entering CoAP I/O loop
[00:02:31.435,455] <inf> hello_zephyr: Sending hello! 1
[00:04:41.435,546] <inf> hello_zephyr: Sending hello! 2

Next Steps

To see how often your devices are being forced to reconnect to Golioth after periods of inactivity, check out our documentation on device connectivity metrics. Devices that effectively maintain long lasting connections will see a significant difference between their Session Established and Last Report timestamps. If you have any questions about optimizing your devices for low power, reach out to us on the forum!

Zephyr has all of the bells and whistles. Your project only needs a handful of them. But which handful? To be fair, you can build with every possible module in your local tree and only the necessary bits will be pulled in. But wouldn’t it be nice to know exactly which modules need to be added to a manifest allow list? Answer that question and your users won’t be stuck cloning tons of unnecessary files. That could save time on each build, which really adds up over the course of a project’s life.

The west meta-tool used by Zephyr includes a package management system based on manifest files, often called west.yml. Part of the power of this system is that manifest files may inherit other manifest files. The downside to this is that you may be cloning a large number of packages your project will never use. Limit this by using an allow-list in your manifest. But what packages do you need to add to your allow list?

There is no answer to this question

Let’s be up-front about this: there is no definitive answer to this question.

Your project needs to allow all of the modules it uses. Sometimes that means modules that are enabled for some builds and disabled for others. For instance, the Golioth Firmware SDK includes example apps that will build for Espressif, Nordic, and NXP processors. Each has its own HAL, but only one of them is used in any given build. You can’t really programmatically generate a modules list in a case like this; you just need to know these packages are needed, even if currently not in the build.

Even without an automated tool, I’ve had to answer this question for myself and I have some pointers on how to approach the problem.

The low-hanging fruit: check your build directory

The first thing you need to do is make sure your project builds without an allow-list. That way, all modules inherited from Zephyr or from NCS (Nordic’s Zephyr-based nRF Connect SDK) will be included in the build.

manifest:
  projects:
    - name: zephyr
      revision: v3.7.0
      url: https://github.com/zephyrproject-rtos/zephyr  
      west-commands: scripts/west-commands.yml
      import: true

This manifest will include dozens of modules available from the upstream Zephyr repository. There isn’t actually anything wrong with that. You clone the modules once and they live on your hard drive. But, it does take a long time to clone all of them and it will occupy several gigabytes of space. And it’s a good practice to know exactly which packages are actually in use. So let’s try to limit what is cloned in the future.

Directory listing with a few dozen Zephyr modules names shown

The build/modules directory from a Zephyr app

Above is a listing of the build/modules directory from a Zephyr application. All of these modules were scanned during the build process, but almost none of them have any object files that will be used in the build.

├── hal_rpi_pico 
│   ├── CMakeFiles 
│   └── cmake_install.cmake 
├── hal_silabs 
│   ├── CMakeFiles 
│   └── cmake_install.cmake 
├── hal_st 
│   ├── CMakeFiles 
│   └── cmake_install.cmake 
├── hal_telink 
│   ├── CMakeFiles 
│   └── cmake_install.cmake

In fact, we can use this to help us find the modules that are actually at work in a project. Here’s a one-liner you can run from the build/modules directory to get a list of modules we know are needed for this build:

➜ find . -type f -not -name "cmake_install.cmake" | cut -d/ -f2 | uniq
mbedtls
golioth-firmware-sdk
zcbor
hal_nxp

Let’s add these modules to an allow-list and move on to the next step.

manifest:
  projects:
    - name: zephyr
      revision: v3.7.0
      url: https://github.com/zephyrproject-rtos/zephyr
      west-commands: scripts/west-commands.yml
      import:
        name-allowlist:
          - mbedtls
          - zcbor
          - hal_nxp

The trial-and-error step

Okay, the easy part is behind us. Now it’s time to figure things out the hard way. Begin by removing your module sources. These are usually in a modules directory that is a sibling of the zephyr directory where the Zephyr tree is stored. Check carefully that you do not have any uncommitted changes in these modules before removing them from your local storage. (I’ve learned this the hard way.)

Next, add an allow-list with the modules we found in the previous section. Run west update to clone the modules. This should happen rather quickly as we’ve greatly narrowed down what will be checked out. Try to build your application. If it fails, we need to divine which module was missing from the build and add that to the allow-list.

warning: HAS_CMSIS_CORE (defined at modules/cmsis/Kconfig:7) has direct dependencies 0 with value n, but is currently being y-selected by the following symbols:
 - CPU_CORTEX_M (defined at arch/arm/core/Kconfig:6), with value y, direct dependencies ARM (value: y), and select condition ARM (value: y)

Part of the build error points to modules/cmsis. If you look in the west.yml from the Zephyr tree, you’ll see there is indeed a module named cmsis. We can add it to our allow-list, run `west update`, and then rebuild.

Guess what? That was it… the project now builds! Here’s what my entire manifest looks like:

manifest:
  projects:
    - name: zephyr
      revision: v3.7.0
      url: https://github.com/zephyrproject-rtos/zephyr
      west-commands: scripts/west-commands.yml
      import:
        name-allowlist:
          - mbedtls
          - zcbor
          - hal_nxp
          - cmsis

  self:
    path: modules/lib/golioth-firmware-sdk
    west-commands: scripts/west-commands.yml
    userdata:
      patches_dirs:
        - patches/west-zephyr

Note that golioth-firmware-sdk was one of the modules our search of the build directory turned up. But since that module is being added explicitly in this manifest file, it doesn’t need to be on the allow-list for the inherited Zephyr manifest.

Take control of your manifest with allow lists

Knowing exactly what libraries are being used in your build is part of good project management. Since manifest files let you target libraries and modules with version tags or commit hashes, this locks your project to a known-working state. I’m a huge advocate of this and gave an entire talk about Zephyr manifest files at the Embedded Open Source Summit.

Limiting your manifest files to libraries you are explicitly using helps you understand when upstream dependencies change. It may be a bit of a hassle to go through this process the first time, but doing so is a basic form of vetting your build and your product will be better for it.

Golioth Over-the-Air (OTA) Updates are most commonly used for single-image firmware upgrades. In that scenario, a device is notified about a new release. The notification includes a release manifest, which contains information about the new firmware. The most important metadata the device receives is the firmware version, hash, and URL (used to download the firmware).

Firmware is the only artifact tied to an OTA release in that scenario, but a Golioth OTA release may also include multiple artifacts. This allows you to implement multi-image upgrades, e.g. when there are multiple MCUs on a single device. Golioth OTA even supports artifacts that are not firmware at all, but large blobs of data of any kind, such as AI models, images, and arbitrary binary files.

Device with a display

This article shows an example application running on a device with a display. The implemented functionality is simple: displaying an arbitrary image. Because we would like to add more capabilities in the future, firmware upgrade is implemented as well. Additionally, we would like to change the displayed image without upgrading the whole firmware.

Multi-component OTA

The Golioth SDK exposes a high-level API to easily set up single-image firmware upgrades (golioth_fw_update_init()). It automatically creates a thread that observes the latest firmware release, downloads the new firmware when notified, and reboots to run the new version.

In the case of multi-component releases we will handle the manifest in application code. Let’s first implement a callback that gets executed when a new release is available:

struct ota_observe_data
{
    struct golioth_ota_manifest manifest;
    struct k_sem manifest_received;
};

static void on_ota_manifest(struct golioth_client *client,
                            const struct golioth_response *response,
                            const char *path,
                            const uint8_t *payload,
                            size_t payload_size,
                            void *arg)
{
    struct ota_observe_data *data = arg;

    LOG_INF("Manifest received");

    if (response->status != GOLIOTH_OK)
    {
        return;
    }

    LOG_HEXDUMP_INF(payload, payload_size, "Received OTA manifest");

    enum golioth_ota_state state = golioth_ota_get_state();
    if (state == GOLIOTH_OTA_STATE_DOWNLOADING)
    {
        LOG_WRN("Ignoring manifest while download in progress");
        return;
    }

    enum golioth_status status =
        golioth_ota_payload_as_manifest(payload, payload_size, &data->manifest);
    if (status != GOLIOTH_OK)
    {
        LOG_ERR("Failed to parse manifest: %s", golioth_status_to_str(status));
        return;
    }

    if (data->manifest.num_components > 0) {
        k_sem_give(&data->manifest_received);
    }
}

The above code checks that the release manifest was received correctly and that an OTA update is not already in progress. Then the CBOR-encoded manifest is decoded with golioth_ota_payload_as_manifest(). If the manifest is valid and contains at least one component, the main application thread is notified by releasing a semaphore with k_sem_give(&data->manifest_received).

Now it is time to start manifest observation in main() and wait until a release manifest is received:

int main(void)
{
    struct ota_observe_data ota_observe_data = {};

    /* ... */

    golioth_ota_observe_manifest_async(client, on_ota_manifest, &ota_observe_data);

    k_sem_take(&ota_observe_data.manifest_received, K_FOREVER);

    /* ... */
}

At this point, the application continues execution after the manifest is successfully received and parsed. The next step is handling the received components:

int main(void)
{
    /* ... */

    for (size_t i = 0; i < ota_observe_data.manifest.num_components; i++) {
        struct golioth_ota_component *component = &ota_observe_data.manifest.components[i];
        uint8_t hash_bin[32];

        hex2bin(component->hash, strlen(component->hash), hash_bin, sizeof(hash_bin));

        struct component_desc *desc = component_by_name(component->package);
        if (!desc) {
            LOG_WRN("Unknown '%s' artifact package", component->package);
            continue;
        }

        if (desc->version ?
            (component_version_cmp(desc, component->version) == 0) :
            (component_hash_cmp(desc, hash_bin) == 0)) {
            continue;
        }

        LOG_INF("Updating %s package", component->package);

        status = golioth_ota_download_component(client, component, desc->write_block, NULL);
        if (status == GOLIOTH_OK) {
            reboot = true;
        }
    }

    /* ... */
}

Information about each component is stored in the ota_observe_data.manifest.components[] array. For each component, either the version or the hash is compared with the locally stored value. When it differs, the new component is downloaded with the golioth_ota_download_component() API.

Firmware and background components require different handling. This is achieved with the component_descs[] array and a few helper functions:

struct component_desc
{
    const char *name;
    const char *version;
    uint8_t hash[32];
    ota_component_block_write_cb write_block;
};

static struct component_desc component_descs[] = {
    { .name = "background", .write_block = write_to_storage },
    { .name = "main", .write_block = write_fw, .version = _current_version },
};

static int component_hash_update(struct component_desc *desc, uint8_t hash[32])
{
    memcpy(desc->hash, hash, 32);

    return 0;
}

static int component_hash_cmp(struct component_desc *desc, const uint8_t hash[32])
{
    return memcmp(desc->hash, hash, 32);
}

static int component_version_cmp(struct component_desc *desc, const char *version)
{
    return strcmp(desc->version, version);
}

static struct component_desc *component_by_name(const char *name)
{
    for (size_t i = 0; i < ARRAY_SIZE(component_descs); i++) {
        struct component_desc *desc = &component_descs[i];

        if (strcmp(desc->name, name) == 0) {
            return desc;
        }
    }

    return NULL;
}

Downloaded firmware is written directly to flash, into the second application slot:

static struct flash_img_context flash;

enum golioth_status write_fw(const struct golioth_ota_component *component,
                             uint32_t block_idx,
                             uint8_t *block_buffer,
                             size_t block_size,
                             bool is_last,
                             void *arg)
{
    const char *filename = component->package;
    int err;

    LOG_INF("Writing %s block idx %u", filename, (unsigned int) block_idx);

    if (block_idx == 0) {
        err = flash_img_prepare(&flash);
        if (err) {
            return GOLIOTH_ERR_FAIL;
        }
    }

    err = flash_img_buffered_write(&flash, block_buffer, block_size, is_last);
    if (err) {
        LOG_ERR("Failed to write to flash: %d", err);
        return GOLIOTH_ERR_FAIL;
    }

    if (is_last) {
        LOG_INF("Requesting upgrade");

        err = boot_request_upgrade(BOOT_UPGRADE_TEST);
        if (err) {
            LOG_ERR("Failed to request upgrade: %d", err);
            return GOLIOTH_ERR_FAIL;
        }
    }

    return GOLIOTH_OK;
}

The background image is written to the file system using the write_to_storage() callback:

enum golioth_status write_to_storage(const struct golioth_ota_component *component,
                                     uint32_t block_idx,
                                     uint8_t *block_buffer,
                                     size_t block_size,
                                     bool is_last,
                                     void *arg)
{
    const char *filename = component->package;
    struct fs_file_t fp = {};
    fs_mode_t flags = FS_O_CREATE | FS_O_WRITE;
    char path[32];
    int err;
    ssize_t ret;

    LOG_INF("Writing %s block idx %u", filename, (unsigned int) block_idx);

    if (block_idx == 0) {
        flags |= FS_O_TRUNC;
    }

    snprintf(path, sizeof(path), "/storage/%s", filename);

    err = fs_open(&fp, path, flags);
    if (err) {
        LOG_ERR("Failed to open %s: %d", filename, err);

        return GOLIOTH_ERR_FAIL;
    }

    err = fs_seek(&fp, block_idx * CONFIG_GOLIOTH_BLOCKWISE_DOWNLOAD_BUFFER_SIZE, FS_SEEK_SET);
    if (err) {
        goto fp_close;
    }

    ret = fs_write(&fp, block_buffer, block_size);
    if (ret < 0) {
        err = ret;
        goto fp_close;
    }

fp_close:
    fs_close(&fp);

    if (err) {
        return GOLIOTH_ERR_FAIL;
    }

    return GOLIOTH_OK;
}

Displaying (updated) background

Firmware is updated automatically during the next boot, so nothing more is needed to start using it. The background image, on the other hand, needs to be loaded from the file system in application code:

static lv_img_dsc_t img_background;

static int background_show(void)
{
    uint8_t hash[32] = {};
    struct fs_dirent dirent;
    struct fs_file_t background_fp = {};
    lv_img_header_t *img_header;
    uint8_t *buffer;
    int err;
    ssize_t ret;

    err = fs_stat("/storage/background", &dirent);
    if (err) {
        if (err == -ENOENT) {
            LOG_WRN("No background image found on FS");
        } else {
            LOG_ERR("Failed to check/stat background image: %d", err);
        }

        return err;
    }

    LOG_INF("Background image file size: %zu", dirent.size);

    buffer = malloc(dirent.size);
    if (!buffer) {
        LOG_ERR("Failed to allocate memory");
        return -ENOMEM;
    }

    err = fs_open(&background_fp, "/storage/background", FS_O_READ);
    if (err) {
        LOG_WRN("Failed to load background: %d", err);
        goto buffer_free;
    }

    ret = fs_read(&background_fp, buffer, dirent.size);
    if (ret < 0) {
        LOG_ERR("Failed to read: %zd", ret);
        err = ret;
        goto background_close;
    }

    if (ret != dirent.size) {
        LOG_ERR("ret (%d) != dirent.size (%d)", (int) ret, (int) dirent.size);
        err = -EIO;
        goto background_close;
    }

    err = mbedtls_sha256(buffer, dirent.size, hash, 0);
    if (err) {
        LOG_ERR("Failed to get update sha256: %d", err);
        goto background_close;
    }

    LOG_HEXDUMP_INF(hash, sizeof(hash), "hash");

    component_hash_update(&component_descs[0] /* background */, hash);

    img_header = (void *)buffer;
    img_background.header = *img_header;
    img_background.data_size = dirent.size - sizeof(*img_header);
    img_background.data = &buffer[sizeof(*img_header)];

    lv_obj_t * background = lv_img_create(lv_scr_act());
    lv_img_set_src(background, &img_background);
    lv_obj_align(background, LV_ALIGN_CENTER, 0, 0);

background_close:
    fs_close(&background_fp);

buffer_free:
    free(buffer);

    return err;
}

Note that besides loading the background image, there is also a SHA-256 calculation using mbedtls_sha256(). The result is compared with the SHA-256 hash received from the OTA service to decide whether the background image needs to be updated.

Testing with native_sim

Round display with a black bezel around a white image with the Golioth Echo mascot at the center. A USB cable is plugged into the device on the right side of the screen.

XIAO ESP32S3 with Seeed Studio XIAO Round Display

The example-download-photo application is compatible with the XIAO ESP32S3 with Seeed Studio XIAO Round Display. However, it is possible to test with the Native Simulator as well. To do so, use the following commands:

# Build the example
west build -p -b native_sim/native/64 $(west topdir)/example-download-photo

# Run the example
west build -t run

The Native Simulator uses the SDL library to emulate a display. On the first run the display is blank because no background image is available. Now it is time to upload a background image as an OTA artifact and create a release. An example background image is included in the repository in background/Echo-Pose-Stand.bin. After rolling out an OTA release, this image is automatically downloaded to the /storage/background file on the device, as indicated by the following logs:

[00:00:01.310,007] <inf> example_download_photo: Received OTA manifest
...
[00:00:01.310,007] <inf> example_download_photo: component 0: package=background version=1.0.5 uri=/.u/c/[email protected] hash=6b4d243a362c0c4f63c535b2d2f7b8dfe4bcfbca69e7b2f8009f917458794c5e size=35716
[00:00:01.310,007] <inf> example_download_photo: Updating background package
[00:00:01.560,008] <inf> example_download_photo: Writing background block idx 0
[00:00:01.700,009] <inf> example_download_photo: Writing background block idx 1
...
[00:00:06.320,042] <inf> example_download_photo: Writing background block idx 34

Starting Native Simulator again shows the following screen:

We’re on a roll with our showcase of examples that came out of Golioth’s AI Summer. Today I’m discussing an example that records audio on an IoT device and uploads the audio to the cloud.

Why is this useful? This example is a great approach to sending sensor data from your edge devices back to the cloud to use in a machine learning (ML) training set, or just a great way to collect data samples from your network. Either way, we’ve designed Golioth to efficiently handle data transfer between constrained devices and the cloud.

The full example code is open source and ready to use.

Overview

The bones of this example are the same as the image upload example we showcased earlier. The main components include:

  1. An audio sample (or any other chunk of data you’d like to send).
  2. A callback function to fill a buffer with blocks from your data source.
  3. A function call to kick off the data upload.

That’s it for the device side of things. Sending large chunks of data is a snap for your firmware efforts.

The cloud side is very simple too, using Golioth Pipelines to route the data as desired. Today we’ll send the audio files to an Amazon S3 bucket.

1. An Audio Sample

The details of audio recording are not important for this example. WAV, MP3, FLAC…it’s all just 1’s and 0’s at the end of the day! The audio is stored in a buffer and all we need to know is the address of that buffer and its length.

If you really want to know more, this code is built to run on one of two M5 Stack boards: the Core2 or the CoreS3. Both have a built-in I2S microphone and an SD card slot that is used to store the recording. SD card storage is a great choice for prototyping because you can easily pop out the card and access the file on your computer to confirm that what you uploaded is identical. Full details are found in the audio.c file.

2. Callback function

To use block upload with Golioth, you need to supply a callback function to fill the data buffer. The Golioth Firmware SDK will call this function when preparing to send each block.

uint8_t audio_data[MAX_BUF_SIZE];
size_t audio_data_len;

/* Run some function to record data to buffer and set the length variable */

static enum golioth_status block_upload_read_chunk(uint32_t block_idx,
                                                   uint8_t *block_buffer,
                                                   size_t *block_size,
                                                   bool *is_last,
                                                   void *arg)
{
    size_t bu_offset = block_idx * bu_max_block_size;
    size_t bu_size = audio_data_len - bu_offset;
    if (bu_size <= *block_size)
    {
        /* We run out of data to send after this block; mark as last block */
        *block_size = bu_size;
        *is_last = true;
    }
    /* Copy data to the block buffer */
    memcpy(block_buffer, audio_data + bu_offset, *block_size);
    return GOLIOTH_OK;
}

The above code is a very basic version of a callback. It assumes you have a global buffer audio_data[] where recorded audio is stored, and a variable audio_data_len to track the size of the data stored there. Each time the callback runs, it reads from a different part of the source buffer by calculating the offset from the block index and the maximum block size. The callback signals the final block by setting *is_last to true and updating *block_size to indicate the actual number of bytes in that final block.

You can see the full callback function in the example app; it includes full error checking and uses the standard library file handling APIs, with a pointer to the file on the SD card passed into the callback as the user argument.

3. API call to begin upload

Now we start the upload by using the Stream API call, part of the Golioth Firmware SDK. Just provide the important details for your data source and the path to use when uploading.

int err = golioth_stream_set_blockwise_sync(client,
                                            "file_upload",
                                            GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                            block_upload_read_chunk,
                                            NULL);

This API call includes four required parameters shown above:

  • client is a pointer to the Golioth client that holds info like credentials and server address
  • "file_upload" is the path at which the file should be uploaded (change this at will)
  • GOLIOTH_CONTENT_TYPE_OCTET_STREAM is the data type (binary in this case)
  • block_upload_read_chunk is the callback we wrote in the previous step

The final parameter is a user argument. In the audio sample app we use this to pass a pointer to read data from the file on the SD card.

Routing your data

The example includes a Golioth pipeline for routing your data.

filter:
  path: "/file_upload*"
  content_type: application/octet-stream
steps:
  - name: step0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: golioth-pipelines-test
        access_key: $AWS_S3_ACCESS_KEY
        access_secret: $AWS_S3_ACCESS_SECRET
        region: us-east-1

You can see the path in the pipeline matches the path we used in the API call of the previous step. This instructs Golioth to listen for binary data (octet-stream) on that path, and when found, route it to an Amazon S3 bucket. Once enabled, your audio file will automatically appear in your S3 bucket!

IoT data transfer shouldn’t be difficult

That’s worth saying twice: IoT data transfer shouldn’t be difficult. In fact, nothing in IoT should be difficult. And that’s why Golioth is here. It’s our mission to connect your fleet to the cloud, and make accessing, controlling, updating, and maintaining your fleet a great experience from day one. Take Golioth for a test drive now!

One of my favorite engineering processes at Golioth is our architecture design review. When building new systems, making consequential changes to existing systems, or selecting a third-party vendor, an individual on the engineering team authors an architecture design document using a predefined template. This process has been in place long enough (more than 18 months) that we have started to observe long-term benefits.

Some of the benefits are fairly obvious: more efficient implementation of large-scale functionality, better communication across engineering domains, smoother context sharing during new engineer on-boarding. Others are more subtle. One aspect of codifying a decision making process that I personally find extremely valuable is the ability to check the pulse of an organization over time. How thorough are design documents? How robust is the feedback provided? Are individuals providing push back regardless of any organizational hierarchy? Are discussions reaching resolution in an appropriate manner? Many of these questions center on how disagreements are resolved.

Disagreement is one of my favorite aspects of the engineering process. When done correctly, it drives a team towards an optimal solution, builds a stronger sense of trust between individuals, and results in more comprehensive exploration and documentation of a problem space. Through healthy disagreement, the Golioth engineering team typically arrives at one of three possible outcomes.

  1. Consensus is reached around one of the presented solutions.
  2. Consensus is reached around a new solution that incorporates aspects of each of the presented solutions.
  3. It is determined that more information is needed, or the decision does not have to be made to move forward.

However, reaching one of these outcomes does not necessarily mean that the process was effective. One failure mode is reaching perceived consensus around one solution, when in reality one individual doesn’t feel comfortable pushing back against the other. Another is abdicating responsibility by deferring a decision that actually does need to be made now. In the moment, it is not always clear whether the process is effective, but the beauty of codifying the interaction is that it can be evaluated in the future with the benefit of hindsight.

This week I opened up the review window for a design document I recently authored, and within 24 hours I had received high quality feedback from multiple members of the engineering organization. Furthermore, there were some key points of disagreement included in the feedback, which we resolved efficiently, with outcomes ranging from reaching consensus on a counter proposal to deferring a portion of the system to a future design document.

Compared to the early days of instituting the review process, more recent architecture design documents have involved more disagreement, but also more efficient resolution. While excess conflict can sow seeds of division, a mature engineering organization will turn differences of opinion into progress. Tackling any complex problem will involve some disagreement — for a strong team it will be the right amount.

It has been over three months since we announced Golioth Pipelines, and we have already seen many users reduce costs and unlock new use cases by migrating to Pipelines. As part of the announcement, we provided options for users who were currently leveraging Output Streams, which offered a much more constrained version of the same functionality, to seamlessly migrate their existing projects to Pipelines. Today, we are announcing December 12th, 2024 as the official end of life date for Output Streams.

For users operating projects that either started out with Pipelines, or have been transitioned to Pipelines as part of the opt-in migration process, there will be no change required. For the few projects that are still leveraging Output Streams, we encourage users to start the migration process now by submitting project information here, or to reach out to us at [email protected] with any questions or concerns. On December 12th, all projects that have not already been migrated to Pipelines will be automatically migrated with Pipelines configured to replicate the previous behavior of Output Streams. Output Stream configuration will no longer be accessible in the Golioth console.

The rapid adoption of Pipelines by the Golioth community has been exciting to witness, and we are looking forward to the ongoing growth of the platform via new Transformers and Destinations. If you are currently using Pipelines, or would like to see new functionality added, contact us on the forum!

Golioth is excited to announce Golioth Solutions. Two new capabilities will help businesses deploy IoT devices in a short period of time:

  • Golioth Solutions Services
  • Golioth Solutions Marketplace

A New Service Offering

Golioth Solutions Services solves many of the difficult problems at the beginning of developing an IoT product, namely pushing a fully formed idea out into the world. Golioth Solutions Engineers will help to identify how companies can best deploy the Golioth platform to solve their business needs and deliver a product that captures real-world data and provides consistent insight and control of your devices.

Our Solutions Engineers will work with you to formulate what is required for your particular business use case and what kind of solution will get you there fastest. This includes hardware, firmware, cloud capabilities, fleet management, and application development. Our Solutions Engineers fill in the gaps where your team needs help. Perhaps you are a cloud software company looking to deploy a hardware device? Solutions Engineers will utilize existing hardware and firmware Solutions to send data up to Golioth and out to the platform of your choosing using Pipelines. What if you’re on the other end of the spectrum: a hardware company looking to connect custom hardware to the cloud? Our Solutions Engineers can set you up with known working hardware and firmware that you can use as a reference while you develop your own custom hardware, and they will consult on how data should be hitting the cloud and routing to outside services.

A Marketplace of Solutions

We are also launching the Golioth Solutions Marketplace, where customers can view existing solutions. These serve as starting points for many of the custom projects that Golioth Solutions Services will deliver.

In order to deliver IoT solutions in a short amount of time, we want our Solutions Engineers to have an arsenal of ready-made designs that can be customized to customers’ needs. This will include our internal Reference Designs as well as designs from Partners. We will continue to add to these designs and highlight them here on the blog when a new one becomes available.

Designs from our Partners

The Golioth Solutions Marketplace includes production-grade hardware produced by our Design Partners. Each Solution also includes custom firmware and cloud capabilities targeted at a particular solution space and vertical application. Each of these designs is built on the Golioth platform and is customizable to specific business needs.

Many of these designs can also be repurposed towards a different vertical, based on the capability contained within the Solution. Our Solutions Engineers know how each of these technologies might fit a new, custom application. Since these solutions are developed by our Design Partners, the same creators of the hardware can also enhance and customize the product to your needs. As customers decide to scale, our Design Partners are well prepared to guide customers through production and productization.

Are you a product development shop interested in having your hardware listed in our Solutions Marketplace? Submit your designs now to start the process.

Introducing the Glassboard Shunt

One of our first Solutions comes from our design partner Glassboard, based out of Indianapolis in the US. The IoT Power Monitoring for Micromobility Solution includes a cellular-connected current shunt. This design is intended to measure battery currents on small vehicles. It works in both directions: measuring current sourced to the motors during motion, as well as charging current flowing back into the battery. We recorded a video about this design and how it fits in with Golioth Solutions:

While this is initially targeted at micromobility applications, it’s easy to imagine how this device and starter firmware could be retargeted at a different vertical. One example could be monitoring a DC power source that powers LED lighting for a construction application.

How Golioth Solutions Engineers use designs

Solutions Engineers take input from customers and determine if any of our existing designs (like the Glassboard current shunt) are a good fit for the application at hand. Perhaps there is a new DC current measurement that could benefit from the existing hardware, but it needs to be tweaked to better fit the application space. Our Solutions Engineers first modify and test the firmware to fit the device needs, and then work with the customer to determine where the resulting data will go, and if there are additional needs around visualization or control of the fleet of devices. If the hardware requires some kind of modification, our Solutions Engineers will connect customers with the original designers to discuss the logistics of creating a custom version of the existing hardware.

Golioth Reference Designs

Another source of Golioth Solutions includes our range of Reference Designs, which can be customized and delivered by Golioth Solutions Services. We have been working on and refining Reference Designs for a few years now. These are end-to-end demonstrations of Golioth, built on top of custom hardware.

What about licensing? All Golioth Reference Design hardware is open source with a very permissive license. Customers can take the underlying hardware to one of our Design Partners and have them modify, extend, and refine it for production. You will be starting from a solution that is continually tested and can be easily extended using off-the-shelf sensor breakouts. Our Solutions Engineers can get you started even more quickly using the Follow Along Hardware version of these designs, which includes firmware that targets off-the-shelf development boards and sensors, so no custom hardware is required.

New Services + New Marketplace = Quicker time to market

Golioth Solutions and our associated marketplace exists to help users that need an IoT solution for their business, but don’t necessarily have the time or capabilities to build it themselves. We can bootstrap your solution from a sketch on a page to a working IoT device backed by a powerful IoT platform that handles all your data. Once the idea is proven out, we have a well-defined handoff to our Design Partners who can assist building that first device into a fleet of production-ready hardware that you can deploy to the field. You will be prototyping and testing using an IoT platform that is built for scale.

If you’d like to start building an IoT Solution that will serve your business, please get in touch! You can email [email protected] to find out more or fill out this form to directly schedule a meeting.