We are getting ready for trade-show season here at Golioth. We will be at the Zephyr Developer Summit next week and Embedded World later in the month. We’re excited to be able to show off some of our capabilities live and in person!

Showcasing data in a cogent way is a key component to IoT deployments. As a hardware developer, the “atoms” (hardware implementation) are the fun part to work on, but the “bits” (data implementation) are how you get the message across. This isn’t just for trade shows. Your clients, co-workers, and customers are all going to want to see what kind of useful data is being generated by your IoT system.

Today, let’s dig into how you might want to showcase “real-time” versus “historic” data. We are using Grafana for our data dashboard, and there are some interesting differences in how you configure and show the data to users.

Configuration Differences

We have written about and showcased Grafana before. It’s an “observability platform” that helps chart data. Normally it targets monitoring infrastructure like servers and cloud implementations, but it’s also possible to use Grafana for IoT systems. We like it because there is a hosted cloud version (which we use for this demo), and it can also be deployed locally in a container.

We are interacting with Golioth data in two different ways: using WebSockets for Real-time data and using our REST API for Historical data. For the former, we actually wrote and published a WebSockets plugin for Grafana. This allows us to monitor when there are changes to variables, both on LightDB State and LightDB Stream. For more information on getting this set up, Mike has written an in-depth tutorial on how to hook up a sensor and get it transmitting data using WebSockets.

In my case, I’m monitoring temperature data on an OpenThread demo that we are showcasing.  There are 3 different devices transmitting their sensor data back to Golioth. Once that data is in the Golioth cloud, I have two different “data sources” set up in Grafana to extract that data for later viewing.

Real-time data via WebSockets plugin:

Historical data via JSON API plugin:

Why the data source difference?

Now that we have covered some of “how”, let’s talk about the “why” of showing real-time vs historical data. Can’t we just use the same data source for both?

In short, no. Or you could, but it will make things difficult.

The key thing to think about is how much data we’re trying to show for each. In the image below, we’re showing the most recent value and when it was updated in the two left panes. This is the “real-time” component. All you need to gather is the latest data point.

For the “historic” chart, we actually want to go backwards in time and fetch a range of values. The line chart’s legend also shows “last” value for each of the 3 sensors, but note that they don’t match the left side dials. That’s because the line chart shows the “last” reading from when the chart was fetched, which happens every 15 minutes. The dials themselves display the most recent reading, live streamed to the dashboard.

In the data source configuration screenshots, note that the two endpoints on Golioth are different. In the case of the JSON API plugin talking to the REST API, we are actually calling out the project name. When we do a POST to the LightDB Stream endpoint (/stream), we tell the API which time range we want to fetch. This allows us to gather all data points within a pre-defined range. This is the basis of the “historical” measurement.

If we were trying to gather the same data with the WebSocket (charting data on a time-series graph), we would only be able to see the data on the graph for as long as we were capturing output from the WebSocket. For instance, if we reload the page, the chart would no longer have any data points on it until we start capturing newly generated data points. There is no capability to pull historic data from the WebSocket endpoint.

The JSON API data source has an additional element that is helpful for charting time-series data on a graph: it allows grouping data by one of the columns, in our case the device name. Below I have also added a label to Sensor 1 and Sensor 2, with the last sensor not yet labeled.

This is something that could be implemented in the WebSocket data source in the future, but it doesn’t do that currently. Instead, on the “dial” and “stat” panels shown above, we ingest all data coming from the WebSocket (representing all LightDB Stream data from our project, from all devices) and then filter that data to only show the specific device we want represented in the visualization:

Setting up your next project

There are a wide variety of ways you can set up your projects to showcase the data being generated by devices in the field. It’s good to start by asking yourself: what do you want out of the data?

Do you want to be alerted to the most recent data point? You might want something like the dial shown above, including mapping colors around expected values. You can also set up alerts in Grafana to trigger as you move past certain thresholds.

Are you looking to see how things are changing over time? That might make more sense to target a “historical” view, like a line chart pulling from the Golioth REST API. This allows you to understand what your device has experienced in the past and potentially take action based on one-time events, such as a spike in data.

Or maybe you care more about how things are trending? You can always gather multiple data points and then apply statistical methods to them. This might help smooth out spikes in data and only show the median or mean of data over a defined time period. Applying statistical methods could work for either the real-time or historical data, but the real-time approach carries the same caveat as above: you only capture the data you are actively observing via the WebSocket endpoint.

However you decide to slice and dice your data, it’s important to understand the capabilities of the outputs from the Golioth platform and your visualization platform. We will continue to publish about our own dashboards and other ways we find to help our users visualize their data.

Photo by Brett Sayles from Pexels

One small step for debug

If you are getting started in Zephyr, you can get a lot done using the serial/terminal output. I normally refer to this as “printf debugging”. With Zephyr it would be “printk debugging” because of the difference in commands to print to the serial output (or to a remote logging service like Golioth). Honestly, this method works great for example code, including many of our tutorials.

In addition to Zephyr’s ad hoc nature as a package management platform for embedded software, it is also a Real Time Operating System (RTOS). We use Zephyr’s package management as a starting point: we want users to be able to bootstrap a solution by downloading toolchains and vendor libraries. We don’t dig into the operating system very often on this blog or in our tutorials. Much like printk debugging, the details aren’t really needed when getting started with Zephyr and Golioth. But when you begin to dig deeper, you will be kicking off your own threads and workers, and utilizing other features of the RTOS like semaphores and queues. Once you’re doing that, I can all but guarantee you’ll want more visibility into what’s happening in your system.

This article showcases SEGGER Ozone and SystemView tools, which will help you peek inside. It also adds a few pieces to getting started with these platforms that I found lacking when searching for answers on the broader internet.

Saving battery budgets

My motivator to dig deeper on these systems is getting ready for the upcoming conference season. We will be at the Zephyr Developer Summit and Embedded World representing Golioth. We want to showcase our technology, including our capabilities as a Device Management solution for Thread-based device networks. Our demo of Zephyr, OpenThread, and Golioth runs on a battery-based device, which isn’t something we normally do. Most of our demos expect you’ll be powering your platform using a USB cable. When you start to care about power draw, you start to care about where your program is spending its time. Understanding whether a device is in sleep mode and how long it spends processing a piece of data is critical to optimizing for battery life. Since I don’t want to lug along an entire suitcase of batteries with me to the conference, I started wondering where we’re hanging out in the various threads of Zephyr. This is where a debugger and a real-time process recorder come in.

Tooling up for debugging

So I know I need a debugger.

My experience as a hardware engineer is that silicon vendors normally have a dedicated path for their code examples, Real Time Operating Systems (RTOSes), and Integrated Development Environments (IDEs). If I’m being honest, that was the path I took in the past: it was a low friction way to get something blinking or talking back to the network. Going outside of that path to use Zephyr means the tooling is more DIY. Even some vendors that provide support for Zephyr as their primary or secondary solution don’t have a “one way” of doing things. The fact that Zephyr is flexible is both a blessing and a curse. I can implement anything I’d like! But I need to go figure it out.

SEGGER Ozone

SEGGER are the makers of the popular J-Link programmer, in addition to a wide suite of software tools for embedded developers. I was interested in Ozone because of the open nature of their debugger, completely decoupled from any IDE or vendor toolchain.

I was excited to see a webinar from our friends at NXP talking about using SEGGER Ozone with Zephyr (registration required). The webinar showcases using a Zephyr sample called “Synchronize”, which is available at <zephyr_install_directory>/zephyr/samples/synchronization.

The basic idea of the sample is that you are sharing a semaphore between two threads. It’s like passing a ball back and forth. Once the loop for one thread runs, it releases the semaphore and the other thread can pick it up and use it. Each thread is effectively running the same code; it just only does so when it is holding the semaphore.

the main function of main.c on the synchronize sample (with a small modification)

The net result is that you can see the threads ping-pong-ing back and forth on a debugger. See the NXP link above for a video example of this in action.
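
If you don’t have the sample in front of you, each thread’s loop boils down to something like this. This is a simplified sketch of the pattern; the upstream main.c in samples/synchronization adds CPU reporting and slightly different timing.

#include <zephyr.h>
#include <sys/printk.h>

/* Thread A starts out holding the "ball" */
K_SEM_DEFINE(thread_a_sem, 1, 1);
K_SEM_DEFINE(thread_b_sem, 0, 1);

/* Both threads run this same loop, each with its own semaphore pair */
void hello_loop(const char *my_name, struct k_sem *my_sem, struct k_sem *other_sem)
{
    while (1) {
        /* Block until the other thread hands over the semaphore */
        k_sem_take(my_sem, K_FOREVER);

        printk("%s: Hello World from %s!\n", my_name, CONFIG_BOARD);

        /* Wait a while, then let the other thread have a turn */
        k_msleep(500);
        k_sem_give(other_sem);
    }
}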

Using Ozone with Zephyr

One thing that wasn’t clear to me from the NXP webinar is getting everything set up. This was the genesis of this article. I wanted to put the required steps in one place.

Requirements:

  • J-Link programmer
  • Development board with SWD or JTAG access
  • Compatible chipset/board in the Zephyr ecosystem (we will be showing the mimxrt1060_evkb below)
  • SEGGER Ozone installed on your machine. You can download and install the program from this page.

Step 1: Compile the program

The first step is to compile the project at <zephyr_install_directory>/zephyr/samples/synchronization with some added settings in the prj.conf file.

You will need to have the Zephyr toolchains installed. For our example, I will be compiling for the NXP RT1060 EVKB board, which means I need to include the NXP Hardware Abstraction Layer (HAL). If you’re a regular Golioth user, this is not installed by default (but will be soon). Instead, I recommend you install Zephyr directly from the tip of main or start from a “vanilla” Zephyr install already on your machine. Start a virtual environment if you have one (or prefer one) and then run the following:

mkdir ~/RTOS_test
cd ~/RTOS_test
west init
west update

This performs a full default Zephyr install and will take a bit to download. We’re showing this for the RT1060 board, but it should work on almost any board in the main Zephyr tree, including virtual devices.

cd ~/RTOS_test/zephyr/
nano samples/synchronize/prj.conf

Add the following to the sample code, if it’s not already there. This will allow Ozone to understand some of the threads in the program.

# enable to use thread names
CONFIG_THREAD_NAME=y
CONFIG_SCHED_CPU_MASK=y
CONFIG_THREAD_ANALYZER=y

Finally, build the code:

west build -b mimxrt1060_evk samples/synchronization/ -p    #you can swap this out for another board
west flash

This loads the binary file (zephyr.bin) onto your board.

Step 2: Load the ELF into SEGGER Ozone

Normally it’s the “binary” version of your program that is loaded onto the board. To use a debugger, we want something called an ELF File instead, which stands for “Executable and Linkable Format”. I think of it as an annotated version of your binary, because it includes the source files and all of the references as you go through your program.

Start a new project using the New Project Wizard, walking through the various dialogs:

If it’s not already selected, choose your processor (in the case of the mimxrt1060_evkb, the part is actually the rt1062)

Choose your J-Link (required for SEGGER Ozone). On my board it uses Single Wire Debug (SWD) but some boards might use JTAG.

Load the ELF file from your build directory. This will be located at  <zephyr_install_directory>/zephyr/build/zephyr, using the instructions above.

The most critical piece!

I wanted to call this out because it took me so long to find how to enable the “thread aware” debugging part of Ozone. You need to run the following command in the Console:

Project.SetOSPlugin ("ZephyrPlugin")

This tells SEGGER to run a built-in script and enable a new window in the “View” menu. Select the newly available “Zephyr” option or hit Alt + Shift + O to enable it. You should now see a new window pop up on your screen.

This window shows the two threads that are available in the “Synchronize” program.

I set a breakpoint on the printk command that is writing to the terminal (click the gray button next to the line where you want to set a breakpoint). Then I start debugging from the menu Debug -> Start Debug Session -> Attach to Running Program. This should start the debugger and then halt where you set a breakpoint:

Click the Resume button or hit F5 and you will see the Zephyr window switching between Thread A and Thread B.

SEGGER SystemView

SystemView is something I first became aware of in Brian Amos’s book “Hands-On RTOS with Microcontrollers”. I was reading it to learn more about the pieces of Real Time Operating Systems and he uses SystemView to help analyze where an RTOS is spending the majority of time. This is critical because operating systems rely on the concept of a “scheduler”, which relinquishes control over precisely what is happening when in a program.

SystemView is a separate piece of software from Ozone and is licensed differently. It is free to use as a trial, but commercial operations will need to purchase a license for extended usage. You can download the software from SEGGER for trial usage.

Using SystemView with Zephyr

There are some additional steps required to get a program working with SystemView on Zephyr.

The most critical piece(s)!

There are two critical pieces to get a Zephyr program running with SystemView:

  1. You must be doing your logging using RTT.
    • Using only UART logging of messages will not work. SystemView requires an “RTT Control Block” in your code and if it’s not there, SystemView will timeout while trying to capture events.
    • The message I kept receiving was “Could not find SystemView Buffer”.
  2. You must log traces using RAM instead of UART (default)
    • This allows the debugger to extract trace information from memory. Some other OSes can pull in UART trace messages but this is not enabled on Zephyr yet.

You can enable RTT and other required settings in the prj.conf file (these can also be set through Zephyr’s menuconfig):

CONFIG_THREAD_NAME=y
CONFIG_SEGGER_SYSTEMVIEW=y
CONFIG_USE_SEGGER_RTT=y  #see point 1 above
CONFIG_TRACING=y
CONFIG_TRACING_BACKEND_RAM=y  # see point 2 above

Recompile the program and flash to your board. You should now be able to open SystemView and get started. Upon opening the program, you’ll need to configure for your J-Link:

And your board settings:

Finally when you hit the “Play” button (green arrow) or hit F5, it should start to capture events on your device.

As you can see below in the “Timeline” window, control is bouncing back and forth between “Thread A” and “Thread B”.

Using Ozone and SystemView together

These are two different tools using the same interface. The cool thing is that you can use them together at the same time. This is especially useful because SystemView will capture all events, which can quickly become overwhelming. You might instead only want to see a small subset of events. You can set a breakpoint in Ozone, start recording in SystemView, and then get a targeted look at the program execution right where the breakpoint is happening. You can target smaller subsections of your code to really pinpoint and optimize your functions.

Giant Leaps in Debugging

These are just some of the tools that will help to give you more insight into your Zephyr programs as you dig deeper into the ecosystem and the Golioth Zephyr SDK. Once you start adding more capabilities, you will be able to visualize the finest details of what is happening and develop better software for your customers.

If you need help getting started with the tools described here, you can always join us on the Golioth Discord or check out the Golioth Forums for assistance. Happy debugging!

Devices that connect to the Golioth Cloud communicate securely thanks to a pre-shared key (PSK) that encrypts all messages. But how do you get a unique set of credentials onto every device? The options for provisioning your devices just got a whole lot more interesting thanks to some new Golioth features. There are now two ways to set the credentials using either the device shell, or our command line tool that also fetches those credentials automatically!

Zephyr settings using the device shell

Zephyr has a shell option that runs on the device itself. This is great for things like network or i2c debugging (there are specialized commands for both of those and much more). You send your credentials over a serial connection and Golioth leverages the Zephyr settings subsystem to store the device credentials (PSK-ID and PSK) in flash memory.

As part of the getting started guide, you already set up a device in the Golioth Console. Use the Devices sidebar option to find that device again (or create a new one) and click on the Credentials tab to access your PSK-ID and PSK:

Golioth Device Credentials

Now we need some code to run on the device. The Golioth settings example already has this feature built in. For this example I’m using a Nordic nRF9160dk. You can build and flash the example right away. Normally we’d put credentials into the prj.conf file first, but this time we’ll just assign those from the shell!

cd ~/golioth-ncs-workspace/modules/lib/golioth/
west build -b nrf9160dk_nrf9160_ns samples/settings/
west flash

Once programming has completed, load up your serial terminal tool of choice. I like to use minicom -D /dev/ttyACM0 but you should have the same success with screen /dev/ttyACM0 115200 or any other similar tools.

  • You’ll be greeted by the uart:~$ command prompt
  • The syntax that we need is: settings set <key> <value>
    • PSK-ID needs to be assigned to golioth/psk-id
    • PSK needs to be assigned to golioth/psk
  • Reboot the device after changing the settings

Here’s what that looks like in action (important lines highlighted):

uart:~$ *** Booting Zephyr OS build v2.6.99-ncs1-1 ***
[00:00:00.209,716] <inf> golioth_system: Initializing
[00:00:00.216,278] <inf> fs_nvs: 2 Sectors of 4096 bytes
[00:00:00.216,278] <inf> fs_nvs: alloc wra: 0, fa8
[00:00:00.216,278] <inf> fs_nvs: data wra: 0, 90
uart:~$ settings set golioth/psk-id nrf91-settings-demo-id@blog-demo
Setting golioth/psk-id to nrf91-settings-demo-id@blog-demo
Setting golioth/psk-id saved as nrf91-settings-demo-id@blog-demo
uart:~$ settings set golioth/psk my_complex_password
Setting golioth/psk to my_complex_password
Setting golioth/psk saved as my_complex_password
uart:~$ kernel reboot cold

After rebooting, the board connects to a cell tower and the connection to Golioth is successfully established!!

uart:~$ *** Booting Zephyr OS build v2.6.99-ncs1-1 ***
[00:00:00.215,850] <inf> golioth_system: Initializing
[00:00:00.222,381] <inf> fs_nvs: 2 Sectors of 4096 bytes
[00:00:00.222,412] <inf> fs_nvs: alloc wra: 0, fa8
[00:00:00.222,412] <inf> fs_nvs: data wra: 0, 90
[00:01:06.672,241] <dbg> golioth_hello.main: Start Hello sample
[00:01:06.672,485] <inf> golioth_hello: Sending hello! 0
[00:01:06.673,004] <wrn> golioth_hello: Failed to send hello!
[00:01:06.673,095] <inf> golioth_system: Starting connect
[00:01:06.967,102] <inf> golioth_system: Client connected!
[00:01:11.673,065] <inf> golioth_hello: Sending hello! 1
[00:01:16.674,316] <inf> golioth_hello: Sending hello! 2

Golioth Credentials automatically set from the command line

What if I told you that a one-line command could look up your device credentials from the Golioth Cloud and automatically send them to the device? This is literally the feature we’ve implemented. Now, I’m excited about the shell settings above, but this new command line feature is absolutely legendary!

Golioth device name

  1. Look up your device name from the Golioth Console
  2. Issue the command, using your device name and the correct port:
    1. goliothctl device config --name <device-name> --port <serial-port>

Here’s what it looks like in action:

$ goliothctl device config --name nrf91-settings-demo --port /dev/ttyACM0
failed to get golioth/psk-id from device: setting not found
device success setting golioth/psk-id saved as nrf91-settings-demo-id@blog-demo
failed to get golioth/psk from device: setting not found
device success setting golioth/psk saved as my_complex_password
closing serial read

And check this out, it’s a quick way to make sure you have the device credentials correct. Since you’re not copy/pasting or typing the credentials, you know you have it right as long as you get the name of the device right. Running the command a second time confirms those settings are correct:

$ goliothctl device config --name nrf91-settings-demo --port /dev/ttyACM0
golioth/psk-id in the device is already set to nrf91-settings-demo-id@blog-demo
golioth/psk in the device is already set to my_complex_password
closing serial read

Visions of End Users and Bulk Provisioning

Two really easy ways to see the new features put to use are end users and manufacturing. Imagine sending devices with “stock” firmware out to customers and having them add their own credentials (we have a snazzy web-based demo in the works so stay tuned). The other thought is toward bulk provisioning, where a script can be used to register the new device on Golioth, generate credentials, and send them to the device all in the same step.

We’d love to hear about your experiences with these new tools. Catch up with us on the Golioth Discord so we can have a chat!

USB is one of the most important computer interfaces for an embedded developer. It’s how we flash, test and debug our hardware. It is supported on the 3 major desktop operating systems allowing developers to use the OS they are most productive in. Yet, despite having “universal” in its name, USB support isn’t the same everywhere. A device programmer may only have a Windows binary, while a compiler may only work in Linux.

When Microsoft released Windows Subsystem for Linux (WSL), Windows developers were excited about the potential of using both Windows and Linux tools from one machine. At launch, it mostly delivered on that promise with one big caveat: no USB support.

Of course there were hacks with copy/pasting and custom network drivers, but none of them worked reliably. But late last year Microsoft announced that they’ve added USB support to WSL! I was certainly keen to try it out and over the past few months I’ve been testing it as part of my Windows development workflow. It’s now in a state where I can recommend it to other developers, though there are a few gotchas to make note of (we’ll discuss those later). Today I will walk you through the basic steps of programming an embedded device over USB using WSL.

What you need to get started

Windows 10 or 11

Obviously to do development on Windows…you need a copy of Windows. What’s less obvious is that WSL & USB are both supported by Windows 10. You just need Windows 10 version 2004 or the latest build of Windows 11.

WSL

There are a few ways to install WSL and you may have WSL already installed on your machine. But before we can get to using USB we need to check which Linux kernel is currently installed. It needs to be a kernel version of 5.10.60.1 or later. You can check by running uname -a.
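
If the kernel is recent enough, uname reports a version string along these lines (the exact version and build details will vary on your machine):

$ uname -a
Linux my-pc 5.10.102.1-microsoft-standard-WSL2 #1 SMP <build date> x86_64 GNU/Linux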

If you installed WSL using the command line (see docs), then you may have a recent-enough kernel via Windows Update. However, I recommend installing Windows Subsystem for Linux Preview from the Microsoft Store. It’ll keep your WSL kernel up to date without having to wait for a service pack.

You also need a Linux distribution and you can use any distribution you like from the Microsoft Store. I prefer to keep things easy and use Ubuntu.

usbipd-win

This is what makes all the magic happen. The instructions are pretty straightforward. It’s mostly a matter of installing the usbipd-win program on the Windows side and a little bit of configuration on the Linux side.
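
For reference, at the time of writing the Windows-side piece can be installed from an elevated PowerShell prompt using winget (the package ID below belongs to the usbipd-win project; check its README if the command has changed):

winget install --interactive --exact dorssel.usbipd-win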

(If you’re curious how this works, make sure to check out the underlying USB/IP protocol. It’s been part of the Linux kernel for years!)

Windows Terminal (optional)

While not strictly needed to work with WSL or USB, Windows Terminal is a much more modern terminal than the stock Command Prompt or PowerShell windows. It also offers a tabbed interface, which is especially useful when you need to switch between Windows and WSL while managing your USB devices.

Using USB from WSL

With the dependencies out of the way, we can start interacting with a USB device.

Attaching a device to WSL from Windows

You’ll need to get comfortable with usbipd as it’s how we’ll enable USB to be used with WSL. The basic steps are:

  1. Launch Windows Terminal as an Administrator
  2. Run usbipd wsl list to list all the devices on your machine, noting the BUS-ID for the device you want to use
  3. Run usbipd wsl attach --busid={BUS-ID} to use the device from WSL
  4. Run usbipd wsl list again to verify that the device is now being used by WSL

Accessing a USB device from WSL

At this point you have a real USB device in Linux. To quickly verify that everything works:

  1. Open Ubuntu (doesn’t need to be as an Administrator)
  2. Run lsusb and note the USB device
  3. Start interacting with the device like you normally would

I followed the Golioth Getting Started Guide for the ESP32 using the Linux instructions and it worked like a treat. But I’ve also tested the mainline Zephyr, Arduino CLI & an STM32 with dfu-util.

Running the west flash command in WSL2

What to keep in mind

If you’re new to Linux, you’ll need to get a little familiar with tools like udev and how to find your device paths, but there are a ton of resources online. Some of the devices I tried already had udev rules defined in my release of OpenOCD, but one didn’t. It took me a while to figure out the ancient incantation to make udev happy. Usually, if you’re using an open source tool, it provides pretty good documentation for usage on Linux, so check the manual.
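
For reference, a udev rule that grants non-root access to a debug probe looks something like the line below; the vendor/product IDs shown here are for an ST-Link V2-1 and will differ for your programmer. After adding a rule, reload udev with sudo udevadm control --reload-rules && sudo udevadm trigger and re-attach the device.

# /etc/udev/rules.d/99-debug-probe.rules (example IDs; adjust for your device)
SUBSYSTEM=="usb", ATTR{idVendor}=="0483", ATTR{idProduct}=="374b", MODE="0666"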

One challenge with this approach is that usbipd tries to re-attach a device if it reboots. This can happen when you flash firmware. It doesn’t always recover, especially if the Bus ID changes, so you may need to re-attach the device. It’s a little annoying if you forget, so make sure you keep that admin console open.

Lastly, while many USB, USB/Serial & HID devices are included with the current WSL kernel, some may not be by default. Leave a comment on this issue if you want to request additional drivers.

USB all the things!

I’m grateful that Microsoft has added support for USB in WSL. It’s not only helpful for Golioth developers but anyone else doing embedded development on Windows. I didn’t go into every bit of detail in this post so if you have any questions or run into issues, feel free to reach out to me on Twitter or join our Discord.

At Golioth, we talk about 3 things that make for likely hardware/firmware compatibility with our Cloud:

  • Running Zephyr RTOS
  • Having sufficient headroom to run the Golioth code in conjunction with other Zephyr code (about 2K of extra code space)
  • A network interface in Zephyr

(this is not the only way to connect, just a good formula for getting started)

It’s that last point that disqualifies a bunch of boards in Zephyr. Maybe you love the STM32 feature set, but your board doesn’t have a modem to get to the internet. What then?

The great thing about Zephyr is that network interfaces are often abstracted to the point that you can add one after the fact to your board, say with a wire harness to a different PCB. If you’re at the design phase, you could also add the ESP32 as a co-processor to add connectivity. We have shown this in the past with Ethernet and with WiFi, and we’re working on a sample that adds a non-native cellular modem.

This article will show how to add WiFi to your Zephyr project in a cheap and efficient manner, using a $5 ESP32 board put into ESP-AT mode. Your project instantly has network connectivity (and a few other tricks too!).

AT commands? Like on my brick phone?

We’ll talk about the hardware in a bit, but the software part of this hinges on communication between processors using the ESP-AT command set.

AT Commands?? Like from the 80s?

Actually, exactly like that. And not just from your brick phone: the Hayes Command Set was created in 1981 for a 300 baud modem. It has survived 40 years due to the easy connection over a serial interface (UART), which makes board-to-board or chip-to-chip connectivity well understood and almost universally available. In fact, many of the cellular modems on the market, if they don’t use AT command sets directly (the command set has an ETSI standard), at least have an “AT mode” for setting up communications with cellular towers and for troubleshooting.

The benefit is that the ESP32 acting as a secondary processor means a wide range of parts can talk over the UART interface. Though we’re talking about Zephyr in this post, a previous example showed a Cortex-m0+ running our Arduino SDK in conjunction with the ESP32 modem. On the Zephyr side of things, you can view the wide range of boards that are supported on our hardware catalog, including boards as powerful as the Altera Max 10 FPGA board and as small as the Seeeduino XIAO.

Set up the modem

The ESP32 AT command firmware is just a binary. If you find the proper module and chipset, you should be able to download it directly onto your board. The board the ESP32 module is mounted on doesn’t really matter, as long as you have access to the pins and can tell which pin on the PCB routes back to which pin on the module.

In this example, we are working with the ESP32-WROOM-32. This is one of the most common modules on the market today. You can find which module you have by looking at the laser etching on the metal can on the module itself.

I downloaded the latest binaries (V2.2.0.0 as of this writing) from the Espressif site. I will also show the command below using that version number, though you should use the newest version that is available. There is also a page that lists the different types of binaries and the associated pin numbers you’ll need to connect to when testing below.

esptool.py write_flash --verify 0x0 ~/Downloads/ESP32-WROOM-32-V2.2.0.0/factory/factory_WROOM-32.bin

Testing the modem

Once you have successfully programmed the modem, you’ll want to test it. This will involve manually typing in AT commands to a serial interface / terminal. While that might seem like an inefficient way to work with a modem, it’s a good skill set to have if you need to troubleshoot your setup at a later time.

You will need a USB to serial converter, or some other way to communicate with a UART. These are available on Amazon for $5 or less. You do not need any fancy features on this device.

If you’re using the ESP32-WROOM-32 like me, you’ll have a setup like above. Hook up your USB to serial converter’s TX pin to pin 16 (ESP32 RX) and the converter’s RX pin to pin 17 (ESP32 TX). Note that there are pins labeled TX and RX on the dev kit, but those are the console output for the processor. The easy way to test: if you hit the Reset button (labeled “EN” on this board) while hooked into TX/RX, you will see the entire boot sequence scroll across the screen. If you are connected to the proper output (16/17), you will only see a ready prompt when the board is booted. Reminder to check the pin numbers if you’re using a different module than the one above.

As for the program you use to connect to the USB to serial converter and communicate with the ESP32, a small warning about line endings. After initially using screen on Linux, I found that the line endings were not compatible with the ESP-AT family. I could see the ready prompt, but I could not enter any data. After some digging I found that you need to be able to send both a Carriage Return / CR (\r) and a Line Feed / LF (\n). I followed this advice, downloaded and installed picocom, and used the following command on the command line to launch a more interactive terminal: picocom /dev/ttyUSB0 --baud 115200 --omap crcrlf
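
Once the terminal is up, a quick sanity check (assuming the stock ESP-AT firmware) is to send a bare AT and then query the firmware version with AT+GMR; the response below is abbreviated:

AT

OK
AT+GMR
AT version:2.2.0.0
...

OK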

This enabled me to try out various commands in the ESP-AT Command Set. Two in particular stood out to me as interesting, even though they are not implemented below:

  • AT+SLEEPWKCFG – Allows you to set the “light sleep” command for the modem and tell the modem which pin will be used for waking the modem.
  • AT+BLEGATTSSETATTR – This sets the GATT profile for the modem in Bluetooth LE mode. It is just one of many Bluetooth LE commands…I didn’t realize it was possible to use the modem as a Bluetooth LE gateway as well!

Use the modem with samples

One hardware combination that is well supported in Golioth samples is the nRF52840 and the ESP32. Our “Hello” sample shows how you can configure the device and compile firmware for the nRF52840 while still taking advantage of the ESP-AT modem connected to it.

If you don’t have the nRF52840DK (Developer Kit), there are a range of other boards that will work. When you start actually running the demo, it will be very similar to our getting started using the ESP32 (natively), or the nRF9160. Our goal is to make a seamless experience once you have a network connection. We always love discussing projects in our forum, our Discord, and on Twitter.

Embedded software can be difficult to debug and the embedded developer’s tool chest is typically more limited than what is accessible when working with larger platforms. One of the most common debugging tools is the UART peripheral over which log output can be printed to report information regarding the state of a program’s execution and the contents of important items in device memory. This insight aids in determining the root cause of bugs and also verifying the success of a program that has run as intended. One of the lesser-known methods of obtaining such console log output is via Real Time Transfer (RTT).

Real Time Transfer is a Segger technology that is available when using a J-Link hardware debug probe. It is a high-performance bidirectional communication protocol between a host CPU and target MCUs in the Cortex-M and RX processor families. And it has a few notable advantages beyond what a UART can offer.

RTT offers better timing; reduces dedicated pins

Debugging using print statements and the UART console may interrupt time-sensitive applications, but using RTT for console output preserves MCU real-time performance. And one of the biggest advantages of RTT is that UART pins are not needed.

RTT uses the SWD pins on the standard 10 pin J-Link header that you’ve likely already accounted for in order to program the chip. When designing a custom platform that will be used for development prior to manufacturing, UART pins do not need to be exposed in the design.

Enabling RTT in Zephyr

Both hardware and software configuration for RTT in Zephyr are trivial. A J-Link, an RTT-compatible debug target, and access to SWD programming pins round out the hardware requirements. For this example I’m using Zephyr OS, which has abstracted use of the RTT protocol; it is enabled with a few configuration options, which greatly eases access to the technology. I tested this example using the Nordic Semiconductor Thingy:91 as the target.

If you’re looking for a simple app to test this out on, try the Golioth Getting Started Guide and use the basic Hello application. The Kconfig directives shared below will simply move the console log output found in the Hello application from the standard UART pin routing and use the RTT over the SWD pins instead.

To enable RTT in a Zephyr application, add the following configurations to the application prj.conf file:

CONFIG_UART_CONSOLE=n
CONFIG_RTT_CONSOLE=y
CONFIG_USE_SEGGER_RTT=y

The first config will disable passing of console output to the standard UART path. The second configuration will enable the RTT console path. The third configuration enables use of the Segger RTT communication protocol.

Viewing RTT output on a computer

Viewing the output requires a special viewing tool provided in the free J-Link software package. Navigate to the folder on your computer that contains the SEGGER J-Link software tools. Find JLinkRTTViewer, and execute it. A window similar to the following will appear.

RTT setup screen

The default configurations are acceptable other than the Target Device. The tested example target is the Thingy:91 and the corresponding Target Device is the nrf9160_xxAA. Before clicking OK, build and flash the target with the configurations specified above.

The Viewer will not connect unless the associated software has been enabled to facilitate the connection. This is an important distinction between this method versus UART. The viewer will actively attempt a connection with the target and will fail to connect if the target has not been configured to use RTT previously. It will not monitor for an active connection to occur after programming. The Viewer will need to be restarted after flashing if it fails to connect.

RTT viewer window

The lower console window reports RTT Viewer activity and will indicate when it has successfully connected to the target. The Hello app output will be viewable in the “All terminals” tab, along with a terminal-specific tab (in this case Terminal 0), which includes colored syntax highlighting.

Simple and Easy

There aren’t any secrets to using RTT, other than knowing that it is an option. But having the ability to see console output in cases where your target doesn’t have extra pins available, or when all of the UART peripherals are already in use, is a trick you’ll want to keep in mind. Real Time Transfer has a few other tricks up its sleeve, but those are a story for another post.

This guest post is contributed by Vojislav Milivojević, an Embedded software lead at IRNAS.

As embedded software engineers we usually have some associated hardware sitting next to our computer that is used for developing and testing our code. In most cases, this piece of hardware is connected to our computer with a so-called “programmer”, an additional tool that allows us to access processors and controllers for which we are developing code. Here we will explore the relationship between devices we are developing and a computer, but it won’t be a standard one, it will be a long-distance relationship.

I lead the firmware development at IRNAS, where we push the limits of efficient solution development on IoT devices, but since I live in a different country than the rest of my team, there are usually a lot of packages with PCBs going back and forth. While that is not a big problem, there are times when some pieces just cannot get to me quickly enough to meet customer demands. There have also been times where a specific LTE network is not available in my region. Overcoming this issue is usually done with remote desktop solutions that are not so efficient, or with some special equipment that in a nutshell is again a computer with some additional hardware. Since I needed such a solution, and none of the existing ones were able to give me a nice out-of-the-box experience, I decided to design and document a process that works for me and the complete IRNAS engineering team.

Using Segger tools

There are many solutions, commercial and open-source, that provide embedded development tools such as programmers, IDEs, logging features, etc. One of these solution providers is Segger, and their hardware sometimes comes as part of development boards, which is really nice. At IRNAS we tend to favor using Segger J-Link tools as our ‘go-to’ solution for target flashing and debugging while building connected products. Besides that, they have a range of very useful features for embedded developers, and one of these tools is Segger Tunnel mode. This is a remote server that allows users to connect to a J-Link programmer (and thus its connected target device) over the internet. This means a device located anywhere in the world can be debugged or brought up.

Mixing with Zephyr (west tool)

Since most of the projects that I am working on are using Zephyr RTOS, this means that the west tool is used for flashing, debugging, and many other things. West is a meta-tool that abstracts software for all different programmers and gives us the ability to easily flash for multiple targets while not needing to remember long command lines. West does support Segger J-Link for specific targets and it can be selected as one of the offered runners. The good thing about west is that it will let us pass commands to the selected runner which gives us the ability to fully utilize all the functions of the selected J-Link software.

Set up the hardware and software

In December of 2020 there was great news from Segger that the complete J-Link software is now available to run for Linux on ARM architecture, which meant that Raspberry Pi is now supported as a host machine! The idea was to connect a J-Link programmer to Raspberry Pi, add in some software, and we have ourselves a remote programming jig.

Components needed for this demo:

    • Raspberry Pi
    • J-Link programmer
    • Board with the target MCU

For the purposes of this demo, we will be using the Nordic Semiconductor nRF9160DK development kit since it already contains both a J-Link and the target MCU hardware. The board connects via USB to the Raspberry Pi which connects to power and Ethernet (WiFi is also an option).

nRF9160DK connected to Raspberry Pi

Now J-Link software needs to be installed on Raspberry Pi so it can work as a remote J-Link Server. In the Raspberry Pi user home directory, download and un-tar the Segger utilities for the Raspberry Pi (choose the Linux ARM 32-bit TGZ archive). Then configure the udev rules as per the README.txt file in the JLink_Linux_Vxxx_arm directory.

$ wget --post-data 'accept_license_agreement=accepted&non_emb_ctr=confirmed&submit=Download+software' https://www.segger.com/downloads/jlink/JLink_Linux_arm.tgz
$ tar xvf JLink_Linux_arm.tgz
$ cd JLink_Linux_V646g_arm/
$ cat README.txt
$ sudo cp 99-jlink.rules /etc/udev/rules.d/
$ sudo reboot

Next, it is time to start the remote server. On a GUI-based system, this can be done with a small application from Segger, but the good thing is that the CLI tool is also provided. I recommend checking all available options for this tool by starting it and then typing ? at the prompt.

pi@raspberrypi:~ $ JLinkRemoteServer
SEGGER J-Link Remote Server V7.22b
Compiled Jun 17 2021 17:32:35

'q' to quit '?' for help

Connected to J-Link with S/N 960012010

Waiting for client connections...
?Command line options:
? - Prints the list of available command line options
-Port - Specifies listening port of J-Link Remote Server
-UseTunnel - Specifies if tunneled connection shall be used
-SelectEmuBySN - Specifies to connect to a J-Link with a specific S/N
-TunnelServer - Specify a tunnel server to connect to (default: jlink.segger.com:19020)
-TunnelBySN - Specifies to identify at tunnel server via J-Link S/N
-TunnelByName - Specifies to identify at tunnel server via custom name
-TunnelPW - Specifies to protect the connection with a password
-TunnelEncrypt - Specifies to encrypt any transferred data of a tunneled connection
-TunnelPort - Specifies to connect to a tunnel server listening on a specific port
-select - <USB/IP>[=<SN/Hostname>] Specify how to connect to J-Link

Before entering the command we need to think of a name for our tunnel and a password. For me, this will be tunnel name: remote_nrf91 and password: 19frn. Then start the remote server with the command:

JLinkRemoteServer -UseTunnel -TunnelByName remote_nrf91 -TunnelPW 19frn

Demo time

To test this remote flashing we will build a demo application on our host computer. nRF Connect SDK (NCS), which is based on Zephyr RTOS, contains some sample applications, and we will use shell_module, which enables us to use shell commands over UART with the nRF9160. The selected application is located in the ncs/zephyr/samples/subsys/shell/shell_module folder of NCS. To build it for the nRF9160DK we will use the command:

west build -b nrf9160dk_nrf9160_ns -p

After that let’s flash the board that is connected to our remote Raspberry Pi. The default runner for flashing the nRF9160DK is nrfjprog, but instead of that, we want to use the J-Link supported runner. Since the west tool does not natively support remote flashing, parameters will be sent directly to the J-Link software using the --tool-opt flag.

west flash -r jlink --tool-opt="ip tunnel:remote_nrf91:19frn"

This will flash our target MCU that is connected to J-Link and Raspberry Pi. To validate the result, open the serial terminal on Raspberry Pi and see if shell commands are working.

minicom -D /dev/ttyACM1 -b 115200

Summary

While Segger provides very interesting tools for embedded developers, there is still some work that needs to be done so they are properly integrated into our development workflow. Remote flashing is just one part of all capabilities, so this can be a starting point for a great remote development setup!

What if you could open a text document on a device, write code, click save, and everything magically starts working? This is the promise of high level programming languages like CircuitPython. Golioth Labs now has an SDK to utilize the language’s fast prototyping capabilities. In addition to Golioth’s cloud functions, it’s super easy to pass data from a networked device up to the Golioth cloud. Click save to stream IoT device data to the cloud!

What is CircuitPython?

Adafruit created CircuitPython (CP), which started as a hardware-specific fork of MicroPython (MP). Both CP and MP are based upon the ideas of the Python3 programming language, such as using an interpreter and basing language syntax on whitespace separators. The challenge is that the interpreter must live on a much smaller target than most computers running Python; fitting all of that into 128K of flash and 32K of RAM is a challenge! The hardware-specific ports of CP encompass many of the Adafruit boards, but are really targeted at the chipsets on those boards: targets like the Microchip SAMD21 and SAMD51, the Raspberry Pi RP2040, the ESP32-S2, and more. The project continues to grow, both from Adafruit’s ongoing contributions and from a strong community contributing to the project.

Developing for prototyping

One of the strongest features of a language like CP is the ability to quickly iterate code. The “click save and your code starts running” workflow is in contrast to the traditional method of compiling code on an external device (i.e. your laptop), downloading it through a debugger (i.e. a J-Link), debugging the code, and then watching the output. Because the CP interpreter is on the device itself, it processes the code as the device starts. While this means there is less room for user code, it is a much faster turnaround once you click save.

During Golioth “Office Hours” on our community Discord (happening every Wednesday at 1 pm EST / 10 am PST), we had users asking for a faster way to prototype. Golioth developed an SDK in the “Golioth Labs” section of GitHub to interface with the Golioth API(s). This allows users with CircuitPython based programs to connect their IoT devices to the cloud and pass data back to Golioth. Much like the Arduino SDK experiment repo by Golioth Labs, we are interested in trying to extend the functionality of the Golioth cloud to many different platforms. If you have a hardware platform you would like supported, please post about it on our community forum!

Transport? I just want to get there!

As Alvaro points out in the video below, the transport layer is not something the user needs to care about with the Golioth CircuitPython SDK; any time you are working with a Golioth hosted SDK, you will be working at a higher level than the transport layer of communication, such as CoAP or MQTT. Many IoT platforms stop at the transport layer, notably MQTT examples. By moving up to working with APIs to LightDB State or LightDB Stream, you get additional functionality on the device side and you can better maintain your data on the Golioth console. Users always have the option to work at a lower level and peek under the hood at how the communication is happening. For people wanting to get started quickly, it’s important to have a high level way to start streaming data to the cloud.

Getting started with CircuitPython and Golioth

In the demo below, Alvaro showcases a sample setup using the following hardware and steps.

First follow the Getting Started Guide to program your chosen hardware with the .UF2 file that represents the underlying CP tools, such as the interpreter and the code required to make your specific microcontroller into a CircuitPython based device.

The next step is hooking up the MicroMod device to the ESP32 modem using jumper wires. Read your board documentation to find an accessible UART port (TX/RX). Once completed, this enables the (Bluetooth based) nRF52840 to communicate over a UART connection to the modem and gain a connection to the internet using the ESP32’s WiFi interface. Credentials for the WiFi modem to connect to a local hotspot are sent from the nRF52840 to the ESP32 using AT Commands early in the CircuitPython program.

The example code on the Golioth CircuitPython SDK primarily lives in code.py. The device first connects using credentials for the WiFi network and PSK information for validating onto the Golioth cloud contained within secrets.py. After the UART is configured, the main processor (nRF52840 in the example below) starts communicating with the modem. In Alvaro’s example, he is sending the internal temperature of the processor using LightDB stream. He is able to send an update down to the board using LightDB State to remotely turn on an LED on the board, which is listening for changes on the /led path, similar to other examples that use LightDB State.

Watch the demo

It all comes together in the video below. Learn about how Golioth and CircuitPython pair to make a great combination for prototyping your next sensor project.

Troubleshooting high complexity systems like Zephyr requires more thorough tools. Menuconfig allows users to see the layers of their system and adjust settings without requiring a complete system recompilation.

The troubleshoot loop

Modify, compile, test.

Modify, compile, test.

Modify, compile, test.

Modify, compile, test.

How do we break out of this loop of trying to change different settings in a program, recompiling the entire thing, and then waiting for a build to finish? Sure, there are some tools to modify things if you’re step debugging, such as changing parameters in memory. But you can’t go and allocate new memory after compiling normally. So what happens when you need to change things? You find the #define in the code, change the parameter, and recompile. What a slow process!

Moving up the complexity stack

We move up the “complexity stack” from a bare-metal device to running a Real Time Operating System (RTOS) in order to get access to higher level functions. Not only does this allow us to abstract things like network interfaces and target different types of hardware, but it also allows us to add layers of software that would be untenable when running bare-metal firmware. The downside, of course, is that it’s more complex.

When you’re trying to figure out what is going wrong in a complex system like Zephyr, it can mean chasing problems through many layers of functions and threads. It’s hard to keep track of where things are and what is “in charge” when it comes time to change things.

Enter Menuconfig

Menuconfig is a tool borrowed from Linux development that works in a similar way: a high complexity system that needs some level of organization. Obviously, in full Linux systems, the complexity often will be even higher than in an RTOS. In the video below, Marcin shows how he uses Menuconfig to turn features on and off during debugging, including with the Golioth “hello” example. As recommended in the video, new Zephyr users can also utilize Menuconfig to explore the system and which characteristics are enabled and available.
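
If you want to follow along, menuconfig is launched through the Zephyr build system; with west, run it as a build target from an already-configured build directory:

west build -t menuconfig

Changes you save land in the build directory’s .config, so they take effect on the next incremental build without editing prj.conf or rebuilding from scratch.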

One of the first challenges any embedded software developer faces is installing and configuring their development environment and toolchain. Toolchain version, silicon vendor libraries, Windows versus Linux, debug configuration, IDE settings, and environment variables are just a few components of the modern embedded developers workspace. The result of all this complexity is a fragile, hard to reproduce workspace for software often used in critical systems. We consider this developer experience equivalent to torture, and believe it is trapping value from reaching the market.

We know there is a better way. If the development environment can be entirely packaged and abstracted away from the developer, they will be able to begin application development more quickly. A remotely managed toolchain also facilitates more efficient teamwork. It eliminates the cryptic mantra “Works On My Machine”, accompanied by an obligatory shrug.

The Established Way

The traditional approach to eliminating toolchain headaches has typically been through the use of Integrated Development Environments (IDEs).  However, these packages are generally locked to a particular silicon vendor or compiler, may have paywalls to expose premium features, and can be constrained in feature availability. Our ‘gold standard’ for years has been the following:

  • Ubuntu Virtual Machine
  • Eclipse
  • GNU MCU Eclipse Tools
  • USB Passthrough from VM for debugging boards

Modern Web Options

Technologies like VS Code, containerization, Microsoft’s Debug Adapter Protocol, and the Language Server Protocol have come together to enable a transformational developer experience. Most of the current approaches to bringing these technologies together in the market are built on top of some variation of VS Code. Each solution is vying to take advantage of VS Code’s capacity to run in the browser as seamlessly as it runs on a local machine.

One option is GitHub Codespaces, which requires the user to be on a paid plan, is not focused on embedded development, and uses a closed-source server. Another option is Keil Studio, which is optimized for embedded development with ARM-based microcontrollers; its pricing and roadmap are not yet established, it provides no terminal access, and it offers a limited number of embedded targets to work with.

Why We Chose Gitpod

We chose Gitpod in part for our shared stance on open source, sustained active community engagement, and obsession with developer experience. Of note is Gitpod’s full-featured free tier: it provides 50 hours of running workspace per month, with no payment details required. As a result, the psychological barrier of getting up to get one’s credit card is avoided. Fifty hours is enough time to introduce oneself to the Zephyr and Golioth ecosystems. Finally, Gitpod being open source enables us and our developers to optimize their workspaces to their needs.

Our Current Gitpod Workflow

Setting up the application begins with cloning the Example Application from Zephyr GitHub. Next, the Golioth SDK is added as a dependency.  Changes are then added to the .gitpod.yml and .gitpod.Dockerfile. After running ‘west update’ with the configurations in place, the hello application is copied from the Golioth SDK directory to the project root directory in place of or at the same level as the ‘app’ folder. Here is a link to the end result.

Also, some background info on Gitpod.

Our target embedded cloud developer experience would be one in which the developer instantiates the cloud development environment and has zero local tool dependencies.  They can then plug their debug hardware into any machine from anywhere with internet access and develop.  Our current implementation requires three local tools to facilitate debugging functionality with the current state of the Gitpod software and VS Code. Gitpod provides a Gitpod Local Companion which allows localhost access to any TCP port in a remote workspace. The second piece of software required locally is SSH.  SSH is necessary to establish an ssh tunnel between the Gitpod instance and local machine. The final software that is run on the local machine is JLinkGDBServerCL.
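
As a rough sketch of the local side (assuming the J-Link GDB server on its default port 2331; the workspace host below is a placeholder, and the exact tunnel setup depends on your Gitpod configuration):

# Start the J-Link GDB server on the local machine (device name is an example)
JLinkGDBServerCL -device nRF9160_xxAA -if SWD -speed 4000 -port 2331

# Reverse-forward the GDB server port so the remote workspace can reach it
ssh -R 2331:localhost:2331 <your-gitpod-workspace-ssh-host>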

State of the Art

The technology to facilitate cloud-based development has arrived, and it will enable remarkable gains in productivity and developer experience. Unfortunately, we still have local dependencies, and in the current state things are not optimized for use over the internet. Step debugging was accomplished, but some work remains for embedded cloud development to compete with a local development environment. With this in mind, a future blog post will show that we can actually be effective developers when using this solution with a virtualized QEMU target.

To do this properly, we’d serve an MS DAP-compatible debug server such as Probe.rs to the developer’s browser, and hook it up to the target board using WebUSB. A challenge that exists is the lack of open source microcontroller debuggers written in JavaScript or WebAssembly. The translation from C code to WebAssembly is not straightforward and can be error-prone. However, valid translators of Rust to WebAssembly do exist, and Probe.rs is an open-source debugger written in Rust. We also need to convince Microsoft to push this PR forward.

Stay tuned for a future post about how to build, run, and debug Golioth examples with QEMU in less than 10 clicks.