My book about .NET nanoFramework is out!

I’m delighted to share some wonderful news: my book, Embedded Systems with .NET nanoFramework, is now published.

After several months of writing – and more than a few rounds of iterations with the publishing team – it’s finally out. What a great way to start the new year.

This all started with an invitation from Ryan Byrnes, Acquisitions Editor from Apress, who reached out to me in August 2024 with the idea of writing a book about .NET nanoFramework.

I had never written a book before, and I’ll admit that my first reaction was a mix of excitement and a healthy dose of fear: the workload, the responsibility, the commitment. But as the idea matured, it became clear that turning it down would be a missed opportunity – both to help the community and to document a framework I genuinely care about and have invested a lot of energy, work and time into. Therefore, I went for it.

From the start, I wanted this book to be practical and complete, while still approachable. Because .NET nanoFramework spans everything from firmware internals and tooling to day-to-day C# development for microcontrollers, I tried to balance:

  • the lower layers (firmware architecture and build system—often considered opaque, and typically under-documented), and
  • the upper layers (the C# coding patterns and application structure most developers interact with daily).

A second goal was to cover real usage scenarios, because .NET nanoFramework can fit a wide range of projects—from fully connected devices to completely stand-alone systems.

I also wanted to help demystify the idea that .NET and C# are “not for embedded.” This is not about replacing C/C++ – those remain essential. But .NET nanoFramework can empower teams to build embedded products more efficiently by lowering the entry barrier and enabling rapid development with familiar tools and workflows. And importantly, the stack remains flexible: you can go from hardware choices all the way to firmware and application design, tailoring the solution to each project’s constraints. This is very powerful and often misunderstood by engineering management.

To validate those ideas and widen the perspective, I reached out to my fellow IoT MVPs Sander van de Velde and Peter Gallagher, and also to Laurent Ellerbach from the core team.

The writing process took time in different ways. The early chapter(s) on project origins and history required genuine digging and “archaeology” to get names, dates, and context straight. The build system chapters were among the hardest to write, because they’re complex and much of the knowledge lives “in people’s heads” or in the code itself.

On the other hand, the hardware-interface chapters allowed a faster pace because the concepts are familiar to embedded engineers – but I still wanted them in the book to make it truly end-to-end and to show how well .NET nanoFramework fits embedded work.

Because I wanted the book to be as practical as possible, I structured it around a typical embedded project journey. Choosing an example was surprisingly tricky: I wanted to avoid the usual “temperature/humidity sensor,” but I also didn’t want something so complex that it would distract from the learning goals. A pool controller ended up being the right balance.

For those chapters, I worked with Zan Gligorov (OrgPal) – a long-time supporter of the project and a heavy user of .NET nanoFramework, with deep real-world embedded experience and plenty of field knowledge. Having a real board to anchor the sample project made the content stronger and makes it easier for readers to reproduce the work in practice. The sample project and its code are therefore based on the OrgPal Three board.

As I approached the end of the manuscript, one more topic felt like a perfect fit for the final chapter: bringing AI-adjacent capabilities to constrained embedded devices in a concrete, engineering-friendly way. That led to the last chapter, which includes implementing a Model Context Protocol (MCP) server for the pool controller – negotiated with the editor and developed with the advice and revision from Laurent Ellerbach, who has been driving MCP support in the nanoFramework ecosystem.
Along the way, I also took the opportunity to contribute improvements (including prompt support) to strengthen both the library and the chapter’s practical outcomes.

I want to thank Ryan and the Apress team for the opportunity, their patience, and their professional guidance – especially helpful for a first-time author. And thanks as well to Nirmal Selvaraj (Production Editor) for supporting me through revisions, changes, and the occasional deadline slip.

Looking at the final result, I’m genuinely proud. The book is packed with hands-on content and aims to cover the full development spectrum with .NET nanoFramework—from the lower layers up to everyday C# application code. Thanks to the professional team from Springer Nature, the book looks really great.

I truly hope it’s useful to you – and possibly to your team. If it helps more engineers see .NET nanoFramework as a serious option for resource-constrained embedded devices, then it will have achieved exactly what I hoped.

Have fun with .NET nanoFramework!

Check out the book page and the LinkedIn post with the announcement.

The Long Road to Generics in .NET nanoFramework: a Personal Story

If you’ve read the official announcement about generics in .NET nanoFramework, you already have the “what” and the “how”. This post is the other half: the “why”, the “when”, and the very human parts in between.

Because, honestly, this journey didn’t start with a grand roadmap, an architecture meeting, or a neatly planned milestone.

It started during the COVID lockdowns.

Early 2020: confinement, curiosity, and a slightly dangerous mix

Back in early 2020, the world went into lockdown and, like many of you, I suddenly had more time in my head than in my calendar. And when that happens to developers, things tend to… happen.

Generics have been a fundamental feature in desktop .NET for years. But when .NET Micro Framework (.NETMF) first appeared, generics weren’t even a thing yet. That has a bigger impact than it sounds: the original building blocks weren’t designed with generics in mind, because the feature simply didn’t exist.

When generics eventually arrived in C#, the question naturally came up in the .NETMF world too. The answer was almost always the same, and it usually sounded like some variation of:

  • too much memory required
  • too much performance impact
  • code size will bloat
  • and a major implementation challenge

nanoFramework was no different. As the project matured and gained momentum, developers started asking the same questions – and the replies were usually based on the same assumptions. And, to be fair, if the people who invented .NETMF kept saying it wasn’t doable, that must be correct… right?

Still, generics remained one of those “it would be amazing, but…” topics. You know the kind:

  • “Wouldn’t it be great if we had generics?”
  • “Yeah, but… this can’t be done on these platforms.”

And here’s the thing about that sentence – “this can’t be done” – it tends to have a strange effect on my engineer brain.

It’s almost like a switch flips and I immediately want to test whether “can’t” means “physically impossible” or just “nobody has tried hard enough yet”.

So yes: a mix of stubbornness and curiosity. Possibly not the most responsible cocktail, but definitely a productive one.

The “Boldly Go” Moment

I’ve joked about it more than once, but it’s true: when I hear “this can’t be done”, it triggers that very specific urge to boldly go where no developer has gone before.

Not because I enjoy suffering (well… not always), but because difficult problems tend to hide the most interesting lessons. And if there’s one thing I love, it’s learning something new that also happens to make the platform better for everyone.

I also realized pretty quickly that this wouldn’t be a “single-module” change. Generics touch everything: the execution engine, the type system, the toolchain, the metadata processor, and yes, the debugger too.

So it began – with a naive C# application and one small generic class.

Roslyn compiled it and the metadata processor parsed it… and then loading it on nanoFramework failed (exactly as expected). From there, it became “just” a matter of peeling the onion. Backward.

It turned into a ping-pong between the metadata processor and the runtime type system: understand what was missing, decide what was absolutely required (without inflating nanoFramework beyond what makes sense), implement, test, repeat. Because this spans multiple components – some in C#, others in C and C++ – every iteration took time and demanded a lot of patience and care. Every change had to be validated, and every fix had to prove it wasn’t silently breaking something else.

Eventually, the iterations got longer… and things got more stable.

At some point, the debugger had to be brought in. After all, code using generics should be debugged like any other code, right?

For a few reasons, I kept most of this work quiet at the time: it was still fragile, I didn’t want to overpromise, and I also didn’t want to distract the team (or the community) with something that might still hit a hard wall. Only the core team knew the details. When I finally felt confident that this was a minimal viable concept, I was excited enough to tweet about it and share a screenshot of the trace from that naive app.

A hard task – and a very quiet internet

Once I got serious about it, reality hit quickly.

This hadn’t really been done before in these constrained environments in a way that stayed faithful to the .NET model and still fit inside the nanoFramework world.

And the usual “developer safety net” wasn’t there:

  • no helpful blog posts
  • no random GitHub repos to peek at
  • no “someone on Stack Overflow already solved this in 2014”
  • no expert I could casually ping with “quick question about MethodSpecs”.

It was just me… and the spec.

At one point, I remember feeling so comically alone in it that I tweeted what felt like a perfect representation of developing generics for nanoFramework: a picture of Lucky Luke riding into the sunset.

Because that’s honestly what it felt like: heading into a big quiet desert with a problem, a plan, and no idea where the next water stop would be.

ECMA-335: one of my unlikely lockdown companions

If you’ve never had ECMA-335 as bedtime reading… let’s just say it’s not exactly a page-turner.

But it’s also a masterpiece.

That specification became my guide, my reference, my “okay, so what does the CLI actually promise here?”, and occasionally my “wait… that’s how they intended it?”

I had to learn so much:

  • metadata details I didn’t even know existed
  • how generics are represented and resolved
  • how signatures carry the story
  • and how the runtime is supposed to reason about it all.

There were plenty of moments where I had to stop, reread, draw diagrams, and reconstruct the mental model piece by piece.

The surprise: how beautifully “simple” it is

And this is the part that still makes me smile.

After all the fear, all the complexity, all the “this must be impossible on a microcontroller”… I was genuinely surprised by the elegance of the underlying reasoning.

Generics are not magic. They’re not “special runtime fairy dust”.

They are, in a very real sense, a carefully defined set of rules encoded into metadata.

Once you see how it’s captured in things like:

  • signatures
  • TypeSpecs
  • MethodSpecs
  • GenericParams

…you start to realize it’s all there. Beautifully folded into the metadata in a way that a runtime can systematically unfold.

And at the end of the day, a generic type doesn’t remain some abstract concept floating above the execution engine. It gets resolved, specialized, and becomes something concrete. Something that the interpreter can run.

A “generic” ends up – effectively – as a regular type… just reached through a more powerful path.
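To make that concrete, here is a tiny, plain C# sketch (nothing nanoFramework-specific, and the type name is purely illustrative) of what the metadata actually describes:

```csharp
// Each closed form below (Box<int>, Box<string>) is described in metadata
// by a TypeSpec referencing the single Box<T> definition plus its type
// argument. The runtime resolves the spec into a concrete, runnable type -
// the "unfolding" described above.
var intBox = new Box<int>();
intBox.Set(42);
System.Console.WriteLine(intBox.Get());      // prints 42

var stringBox = new Box<string>();
stringBox.Set("hello");
System.Console.WriteLine(stringBox.Get());   // prints hello

// A minimal generic type: one type definition carrying a single
// GenericParam (T). No specialized code exists until the type is
// closed over a concrete T.
public class Box<T>
{
    private T _value;
    public void Set(T value) => _value = value;
    public T Get() => _value;
}
```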

That moment, when the mental model clicks and the complexity collapses into something understandable, is one of my favorite feelings in the engineering thought process.

Then life came back (and the project slowed down)

Of course, confinement didn’t last forever.

Once daily work ramped up again, the available “deep focus time” started shrinking. And as anyone who’s worked on hard runtime features knows, generics aren’t the kind of thing you casually chip away at in 20-minute intervals between meetings.

So progress slowed. Then stalled.

Not because the idea stopped being important, and not because the goal became less exciting – but because time is a very real constraint, and real life has a habit of demanding attention.

And still… the work was there. The understanding was there. The foundation was there. Waiting patiently.

2024: a serious boost (thank you, Skyworks)

In 2024, during a work project with Skyworks Inc., they showed interest in having generics in .NET nanoFramework. Beyond the feature itself, it would make their lives significantly easier when reusing and sharing code between desktop .NET and nanoFramework.

Skyworks decided to sponsor the work – and that’s when generics became active again in a serious way.

It’s more than fair that I acknowledge the people who backed this. In particular: Mike Muegel (Technical Director and Software Engineer), who has been a great friend and supporter of the project; Zehra Gozde Fidan (then Director of Solutions Development), who backed the idea of bringing .NET nanoFramework into Skyworks Timing Products’ developer toolbox; and Francisco Tamayo, who took over that role later.

It’s also worth highlighting something I genuinely respect: Skyworks doesn’t just take from open source. They contribute back and help ensure that the projects they depend on are maintained and can thrive.

This sponsorship gave generics development a real boost and moved it forward substantially. Then priorities shifted again, and progress slowed down once more.

2025: back at it, with renewed enthusiasm

Fast forward to 2025. A couple of months before the MVP Summit, it crossed my mind that this would be a great occasion to finally announce that generics were available in .NET nanoFramework.

So I got back to work with renewed enthusiasm.

Barely on time, it happened. Generics were in good enough shape to be used, and I had the pleasure of announcing it to my fellow IoT MVPs. That’s when the private preview started.

Private preview and the “last mile”

Once feedback started arriving, the real “last mile” began.

Work kicked off on the libraries needed to support broader, everyday use of generics – especially collections. And that, in turn, opened a classic can of worms: edge cases and execution paths that simply hadn’t been exercised before. More changes were required in type system structures, decoders, and handlers for IL instructions that interact with generics.

If you want a mental picture, try this:

One moment, 2,000+ unit tests in mscorlib and the metadata processor are all green and life is good.

Then you start adding tests for reported issues, and adding new generic-heavy types like Span<T>, Stack<T>, and List<T>.

And suddenly, a good portion of the test suite turns red.

That moment comes with a special mix of puzzlement and surprise – and, if I’m being honest, a few cursing words (which I’ll skip here to keep this post polite). It’s usually followed by a short burst of panic: “What if this can’t be done after all?”

And then you take a breath and think straight: “Wait. We already had generics running. We’ve tested plenty of scenarios. This has to be something specific – not proof that the whole idea is broken.”

So it was time to do the unglamorous work: go instruction by instruction, check the code behind each IL path, and figure out what was missing or falling short – then fix it.

The last months went by with slow progress, watching the test suite go green one case at a time. Sometimes several tests would flip to green at once. Hooray.

The slowness wasn’t accidental: each remaining bug (or missing chunk of implementation) was harder to find – and harder to fix – than the previous one.

More often than I’d like to admit, I had to fix the previous fix… and then fix that fix… until it covered all the variations and execution paths.

Along the way, I also discovered that a few IL instructions didn’t even have an implementation. They simply weren’t used by code that doesn’t deal with generics. No worries – I tackled those too.

And last but not least, when testing code ported from full .NET, I “stumbled” onto stackalloc, which is almost a must-have when working with Span<byte> – and Span<byte> is, in turn, close to a must-have for embedded scenarios.

That unlocked another serious batch of work: allocating storage, managing it safely, and making it fit into runtime structures that were never designed to deal with external storage.
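For context, this is the kind of everyday pattern that made stackalloc worth the effort – standard C#, as it appears in code ported from full .NET:

```csharp
using System;

// stackalloc reserves the buffer on the stack instead of the managed heap -
// no GC allocation, which matters on a memory-constrained microcontroller.
Span<byte> buffer = stackalloc byte[8];

// Span<byte> gives safe, bounds-checked access to that stack storage.
for (int i = 0; i < buffer.Length; i++)
{
    buffer[i] = (byte)(i * 2);
}

Console.WriteLine(buffer[3]); // prints 6
```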

Let me write it again so there are no doubts: this was a hard process. Often painful. Sometimes it involved a serious roadblock that lasted long enough to make me wonder whether I’d finally hit an impossible limit.

But persistence, enthusiasm (and yes, a solid portion of stubbornness) brought it to where it is today.

With the public preview, this is not the end of the journey. I’m pretty sure there are still bugs lurking in the code, and the community will find edge cases and execution paths that aren’t covered by the tests yet.

But that’s how it goes in this industry.

Why I’m Sharing This

I wanted this companion post to exist because big technical features don’t appear out of thin air. They come from late nights, stubborn curiosity, lots of reading, countless wrong turns, and those rare “oh wow, it’s actually elegant” moments.

Generics in .NET nanoFramework are not just a checkbox feature.

They’re a story:

  • of a weird period in the world,
  • of a personal challenge triggered by “can’t be done”,
  • of learning an intimidating spec deeply enough to trust it,
  • and of pushing a constrained platform closer to the .NET experience we all love.

And yes—also a story with Lucky Luke riding into the sunset.

I can’t tell you how happy (and proud) I am that all this can be released to public preview. This brings nanoFramework even closer feature-wise to full .NET – and it’s a testament to nanoFramework as a serious development framework.

Have fun with .NET nanoFramework!

Introducing the Würth Elektronik Orthosie-I Module with .NET nanoFramework

First off, a big thank you to the Portuguese team of Würth Elektronik for supplying the Orthosie-I evaluation kit – this support made it possible to add official .NET nanoFramework support for the module and to develop the sample code showcased in this post.

Orthosie-I Overview (ESP32-C3 Module with Wi-Fi & BLE)

The Orthosie-I is a compact IoT radio module (only 9.5 × 13 mm) that integrates an onboard PCB antenna for wireless connectivity. It’s based on Espressif’s ESP32-C3 system-on-chip (32-bit RISC-V core up to 160 MHz) with 4 MB flash and 400 kB RAM, and it supports 2.4 GHz Wi-Fi (802.11 b/g/n) as well as Bluetooth LE 5.0 wireless protocols. In spite of its tiny size, the module exposes a variety of interfaces (UART, SPI, I²C, ADC, etc.) and provides 15 configurable GPIOs for connecting sensors or peripherals. It’s offered by Würth Elektronik as a “BYOF” (build your own firmware) module, shipping blank with no firmware, so you can flash your own IoT application or use platforms like .NET nanoFramework.

Ready-to-Use .NET nanoFramework Support

Excitingly, the Orthosie-I is now fully supported by a ready-to-use .NET nanoFramework firmware image. The module can run the standard ESP32_C3_REV3 firmware provided for ESP32-C3 revision 3/4 devices, meaning you can flash the prebuilt firmware and get up and running with C# in minutes. If you’re new to .NET nanoFramework, check out the official getting started guide – it walks you through installing the Visual Studio extension, flashing the firmware (using the nanoff tool), and deploying your first C# application on a microcontroller. In short, Orthosie-I plus .NET nanoFramework provides an out-of-the-box managed code experience on the ESP32-C3.
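As a sketch of that flashing flow (the serial port name below is an assumption – use whatever port your board enumerates as):

```shell
# Install the nanoff flashing tool as a .NET global tool
# (assumes the .NET SDK is installed on the host machine)
dotnet tool install -g nanoff

# Flash the prebuilt ESP32_C3_REV3 firmware image onto the module
nanoff --target ESP32_C3_REV3 --serialport COM3 --update
```

After flashing, the device shows up in Visual Studio’s Device Explorer and is ready for C# deployment.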

Available GPIOs on the Orthosie-I Module

One of the key considerations in using any module is knowing which pins you can use in your application. Out of the Orthosie-I’s pads, the following GPIOs are exposed for general-purpose use (digital I/O, I²C/SPI signals, etc.) according to the module pinout: GPIO3 through GPIO7, plus GPIO10. These six GPIOs are free for your application logic. In addition, the Orthosie-I’s ESP32-C3 integrates a USB interface through GPIO18 and GPIO19. But if you don’t require the USB functionality, GPIO18 and GPIO19 can be repurposed as normal GPIOs as well. (Internally, the USB pins are only needed for native USB or serial-JTAG; if you disable or don’t use USB CDC, you effectively gain two extra GPIOs for other purposes.)

Note: Certain other pins on the module serve special roles – for example, GPIO2, GPIO8, and GPIO9 are used as boot mode strapping pins, and GPIO20/GPIO21 are tied to UART0 TX and RX, which are reserved by .NET nanoFramework for communication with the module as well as logging output. The six main GPIOs listed above have no such constraints, making them the primary choices for hooking up external devices.

Using I²C and SPI Buses (Sensors, Displays, etc.)

With .NET nanoFramework on Orthosie-I, you can easily configure the available GPIOs to act as I²C or SPI bus lines to connect a variety of peripherals (sensors, displays, etc.). The ESP32-C3’s flexible I/O matrix allows most pins to be used for these communication interfaces. For example:

  • I²C Bus Example: Use GPIO4 as SDA and GPIO5 as SCL. This gives you an I²C bus for connecting devices like temperature/humidity sensors, accelerometers, IO expanders, etc. Many sensors use I²C, and you can choose any free GPIO pair for SDA/SCL. In code, you would create an I2cDevice with those pin numbers (or use the default I²C bus if it’s pre-mapped to certain pins).
  • SPI Bus Example: Use GPIO6 for MOSI, GPIO7 for SCLK, and GPIO10 for MISO (plus one more GPIO of your choice for the CS line). This configuration can drive an SPI device such as an OLED/LCD display, a SPI flash, or a stepper motor controller. You could, for instance, connect a small OLED screen over SPI and use .NET nanoFramework graphics libraries to display information.

Of course, these are just example pin choices – you could use different GPIOs as needed (for instance, GPIO3 or GPIO18/19 if available). The .NET nanoFramework HAL lets you remap I²C/SPI pins in software by calling the configuration APIs (e.g. Configuration.SetPinFunction() with DeviceFunction.I2C_SDA/SPI_MOSI, etc.), so you have a lot of flexibility. This means no hardwired limitations on which pins can be I²C/SPI – just pick what suits your hardware layout.
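For instance, the SPI pin mapping from the example above could be sketched like this (the GPIO3 chip-select pick and the 1 MHz clock are assumptions for illustration, not requirements):

```csharp
using System.Device.Spi;
using nanoFramework.Hardware.Esp32;

// Route the SPI1 signals to the example pins from the list above
Configuration.SetPinFunction(6, DeviceFunction.SPI1_MOSI);
Configuration.SetPinFunction(10, DeviceFunction.SPI1_MISO);
Configuration.SetPinFunction(7, DeviceFunction.SPI1_CLOCK);

// GPIO3 as chip select is an arbitrary choice for this sketch
var settings = new SpiConnectionSettings(1, chipSelectLine: 3)
{
    ClockFrequency = 1_000_000,
    Mode = SpiMode.Mode0
};

// From here, any SPI peripheral driver can use this device
SpiDevice device = SpiDevice.Create(settings);
```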

To make life even easier, .NET nanoFramework provides a large collection of device driver libraries for common peripherals. The nanoFramework.IoT.Device repository contains drivers/bindings for all sorts of sensors, displays, and actuators. You can browse this library collection (ported largely from the .NET IoT bindings) and pull in a NuGet package for whatever device you’re using. For example, if you connect a BME280 environmental sensor or an SSD1306 OLED display to your Orthosie-I, there are C# drivers available – so you can read sensor data or draw to a screen with just a few lines of code, instead of writing low-level bus transactions. This rich set of device bindings means plug-and-play hardware integration: plug in the sensor, install the corresponding NuGet, and you’re ready to go.
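As a hedged sketch of what that looks like with the Iot.Device.Bmxx80 binding (the pin mapping matches the I²C example above; adjust pins and wiring to your own layout):

```csharp
using System.Device.I2c;
using System.Diagnostics;
using Iot.Device.Bmxx80;
using nanoFramework.Hardware.Esp32;

// Map the I2C1 bus onto the example pins (GPIO4 = SDA, GPIO5 = SCL)
Configuration.SetPinFunction(4, DeviceFunction.I2C1_DATA);
Configuration.SetPinFunction(5, DeviceFunction.I2C1_CLOCK);

I2cDevice i2c = I2cDevice.Create(
    new I2cConnectionSettings(1, Bme280.DefaultI2cAddress));

// The binding hides all register-level work behind a simple API
using Bme280 sensor = new Bme280(i2c);
var reading = sensor.Read();
Debug.WriteLine($"Temperature: {reading.Temperature?.DegreesCelsius} C");
```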

Cloud Connectivity (Azure, AWS, and MQTT)

Connecting your Orthosie-I based device to cloud services is straightforward with .NET nanoFramework, thanks to ready-made libraries for popular IoT platforms and protocols. In particular:

  • Azure IoT Hub: The nanoFramework.Azure.Devices SDK provides clients to fully interact with Azure IoT Hub services. It supports features like DPS (Device Provisioning Service), telemetry (device-to-cloud messages), cloud-to-device commands, direct methods, and device twin properties – essentially mirroring the functionality of the official Azure IoT .NET SDK, but optimized for microcontrollers. With a few lines of C# you can connect to an IoT Hub, authenticate with SAS tokens or X.509 certs, and start sending telemetry or receiving commands, all using the familiar patterns from full .NET.
  • AWS IoT Core: Similarly, there is a nanoFramework.Aws.IoTCore.Devices package that allows your device to connect to AWS IoT Core using MQTT and implement the AWS IoT device client protocols. This makes it easy to integrate with AWS IoT services if your cloud backend is built around AWS.
  • Standard MQTT Brokers: For any other cloud or local MQTT use cases, nanoFramework includes the nanoFramework.M2Mqtt library – a fully featured MQTT client that supports MQTT 3.1.1 and even 5.0. This lets you connect to any MQTT broker (e.g. Eclipse Mosquitto, HiveMQ, EMQX, or cloud services that use MQTT) for pub/sub messaging. Whether you’re sending data to a private server or subscribing to an MQTT topic for remote control, the M2Mqtt client has you covered with a familiar publish/subscribe API.
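To give a feel for the Azure side as well, here is a minimal hedged sketch using the nanoFramework.Azure.Devices SDK (the hub address, device ID, and key are placeholders, not real values):

```csharp
using nanoFramework.Azure.Devices.Client;

// Placeholders - substitute your own IoT Hub values
const string HubAddress = "my-hub.azure-devices.net";
const string DeviceId = "orthosie-01";
const string SasKey = "<device-sas-key>";

DeviceClient device = new DeviceClient(HubAddress, DeviceId, SasKey);

// Open the connection (MQTT underneath) and send one telemetry message
device.Open();
device.SendMessage("{\"temperature\": 21.5}");
```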

To illustrate how easy it is to use MQTT on the Orthosie-I (or any .NET nanoFramework device), here’s a short code snippet that connects to a public test broker, subscribes to a topic, and publishes a message:

using System.Diagnostics;
using System.Text;
using nanoFramework.M2Mqtt;
using nanoFramework.M2Mqtt.Messages;

...
// Connect to an open MQTT broker (Mosquitto test server in this example)
MqttClient client = new MqttClient("test.mosquitto.org", 1883, false, null, null, MqttSslProtocols.None);

// Set up a message received handler to process incoming messages
client.MqttMsgPublishReceived += (sender, e) =>
{
    string msg = Encoding.UTF8.GetString(e.Message);
    Debug.WriteLine($"Received on {e.Topic}: {msg}");
};

// Connect to the broker with a client ID
client.Connect("OrthosieDevice");

// Subscribe to a topic with QoS 0
client.Subscribe(new string[] { "test/topic" },
                 new MqttQoSLevel[] { MqttQoSLevel.AtMostOnce });

// Publish a message to the topic
client.Publish("test/topic",
               Encoding.UTF8.GetBytes("Hello from Orthosie!"),
               MqttQoSLevel.AtMostOnce,
               retain: false);

As shown, the code is very straightforward: you create an MqttClient, attach an event handler (to log any incoming messages), call Connect() with a unique client ID, then subscribe and publish. The MQTT client handles the low-level protocol for you. This could easily be extended to use TLS encryption (by specifying certificates) or to connect to cloud IoT hubs. With this in place, your Orthosie-I device can send events and react to commands in real time, integrating with IoT dashboards or other services.

Hosting a REST API or Web Config Portal

Beyond cloud connectivity, you might also want to provide a local web interface – for example, a small built-in web server on the device for configuration or status monitoring. The nanoFramework.WebServer library makes this remarkably simple. It’s a lightweight, asynchronous web server implementation that mimics the ASP.NET Core style of defining RESTful endpoints via C# attributes and handlers. Despite its size, it’s packed with features – you can define a REST API (GET/POST routes) using method attributes, handle multiple requests concurrently, parse query string parameters, and even serve static files for a simple web UI.

In practice, this means you could have your Orthosie-I module hosting a small configuration portal or REST API that users can access via a web browser or HTTP client. For example, you could create an endpoint /config/wifi that serves a page to configure Wi-Fi credentials, or an API like /api/sensor/latest that returns the latest sensor readings in JSON. The WebServer library takes care of the underlying HTTP parsing and routing – you just decorate your methods with the appropriate route attributes and the framework will call them on HTTP requests. This opens the door to easy integration with web apps or mobile apps (you could call REST endpoints on the device) and is great for user-friendly configuration without needing a physical serial connection. Essentially, your .NET nanoFramework device can also behave like a mini web server or IoT gateway, enabling interactive device setups and dashboards with minimal effort.
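A minimal hedged sketch with nanoFramework.WebServer (the route and the JSON payload are illustrative; a real handler would read actual sensor state):

```csharp
using System;
using System.Threading;
using nanoFramework.WebServer;

public class Program
{
    public static void Main()
    {
        // Start the server on port 80 and register the controller type;
        // the framework routes incoming requests to the attributed methods
        using (WebServer server = new WebServer(80, HttpProtocol.Http,
            new Type[] { typeof(SensorController) }))
        {
            server.Start();
            Thread.Sleep(Timeout.Infinite);
        }
    }
}

// Controller with an attribute-routed REST endpoint
public class SensorController
{
    [Route("api/sensor/latest")]
    [Method("GET")]
    public void GetLatest(WebServerEventArgs e)
    {
        // Return a JSON payload to the HTTP client
        WebServer.OutPutStream(e.Context.Response, "{\"temperature\": 21.5}");
    }
}
```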

Lowering the entry barrier with .NET & Visual Studio

To wrap up, developing on the Orthosie-I with .NET nanoFramework brings a level of productivity and power that developers coming from an ESP32 (IDF or Arduino) background will appreciate. You get to leverage modern C# APIs and the vast .NET ecosystem instead of writing in C/C++ – rich collection classes, string manipulation, networking, and parsing libraries, with no need to manually allocate and free memory, all on a microcontroller. The integration with Visual Studio means you have a full IDE at your disposal: code completion, NuGet package management, and one-click deployment. Most impressively, you can deploy and debug live over USB with the .NET nanoFramework extension – set breakpoints and step through C# code running on the Orthosie-I in real time, just as you would debug a desktop app. This real-time debugging ability (no more adding printf traces and reflashing firmware repeatedly) saves an enormous amount of time and makes embedded development feel much more approachable.

In short, .NET nanoFramework effectively lowers the barrier to entry for embedded/IoT development: you can go from zero to a cloud-connected, web-enabled device very quickly, using high-level tools and languages, all while still targeting a tiny, resource-constrained module. It’s the best of both worlds – the power and convenience of .NET, running on the efficient, wireless-enabled hardware that modules like the Würth Elektronik Orthosie-I provide.

Now go grab your Orthosie-I EVK and have fun with .NET nanoFramework!

Celebrating IoT Day with .NET nanoFramework – Empowering IoT Solutions with Ease

April 9 is Global IoT Day, a worldwide initiative that began in 2010 to spark conversations and collaboration around the Internet of Things. Since its inception, IoT Day has been celebrated annually with events across hundreds of locations globally – from conference halls and online meetups to casual gatherings in cafés and classrooms.

In honor of IoT Day, we want to congratulate the community and shine a spotlight on how .NET nanoFramework is enabling developers and decision-makers to build powerful IoT solutions with ease. 🎉

As we celebrate the innovations in IoT, let’s explore how .NET nanoFramework makes developing for IoT simple, productive, and fun. From writing embedded code in C# to leveraging the rich tooling of Visual Studio, and from broad hardware support to ready-made cloud libraries, .NET nanoFramework is empowering the IoT community to create the next generation of smart devices.

Coding IoT in C# – Simplicity & Productivity

One of the biggest advantages of .NET nanoFramework is the ability to write embedded software in C#, a high-level, memory-managed language. This means you can leverage your existing C#/.NET skills to program microcontrollers, instead of having to learn low-level C/C++ or obscure RTOS details. Developers can harness the powerful and familiar Microsoft Visual Studio IDE and their .NET C# knowledge to quickly write code without worrying about the low-level hardware intricacies of microcontrollers. In other words, .NET nanoFramework abstracts away the tedious bits of hardware access, letting you focus on your application logic.

Writing IoT applications in C# dramatically improves developer productivity. You get modern language features, a huge standard library subset, and even automatic memory management – all on tiny MCU devices! Desktop .NET developers will feel at home and can reuse their skills in embedded systems, effectively enlarging the pool of who can develop IoT solutions. This simplicity lowers the barrier to entry for newcomers and accelerates development for experts, making IoT prototyping and product development faster and more approachable than ever.
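To make that concrete, here is a minimal sketch of embedded C# on .NET nanoFramework – the classic LED blink. The pin number is an assumption (it varies per board; GPIO 2 commonly drives the on-board LED on ESP32 DevKits):

```csharp
using System.Device.Gpio;
using System.Threading;

// Open a GPIO pin as output and toggle it forever.
// Pin 2 is an assumption: adjust it to your board's LED pin.
GpioController gpioController = new GpioController();
GpioPin led = gpioController.OpenPin(2, PinMode.Output);

while (true)
{
    led.Toggle();
    Thread.Sleep(500);
}
```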

Full-Featured Development with Visual Studio

Another highlight of .NET nanoFramework is its top-notch development experience. It integrates with Visual Studio through an extension, so you get the same rich IDE features you’re used to: IntelliSense, project templates, solution management, and one-click deployment. Perhaps most impressively, you can set breakpoints and debug your embedded code in real time on the target device – just as you would debug a desktop application. Using Microsoft Visual Studio, a developer can deploy and debug the code directly on the real hardware, inspecting variables, stepping through code, and diagnosing issues with ease.

Think about that: no more printf debugging or blinking LEDs to figure out what your firmware is doing – you can actually step through C# code running on an ESP32 or STM32 in-circuit! This tight debugger integration saves countless hours and headaches. The familiar Visual Studio tooling also means easy project management with multiple projects, NuGet package integration for libraries, and source control – bringing modern DevOps workflows to IoT development. Overall, .NET nanoFramework provides an enterprise-grade developer experience for tiny embedded devices, which is both comforting and empowering for developers.

Multi-Platform Hardware Support (ESP32, STM32 and more)

IoT projects come in all shapes and sizes, and so do the hardware platforms. .NET nanoFramework shines here by offering broad support for multiple microcontroller architectures and boards. It currently runs on ARM Cortex-M cores and Espressif ESP32 series chips, among others. In fact, there are reference firmware builds for a variety of popular targets: numerous STM32 Nucleo and Discovery boards, ESP32 DevKits (most of ESP32 series are supported), as well as devices from Texas Instruments (CC3220, CC1352) and NXP (i.MX RT1060), to name a few.

What does this mean for you? Flexibility and choice. You can prototype on a cheap Wi-Fi enabled ESP32, move to a high-performance STM32 if needed, or pick hardware that best fits your product requirements – all without rewriting your application code. .NET nanoFramework’s HAL (Hardware Abstraction Layer) and drivers take care of the device-specific details. For IoT solution architects and technical decision-makers, this broad hardware support de-risks your technology stack: you’re not locked into a single vendor or chip family. Whether you need a power-efficient MCU for a battery-powered IoT sensor or a more powerful MCU for a complex task, chances are .NET nanoFramework has you covered. And if a new board comes along, the community can extend support to it thanks to the open-source nature of the platform. It’s IoT development on your terms!

Extensive IoT Device Bindings and Libraries

Building IoT solutions isn’t just about the microcontroller – it’s about all the sensors, actuators, and peripherals that bring the solution to life. .NET nanoFramework comes with an extensive collection of IoT device bindings to interact with a wide range of hardware components. This repository of bindings includes sensors, small displays, motors, and pretty much anything else you might want to connect to your nanoFramework-powered device. From temperature/humidity sensors and accelerometers to OLED screens and GPS modules, there’s likely a NuGet package ready to plug into your project.

These device libraries are a huge productivity booster: instead of writing low-level interface code (e.g., handling I²C/SPI transactions or bit-banging protocols), you can pull in a high-level API that exposes sensor readings or device controls in a simple, object-oriented way. Many of the bindings have been migrated from the mainstream .NET IoT repository, ensuring a level of maturity and consistency. In fact, the nanoFramework APIs strive to follow the ones from .NET IoT as closely as possible, to facilitate code reuse and leverage existing samples. This means if you find a C# code example for a sensor in a Raspberry Pi .NET Core context, there’s a good chance you can adapt it easily to .NET nanoFramework. The result is a rich ecosystem of device support that saves you time and lets you focus on the solution, not the boilerplate.
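As an illustrative sketch (assuming a BME280 environmental sensor and the Iot.Device.Bmxx80 binding, whose names mirror the .NET IoT API; bus number and address are assumptions), using a binding typically boils down to:

```csharp
using System.Device.I2c;
using System.Diagnostics;
using Iot.Device.Bmxx80;

// Create the I2C connection and hand it to the binding.
var settings = new I2cConnectionSettings(1, Bme280.DefaultI2cAddress);
var sensor = new Bme280(I2cDevice.Create(settings));

// The binding exposes readings as typed values - no register handling needed.
if (sensor.TryReadTemperature(out var temperature))
{
    Debug.WriteLine($"Temperature: {temperature.DegreesCelsius} °C");
}
```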

Seamless Cloud Connectivity (Azure and Beyond)

Modern IoT solutions often need to connect to cloud services for telemetry, remote control, or data storage. Here, .NET nanoFramework truly empowers developers by providing built-in libraries for cloud connectivity. Right out of the box, you have access to libraries for Azure IoT services and MQTT, making it straightforward to get your device online and talking to the cloud.

For instance, connecting to Azure IoT Hub or Azure IoT Central is made easy with official client libraries that support all the key features – device provisioning (DPS), SAS token or certificate authentication, cloud-to-device (C2D) and device-to-cloud (D2C) messaging, direct method calls, device twin properties, and even IoT Plug and Play capabilities. The nanoFramework Azure IoT libraries (such as nanoFramework.Azure.Devices.Client, nanoFramework.Azure.Devices.Provisioning.Client, and nanoFramework.Azure.Devices.Shared) implement these features over the MQTT protocol, in a way that mirrors the familiar patterns of the full Azure SDK for .NET. This means if you’re used to Azure’s .NET SDK, you’ll find the nanoFramework versions comfortable and easy to use. With just a few lines of C# code, your device can register itself with Azure IoT Hub and start sending telemetry or receiving commands.
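For illustration only – the hub address, device id, and key below are placeholders, and the exact signatures should be checked against the nanoFramework.Azure.Devices.Client package – the pattern looks roughly like this:

```csharp
using nanoFramework.Azure.Devices.Client;

// Placeholders: replace with your IoT Hub host name, device id, and SAS key.
var device = new DeviceClient("my-hub.azure-devices.net", "my-device", "<sas-key>");
device.Open();

// Send a device-to-cloud telemetry message.
device.SendMessage("{\"temperature\":21.5}");
```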

Not using Azure? No problem – the ecosystem includes MQTT support through the nanoFramework.M2Mqtt library, which lets you connect to any MQTT broker for pub/sub messaging. Whether it’s a local MQTT server, AWS IoT, or another cloud platform, you can speak MQTT from your device with minimal effort. There are also libraries like nanoFramework.Networking.AzureStorage to directly interact with Azure Storage services (e.g. to upload logs or download configs), showcasing the breadth of cloud integration. Using AWS? We’ve got you covered with the nanoFramework.Aws.IoTCore.Devices library.
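A hedged sketch of generic MQTT usage (broker address, client id, and topic are placeholders; names follow the M2Mqtt API):

```csharp
using System.Text;
using nanoFramework.M2Mqtt;

// Connect to a broker and publish one message.
// "broker.example.com" and the topic below are placeholders.
var mqtt = new MqttClient("broker.example.com");
mqtt.Connect("my-device-01");
mqtt.Publish("sensors/telemetry", Encoding.UTF8.GetBytes("{\"temp\":21.5}"));
```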

In short, .NET nanoFramework has ready-made building blocks for cloud connectivity, so you can take your IoT project from a standalone gadget to a connected device in the broader IoT ecosystem with ease. This enables scenarios like remote monitoring, firmware updates, data analysis in the cloud, and more – all using high-level C# libraries that abstract the heavy lifting.

Open-Source and Community Driven

Perhaps one of the most inspiring aspects of .NET nanoFramework is its community and open-source model. .NET nanoFramework is an open-source platform, hosted on GitHub, and developed by an enthusiastic community of contributors and maintainers. Being open-source means that anyone can inspect the code, suggest improvements, or even contribute features and fixes. This collaborative spirit ensures the framework stays up-to-date and grows to meet the needs of the IoT community. It also gives confidence to technical decision-makers that there’s transparency and no vendor lock-in – the source is open and governed under the .NET Foundation umbrella.

The project’s GitHub organization is the hub of activity – with dozens of repositories encompassing the core runtime, libraries, samples, and documentation. If you’re curious or have specific hardware in mind, you can likely find a sample or existing discussion in the repo. Moreover, the maintainers are very welcoming on GitHub and on the community Discord channels, so help is always available. The fact that .NET nanoFramework has already been running on devices for years (it’s a spiritual successor to the earlier .NET Micro Framework) gives it a solid foundation and a trove of community knowledge to draw from.

To dive deeper and explore all that .NET nanoFramework offers, make sure to check out the official API documentation which details all available namespaces and classes (the docs closely follow .NET IoT APIs, as noted). The documentation site is a great resource for finding how to use specific peripherals or libraries, with plenty of examples. And of course, if you’re ready to get your hands dirty, head over to the 👉 GitHub repository to get the source code, samples, and even the Visual Studio extension. You can grab the Visual Studio extension from the Marketplace and start deploying to a device in minutes.

Join the IoT Revolution with .NET nanoFramework

In this exciting era of IoT, .NET nanoFramework stands out as a platform that truly empowers developers and tech leaders alike. It brings the productivity of modern .NET development to the embedded world – enabling rapid prototyping, agile iteration, and reliable production code for IoT devices. As we congratulate the global IoT Day initiative on another year of fostering IoT innovation, it’s a perfect time to imagine what you could build next. Whether you’re a hobbyist connecting a few sensors at home, or a decision-maker architecting a fleet of smart devices for industry, .NET nanoFramework can be your secret weapon to build powerful IoT solutions with ease.

Happy IoT Day, and have fun with .NET nanoFramework! Here’s to the next generation of connected things that we’ll create together, with the help of tools like .NET nanoFramework. Now go forth and innovate – the IoT world is yours to shape. 🚀

(Reposted from original @ .NET nanoFramework blog)

Decoding ESP32 back trace

Any developer working with ESP32 has most likely come across one of those infamous “Guru Meditation Error” messages when the execution crashes. They look like this:

Basically, a cryptic output with a bunch of addresses that are begging to be decoded so one can even start to make sense of it! 😯 😥

It could be a limitation of mine, but I’ve struggled a long time with these messages and what to do with them. There must be a way to decode these, right?! After a long search I’ve found that such a tool exists and it’s readily available in the IDF SDK. Surprisingly (again, this could be my fault 😀 ) I haven’t seen it highlighted, let alone mentioned, in the IDF documentation about debugging…

The tool goes by the (somewhat obvious) name of xtensa-esp32-elf-addr2line. It gets installed in the xtensa tools folder, along with the GCC compiler. There is one specific for each ESP32 series, so you have xtensa-esp32s2-elf-addr2line for ESP32-S2 and so on.

As inputs, the tool takes the executable file (in ELF format) and the backtrace information from the guru meditation error message. This bit here:

It also accepts various options to format the decoded output, which can be listed using the usual -h option that will output the help information.

For the sake of the example, this is the full command line that I’m using to decode the above back trace data:

xtensa-esp32-elf-addr2line -aspCfire "my_executable_file.elf" 0x400f426e:0x3ffc5cb0 0x400f3a11:0x3ffc5cd0 0x400ec290:0x3ffc5cf0 0x400ec38b:0x3ffc5d40 0x400da253:0x3ffc5d60 0x400da321:0x3ffc5d80 0x400d8384:0x3ffc5da0 0x400d901d:0x3ffc5dc0 0x400e3575:0x3ffc5e00 0x400f88da:0x3ffc5e30 0x400f700d:0x3ffc5e50

And here’s the decoded output, which (out of curiosity) belongs to a nasty bug that we’ve been chasing for a long time in .NET nanoFramework.

Now we have a nice stack trace with the full call chain and, hopefully, a good starting point to start investigating what could be causing the fault.

VS Code Task for .NET nanoFramework

To make life easier for developers working on .NET nanoFramework, a VS Code Task was added to the toolbox which makes decoding an ESP32 backtrace very easy.

Assuming that the Tasks template provided in the nf-interpreter repository has been set up, the Task can be found under the name Decode ESP32 backtrace.

Next step is to choose the ESP32 series (it defaults to ESP32).

Next comes the path to the executable file to decode against. It defaults to the nanoCLR.elf that’s supposed to be found in the build directory after a successful build. In case the backtrace comes from a different executable, it’s a matter of replacing the default path with the full path to the respective file.

Last step is to enter the backtrace data coming from the guru meditation error output. Mind that only the backtrace line is required! Adding more than that won’t work.

The output with the stack trace shows in the console pane. Like this:

And this is it: simple and effective!

Now you can decode Guru Meditation Error messages from ESP32.

And have fun with .NET nanoFramework! 😉

20 years of .NET and what is missing in the picture

WOW! .NET is celebrating 20 years and that’s something! A bit of context so I’m not misinterpreted with what I’ll say next: I’ve been developing with .NET since 2005. I started with desktop applications, embraced embedded systems when .NETMF showed up, worked on several Azure technologies, Visual Studio extensibility, MSBuild and other endeavors. Since 2016 I’ve been working relentlessly on .NET nanoFramework. I 💜 .NET and I use it for anything and everything that I can!

The landing page for the celebration of 20 years of .NET is pretty neat and gives a perfect snapshot of .NET history and also – if you can read between the lines – a general overview of how the .NET ecosystem is pictured by the “top levels”.

.NET is the ONLY ecosystem that allows coding from the most complex multi-node, distributed cloud application down to blinking an LED on the tiniest microcontroller. Using exactly the same tools, the same language, the same technology and the same awesome experience from coding to debugging. This is a fact that Scott Hanselman highlighted and completely demonstrated during the last .NET Conf in the session “.NET Everywhere – Windows, Linux, and Beyond”. So expressive and clear that it just makes me want to repeat the above: .NET is the ONLY ecosystem where this is possible. (If you can prove me wrong, I’ll be happy to learn about it 🙂 )

Yet, the “small end” keeps being treated as the stepchild of the .NET ecosystem… For the life of me, I cannot understand why! On the landing page of the 20 years of .NET, two pivotal moments are clearly missing:
pivotal_1
And at the top right corner, the sentence there should be something like this: “Supported on Windows, Linux, macOS and microcontrollers.”
multiplatform_1
Yes, .NET can (also) run on Raspberry Pis and on tiny microcontrollers with just a few kilobytes of RAM and flash! And that should be shouted out for everyone to know. Until this is “fixed” and the “small end” of the .NET ecosystem is truly embraced and presented side by side, as an equal, just like the other tools and technologies there, we’ll keep failing to attract even more developers, more students learning to code and more companies developing their products. They’ll inevitably default to other technologies, for the simple reason that they don’t even know this exists, that it is possible, ridiculously simple, and that it can give them immense productivity, efficiency and joy. Or, to put it in other words, someone could be doing more on empowering every person and every organization on the planet to achieve more. 😉 Because those persons could, and they would, if only they knew…

Multithreading in .NET nanoFramework

.NET nanoFramework, just like the full .NET framework, has multithreading capability out of the box.

Because .NET nanoFramework (usually) runs on a single-core processor, there is no true parallel execution of multiple threads. There is a scheduler that allots a 20 ms time slice to each thread in a round-robin fashion. If a thread is able to run (meaning it is not blocked for any reason) it executes for 20 ms, after which execution moves on to the next “ready to run” thread, and so on.

By default, a .NET program is started with a single thread, often called the primary thread. However, it can create additional threads to execute code in parallel or concurrently with the primary thread. These threads are often called worker threads.

Why use multiple threads?

Unlike a desktop or server application that can take advantage of running on a multiprocessor or multi-core system, .NET nanoFramework threads are mainly a programming technique, but a very useful one when you are dealing with several tasks that benefit from concurrent execution or that need input from one another. For example: waiting for a sensor reading in order to perform a computation and decide which action to take. Another example is interacting with a device that requires constant polling for data. A web service serving requests is yet another typical multithreading use case.

How to use multithreading

You create a new thread by creating a new instance of the System.Threading.Thread class. There are a few ways of doing this.

Creating a thread object and passing in the instance method using a delegate.
(note that the instance method can be either a static method or a method in a class already instantiated)

Thread instanceCaller = new Thread(
                new ThreadStart(serverObject.InstanceMethod));

After this, the thread can be started like this:

instanceCaller.Start();

Another (and very cool!) way is to create a thread object using a lambda expression.

(note that this does not allow retaining a reference to the thread; in this example the thread is started immediately)

new Thread(() => 
    {
        Debug.WriteLine(
            ">>>>>> This inline code is running on another thread.");

        // Pause for a moment to provide a delay to make
        // threads more apparent.
        Thread.Sleep(6000);

        Debug.WriteLine(
            ">>>>>> The inline code by the worker thread has ended.");

    }).Start();

Passing parameters

Occasionally it is required to pass parameters to a worker thread. The simplest way of doing this is using a class to hold the parameters and have the thread method get the parameters from the class.

Something like this:

// Supply the state information required by the task.
ThreadWithState tws = new ThreadWithState(
    "This report displays the number", 42);

// Create a thread to execute the task, and then...

Thread t = new Thread(new ThreadStart(tws.ThreadProc));

// ...start the thread
t.Start();
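The ThreadWithState class itself isn’t shown above; a minimal sketch of what it could look like (the names match the snippet, the body is illustrative):

```csharp
using System.Diagnostics;

public class ThreadWithState
{
    // State captured at construction time, available to the thread method.
    private readonly string _text;
    private readonly int _number;

    public ThreadWithState(string text, int number)
    {
        _text = text;
        _number = number;
    }

    // Entry point for the worker thread: it reads the captured state.
    public void ThreadProc()
    {
        Debug.WriteLine($"{_text}: {_number}");
    }
}
```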

Retrieving data from threads

Another common situation is the requirement of accessing data that results from a thread’s execution. Like a sensor reading that arrives upon a polled access to a bus.

In this case, setting up a callback that gets executed by the thread just before it finishes is a very convenient way of dealing with this.

// Supply the state information required by the task.
ThreadWithState tws = new ThreadWithState(
    "This report displays the number",
    42,
    new ExampleCallback(ResultCallback)
);

Thread t = new Thread(new ThreadStart(tws.ThreadProc));
t.Start();
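The ExampleCallback delegate and the ResultCallback method aren’t shown either; a hypothetical version of ThreadWithState with callback support could look like this:

```csharp
using System.Diagnostics;

// Delegate type for handing a result back from the worker thread.
public delegate void ExampleCallback(int result);

public class ThreadWithState
{
    private readonly string _text;
    private readonly int _number;
    private readonly ExampleCallback _callback;

    public ThreadWithState(string text, int number, ExampleCallback callback)
    {
        _text = text;
        _number = number;
        _callback = callback;
    }

    public void ThreadProc()
    {
        Debug.WriteLine($"{_text}: {_number}");

        // Hand the result back just before the thread finishes.
        _callback?.Invoke(_number);
    }
}

// The method wrapped by the callback delegate.
private static void ResultCallback(int result)
{
    Debug.WriteLine($"Worker thread returned: {result}");
}
```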

Controlling threads execution

Thread execution can be controlled by means of the thread API, which allows starting, suspending and aborting a thread’s execution.

// create and start a thread
var sleepingThread1 = new Thread(RunIndefinitely);
sleepingThread1.Start();

Thread.Sleep(2000);

// suspend 1st thread
sleepingThread1.Suspend();

Thread.Sleep(1000);

// create and start 2nd thread
var sleepingThread2 = new Thread(RunIndefinitely);
sleepingThread2.Start();

Thread.Sleep(2000);

// abort 2nd thread
sleepingThread2.Abort();

// abort 1st thread
sleepingThread1.Abort();
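The RunIndefinitely method isn’t shown in the snippet; an illustrative version is simply an endless loop that sleeps regularly so other threads get scheduled:

```csharp
using System.Diagnostics;
using System.Threading;

// Runs until the thread is suspended or aborted from the outside.
private static void RunIndefinitely()
{
    while (true)
    {
        Debug.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId} is alive");
        Thread.Sleep(500);
    }
}
```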

Threads synchronization

A common scenario is to require a thread to wait for an external event before executing a block of code. To help with these use cases there are two classes that block a thread’s execution until they are signalled elsewhere: ManualResetEvent, which requires that the code resets the event, and AutoResetEvent, which resets automatically after releasing a waiting thread.

// ManualResetEvent is used to block and release threads manually.
// It is created in the unsignaled state.
private static ManualResetEvent mre = new ManualResetEvent(false);

private static AutoResetEvent event_1 = new AutoResetEvent(true);
private static AutoResetEvent event_2 = new AutoResetEvent(false);

The API of both types of events is very flexible and convenient, allowing events to be created already set or reset, and wait handles to wait forever or for a specified timeout.

private static void ThreadProc()
{
    Debug.WriteLine(
        $"{Thread.CurrentThread.ManagedThreadId} starts and calls mre.WaitOne()");

    mre.WaitOne();

    Debug.WriteLine($"{Thread.CurrentThread.ManagedThreadId} ends.");
}
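To complete the picture, here is a sketch of the signalling side: worker threads started with the ThreadProc above stay blocked on WaitOne() until some other thread calls Set():

```csharp
// Start a few workers; each blocks on mre.WaitOne().
for (int i = 0; i < 3; i++)
{
    new Thread(new ThreadStart(ThreadProc)).Start();
}

Thread.Sleep(500);

// Signal the ManualResetEvent: all waiting threads are released at once
// (an AutoResetEvent would release a single thread and reset itself).
mre.Set();
```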

Accessing shared resources

In multithreaded applications another common requirement is to manage access to shared resources, like a communication bus. In these situations, the several threads that need to access the resource can only do it one at a time. For this there is the C# lock statement. Using it ensures that the code protected by it can only be executed by one and only one thread. Any other thread that tries to execute that code block is put on hold until the thread executing it leaves the block.

private readonly object _accessLock = new object();
(…)
lock (_accessLock)
{
    if (_operation >= operationValue)
    {
        _operation -= operationValue;
        appliedAmount = operationValue;
    }
}
(…)
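As a self-contained illustration of the same idea (the names are made up for this sketch), here are two threads incrementing a shared counter under a lock:

```csharp
private static readonly object _counterLock = new object();
private static int _counter;

private static void Worker()
{
    for (int i = 0; i < 1000; i++)
    {
        // Only one thread at a time may execute this block,
        // so each read-modify-write of _counter is atomic.
        lock (_counterLock)
        {
            _counter++;
        }
    }
}
```

Starting two threads on Worker and waiting for both to finish always leaves _counter at 2000; without the lock, increments could race and some would be lost.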

Conclusion

All this is available out of the box from C# .NET base class library. Extremely easy to use and incredibly powerful. This empowers developers working on embedded systems to design and code multithreading applications with very little effort.

When debugging a multithreading application, developers can take advantage of the debugging capabilities of Visual Studio, examine execution of each thread, set breakpoints, check variables content, etc. All this without any special hardware or debugger configuration.

Samples for all the above scenarios are available in the .NET nanoFramework samples repository. Make sure to clone it and use it to explore the API and its capabilities.

Have fun with C# .NET nanoFramework!

Posted as Hackster.io project here.

Debugger output in C#

This came up following a conversation with a fellow developer about the use of Console.WriteLine() to output debug information, and the fact that such output is useless in production code and should therefore be removed.

Basically, the conversation was around the removal of this call when the build was made in Release flavour. If it should be automatically handled by the compiler or if one has to use other means to deal with this, like wrapping with compiler defines.

I got curious about this and made my way to the .NET documentation website so this would not become another instance of RTFM. 😉

Risking stating the obvious for some, here are my findings about this, mostly for my personal future reference and shared here in case they are useful for anyone else.

According to the documentation, Console.WriteLine() – and friends – are meant to be used to write to the standard output stream of console applications. This begs for a clarification about what exactly a “console” is… Again, from the documentation, a console “is an operating system window where users interact with the operating system”. In Windows OS, this is what we usually call the Command Prompt.

An interesting note on the documentation that caught my attention highlights this interesting bit: “do not use console class to display output in unattended applications”.
I found it particularly interesting because of my involvement in nanoFramework. nanoFramework applications are undoubtedly (and by definition) unattended applications. So, technically, we have it wrong in offering Console.WriteLine() in the API.

What is the alternative then? Let us dig deeper into .NET documentation.

The answer is the Debug class. There we can find the equivalent WriteLine() method and friends. The description is abundantly clear: “use methods in the Debug class to print debugging information and check your logic with assertions”. Bingo!

Further reading uncovers the answer to what brought me here: “the ConditionalAttribute attribute is applied to the methods of Debug. Compilers that support ConditionalAttribute ignore calls to these methods unless DEBUG is defined as a conditional compilation symbol”.

Wow! That is exactly the behaviour that we were looking for. Out of the box. Using Debug.WriteLine() instead of Console.WriteLine() outputs debug information – as intended – but it also removes it from production code without any hassle.

A final note to point out that the output can show in the Visual Studio Debug window or in the Immediate window, depending on the following setting in Visual Studio.

This makes it perfectly clear that one should use Debug.WriteLine() to output debug information to the VS output window and keep Console.WriteLine() when the intention is indeed to output to the console.

Taking this a step further, one can make use of the ConditionalAttribute to decorate methods that are meant only for debug. Something like this:

[System.Diagnostics.Conditional("DEBUG")]
public static void OutputDebugDetails(string message)
{
    Debug.WriteLine($"[details] {message}");
}

And that’s all folks!

Posting native events in nanoFramework

To deal with situations that require native coding, or when you need to add native code that has no place in the core library, there is Interop. It’s a rather powerful feature that opens immense possibilities.

Sending data from the native code to the C# managed application can be done easily by using either parameters passed by reference or sending the data in the return value. But… what if you need to signal your application about random events that occur in the native code? Well, now you can!

Support for CustomEvent has just been added to nanoFramework.Runtime.Events.

Let me explain how easy it is to use this cool feature. 😉

On your native code, all you have to do is add a single line of code. Really! Like this:

PostManagedEvent( EVENT_CUSTOM, 0, 1111, 2222 );

The payload that’s available consists of the last two parameters on that call. The first is a uint16 and the second a uint32.

Feel free to use those as you please. This includes all possible variants: use only the first one, or the second, or none if you just need to signal something. Or, if you have a lot of events to process, you can encode them.

This is it. Now up to the managed application!

Requirements? Add a reference to the nanoFramework.Runtime.Events NuGet package.

using nanoFramework.Runtime.Events;

Last step is setting up the event handler:

CustomEvent.CustomEventPosted += CustomEventHandler;

The event handler receives the data payload posted on the native code.

private static void CustomEventHandler(object sender, CustomEventArgs e)
{
    Console.WriteLine($"Custom event received. Data1: { e.Data1 } Data2: { e.Data2 }.");
}

And this is it! Easy-peasy.

When the event handler is called you can access the data posted on the native code through the properties of the CustomEventArgs class.
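If you have many event sources to multiplex, one hypothetical convention (not part of the API, just an example) is to use Data1 as an event type and Data2 as its value:

```csharp
private static void CustomEventHandler(object sender, CustomEventArgs e)
{
    // Hypothetical convention: Data1 selects the event type,
    // Data2 carries the associated value.
    switch (e.Data1)
    {
        case 1: // temperature changed, value in tenths of a degree
            Console.WriteLine($"Temperature: {e.Data2 / 10.0} °C");
            break;
        case 2: // button pressed, value is the button number
            Console.WriteLine($"Button {e.Data2} pressed");
            break;
        default:
            Console.WriteLine($"Unknown event {e.Data1}");
            break;
    }
}
```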

If you need, grab the sample from the official repo here.

Now, to paraphrase Carl Franklin from the .NET Rocks podcast: “go write some code!” 😊

Interop in .NET nanoFramework

Have you ever faced the situation of needing to add support for a specific hardware? Or to perform some computing intensive task that would be more efficiently executed in C/C++ rather than with managed C# code?

This is possible with the support that .NET nanoFramework has to plug “code extensions”. It’s called Interop.

What exactly does this do? It allows you to add C/C++ code (any code, really!) along with the corresponding C# API.
The C/C++ code of the Interop library is added to a nanoFramework image along with the rest of the nanoCLR.
As for the C# API: it is compiled into a nice .NET nanoFramework library that you can reference in Visual Studio, just like you usually do.

The fact that this is treated as an extension of the core is intended and, in fact, very positive and convenient. A couple of reasons:

  • Doesn’t require any changes in the main core code (which can be broken or may prove difficult to merge with changes from the main repository).
  • Keeps your code completely isolated from the rest. Meaning that you can manage and change it as needed without breaking anyone’s stuff.

How cool is this? 🙂

For the purpose of this post we are going to create an Interop project that includes two features:

  • Hardware related: reads the serial number of the CPU (this will only work on ST parts).
  • Software only related: implementing a super complicated and secret algorithm to crunch a number.

Note: it’s presumed that you have properly set up your build environment and toolchain and are able to build a working nanoFramework image. If you haven’t, we suggest you look at the documentation about it here.

Before we start coding, there are a few aspects that you might want to consider.

Consider the naming of the namespace(s) and class(es) that you’ll be adding. Those should have meaningful names. You’ll see later on that these names will be used by Visual Studio to generate code and other bits of the Interop project. If you start with something and keep changing it, you might find yourself in trouble because your version control system will find differences. Not to mention that other users of your Interop library (or even you) might start getting breaking changes in the API that you are providing them. (You don’t like when others do that to you, do you? So… be a pal and pay attention to this, OK? 🙂 )

Creating the C# (managed) Library

Create a new .NET nanoFramework project in Visual Studio

This is the very first step. Open Visual Studio, File, New Project.
Navigate to C# nanoFramework folder and select a Class Library project type.
For this example we’ll call the project “NF.AwesomeLib”.

nanoframework-interop-sample-01

Go to the Project properties (click the project icon in the Solution Explorer and go to the Properties window) and navigate to the nanoFramework configuration properties view. Set the “Generate stub files” option to YES and the root name to NF.AwesomeLib.

nanoframework-interop-sample-02

Now rename the Class1.cs that Visual Studio adds by default to Utilities.cs. Make sure that the class name inside that file gets renamed too. Add a new class named Math.cs. In both files, make sure that the class is public.

Your project should now look like this.

nanoframework-interop-sample-03.png

Adding the API methods and the stubs

The next step will be adding the methods and/or properties that you want to expose on the C# managed API. These are the ones that will be called from a C# project referencing your Interop library.
We’ll add a HardwareSerial property to the Utilities class and a call to the native method that supports the API at the native end. Like this.

using System.Runtime.CompilerServices;

namespace NF.AwesomeLib
{
    public class Utilities
    {
        private static byte[] _hardwareSerial;

        /// <summary>
        /// Gets the hardware unique serial ID (12 bytes)
        /// </summary>
        public static byte[] HardwareSerial
        {
            get
            {
                if (_hardwareSerial == null)
                {
                    _hardwareSerial = new byte[12];
                    NativeGetHardwareSerial(_hardwareSerial);
                }

                return _hardwareSerial;
            }
        }

        #region Stubs

        [MethodImpl(MethodImplOptions.InternalCall)]
        private static extern void NativeGetHardwareSerial(byte[] data);

        #endregion stubs
    }
}

A few explanations on the above:

  • The property HardwareSerial has only a getter because we are only reading the serial from the processor. As that can’t be written, it doesn’t make sense to provide a setter, right?
  • The serial number is being stored in a backing field to be more efficient. When it’s read the first time it will go and read it from the processor. On subsequent accesses that won’t be necessary.
  • Note the summary comment on the property. Visual Studio uses that to generate an XML file that makes the awesome IntelliSense show that documentation on the projects referencing the library.
  • The serial number of the processor is handled as an array of bytes with a length of 12. This was taken from the device manual.
  • A stub method must exist to enable Visual Studio to create the placeholder for the C/C++ code, so you need one for each native call that is required.
  • The stub methods must be implemented as extern and be decorated with the MethodImplAttribute attribute. Otherwise Visual Studio won’t be able to do its magic.
  • You may want to settle on a convention for naming the stubs and where you place them in the class. Maybe you want to group them in a region, or you prefer to keep each one next to its caller method. Any of those approaches will work; this is just a hint to keep things organized.

Moving on to the Math class. We’ll now add an API method called SuperComplicatedCalculation and the respective stub. It will look like this:

using System.Runtime.CompilerServices;

namespace NF.AwesomeLib
{
    public class Math
    {
        /// <summary>
        /// Crunches a value through a super complicated and secret calculation algorithm.
        /// </summary>
        /// <param name="value">Value to crunch.</param>
        public double SuperComplicatedCalculation(double value)
        {
            return NativeSuperComplicatedCalculation(value);
        }

        #region Stubs

        [MethodImplAttribute(MethodImplOptions.InternalCall)]
        private static extern double NativeSuperComplicatedCalculation(double value);

        #endregion
    }
}

And that’s all that’s required on the managed side. Build the project and look at the project folder, using VS Code for example. This is how it looks after a successful build:

nanoframework-interop-sample-04.png

From the top, you can see in the bin folder (Debug or Release flavor) the .NET library that should be referenced in other projects. Please note that besides the .dll file there is the .xml file (the one that will allow IntelliSense to do its thing), the .pdb file and another one with a .pe extension.
When distributing the Interop library make sure that you supply all four files. Failing to do so will make Visual Studio complain that the project can’t build. You can bundle them all in a ZIP or, even better, as a NuGet package.
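If you go the NuGet route, a minimal .nuspec could look something like the sketch below. Note that the version, authors and target folder here are assumptions for illustration; adjust them to your own packaging conventions:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>NF.AwesomeLib</id>
    <version>1.0.0</version>
    <authors>your-name-here</authors>
    <description>Interop library exposing the hardware serial and a calculation.</description>
  </metadata>
  <files>
    <!-- all four build outputs must travel together -->
    <file src="bin\Release\NF.AwesomeLib.dll" target="lib" />
    <file src="bin\Release\NF.AwesomeLib.xml" target="lib" />
    <file src="bin\Release\NF.AwesomeLib.pdb" target="lib" />
    <file src="bin\Release\NF.AwesomeLib.pe"  target="lib" />
  </files>
</package>
```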

Working on the C/C++ (native) code

Moving to the Stubs folder we find a bunch of files and a .cmake file. All of those are required when building the nanoCLR image that will add support for your Interop library.
Look at the file names: they follow the namespace and class naming in the Visual Studio project.
Something very, very important: don’t even think about renaming those files or messing with their content. If you do, you risk the image build failing, or you can end up with the Interop library not doing anything. This can be very frustrating and very hard to debug. So, again, DO NOT mess around with those!

The only exception to that will be, of course, the ones that include the stubs for the C/C++ code that we need to add. Those are the .cpp files that end with the class name.
In our example those are: NF_AwesomeLib_NF_AwesomeLib_Math.cpp and
NF_AwesomeLib_NF_AwesomeLib_Utilities.cpp.

You’ve probably noticed that there are a couple of other files with a similar name but ending with _mshl. Those are to be left alone. Again, DO NOT change them.

Let’s look at the stub file for the Utilities class. That’s the one that will read the processor serial number.

void Utilities::NativeGetHardwareSerial( CLR_RT_TypedArray_UINT8 param0, HRESULT &hr )
{
}

This is an empty C++ function named after the class and the stub method that you placed in the C# project.

Let’s take a moment to understand what we have here.

  • The return value of the C++ function matches the type of the C# stub method, which is void in this case.
  • The first argument has a type that maps between the C# type and the equivalent C++ type. An array of bytes in this case.
  • The last argument is an HRESULT type whose purpose is to report the result of the code execution. We’ll get back to this, so don’t worry about it for now. Just understand its purpose.

According to the programming manual, STM32F4 devices have a 96-bit (12-byte) unique serial number stored starting at address 0x1FFF7A10. For STM32F7 that address is 0x1FF0F420. In other STM32 series the ID may be located at a different address. Now that we know where it is stored, we can add code to read it. I’ll start with the code and then walk through it.

void Utilities::NativeGetHardwareSerial( CLR_RT_TypedArray_UINT8 param0, HRESULT &hr )
{
    if (param0.GetSize() < 12)
    {
        hr = CLR_E_BUFFER_TOO_SMALL;
        return;
    }

    memcpy((void*)param0.GetBuffer(), (const void*)0x1FFF7A10, 12);
}

The first if statement is a sanity check to be sure that there is enough room in the array to hold the serial number bytes. Why is this important?
Remember that we are not in the C# world anymore, where the CLR and Visual Studio take care of the hard stuff for us. In C++ things are very different! In this particular example, if the caller hadn’t reserved the required 12 bytes in memory to hold the serial array, writing the 12 serial bytes there could overwrite whatever is stored in the memory space ahead of the argument address. For types other than pointers, such as bytes, integers and doubles, this check is not required.

Still on the if statement you can see that, if there is not enough room, we can’t continue. Before the code returns we set hr to CLR_E_BUFFER_TOO_SMALL (that’s the argument that holds the execution result, remember?). This signals that something went wrong and gives some clue about what that might be. There is still more to say about this result argument, so we’ll get back to it.
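The same guard-then-copy pattern can be sketched in plain C, outside the nanoFramework headers. The function and the fake serial buffer below are purely illustrative, not part of the real stub:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* stand-in for the 12-byte serial stored in device memory */
static const uint8_t device_serial[12] =
    { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C };

/* copies the serial into dest; fails when the caller's buffer is too small,
   mirroring the hr = CLR_E_BUFFER_TOO_SMALL path in the stub */
int get_hardware_serial(uint8_t *dest, size_t dest_size)
{
    if (dest == NULL || dest_size < sizeof(device_serial))
    {
        return -1;
    }

    memcpy(dest, device_serial, sizeof(device_serial));
    return 0;
}
```

The check costs a couple of instructions but turns a silent memory corruption into a clean, reportable error.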

The next piece of code is where – finally – we read the serial from the device.
As the serial number is accessible at a memory address, we can simply use memcpy to copy it from its memory location to the argument.
A few comments about the argument type (CLR_RT_TypedArray_UINT8): it acts like a wrapper for the memory block that holds the array (or a pointer, if you prefer). The class for that type provides a function – GetBuffer() – that returns the actual pointer, allowing direct access to it. We need that because we have to pass a pointer when calling memcpy. This may sound a bit complicated, I agree. If you’re curious about the implementation details or want to know how it works, I suggest you delve into the nanoFramework repo code and take a look at all this.

And that’s it! When this function returns, the CPU serial number will be in the argument pointer and will eventually pop up in the C# managed code in the argument with the same name.

For the Math class there won’t be any calls to hardware or any other fancy stuff, just a complicated and secret calculation to illustrate the use of Interop for simple code execution.
Visual Studio has already generated a nice stub for us to fill in with code. Here’s the original stub:

double Math::NativeSuperComplicatedCalculation( double param0, HRESULT &hr )
{
    double retVal = 0;
    return retVal;
}

Note that the stub function, again, matches the declaration of its C# managed counterpart and, again, has that hr argument to return the execution result.
Visual Studio was kind enough to add the code for the return value so we can start coding right away. Actually, that has to be exactly there, otherwise this code wouldn’t even compile. 😉

Here is the super complicated and secret algorithm:

double Math::NativeSuperComplicatedCalculation( double param0, HRESULT &hr )
{
    double retVal = 0;

    retVal = param0 + 1;

    return retVal;
}

And with this we complete the “low level” implementation of our Interop library.

Adding the Interop library to a nanoCLR image

The last step that is missing is actually adding the Interop source code files to the build of a nanoCLR image.

You can place the code files pretty much anywhere you want. Despite that, and to keep things tidy, the repo has a folder named InteropAssemblies that you can use for exactly this: holding the folders of all the Interop assemblies that you’re adding. Any changes inside that folder won’t be picked up by Git, so it’s up to you to keep those under source control. Or not.
To keep it simple we’ll follow that pattern and just copy what is in the Stubs folder into a new folder, InteropAssemblies\NF.AwesomeLib\.

The next file to get our attention is FindINTEROP-NF.AwesomeLib.cmake. nanoFramework uses CMake to generate the build files. Skipping the technical details, suffice it to say that, as far as CMake is concerned, the Interop assembly is treated as a CMake module. Because of that, for it to be properly included in the build, the file has to be named FindINTEROP-NF.AwesomeLib.cmake and copied into the CMake\Modules folder.

Inside that file the only thing that requires your attention is the first statement where the location of the source code folder is declared.


(...)
# native code directory
set(BASE_PATH_FOR_THIS_MODULE "${BASE_PATH_FOR_CLASS_LIBRARIES_MODULES}/NF.AwesomeLib")
(...)

If you are placing it inside the InteropAssemblies folder the required changes are:


(...)
# native code directory
set(BASE_PATH_FOR_THIS_MODULE "${PROJECT_SOURCE_DIR}/InteropAssemblies/NF.AwesomeLib")
(...)

And this is it! Now to the build.

If you are using the CMake Tools extension to build inside VS Code, you need to declare that you want this Interop assembly added to the build. Do so by opening the CMakePresets.json file for the target you’ll be building.
There you need to add the following entry to the cacheVariables list (in case it’s not already there):

"NF_INTEROP_ASSEMBLIES": "NF.AwesomeLib",

A couple of notes about this:

  • The NF_INTEROP_ASSEMBLIES option expects a comma-separated list of Interop assembly names. This is because you can add as many Interop assemblies as you need to a nanoCLR image.
  • The name of the assembly must match the class name exactly, dots included. If you get this wrong, you’ll notice it in the build.

Another option, if you prefer to call CMake directly from the command prompt, is to add the option to the CLI arguments following the same pattern, like this:

-DNF_INTEROP_ASSEMBLIES="NF.AwesomeLib"
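Since the option takes a comma-separated list, adding a second Interop assembly to the same image would look like this (NF.AnotherLib is a hypothetical name used only for illustration):

```
-DNF_INTEROP_ASSEMBLIES="NF.AwesomeLib,NF.AnotherLib"
```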

It’s important to stress this: make sure you strictly follow the above.
Mistakes such as failing to add the CMake find module file to the modules folder, naming it something else, or having the source files in a directory other than the one that was declared will lead to errors, or the library won’t be included in the image. This can very quickly lead to frustration. So, please, be very thorough with this part.

The next task is launching the image build. It’s assumed that you have properly set up your build/toolchain, so go ahead and launch that build!

Fingers crossed that you won’t get any errors… 😉

The first check is the CMake preparation output, where you should see the Interop library listed:

nanoframework-interop-sample-05

A successful CMake preparation stage (that includes the Interop assembly, as listed above) will end with:

nanoframework-interop-sample-06

After the build completes successfully, you should be seeing something similar to this:

nanoframework-interop-sample-09

Reaching this step is truly exciting, isn’t it?! 🙂
Now go and load the image on a real board!

The next check, after loading a target with the nanoCLR image that includes the Interop library, is seeing it listed in the native assemblies listing. After booting, the target shows up in the Visual Studio Device Explorer list; click the Device Capabilities button and you’ll see it in the output window, like this:

nanoframework-interop-sample-07.png

Congratulations, you did it! 😀 Let’s go now and start using the Interop library.

Using an Interop library

This works just like any other .NET library that you use every day. In Visual Studio open the Add Reference dialog and browse for the NF.AwesomeLib.dll file that was the output of building the Interop project. You’ll find it in the bin folder. As you go through that, note the companion XML file with the same name. With that file there, you’ll see the documentation comments showing in IntelliSense as you code.

This is the code to test the Interop library. In the first part we read the CPU serial number and output it as a hexadecimal-formatted string. In the second we call the method that crunches the input value.


using System;

namespace TestInteropMFConsoleApplication
{
    public class Program
    {
        public static void Main()
        {
            // testing cpu serial number
            string serialNumber = "";

            foreach (byte b in NF.AwesomeLib.Utilities.HardwareSerial)
            {
                serialNumber += b.ToString("X2");
            }

            Console.WriteLine("cpu serial number: " + serialNumber);

            // test complicated calculation
            NF.AwesomeLib.Math math = new NF.AwesomeLib.Math();
            double result = math.SuperComplicatedCalculation(11.12);

            Console.WriteLine("calculation result: " + result);
        }
    }
}

Here’s a screenshot of Visual Studio running the test app. Note the serial number and the calculation result in the Output window (in green), and the DLL listed in the project references (in yellow).

nanoframework-interop-sample-10.png

Supported types in interop method calls

Except for strings, you’re free to use any of the standard types in the arguments of the Interop methods. It’s OK to use arrays of those too.

As for returning data, in case you need it, you are better off using arguments passed by reference and updating those in C/C++. Just know that arrays as return types or as by-reference parameters are not supported.

The following table shows the supported types and their correspondence between platforms/languages.

| CLR type      | C/C++ type | C/C++ ref type (C# ref) | C/C++ array type         |
|---------------|------------|-------------------------|--------------------------|
| System.Byte   | uint8_t    | UINT8*                  | CLR_RT_TypedArray_UINT8  |
| System.UInt16 | uint16_t   | UINT16*                 | CLR_RT_TypedArray_UINT16 |
| System.UInt32 | uint32_t   | UINT32*                 | CLR_RT_TypedArray_UINT32 |
| System.UInt64 | uint64_t   | UINT64*                 | CLR_RT_TypedArray_UINT64 |
| System.SByte  | int8_t     | CHAR*                   | CLR_RT_TypedArray_INT8   |
| System.Int16  | int16_t    | INT16*                  | CLR_RT_TypedArray_INT16  |
| System.Int32  | int32_t    | INT32*                  | CLR_RT_TypedArray_INT32  |
| System.Int64  | int64_t    | INT64*                  | CLR_RT_TypedArray_INT64  |
| System.Single | float      | float*                  | CLR_RT_TypedArray_float  |
| System.Double | double     | double*                 | CLR_RT_TypedArray_double |
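The ref column in the table maps C# ref parameters to plain pointers on the native side. Stripped of the nanoFramework types, this is the ordinary C output-parameter pattern; the function below is purely illustrative, not generated stub code:

```c
#include <stddef.h>
#include <stdint.h>

/* writes a result back through a pointer, the way a C# `ref ushort`
   argument surfaces as UINT16* in the generated native signature */
void increment_counter(uint16_t *counter)
{
    if (counter != NULL)
    {
        *counter += 1;
    }
}
```

On the C# side the caller would declare the stub parameter as ref ushort, and any value written through the pointer is visible to managed code after the call returns.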

Final notes

To wrap this up I would like to point out some hints and warnings that can help you further when dealing with Interop libraries.

  • Not all CLR types are supported as arguments or return values for the Interop stubs in the C# project. If the project doesn’t build and shows you an enigmatic error message, that’s probably the reason.
  • Every time the Interop C# project is built, the stub files are generated again. Because of this you may want to keep the ones you’ve added code to in a separate location. Using a version control system and a proper diff tool will help you merge any changes caused by changes in the C# code, such as renames, new methods, new classes, etc.
  • When Visual Studio builds the Interop C# project, a fingerprint of the library is calculated and included in the native code. You can check this in the NF_AwesomeLib.cpp file (in the stub folder): look for the assembly name and a hexadecimal number right below it. This is what .NET nanoFramework uses to check whether the native counterpart for that particular assembly is available on the device before it deploys an application. And when I say that, I mean it. If you change anything that might break the interface (such as a method name or an argument), it will break: in the “client” project Visual Studio will complain that the application can’t be deployed. Those changes include the project version of the C# Interop project too, so you can use this as you would any project version number.
  • The hr (return parameter) is set to S_OK by default, so if nothing goes wrong in the code you don’t have to change it. When there are errors you can set it to an appropriate value that will surface in C# as an exception. You may want to check the src\CLR\Include\nf_errors_exceptions.h file in the nanoFramework repo.
  • Feel free to mix managed code into your C# Interop project too. If you have a piece of C# code that helps the library achieve its goal, just add it there. As long as it builds, anything is valid, either before or after the calls to the C/C++ stubs. You can even go crazy and call as many C/C++ stubs as you want inside a C# method.

And that’s all! You can find all the code related to this blog post in the nanoFramework samples repo.

With all this, I hope I was able to guide you through this very cool (and handy!) feature of .NET nanoFramework. Enjoy it!

Note: this post is a reviewed version of the original one published back in 2016. That post was about the now-defunct .NET Micro Framework.