cerkit.com https://cerkit.com Sat, 14 Mar 2026 23:03:40 +0000

Reimagining the Keys: The Left-Handed Keyboard Inversion VST https://cerkit.com/reimagining-the-keys-the-left-handed-keyboard-inversion-vst/ Sat, 14 Mar 2026 23:02:03 +0000 Discover how mirroring the piano keyboard around the axis of D can transform the playing experience for left-handed musicians. Learn the music theory and technical implementation behind this MIDI inversion VST designed for better ergonomics and creative flow.

The post Reimagining the Keys: The Left-Handed Keyboard Inversion VST first appeared on cerkit.com.

As a left-handed musician, I’ve often felt the subtle “right-hand bias” built into the architecture of the piano. The traditional layout—where the right hand handles the high-flying melodies and the left hand provides the lower accompaniment—can sometimes feel at odds with the natural dexterity of a southpaw.

That’s why I developed the Left-Handed Keyboard Inversion VST. This MIDI effect plugin flips the script (and the scales) by mirroring the entire keyboard, allowing for a completely different ergonomic experience.

Check out a short video demonstrating the inverted keyboard.

The Theory: Mirroring Around the Axis of D

The concept is based on the inherent symmetry of the piano keyboard. If you look at the pattern of black and white keys, the note D sits exactly in the center of each group of two black keys. This makes it a perfect axis of symmetry.

The plugin works by applying a simple but powerful mathematical formula to every MIDI note you play:

Inverted MIDI Note = 124 - Original MIDI Note

Because the original and inverted notes always sum to 124, the pivot point is D4 (MIDI note 62). When you play a D4, you get a D4 back. But as you move physically “up” the keyboard to the right, the pitches move “down” into the bass register.
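As a sketch of the mapping (shown here in Python for illustration; the plugin itself is C++/JUCE), note that the transform is its own inverse, so applying it twice returns the original note:

```python
def invert(note: int) -> int:
    """Mirror a MIDI note around D4 (62): original + inverted always sum to 124.

    A real plugin would also clamp or drop results outside the 0-127 MIDI range
    (inputs above 124 would map below 0).
    """
    return 124 - note

print(invert(62))  # D4 -> 62: D4 is the fixed point of the mirror
print(invert(60))  # C4 (60) -> 64 (E4): a major second below D maps a major second above
print(invert(74))  # D5 (74) -> 50 (D3): one octave above the axis maps one octave below
```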

How It Works in Practice

When the plugin is active, your physical relationship with the instrument is reversed:

  • Left Hand (Physical Low End): Now plays the high-frequency melody notes.
  • Right Hand (Physical High End): Now handles the low-frequency bass notes.

This isn’t just a gimmick; it’s an ergonomic shift. It places the most complex, dexterous part of the performance—the melody—firmly in the hands of the most dexterous part of the musician: the left hand.

Technical Implementation

The plugin is built using the JUCE framework and operates as a pure MIDI effect. It handles:

  • Real-time Note Translation: Every Note On and Note Off message is intercepted and recalculated instantly.
  • State Tracking: It intelligently tracks active notes to ensure that if you release a key, the correct inverted “Note Off” is sent, preventing hanging notes even if you toggle the bypass mid-performance.
  • Zero Latency: Because it’s a simple mathematical transformation of MIDI data, it introduces no perceptible delay to your playing.
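The state-tracking idea can be sketched as a small dictionary from incoming notes to the inverted notes actually sounded, so a release always turns off the pitch that was really playing. A hypothetical Python rendering (the real plugin does this in C++ against JUCE MIDI buffers):

```python
# Map of original incoming note -> inverted note actually sounded.
active = {}

def note_on(note: int) -> int:
    """Invert the note and remember what we actually played."""
    inverted = 124 - note
    active[note] = inverted
    return inverted

def note_off(note: int) -> int:
    """Release whatever was sounded for this key, not a freshly computed value,
    so a mapping change (e.g. toggling bypass mid-note) can't strand a note."""
    return active.pop(note, 124 - note)

sounded = note_on(60)    # C4 pressed -> E4 (64) sounds
released = note_off(60)  # key released -> E4 (64) turned off, entry cleared
```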

Why Invert?

For many left-handed pianists, this creates a fascinating new way to interact with virtual instruments. It allows you to leverage your natural hand-dominance in a way that the standard piano layout doesn’t easily permit. Whether you’re looking for a fresh creative spark or a more natural flow for your lead lines, the keyboard inversion provides a literal new perspective on your music.

The Three Me’s – A Strategy for Making Good Decisions https://cerkit.com/the-three-mes-a-strategy-for-making-good-decisions/ Wed, 04 Mar 2026 03:11:59 +0000 “The Three Me’s” is a proven strategy for making good decisions. Explore this mental framework to navigate dilemmas, balance priorities, and build a better future.

The post The Three Me’s – A Strategy for Making Good Decisions first appeared on cerkit.com.

When I’m faced with a decision, dilemma, or a situation that requires additional consideration, it is easy to get overwhelmed by the immediate pressure of choosing the “right” path. Over time, I’ve developed a mental framework to help navigate these choices: I like to think of myself as three different people. There is the Past Me, the Current Me, and the Future Me. By taking a step back and consulting all three of these perspectives, I can cut through the noise and gain a much clearer view of the best path forward.

First, I check in with the Current Me. This step is about defining the immediate reality of the situation and asking myself what I genuinely want the outcome to be right now. The Current Me represents my immediate desires, constraints, capabilities, and emotions. It is important to acknowledge what I want in the present moment, but it is equally important not to let this be the only voice in the room. The present is often heavily influenced by fleeting feelings or the appeal of short-term convenience, which is why the other two perspectives are so vital.

Next, I consult the Past Me. This version of myself holds the library of my experiences, successes, and, most importantly, my mistakes. I ask the Past Me what he would do based on everything he has learned so far. This is the voice of hard-earned wisdom. Has he been in a similar situation before? How did it play out? I treat this perspective as my personal advisor. If the Past Me gives a warning based on a previous misstep, I make sure to listen closely. Ignoring the lessons of the past is a quick way to repeat them.

Finally, I project forward and consider the Future Me. I think about what this older version of myself would think about the decision that I’m trying to make today. This step is entirely about long-term consequences. If the Future Me would look back and be angry, stressed, or disappointed with the choice today’s Current Me is about to make, I take special care to re-evaluate my options and make the right decision. The ultimate goal is to set the Future Me up for success and peace of mind, rather than leaving him to clean up a mess.

Ultimately, this framework is about balancing the desires of the present with the lessons of the past and the hopes for the future. It forces you to step outside of the immediate pressure of a dilemma and view your choices across the continuous timeline of your life. This strategy has served me incredibly well, and by checking in with the Three Me’s, I find that I have fewer regrets as the years speed by.

The ‘Three Me’s’ has helped me trade short-term impulses for long-term peace of mind. But I’m curious—how do you navigate tough choices? Do you have a mental framework or a ‘gut check’ ritual that never lets you down? Drop your best decision-making hacks in the comments below; I’d love to learn from your experience!

Introducing cerkit ClearCast: Real-Time Radio Transcription Powered by AI https://cerkit.com/introducing-cerkit-clearcast-real-time-radio-transcription-powered-by-ai/ Sat, 28 Feb 2026 02:24:02 +0000 Learn how a conversation at a local MARC meeting inspired cerkit ClearCast — a cross-platform desktop application (Windows, macOS, Linux, and Raspberry Pi) that captures live radio audio through an audio interface and uses Google Gemini AI to transcribe emergency communications in real time.

The post Introducing cerkit ClearCast: Real-Time Radio Transcription Powered by AI first appeared on cerkit.com.

I attended a MARC meeting last night, and the topic of transcription came up. It was stated that some people talk too fast for the person writing the information down during a local emergency. When communications are flying back and forth on the radio during an active situation, it’s nearly impossible for a human note-taker to capture every detail accurately and in real time.

I immediately thought: what if the computer could do the listening for us?

That idea quickly turned into a working application. I set out to create cerkit ClearCast — a cross-platform desktop program (running on Windows, macOS, Linux, and Raspberry Pi) that allows the user to select an attached audio interface, enabling the computer to “hear” connected radios. ClearCast then listens for audio from the radio(s), and when it detects speech, it samples the audio and sends it to the Google Gemini AI system (GenAI), where it is converted to text that is then displayed on the program’s main screen.

Here’s how cerkit ClearCast works under the hood.


Capturing Audio from the Radio

The first challenge is getting the radio’s audio into the computer in a way the software can work with. This is where an audio interface comes in. A device like the Focusrite Scarlett 18i8 acts as a bridge between the analog audio output of a radio and the digital world of the computer. You connect the radio’s audio output (typically from a speaker or headphone jack) to one of the interface’s inputs, and the interface digitizes that signal and makes it available to the operating system as an audio input device.

The application uses PortAudio, a cross-platform audio I/O library, to interact with the audio interface. When the application launches, it enumerates all available audio input devices on the system and presents them in a drop-down list so the user can choose exactly which device to capture from. This means you’re not locked into a single hardcoded device — if you have multiple audio interfaces, built-in microphones, or virtual audio devices, they’ll all appear in the list. The application also remembers your selection between sessions, so you only have to pick your device once.

Once the user selects a device and clicks Start, the application opens an audio input stream on that device, configured for:

  • 16,000 Hz sample rate — this is the standard rate for speech recognition and keeps data sizes manageable.
  • Mono (single channel) — radio communications are mono by nature, so there’s no need for stereo.
  • 32-bit floating point samples — this gives us high-precision audio data to work with before conversion.

As audio data flows in from the interface, PortAudio invokes a callback function that receives small buffers of raw audio samples. These buffers are immediately queued up for processing.


Detecting Speech and Sampling Audio

Not every moment of a radio channel contains useful audio. There are long stretches of silence, static, or squelch noise between transmissions. Sending all of this to the AI for transcription would be wasteful — both in terms of API usage and processing time.

To solve this, the application implements a silence detection mechanism using RMS (Root Mean Square) analysis. RMS is a mathematical measure of the “energy” or “loudness” of an audio signal. The application continuously calculates the RMS value of incoming audio chunks and compares it against a configurable threshold. The user can adjust this threshold via a slider in the UI to fine-tune sensitivity for their specific radio and environment.

Here’s the flow:

  1. Buffering — Raw audio samples are accumulated from the PortAudio callback into a working buffer.
  2. Chunking — Once enough samples have been collected to fill a 3-second window (48,000 samples at 16 kHz), the application carves out a chunk for analysis.
  3. RMS Check — The RMS energy of the chunk is calculated. If it falls below the silence threshold (defaulting to ~0.005, or roughly -46 dBFS), the chunk is discarded as silence and the application continues listening.
  4. WAV Encoding — If the chunk does contain speech (i.e., it exceeds the threshold), the raw floating-point samples are converted to 16-bit PCM format and wrapped in a standard WAV file header. This produces a self-contained audio clip ready for the AI.

This approach means the application only sends meaningful audio to the cloud, keeping things efficient and responsive.
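Steps 1–4 boil down to surprisingly little code. Here is a hedged sketch using only the Python standard library (the actual application does this in .NET against the PortAudio callback; constants match the values described above):

```python
import io
import math
import struct
import wave

SAMPLE_RATE = 16_000
CHUNK_SAMPLES = SAMPLE_RATE * 3   # 3-second analysis window = 48,000 samples
SILENCE_THRESHOLD = 0.005         # roughly -46 dBFS

def rms(samples):
    """Root Mean Square 'energy' of a chunk of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def encode_wav(samples) -> bytes:
    """Convert float samples to 16-bit PCM and wrap them in a WAV container."""
    pcm = struct.pack(f"<{len(samples)}h",
                      *(int(max(-1.0, min(1.0, s)) * 32767) for s in samples))
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)            # mono, like the radio feed
        w.setsampwidth(2)            # 16-bit PCM
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm)
    return buf.getvalue()

def process(chunk):
    """Return a self-contained WAV clip if the chunk contains speech, else None."""
    if rms(chunk) < SILENCE_THRESHOLD:
        return None
    return encode_wav(chunk)
```

Note that the default threshold of 0.005 works out to 20·log10(0.005) ≈ −46 dBFS, matching the figure above.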


Sending Audio to Google Gemini for Transcription

Once a valid audio chunk has been captured and encoded as a WAV file, it’s time to send it to the AI. The application uses the Google Gemini API (specifically the gemini-2.5-flash model) for speech-to-text transcription.

The process works like this:

  1. API Initialization — When the user clicks “Start,” the application initializes a Gemini API client using the provided API key. This key is securely stored locally so the user doesn’t have to re-enter it each time.
  2. Building the Request — Each audio chunk is packaged as a multi-part request containing:
    • The WAV audio data as an inline binary blob.
    • A text prompt instructing the model to transcribe the audio exactly as spoken, with no additional commentary or formatting.
  3. Sending to Gemini — The request is sent asynchronously to the Gemini GenerateContent endpoint. The AI processes the audio and returns the transcribed text.
  4. Displaying Results — The transcribed text is appended to the main transcript area of the application’s UI in real time. As new transmissions come in, the transcript grows, giving the user a running log of everything said on the radio.

The entire pipeline — from audio capture through AI transcription to on-screen display — operates asynchronously. This means the UI remains responsive while audio is being captured, processed, and transcribed in the background.
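Conceptually, each request body looks like the REST-style GenerateContent payload sketched below. This snippet only constructs the JSON; the field names follow the public REST API shape, which is an assumption on my part, since the application itself builds the request through the SDK:

```python
import base64
import json

def build_request(wav_bytes: bytes, prompt: str) -> str:
    """Build a GenerateContent-style JSON body: inline WAV audio plus a text prompt.

    Field names ("contents", "parts", "inline_data") follow the REST API shape;
    treat this as an illustrative sketch, not the app's exact wire format.
    """
    body = {
        "contents": [{
            "parts": [
                {"inline_data": {
                    "mime_type": "audio/wav",
                    "data": base64.b64encode(wav_bytes).decode("ascii"),
                }},
                {"text": prompt},
            ]
        }]
    }
    return json.dumps(body)

payload = build_request(
    b"RIFF....WAVE",  # stand-in for a real WAV clip
    "Transcribe this audio exactly as spoken, with no additional commentary.")
```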


Getting a Google Gemini API Key

To use this application, you’ll need a Google Gemini API key. The good news is that getting started is free.

Head over to Google AI Studio and sign in with your Google account. From there, you can create a new API key in just a few clicks — look for the “Get API Key” option in the left-hand menu. Google offers a free tier that includes a generous number of requests per minute, which is more than enough to start testing the application and getting a feel for how it performs with your specific radio setup.

Once you’re ready to move the application into full-time use — what we’d refer to as “Production” — you’ll need to attach a billing account to your Google Cloud project. The free tier has rate limits that could be hit during sustained, high-traffic radio events, and a billing account ensures uninterrupted service. Google’s pay-as-you-go pricing for the Gemini API is very reasonable, especially considering the value of not missing critical communications during an emergency.


The User Experience

The application features a clean, dark-themed interface built with Avalonia UI, a cross-platform .NET UI framework that runs natively on Windows, macOS, and Linux. The main screen includes:

  • An API key field (masked for security) where the user enters their Google Gemini key.
  • An audio device drop-down that lists all available input devices, letting the user select which interface to capture from.
  • Start/Stop buttons to control the listening session.
  • A silence threshold slider for adjusting speech detection sensitivity.
  • A monitor toggle that lets the user hear the incoming radio audio through their computer’s speakers — useful for verifying the audio connection.
  • A transcript area that displays the real-time transcription output in a clean, monospaced font.
  • A status bar that shows the current state of the application (listening, transcribing, idle, etc.).

Why This Matters

During an emergency, every word matters. When dispatchers, incident commanders, and field units are communicating rapidly over the radio, critical details can be lost if the person logging the traffic can’t keep up. cerkit ClearCast provides a safety net — an AI-powered assistant that never gets tired, never falls behind, and captures every transmission faithfully.

It’s not a replacement for a skilled operator, but it’s a powerful supplement. Having a complete, searchable text log of radio communications after an event can be invaluable for after-action reviews, accountability, and training.

What started as a passing thought at a MARC meeting is now cerkit ClearCast — and I’m excited to continue refining it.

Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 7: Persisting the Pi Calculus Session State with a Minimal API and PostgreSQL https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-7-persisting-the-pi-calculus-session-state-with-a-minimal-api-and-postgresql/ Wed, 25 Feb 2026 10:14:47 +0000 Discover how to add durable state persistence to a real-time Pi Calculus architecture. Learn to capture dynamic MQTT session data from Node-RED using a lightning-fast .NET 10 Minimal API and store complex JSON UI payloads natively in PostgreSQL using Entity Framework Core.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 7: Persisting the Pi Calculus Session State with a Minimal API and PostgreSQL first appeared on cerkit.com.

In Part 6, we secured our Pi Calculus ecosystem. We stopped unauthorized clients in their tracks by implementing strict MQTT authentication across our Node-RED orchestration layer, our pi-console terminal client, and our pi-wasm Blazor browser application. With secure, credentialed access over WebSockets, our dynamic handshakes were finally safe.

But as our system matured, a new challenge emerged: State Persistence.

Our Node-RED flows were doing an excellent job of instantly spinning up dynamic, private MQTT channels (the νz in our Pi Calculus model) and feeding custom UI layouts to connected clients. However, this orchestration was entirely ephemeral. If Node-RED restarted, or if we needed to audit historical connection records, that session data was gone forever.

We needed a backend capable of durably recording every handshake and saving the exact layout payloads delivered to each device. To solve this, we added pi-functions: a .NET 10 Minimal API that acts as a serverless-style backend, writing session states directly to a PostgreSQL database.

Enter pi-functions and Minimal APIs

When clients connect and request a UI, Node-RED negotiates the private channel. We wanted to seamlessly map that negotiation into a database. Rather than building a bulky web application, .NET 10 Minimal APIs provided the perfect “serverless” development experience—lightweight, extremely fast, and highly focused.

We created a simple Program.cs that spins up an ASP.NET Core web application, registers Entity Framework Core for our data access layer, and exposes essential HTTP endpoints:

// Function 1: Node-RED calls this to save a handshake state
app.MapPost("/api/state", async (SessionState state, PiCalculusDbContext db) =>
{
    // 1. Check if this client already has a session in the DB
    var existingSession = await db.SessionStates
        .FirstOrDefaultAsync(s => s.ClientId == state.ClientId);

    if (existingSession is not null)
    {
        // 2. UPDATE: The client exists, just update their active properties
        existingSession.Status = state.Status;
        existingSession.ActiveChannel = state.ActiveChannel;
        existingSession.CurrentUiState = state.CurrentUiState;
        existingSession.LastUpdatedAt = DateTimeOffset.UtcNow;
    }
    else
    {
        // 3. INSERT: Brand new client
        db.SessionStates.Add(state);
    }

    // 4. Save changes
    await db.SaveChangesAsync();
    
    return Results.Ok(state);
});

This acts as a seamless webhook for Node-RED. During the MQTT handshake process, an HTTP Request node can silently POST the session details to http://pi-functions:8080/api/state.
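The body Node-RED POSTs mirrors the SessionState properties the handler binds (ClientId, Status, ActiveChannel, CurrentUiState). An illustrative payload, with made-up values:

```json
{
  "clientId": "pi-console-01",
  "status": "Connected",
  "activeChannel": "pi/session/7f3a",
  "currentUiState": "{ \"panels\": [\"Header\", \"Operations\", \"Output\", \"Status\"] }"
}
```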

The Magic of PostgreSQL’s JSONB

Our UI layouts and dynamic menus are JSON objects pushed from Node-RED down the MQTT pipe. Creating rigid relational database tables to represent the infinite possibilities of a dynamic UI would be tedious and fragile.

Instead, we utilized PostgreSQL 16. Postgres offers native support for the JSONB data type, which stores JSON data in a decomposed binary format. This makes it incredibly fast to process and query, without the overhead of parsing raw text on the fly.

By simply tagging our CurrentUiState string property using the Fluent API in Entity Framework Core, we mapped it directly to a native JSONB column:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // Add an index to ClientId so your Node-RED lookups are blazing fast
    modelBuilder.Entity<SessionState>()
        .HasIndex(s => s.ClientId)
        .IsUnique();

    // Store the raw Node-RED UI payload natively.
    modelBuilder.Entity<SessionState>()
        .Property(b => b.CurrentUiState)
        .HasColumnType("jsonb");
}

Now, the entire pi-console or pi-wasm configuration schema is durably saved exactly as it was requested, ready to be analyzed or restored at a moment’s notice.

Orchestrating the Ecosystem with Podman

With a new API and a database to manage, our local developer environment grew from two containers to four. Here’s a sample of what that Podman Compose architecture looks like (with credentials replaced by placeholders):

  postgres:
    image: postgres:16
    container_name: postgres
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=<YOUR_STRONG_PASSWORD>
      - POSTGRES_DB=PiCalculusDb
    volumes:
      - postgres_data:/var/lib/postgresql/data

  pi-functions:
    build: 
      context: ../Development/pi-console
      dockerfile: pi-functions/Dockerfile
    container_name: pi-functions
    restart: unless-stopped
    ports:
      - "5001:8080"
    environment:
      - ConnectionStrings__DefaultConnection=Host=postgres;Database=PiCalculusDb;Username=admin;Password=<YOUR_STRONG_PASSWORD>
    depends_on:
      - postgres

Podman Compose easily links the pi-functions bridge directly to the Postgres host postgres:5432. It builds the container straight from our C# workspace using a multi-stage Dockerfile.

The Evolution of the Pi Calculus Ecosystem

What started as a fun terminal experiment using Spectre.Console has bloomed into a highly decoupled, real-time distributed application framework.

By treating our UI configurations as data that moves over dynamic channels (the Pi Calculus model), we completely abstracted the concept of a graphical layout away from the client application. Now, with the addition of pi-functions and PostgreSQL, that ephemeral real-time orchestration is given memory and historical context.

Whether we are connecting through an SSH terminal in pi-console, or booting up the WebAssembly clone in pi-wasm, our central backend immediately authenticates the client, provisions a secure dynamic channel, saves the current UI layout state in Postgres via our minimal API, and paints the screen with the exact commands needed.

The ecosystem is robust, secure, and fully stateful. Next time, we’ll dive into building out custom sensor modules and watching the telemetry flow back upstream!

Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 6: Securing the Pi Calculus Ecosystem with MQTT Authentication https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-6-securing-the-pi-calculus-ecosystem-with-mqtt-authentication/ Mon, 23 Feb 2026 16:23:25 +0000 Secure your Pi Calculus ecosystem! Learn how to lock down a Mosquitto broker and implement authenticated MQTT connections using secrets.json for a .NET 10 console UI and appsettings.json for a Blazor WebAssembly client.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 6: Securing the Pi Calculus Ecosystem with MQTT Authentication first appeared on cerkit.com.

In Part 5, we evolved our dynamic console application into a versatile multi-client ecosystem. We seamlessly brought our “Qubit BBS” dashboard experience from the terminal to the browser using a new .NET 10 Blazor WebAssembly client (pi-wasm). We decoupled our core orchestration logic into the Pi.Shared library and routed customized UI configurations via Targeted Handshakes over MQTT WebSockets.

But as our Pi Calculus ecosystem grew, we realized a critical flaw: Our MQTT connections were entirely unauthenticated.

Anyone with a basic MQTT client could theoretically connect to our broker, intercept the dynamic Handshake topics, or inject unauthorized layout JSON into our applications. It was time to lock down our Node-RED backend and secure our .NET clients.

Mosquitto Password Protection

The first step was securing our Mosquitto MQTT broker. By default, Mosquitto allows anonymous connections. We updated our mosquitto.conf file to disable anonymous mode and point to a generated password file:

# Global Authentication Settings
allow_anonymous false
password_file /mosquitto/config/passwd

Next, we used the mosquitto_passwd utility to generate a secure user (pi_user) and a corresponding encrypted password hash. A quick restart of the Docker container, and our broker was officially sealed.

Of course, the immediate side-effect was that our poor pi-console and pi-wasm clients started throwing NotAuthorized exceptions! The brokers wouldn’t let them in to perform their Pi Calculus orchestration. We needed to teach our shared .NET 10 library how to handle credentials.

Teaching Pi.Shared to Authenticate

Inside our Pi.Shared library, we updated our central orchestration engine, the MqttService. We added new Username and Password properties to the class, and updated our runtime connection block to append .WithCredentials() to the MqttClientOptionsBuilder:

if (!string.IsNullOrEmpty(Username) && !string.IsNullOrEmpty(Password))
{
    mqttClientOptionsBuilder = mqttClientOptionsBuilder.WithCredentials(Username, Password);
}

Now, the service was capable of authenticating. But where would it get those credentials? Hardcoding passwords directly into our Program.cs files or the shared library is a monumental security anti-pattern, especially since we track this project in a public GitHub repository.

We needed a strategy to load settings at runtime without ever letting Git see them. Because our clients run in two entirely different environments (a local terminal vs. a browser sandbox), we had to implement two distinct credential delivery mechanisms.

Securing the Terminal: secrets.json

For the native macOS terminal application (pi-console), we have full access to the local machine’s file system. We opted for a secrets.json file.

We created a .secrets folder at the root of our local workspace and added a secrets.json file containing our credentials:

{
  "MqttIpAddress": "localhost",
  "MqttPort": 9001,
  "Username": "pi_user",
  "Password": "super_secret_password"
}

Since the secrets.json file sits at the root of the solution, but the dotnet run execution happens deep inside the bin/Debug/net10.0/ folder, our MqttService needed to be smart enough to find it. We implemented a directory traversal loop to search upwards from the AppContext.BaseDirectory until it located the .secrets directory. Once found, it parses the JSON and populates the Username and Password fields.
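The search itself is a simple upward walk through parent directories. A Python rendering of the idea (the actual implementation lives in the C# MqttService):

```python
from pathlib import Path

def find_secrets(start):
    """Walk upward from `start` until a `.secrets` directory is found.

    Returns the path to its secrets.json, or None if we reach the
    filesystem root without finding one.
    """
    current = Path(start).resolve()
    while True:
        candidate = current / ".secrets"
        if candidate.is_dir():
            return candidate / "secrets.json"
        if current.parent == current:   # hit the filesystem root: give up
            return None
        current = current.parent
```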

Most importantly, we added .secrets/ to our .gitignore file, ensuring our Mosquitto passwords never accidentally get pushed to GitHub.

Securing the Browser: appsettings.json

Our Blazor WebAssembly client (pi-wasm) presented a completely different challenge. Since it runs inside the strict sandbox of your web browser, it has absolutely zero access to your local machine’s file system. It can’t directory-traverse its way to secrets.json.

Instead, Blazor WASM applications rely on configuration files served by the hosting web server. For pi-wasm, we created an appsettings.json file directly inside the wwwroot folder—the static web directory that gets bundled and sent to the browser:

{
  "Mqtt": {
    "Username": "pi_user",
    "Password": "super_secret_password"
  }
}

In pi-wasm/Program.cs, we tell our Dependency Injection container to pull the credentials directly from Blazor’s configuration builder:

service.Username = builder.Configuration["Mqtt:Username"];
service.Password = builder.Configuration["Mqtt:Password"];

When the user launches the website, the browser downloads appsettings.json, parses out the MQTT variables, injects them into the MqttService, and establishes an authenticated WebSocket connection.

Just like with the console app, we immediately added pi-wasm/wwwroot/appsettings.json to our .gitignore to protect the file from entering source control.

Connected and Secured

With both clients updated, our Pi Calculus orchestration system is fully authenticated. Node-RED requires a password to serve handshakes, and our .NET 10 UI clients dynamically inject their credentials based on their execution environment.

Whether you’re hitting the Qubit BBS from your local terminal or browsing it over WebSockets, the dynamic UI remains snappy, responsive, and—finally—secure.

Stay tuned as we continue expanding our MQTT UI orchestrator. You can follow along by cloning the pi-console Repo on GitHub!

Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 5: Blazor WebAssembly, Shared Libraries, and Targeted Handshakes https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-5-blazor-webassembly-shared-libraries-and-targeted-handshakes/ Sun, 22 Feb 2026 17:18:30 +0000 Expand your .NET 10 dynamic console UI to the web! Learn to build a Blazor WebAssembly client, use MQTT WebSockets, and route targeted Node-RED handshakes.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 5: Blazor WebAssembly, Shared Libraries, and Targeted Handshakes first appeared on cerkit.com.

In Part 4, we bridged the gap between our dynamic UI rendering loop and our headless Node-RED backend by implementing isolated ActionTopics, targeted PanelUpdates, and a dedicated execution processor. Our terminal console dashboard was finally fully interactive, firing off backend commands without breaking the beautiful, live Spectre.Console UI.

But what if we aren’t at our terminal? What if we want that exact same “Qubit BBS” dashboard experience—dynamically orchestrated via Pi Calculus—but available in a web browser from anywhere?

In this fifth installment, we evolved our single-client console application into a versatile multi-client ecosystem. We decoupled our core logic into a reusable Pi.Shared library, launched a brand-new .NET 10 Blazor WebAssembly client (pi-wasm), and implemented “Targeted Handshakes” to route custom UI configs to multiple active clients simultaneously.

The Pi.Shared Core Library

To support multiple UI frontends without duplicating our complex orchestration and MQTT logic, the first step was a major architectural refactor.

We created a new .NET 10 Class Library called Pi.Shared. Into this library, we migrated:

  • The core data Models (MenuItemUiConfigData, etc.)
  • The MqttService responsible for backend communication
  • The DynamicUiOrchestratorService responsible for handling the Pi Calculus handshakes and session states

To decouple the orchestrator from Spectre.Console directly, we introduced an IUiService interface containing abstractions like UpdatePanel and UpdateMenu. Now, our backend logic simply calls _uiService.UpdatePanel(), completely agnostic to whether those pixels are rendering in a native terminal or a web browser.
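The actual interface is C# in Pi.Shared; as a rough TypeScript analogue (member names follow the post, everything else is illustrative), the abstraction boils down to:

```typescript
// Sketch of the IUiService abstraction: the orchestrator talks to this
// interface and never learns whether a terminal or a browser renders it.
interface MenuItem { id: number; label: string; icon?: string; color?: string; }

interface IUiService {
  updatePanel(targetPanel: string, content: string): void;
  updateMenu(items: MenuItem[]): void;
}

// One possible frontend: collect panel state in memory (a stand-in for
// Spectre.Console panels or Blazor component state).
class InMemoryUiService implements IUiService {
  panels = new Map<string, string>();
  menu: MenuItem[] = [];
  updatePanel(targetPanel: string, content: string): void {
    this.panels.set(targetPanel, content);
  }
  updateMenu(items: MenuItem[]): void {
    this.menu = [...items].sort((a, b) => a.id - b.id);
  }
}
```

Swapping frontends then means swapping which implementation gets registered, with zero changes to the orchestrator.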

Enter pi-wasm: The Blazor WebAssembly Client

With our core engine safely abstracted, we generated a new .NET 10 Blazor WebAssembly standalone project: pi-wasm.

Implementing the new client was incredibly straightforward. We simply added a project reference to Pi.Shared and implemented IUiService inside a new BlazorUiService class.

Instead of writing out CLI blocks, the Blazor UI binds strongly-typed component state directly to CSS Grid elements, maintaining the exact visual layout of our original “Qubit BBS” mockup (Header, Operations Menu, Output, and Status panels).

UI Display for the pi-wasm Blazor interface

Translating Spectre.Console to HTML

One unique challenge: Our Node-RED responses and orchestrator states were heavily utilizing Spectre.Console markup tags like [green]ONLINE[/] to inject colors into the text stream.

To keep the backend blissfully unaware of the frontend rendering engine, we built a lightweight regex-based SpectreConsoleParser in the WASM app. It seamlessly intercepts incoming Spectre tags and translates them into HTML <span> elements with inline CSS coloring. The parsed string is then injected into the UI via Blazor’s MarkupString, giving our browser UI the exact same color profiles as the CLI.
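The real parser is C# inside the WASM app; a minimal TypeScript approximation of the same regex idea (handling only simple `[color]...[/]` pairs, not nested or compound Spectre styles) might look like:

```typescript
// Translate simple Spectre.Console markup like "[green]ONLINE[/]" into
// HTML spans with inline colors. This deliberately ignores nesting and
// compound styles (e.g. "bold green") for brevity.
function spectreToHtml(input: string): string {
  return input.replace(
    /\[([a-z]+)\]([^\[]*)\[\/\]/g,
    (_match, color: string, text: string) =>
      `<span style="color:${color}">${text}</span>`
  );
}
```

In Blazor, the resulting string would be wrapped in MarkupString so the spans render as HTML rather than escaped text.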

Bypassing TCP Ghosts with WebSockets

During testing, we encountered the “Tale of Two Brokers”. Our browser-based pi-wasm client securely connected to the Docker Mosquitto instance via WebSockets (localhost:9001) and instantly fetched the default menus. However, our native Mac pi-console application was randomly dropping packets over the standard TCP port (1883).

It turned out the host OS had a standalone TCP broker intercepting traffic, preventing packets from crossing the Docker network bridge!

The solution? We standardized both clients to connect explicitly over MQTT via WebSockets to bypass the native network conflicts.

Targeted Handshakes

With both pi-console AND pi-wasm successfully connected to the same Node-RED backend simultaneously, we faced our final Pi Calculus challenge: How do we send a different UI configuration to the Console app vs the Browser app?

We achieved this by implementing Targeted Handshakes.

  1. During MqttService initialization in Program.cs, each client is statically assigned a unique ClientId (e.g., "pi-console" or "pi-wasm").
  2. When announcing their presence on startup (pi-console/client/startup), clients now include their ID in the JSON payload: {"clientId": "pi-console"}.
  3. Node-RED parses this ID and dynamically routes the ensuing Pi Calculus handshake to a specific targeted listener: pi-console/handshake/{clientId}.
  4. Node-RED then looks up the clientId against a dictionary in pi-console-configs.json and pushes the tailored UI layout parameters and menu options down the secure session channel.
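On the Node-RED side, steps 2–4 amount to a small function node. A TypeScript sketch of that routing logic (the config contents are illustrative, loosely modeled on the pi-console-configs.json lookup):

```typescript
// Given a startup announcement payload, compute the targeted handshake
// topic and look up the per-client UI config. Config values are invented
// for illustration; the real ones live in pi-console-configs.json.
const clientConfigs: Record<string, { title: string }> = {
  "pi-console": { title: "QUBIT BBS (terminal)" },
  "pi-wasm": { title: "QUBIT BBS (web)" },
};

function routeHandshake(startupPayload: string) {
  const { clientId } = JSON.parse(startupPayload) as { clientId: string };
  return {
    topic: `pi-console/handshake/${clientId}`, // targeted listener
    config: clientConfigs[clientId] ?? null,   // tailored UI layout
  };
}
```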

Thanks to this targeted routing, our Node-RED backend can serve entirely different, customized dashboards to different clients—all utilizing the exact same shared .NET 10 orchestration engine!

Stay tuned as we continue to push the limits of dynamic MQTT UI orchestration!

Clone the pi-console Repo on GitHub and develop your own UI orchestrations (Node-RED flows in the Architecture file in the repo)

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 5: Blazor WebAssembly, Shared Libraries, and Targeted Handshakes first appeared on cerkit.com.

]]>
Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 4: Dynamic Menu Actions and Thread-Safe UI Updates https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-4-dynamic-menu-actions-and-thread-safe-ui-updates/ https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-4-dynamic-menu-actions-and-thread-safe-ui-updates/#comments Sun, 22 Feb 2026 00:58:19 +0000 https://cerkit.com/?p=100474 Discover how to use Pi Calculus over MQTT to trigger UI actions and dynamically update Spectre.Console panels in your .NET app using Node-RED orchestrations.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 4: Dynamic Menu Actions and Thread-Safe UI Updates first appeared on cerkit.com.

]]>
In Part 3: Dynamic UI Configuration and Session Initialization, we solved the problem of aesthetic scalability by leveraging Pi Calculus concepts to automatically negotiate and transmit custom UI layouts via MQTT. Our terminal gracefully configures its own title, borders, and colors on the fly based directly on the Node-RED orchestrator’s commands.

But a beautiful interface is only half the battle. A console dashboard needs to be interactive. It needs to tell Node-RED when to execute physical operations, fetch server stats, or deploy code—and it needs to render the results of those actions instantly without freezing the active menu.

In this next phase of the project, we bridged the gap between our dynamic UI rendering loop and our headless Node-RED backend by implementing isolated ActionTopics, targeted PanelUpdates, and a dedicated execution processor.

Node-RED flow for the dynamic menu action processor.

Triggering Actions: The actionTopic

Up until this point, our Spectre.Console terminal received a JSON array of MenuItem objects from the Pi session channel and drew them onto the screen. To make these items actionable, we appended two new fields to our MenuItem model: an icon string for fetching Unicode character art, and an actionTopic string (or action for shorthand).

When a user cursors down the list and hits Enter, pi-console no longer runs static C# logic. Instead, it captures the payload and offloads the request entirely:

var payload = new { sessionChannel = _currentSessionChannel };
var jsonPayload = System.Text.Json.JsonSerializer.Serialize(payload);
await _mqttService.PublishAsync(item.ActionTopic, jsonPayload);

Because the trigger fires asynchronously via a background Task.Run hook, the console remains responsive. Notice the payload structure: we deliberately bundle the sessionChannel tracker into the MQTT message. Node-RED receives this trigger, parses the requested action via a Switch node, executes its logic, and uses that session string to route its response right back down the established Pi Calculus tunnel to our exact client instance!

Painting with Precision: Targeted PanelUpdate Responses

When Node-RED completes a task (such as polling system uptime or restarting a router), it replies to the session channel with a brand new message schema:

{
    "messageType": "PanelUpdate",
    "data": {
        "targetPanel": "outputPanel",
        "content": "[green]System Status: ONLINE[/]\nCPU Usage: 42%\nMemory: 2.1GB / 8.0GB"
    }
}
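On the Node-RED side, building that reply and addressing it to the caller's session channel is a one-step transform. A hedged TypeScript sketch (the return shape mimics a Node-RED msg, i.e. `{topic, payload}`; field names follow the schema above):

```typescript
// Build a PanelUpdate envelope addressed to the session channel that the
// client bundled into its action trigger.
function buildPanelUpdate(
  triggerPayload: string,
  targetPanel: string,
  content: string
): { topic: string; payload: string } {
  const { sessionChannel } = JSON.parse(triggerPayload) as { sessionChannel: string };
  return {
    topic: sessionChannel, // reply straight down the Pi Calculus tunnel
    payload: JSON.stringify({
      messageType: "PanelUpdate",
      data: { targetPanel, content },
    }),
  };
}
```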

The orchestrator service intercepts PanelUpdate messages and deserializes the payload. But we faced an architecture hurdle. In a standard synchronous loop, updating a single panel requires re-rendering the entire screen. This would destroy the user’s active keyboard selection cursor in the Menu panel!

To solve this, we implemented thread-safe, localized refresh delegate hooks directly into our AnsiConsole.Live rendering engine inside Engine.cs:

_refreshOutput = () =>
{
    layout["Output"].Update(CreatePanel("Output", _lastOutputContent));
    ctx.Refresh();
};

When the outputPanel or operationsPanel target is hit, the application updates only the specific localized string variable mapped to that panel, and fires the pinpoint refresh hook. The terminal seamlessly paints the newly computed data inside the box, allowing our dynamic Unicode-enriched menu array to continuously run undisturbed next to it.

The Invisible Engine: commandProcessor

Modifying visual elements dynamically is powerful, but what if Node-RED needs to interact directly with the C# application layer? We created a specialized, invisible target panel in our UpdatePanel evaluator named commandProcessor.

Instead of drawing text to the screen, passing data to the commandProcessor triggers internal application states.

  • Sending "content": "EXIT" halts the _isRunning variable loop and invokes a clean Environment.Exit() command, allowing Node-RED to securely shut down any connected client terminal at will.
  • Sending "content": "RESTART" is even better. It kicks off a localized background task that forcibly re-publishes the initial { "status": "online" } handshake payload back to the public initialization channel.

The RESTART trigger essentially commands the active console to hot-reload. Node-RED replies with a fresh UI configuration block and an updated menu array.
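The commandProcessor dispatch itself is just a switch over the content string. In TypeScript sketch form (the real logic lives in the C# UpdatePanel evaluator; the action names here are illustrative):

```typescript
// Map commandProcessor contents to internal application actions instead of
// drawing anything to the screen. Unknown commands are ignored.
type AppAction = "exit" | "restart" | "none";

function processCommand(content: string): AppAction {
  switch (content) {
    case "EXIT":
      return "exit"; // halt the run loop and shut down cleanly
    case "RESTART":
      return "restart"; // re-publish the startup handshake (hot-reload)
    default:
      return "none";
  }
}
```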

By pushing all heavy lifting to our MQTT backend and wiring up pinpoint UI refresh targets, pi-console has evolved into an incredibly modular, stateless, and ultra-responsive control layer.

If you’re building your own terminal tools or want to explore these asynchronous update patterns in .NET, the full code is continually updated over at the pi-console repository on GitHub.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 4: Dynamic Menu Actions and Thread-Safe UI Updates first appeared on cerkit.com.

]]>
Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 3: Dynamic UI Configuration and Session Initialization https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-3-dynamic-ui-configuration-and-session-initialization/ https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-3-dynamic-ui-configuration-and-session-initialization/#comments Sat, 21 Feb 2026 21:57:24 +0000 https://cerkit.com/?p=100466 Discover how to apply Pi Calculus over MQTT to dynamically orchestrate a .NET console UI. Learn to negotiate private channels using Node-RED and Spectre.Console.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 3: Dynamic UI Configuration and Session Initialization first appeared on cerkit.com.

]]>
In Part 2: The Pi Calculus Menu System, we explored how a mathematical model for concurrent systems—the Pi Calculus—could natively solve our channel mobility problem. By utilizing a “handshake” process on a public MQTT topic, our static console evolved into a dynamic node that negotiates its own private, isolated MQTT channels on the fly.

Part 4: Dynamic Menu Actions and Thread-Safe UI Updates

Originally, if pi-console caught a PROVIDE_MENU command over its public handshake topic, it would tunnel into a private session, reply with READY, receive its localized JSON array of MenuItem objects, and render them in isolation.

While dynamically patching isolated menus into a statically orchestrated dashboard was cool, it meant the entire layout (our panels, titles, and structural colors) inherently remained statically compiled into the .NET application. If a terminal in the garage needed a red border for its “Operations” screen, and the one in the living room needed a sleek blue layout with a customized header, we were out of luck.

It was time to fully embrace the Pi Calculus paradigm. We needed an orchestration protocol that didn’t just configure the menu, but negotiated and shipped the entire UI configuration.

Re-envisioning the Handshake: INITIATE_SESSION

To represent this broader orchestration role, the initial handshake payload action was upgraded from a narrow PROVIDE_MENU mandate to a robust INITIATE_SESSION sequence.

When pi-console boots and publishes its standard { "status": "online" } presence announcement, the Node-RED orchestration server replies to the pi-console/handshake topic with our new dynamic session payload:

{"action": "INITIATE_SESSION", "channel": "session_id_xyz123"}

Just like in Part 2, the client immediately subscribes to this new private channel (session_id_xyz123), records it to its active Operations tracker, and publishes a {"status": "READY"} payload natively into that localized tunnel. But what happens next fundamentally changes the flexibility of the entire dashboard.

Shaping the Screen: Dynamic UiConfig

Before Node-RED delivers the final menu logic, it now drops an entirely new PiSessionMessage across the session channel with the messageType of UiConfig.

Because we decoupled the layout from the runtime logic, the orchestrator can inject customized title, borderColor, and titleColor parameters directly into every single structural panel of our Spectre.Console grid interface:

{
  "messageType": "UiConfig",
  "data": {
    "headerPanel": {
      "title": "GARAGE TERMINAL",
      "borderColor": "red",
      "titleColor": "white"
    },
    "menuPanel": {
      "title": "System Actions",
      "borderColor": "blue",
      "titleColor": "cyan"
    },
    "outputPanel": {
      "title": "Active Logs",
      "borderColor": "grey"
    }
  }
}

The pi-console client consumes this data into a strongly-typed UiConfigData class. The .NET 10 application’s engine iterates across its internal Spectre.Console Panel generation methods (CreateBanner, CreatePanel, CreateOperationsPanel, etc.) and safely parses these color strings into active markup block formats ([cyan]...[/]).

It evaluates everything securely. If a color is omitted, it gracefully falls back to the native layout. If an overriding title string is passed for the Header panel, it dynamically recalculates its beautiful ASCII FigletText logic around that updated string.
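That graceful fallback behavior is essentially a merge over compiled-in defaults. A TypeScript sketch of the idea (property names follow the UiConfig payload above; the default values are invented for illustration):

```typescript
// Merge an incoming per-panel UiConfig over the native defaults: any
// omitted field (e.g. a missing titleColor) falls back to the built-in
// layout, so a partial config never breaks rendering.
interface PanelConfig { title?: string; borderColor?: string; titleColor?: string; }

function applyPanelConfig(
  defaults: Required<PanelConfig>,
  incoming: PanelConfig
): Required<PanelConfig> {
  return {
    title: incoming.title ?? defaults.title,
    borderColor: incoming.borderColor ?? defaults.borderColor,
    titleColor: incoming.titleColor ?? defaults.titleColor,
  };
}
```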

Screenshot of the Pi-Console app

Finalizing the Node-RED Sequence

Flow definition for the Node-RED MQTT architecture

Armed with its new customized aesthetic framework, the .NET application re-renders the live console and immediately publishes a {"status": "UI_READY"} payload back to the session_id_xyz123 channel.

Node-RED catches this acknowledgment, and finally transmits the {"messageType": "Menu"} payload (the identical JSON array of MenuItem objects from Part 2). The application parses the command sequence, populates the System Actions panel it just drew for us, and awaits our keyboard input.

By abstracting away our hard-coded layouts into a localized, state-aware Pi Calculus handshake, pi-console has graduated into an entirely stateless thin-client. Whether it’s running in an industrial shed or a home office, it dynamically shapes its colors, titles, formatting, and operations identically to the specific topological needs of the session channel.

If you want to check out the updated C# MQTT JSON data structures or try extending the UiConfigData schema for yourself, grab the latest commits from the pi-console repo on GitHub.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 3: Dynamic UI Configuration and Session Initialization first appeared on cerkit.com.

]]>
Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 2: The Pi Calculus Menu System https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-2-the-pi-calculus-menu-system/ https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red-part-2-the-pi-calculus-menu-system/#comments Sat, 21 Feb 2026 13:45:09 +0000 https://cerkit.com/?p=100455 In computer science, the π-calculus (pi calculus) is a process calculus used to describe concurrent systems whose configurations change dynamically.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 2: The Pi Calculus Menu System first appeared on cerkit.com.

]]>
In Part 1 of this series, we built pi-console, a .NET 10 Bulletin Board System (BBS) style dashboard driven entirely by MQTT and Node-RED. By combining Spectre.Console for a sleek, static terminal layout with MQTTnet for messaging, we created a dashboard where the UI is just a dumb display, and the logic lives in Node-RED.

It worked beautifully. But there was a catch: our topics, like pi-console/menu/items, were static and global.

If you booted up three different consoles on your network (say, one in the office, one in the garage, and one in the living room), they would all subscribe to the exact same broadcasted menu. If we wanted unique, context-aware dashboards, we needed a way to establish private, dynamic communication channels on the fly.

To solve this, I turned to a concept from theoretical computer science: The Pi Calculus.

Enter the Pi Calculus

In computer science, the π-calculus (pi calculus) is a process calculus used to describe concurrent systems whose configurations change dynamically. Its defining feature is channel mobility—the ability to pass communication channels as data over other channels.

I decided to bring this mathematical concept to life in our MQTT architecture. Instead of hardcoding the topics that the console listens to for its menus and operations, the application UI configuration now occurs entirely dynamically through a handshake process.

Waking Up and Shaking Hands

The architecture of pi-console has evolved. Here is how the new channel mobility system handles a session lifecycle:

1. The Startup Announcement

When pi-console boots, it no longer publishes a static GUID to an initialization topic. Instead, it publishes an empty payload to the pi-console/client/startup topic to announce its presence to the broker.

2. The Session Handshake

The app immediately begins listening on a dedicated channel: pi-console/handshake. When a controller (like Node-RED) detects a new startup, it fires back connection instructions formatted in JSON. For example, to provide a dynamic menu, Node-RED sends:

{"action": "PROVIDE_MENU", "channel": "session_id"}

(Note: The system also supports a generic {"action": "CONNECT", "replyToChannel": "session_id"} for standard communication.)

3. Channel Mobility in Action

This is where the pi calculus magic happens. As soon as the application receives this handshake, it extracts the dynamic channel string ("session_id"). The app then instantly opens a new subscription to that active channel and registers the active session in the live "Operations" screen of the console.

We just used a public channel to pass the name of a private channel, dynamically altering the network topology of our application!
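A TypeScript sketch of the client's handshake handler makes the channel mobility concrete. The `Bus` interface here is a stand-in for the real MQTTnet subscribe/publish calls, and the READY acknowledgment matches the migration step the article describes:

```typescript
// Minimal model of channel mobility: a message on the public handshake
// channel carries the NAME of a private channel; the client subscribes to
// it and acknowledges with READY over that same private channel.
interface Bus {
  subscribe(topic: string): void;
  publish(topic: string, payload: string): void;
}

function onHandshake(bus: Bus, payload: string): string {
  const msg = JSON.parse(payload) as { action: string; channel: string };
  bus.subscribe(msg.channel);                      // open the private tunnel
  bus.publish(msg.channel, '{"status": "READY"}'); // confirm the migration
  return msg.channel;                              // track in the Operations screen
}
```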

Dynamic Menus over Private Channels

Once the pi-console subscribes to its newly assigned session_id channel for a PROVIDE_MENU action, it needs to tell Node-RED that it successfully migrated. It does this by publishing a {"status": "READY"} payload back across that exact dynamic channel.

Now that the private tunnel is established, Node-RED publishes the JSON array of MenuItem objects directly to that specific session channel:

[
  { "id": 1, "label": "System Status", "icon": "info", "color": "green" },
  { "id": 2, "label": "Device Settings", "icon": "settings", "color": "purple" }
]

The console receives the array, parses it, and renders the vibrant, interactive Spectre.Console menu—but this time, the menu is completely isolated to that specific terminal session. Meanwhile, global system alerts can still be pumped to the pi-console/status topic to dynamically patch the bottom status panel in real-time.

Conclusion

By implementing a pi calculus-inspired handshake, pi-console has evolved from a dashboard listening to a static broadcast tower into a smart node capable of negotiating private channels on the fly. This “channel mobility” opens the door for unlimited, unique, and secure dynamic dashboards on your home automation network, all running simultaneously.

If you want to check out the updated C# MQTT integration or try running the pi-calculus architecture for yourself, you can clone the updated code from the pi-console repo on GitHub.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED – Part 2: The Pi Calculus Menu System first appeared on cerkit.com.

]]>
Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red/ https://cerkit.com/building-a-dynamic-console-ui-with-net-10-mqtt-and-node-red/#comments Sat, 21 Feb 2026 00:47:03 +0000 https://cerkit.com/?p=100445 Dynamic UI system using .NET 10, Spectre.Console library, and MQTT messages to drive a Console UI.

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED first appeared on cerkit.com.

]]>
Part 2 – Implementing a Pi calculus menu system

Working with console applications in .NET has come a long way, especially with libraries like Spectre.Console that make creating rich, dynamic terminal interfaces a breeze. Recently, I’ve been building a project called pi-console, a classic Bulletin Board System (BBS) style layout designed to run as a dashboard.

However, instead of hardcoding menus or statuses, I wanted to build something dynamic. What if the console itself was just a dumb display, and the content logic lived entirely in a home automation workflow tool like Node-RED?

Enter MQTT.

The Architecture of pi-console

At its core, pi-console is a decoupled dashboard:

  • .NET 10 Framework: Runs the application engine and handles the execution loop.
  • Spectre.Console: Powers the layout management, splitting the screen into header, operations, menu, output, and status panels without breaking the terminal scroll buffer.
  • MQTTnet: Allows the application to subscribe and publish to a remote MQTT broker.

When you boot up pi-console, it displays a sleek layout out of the box. But here’s the trick: the menu is completely empty, and the status bar is idling. It relies on MQTT to wake up.

Waking up the Console

When pi-console starts up, it announces itself to the MQTT broker by publishing a unique GUID to an initialization topic:

pi-console/initialize 

Payload: "7f5407aa-cac5-4952-80ca-c73863d78fc4"

By broadcasting this initialization signal, the console informs any listeners that it is online and ready to receive instructions. That listener, in my setup, is Node-RED.

Driving the UI with Node-RED

Node-RED Flow

Node-RED serves as the brain for the console. It constantly listens to the pi-console/initialize topic. The moment it detects that the console has booted up (by seeing the GUID), Node-RED fires back a customized menu structure.

Node-RED publishes a JSON array back to the console on another topic: pi-console/menu/items.

Here is what the payload looks like:

[
    {
        "id": 1,
        "label": "System Status",
        "icon": "info",
        "color": "green"
    },
    {
        "id": 2,
        "label": "Network Config",
        "icon": "wifi",
        "color": "blue"
    },
    {
        "id": 3,
        "label": "Sensor Logs",
        "icon": "list",
        "color": "yellow"
    }
]

Parsing the Dynamic Menu

Back in .NET, the MqttService receives the payload on pi-console/menu/items. Because we defined a C# MenuItem model that matches the JSON schema, we can easily map the incoming data.

The service parses the array, sorts the items by their id property to maintain consistency, and raises a MenuItemsReceived event. The UI engine then immediately triggers a re-render.

Spectre.Console is remarkably fast and handles the redrawing gracefully. By leveraging the new color property from our JSON payload, the engine renders each item with native terminal colors ([black on green], [black on blue], etc.), providing an instant, vibrant menu driven entirely by Node-RED!
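The parse–sort–render pipeline can be sketched in a few lines of TypeScript (the real implementation is the C# MqttService plus Spectre.Console rendering; this just shows the data flow from JSON payload to markup strings):

```typescript
// Parse the incoming JSON menu payload, sort by id for consistency, and
// wrap each label in Spectre-style markup using the item's color.
interface MenuItem { id: number; label: string; icon: string; color: string; }

function renderMenu(json: string): string[] {
  const items = (JSON.parse(json) as MenuItem[]).sort((a, b) => a.id - b.id);
  return items.map((i) => `[black on ${i.color}] ${i.label} [/]`);
}
```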

Live Status Updates

Menus aren’t the only thing that Node-RED controls. The footer panel of pi-console is designed as a raw system status display.

Node-RED can pipe data (like server health alerts, CPU usage, or sensor triggers) directly into the pi-console/status topic. Every time a new string payload hits that topic, the UI immediately patches the content of the bottom panel without dropping a frame on the screen.

Conclusion

Combining the rock-solid UI rendering of Spectre.Console, the reliable machine-to-machine messaging of MQTT, and the incredible orchestration power of Node-REDpi-console has evolved from a static script into a highly responsive, remote-controlled smart dashboard.

If you want to try something similar or check out how I structured the C# MQTT integration, feel free to clone the repo on GitHub!

The post Building a Dynamic Console UI with .NET 10, MQTT, and Node-RED first appeared on cerkit.com.

]]>