I had this realisation that I could use euclidean rhythms on basically anything, not just on "beats". I.e. if you have a string of boolean values you loop through, they don't have to mean "hit or don't hit the drum"; they could mean high/low, loud/quiet, open/close, accent/base, anything. All obvious to all of you, I'm sure.
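For anyone who hasn't played with them: a euclidean rhythm just spreads k hits as evenly as possible over n steps. A minimal sketch in JavaScript (my own naming, not code from the firmware), using the Bresenham-style formulation that produces a rotation of the classic Bjorklund result:

```javascript
// k hits spread as evenly as possible over n steps, as a plain boolean array.
function euclid(k, n) {
  const steps = [];
  for (let i = 0; i < n; i++) {
    // Bresenham-style even spacing; a rotation of the Bjorklund pattern.
    steps.push((i * k) % n < k);
  }
  return steps;
}

// The booleans can then drive anything: hit/rest, loud/quiet, advance/hold...
// euclid(3, 8) → [true, false, false, true, false, false, true, false]
```

That (3, 8) pattern is the familiar tresillo: x..x..x.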

When dreaming up new ideas for synth firmwares for platform 16 I found that envelopes beyond basic decay-only ones can take up a lot of the 16 knobs I have available, so I wanted to try my hand at a drone synth, and then come up with basic yet interesting "one-knob" ways to add variation. So for each of the pitch, filter cutoff and volume "sections" I dedicate one knob to picking a euclidean rhythm, one to a base value and one to an "accent" value. This worked fine as a "two-tone" thing but was a bit boring, so for pitch the "accent" knob is now "choose an arpeggio mode", and the true/false values from the rhythm become "advance the arpeggio or stay where you are".

I was low on knobs so the key is hardcoded (always C natural minor, easy enough to change) and the "pitch degree" knob just moves forward in that scale, mapping each degree to the "correct" major, minor, diminished or augmented chord for that note in the scale. Three-note arpeggios then felt a bit limiting, so I kinda arbitrarily added 7th notes to the mix. I was worried about that, but I think it turned out pretty well, a nice compromise for using so few knobs. Hardcoding the key also makes it easier to sync this up with my other synths and end up with something vaguely musical anyway.
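A sketch of the idea (hypothetical names, not the actual firmware code): with the key hardcoded to C natural minor, simply stacking diatonic thirds within the scale automatically yields the "correct" chord quality for each degree, sevenths included.

```javascript
// C natural minor as semitone offsets from C.
const C_MINOR = [0, 2, 3, 5, 7, 8, 10];

// Four-note (7th) chord for a scale degree, as semitone offsets from C.
// Stacking thirds diatonically gives the right quality per degree for free.
function chordForDegree(degree) {
  return [0, 2, 4, 6].map((third) => {
    const idx = degree + third;
    const octave = Math.floor(idx / 7); // wrap up an octave past the 7th step
    return C_MINOR[idx % 7] + 12 * octave;
  });
}

// chordForDegree(0) → [0, 3, 7, 10]  (Cm7: root, minor 3rd, 5th, minor 7th)
// chordForDegree(1) → [2, 5, 8, 12]  (Dm7b5, the diminished ii chord)
```

Note the augmented case only shows up in the harmonic minor variant; in natural minor every degree comes out major, minor or diminished.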

Started out as a drone and yet here it is playing something that's more "melody" than "drone", but what can you do. ¯\_(ツ)_/¯

Kinda proud of how "knob-efficient" it came out while remaining one-knob-per-function and not having to resort to a modal interface.

Code's here:

| Static Attic

I tried out Claude Code on a little project - a static blog generator for this blog to be run on GitHub Pages. I used Gemini to help come up with a temporary name just so I could create the folder and move on. I asked for two words that rhyme, starting with "static" and it came up with Static Attic.

I got the HTML5 template from there too and got to work. I wanted a super minimal template, so I just started hand-crafting some HTML that would serve as the first post, the design, and an example for Claude of what I want.

Claude helped to kickstart a CLAUDE.md. This then caused me to have all sorts of questions, like what I should use for parsing YAML front matter out of a file that's otherwise Markdown, what to use for rendering Markdown in a GitHub Markdown compatible way, and how to syntax-highlight code blocks so that it generates static HTML with no React or other client-side JavaScript necessary. This way I could refine the plan, the features, the tech stack, the style, etc.

One of Claude's suggestions was to go with no template engine. So I figured I could just use JavaScript template strings. But then how do you render those if your template is in its own file? According to Gemini, 'This is a classic "gotcha" in JavaScript.', and it taught me about the Function constructor technique. Which I then told Claude to use.
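For the curious, a minimal sketch of the technique as I understand it (not the actual Static Attic code): the template file contains a bare template-literal body, and you compile it into a render function by wrapping it in backticks inside a Function constructor.

```javascript
// The template file holds something like:  <h1>${title}</h1>
// Compiling it: let the engine do the ${} interpolation for us.
function compileTemplate(source, paramNames) {
  // new Function("title", "return `...`") builds a function at runtime
  // whose body evaluates the file contents as a template literal.
  return new Function(...paramNames, "return `" + source + "`;");
}

const render = compileTemplate("<h1>${title}</h1>", ["title"]);
// render("Hello") → "<h1>Hello</h1>"
```

The obvious caveat is that this executes the template as code, which is fine for a static site generator running on your own files, less fine for anything user-supplied.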

I got a bit side-tracked with which node.js library to use to parse YAML front matter out of .md files since none of them seemed to have any recent updates. I settled on gray-matter after a brief detour trying front-matter in a modern TypeScript and JavaScript module setup.

Along the way I remembered that I wanted to play with TypeScript's erasable syntax so that I can execute .ts files with node directly. This meant a minimum node.js version of 24.

I also felt that it is time I try out pnpm over npm. Not that it would matter with such a simple project's lock file.

For markdown rendering and syntax highlighting I landed on markdown-it and @shikijs/markdown-it. I realised that YouTube embeds' iframes got escaped, so you would see the HTML rather than get the embed, and Gemini pointed out that I have to pass html: true to markdown-it's options.
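Roughly, the setup is a config fragment like this (a sketch based on my reading of the libraries' docs; the theme names are placeholders I picked, and the option shapes may differ in your versions):

```javascript
import MarkdownIt from "markdown-it";
import Shiki from "@shikijs/markdown-it";

const md = MarkdownIt({
  html: true, // without this, raw HTML like YouTube iframes gets escaped
});

// Shiki renders highlighted code to static HTML at build time,
// so no client-side JavaScript is needed on the published pages.
md.use(await Shiki({ themes: { light: "vitesse-light", dark: "vitesse-dark" } }));

const html = md.render(markdownSource);
```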

And that was basically it. Claude generated a working project structure with eslint, prettier, tests and everything set up on the first try; it generated the validation script; it even made a build script. At some point I ran out of tokens and switched to Copilot for some minor refactors so that the style would be more like how I would do it, but really it was fine. I asked Copilot to help me with the GitHub workflow so it publishes whenever I push to main, got it to fill out some tests and the README file, and that was it. Done.

Admittedly the smallest project imaginable, but I'm still quite impressed with the result. And happy. I've got something small, minimal and self-contained that I fully understand and can extend if/when I ever need more.

Was it faster than if I'd written it myself? I'm not sure. I still did quite a bit of researching, thinking, planning and designing, especially up front. I was still quite opinionated and particular about how I wanted it to work and what aesthetic I was going for. We'll see how my behaviour evolves over time.

| Platform 16 Arpeggios

Added an arpeggiator to the new "two-tone euclidean polymeter" firmware for platform 16 this morning, reusing code from when I made the FM chord synth. So you can pick a "degree" in the scale and it chooses the "right" chord shape for that point in the scale (major, minor, diminished, augmented), and you can arpeggiate the notes in that chord. There are various arpeggio algorithms to pick from, starting with "off", where it just plays the chord's root note.

But then, in keeping with the vibe of the rest of the firmware, there's a "rhythm" at play: if the note in the rhythm is "high" it advances the arpeggio, otherwise it plays the last note again. So depending on the density of the "hits" in the rhythm you can have it always advance, play something a bit stochastic, or just drone and rarely/slowly change the note.
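A sketch of that advance-or-hold logic (my own naming, not the firmware's code): each clock tick consumes one step of the boolean rhythm, and only a "high" step moves the arpeggio forward.

```javascript
// notes: the arpeggio's note sequence; rhythm: looping boolean pattern.
function makeArp(notes, rhythm) {
  let noteIdx = 0;
  let tick = 0;
  return function step() {
    if (rhythm[tick % rhythm.length]) {
      noteIdx = (noteIdx + 1) % notes.length; // "high": advance the arpeggio
    } // "low": hold, replaying the last note
    tick++;
    return notes[noteIdx];
  };
}

// A Cm7 arpeggio driven by a sparse rhythm:
const step = makeArp([0, 3, 7, 10], [true, false, false, true]);
// step(), step(), step(), step() → 3, 3, 3, 7
```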

And it automatically doesn't loop often, or at least not in an obvious way, because most of these rhythm lengths and now the arpeggio lengths are likely to be co-prime.
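To put numbers on that (the lengths here are made up for illustration): the combined pattern only repeats after the least common multiple of the two lengths, and co-prime lengths push that all the way up to their product.

```javascript
// The combined rhythm + arpeggio pattern repeats after lcm(a, b) ticks.
const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
const lcm = (a, b) => (a / gcd(a, b)) * b;

// A 16-step rhythm driving a 7-note arpeggio (co-prime lengths):
// lcm(16, 7) → 112 ticks before the whole thing loops.
// Compare a 16-step rhythm with an 8-note arpeggio:
// lcm(16, 8) → 16, an obvious loop.
```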

arpeggio.hpp is mostly vibe-coded. What's interesting is that I spent more time thinking about what I wanted and writing the instructions than the generation saved me. And that's usually the case.

Also added euclidean rhythms the other day, and rhythms.hpp is the same story. Although I did generate the vectors containing the rhythms separately using a JavaScript script.

The synthetic clock generator thingy I originally wrote manually and then factored out manually; then the AI vibe-wrote the description. And got it perfectly accurate!

fun!

| A LS013B7DH03 driver in the ESP32 ULP co-processor

I've been working with Sharp's LS013B7DH03 memory LCD. While looking for a driver to use with MicroPython I realised that they all invert some COM signal every time they update the display, usually with a vague comment about how you have to update the display regularly. The EXTCOMIN and EXTMODE pins also baffled me until I eventually found this handy bit of documentation by Peter Hinch that explains it.

So basically one way or another the COM signal has to be inverted regularly or you'll gradually damage the LCD. I agree that doing that in software seems to be easier. For my application I want to keep the ESP32 in deep sleep as long as possible and waking from deep sleep takes a significant part of a second, so waking every second or so to invert the signal is out of the question.

The ESP32 also has a ULP (Ultra Low Power) Coprocessor, so I got the idea to use that to periodically invert the COM signal. But then I thought.. Is it possible to just have the entire display driver live in the ULP processor with the entire display buffer in the 8K of RTC memory along with the code? And the answer turns out to be.. kinda. But it is just a bit too slow.

The ULP is extremely limited. It has 4 general purpose registers, a weird hidden 8-bit register that's only really useful for loops, and very few instructions. It is clocked from the ~8 MHz clock. It seems like you can maybe also clock it from 40 MHz/4 = ~10 MHz, but I haven't figured out how to do that yet. Instructions take 4, 6, 8 or 12 clock cycles to execute and load the next instruction. The macros for reading/writing peripheral registers take who knows how many more. It can only access memory at 32-bit word boundaries, and it can only read/write the lower 16 bits: the upper 16 bits of each 32-bit word are inaccessible.

The display is 128x128 monochrome, so a full screen buffer is 2048 bytes, but because half of each word is inaccessible it actually takes 4096 bytes, or 1024 32-bit words, i.e. exactly half of the 8K of RTC memory. The ULP is designed to do ADC readings, or monitor something over an I2C bus, or maybe check some pins and then wake up the main processor. The display talks SPI, and the ULP has no access to SPI hardware. So I would have to bitbang it..

I've never written anything in assembly before (let alone for such an obscure target), so I had no feel for what kind of speed I'd get. I decided to just wing it and if it doesn't work out I could always go back to plan A which was to only run the ULP while the main processor is asleep and have the main processor take over the SPI bus to quickly update the display otherwise.

So my driver borrows from MicroPython nano-gui's driver in that it subclasses framebuf.FrameBuffer, but all the Python code does is keep a 2048-byte screen buffer that it copies over to RTC memory in the weird "only use half of it" style that the ULP understands. This way it behaves just like a normal MicroPython framebuffer display driver. The ULP code then just stores the last COM signal state, wakes up periodically, inverts it, copies the buffer contents from that patch of RTC memory to the display, and goes back to sleep.
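The packing is simple enough to sketch (in JavaScript here for illustration; the names and the byte order are my assumptions, not the actual driver): the 2048-byte framebuffer gets spread over 1024 32-bit words, two bytes per word, because the ULP can only reach the lower 16 bits of each word.

```javascript
// frame: Uint8Array(2048), one bit per pixel of the 128x128 display.
function packForUlp(frame) {
  const words = new Uint32Array(frame.length / 2);
  for (let i = 0; i < words.length; i++) {
    // Two framebuffer bytes in the low 16 bits; the upper 16 bits stay
    // zero, since the ULP can't access them anyway.
    words[i] = frame[2 * i] | (frame[2 * i + 1] << 8);
  }
  return words; // 1024 words = 4096 bytes of RTC RAM for a 2048-byte frame
}
```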

I optimised this a fair amount and.. it takes about 140ms according to my oscilloscope to copy over a frame. I can maybe get that down closer to 100ms, which would be 10 frames per second, and maybe boost it by a further 20% if I can switch to the faster clock, but that's still gonna be in the range of 10 FPS for full-screen updates, which isn't great. It should be possible to speed up "off" pixels a bit and to only update changed lines, but I'm not crazy about those limitations; I like things to perform predictably.

This was interesting and I learned a lot. I put my hacky code up as a Gist.

What I'll probably end up doing instead is copying the frame data to the display from the main program on the main processor, and only using the ULP during deep sleep in the "maintain memory internal data" mode, where you send the display a single byte including the inverted COM bit. I was hoping to avoid that because it has its own bits of complexity: you have to make sure that control of the SPI lines gets properly handed off between the main processor and the ULP.

prototype pcb with sharp memory LCD