dbushell.com (blog): David Bushell’s Blog-only feed
https://dbushell.com/blog/
Tue, 17 Mar 2026 15:00:00 GMT
David Bushell (en-GB)

SMTP on the edge
https://dbushell.com/2026/03/17/smtp-on-the-edge/
Tue, 17 Mar 2026 15:00:00 GMT

Disclaimer: this post includes my worst idea yet!

Until now my contact form submissions were posted to a Cloudflare worker. The worker encrypted the details with PGP encryption. It then used the Amazon AWS “Simple Email Service” API to send an email to myself. PGP encryption meant that any middleman after the worker, like Amazon, could not snoop. (TLS only encrypts in transit.) The setup was okay but involved too many services.

If you thought that was over-engineered, get a load of my next idea.

SMTP experiments

My experiment with a self-hosted SMTP server was short-lived, but I did learn to code the SMTP protocol with server-side JavaScript. During that tinkering I had issues upgrading TLS on the SMTP server for receiving email.

In my recent AT Protocol PDS adventure I learned that Proton Mail can generate restricted tokens for SMTP client auth. I’ve also been slowly migrating from Cloudflare to Bunny in my spare time. I was reminded that Bunny has Deno edge workers.

Lightbulb moment: can I rawdog SMTP in a Bunny worker?

New idea

  • PGP encryption in the browser
  • POST to Bunny edge worker
  • SMTP directly to Proton

This cuts out the AWS middleman. Neither Bunny nor Proton ever see the unencrypted data. True end-to-end encryption for my contact form!

I threw together a proof-of-concept. My script opened a TCP connection to Proton using Deno.connect and sent the SMTP STARTTLS command. The connection was upgraded with Deno.startTls to secure it. It then followed a very fragile sequence of SMTP messages to authenticate and send an email. If anything unexpected happened it bailed immediately.

Surprisingly this worked! I’m not sharing code because I don’t want to be responsible for any misuse. There is nothing in Bunny’s Terms of Service or Acceptable Use Policy that explicitly prohibits sending email. Magic Containers do block ports but Edge Scripting doesn’t.

I asked Bunny support who replied:

While Edge Scripting doesn’t expose the same explicit port limitation table as Magic Containers, it’s not intended to be used as a general-purpose SMTP client or email relay. Outbound traffic is still subject to internal network controls, abuse prevention systems, and our Acceptable Use Policy.

Even if SMTP connections may technically work in some cases, sending email directly from Edge Scripts (especially at scale) can trigger automated abuse protections. We actively monitor for spam and unsolicited email patterns, and this type of usage can be restricted without a specific “port block” being publicly documented.

If you need to send transactional emails from your application, we strongly recommend using a dedicated email service provider (via API) rather than direct SMTP from Edge Scripting.

bunny.net support

…that isn’t an outright “no” but it’s obviously a bad idea.

New idea v2

To avoid risking an account ban I decided to use the Bunny edge worker to forward the encrypted data to a self-hosted API. That service handles the SMTP. In theory I could decrypt and log locally, but I’d prefer to let Proton Mail manage security. I’m more likely to check my email inbox than a custom GUI anyway.

The OpenPGP JavaScript module is a big boy at 388 KB (minified) and 144 KB (compressed). I load this very lazily after an input event on my contact form.
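The lazy-loading pattern boils down to caching an import promise so repeated input events trigger at most one download. This is a hypothetical sketch, not my actual contact form code; wiring it to a form listener is an assumption, though `openpgp` is the real package name.

```javascript
// Sketch: defer a heavy module until first interaction, loading it once.
// (A hypothetical helper, not the site's actual code.)
function lazy(loader) {
  let promise = null;
  // Reuse the same promise on every call so the loader runs only once.
  return () => (promise ??= loader());
}

// Hypothetical usage with the OpenPGP module:
// const loadOpenPGP = lazy(() => import("openpgp"));
// form.addEventListener("input", loadOpenPGP, { once: true });
```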

Last year in a final attempt to save my contact form I added a Cloudflare CAPTCHA to thwart bots. I’ve removed that now because I believe there is sufficient obfuscation and “proof-of-work” to deter bad guys.

Binning both Cloudflare and Amazon feels good. I deleted my entire AWS account.

My new contact form seems to be working. Please let me know if you’ve tried to contact me in the last two weeks and it errored. If this setup fails, I really will remove the form forever!


Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

What is agentic engineering?
https://dbushell.com/2026/03/16/what-is-agentic-engineering/
Mon, 16 Mar 2026 15:00:00 GMT

Below is a parody of Simon Willison’s What is agentic engineering?


I use the term agentic engineering to describe the practice of casino gambling with the assistance of random superstitions.

What are random superstitions? They’re superstitions that can both write and execute entropy. Popular examples include blowing on dice, wearing lucky socks, and saying a prayer.

What’s a superstition? Clearly defining that term is a challenge that has frustrated gambling researchers since at least the 1990s BC but the definition I’ve come to accept, at least in the field of Random Number Generators (RNGs) like GPT-5 and Gemini and Claude, is this one:

The “superstition” is a belief that calls upon God with your prompt and passes it a set of magic definitions, then calls any ritual that the deity requests and feeds the results back into the slot machine.

For random superstitions, those rituals include one that can confirm bias.

You prompt the random superstition to define a bias. The superstition then generates and executes random numbers in a loop until that bias has been confirmed.

Dogmatic faith is the defining capability that makes agentic engineering possible. Without the ability to directly play a hand, anything output by an RNG is of limited value. With automated card shuffling, these superstitions can start iterating towards gambling that demonstrably “works”.

[…]


Enough of that. If you want to experience agentic engineering yourself, visit my homepage and play the one-armed code bandit!


SvelteKit i18n and FOWL
https://dbushell.com/2026/03/11/sveltekit-internationalization-flash-of-wrong-locale/
Wed, 11 Mar 2026 15:00:00 GMT

Perhaps my favourite JavaScript APIs live within the Internationalization namespace.

A few neat things the Intl global allows:

  • Natural alphanumeric sorting
  • Relative date and times
  • Currency formatting

It’s powerful stuff and the browser or runtime provides locale data for free!

That means timezones, translations, and local conventions are handled for you. Remember moment.js? That library with locale data is over 600 KB (uncompressed). That’s why JavaScript now has the Internationalization API built-in.
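Each of those features is one constructor away. A quick sketch with explicit locales (the output comments assume a standard CLDR-backed runtime):

```javascript
// Natural alphanumeric sorting
const collator = new Intl.Collator("en", { numeric: true });
const sorted = ["item10", "item2", "item1"].sort(collator.compare);
// → ["item1", "item2", "item10"]

// Relative dates and times
const relative = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
const yesterday = relative.format(-1, "day"); // → "yesterday"

// Currency formatting
const gbp = new Intl.NumberFormat("en-GB", {
  style: "currency",
  currency: "GBP",
});
const price = gbp.format(9.99); // → "£9.99"
```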

SvelteKit

SvelteKit and similar JavaScript web frameworks allow you to render a web page server-side and “hydrate” in the browser. In theory, you get the benefits of an accessible static website with the progressively enhanced delights of a modern “web app”.

I’m building attic.social with SvelteKit. It’s an experiment without much direction. I added a bookmarks feature and used Intl.DateTimeFormat to format dates.

<script>
  const dateFormat = new Intl.DateTimeFormat(undefined, {
    dateStyle: "medium",
    timeStyle: "short",
  });
</script>

<!-- later in the template -->
<time datetime={entry.createdAt}>
  {dateFormat.format(new Date(entry.createdAt))}
</time>

Perfect! Or was it? Disaster strikes! See this GIF:

formatted date flipping between US and British English
This is a GIF with a hard G not a JIF

What is happening here?

Because I don’t specify any locale argument in the DateTimeFormat constructor it uses the runtime’s default. When left unconfigured, many environments will default to en-US. I spotted this bug only in production because I’m hosting on a Cloudflare worker. SvelteKit’s first render is server-side using en-US but subsequent renders use en-GB in my browser. My eyes are briefly sullied by the inferior US format!

Is there a name for this effect? If not I’m coining: “Flash of Wrong Locale” (FOWL).

Fixing FOWL

To combat FOWL we must ensure that SvelteKit has the user’s locale before any templates are rendered. Browsers may request a page with the Accept-Language HTTP header.
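The parsing itself is straightforward. Here is a minimal sketch of what a negotiation helper does; the vendored @std/http version handles more edge cases, so treat this simplified `acceptsLanguages` as my own illustration:

```javascript
// Parse Accept-Language into locales ordered by q-value, highest first.
// Returns ["*"] when the header is absent (simplified; ignores edge cases).
function acceptsLanguages(request) {
  const header = request.headers.get("accept-language");
  if (!header) return ["*"];
  return header
    .split(",")
    .map((part) => {
      const [lang, q = "q=1"] = part.trim().split(";");
      return { lang: lang.trim(), q: parseFloat(q.split("=")[1]) || 0 };
    })
    .sort((a, b) => b.q - a.q)
    .map((entry) => entry.lang);
}
```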

The place to read headers is hooks.server.ts.

import { acceptsLanguages } from "$lib/negotiation";
import type { Handle } from '@sveltejs/kit';

export const handle: Handle = async ({ event, resolve }) => {
  const languages = acceptsLanguages(event.request);
  event.locals.locale = languages[0] === "*" ? undefined : languages[0];
  return resolve(event);
};

I’ve vendored the @std/http negotiation library to parse the request header. If no locales are provided it returns * which I change to undefined. SvelteKit’s event.locals is an object to store custom data for the lifetime of a single request.

Event locals are not directly accessible to SvelteKit templates; exposing them wholesale could be dangerous. We must use a page or layout load function to forward the data.

import type { PageServerLoad } from "./$types";

export const load: PageServerLoad = async ({ locals }) => {
  return {
    locale: locals.locale,
  };
};

Now we can update the original example to use the locale data.

<script lang="ts">
  import type { PageProps } from "./$types";
  
  let { data }: PageProps = $props();
  
  const dateFormat = $derived(
    new Intl.DateTimeFormat(data.locale, {
      dateStyle: "medium",
      timeStyle: "short",
    }),
  );
</script>

<!-- later in the template -->
<time datetime={entry.createdAt}>
  {dateFormat.format(new Date(entry.createdAt))}
</time>

I don’t think the $derived rune is strictly necessary but it stops a compiler warning.

This should eliminate FOWL unless the Accept-Language header is missing. Privacy-focused browsers like Mullvad Browser send a generic en-US header to avoid fingerprinting. Those users effectively opt out of internationalisation, but FOWL is still gone.

If there is a cache in front of the server, it must vary responses based on the Accept-Language header. Otherwise one visitor defines the locale for everyone who follows, unless something like a session cookie bypasses the cache.
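Telling caches to key on the language comes down to one response header. A generic sketch (my assumption of where to set it, not code from this project; in SvelteKit it could wrap the `resolve()` call in the server hook):

```javascript
// Sketch: append Vary so caches store one copy per Accept-Language value.
function varyByLanguage(response) {
  const headers = new Headers(response.headers);
  headers.append("Vary", "Accept-Language");
  return new Response(response.body, { status: response.status, headers });
}
```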

You could provide a custom locale preference to override browser settings. I’ve done that before for larger SvelteKit projects. Link that to a session and store it in a cookie, or database. Naturally, someone will complain they don’t like the format they’re given. This blog post is guaranteed to elicit such a comment. You can’t win!

Oh, Safari

Why can’t you be normal, Safari? Despite using the exact same en-GB locale, Safari still commits FOWL by using an “at” word instead of a comma.

formatted date flipping between US and British English in Safari browser

Whose fault is this? The ECMAScript standard recommends using data from Unicode CLDR. I don’t feel inclined to dig deeper, but it looks like a JavaScriptCore quirk because Bun does the same. That is unfortunate because it means the standard is not quite standard across runtimes.

By the way, the i18n and l10n abbreviations are kinda lame to be honest. It’s a fault of my design choices that “internationalisation” didn’t fit well in my title.


Building on AT Protocol
https://dbushell.com/2026/03/10/building-on-at-protocol/
Tue, 10 Mar 2026 15:00:00 GMT

AT Protocol has got me! I’m morphing into an atmosphere nerd.

AT Protocol — atproto for short — is the underlying tech that powers Bluesky and new social web apps. Atproto as I understand it is largely an authorization and data layer.

Atproto data

All atproto data is inherently public. In theory it can be encrypted for private use but leaky metadata and de-anonymisation is a whole thing. Atproto users own the keys to their data which is stored on a Personal Data Server (PDS). You don’t need to manage your own. If you don’t know where your data is stored, good chance it’s on Bluesky’s PDS.

You can move your data to another PDS like Blacksky or Eurosky. Or, if you’re a nerd like me, self-host your own PDS. You own your data and no PDS can stop you moving it.

Atproto auth

Atproto provides OAuth; think “Sign in with GitHub”. But instead of an account being locked behind the whims of proprietary slopware, user identity is proven via their PDS.

Social apps like Bluesky host a PDS allowing users to create a new account. That account can be used to log in to other apps like pckt, Leaflet, or Tangled. You could start a new account on Tangled’s PDS and use that for Bluesky. Atproto apps are not required to provide a PDS but it helps to onboard new users.

I built a thing!

Of course I did. You can sign in at attic.social

Login form for attic.social with account handle suggestions from Bluesky

Attic is a cozy space with lofty ambitions. What does Attic do? I’m still deciding… it’ll probably become a random assortment of features. Right now it has bookmarks. Bookmarks will have search and tags soon.

Technical details: to keep the server stateless I borrowed ideas from my old SvelteKit auth experiment. OAuth and session state is stored in encrypted HTTP-only cookies. I used the atcute TypeScript libraries to do the heavy atproto work. I found @flo-bit’s projects which helped me understand implementation details. Attic is on Cloudflare Workers for now. When I have free time I’ll explore the SvelteKit Bunny adapter.

I am busy on client projects so I’ll be scheming Attic ideas in my free time.

Decentralised

What’s so powerful about atproto is that users can move their account and data. Apps write data to a PDS using a lexicon: a convention to say, for example, “this is a Bluesky post”. Other apps are free to read that data too. During authorization, apps must ask for permission to write to specific lexicons. The user is in control.
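As a concrete illustration, a record written to a PDS is just typed JSON. `app.bsky.feed.post` is Bluesky’s real post lexicon; the field values below are my own placeholders, not data from any real repository:

```javascript
// A record as an app might write it to a user's PDS repository.
// The $type names the lexicon so other apps know how to read the data.
const record = {
  $type: "app.bsky.feed.post",
  text: "Hello from the atmosphere",
  createdAt: new Date().toISOString(),
};
```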

You may have heard that Bluesky is or isn’t “decentralised”. Bluesky was simply the first atproto app. Most users start on Bluesky and may never be aware of the AT Protocol. What’s important is that atproto makes it difficult for Bluesky to “pull a Twitter”, i.e. kill 3rd party apps, such as the alternate Witchsky.

If I ever abandon attic.social your data is still in your hands. Even if the domain expires! You can extract data from your PDS. You can write a new app to consume it anytime. That’s the power of AT Protocol.


Bunny.net shared storage zones
https://dbushell.com/2026/03/04/bunny-shared-storage-zones/
Wed, 04 Mar 2026 15:00:00 GMT

Whilst moving projects off Cloudflare and migrating to Bunny I discovered a neat ‘Bunny hack’ to make life easier. I like to explicitly say “no” to AI bots using AI robots.txt. Updating this file across multiple websites is tedious. With Bunny it’s possible to use a single file.

I’m no fool, I know the AI industry has a consent problem but the principle matters.

The trick

My solution was to create a new storage zone as a single source of truth.

Bunny.net interface for a storage zone with a single robots.txt file

In the screenshot above I’ve uploaded my common robots.txt file to its own storage zone. This zone doesn’t need any “pull zone” (CDN) connected. The file doesn’t need to be publicly accessible by itself here.

With that ready, I visited each pull zone that will share the file. Under “CDN > Edge rules” in the menu I added the following rule.

Bunny.net interface for an edge rule

I chose the action: “Override Origin: Storage Zone” and selected the new shared zone. Under conditions I added a “Request URL” match for */robots.txt. Using a wildcard makes it easier to copy & paste. I tried dynamic variables but they don’t work for conditions.

I added an identical edge rule for all websites I want to use the robots.txt. Finally, I made sure the CDN cache was purged for those URLs.

Bunny.net interface for purge URL list

This technique is useful for other shared assets like a favicon, for example.

Neat, right?

One downside to this approach is vendor lock-in. If or when Bunny hops the shark and I migrate elsewhere I must find a new solution. My use case for robots.txt is not critical to my websites functioning so it’s fine if I forget.


MOOving to a self-hosted Bluesky PDS
https://dbushell.com/2026/03/02/mooving-to-a-self-hosted-bluesky-pds/
Mon, 02 Mar 2026 15:00:00 GMT

Bluesky is a “Twitter clone” that runs on the AT Protocol. I have to be honest, I’d struggle to explain how atproto works. I think it’s similar to Nostr but like, good? When atproto devs talk about The Atmosphere they sound like blockchain bros. The marketing needs consideration. Bluesky, however, is a lot of fun. Feels like early Twitter.

Nobody cool uses Twitter anymore ever. It’s a cesspit of racists asking Gork to undress women.

Self-hosting

Mastodon and Bluesky are the social platforms I use. I’ve always been tempted to self-host my own Mastodon instance but the requirements are steep. I use the omg.lol server instead. Self-hosting the Bluesky PDS is much less demanding.

My setup includes:

Raspberry Pi 5

This is the host machine; I glued an NVMe drive onto its underside. All services run as Docker containers for easy security sandboxing. I say easy, but it took many painful years to master Docker. I have the Pi on a firewalled VLAN because I’m extra paranoid.

Bluesky PDS

I set up my Bluesky PDS using the official Docker container. It’s configured with environment variables and has a single data volume mounted. I back up that volume to my NAS.

Caddy

I’ve put Caddy in front of the PDS container. Right now it just acts as a reverse proxy. This gives me flexibility later if I want to add access logs, rate limiting, or other plugins.

Cloudflare Tunnel

Booo! If you know a good European alternative please let me know! The tunnel links Caddy to the outside world via Cloudflare to avoid exposing my home IP address. Cloudflare also adds an extra level of bot protection.

The guides I followed suggest adding wildcard DNS for the tunnel. Cloudflare has shuffled the dashboard for the umpteenth time and I can’t figure out how. I think sub-domains are only used for user handles, e.g. user.example.net. I use a different custom domain for my handle (@dbushell.com) with a manual TXT record to verify.

Proton SMTP

Allowing the PDS to send emails isn’t strictly necessary. It’s useful for password resets and I think it’ll send a code if I migrate PDS again. I went through the hassle of adding my PDS domain to Proton Mail and followed their SMTP guide.

PDS_EMAIL_SMTP_URL=smtp://[email protected]:[email protected]:587
[email protected]

This shows how the PDS environment variables are formatted. It took me forever to figure out where the username and password went.
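For anyone else stuck, the shape is `smtp://username:password@host:port`. A hypothetical sketch with placeholder values (not my real credentials or SMTP host), noting that special characters in the credentials need percent-encoding:

```javascript
// Build a PDS_EMAIL_SMTP_URL value. The "@" in the username must be
// percent-encoded or a URL parser will treat it as the host separator.
const username = encodeURIComponent("user@example.com");
const password = encodeURIComponent("app-token");
const smtpUrl = `smtp://${username}:${password}@smtp.example.com:587`;
// → "smtp://user%40example.com:app-token@smtp.example.com:587"
```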

PDS MOOver

PDS MOOver by Bailey Townsend is the tool that does the data migration. It takes your Bluesky password and probably sees your private key, so use it at your own risk! I set up a new account to test it before I YOLO’d my main.

MOOve successful!

I still login at bsky.app but I now select “custom account provider” and enter my PDS domain. SkyTools has a tool that confirms it. Bluesky Debug can check handles are verified correctly. PDSIs.dev is a neat atproto explorer.

I cross-referenced several guides for help.

Most of the Cloudflare stuff is outdated because Cloudflare rolls dice every month.

Bluesky is still heavily centralised but the atproto layer allows anyone to control their own data. I like doing that on principle. I don’t like maintenance, but I’ve heard that’s minimal for a PDS. Supposedly it’s possible to migrate back to Bluesky’s PDS if I get bored.

I’m tempted to build something in The Atmosphere. Any ideas?

Update for 3rd March 2026

Xan suggested I add a favicon which can appear on witchsky.app. In Docker I mounted a “public” directory to the Caddy container. In the Caddyfile route I added a handle to match /favicon.ico and serve the file (before the reverse proxy to the PDS container). I knew Caddy would come in handy!

Update for 9th March 2026

I’ve written a post about building on AT Protocol. See my latest side quest!


Croissant and CORS proxy update
https://dbushell.com/2026/02/27/croissant-cors-proxy-update/
Fri, 27 Feb 2026 10:00:00 GMT

Croissant is my home-cooked RSS reader. I wish it was only a progressive web app (PWA) but due to missing CORS headers, many feeds remain inaccessible. My RSS feeds have the Access-Control-Allow-Origin: * header and so should yours! Blogs Are Back has a guide to enable CORS for your blog.

Bypassing CORS requires some kind of proxy. Other readers use a custom browser extension. That is clever, but extensions can be dangerous. I decided on two solutions. I wrapped my PWA in a Tauri app. This is also dangerous if you don’t trust me. I also provided a server proxy for the PWA. A proxy has privacy concerns but is much safer.
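For context, a feed proxy is conceptually tiny: fetch the feed server-side and re-serve it with a permissive header. A hypothetical sketch, not Croissant’s actual code, with the fetch implementation injectable so the logic can be tested without a network:

```javascript
// Sketch of a CORS proxy: fetch the target feed server-side and re-serve
// it with Access-Control-Allow-Origin so a browser PWA can read it.
async function proxyFeed(request, fetchImpl = fetch) {
  const target = new URL(request.url).searchParams.get("url");
  if (!target) return new Response("Missing url parameter", { status: 400 });
  const upstream = await fetchImpl(target);
  const headers = new Headers(upstream.headers);
  headers.set("Access-Control-Allow-Origin", "*");
  return new Response(upstream.body, { status: upstream.status, headers });
}
```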

No more proxy

I’m sorry if anyone is using Croissant as a PWA because the proxy is now gone. If a feed has the correct CORS headers it will continue to work.

Sorry for the abrupt change. That’s super lame, I know! To be honest I’ve lost a bit of enthusiasm for the project and I can’t maintain a proxy. Croissant was designed to be limited in scope to avoid too much burden. In hindsight the proxy was too ambitious.

Can you self-host the PWA & proxy?

Technically, yes! But you’ll have to figure that out by yourself. If you have questions, such as where to find the code, how the code works etc, the answer is no. I don’t mean to be rude, I just don’t have any time! You’re welcome to ask for support but unless I can answer in 30 seconds I’ll have to decline.

What’s new

Croissant is feature complete! It does what I set out to achieve. I have fixed several minor bugs and tweaked a few styles. Until inspiration (or a bug) strikes I won’t do another update anytime soon. Maybe later in the year I’ll decide to overhaul it? Who can predict!


Everything you never wanted to know about visually-hidden
https://dbushell.com/2026/02/20/visually-hidden/
Fri, 20 Feb 2026 15:00:00 GMT

Nobody asked for it but nevertheless, I present to you my definitive “it depends” tome on visually-hidden web content. I’ll probably make an amendment before you’ve finished reading. If you enjoy more questions than answers, buckle up! I’ll start with the original premise, even though I stray off-topic on tangents and never recover.

The question

I was nerd-sniped on Bluesky. Ana Tudor asked:

Is there still any point to most styles in visually hidden classes in ’26?

Any point to shrinking dimensions to 1px and setting overflow: hidden when clip-path to nothing via inset(50%)/ circle(0) reduces clickable area to nothing? And then no 1px dimensions = no need for white-space.

@anatudor.bsky.social

Ana proposed the following:

.visually-hidden { /* shouldn't this be enough in 2026? */
  position: absolute; /* take out of document flow */
  clip-path: circle(0); /* reduce clickable area to nothing */
}

Is this enough in 2026?

As an occasional purveyor of the visually-hidden class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text!

Table of contents

I went so deep down the rabbit hole I must start with a table of contents:

Accessibility notice

I’m writing this based on the assumption that a visually-hidden class is considered acceptable for specific use cases. My final section on native visually-hidden addresses the bigger accessibility concerns. It’s not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues.

Appropriate use cases for visually-hidden are far fewer than you think.

Class walkthrough

Skip to the history lesson if you’re familiar.

visually-hidden, sr-only — there have been many variations on the class name. I’ve looked at popular implementations and compiled the kitchen sink version below.

.visually-hidden {
  border: 0;
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

Please don’t copy this as a golden sample. It merely encompasses all I’ve seen.

There are variations on the selector using pseudo-classes that allow for focus. Think “skip to main content” links, for example.

What is the purpose of the visually-hidden class? The idea is to hide an element visually, but allow it to be discovered by assistive technology. Screen readers being the primary example. The element must be removed from layout flow. It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers.

If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It’s a massive hack! How was this normalised? We’ll find out later.

I’ll whittle down the visually-hidden properties for those unfamiliar.

.visually-hidden {
  position: absolute;
}

Absolute positioning is vital to remove the element from layout flow. Otherwise the position of surrounding elements will be affected by its presence.

.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
}

This crops the visible area to nothing. clip remains as a fallback, but it has long been deprecated. All modern browsers support clip-path.

.visually-hidden {
  border: 0;
  padding: 0;
}

These two properties remove styles that may add layout dimensions.

.visually-hidden {
  height: 1px;
  margin: -1px;
  width: 1px;
}

This group effectively gives the element zero dimensions. There are reasons for 1px instead of 0px and negative margin that I’ll cover later.

.visually-hidden {
  overflow: hidden;
}

Another property to ensure no visible pixels are drawn. I’ve seen the newer overflow: clip value used, but what difference that makes, if any, is unclear.

.visually-hidden {
  white-space: nowrap;
}

This was added to address text wrapping inside the 1px square (I’ll explain later).

So basically we have position: absolute and a load of properties that attempted to make the element invisible. We cannot use display: none or visibility: hidden or content-visibility: hidden because those remove elements from the accessibility tree.

So the big question remains: why must we still ‘zero’ the dimensions? Why is clip-path not sufficient? To make sense of this mystery I went back to the beginning.

'Impossible. Perhaps the archives are incomplete.' says a perplexed Obi-Wan Kenobi, who searches the Jedi archives for a mysterious planet (from Star Wars: Episode II)

Where it all began

It was tricky to research this topic because older articles have been corrected with modern information. I recovered many details from the archives and mailing lists with the help of those involved. They’re cited along the way.

Our journey begins November 2004.

A draft document titled “CSS Techniques for WCAG 2.0” edited by Wendy Chisholm and Becky Gibson includes a technique for invisible labels.

While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly.

Creating Invisible labels for form elements (history)

The following CSS was provided:

.nosize {
  position: absolute;
  width: 0px;
  height: 0px;
  overflow: hidden;
}

Could this be the original visually-hidden class?

My research jumped through decades but eventually I found an email thread “CSS and invisible labels for forms” on the W3C WAI mailing list. This was a month prior, preluding the WCAG draft. A different technique from Bob Easton was noted:

.off-left {
  position: absolute;
  left: -999px;
  width: 990px;
}

The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site’s accessibility statement). Imagine interspersing “start of…” landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works!

Screenreader Visibility - Bob Easton (2003)

Easton attributed both Choan Gálvez and Dave Shea for their contributions.

In the same thread, Gez Lemon proposed overflow to ensure that text doesn’t bleed into the display area. Following up, Becky Gibson shared a test case covering the ideas.

.offscreen {
  position: absolute;
  width: 0px;
  overflow: hidden;
}

.offscreen2 {
  position: absolute;
  left: -200em;
}

Lemon later published an article “Invisible Form Prompts” about the WCAG plans which attracted plenty of commenters including Bob Easton.

The resulting WCAG draft guideline discussed both the nosize and offscreen ideas.

Note that instead of using the nosize style described above, you could instead use postion:absolute; and left:-200px; to position the label “offscreen”. This technique works with the screen readers as well. Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content.

Creating Invisible labels for form elements

Two options were known and considered towards the end of 2004.

  1. Zero dimensions
  2. Position off-screen

Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004.

.hidden {
  position: absolute;
  left: 0px;
  top: -100px;
  width: 1px;
  height: 1px;
  overflow: hidden;
}

Bohman even discovered possibly the first zero width bug.

I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested.

Re: Hiding text using CSS - Paul Bohman

Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder.

Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable.

The zero width story continues as recently as February 2026 (last week).

In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible. This may make it possible to access previously inaccessible “screen reader only” content on some websites.

NVDA 2026.1 Beta TWO now available - NV Access News

Digging further into WebAIM’s email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links.

a.skip {
  position: absolute;
  overflow: hidden;
  width: 0;
  height: 0;
}

I found Gilder’s blog in the web archives introducing this technique.

I thought I’d put down my “skip navigation” link method down in proper writing as people seem to like it (and it gives me something to write about!). Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content.

Skip-a-dee-doo-dah - Tom Gilder

Gilder’s post links to a Dave Shea post which in turn mentions the 2002 book “Building Accessible Websites” by Joe Clark. Chapter eight discusses the necessity of a “skip navigation” link due to table-based layout but advises:

Keep them visible!

Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change).

Building Accessible Websites - 08. Navigation - Joe Clark

Clark expressed frustration over common tricks like the invisible pixel.

<a href="#skip">
  <img src="/media/core/1x1clear.gif"
    alt="[skip navigation links]"
    width="1"
    height="1"
  />
</a>

It’s clear no visually-hidden class existed when this was written.

Choan Gálvez informed me that Eric Meyer keeps archives of the css-discuss mailing list. Eric kindly searched the backups but didn’t find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of “skip navigation” links.

The desire to visually hide “skip navigation” links was likely the main precursor to the early visually-hidden techniques. In fact, Bob Easton said as much:

As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those “skip navigation” links that used to be attached to the invisible spacer images?

Screenreader Visibility - Bob Easton (2003)

I had originally missed that in my excitement seeing the off-left class.


I reckon we’ve reached the source of the visually-hidden class. At least conceptually. Technically, the class emerged from several ideas, rather than a “eureka” moment. Perhaps more can be gleaned from other CSS techniques, such as the efforts to improve the accessibility of CSS image replacement.

Bob Easton retired in 2008 after a 40 year career at IBM. I reached out to Bob who was surprised to learn this technique was still a topic today. Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn’t intended to accommodate. I’ll share more of Bob’s thoughts later.

I might have overdone the enthusiasm

Let’s take an intermission!

My contact page is where you can send corrections by the way :)


Further adaptations

The visually-hidden class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM’s guide to invisible content — Paul Bohman’s version is still recommended.

Moving forward to 2011, I found Jonathan Snook discussing the “clip method”. Snook leads us back a year to Drupal developer Jeff Burnz.

[…] we still have the big problem of the page “jump” issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL.

In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value.

Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz

It would seem Burnz discovered the clip technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion.

2010 also saw the arrival of HTML5 Boilerplate along with issue #194 in which Jonathan Neal plays a key role in the discussion and comments:

If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below]

.visuallyHidden {
    border: 0;
    clip: rect(0 0 0 0);
    height: 1px;
    margin: -1px;
    overflow: hidden;
    padding: 0;
    position: absolute;
    width: 1px;
}

This was their final decision. I’ve removed !important for clarity. This is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I’m inclined to conclude that the additional properties are really just there for the “possibility” of pixels escaping containment as much as fixing any identified problem.

Thierry Koblentz covered the state of affairs in 2012 noting that: Webkit, Opera and to some extent IE do not play ball with [clip]. Koblentz prophesies:

I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original

Clip your hidden content for better accessibility - Thierry Koblentz

Sound familiar? With those browsers obsolete, and if clip-path behaves itself, can the other properties be removed? Well, we have 14 years of new bugs and features to consider first.

In 2016, J. Renée Beach published “Beware smushed off-screen accessible text”. This appears to be the origin of nowrap (as demonstrated by Vispero).

Over a few sessions, Matt mentioned that the string of text “Show more reactions” was being smushed together and read as “Showmorereactions”.

Beach’s class did not include the kitchen sink.

.accessible_elem {
  clip: rect(1px, 1px, 1px, 1px);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

The addition of nowrap became standard alongside everything else.

Aside note: the origin of margin: -1px remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a <VisuallyHidden> React component in 2024 without margin. One of many examples showing that it has come in and out of fashion.

We started with WCAG so let’s end there. The latest WCAG technique for “Using CSS to hide a portion of the link text” provides the following code.

.visually-hidden {
  clip-path: inset(100%);
  clip: rect(1px, 1px, 1px, 1px);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

Circa 2020 the clip-path property was added as browser support increased and clip became deprecated. An obvious change I’m not sure warrants investigation (although someone had to be first!). That brings us back to what we have today.

Are you still with me?

Minimum viable technique

As we’ve seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant?

This is a classic Chesterton’s Fence scenario.

Do not remove a fence until you know why it was put up in the first place.

Well we kinda know why but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any “why” is still relevant?

Back to Ana Tudor’s suggestion.

.visually-hidden {
  position: absolute;
  clip-path: circle(0);
}

How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor skill to perform that adequately here. There is at least one concern with the code above: Curtis Wilcox noted that in Safari the focus ring behaves differently.

Other minimum viable ideas have been presented before.

Scott O’Hara proposed a different two-liner using transform.

.vs-hidden {
  position: absolute;
  transform: scale(0);
}

JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome.

In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there’s no text wrapping occurring, so no need to fix that issue.

transform scale(0) to visually hide content - Scott O’Hara

Sounds promising!

It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional visually-hidden class.

I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout.

Exploring the visually-hidden css - Katrin Kampfrath

Kampfrath’s limited testing found the read cursor size differs for each class. The clip-path technique was favoured but caution is given.

A few more years ago, Kitty Giraudel tested several ideas concluding that sr-only was still the most accessible for specific text use.

This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element.

Hiding content responsibly - Kitty Giraudel

Zell Liew proposed a different idea in 2019.

.hide-accessibly {
  position: absolute !important;
  opacity: 0;
  pointer-events: none;
}

Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned.

A new (and easy) way to hide content accessibly - Zell Liew

Liew’s idea was unfortunately torn asunder. Although there are cases like inclusively hiding checkboxes where near-zero opacity is more accessible.

I’ve started to go back in time again!

I’m also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it’s impossible to recommend anything.

This is impossible for developers! Why can’t browser vendors solve this natively?

'Help me, web standards working groups. You're my only hope.' caption superimposed over Princess Leia, who originally asked Obi-Wan for help (from Star Wars: Episode IV)

Native visually-hidden

Once you’ve written 3000 words on a twenty-year-old CSS hack you start to question why it hasn’t been baked into web standards by now.

Ben Myers wrote “The Web Needs a Native .visually-hidden” proposing ideas from HTML attributes to CSS properties. Scott O’Hara responded noting larger accessibility issues that are not so easily handled. O’Hara concludes:

Introducing a native mechanism to save developers the trouble of having to use a wildly available CSS ruleset doesn’t solve any of those underlying issues. It just further pushes them under the rug.

Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O’Hara

Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion.

I’ve been teaching accessibility for a little less than a decade now and if there’s one thing I learned is that developers will resort to using visually-hidden utility to do things that are more often than not just bad design decisions.

Yes, there are valid and important use cases. But I agree with all of @scottaohara’s points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use.

csswg-drafts comment - Sara Soueidan

Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O’Hara and Soueidan, Roselli recognises there is no silver bullet.

Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content. For sighted screen reader users, it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it.

My Priority of Methods for Labeling a Control - Adrian Roselli

In short, many believe that a native visually-hidden would do more harm than good. The use-cases are far more nuanced and context sensitive than developers realise. It’s often a half-fix for a problem that can be avoided with better design.

I’m torn on whether I agree that it’s ultimately a bad idea. A native version would give software an opportunity to understand the developer’s intent and define how “visually hidden” works in practice. It would be a pragmatic addition.

The visually-hidden technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform?

The web is overrun with inaccessible div soup. That is inexcusable. For the rest of us who care about accessibility — who try our best — I can’t help but feel the web platform has let us down. We shouldn’t be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy and money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors’ financial incentives, so let’s wrap up!

I’ll end by quoting Bob Easton from our email conversation:

From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices.

The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass.

[…] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help.

Bob ended with:

You can’t go wrong with well crafted, semantically accurate structure.

Ain’t that the truth.


Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

]]>
Web font choice and loading strategy When I rebuilt my website I took great care to optimise fonts for both performance and aesthetics. Fonts account for around 50% of my website (bytes downloaded on an empty cache). I designed and set a performance budget around my font usage. […] https://dbushell.com/2026/02/17/web-font-choice-and-loading-strategy/ https://dbushell.com/2026/02/17/web-font-choice-and-loading-strategy/ Tue, 17 Feb 2026 15:00:00 GMT When I rebuilt my website I took great care to optimise fonts for both performance and aesthetics. Fonts account for around 50% of my website (bytes downloaded on an empty cache). I designed and set a performance budget around my font usage.

I use three distinct font families and three different methods to load them.

Default practice

Web fonts are usually defined by the CSS @font-face rule. The font-display property allows us some control over how fonts are loaded. The swap value has become somewhat of a best practice — at least the most common default. The CSS spec says:

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and an infinite swap period.

In other words, the browser draws the text immediately with a fallback if the font face isn’t loaded, but swaps the font face in as soon as it loads.

CSS Fonts Module Level 4 - W3C

That small “block period”, if implemented by the browser, renders an invisible font temporarily to minimise FOUC. Personally I default to swap and don’t change unless there are noticeable or measurable issues.

Most of the time you’ll use swap. If you don’t know which option to use, go with swap. It allows you to use custom fonts and tip your hand to accessibility.

font-display for the Masses - Jeremy Wagner

Google Fonts defaults to swap, which has performance gains.

In effect, this makes the font files themselves asynchronous—the browser immediately displays our fallback text before swapping to the web font whenever it arrives. This means we’re not going to leave users looking at any invisible text (FOIT), which makes for both a faster and more pleasant experience.

Speed Up Google Fonts - Harry Roberts

Harry further notes that a suitable fallback is important, as I’ll discover below.
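To illustrate the fallback Harry describes: with swap, whatever comes next in the font stack renders until the web font arrives. A minimal sketch (the family name and fallbacks here are hypothetical, not my actual stack):

```css
/* "My Web Font" is hypothetical; until it loads (or if it fails),
   the browser renders the next available family in the stack */
body {
  font-family: "My Web Font", Arial, Helvetica, sans-serif;
}
```

The closer the fallback’s metrics are to the web font, the smaller the layout shift when the swap happens.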

My font choices

My three fonts in order of importance are:

Headings

Ahkio for headings. Its soft brush stroke style has a unique hand-drawn quality that remains open and legible. As of writing, I load three Ahkio weights at a combined 150 KB. That is outright greed! Ahkio is core to my brand so it takes priority in my performance budget (and financial budget, for that matter!)

Testing revealed the 100ms block period was not enough to avoid FOUC, despite optimisation techniques like preload. Ahkio’s design is more condensed so any fallback can wrap headings over additional lines. This adds significant layout shift.

side-by-side comparison of a default sans-serif wrapping headings over four lines where the Ahkio font needs only two

The Chrome blog mentions a zero-second block period. Firefox has a config preference that defaults to 100ms.

My solution was to use block instead of swap, which extends the block period from the recommended 0–100ms up to a much longer 3000ms.

Gives the font face a short block period (3s is recommended in most cases) and an infinite swap period.

In other words, the browser draws “invisible” text at first if it’s not loaded, but swaps the font face in as soon as it loads.

CSS Fonts Module Level 4 - W3C

This change was enough to avoid ugly FOUC under most conditions. Worst case scenario is three seconds of invisible headings. With my website’s core web vitals a “slow 4G” network can beat that by half. For my audience an extended block period is an acceptable trade-off.
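I don’t show my Ahkio @font-face rules in this post, but the change amounts to a single descriptor. A sketch with a hypothetical file path and weight:

```css
/* file path and weight are illustrative, not my actual setup */
@font-face {
  font-family: Ahkio;
  src: url("ahkio-bold.woff2") format("woff2");
  font-weight: 700;
  font-display: block; /* hold text invisible up to ~3s, then swap */
}
```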

Hosting on an edge CDN with good cache headers helps minimise the cost.

Update: Richard Rutter suggested font-size-adjust which gives more fallback control than I knew. I shall experiment and report back!
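For the curious: font-size-adjust scales glyphs so the rendered font’s x-height matches a given ratio of the font size, which keeps fallback line wrapping closer to the web font. A sketch with a made-up ratio:

```css
/* 0.5 is illustrative; measure the web font's actual
   x-height to font-size ratio before relying on this */
h1, h2, h3 {
  font-family: Ahkio, sans-serif;
  font-size-adjust: 0.5;
}
```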

Body copy

Atkinson Hyperlegible Next for body copy. It’s classed as a grotesque sans-serif with interesting quirks such as a serif on the lowercase ‘i’. I chose this font for both its accessible design and technical implementation as a variable font.

One file at 78 KB provides both weight and italic variable axes. This allows me to give links a subtle weight boost. For italics I just go full-lean.

@font-face {
  font-family: "Atkinson Hyperlegible Next";
  src: url("AtkinsonHyperlegibleNextVF.woff2") format("woff2");
  font-display: swap;
  font-weight: 1 900;
}

a {
  font-weight: calc(var(--font-weight) + 50);
}

:is(i, em) {
  font-style: italic;
  font-variation-settings: "ital" 1;
}

I currently load Atkinson Hyperlegible with font-display: swap out of habit, but I’m seriously questioning why I don’t use fallback.

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and a short swap period (3s is recommended in most cases).

In other words, the font face is rendered with a fallback at first if it’s not loaded, but it’s swapped in as soon as it loads. However, if too much time passes, the fallback will be used for the rest of the page’s lifetime instead.

CSS Fonts Module Level 4 - W3C

The browser can give up and presumably stop downloading the font. The spec actually says that swap and block “[must/should] only be used for small pieces of text”, although it notes that most browsers implement the default auto with similar strategies to block.

Code snippets

0xProto for code snippets. If my use of Ahkio was greedy, this is gluttonous! A default monospace would be acceptable. My justification is that controlling the presentation of code on a web development site is reasonable. 0xProto is designed for legibility with a personality that complements my design.

I don’t specify 0xProto with the CSS @font-face rule. Instead I use the JavaScript font loading API to conditionally load when a <code> element is present.

if (document.querySelector("code")) {
  const font = new FontFace(
    "ZeroxProto",
    `url('0xProto-Regular.woff2') format('woff2')`,
    { weight: "400" },
  );
  document.fonts.add(font);
  font.load();
}

Note the name change because some browsers aren’t happy with a numeric first character.

Not shown is the DOMContentLoaded event wrapper around this code. I also load the script with both fetchpriority="low" and type="module" attributes. This tells the browser the script is non-critical and avoids render blocking. I could probably defer loading even later without readers noticing the font pop in.
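For reference, the script element might look like this (the path is hypothetical):

```html
<script src="/scripts/fonts.js" type="module" fetchpriority="low"></script>
```

Module scripts are deferred by default, so they never block rendering even without an explicit defer attribute.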

Update: for clarity, browsers will conditionally load @font-face but JavaScript can purposefully delay the loading further to avoid fighting for bandwidth. When JavaScript is not available the system default is fine.

Thoughts

There we have it, three fonts, three strategies, and a few open questions and decisions to make. Those may be answered when CrUX data catches up.

My new website is a little chunkier than before but it’s well within reasonable limits. I’ll monitor performance and keep turning the dials.

Web performance is about priorities. In isolation it’s impossible to say exactly how an individual asset should be loaded. There are upper limits, of course. How do you load a one-megabyte font? You don’t. Unless you’re a font studio providing a complete type specimen. But even then you could split the font and progressively load different unicode ranges. I wonder if anyone does that?
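That kind of progressive loading does exist in CSS: several @font-face rules can share one family name, each with a unicode-range, and the browser only downloads a subset when the page uses characters in its range. A sketch with hypothetical file names:

```css
/* Latin subset: downloaded for most English pages */
@font-face {
  font-family: "Specimen";
  src: url("specimen-latin.woff2") format("woff2");
  unicode-range: U+0000-00FF;
}

/* Greek subset: only fetched if Greek characters appear */
@font-face {
  font-family: "Specimen";
  src: url("specimen-greek.woff2") format("woff2");
  unicode-range: U+0370-03FF;
}
```

Google Fonts serves its fonts split into subsets this way.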

Anyway I’m rambling now, bye.


Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

]]>
Declarative Dialog Menu with Invoker Commands The off-canvas menu — aka the Hamburger, if you must — has been hot ever since Jobs invented the mobile web and Ethan Marcotte put a name to responsive design. My journey Making an off-canvas menu free from heinous JavaScript has always been possible, but not ideal. […] https://dbushell.com/2026/02/12/declarative-dialog-menu-invoker-commands/ https://dbushell.com/2026/02/12/declarative-dialog-menu-invoker-commands/ Thu, 12 Feb 2026 15:00:00 GMT The off-canvas menu — aka the Hamburger, if you must — has been hot ever since Jobs invented the mobile web and Ethan Marcotte put a name to responsive design.

My journey

Making an off-canvas menu free from heinous JavaScript has always been possible, but not ideal. I wrote up one technique for Smashing Magazine in 2013. Later I explored <dialog> in an absurdly titled post where I used the new Popover API.

Current thoughts

I strongly push clients towards a simple, always visible, flex-box-wrapping list of links. Not least because leaving the subject unattended leads to a multi-level monstrosity.

I also believe that good design and content strategy should allow users to navigate and complete primary goals without touching the “main menu”. However, I concede that Hamburgers are now mainstream UI. Jason Bradberry makes a compelling case.

My new menu

This month I redesigned my website. Taking the menu off-canvas at all breakpoints was a painful decision. I’m still not at peace with it. I don’t like plain icons. To somewhat appease my anguish I added big bold “Menu” text.

The HTML for the button is pure declarative goodness.

<button type="button" commandfor="menu" command="show-modal">
  <span class="visually-hidden">open</span> Menu
</button>

Accessibility updates: I originally added the extra “open” for clarity. It was noted that prefixes can cause issues for voice control and that my addition is unnecessary anyway. I removed that from my live site. It was also noted there was no navigation landmark on the page. This can be solved by wrapping the <button> in a <nav> element, which I have now done. Thanks for the feedback!

Aside note: Ana Tudor asked do we still need all those “visually hidden” styles? I’m using them out of an abundance of caution but my feeling is that Ana is on to something.

The menu HTML is just as clean.

<dialog id="menu">
  <h2 class="hidden">Menu</h2>
  <button type="button" commandfor="menu" command="close">
    Close <span class="visually-hidden">menu</span>
  </button>
  <nav>
    <ul>
      <li><a href="/" aria-current="page">Home</a></li>
      <li><a href="/services/">Services</a></li>
      <li><a href="/about/">About</a></li>
      <li><a href="/blog/">Blog</a></li>
      <li><a href="/notes/">Notes</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>
  </nav>
</dialog>

It’s that simple! I’ve only removed my opinionated class names I use to draw the rest of the owl. I’ll explain more of my style choices later.

This technique uses the wonderful new Invoker Command API for interactivity. It is similar to the popover I mentioned earlier. With a real <dialog> we get free focus management and more, as Chris Coyier explains. I made a basic CodePen demo for the code above.

The JavaScript

So here’s the bad news. Invoker commands are so new they must be polyfilled for older browsers. Good news: you don’t need a hefty script. Feature detection isn’t strictly necessary.

const $menu = document.querySelector("#menu");
for (const $button of document.querySelectorAll('[commandfor="menu"]')) {
  $button.addEventListener("click", (ev) => {
    ev.preventDefault();
    if ($menu.open) $menu.close();
    else $menu.showModal();
  });
}

Keith Cirkel has a more extensive polyfill if you need full API coverage like JavaScript events. My basic version overrides the declarative API with the JavaScript API for one specific use case, and the behaviour remains the same.

WebKit focus, visible?

Let’s get into CSS by starting with my favourite:

:focus-visible {
  outline: 2px solid magenta;
  outline-offset: 2px;
}

A strong contrast outline around buttons and links with room to breathe. This is not typically visible for pointer events. For other interactions like keyboard navigation it’s visible.

The first button inside the dialog, i.e. “Close (menu)”, is naturally given focus by the browser (focus is ‘trapped’ inside the dialog). In most browsers focus remains invisible for pointer events. WebKit has a bug. When using showModal or invoker commands the focus-visible style is visible on the close button for pointer events. This seems wrong, it’s inconsistent, and clients absolutely rage at seeing “ugly” focus — seriously, what is their problem?!

I think I’ve found a reliable ‘fix’. Please do not copy this untested. From my limited testing with Apple devices and macOS VoiceOver I found no adverse effects. Below I’ve expanded the ‘not open’ condition within the event listener.

if ($menu.open) {
  $menu.close();
} else {
  $menu.showModal();
  if (ev.pointerId > 0) {
    const $active = document.activeElement;
    if ($active.matches(":focus-visible")) {
      $active.blur();
      $active.focus({ focusVisible: false });
    }
  }
}

First I confirm the event is relevant. I can’t check for an instance of PointerEvent because of the click handler. I’d have to listen for keyboard events and that gets murky. Then I check if the focused element has the visible style. If both conditions are true, I remove and reapply focus in a non-visible manner. The focusVisible option is supported in Safari from 18.4 onwards.

Like I said: extreme caution! But I believe this fixes WebKit’s inconsistency. Feedback is very welcome. I’ll update here if concerns are raised.

Click to dismiss

Native dialog elements allow us to press the ESC key to dismiss them. What about clicking the backdrop? We must opt-in to this behaviour with the closedby="any" attribute. Chris Ferdinandi has written about this and the JavaScript fallback.
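Opting in is a single attribute on the dialog (closedby also accepts closerequest and none; any enables light dismiss, and support is still new, hence the fallback):

```html
<dialog id="menu" closedby="any">
  <!-- clicking the backdrop now dismisses the dialog -->
</dialog>
```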

That’s enough JavaScript!

Fancy styles

My menu uses a combination of both basic CSS transitions and cross-document view transitions. For on-page transitions I use the setup below.

#menu {
  opacity: 0;
  transition:
    opacity 300ms,
    display 300ms allow-discrete,
    overlay 300ms allow-discrete;

  &[open] {
    opacity: 1;
  }
}

@starting-style {
  #menu[open] {
    opacity: 0;
  }
}

As an example here I fade opacity in and out. How you choose to use nesting selectors and the @starting-style rule is a matter of taste. I like my at-rules top level.

My menu also transitions out when a link is clicked. This does not trigger the closing dialog event. Instead the closing transition is mirrored by a cross-document view transition.

The example below handles the fade out for page transitions.

@view-transition {
  navigation: auto;
}

#menu {
  view-transition-name: --menu;
}

@keyframes --menu-old {
  from { opacity: 1; }
  to { opacity: 0; }
}

::view-transition-old(--menu) {
  animation: --menu-old 300ms ease-out forwards;
}

Note that I only transition the old view state for the closing menu. The new state is hidden (“off-canvas”). Technically it should be possible to use view transitions to achieve the on-page open and close effects too. I’ve personally found browsers to still be a little janky around view transitions — bugs, or skill issue?

It’s probably best to wrap a media query around transitions.

@media not (prefers-reduced-motion: reduce) {
  /* fancy pants transitions */
}

“Reduced” is a significant word. It does not mean “no motion”. That said, I have no idea how to assess what is adequately reduced! No motion is a safe bet… I think?

So there we have it! Declarative dialog menu with invoker commands, topped with a medley of CSS transitions and a sprinkle of almost optional JavaScript. Aren’t modern web standards wonderful, when they work?


I can’t end this topic without mentioning Jim Nielsen’s menu. I won’t spoil the fun, take a look! When I realised how it works, my first reaction was “is that allowed?!” It works remarkably well for Jim’s blog. I don’t recall seeing that idea in the wild elsewhere.


Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

]]>
RSS Club #005: Expedition 33
https://dbushell.com/2026/01/12/RSS005/
Mon, 12 Jan 2026 15:00:00 GMT

I didn’t intend for my RSS-only posts to be video game reviews but that’s what you’re getting again today! I had big plans for my holiday break but I spent it gaming.

If you subscribed for web dev stuff don’t worry these off-topic posts are rare :)

Warning: there will be spoilers.

Clair Obscur: Expedition 33 was a breath of fresh air.

I hope it spurs more variety in video game storytelling. The world-building and lore were fascinating, even if the plot got overly complicated towards the end.

Expedition 33 is not perfect. Early exploration was slow and unrewarding. Stray off the main path and everything is overpowered. Return again mid-game and nope; still too hard. Only late into Act 3 can most optional areas be tackled. When you are powerful enough it’s some of the best content in the game. I wish the game had more “hidden” bosses with unique mechanics.

The turn-based combat was so much fun to learn. The skills and upgrades allowed for a variety of options. Naturally, I refused to budge from the same lineup the entire game. By endgame I had an unstoppable combo with Lune. I never played Verso unless forced to. I don’t like forced party swaps, but that’s par for the course in RPGs.

Platforming controls were outright terrible. Thankfully, they’re rarely required. Most of the game is on rails with invisible walls so you can’t fall off. I can only think of a few tricky platforming jumps, all in optional areas. Why put that into the game at all when it’s so janky? That said, I did enjoy the Gestral Beach mini-game that makes fun of this fact.

I never bothered with an NG+ play-through of any game but I can see myself starting Expedition 33 again. (Although I probably won’t, too busy.)

This game has rekindled my interest in game development. I really want to publish small web-based game ideas this year.


Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

]]>