I found it notable that Matt Webb was able to just vibe code one up in a weekend: mist.
<ag-input> looked solid and fired events correctly, but it lacked Form-Associated Custom Element (FACE) support. This meant it was essentially invisible to native <form> submissions.
I was completely unaware of this until a conversation with my friend Marc van Neerven. We were discussing the nuances of Web Components and Shadow DOM when Marc pointed out the importance of form association.
Fueled by the mild embarrassment of having missed something so fundamental, I immediately started digging through articles like ElementInternals and Form-Associated Custom Elements to understand how FACE actually worked. Of course, I understood that native HTML form controls have built-in submission logic and FormData support, but I had absolutely no idea a FACE API even existed.
It’s a massive facepalm moment when you believe you’re “code complete” on a dozen different form components, only to realize they don’t support the most basic functionality of a form. If you wrap a naively built custom element in a <form> and hit submit, the browser will have no idea the component is even there. Try setting a breakpoint on your submit handler; you’ll see an empty FormData object staring back at you.

That empty object is what eventually reaches your server. Additionally, if you call form.reset(), your custom fields remain filled, and even a <fieldset disabled> wrapper gets completely ignored.
Fixing this meant retrofitting every single form component in AgnosticUI. It was a massive undertaking, but it forced me to distill the spec’s complexities into a single, reusable Lit mixin. This helped encapsulate the boilerplate in one place, keeping the code DRY and ensuring my components finally became form-aware.
The following is what I learned during that process.
Enabling FACE starts with a deceptive bit of boilerplate. You tell the browser your element wants to participate in forms, and then you grab a handle to the ElementInternals API.
class MyInput extends HTMLElement {
  static formAssociated = true; // The "I'm a form control" flag

  constructor() {
    super();
    // This gives you the keys to the kingdom
    this._internals = this.attachInternals();
  }
}

If only it ended there. While those two steps “engage” the API, the actual work happens through ElementInternals. This is your side of the contract with the browser’s form system. It isn’t just a single property; it’s a suite of methods and properties that let your component finally talk to the parent <form>.
Through _internals, you can:
- Call setFormValue() so your element actually shows up in FormData.
- Call setValidity() to participate in form.checkValidity() and trigger native browser validation UI.
- Use the .states property to toggle custom pseudo-classes like :state(checked), which is a lifesaver for styling.
- Read .form, .willValidate, or .validationMessage directly from the instance.

On the flip side, the browser expects you to handle specific lifecycle callbacks. It’ll call formResetCallback when the form clears, formDisabledCallback when a <fieldset disabled> ancestor changes, and formStateRestoreCallback when the browser tries to help the user autofill a form after a navigation.
It’s a lot of “stuff” to manage. As of early 2026, browser support is now broadly available (Chromium, Firefox, and Safari 16.4+), so we’re finally at a point where we can use this without reaching for a clunky polyfill.
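If you do need to guard against an older engine, a quick capability check works. This is just a sketch (the `supportsFACE` name is mine, not an AgnosticUI API):

```typescript
// Sketch: feature-detect ElementInternals/FACE support before registering
// components. In a non-browser environment (no HTMLElement), this reports false.
const supportsFACE: boolean =
  typeof HTMLElement !== "undefined" &&
  "attachInternals" in HTMLElement.prototype;

if (!supportsFACE) {
  // Load a polyfill here, or degrade to light-DOM form fields.
}
```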
The first decision when rolling FACE out across a dozen components is where to put the shared code. The boilerplate is identical every time: you need the static flag, the attachInternals() call, and about six different getters to proxy the internal state.
In an effort to keep things DRY, a base class like AgFormControl extends LitElement seems like the obvious choice. But JavaScript only allows single inheritance, so if a component already needs to extend something else, you’re stuck.
The solution was a Lit Mixin. It allows us to “plug in” form capabilities to any component while keeping the code DRY. To keep TypeScript happy with protected members, we use a companion declare class: this acts as a “blueprint” that tells the compiler exactly what the mixin is adding to the class.
// 1. The "Blueprint" for TypeScript
export declare class FaceMixinInterface {
  static readonly formAssociated: boolean;
  protected _internals: ElementInternals;
  name: string;
  readonly form: HTMLFormElement | null;
  readonly validity: ValidityState;
  readonly validationMessage: string;
  readonly willValidate: boolean;
  checkValidity(): boolean;
  reportValidity(): boolean;
  formDisabledCallback(disabled: boolean): void;
  formResetCallback(): void;
}
type Constructor<T = {}> = new (...args: any[]) => T;

// 2. The Actual Mixin
export const FaceMixin = <T extends Constructor<LitElement>>(superClass: T) => {
  class FaceElement extends superClass {
    static readonly formAssociated = true;
    protected _internals: ElementInternals;

    @property({ type: String, reflect: true }) name = "";

    constructor(...args: any[]) {
      super(...args);
      this._internals = this.attachInternals();
    }

    get form() {
      return this._internals.form;
    }
    get validity() {
      return this._internals.validity;
    }
    get validationMessage() {
      return this._internals.validationMessage;
    }
    get willValidate() {
      return this._internals.willValidate;
    }
    checkValidity() {
      return this._internals.checkValidity();
    }
    reportValidity() {
      return this._internals.reportValidity();
    }
    formDisabledCallback(disabled: boolean) {
      (this as any).disabled = disabled;
    }
    formResetCallback() {
      /* Subclasses override this */
    }
  }

  // This cast merges the blueprint with the original class
  return FaceElement as unknown as Constructor<FaceMixinInterface> & T;
};
Using it is a one-liner:
export class AgInput extends FaceMixin(LitElement) { ... }
The mixin owns the infrastructure, but the component owns the semantics. Here’s how I split the responsibilities:
- The mixin: the formAssociated flag, attachInternals, the name property, and all proxy getters (like validity and validationMessage).
- The component: when to call setFormValue(), what actual value to submit, and the specific logic for formResetCallback().

Each component knows what “value” means for itself. The mixin just provides the megaphone to tell the browser about it.
One of the more instructive things the rollout revealed is that constraint validation splits cleanly into two strategies. You need to understand and use both!

If a component renders a native <input>, <textarea>, or <select> in its Shadow DOM, don’t reinvent the wheel. Just delegate to it. That inner element already knows how to run the browser’s full constraint validation engine, giving you required, minlength, and pattern for free. To keep things DRY across AgInput, AgSelect, and AgCheckbox, we use a single utility helper, syncInnerInputValidity, to bridge that internal state to our host element.
export function syncInnerInputValidity(
  internals: ElementInternals,
  inputEl:
    | HTMLInputElement
    | HTMLTextAreaElement
    | HTMLSelectElement
    | null
    | undefined,
): void {
  if (!inputEl) return;
  if (!inputEl.validity.valid) {
    // We pass the inputEl as the "anchor" so the browser
    // knows where to point the validation bubble.
    internals.setValidity(inputEl.validity, inputEl.validationMessage, inputEl);
  } else {
    internals.setValidity({});
  }
}
The third argument to setValidity is the validation target. It tells the browser where to render the native validation bubble. By passing the inner <input>, the tooltip appears in the correct place (rather than floating awkwardly over the custom element’s host boundary).
AgnosticUI components that use this pattern: AgInput, AgCheckbox, AgSelect.
The AgRadio Caveat: Even though AgRadio renders an inner <input type="radio">, delegation isn’t enough. Shadow DOM isolation actually breaks the native required constraint for radio groups. I’ll explain how to handle that specifically in the AgRadio section below.
If a component uses a custom widget like AgToggle (which uses a <button role="switch">), there’s no native input to delegate to. In these cases, we have to implement _syncValidity() directly against the component’s reactive state.
While we use the required attribute in our component’s API, the browser’s internal validation engine tracks this failure as valueMissing, so we use that in the call to setValidity.
// AgToggle example
private _syncValidity(): void {
  if (this.required && !this.checked) {
    // Note: In a production library, 'validationMessage' should be a
    // localized property rather than a hard-coded string.
    this._internals.setValidity({ valueMissing: true }, this.validationMessage);
  } else {
    this._internals.setValidity({});
  }
}
For a required switch or checkbox, valueMissing is typically the only constraint that applies. A more complex custom component such as a range slider might, for example, account for flags like rangeUnderflow or stepMismatch.
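To make that concrete, here’s a hedged sketch of how such a hypothetical slider might compute the flag object it passes to setValidity(). The `sliderValidityFlags` helper and `SliderConstraints` shape are my inventions, not AgnosticUI code:

```typescript
// Sketch: compute ValidityStateFlags for a hypothetical required range slider.
// The returned object is what you'd pass as the first argument to
// ElementInternals.setValidity().
interface SliderConstraints {
  min: number;
  max: number;
  step: number;
  required: boolean;
}

type Flags = {
  valueMissing?: boolean;
  rangeUnderflow?: boolean;
  rangeOverflow?: boolean;
  stepMismatch?: boolean;
};

function sliderValidityFlags(value: number | null, c: SliderConstraints): Flags {
  if (value === null) return c.required ? { valueMissing: true } : {};
  if (value < c.min) return { rangeUnderflow: true };
  if (value > c.max) return { rangeOverflow: true };
  // Values that don't land on the step grid (relative to min) mismatch.
  if ((value - c.min) % c.step !== 0) return { stepMismatch: true };
  return {}; // an empty object means "valid"
}
```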
The rule: If the component renders an inner native form control (like an input, textarea, or select), delegate. If it’s a custom-built widget like AgToggle, we own the validation logic.
The fastest way to confirm your FACE implementation is wired up correctly is to run a few manual “smoke tests” directly in the browser.
The ultimate proof is in the FormData object itself. Hook into a form’s submit event and log the entries:
form.addEventListener("submit", (e) => {
  e.preventDefault();
  const data = Object.fromEntries(new FormData(e.target).entries());
  console.log(data);
});
If your component’s name and value are missing from that object, one of three things happened: setFormValue() wasn’t called, formAssociated is missing, or the component doesn’t have a name attribute set in the DOM.
Select your component in the Elements panel so it becomes $0 in the console, then run these checks:
$0.form; // Should return the parent <form>, not undefined
$0.willValidate; // Should return true
$0.validity.valid; // Should reflect the current validation state
If $0.form returns null, the element isn’t form-associated. This usually means static formAssociated = true is missing or attachInternals() wasn’t called in the constructor.
Finally, check if the form itself “sees” your component as one of its controls:
Array.from(document.querySelector("form").elements).map((el) => el.tagName);
Your custom elements should appear in this list alongside native inputs.
AgInput established the pattern for the rest of the library. It’s a textbook example of Strategy 1: Delegation.
Value submission: We call _internals.setFormValue(this.value) in three places: the input handler (every keystroke), the change handler (on commit), and during firstUpdated, Lit’s lifecycle hook that runs after the component’s first render. Syncing on firstUpdated is critical, as without it, the form doesn’t know the initial value until the user clicks into the field.
Validation (The Delegation Path): Because AgInput renders a native <input>, we don’t need to write custom logic for required or minlength. We simply point ElementInternals at the inner element’s state.
Wait, what if I don’t have a native input? If you were building a custom slider or a star-rating component (Strategy 2), you wouldn’t “sync” from an inner element. Instead, you would manually call this._internals.setValidity({ valueMissing: true }, "Message") inside your own property setters (like set value()).
Accessible error messages: The error container in AgInput uses role="alert" and aria-atomic="true". The container is always in the DOM. We only swap out its text content when an error occurs. This matters because screen readers register the alert region on page load. If you show and hide the whole element instead, screen reader announcements become unreliable.
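A sketch of that markup strategy, simplified to a plain string template (the real component renders with Lit; `renderErrorRegion` is a hypothetical helper name):

```typescript
// Sketch: the alert container is ALWAYS rendered; only its text changes.
// Screen readers register the live region once, at load, so toggling the
// element itself in and out of the DOM would make announcements unreliable.
function renderErrorRegion(message: string): string {
  // Empty message -> empty container, but the region itself stays put.
  return `<div role="alert" aria-atomic="true">${message}</div>`;
}

renderErrorRegion("");         // container present, no text
renderErrorRegion("Required"); // same container, new text
```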
AgToggle differs from text inputs in two ways.
Null form value: A native checkbox that is unchecked is simply absent from FormData, not an empty string. Passing null to setFormValue replicates this:
this._internals.setFormValue(this.checked ? this.value || "on" : null);
The 'on' default matches native checkbox behavior when no value attribute is set and the checkbox is checked. For any server processing form submissions, a missing key and an empty-string key are handled differently.
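That mapping boils down to a tiny pure function. This is my own `checkboxFormValue` sketch of the behavior, not the component’s actual method:

```typescript
// Sketch: what a checkbox-like component submits for each state.
// null -> the key is omitted from FormData entirely (matches native behavior)
// "on" -> the default when checked with no explicit value attribute
function checkboxFormValue(checked: boolean, value = ""): string | null {
  return checked ? (value || "on") : null;
}
```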
Direct validity: Only required applies. There is no inner <input> to delegate to, so we implement validity directly against this.checked (called in _performToggle() on every state change).
The value property default: The component uses this.value || 'on' so that FormData always produces 'on' when no explicit value is configured. The property itself defaults to ''. This keeps form submission behavior correct while the property API stays clean.
AgCheckbox is perhaps the most instructive component in the rollout because it highlights exactly why we need FACE even when we are using native inputs encapsulated within a Shadow DOM.
Shadow DOM inputs are invisible to parent forms. Here’s the deal: an <input> rendered inside a Shadow Root is isolated from the parent document, so even with a name and a value, it will never appear in the parent form’s FormData. This isolation is why FACE isn’t an optional feature; it’s a requirement for any UI library or design system using Shadow DOM.
Delegation still works. Despite the isolation, that inner checkbox still has a native .validity object. We can still use Strategy 1 by mirroring those properties to the host:
private _syncValidity(): void {
  // Our utility helper doesn't care if it's an input, textarea, select, or checkbox
  syncInnerInputValidity(this._internals, this.inputRef);
}
Syncing programmatic changes. Unlike AgInput, where a user usually types to change the value, a checkbox is often toggled programmatically. Think of “Select All” buttons or state-driven resets. To make the component reliable, we have to handle both cases: the user’s manual click and the developer’s code. If we don’t sync on both paths, the form data won’t match what the user sees on the screen.
// 1. User interaction path
handleChange(e: Event) {
  this.checked = (e.target as HTMLInputElement).checked;
  this._updateFormValue();
}

// 2. Programmatic path
override updated(changedProperties: PropertyValues) {
  super.updated(changedProperties);
  if (changedProperties.has('checked')) {
    this._updateFormValue();
  }
}

private _updateFormValue() {
  this._internals.setFormValue(this.checked ? (this.value || 'on') : null);
  this._syncValidity();
}
It’s important to note that AgSelect is a direct wrapper around the native <select> element. Unlike “custom” dropdowns that use divs and ARIA lists, AgSelect uses the platform’s native control. This allows us to leverage native properties that would be difficult to track manually.
Handling multiple values: setFormValue() has three overloads. While a string works for most components, multiple select requires the FormData overload. By passing a FormData object to setFormValue, you’re providing a list of entries that the browser will automatically “spread” into the parent form’s master collection at submission time.
private _syncFormValue(): void {
  if (!this.selectElement) return;
  if (this.multiple) {
    const formData = new FormData();
    Array.from(this.selectElement.selectedOptions).forEach(opt => {
      // The browser merges these entries into the parent form's data
      formData.append(this.name, opt.value);
    });
    this._internals.setFormValue(formData);
  } else {
    this._internals.setFormValue(this.selectElement.value || '');
  }
}
Resetting to the original selection: Native elements have a built-in memory of their initial state. option.defaultSelected reflects the selected attribute as it was originally parsed from HTML. It’s the perfect source of truth for our formResetCallback:
override formResetCallback(): void {
  if (this.selectElement) {
    Array.from(this.selectElement.options)
      .forEach(opt => (opt.selected = opt.defaultSelected));
  }
  this._syncFormValue();
  this._internals.setValidity({});
}
This ensures that hitting “Reset” restores the form to its original HTML state, matching the exact behavior users expect.
Radio groups require coordination: when one is selected, others must deselect. While native <input type="radio"> handles this automatically, elements isolated in separate Shadow DOM trees are “blind” to their siblings. This breaks everything from value syncing to required validation.
We don’t need a complex messaging system. When an AgRadio is checked, it finds other <ag-radio> instances with the same name and sets instance.checked = false.
Crucially, this isn’t “magic.” Because checked is a Lit @property, this manual assignment triggers the updated() lifecycle on every radio in the group. We then tap into that lifecycle to run our glue code, explicitly calling setFormValue() and _syncValidity() to push the new state into the ElementInternals engine:
override updated(changedProperties: PropertyValues) {
  super.updated(changedProperties);
  // This is the "glue": Lit tells us something changed,
  // and we manually inform the browser's form engine.
  if (changedProperties.has('checked')) {
    this._internals.setFormValue(this.checked ? this.value : null);
    this._syncValidity();
  }
}
required and Shadow Isolation

This is the biggest “gotcha.” Normally, a browser knows a required radio group is valid if any radio is checked. But because our inner inputs are isolated in separate Shadow Roots, the browser can’t see the group. Each unchecked radio will incorrectly report valueMissing: true.
To understand why we need this.getRootNode(), we have to look at where our <ag-radio> tags are actually being placed. It isn’t about the framework’s internal engine; it’s about whether the tags are sitting in the global document or inside a private “neighborhood”:
- Light DOM: the consumer drops <ag-radio> tags directly into the main page. Here, document.querySelectorAll works fine because everything is “on the main street.”
- Shadow DOM: the consumer wraps the group in their own component with a Shadow Root, so any ag-radio you place inside it is hidden from the outside world. Even in Svelte or Solid, the Web Components Shadow Root acts as a barrier that the global document cannot pierce.

getRootNode()

We have to manually verify the group state. Instead of asking the global document, we ask the element: “What is the root of the neighborhood I live in?” We use this.getRootNode() to find that root.
- In plain HTML, that root is the document.
- Inside a wrapper component, that root is the ShadowRoot.

By querying that local root, we find our siblings regardless of how many layers of nesting are involved.
private _isGroupChecked(): boolean {
  if (this.checked) return true;
  // Find our "neighborhood" (either the Document or a ShadowRoot)
  const root = this.getRootNode() as Document | ShadowRoot;
  // Now we can find all radios sharing our scope and `name`
  return Array.from(root.querySelectorAll(`ag-radio[name="${this.name}"]`))
    .some((el) => (el as AgRadio).checked);
}

private _syncValidity(): void {
  if (!this.required) return this._internals.setValidity({});
  if (this._isGroupChecked()) {
    this._internals.setValidity({});
  } else {
    this._internals.setValidity({ valueMissing: true }, this.validationMessage);
  }
}
If you set radio.checked = false on a sibling that was already false, Lit’s updated() won’t fire. But that sibling still needs to re-run _syncValidity() because the group state just changed. We have to force the sync manually:
allRadios.forEach((radio) => {
  if (radio !== this && radio instanceof AgRadio) {
    radio.checked = false;
    // Force a re-sync because Lit won't trigger updated()
    // if the value was already false.
    radio._syncValidity();
  }
});
AgSlider already had a partial, hand-rolled FACE infrastructure. It manually declared static formAssociated, called attachInternals(), and featured a custom _updateFormValue() method alongside six different getters for form and validity.
Because it didn’t use our FaceMixin, it was missing critical browser integrations: formDisabledCallback (for <fieldset> propagation) and formResetCallback (for form.reset() support). It also failed to set its initial form value on boot.
The refactor resulted in removing the manual _internals field, the constructor-based attachInternals(), and all six hand-rolled getters. FaceMixin now provides all of that out of the box.
We then added the “missing links” to handle initial state and resets:
override firstUpdated() {
  // Capture the initial state provided by the consumer
  this._defaultValue = Array.isArray(this.value)
    ? ([...this.value] as [number, number])
    : this.value;
  this._updateFormValue();
}

override formResetCallback(): void {
  // Restore the captured default value
  this.value = Array.isArray(this._defaultValue)
    ? ([...this._defaultValue] as [number, number])
    : this._defaultValue;
  this._updateFormValue();
}
Why firstUpdated?

We capture this._defaultValue here because it’s the first moment we can be sure the component has processed its initial properties. By shallow-copying the array for dual mode, we ensure that future movements of the slider don’t accidentally mutate our “save point” for the form reset.
The existing _updateFormValue() utilized a sophisticated ElementInternals feature: the FormData overload. In dual-slider mode, we need to submit both a min and max value under a single name key.
// A peek at the existing logic inside _updateFormValue
const data = new FormData();
data.append(this.name, String(this.value[0]));
data.append(this.name, String(this.value[1]));
this._internals.setFormValue(data);
AgRating uses a custom role="slider" div: there is no inner <input> at all. Like AgToggle, this means we must implement _syncValidity() directly on the host element. A rating of 0 is treated as the unselected state. We explicitly map this to the browser’s valueMissing state so that required validation works as expected.
private _syncValidity(): void {
  if (this.required && this.value === 0) {
    this._internals.setValidity({ valueMissing: true }, this.validationMessage);
  } else {
    this._internals.setValidity({});
  }
}
Note: Why treat 0 as null? In a 5-star system, 0 usually means “unselected” rather than a score of zero. By submitting null for a 0 value, we ensure the field is omitted from the form payload. This allows the server to distinguish between an intentional score and a skipped field.
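Sketched as a pure function (the `ratingFormValue` name is mine, not the component’s actual method):

```typescript
// Sketch: map a 0 rating to null so the field is omitted from FormData,
// letting the server distinguish "skipped" from an intentional score.
function ratingFormValue(value: number): string | null {
  return value === 0 ? null : String(value);
}
```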
In AgRating, all user interactions flow through the commitValue() method. This includes clicks, pointer events, and keyboard interactions. By wiring the FACE synchronization here, we ensure that every manual change is immediately reflected in the form state.
We also include a synchronization call in the updated() lifecycle to handle programmatic changes. These two points of contact keep the internal form state and the visual UI in lock-step without the need for complex event listeners.
// Inside AgRating
commitValue(val: number) {
  this.value = val;
  this._updateFormValue(); // Syncs to ElementInternals
}

override updated(changedProperties: PropertyValues) {
  super.updated(changedProperties);
  if (changedProperties.has('value')) {
    this._updateFormValue(); // Syncs programmatic changes
    this._syncValidity();
  }
}
Selection groups are composite widgets: they consist of individual buttons or cards inside a coordinating group element. The group is the brain, not the items. The group element manages the name, the type (radio vs. checkbox), and the full set of selected values.
This follows the same model as the native <select> element. The options are not form-associated; the select is. Both groups use a type property to determine form value semantics:
private _syncFormValue(): void {
  const selected = this._getSelectedValues();
  if (this.type === 'radio') {
    // Single value or null
    this._internals.setFormValue(selected.length > 0 ? selected[0] : null);
  } else {
    // Multiple values via FormData overload
    if (selected.length === 0) {
      this._internals.setFormValue(null);
    } else {
      const formData = new FormData();
      selected.forEach(val => formData.append(this.name, val));
      // The browser merges these entries into the parent form's data
      this._internals.setFormValue(formData);
    }
  }
}
The formResetCallback (not shown) handles the cleanup. It clears internal values, resets the form value to null, and triggers _syncValidity(). This ensures a required group correctly reports as invalid after a reset while updating child elements so the UI reflects the cleared state immediately.
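Since the source doesn’t show it, here’s a hedged sketch of what that reset path might look like. `GroupInternals` and `GroupSketch` are mocks of my own invention (the real code talks to ElementInternals and may submit a FormData object for multi-select):

```typescript
// Mock sketch of a selection-group reset: clear internal values, null the
// form value, and re-run validity so a required group reports invalid again.
interface GroupInternals {
  formValue: string | null;
  valid: boolean;
  setFormValue(v: string | null): void;
  setValidity(flags: { valueMissing?: boolean }): void;
}

class GroupSketch {
  selected: string[] = ["a", "b"];
  required = true;
  private internals: GroupInternals;

  constructor(internals: GroupInternals) {
    this.internals = internals;
  }

  formResetCallback() {
    this.selected = [];                // clear internal values
    this.internals.setFormValue(null); // nothing to submit
    this._syncValidity();              // required group -> invalid again
  }

  private _syncValidity() {
    const missing = this.required && this.selected.length === 0;
    this.internals.setValidity(missing ? { valueMissing: true } : {});
  }
}
```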
AgCombobox can appear complex because it manages a text input, a filtered dropdown, and a multi-tag UI. However, the form value logic is remarkably stable: only a committed selection counts as the value. While a user types, the _searchTerm state updates to filter the list, but this.value remains untouched until an option is explicitly selected.
To bridge this with ElementInternals, we synchronized the state across the two primary interaction paths:
// Picking an item from the list
selectOption(optionOrValue: ComboboxOption | string) {
  // ... logic to update _selectedOptions
  this._selectionChanged(); // This updates this.value
  // FACE: sync form value and validity after selection
  this._syncFormValue();
  this._syncValidity();
}

// Clearing the selection
clearSelection() {
  this._selectedOptions = [];
  this._selectionChanged();
  // FACE: sync form value and validity on clear
  this._syncFormValue();
  this._syncValidity();
}
Like our other complex components, we use the updated() lifecycle as a safety net for programmatic changes. If a developer sets something like combobox.value = 'CSS' via JavaScript, the component detects the property change and triggers the synchronization logic.
override updated(changedProperties: Map<string, unknown>) {
  super.updated(changedProperties);
  if (changedProperties.has('value')) {
    this._syncFormValue();
    this._syncValidity();
  }
}
The formResetCallback ensures the component returns to a clean state when a form is cleared. It nulls the internal selection, clears the form value, and resets the validity state so that a required combobox doesn’t stay in an “invalid” state after the user clicks reset.
ElementInternals

If there is one elephant in the room after this migration, it is this: implementing FACE is a significant undertaking. It is a necessary evil for anyone building a robust web component system. While it provides the magic of native form integration, it requires manual wiring for every state: validation, disabled states, and value syncing. These are features that we often take for granted in framework-specific components.
- formAssociated = true is just an invitation. Setting this property only “opens the door.” Values do not appear in FormData until you call setFormValue(). Validation does not work until you call setValidity(). Nothing happens automatically.
- An <input> inside a shadow root is invisible to an ancestor <form>. Using setFormValue() on the host element is the only way to create that connection.
- Pressing Enter in an inner input won’t implicitly submit the outer form; components can call this.closest('form').requestSubmit() to bridge that gap.
- formDisabledCallback only fires when an ancestor, such as a <fieldset>, is disabled. It does not fire when the element’s own disabled attribute is toggled. You must manage both paths to ensure they do not overwrite each other.

These strategic lessons are universal rules for all FACE components. They apply whether you are delegating to a native input (Strategy 1) or managing state directly (Strategy 2).
- The firstUpdated sync is non-negotiable. Every component must call setFormValue() in firstUpdated(). Without this, a pre-filled form where the value is set via a property will not register its data until a user interacts with it.
- Catch programmatic changes in updated(). While event handlers cover user input, the updated() lifecycle covers everything else. This includes test code, parent components, and controlled modes. In Strategy 1, this ensures property changes reach the inner native element. In Strategy 2, it keeps ElementInternals in sync.
- Passing null to setFormValue() ensures the key is absent from the form payload. Passing an empty string '' keeps the key present. Matching native checkbox behavior is critical for backend compatibility.

A few items remain on the roadmap (formStateRestoreCallback, cleaner disabled-state separation, and runtime validation injection), but the core contract is fulfilled.
The irony isn’t lost on me. I spent months building form components and missed the most fundamental thing a form component needs to do: participate in a form.
FACE humbled me. I walked away thinking: “Gee, that’s a LOT of code to manage…I hope I didn’t make a mistake”. But, I suppose it also saved me, because now every ag-* form control properly submits its value, respects resets, and actually listens to its parent fieldset. No consumer workarounds, no hacks, no prayers required. I’m still trying to figure out how “I feel” about all this, but hey, sometimes finishing the thing is what’s important.
The .setHTML() method in JavaScript, part of the Sanitizer API, can be a one-to-one replacement for .innerHTML, making sites more secure from XSS attacks. I think that’s pitch-perfect feature branding from Mozilla on this: Goodbye innerHTML, Hello setHTML: Stronger XSS Protection in Firefox 148.
Listen to Frederik Braun go deep into this on a recent ShopTalk episode, and check out a bonus blog post where he shows the recipe to make only setHTML work, “essentially removing all DOM-XSS risks”.
There’s a <geolocation> element in HTML now. Looks like Chrome led the effort and got it shipped first. Now we’re in the ol’ 🤷♀️ state on when we’ll get it elsewhere. But the process certainly involved other browser makers, so that’s good.
Manuel Matuzović has a good intro blog post.
The element doesn’t behave right within an <iframe> so embedding a demo here doesn’t make sense. You can see a small demo here, and the code is here.
Here’s what I think you should know:
- It renders as a <button> with an enforced design. It’s got a map icon and text that says “Use location” (or “Use precise location” if you use accuracymode="precise").
- When the user clicks it and grants permission, it fires a location event.
- As a fallback, you can put your own <button> inside with event handlers that go through a flow where you aren’t 100% sure if you have granted permissions. Or polyfill it.

It’s that last one we can dig into a little here, as I find it quite interesting. I’m not sure if we’ve had an element in HTML that behaves quite like this before.
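Before deciding whether to polyfill, you can feature-detect it. A sketch (in browsers, unknown tags construct as HTMLUnknownElement, while implemented elements get a real interface; the function name is mine):

```typescript
// Sketch: feature-detect the native <geolocation> element. Unknown tags
// construct as HTMLUnknownElement, so a real implementation yields a
// different class. Outside a browser (no document), report false.
function isGeolocationElementSupported(): boolean {
  if (typeof document === "undefined") return false;
  return !(document.createElement("geolocation") instanceof HTMLUnknownElement);
}
```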
I don’t think accessibility itself is actually the motivation behind these rules. It’s actually about security and the danger of “tricking” people into exposing their geolocation when they may not want to. For example: “Want 100 free Robux? Click here, then click Allow on the next pop-up.” It can’t totally stop that, but it can try.
The enforced-accessibility behavior comes in several forms:
As far as I can tell, anyway! The content is in a user-agent Shadow Root and there aren’t any part attributes or anything for styling access.

The normal things that penetrate the Shadow DOM still will, though. For example, color still sets the text color, and the SVG icon is set to fill: currentColor; so the icon color will change along with the text.
But!
You do get automatic localization, which is really nice. Whatever the lang attribute is set to in that area of the DOM, you’ll get text in the correct language.
Some CSS you try to apply to a <geolocation> button will simply be ignored.
geolocation {
  /* NOPE */
  translate: 100px 100px;
  transform: scale(0);
  opacity: 0.75;
  filter: opacity(0);
  inline-size: 2px;
  clip-path: inset(50%);
}
I don’t know if that’s comprehensive, but the point is, if you try to write some CSS for this button and it doesn’t work, it’s probably on purpose. There is some conflicting information, like the Chrome post says 2D translates are allowed, but in practice, they are not.
These are pretty strange!
geolocation {
  /* Actually capped at between -0.65px and 2.6px */
  letter-spacing: 10em;

  /* Actually capped at between 0 and 6.5px */
  word-spacing: 10em;

  /* Allowed, but the minimum is content size */
  block-size: 1px;
  height: 1px;

  /* Allowed, but the minimum is content size */
  inline-size: 1px;
  width: 1px;
}
Perhaps the strangest one is font-size in that it’s capped but also has functionality limits.
geolocation {
  /* Allowed */
  font-size: 50px;

  /* Forces minimum size of 8px and stops working */
  font-size: 1px;

  /* Allowed, but stops working. */
  font-size: 12px;

  /* Minimum size to work */
  font-size: 13px;
}
I also note that font-size: 1px; actually does render when the button is in an <iframe>, so uhhhh, whatever you wanna make of that.
Like font-size above, there is other CSS you can apply that is allowed (renders) but then makes the button just not work. By not work, I specifically mean it will not trigger location events.
geolocation {
background: white;
color: white;
}
That succeeds in hiding the button from view (on a white page), but if you find and click it, it won’t work.
This is quite easy to happen! For instance:
geolocation {
/* Failure state */
color: orange;
}
The color orange (with a white background) is not enough contrast to be acceptable.
This does trigger an “issue” in Chrome DevTools. It’s not a JavaScript error so you won’t see it in the console, it comes up in the Issues area.

It’s quite weird how there is all this CSS that it’s happy to forcibly rein in for you, while other CSS is allowed through and disables the button. Or, I should say, “just makes it not work”, because the button does not present itself in the accessibility tree as disabled.
I feel like an idiot, because I’m very guilty of telling people that one of the amazing benefits of Anchor Positioning in CSS is that you can position elements relative to other elements regardless of where they are in the DOM. It’s that italic, regardless, that’s the problem.
No, Chris, you can’t. Sorry about that. There are a bunch of limitations which can feel quite inscrutable at first. New types of problems that, to my knowledge, haven’t existed quite like this in CSS before.
Here’s a little rant about it:
I’m trying to be dramatic there on purpose because I really do think that the CSS powers that be should do something about this. I’m sure there are reasons why it behaves the way it does now, and I’ll bet a dollar that speed is a part of it. But it’s way too footgunny (as in: easy to do the wrong thing) right now. I gotta imagine the anchor-resolving part of the grand CSS machine could do a “second pass” or the like to find the anchor.
If you’re logged into CodePen, open this demo and move the DOM positions as I did in the video to see it happen.
It’s a smidge convoluted that I’d move the tooltip before the anchor, I suppose. You can just: not do that. But it’s symbolic that you can’t just do whatever you want with DOM placement of these things and expect it to work.
Temani Afif has a good article about all this, which has a strong callout that I’ll echo:
The anchor element must be fully laid out before the element that is anchored to it.
So you have to be thinking about the position value quite a bit. There is almost a 100% chance that the element you’re trying to position to an anchor is position: absolute;. It’s the anchor itself that’s more concerning. If they are siblings, and the anchor has any position value other than the default static, the anchor has to come first. If they are in other positions in the DOM, like the anchor is a parent or totally elsewhere, you need to ensure they are in the same “containing block” or that the anchor parent still has that static positioning. Again, Temani has a deeper dive into this that explains it well.
James Stuckey Weber also has an article on this. His callout is a bit more specific:
For the best chance of having anchor positioning work, here’s my recommendation:
- Make the anchor and the positioned element siblings.
- Put the anchor first in the DOM.
Part of me likes that simplified advice as it’s understandable and teachable.
A bigger part of me hates that. This is a weird new problem that CSS has given us. We haven’t had to root out problems like this in CSS before and I don’t exactly welcome a new class of troubleshooting. So again: I think CSS should fix this going forward if they can.
If you’re further confused by how to position things even if you have the anchor working…
align and justify stuff inside of it. But what area does this “cell” occupy? It’s called the Inset-Modified Containing Block (IMCB) and Bramus has a good explanation. The inset part basically means shrinking it by pushing against the cell walls. Also, IMCB is so weirdly close to ICBM, but I guess they can both blow you up.
Speaking of old Gmail, this is a crazy fact about its launch.
So I did what any self-respecting developer does: opened up the console and indexed a lookup table with Math.random():
const options = [
"Grind LeetCode. Hate life. Land FAANG.",
"Hard pivot to PM or Design.",
"Quit. Live off the land.",
];
const nextMove = options[Math.floor(Math.random() * options.length)];
There was just one problem with that.
!options.includes(correctAnswer)
I came up with a better move for myself: actually finish what I’d already started. So I dusted off AgnosticUI, a project I’d started in 2020 that needed a modern update.
The first version of AgnosticUI had solved a real problem: branding consistency across React, Vue, Svelte, and Angular. But keeping JSX, Vue and Svelte SFC components, and Angular’s ViewEncapsulation in sync with a single CSS source of truth was an absolute maintenance nightmare. I almost archived the repo, but I had unfinished business.
Some DM exchanges with Cory LaViska (the creator of Shoelace) nudged me toward Web Components as the right primitive for the job.
The Plan: A full rewrite using Lit with the following non-negotiables:
The work itself had to be the point.
One concern that used to follow Web Components was framework compatibility. The website custom-elements-everywhere.com tracks this, and as of 2026, scores are high across the board. While React 19 now gets a perfect score, I feel that @lit/react still improves DX significantly. More on that shortly.
As a Lit web components noob, the central question surfaced fast: encapsulation is great, but how do consumers customize anything?
The answer is ::part.
Encapsulation is a feature. The Shadow DOM boundary keeps your components visually consistent regardless of the CSS on the host page, and for a design system, that’s the whole game. But consumers still need styling hooks for the essentials: colors, padding, and border radii.
CSS custom properties get you partway there. You expose --ag-* tokens, and consumers override them. But custom properties only work where you’ve anticipated them. For anything else, the shadow DOM is a wall.
::part punches a deliberate hole in that wall:
<!-- AgInput exposes named parts -->
<input
part="ag-input"
...
/>
<label
part="ag-input-label"
...
/>
A consumer can now target those parts directly from outside the shadow DOM:
ag-input::part(ag-input) {
border-radius: 999px;
border-color: hotpink;
}
No leaking internals. No !important wars. Clean styling hooks, nothing more.
Here’s a minimal working example showing both the token override and ::part approaches together:
<ag-input label="Email" placeholder="[email protected]"></ag-input>
<style>
/* Custom properties: broad-stroke theming via exposed --ag-* tokens.
Overriding these affects the entire system — any component consuming
--ag-space-2 will reflect the change, not just this one. */
:root {
--ag-space-2: 0.5rem;
--ag-space-3: 0.75rem;
--ag-border-subtle: #cbd5e1;
--ag-text-primary: #0f172a;
--ag-background-primary: #ffffff;
--ag-font-size-sm: 0.875rem;
}
/* ::part: surgical overrides for what tokens can't reach.
Targets named parts the component explicitly exposes —
everything else in the shadow DOM remains untouchable. */
ag-input::part(ag-input) {
border-radius: 999px;
border-color: hotpink;
}
ag-input::part(ag-input-label) {
font-weight: 700;
color: hotpink;
}
</style>
Notice the label in that example? It lives inside the shadow root, not in the light DOM where you’d expect it.
In standard HTML, a <label for="some-id"> connects to an <input id="some-id"> across the document. Shadow DOM breaks that contract. The for/id association doesn’t cross the boundary.
The workaround: own the entire form control inside the shadow DOM. Label, input, helper text, error message; all of it lives together, wired up with internally-generated IDs:
<!-- Both label and input share IDs generated at component instantiation -->
<label
id="${this._ids.labelId}"
for="${this._ids.inputId}"
part="ag-input-label"
>
${this.label}
</label>
<input
id="${this._ids.inputId}"
aria-describedby="${this._getAriaDescribedBy()}"
...
/>
Consumers can’t relocate the label, but part="ag-input-label" means they can restyle it.
The shadow DOM a11y trade-offs were covered above. But there’s an additional, thornier problem: native form participation.
The line static formAssociated = true sounds like a declaration of intent, but it’s just an opt-in signal to the browser. The actual work requires attachInternals(), and then you’re on the hook for reimplementing behaviors the browser gives native inputs for free: required, disabled, validation state, form reset, value submission.
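To make the shape of that work concrete, here’s a minimal sketch of a form-associated element. The element name and everything about it are hypothetical (this is not AgInput’s actual code), and a tiny stub stands in for HTMLElement/ElementInternals so the snippet can be exercised outside a browser:

```javascript
// Minimal FACE sketch. A stub stands in for the browser's
// HTMLElement/ElementInternals so the idea can run outside a DOM.
const Base = globalThis.HTMLElement ?? class {
  attachInternals() {
    return {
      lastSetValue: null,
      setFormValue(value) { this.lastSetValue = value; },
    };
  }
};

class FakeFaceInput extends Base {
  // Opt-in signal: "this element participates in forms."
  static formAssociated = true;

  constructor() {
    super();
    // Handle to the ElementInternals API; form value, validity,
    // and lifecycle callbacks all go through this object.
    this._internals = this.attachInternals();
    this._value = "";
  }

  get value() { return this._value; }
  set value(v) {
    this._value = v;
    // Without this call, the element contributes nothing to FormData.
    this._internals.setFormValue(v);
  }

  // Invoked by the browser on form.reset() -- one of the behaviors
  // native inputs get for free that you must reimplement yourself.
  formResetCallback() { this.value = ""; }
}
```

Validation state (setValidity), disabled propagation, and restore callbacks pile on from there, which is exactly why a reusable mixin earns its keep.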
AgInput doesn’t fully implement this yet. Open ticket: Issue #274, captured and ready to tackle. Once resolved, the Experimental badges can finally come down.
Web components are framework-agnostic by design, but that doesn’t mean “frictionless everywhere.” React is the obvious stress test.
React 19 made genuine progress on web component support, but consuming a web component directly in JSX still surfaces paper cuts that accumulate fast.
Consider using <ag-input> directly in a React 19 app:
// Raw React 19: web component consumed directly
import { useEffect, useRef } from "react";
export default function RawExample() {
const inputRef = useRef(null);
const handleChange = (e) => console.log(e.target.value);
useEffect(() => {
// Custom events must be wired manually via ref in React 18 and below.
// React 19 adds declarative support, but event names must match exactly
// including case and use the on prefix. Easy to get wrong.
const el = inputRef.current;
el?.addEventListener("ag-change", handleChange);
return () => el?.removeEventListener("ag-change", handleChange);
}, []);
return (
// kebab-case required: JSX won't recognize PascalCase for custom elements
// Boolean props must be passed as strings or omitted entirely
// camelCase props like labelPosition may silently fail; React 18 lowercases
// them to labelposition; React 19 checks for a matching property first
<ag-input
ref={inputRef}
label="Email"
label-position="top"
placeholder="[email protected]"
required
></ag-input> // explicit closing tag required; self-closing silently breaks
);
}
Together, they’re a DX tax that requires knowing which React version you’re on.
The @lit/react createComponent wrapper eliminates the entire surface area of those problems. Here’s the actual wrapper for AgInput:
import * as React from "react";
import { createComponent } from "@lit/react";
import { AgInput, type InputProps } from "../core/Input";
export const ReactInput = createComponent({
tagName: "ag-input",
elementClass: AgInput,
react: React,
events: {
// Native events (click, input, change, focus, blur) work automatically.
// No mapping needed.
},
});
And consuming it:
// @lit/react wrapper: standard React DX, no web component roughness
export default function WrappedExample() {
return (
<ReactInput
label="Email"
labelPosition="top"
placeholder="[email protected]"
required
onChange={(e) => console.log(e.target.value)}
/>
);
}
PascalCase component name. camelCase props. Self-closing syntax. Native event handlers wired up like any other React component. Thin wrapper, big DX win.
React 19 narrowed the gap. @lit/react closes it.
AgnosticUI’s Vue wrappers are hand-rolled .vue SFC files. Story for another day.
Most component libraries ship as npm packages and expect consumers to absorb every update. I wanted to optimize for the consumer instead.
The AgnosticUI CLI takes a different approach: rather than installing a versioned package and praying the next update doesn’t break your overrides, you copy the component source directly into your project. Two commands:
# One-time project setup
npx agnosticui-cli init
# Add the components you actually need
npx agnosticui-cli add button input card
The components land as TypeScript files, readable and modifiable by you or your LLM. Your build tool needs to handle TypeScript compilation (Vite works great). If a future release has something you want, opt in deliberately with another add.
The philosophy is simple: own the source, make the LLM’s job easier.
Skip npm link, use npm pack. npm link is the obvious tool for local package development. It’s also, in my experience, a reliable source of subtle bugs: symlink resolution issues, mismatched peer dependencies, stale module caches.
The npm pack tarball workflow is slightly slower but more trustworthy.
My typical workflow across two terminal tabs:
# Tab 1: in the library root
# Run all checks, then pack a fresh tarball
npm run lint && npm run typecheck && npm run test && npm run build && npm pack
# Produces: agnosticui-core-2.0.0-alpha.[VERSION].tgz
# Tab 2: in the consuming app (docs site, playbook, or test project)
npm run clear:cache && npm run reinstall:lib && npm run docs:dev
# Or install directly by path
npm install ../../lib/agnosticui-core-2.0.0-alpha.13.tgz
I use this for all consumer tests: Storybooks, Kitchen Sink spot testing, CLI testing, and playbooks.
Playbooks are UI that model real scenarios: a Login Form, an Onboarding Wizard, a Discovery Dashboard. Building the Login playbook isn’t about testing AgInput. It’s just about using it. When something feels off, you know immediately.
So, while unit tests may tell you if a component works in isolation, playbooks tell you if it works in practice. That’s the ultimate litmus test and the whole point of dogfooding. Each playbook I shipped sent me back upstream to fix something I wouldn’t have caught otherwise. The components powering these aren’t just theoretically correct; they’ve been used and broken in something resembling the real world.

The playbooks are designed to be starting points, not finished products. A few ideas:
AgnosticUI v2 isn’t finished, and it may always be a WIP labor of love. Some components are still marked Experimental. Form association is an open ticket.
But the loop is closed.
I ramped up on Lit and Web Components. I used AI effectively without taking my hands off the wheel. I shipped something I can point to.
That’s enough.
Numbers in HTML attributes can be pulled into CSS with the attr() function and a bit of trickery. This allows design effects to be applied to those numbers. Today, we’ll look at an odometer effect, meaning numbers that “spin” vertically, like the mileage meter on a vehicle. This effect is useful for dynamically displaying numeric values and drawing the user’s attention when the values change, such as a rolling number of online users, a tracked price, or a timer.
The above example shows an amount up to the place value of millions. I’ll include more examples as we go.
<data id="amount" value="3284915">
<span class="digit"> <!-- Millions --> </span>
<span class="digit"> <!-- Hundred Thousands --> </span>
<span class="digit"> <!-- Ten Thousands --> </span>
<span class="digit"> <!-- Thousands --> </span>
<span class="digit"> <!-- Hundreds --> </span>
<span class="digit"> <!-- Tens --> </span>
<span class="digit"> <!-- Ones --> </span>
</data>
The amount is in the value attribute of the <data> element. You can use any other suitable element and attribute combination, like <div data-price="60589">. I’ve not included the comma separator in the HTML now; we’ll get to that later.
Let’s first get the number from the HTML attribute into a CSS variable using the attr(<attr-name> <attr-type>) function.
#amount {
--amt: attr(value number);
}
We’ll also need each .digit’s position, for which we use sibling-index().
#amount {
--amt: attr(value number);
.digit {
--si: sibling-index();
}
}
Now, we fill each .digit’s pseudo-elements with each digit from the number. To extract the digits from the number one by one, we use the mod() function.
#amount {
--amt: attr(value number);
.digit {
--si: sibling-index();
/* autofill digits */
&::after {
/* Divide the number by the power of 10, round down,
and use mod() to isolate a single integer (0-9) */
counter-set: n mod(round(down,var(--amt)/(1000000/pow(10,var(--si)-1))),10);
content: counter(n);
}
}
}
The CSS mod() function returns the remainder of a division.
To make it easier to demonstrate, here’s an example of autofilling digits for a three-digit number:
<data id="weight" value="420">
<span class="digit"></span>
<span class="digit"></span>
<span class="digit"></span>
gms
</data>
#weight {
--wgt: attr(value number);
.digit {
--si: sibling-index();
&::after {
counter-set: n mod(round(down,var(--wgt)/(100/pow(10,var(--si)-1))),10);
content: counter(n);
}
}
}
Here’s how the math works:
sibling-index() = 1
mod(round(down, 420/(100/pow(10,1-1))), 10)
mod(round(down, 420/(100/pow(10, 0))), 10)
mod(round(down, 420/(100/1)), 10)
mod(round(down, 420/100), 10)
mod(round(down, 4.2), 10)
mod(4, 10)
= 4
sibling-index() = 2
mod(round(down, 420/(100/pow(10,2-1))), 10)
mod(round(down, 420/(100/pow(10, 1))), 10)
mod(round(down, 420/(100/10)), 10)
mod(round(down, 420/10), 10)
mod(round(down, 42), 10)
mod(42, 10)
= 2
sibling-index() = 3
mod(round(down, 420/(100/pow(10,3-1))), 10)
mod(round(down, 420/(100/pow(10, 2))), 10)
mod(round(down, 420/(100/100)), 10)
mod(round(down, 420/1), 10)
mod(round(down, 420), 10)
mod(420, 10)
= 0
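The same arithmetic is easy to sanity-check outside CSS. Here’s a sketch in plain JavaScript mirroring the mod(round(down, …)) formula:

```javascript
// Mirrors the CSS: mod(round(down, value / (base / 10^(si - 1))), 10),
// where base is 10^(digitCount - 1) and si is the 1-based sibling index.
function digitAt(value, digitCount, siblingIndex) {
  const base = 10 ** (digitCount - 1);
  const divisor = base / 10 ** (siblingIndex - 1);
  return Math.floor(value / divisor) % 10; // round(down, ...) then mod 10
}

// The three cases worked out above:
console.log([1, 2, 3].map((si) => digitAt(420, 3, si))); // [4, 2, 0]
```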
When we add a separator character in the mix, using sibling-index() alone won’t give the right position of the digits following the separator. We have to exclude the separators from the math. Here’s an example:
<data id="amount" value="7459328">
<span class="digit"></span>
<span class="digit"></span>
<span class="separator">,</span>
<span class="digit"></span>
<span class="digit"></span>
<span class="separator">,</span>
<span class="digit"></span>
<span class="digit"></span>
<span class="digit"></span>
KRW
</data>
.digit {
--si: sibling-index();
&::after {
counter-set: n mod(round(down,var(--amt)/(1000000/pow(10,var(--i)))),10);
content: counter(n);
}
/* first two digits */
&:nth-child(-n+2)::after {
--i: var(--si) - 1;
}
/* third and fourth digits */
&:where(:nth-child(3 of .digit),:nth-child(4 of .digit))::after {
--i: var(--si) - 2;
}
/* last three digits */
&:nth-last-child(-n+3)::after {
--i: var(--si) - 3;
}
}
For each separator break, decrement the sibling index by 1 for the following digits.
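That per-group offset boils down to one rule: a digit’s effective index --i is its sibling index (zero-based) minus the number of separators that precede it. A quick JavaScript sketch of the idea:

```javascript
// For each child, --i = (sibling-index - 1) - separators seen so far.
// children is an array of kinds like ["digit", "digit", "separator", ...].
function digitIndices(children) {
  let separators = 0;
  const indices = [];
  children.forEach((kind, idx) => {
    if (kind === "separator") {
      separators += 1;
    } else {
      indices.push(idx - separators); // idx is already zero-based
    }
  });
  return indices;
}

// Layout of the KRW example: 7,459,328
const layout = ["digit", "digit", "separator", "digit", "digit",
                "separator", "digit", "digit", "digit"];
console.log(digitIndices(layout)); // [0, 1, 2, 3, 4, 5, 6]
```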
Now that the digits can be automatically separated into distinct elements, we can apply any animation we want to them individually. For the odometer effect, I’m adding animations that slide the digits up and down as the count decreases, mimicking the rolling style.
@property --n {
syntax: "<integer>";
inherits: false;
initial-value: 0;
}
@keyframes count {
from { --n: 9; }
to { --n: 0; }
}
@keyframes slideDown {
from { transform: translateY(-100%); }
to { transform: translateY(100%); }
}
@keyframes slideUp {
from { transform: translateY(100%); }
to { transform: translateY(-100%); }
}
The --n variable, of integer type, is animated in the @keyframes animation count, decrementing from 9 to 0.
&::after {
/* Save the digit in a variable */
--digit: mod(round(down, var(--amt) / (1000000/pow(10, var(--i)))), 10);
/* Show whichever is higher: the active countdown value (--n) or the target digit. Prevents the counter from dropping below the final value. */
counter-set: n max(var(--n), var(--digit));
content: counter(n);
/* The 1s is the countdown animation.
The 0.11s (1/9) slide animation repeats until countdown hits the target digit. */
animation: linear 1s, linear 0.11s calc(9 - var(--digit));
}
&:nth-of-type(even)::after {
animation-name: count, slideUp;
}
&:nth-of-type(odd)::after {
animation-name: count, slideDown;
}
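To see why the max() matters, here’s a sketch simulating the displayed value for one digit as the countdown runs:

```javascript
// Simulate counter-set: n max(--n, --digit) while --n animates 9 -> 0.
// The displayed value falls with the countdown, then parks on the target.
function countdownFrames(targetDigit) {
  const frames = [];
  for (let n = 9; n >= 0; n--) {
    frames.push(Math.max(n, targetDigit));
  }
  return frames;
}

console.log(countdownFrames(4)); // [9, 8, 7, 6, 5, 4, 4, 4, 4, 4]
```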
The demo from before:
Since the animation uses repeated vertical displacement to create the rolling effect, you can speed up, pause, or slow down the digits by count or position (sibling index): set any animation’s duration, delay, or repetition based on the count, the position, or both.
Here’s an example where the later counts are slightly slower:
&::after {
animation:
1.4s linear,
0.11s linear calc(5 - var(--digit)),
0.22s linear 0.55s calc(4 - var(--digit));
}
&:nth-of-type(even)::after {
animation-name: count, slideUp, slideUp;
}
&:nth-of-type(odd)::after {
animation-name: count, slideDown, slideDown;
}
Here’s one where there’s no count or rolling, just a jittery effect.
animation: 0.1s linear calc(0.1s * var(--si));
Although this post covered the odometer effect, its concept can be applied to other graphic effects involving numbers. Being able to autofill numbers into individual elements, and compute and animate them, all in CSS, simplifies designing visual changes for dynamic numeric values on screen.
About half an hour later, I had this:

Let’s see how I did it… and what went wrong.
We have a <nav> element with n children. Since we’ll be needing this number n to make styling choices, we pass it to the CSS as a custom property. The same goes for the index i of each nav item. To make it easier for myself, I used Pug to generate the HTML from a data object – the result looks as follows:
<nav style="--n: 7">
<a href="#" style="--i: 0">
tiger
<img src="tiger.jpg" alt="tiger drinking water" />
</a>
<a href="#" style="--i: 1">
lion
<img src="lion.jpg" alt="lion couple on a rock" />
</a>
<!-- the other cats -->
</nav>
It’s a pretty simple structure: just a nav wrapper around a (link) items, each of these items containing text and an img child.
The sibling-index() and sibling-count() CSS functions are not yet a thing cross-browser, so we’re adding the item index and count as custom properties when we generate the HTML in order to pass them to the CSS. Because otherwise, the CSS does not know how many children an HTML element has.
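Until those functions are universally supported, the same --n/--i wiring could also be done with a few lines of JavaScript instead of Pug. A rough sketch (the item data here is made up):

```javascript
// Generate the nav markup with --n on the wrapper and --i on each item,
// mirroring what the Pug template produces.
function navMarkup(items) {
  const links = items
    .map(
      (item, i) =>
        `<a href="#" style="--i: ${i}">${item.label}` +
        `<img src="${item.src}" alt="${item.alt}"></a>`
    )
    .join("\n");
  return `<nav style="--n: ${items.length}">\n${links}\n</nav>`;
}

const html = navMarkup([
  { label: "tiger", src: "tiger.jpg", alt: "tiger drinking water" },
  { label: "lion", src: "lion.jpg", alt: "lion couple on a rock" },
]);
```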
Moving on to the CSS, our nav is using fixed positioning and made to cover all available viewport space (note that this excludes any scrollbars we might have).
nav {
position: fixed;
inset: 0;
}
The next step is to use a grid layout for it, limit the width of the grid’s one column, and middle-align this grid within the element:
nav {
display: grid;
grid-template-columns: min(100% - 1em, 25em);
place-content: center;
position: fixed;
inset: 0;
}
Note that we use 100% - 1em inside the min() to keep a little bit of space on the lateral sides of the grid to prevent it from kissing the viewport edges without adding a separate padding rule. Because why waste precious screen space on a non-essential declaration when we could find more important CSS to cram in there?

We’re done with the important styles on the nav, so we move on to prettifying touches. We slap on a subtle background and give it a viewport-relative font, kept within reasonable limits by a clamp() – we don’t want the text to get so small it’s unreadable, nor do we want it to balloon on huge screens.

With the nav styles settled, we turn our attention to the links, for which we use a flex layout. This allows us to middle-align the text content and the img vertically and push them to opposite ends horizontally:
nav a {
display: flex;
align-items: center;
justify-content: space-between;
}

Each link receives a thin border-bottom to create the separator line and a lateral padding. These are set as custom properties, which may not make much sense right now, but I promise it’s for a good reason.
nav a {
--pad: min(2em, 4vw);
--l: 1px;
display: flex;
align-items: center;
justify-content: space-between;
border-bottom: solid var(--l) #000;
padding: 0 var(--pad);
}
We give each link a color and strip the default underline with text‑decoration: none. These are purely cosmetic, and we’ll revisit them later in the article.

Next, we prepare the img elements for future magic by sizing them and ensuring they act like well-behaved cats – no stretching! The responsive image height and the aspect ratio are also set as custom properties next to the link padding and separator line width – the purpose of doing so will become clear shortly.
nav a {
--pad: min(2em, 4vw);
--l: 1px;
--r: 3/ 2;
--h: round(down, min(4em, 30vw, 100dvh/(var(--n) + 1)), 2px);
display: flex;
align-items: center;
justify-content: space-between;
border-bottom: solid var(--l) #000;
padding: 0 var(--pad);
}
nav img {
height: var(--h);
aspect-ratio: var(--r);
object-fit: cover;
}

Since the images will flip in 3D, they also get backface-visibility: hidden, so we only see them when they’re facing us and they’re invisible when facing the back of the screen.
nav img {
height: var(--h);
aspect-ratio: var(--r);
object-fit: cover;
backface-visibility: hidden;
}
This is handy when we want to make sure they’re facing the right way. We may comment this out temporarily a bit later, just to take a peek and check they’re in the right position even when facing the other way.
In order for the thumbnails to really look like they’re rotating in 3D, we add a perspective and a perspective‑origin to each img parent. The horizontal position of the origin needs to be a padding --pad plus half an img width (computed from the height --h and aspect ratio --r) to the left of the right edge (which is at 100%).
nav a {
--pad: min(2em, 4vw);
--l: 1px;
--r: 3/ 2;
--h: round(down, min(4em, 30vw, 100dvh/(var(--n) + 1)), 2px);
display: flex;
align-items: center;
justify-content: space-between;
border-bottom: solid var(--l) #000;
padding: 0 var(--pad);
perspective-origin:
calc(100% - var(--pad) - .5*var(--h)*var(--r));
perspective: 20em;
}
This is why we needed custom properties for those values, to ensure things stay consistent without having to make changes in multiple places when we want to tweak the lateral padding for the items or use different image dimensions.
So far, this is what we have:

Now let’s make it work!
Unfortunately, scroll-snap-points got deprecated, so now we need to resort to adding this abomination of a phantom branch to the DOM tree:
<div class='snaps' aria-hidden='true'>
<div class='snap'></div>
<div class='snap'></div>
<!-- as many of these as nav items -->
</div>
We need the nav content to remain permanently in view, so it cannot scroll. But, since just making the html element tall doesn’t suffice for scroll snapping anymore, we need to create these scrolling elements to snap to.
* { margin: 0 }
html {
scroll-snap-type: y mandatory;
overscroll-behavior: none
}
.snap {
scroll-snap-align: center;
scroll-snap-stop: always;
height: 100dvh
}

.snap elements are used here

We’ve also added overscroll-behavior to kill the rubber‑band overscroll bounce and scroll-snap-stop to stop the scroll from skipping over snap points when going quickly up or down. Though, unless I’m misunderstanding what they’re supposed to do, neither of them actually works.
We introduce a new custom property --k to track the scroll progress. First, we register it via @property so the browser treats it as an animatable numeric value. Otherwise, it would just abruptly flip in between the animation end state values.
@property --k {
syntax: '<number>';
initial-value: 0;
inherits: true
}
Then we drive --k to 1 from its initial-value of 0 via a keyframe animation that we tie to the scroll timeline:
nav {
/* same as before */
animation: k 1s linear both;
animation-timeline: scroll();
}
@keyframes k { to { --k: 1 } }
We use this --k value to compute the current nav item index, which we call --j and which needs to be registered as an integer:
@property --j {
syntax: '<integer>';
initial-value: 0;
inherits: true
}
nav {
/* same as before */
--j: round(var(--k)*(var(--n) - 1));
}

There are two things to note here.
One, we need to register --j in order for the animation to work in Chrome. I don’t really understand why, since it’s not the CSS variable being animated here, and in Safari, the animation works the same whether it’s registered or not. I registered it at first just to follow the computed values in DevTools, and then noticed the demo breaks when I try to remove its @property block. Maybe someone who knows better can chime in.
Two, animating --j directly in steps from 0 to n - 1 would have been simpler. However, at this point, Firefox still refuses to animate a custom property to a value depending on another custom property.
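The --k to --j mapping is simple enough to check with plain numbers:

```javascript
// --j: round(var(--k) * (var(--n) - 1)) -- map scroll progress [0, 1]
// to a current item index in [0, n - 1].
function currentIndex(k, n) {
  return Math.round(k * (n - 1));
}

console.log(currentIndex(0, 7));   // 0  (top of the page)
console.log(currentIndex(0.5, 7)); // 3  (middle item)
console.log(currentIndex(1, 7));   // 6  (last item)
```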
We can now move on to computing the rotation and “hinge” position (set via transform-origin) based on each nav item’s index --i and the index of the current item --j.
We start by comparing each item’s own index (--i) with the scroll‑derived current index (--j). The sign of their difference tells us whether an item is ahead, behind, or exactly on target, and from that we derive a binary selection flag (--sel). When --sel is 1 the item is the one currently under the spotlight.
nav a {
/* same as before */
--sgn: sign(var(--i) - var(--j));
--sel: calc(1 - abs(var(--sgn)));
}
Think of this selection flag as a CSS boolean, which is something I’ve written about before, in a lot of detail even.

We have three possible cases here.
- --i is bigger than --j (the item of index --i is ahead of the current one), so the sign of their difference is 1 and the selection flag is 0 (the item of index --i is not selected)
- --i is equal to --j (the item of index --i is the current one), so the sign of their difference is 0 and the selection flag is 1
- --i is smaller than --j (the item of index --i is behind the current one), so the sign of their difference is -1 and the selection flag is 0 (the item of index --i is not selected)

Now we need to use these values to compute the rotation around the x-axis and the vertical position of the horizontal axis for our navigation items in all three scenarios.
In case you need a CSS 3D refresher, a rotation around the x-axis works as illustrated by the following live demo:
The x-axis we rotate around points towards the cat. From the point of view of the cat, a positive rotation is one she sees going clockwise.
Knowing all of this, we can use it as follows in our three cases:
- i > j (ahead of the current item, when the sign is +1) – the image rotates by +180°, clockwise around a hinge that sits half a separator line thickness above the top edge of the image, a vertical position that can be expressed as -.5*l or, equivalently, 50% - +1·(50% + .5·l)
- i = j (the current item, when the sign is 0) – the image doesn’t rotate, so we can consider that to be a 0° rotation, or, equivalently, 0·180°; since there is no rotation, the hinge is irrelevant, so we can take its vertical position as being whatever, for example, just the default 50% or, equivalently, 50% - 0·(50% + .5·l)
- i < j (behind the current item, when the sign is -1) – the image rotates by -180°, anti-clockwise around a hinge that sits half a separator line thickness below the bottom edge of the image, a vertical position that can be expressed as 100% + .5*l or, equivalently, 50% - -1·(50% + .5·l)
The above is a lot, but it shows the position not just for the image of the current item, but for those of the items right before and right after, rotated and with the rotation axis highlighted. They are also translated horizontally so they don’t overlap – this is just to show them side by side, we don’t have this translation in the actual demo.
Now you may be wondering why the odd equivalent forms. They are used to show how all those values satisfy the same formula depending on the sign of the difference.
The rotation is:
- +1·180° when the sign is +1
- 0·180° when the sign is 0
- -1·180° when the sign is -1

Do you see a pattern? The rotation is the sign multiplied by 180°.
Similarly, the y axis position of the hinge is:
- 50% - +1·(50% + .5·l) when the sign is +1
- 50% - 0·(50% + .5·l) when the sign is 0
- 50% - -1·(50% + .5·l) when the sign is -1

Again, it’s almost all the same, except for the sign.
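All of the above boils down to a couple of one-liners. A sketch in JavaScript, with the hinge position described in words rather than the mixed %/px calc:

```javascript
// Derive the selection flag, rotation, and hinge placement from the
// item index i and the scroll-derived current index j.
function flipParams(i, j) {
  const sgn = Math.sign(i - j); // +1 ahead, 0 current, -1 behind
  return {
    sel: 1 - Math.abs(sgn),     // the CSS "boolean" --sel
    rotateDeg: sgn * 180,       // rotate: x calc(var(--sgn)*180deg)
    // transform-origin y: 50% - sgn*(50% + .5*l), i.e. just above the
    // top edge, dead center, or just below the bottom edge:
    hinge: ["bottom", "middle", "top"][sgn + 1],
  };
}

console.log(flipParams(3, 2).rotateDeg); // 180
console.log(flipParams(2, 2).sel);       // 1
```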
Putting it all into CSS, we have:
nav a {
/* same as before */
--sgn: sign(var(--i) - var(--j));
--sel: calc(1 - abs(var(--sgn)));
}
nav img {
/* same as before */
transform-origin:
0 calc(50% - var(--sgn)*(50% + .5*var(--l)));
rotate: x calc(var(--sgn)*180deg);
}
The final piece here is transitioning the rotation so our images don’t just appear in place when the containing item is selected. Since we also want to have a color and text-indent transition on the item text as well, we set the duration as a custom property at item level:
nav a {
/* same as before */
--sgn: sign(var(--i) - var(--j));
--sel: calc(1 - abs(var(--sgn)));
--t: .5s;
}
nav img {
/* same as before */
transform-origin:
0 calc(50% - var(--sgn)*(50% + .5*var(--l)));
rotate: x calc(var(--sgn)*180deg);
transition: var(--t) rotate
}
Almost there, but not quite:

Things start out well with the image of the newly unselected item rotating out around its exit hinge. However, the image of the newly selected item doesn’t rotate in as it should, around its enter hinge. Instead, it just rotates in around its middle axis.
The problem is that once an item becomes selected, the second value of the transform-origin, which gives us the y position of the horizontal axis of rotation, abruptly moves from half a line thickness above/below the top/bottom edge to the middle of the element. We only want this to happen after the rotation, so we need to add a delay equal to the transition-duration of the rotation.
At the same time, we want to keep the current behavior when an item becomes deselected: in that case, its transform-origin should abruptly move half a line thickness above/below the top/bottom edge, depending on the direction we go in.
So we want a delay in the abrupt change (0s duration) of transform-origin only when an item becomes selected (--sel has flipped to 1), but not when it becomes deselected (--sel has flipped to 0). This means we need to multiply the delay by the selection flag.
The final transition declaration therefore looks like this:
transition:
0s transform-origin calc(var(--sel)*var(--t)),
var(--t) rotate
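For clarity, here’s the image rule with everything assembled in one place (nothing new, just the declarations we’ve built up so far):

```css
nav img {
    /* same as before */
    transform-origin:
        0 calc(50% - var(--sgn)*(50% + .5*var(--l)));
    rotate: x calc(var(--sgn)*180deg);
    transition:
        0s transform-origin calc(var(--sel)*var(--t)),
        var(--t) rotate
}
```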

Besides the thumb flip, we also want the text to stand out a bit more when its containing item becomes the current one, so we bump up its contrast and slide it in.
The same --sel flag that tells us whether an item is selected drives both the color and the text-indent change. The color goes from a mid grey in the normal case to an almost black in the selected case, while the text-indent goes from 0 to 1em. Both properties get a simple transition so the shift feels smooth.
/* relevant CSS for the visual motion part only */
nav a {
--sgn: sign(var(--i) - var(--j));
--sel: calc(1 - abs(var(--sgn)));
--t: .5s;
color: hsl(0 0% calc(50% - var(--sel)*43%));
text-indent: calc(var(--sel)*1em);
transition: var(--t);
transition-property: color, text-indent;
}
nav img {
transform-origin:
0 calc(50% - var(--sgn)*(50% + .5*var(--l)));
rotate: x calc(var(--sgn)*180deg);
transition: var(--t) rotate
}
Our demo now behaves like the original version, except it’s driven by scroll and the rotations are “hinged” around the separator lines. This is the version seen in the recording at the start of the article.
The final result, while looking good in Chrome, is glitchy in Epiphany, though this doesn’t seem to be as much of a problem in actual Safari, according to the responses I got when I asked on Mastodon and Bluesky. It also completely lacks any animation in Firefox. It turns out the root cause of the Firefox problem is this bug some rando filed a couple of years ago. That rando was seemingly me, though I have no recollection of it anymore.
Another issue is that, since both the nav and the snaps are using the dynamic viewport, there’s a lot of jumping around on mobile/tablet. So it’s probably better to use the small viewport for the nav and the large one for the snaps.
.snap {
/* same as before */
height: 100lvh
}
nav {
/* same as before */
height: 100svh
}
However, using the small viewport for the nav means it may not cover the entire viewport in all scenarios, so we could get a white band at the bottom – the default page background contrasting with the subtle one on the nav. To fix this, we need to move the background from the nav to the html or the body.
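A minimal sketch of that fix, moving the subtle background up to the root (the flat grey value here matches the one used in the later dark mode version; treating it as the nav’s previous background is an assumption):

```css
html { background: #dedede } /* moved here from the nav */
```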
Since our nav items are links, they should have usable :hover and :focus styles.
nav a {
/* same as before */
--hov: 0;
color:
hsl(345
calc(var(--hov)*100%)
calc(50% - var(--sel)*(1 - var(--hov))*53%));
&:is(:hover, :focus) { --hov: 1 }
&:focus-visible {
outline: dotted 4px;
outline-offset: 2px
}
}
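Plugging the two flags into that color formula shows the four states it covers (these are just the arithmetic worked out; a negative computed lightness clamps to 0%):

```css
/* --sel: 0, --hov: 0  →  hsl(345 0% 50%)    mid grey               */
/* --sel: 1, --hov: 0  →  hsl(345 0% -3%)    clamps to black        */
/* --sel: 0, --hov: 1  →  hsl(345 100% 50%)  saturated highlight    */
/* --sel: 1, --hov: 1  →  hsl(345 100% 50%)  same hover highlight   */
```

Note how the (1 - var(--hov)) factor zeroes out the selection darkening on hover, so hovered items always get the same saturated color regardless of selection state.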
And it’s probably best not to greet night owls with such a bright background, so we should respect user-set dark mode preferences, which means rethinking how we set the color.
html {
/* same as before */
color-scheme: light dark;
background: light-dark(#dedede, #212121)
}
a {
/* same as before */
border-bottom: solid var(--l) light-dark(#121212, #ededed);
color:
light-dark(
color-mix(in srgb,
#9b2226 var(--prc-hov),
color-mix(in srgb, #023047 var(--prc-sel), #454545)),
color-mix(in srgb,
#ffb703 var(--prc-hov),
color-mix(in srgb, #8ecae6 var(--prc-sel), #ababab))
);
}
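The --prc-hov and --prc-sel custom properties aren’t defined in this snippet; presumably they’re percentage versions of the earlier --hov and --sel flags, needed because color-mix() takes percentages. Something along these lines (an assumption on my part, not code from the original demo):

```css
a {
    /* assumed: flags converted to the percentages color-mix() expects */
    --prc-hov: calc(var(--hov)*100%);
    --prc-sel: calc(var(--sel)*100%);
}
```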
Here’s that demo (and remember this is scroll-based not hover-based):
And maybe we shouldn’t have removed the underlines, though this is a navigation component, so it should be expected that what we have in there are links. Personally, I’m on the fence about this. The main reason I decided against putting them back is that I’m not a designer, and by repeatedly trying and failing to come up with an aesthetically pleasing treatment for them, I was going down a deep rabbit hole unrelated to the main topic of this article.
Finally, it’s often said that scroll-jacking is a bad idea and you shouldn’t do it. I personally like scroll effects when they’re well done and not excessive, but I can understand that others may have different preferences.
Since this is supposed to be a navigation, but the demo has no content to navigate to, maybe we should add content and make the effect happen on navigating to the corresponding section.
However, this comes with extra challenges when sections have different heights, as well as when skipping sections via the navigation, neither of which I was able to solve.
Below is the best I could get. It uses JavaScript, and the animation looks bad when skipping items. It’s also not responsive, and I don’t really know what to do about it on small or very large viewports.
The most important takeaway is probably that things don’t turn out the way you expect them to.
I needlessly complicated this demo early on (setting custom properties instead of sibling-index() and sibling-count(), not animating the current item index --j directly) for the sake of wider support/avoiding bugs. And in the end, I didn’t even need to do that because it doesn’t work cross-browser anyway.
I also aimed for a pure CSS solution with a nice hinging animation, but when I tried to make it usable, I couldn’t do it without JavaScript, and I couldn’t keep the animation looking nice.
The other very important takeaway is that anything can turn into a deep rabbit hole when you’re incompetent like me. After completing the demo quite quickly, I was still unhappy with it, so I ended up spending a ridiculous amount of time on various improvement attempts, none of which worked out. In the end, I took them all out.