Cloud Four: Responsive web design and development, progressive web apps
https://cloudfour.com

Little Dummies: Simple FPO Content Helpers
Thu, 12 Mar 2026
https://cloudfour.com/thinks/little-dummies-simple-fpo-content-helpers/

I was delighted to present this talk at the final (for now) episode of The Eleventy Meetup:

Eleventy Build Awesome is great for rapid prototyping! Tyler shows off a few of his favorite shortcodes, filters and other techniques to quickly populate interactive mockups and wireframes with “dummy” or FPO (for placement only) content.

You can watch the presentation here:

Example Helpers

These are the main code samples from my slides, consolidated below for easy reference.

Each helper is written in ESM syntax (but should work in CommonJS with a few small changes), and each assumes it lives in its own file (imported by the config).

Text, Numbers, etc.

With the Chance dependency:

import Chance from "chance";
let chance;

export default function (method, ...args) {
  // Support JSON string for first argument
  if (typeof args[0] === "string") {
    args[0] = JSON.parse(args[0]);
  }

  // Instantiate Chance the first time
  if (!chance) {
    chance = new Chance();
  }

  // If the method exists, return its output
  if (typeof chance[method] === "function") {
    return chance[method](...args);
  }

  // Otherwise, log an error but proceed
  console.error(`[chance] No method named ${method}`);
  return "";
}

Eleventy Config Example

import chanceHelper from "./helpers/chance.js";

export default function (eleventyConfig) {
  eleventyConfig.addShortcode("chance", chanceHelper);
  eleventyConfig.addFilter("chance", chanceHelper);
}
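If you want to poke at the dispatch pattern without installing anything, here's a self-contained sketch in which a stub object stands in for the Chance instance. The stub's integer method is hypothetical, just enough to exercise the helper's logic:

```javascript
// A stub stands in for the Chance instance; its "integer" method is
// hypothetical, just enough to exercise the helper's dispatch logic
const stub = {
  integer({ min, max }) {
    return min + Math.floor(Math.random() * (max - min + 1));
  },
};

function helper(method, ...args) {
  // Shortcode arguments arrive as strings, so parse a JSON first argument
  if (typeof args[0] === "string") {
    args[0] = JSON.parse(args[0]);
  }

  // If the method exists on the stub, return its output
  if (typeof stub[method] === "function") {
    return stub[method](...args);
  }

  // Otherwise, log an error but proceed
  console.error(`[chance] No method named ${method}`);
  return "";
}

console.log(helper("integer", '{"min": 1, "max": 6}'));
console.log(helper("bogus")); // logs an error and returns ""
```

In a Nunjucks template, the registered shortcode would be called along the lines of `{% chance "integer", '{"min": 1, "max": 6}' %}`.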

Images

With our Simple SVG Placeholder dependency:

import simpleSvgPlaceholder from "@cloudfour/simple-svg-placeholder";

/**
 * Modify these to suit the project!
 * @see https://github.com/cloudfour/simple-svg-placeholder#option-reference
 */
const defaults = {
  bgColor: "rgb(0 0 0 / 0.8)",
  textColor: "white",
};

export default function (width, height, options = {}) {
  // Support JSON string for argument
  if (typeof options === "string") {
    options = JSON.parse(options);
  }

  return simpleSvgPlaceholder({...defaults, width, height, ...options});
}

Eleventy Config Example

import fpoImageHelper from "./helpers/fpo-image.js";

export default function (eleventyConfig) {
  eleventyConfig.addShortcode("fpoImage", fpoImageHelper);
}
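One detail worth noting is the spread order: caller options override the project defaults, while width and height stay explicit. Here's a quick self-contained check of that merge behavior, with the simpleSvgPlaceholder call swapped out so the merged options are returned directly:

```javascript
// Project defaults, as in the helper above
const defaults = {
  bgColor: "rgb(0 0 0 / 0.8)",
  textColor: "white",
};

// Same merge logic, but returning the merged options instead of
// calling simpleSvgPlaceholder, so precedence is easy to inspect
function mergedOptions(width, height, options = {}) {
  if (typeof options === "string") {
    options = JSON.parse(options);
  }
  return { ...defaults, width, height, ...options };
}

console.log(mergedOptions(640, 480, '{"textColor": "black"}'));
// textColor comes from the caller; bgColor falls back to the default
```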

Icons

This example uses the Iconify API via Eleventy Fetch.

I’ve made a small enhancement since the presentation, adding the entities package to escape attribute values on the inline SVG based on a recommendation from Zach Leatherman:

import EleventyFetch from "@11ty/eleventy-fetch";
import { escapeAttribute } from "entities/escape";

const apiUrl = "https://api.iconify.design";

const fetchOptions = {
  duration: "1y",
  type: "json",
};

const defaultAttr = {
  xmlns: "http://www.w3.org/2000/svg",
  class: "icon",
};

// Get a specific set:name icon string
async function getSpecificIcon(icon) {
  // If already specific, do nothing
  if (icon.includes(":")) {
    return icon;
  }

  const searchUrl = `${apiUrl}/search?query=${icon}&limit=1`;
  const searchData = await EleventyFetch(searchUrl, fetchOptions);
  const results = searchData.icons || [];

  if (results.length === 0) {
    throw new Error(`No icon found for ${icon}`);
  }

  return results[0];
}

// { class: "icon", width: 120 }
// => 'class="icon" width="120"'
function objectToAttributeString(obj) {
  return Object.entries(obj)
    .map(([key, value]) => {
      value = escapeAttribute(`${value}`);
      return `${key}="${value}"`;
    })
    .join(" ");
}

export default async function (icon, attr = {}) {
  // Support JSON strings for attributes
  if (typeof attr === "string") {
    attr = JSON.parse(attr);
  }

  try {
    icon = await getSpecificIcon(icon);
    const [setName, iconName] = icon.split(":");
    const iconDataUrl = `${apiUrl}/${setName}.json?icons=${iconName}`;
    const iconData = await EleventyFetch(iconDataUrl, fetchOptions);

    if (typeof iconData !== "object") {
      throw new Error(`Request for ${icon} returned ${iconData}`);
    }

    const { width, height } = iconData;
    const { body } = iconData.icons[iconName];
    const attrString = objectToAttributeString({
      ...defaultAttr,
      "data-icon": icon,
      viewBox: `0 0 ${width} ${height}`,
      width,
      height,
      ...attr
    });

    return `<svg ${attrString}>${body}</svg>`;
  } catch(err) {
    console.error(`[iconify] ${err.message}`);
    return "";
  }
}

Eleventy Config Example

import iconifyHelper from "./helpers/iconify.js";

export default function (eleventyConfig) {
  eleventyConfig.addShortcode("iconify", iconifyHelper);
}
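The attribute-building step is easy to verify in isolation. Here's a self-contained version of objectToAttributeString with a minimal escape function standing in for the entities package (the real escapeAttribute covers more cases):

```javascript
// Minimal stand-in for the entities package's escapeAttribute
// (the real implementation covers more cases)
function escapeAttribute(value) {
  return value
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/</g, "&lt;");
}

// { class: "icon", width: 120 } => 'class="icon" width="120"'
function objectToAttributeString(obj) {
  return Object.entries(obj)
    .map(([key, value]) => `${key}="${escapeAttribute(`${value}`)}"`)
    .join(" ");
}

console.log(objectToAttributeString({ class: "icon", width: 120 }));
console.log(objectToAttributeString({ title: 'say "hi"' }));
// quotes in values are escaped: title="say &quot;hi&quot;"
```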

Resources

More references from the talk (other than dependencies in the previous section):

Some Questions Answered

Two questions stood out to me during a live Q&A following my presentation:

Do you only prototype in Eleventy / Build Awesome?

No! Sometimes the teams we work with already have an environment with a suitable playground, or a different stack they’re more familiar with. But Build Awesome is a great fallback due to its stability, performance, flexibility and small footprint.

Do you share these sorts of helpers between projects?

We start from a private, opinionated template repository, which we tailor to the project’s needs. After a major milestone, we’ll see if anything we diverged on deserves to be added back to the template repo to benefit future projects. This allows us to easily adapt the helpers to the needs of the project without sweating breaking changes.

Acknowledgements

Big thanks to Sia Karamalegos for having me, Zach Leatherman for creating Build Awesome (new Kickstarter launching soon) and encouraging my unique possum interpretations, and especially everyone who attended the meetup!

If you want to see Cloud Four’s process in action, get in touch! We’re always looking for our next web app design challenge.


We’re Cloud Four

We solve complex responsive web design and development challenges for ecommerce, healthcare, fashion, B2B, SaaS, and nonprofit organizations.

See our work

How We Do Code Reviews at Cloud Four
Wed, 04 Mar 2026
https://cloudfour.com/thinks/how-we-do-code-reviews-at-cloud-four/

Cartoon illustration showing one person struggling under the weight of an enormous cardboard box labeled “PR”, while the person they're trying to hand it to recoils in alarm.

Does this sound familiar to you?

Chat, is it good when your AI-obsessed colleague drops a +16,105 -193 pull request with 102 commits all titled “wip: implement next task” and asks that it be immediately approved for next release?
eva

The issue isn’t AI specifically, but the speed with which contributors can generate massive amounts of code, exposing weaknesses in a team’s workflow.

Cloud Four is currently a small agency, with only a handful of devs. We often work together with our client’s internal developers, or with contractors. Because of this, we’ve developed a set of best practices that I’m quite proud of. If your team members dread the notification that they’ve been added as a reviewer on a pull request, I think the following guidelines can help.

All Code Gets Reviewed

Our first rule is a strict one. If a code change is going to production, it gets reviewed. We work for clients, so “move fast and break things” isn’t a realistic way to do business. If I get sloppy and push unreviewed code that causes an incident, I’ve put the client in a bad spot, triggered an urgent crisis for my team to deal with, and perhaps jeopardized our client relationship.

In every code repository we work in, we recommend enabling protection rules for the production branch. GitHub makes it easy to require a pull request before merging, and to require pull requests be approved by someone other than the author. The only exception we make is to grant certain senior developers permission to bypass these rules for safe changes like minor dependency updates.

I know teams that require two developers to review every pull request. Normally, one of the two can be the dev who submitted the PR, but for AI-authored pull requests, two human devs still need to review it. That’s a clever way to ensure automated code gets a bit more attention than normal.

The natural consequence of requiring a review for all code is the dev team has to actually review all that code. This can be time-consuming, and if you don’t already have a team culture that values code review, this may be a tough pill to swallow. The rest of our guidelines are aimed at reducing the burden placed on code reviewers.

Prefer Many Small PRs Over One Giant PR

Firstly, we absolutely reserve the right to reject massive pull requests like the one described in the introduction. A common problem we run into is pull requests that make multiple unrelated changes, such as including a refactoring pass alongside new features or bug fixes. When I see this happen, I’ll reach out to the dev in a non-confrontational way and explain that by combining all their changes like this, it makes the reviewer’s task much more difficult.

Here are some things we’ll commonly ask a dev to pull out into a separate pull request:

  • Lint rule changes that result in many files being changed at once. This happens, but there’s no reason to combine it with anything else. It’s easier to review dozens of similar file changes when there are no unrelated changes lurking in the code. Plus, it simplifies the acceptance criteria for the lint changes if you don’t expect any impact on functionality.
  • Moving code from one location to another. If there’s a huge block of red in one file, a huge block of green in another file, and a comment saying “no changes, just relocated this code for [reasons],” that’s trivial to review. On the other hand, if the code was also modified as it moved, those edits are buried inside the giant delete-and-add blocks, and the diff can’t highlight them. It’s far easier to review if you break it up into one PR to move the code, and another to modify it.
  • Unrelated bug fixes or features. I know it’s tempting to fix a bug you noticed while you were in that file, or to update some code to modern standards, but that just adds noise for the reviewer. It’s totally fine to file a second pull request at the same time, making that bug fix.

Once your team gets the hang of it, you’ll see fewer monster pull requests, and your team will become more comfortable pushing back on unnecessarily large pull requests in general.

Explain Why This Change is Needed

A pull request should not just explain what changes are being made, it should explain why they’re being made. That context is incredibly valuable to anyone who isn’t intimately familiar with the code you’re changing. Whether that’s your fellow dev who is taking time away from their tasks in a different part of the code base, someone who works on another team entirely, or yourself in the future.

I want to emphasize that last one. I can’t tell you how many times I’ve been trying to figure out why some feature works the way it does, and when I dive into git blame I see my name staring back at me. When I track the change back to a commit I authored with a well-written description, I’m relieved. Conversely, when I find a simple “change the client wanted” comment, I want to pull my hair out.

The same is true of any developer who ends up reviewing your code. Give them the context for why you’re making this change. What problem were you solving? Why was this the best solution? Armed with this information, your reviewer will have an easier time.

Provide Thorough Testing Instructions

The phrase “acceptance criteria” isn’t just for project managers! After a good description of why the change is being made, we like to provide detailed testing instructions. We started this when we were working with some new contractors, who didn’t necessarily have a full picture of how to test the application. It’s also proven valuable when non-developers step in to help out while we’re facing an impending deadline.

Don’t assume the person who is testing your pull request knows how to work with your app. We often literally provide step-by-step checklists like this:

  • Check out this branch in your local environment
  • Set new_feature_flag in config.js to true
  • Start the app and navigate to /new-feature
  • Apply a filter using the hamburger menu in the top-right
  • You should see the list of cards has been filtered
  • Remove the filter
  • Now you should see all the cards again
  • Etc…

This might feel like overkill, especially if another dev on your team will review it. But remember that other dev may be working on another part of your application, and hasn’t been paying careful attention to the part you’re working on. A checklist like this, that doesn’t assume the reviewer already knows how to test your feature, reduces the burden on any reviewer, and even opens the door to less-technically minded team members helping with testing.

Another benefit of this approach is that writing out testing instructions gives you an opportunity to follow those instructions. I can’t tell you how often, in the course of writing a step-by-step checklist for a pull request, I’ve uncovered an issue. If I can address that before the reviewer starts, I’m saving everyone time.

Bonus: writing testing instructions for a human tends to give you a clear idea for automated tests. After all, if you know how it should work, why not let your CI workflow check it for you?
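As a sketch of that idea, the filter steps from the checklist above translate almost mechanically into assertions against some hypothetical filtering logic (the card shape and filterCards function here are invented for illustration):

```javascript
// Hypothetical card-filtering logic that the manual checklist exercises by hand
const filterCards = (cards, tag) =>
  tag ? cards.filter((card) => card.tag === tag) : cards;

const cards = [
  { title: "A", tag: "new" },
  { title: "B", tag: "old" },
];

// "Apply a filter" → "You should see the list of cards has been filtered"
const filtered = filterCards(cards, "new");
if (filtered.length !== 1 || filtered[0].title !== "A") {
  throw new Error("filter should narrow the card list");
}

// "Remove the filter" → "Now you should see all the cards again"
if (filterCards(cards, null).length !== cards.length) {
  throw new Error("clearing the filter should restore all cards");
}

console.log("all checklist steps pass");
```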

Code Review is a Culture Issue

What all these suggestions have in common is being aware of the cognitive burden on another developer who may be stepping away from their assigned tasks to think about yours. Context switching is a productivity killer, and anything you can do to make it easier for someone else to review your code helps avoid code review becoming something your team members dread.

Requiring all code be reviewed before going to production shows you value the quality of what you ship. Asking your team to prefer small pull requests over large ones helps reduce the scope of review. Providing context for why a change was made helps the reviewer understand those decisions. And providing clear testing instructions means reviewers don’t get derailed figuring out how to test the changes.

All of this adds up to a culture that expects quality code, values the time it takes to review it, and respects the team’s efforts to do so.


Getting Your Article Shared: Tips from Ten Years of Newsletter Curation
Thu, 05 Feb 2026
https://cloudfour.com/thinks/getting-your-article-shared-tips-from-ten-years-of-newsletter-curation/

Illustration of a bland grey door in a bland grey hallway, with the word “meh” on the door — which conceals the fact that on the other side of the door is a bright, colorful nature scene of a deer resting beside a waterfall — exciting content hidden behind a boring door.

For over ten years now, I’ve been sharing front-end links with the community via a newsletter and social media account called Friday Front-End. Every week, I bookmark 20–30 articles, and pick the best ones to include. In that time, I’ve learned some things I want to pass along to you: Recommendations to make your article more likely to be shared in newsletters and social media accounts like the one I run.

There’s loads of great content out there, and it’s funny how often someone puts the work into crafting an excellent post, but misses out on all the things that make it easy to widely share.

To understand what people like me are looking for, consider the format of a typical Friday Front-End post:

Illustration of a code editor with a cartoon-style word bubble containing “?!” as if the code editor is saying something surprising.

Your Exciting Post About #CSS: “Here’s a short quote from your article to interest people to read it.” https://example.com/

Pretty simple, right? A featured image, then the title, followed by a short quote, the URL, and a hashtag for the primary focus of the article (for Friday Front-End, this will almost always be #CSS, #JavaScript, or #a11y).

Now, here are the things you can do to make your post easier to share:

Add Open Graph tags

If you take just one thing from this post, make it this. Adding OG (Open Graph) tags to your post gets you the most bang for your buck. Here’s what OG tags look like:

<meta property="og:url" content="https://example.com/" />
<meta property="og:title" content="An exciting post about CSS" />
<meta property="og:description" content="Here's a short description of this exciting post about CSS" />
<meta property="og:image" content="https://example.com/thumbnail.jpg" />

There are more tags you could use, but you get the idea. In a nutshell, a long time ago, Facebook came up with a standard for describing your web page so their crawlers could understand it. They used that information to make little preview cards of a link when someone shared your page. The concept spread quickly, and now pretty much every social network out there will use your OG tags to make a preview card.

Even better, tools that people like me use to manage their social media accounts and newsletters, like Buffer or Curated, also understand them. That means I can simply pass them your post’s URL, and they automatically extract useful information like the title and thumbnail, which makes my job much easier.
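Conceptually, those tools do something like the following (a naive regex sketch for illustration; real crawlers use a proper HTML parser):

```javascript
// Naive sketch of OG tag extraction; real tools use an HTML parser
function extractOgTags(html) {
  const tags = {};
  const re = /<meta\s+property="og:([^"]+)"\s+content="([^"]*)"/g;
  for (const [, key, value] of html.matchAll(re)) {
    tags[key] = value;
  }
  return tags;
}

const html = `
  <meta property="og:title" content="An exciting post about CSS" />
  <meta property="og:image" content="https://example.com/thumbnail.jpg" />
`;

console.log(extractOgTags(html));
// { title: 'An exciting post about CSS', image: 'https://example.com/thumbnail.jpg' }
```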

Note: add Open Graph tags about your post, not your site

A surprisingly common problem I run into is a great post that does have OG tags, but they describe the person’s site as a whole, rather than the individual post I’m trying to share. This might feel better than nothing, but it actually backfires: when I share the link to your post, the preview card might show information about your site rather than the actual post (e.g., “Sandra’s Awesome CSS Blog” vs. “An Exciting Post About CSS”). If the goal of the preview card is to convince readers to click through, then you definitely want the card to show information about your post, not your site as a whole.

Learn more about Open Graph tags:

Add a sharing image

A well-chosen sharing image can be the perfect teaser for your post, something that catches the audience’s attention and makes them want to click through to learn more. If you have a great sharing image, it absolutely increases the likelihood that I will share your post, and that readers will click through.

However, I understand not everyone is lucky enough to work with a talented cartoonist like my coworker Tyler, who has created some of the best sharing images here on Cloud Four. If that’s the case for you, I recommend a visit to Unsplash, which has an excellent collection of free images that you can use for your sharing image, often just for the cost of giving the creator an image credit at the bottom of your post.

Another increasingly common approach is to automatically generate sharing images for your posts from the post’s title. This is better than no sharing image, but unless the generated cards have a bit of design flair, it can feel bland or even repetitive, since the post title will usually be displayed near the post thumbnail in the preview card.

One thing I see people do that is actually worse than providing no sharing image is to use a single sharing image for every page on your site, typically the site logo or a big photo of yourself. When readers see these seemingly random images in the preview card for your post, it can feel unintentional.

And to be clear, I’m not saying that a hand-crafted illustration is always better than a stock photo or a generated title card. For example, a stock photo of a footpath worn in the grass of a public space is great for a post about “paving the cowpaths” for user accessibility. A stock photo of a boat anchor on a post about CSS anchor positioning is less interesting.

Here’s some excellent advice on choosing stock photos for your post.

Use a descriptive title

It’s always frustrating to share a well-written post with a title that communicates nothing about the topic of the post. Let me give you an example with two titles for a hypothetical post about the flexibility of CSS custom properties to allow users to override your theme’s default colors:

  • You Can Go Your Own Way
  • How to Empower Developers with Custom Properties

I understand if you feel the second title is bland or even boring. But you know what it does well? It tells me what the post is about. The first example might be clever in context, but if I can’t understand it until after I’ve read the post, then it’s failing to convince me to read in the first place.

Add a blurb

Something I love, and will often copy directly into Friday Front-End posts, is when the author provides a TL;DR (Too Long; Didn’t Read) summary at the top of the post. Call it whatever you want: a blurb, a description, an excerpt, or a summary. The point is that it’s a short, punchy description that summarizes your post and convinces me to read.

There’s a sweet spot for length. Too short, and you won’t communicate enough. Too long, and I’ll just have to trim it to make it fit in a social media post. Here are two extreme examples:

  • How to override theme colors with custom properties.
  • CSS custom properties are a powerful tool that can empower developers and users alike. In this article, I’m going to show you how to make your theme fully customizable through the use of custom properties, discuss their limitations, explore browser support, and encourage you to adopt this fantastic tool into your arsenal.

The first is certainly an efficient description of the post topic. But it’s actually too short. There’s nothing there to convince me to read it. The second, on the other hand, is too verbose. I get distracted before I even finish reading it.

As a rule of thumb, I like to aim for 140 characters — a completely arbitrary length that happens to be half the length of a post on a certain social media site I don’t use anymore. That’s short enough to be easy to share in a social media post (along with the title and URL), but long enough to be able to tease the contents of the post.

Use canonical URLs properly

Okay, this one’s getting into the weeds a bit, but it’s something to check on your site. Some CMS tools will automatically add a canonical URL tag. This is really useful if you have a post with multiple valid URLs, or if you’re syndicating content from one site to another, and want to make sure all the SEO traffic goes to the original site.

<link rel="canonical" href="https://example.com">

However, a surprisingly common problem I see is that a post will have a canonical URL tag that points to the homepage of the site rather than the post itself. This is an insidious issue because you can still go to the URL directly, but tools that understand the canonical URL will link to the wrong place.

I’m particularly aware of this because some of the tools I use to save links, like Instapaper, will save the canonical URL if one is available. Which means when I go to review my links later, what’s actually been saved is a seemingly random link to someone’s homepage. If I’m lucky, I can skim their recent posts and remember the title of the post I tried to save, but sometimes I can’t figure it out, and their post doesn’t get shared.

So, please take a moment to view the source on one of your posts and, if you see a canonical URL tag, make sure it’s pointing to the correct URL.
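If you’d rather automate that spot check, a small script can flag pages whose canonical tag points somewhere other than their own URL. This is a rough sketch using a regex; a production check should parse the HTML properly:

```javascript
// Returns true when the page's canonical tag points at the page's own URL.
// Naive regex for illustration; a production check should parse the HTML.
function canonicalMatches(html, pageUrl) {
  const match = html.match(/<link\s+rel="canonical"\s+href="([^"]+)"/);
  if (!match) return true; // no canonical tag at all is fine
  const normalize = (url) => url.replace(/\/$/, "");
  return normalize(match[1]) === normalize(pageUrl);
}

const good = '<link rel="canonical" href="https://example.com/posts/css/">';
const bad = '<link rel="canonical" href="https://example.com/">';

console.log(canonicalMatches(good, "https://example.com/posts/css/")); // true
console.log(canonicalMatches(bad, "https://example.com/posts/css/")); // false
```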

Add a date

This last tip is perhaps more specific to the front-end industry, but since web technologies change so quickly, it’s important to know that you’re not unintentionally sharing some out-of-date information. If you show the date your post was published somewhere, it’s easy for me to check that I’m not accidentally linking to something written a long time ago, which may no longer be valid.

Conclusion

Putting out a weekly newsletter means I’m in the unusual position of getting a high-level overview of the most popular web development content being shared every week. It means I can spot some trends, see what people are interested in (or struggling with). It also means I see a wide variety of sites and how well they interact with common social media sharing tools. I hope this list of tips helps you avoid putting a lot of work into writing a post, only to see it struggle for views because it’s not easy to share.


Faking a Fieldset-Legend
Tue, 20 Jan 2026
https://cloudfour.com/thinks/faking-a-fieldset-legend/

My buddy Christopher Kirk-Nielsen wanted to mimic the look of a <legend> inside a <fieldset> for a section of a blog post: Specifically, the way the <legend> element magically overlays and partially clips the border of the containing <fieldset>.

Chris posed this challenge on Mastodon, where I suggested a solution he ended up building upon for his 2025 Yearnotes.

Here’s a refined version of the demo I shared:

A few details I’m proud of:

  • It’s actually transparent (backgrounds show through)
  • The border remains middle-aligned with the “legend,” even when it breaks to multiple lines
  • You can easily tweak its appearance via CSS custom properties

So, how’s it work?

HTML

Our markup consists of three elements:

  • An outer container (our fake <fieldset>)
  • A heading (our fake <legend>)
  • A wrapper for the inner content
<div class="legendary">
  <h3>This is not a fieldset</h3>
  <div>
    <!-- content -->
  </div>
</div>

A few quick notes:

  • This pattern won’t rely on specific elements. You should use whatever containers, heading levels, etc. make the most semantic sense for your use case.
  • The inner content wrapper may not be necessary if you’re willing to accept some compromises. (More on this later.)
  • I stole the excellent class name from Mr. Kirk-Nielsen’s implementation. Thanks, Chris!

CSS

Instead of struggling to overlay the “legend” while clipping the border beneath, we’re going to slice the containing shape into three chunks: One for either side of our legend, and one for everything below.

Hand-drawn sketch of the intended layout, with pencil lines marking slices for the northwest and northeast corners in addition to legend and content elements

We already have elements for our legend and lower content section. To avoid cluttering the markup, we’ll use pseudo-elements to represent the “northwest” and “northeast” slices.

First, let’s translate our sketch to a CSS Grid. I like to use grid-template-areas to make a little text-based representation of the layout:

.legendary {
  display: grid;
  grid-template-areas:
    "nw      legend  ne"
    "content content content";
}

To keep our legend middle-aligned to the top of the adjacent borders, we’ll have it span an additional row (one earlier than the corner areas):

.legendary {
  display: grid;
  grid-template-areas:
    ".       legend  ."
    "nw      legend  ne"
    "content content content";
}

We should also add some column and row definitions so the browser knows to divide the legend space evenly, and to stretch the northeast corner (right of the legend):

.legendary {
  display: grid;
  grid-template-areas:
    ".       legend  ."
    "nw      legend  ne"
    "content content content";
  grid-template-columns:
    1em
    auto
    minmax(1em, 1fr);
  grid-template-rows: 1fr 1fr auto;
}

Now we can use the content property to render the aforementioned pseudo-elements:

.legendary {
  /* ...  */

  &::before,
  &::after {
    content: "";
  }
}

And assign the grid areas we’ve defined:

.legendary {
  /* ...  */

  &::before {
    grid-area: nw;
  }

  &::after {
    grid-area: ne;
  }
  
  > :first-child {
    grid-area: legend;
  }

  > :last-child {
    grid-area: content;
  }
}

Now for the visual appearance!

Since this technique hinges on coordinating the same styles across separate elements, we’ll define a few custom properties up top:

.legendary {
  --border-color: currentColor;
  --border-radius: 0.25em;
  --border-style: solid;
  --border-width: 1px;
  --legend-gap: 0.375em;
  --padding: 1em;

  /* ... */
}

Which we’ll pepper throughout our final styles to draw borders and manage spacing:

.legendary {
  --border-color: currentColor;
  --border-radius: 0.25em;
  --border-style: solid;
  --border-width: 1px;
  --legend-gap: 0.375em;
  --padding: 1em;

  column-gap: var(--legend-gap);
  display: grid;
  grid-template-areas:
    ".       legend  ."
    "nw      legend  ne"
    "content content content";
  grid-template-columns:
    calc(var(--border-width) + var(--padding) - var(--legend-gap))
    auto
    minmax(calc(var(--border-width) + var(--padding) - var(--legend-gap)), 1fr);
  grid-template-rows: 1fr 1fr auto;

  &::before,
  &::after,
  > :last-child {
    border: var(--border-width) var(--border-style) var(--border-color);
  }

  &::before,
  &::after {
    border-bottom-width: 0;
    content: "";
  }

  &::before {
    border-right-width: 0;
    border-top-left-radius: var(--border-radius);
    grid-area: nw;
  }

  &::after {
    border-left-width: 0;
    border-top-right-radius: var(--border-radius);
    grid-area: ne;
  }
	
  > :first-child {
    font: inherit;
    grid-area: legend;
    margin: 0;
  }
	
  > :last-child {
    border-bottom-left-radius: var(--border-radius);
    border-bottom-right-radius: var(--border-radius);
    border-top-width: 0;
    grid-area: content;
    padding-top: var(--padding);
  }
}

(Note the use of calc in the first and last columns. This keeps the main content aligned with that of the heading while taking into account gaps between the legend and border.)

Variations

Depending on the needs of your project, there may be ways to adjust or simplify this technique.

If your background is a flat color and known ahead of time, you can give the legend the same background and use a faux container instead of separate corners:

A similar trick could work for varied backgrounds if you’re willing to set a blend mode (and accept any resulting color shifts):

And if minimal markup is the goal, you can pull this off without the inner <div> element; it just imposes a few more constraints:

We may one day get a CSS feature for mimicking <legend>’s display (as pointed out by Amelia Bellamy-Royds in the original thread). For now, it’s another fun excuse to solve an interesting (if eerily familiar) challenge with the niceties of modern CSS!


Responsive Letter Spacing
Thu, 20 Nov 2025
https://cloudfour.com/thinks/responsive-letter-spacing/

Earlier this year, a longtime customer shared a new iteration of their brand guidelines. Of particular interest were changes to typography, including heavier weights for headings, and a request to tighten all letter-spacing by a certain percentage.

While the latter change worked well in print and certain other applications, it was a bit too aggressive for web and digital. The smaller the text, the more the loss of white space impaired readability:

(With tightened spacing) Headings seem okay… but this smaller text gets pretty tough to read. There are too few pixels for each character to work with, and too little negative space to waste. The longer the copy, the greater a chore this becomes for our eyeballs.

A reasonable compromise was suggested: Only apply the letter-spacing above a certain font-size. But when we tested this solution, the design felt off. A tightened-up heading next to a loosey-goosey subhead felt inconsistent, unharmonious.

What we really wanted was a gradual transition: As the font-size increases, the letter-spacing decreases. And ideally, that would happen everywhere by default.

Thankfully, modern CSS was up to the challenge. And it only took one rule to pull off:

* {
  letter-spacing: clamp(
    -0.05em,
    calc((1em - 1rem) / -10),
    0em
  );
}

As the font-size increases, the letter-spacing decreases down to the minimum value:

How it works

First off, we’re using the universal selector, *. This applies the rule to every element, and calculates the value based on each element’s unique font-size. (Depending on the project, you may want to constrain this to a handful of specific elements, or fine-tune the specificity using modern techniques like :where or @layer.)

* {
  /* everything! */
}
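As mentioned above, :where can help constrain the rule without adding specificity, so later declarations still override it easily. For instance (the selector list here is illustrative and will vary by project):

```css
:where(h1, h2, h3, h4) {
  letter-spacing: clamp(-0.05em, calc((1em - 1rem) / -10), 0em);
}
```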

Next, let’s break down how the letter-spacing is calculated.

1em represents the current font-size. By contrast, 1rem (note the “r”) represents the root font-size. By subtracting one from the other, we get a representation of just how much bigger the text has grown from the default:

* {
  letter-spacing: calc(1em - 1rem);
}

But that value is in the wrong direction: We want to tighten the letter-spacing, not increase it in lockstep with the font-size. We can divide by a negative number to reverse the direction and slow the rate of change:

* {
  letter-spacing: calc((1em - 1rem) / -10);
}

Finally, we want to cap the possible values so the spacing won’t grow too tight or loose for our particular design. We use the clamp function to set a minimum and maximum… in this case, -0.05em (equivalent to -5% of the computed font-size) and 0em (the unit is required as of this writing):

* {
  letter-spacing: clamp(
    -0.05em,
    calc((1em - 1rem) / -10),
    0em
  );
}

The exact minimum and maximum values, as well as the rate of change (-10 above) and “zero” point (1rem above), will differ project to project. In our case, the minimum value was dictated by our customer’s brand guide, and we fleshed out the other amounts in the browser.
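To get a feel for how the clamp behaves at different sizes, here's a quick JavaScript sketch of the same formula in pixels. (This helper is purely illustrative and assumes a 16px root font-size; it isn't part of the CSS technique itself.)

```javascript
// Approximates clamp(-0.05em, calc((1em - 1rem) / -10), 0em) in pixels.
function letterSpacingPx(fontSizePx, rootPx = 16) {
  const preferred = (rootPx - fontSizePx) / 10; // same as (1em - 1rem) / -10
  const min = -0.05 * fontSizePx; // -0.05em, relative to the element
  const max = 0; // never looser than the typeface's default
  return Math.min(Math.max(preferred, min), max);
}

console.log(letterSpacingPx(16)); // 0 — body text is untouched
console.log(letterSpacingPx(24)); // -0.8 — mild tightening
console.log(letterSpacingPx(48)); // about -2.4 — capped at -0.05em
```

Text smaller than the root clamps to 0 as well, so nothing ever gets looser than the default.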

Near-future improvements

The progress() function will make CSS rules like this a lot more intuitive by reducing the need for complex math or magic numbers.

In this example, I can apply a percentage of the same letter-spacing range as the previous examples, based on where the current font-size (1em) sits between a minimum and maximum:

* {
  letter-spacing: calc(
    progress(1em, 18px, 48px) * -0.05em
  );
}

You can try this version today in supported browsers (Chrome and Edge as of this writing):
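Until support is universal, it's easy to emulate the math in JavaScript to experiment with values. (This is a hypothetical helper for exploration, not a polyfill; I'm clamping the result to the 0–1 range for simplicity, which may differ from the CSS function's exact behavior.)

```javascript
// progress(value, start, end): how far `value` sits between `start`
// and `end`, as a number from 0 to 1 (clamped at both ends here).
function progress(value, start, end) {
  return Math.min(Math.max((value - start) / (end - start), 0), 1);
}

// Mirrors: letter-spacing: calc(progress(1em, 18px, 48px) * -0.05em)
function letterSpacingEm(fontSizePx) {
  return progress(fontSizePx, 18, 48) * -0.05;
}

console.log(progress(33, 18, 48)); // 0.5 — halfway through the range
console.log(letterSpacingEm(48)); // -0.05 — fully tightened
```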

Taking care

As fun as I find these sorts of responsive CSS challenges, I generally avoid futzing too much with letter-spacing outside of large, stylized headings or specific functional use cases. I find it too easy to overdo, disrupting the intended rhythm of a typeface when I could have found a more condensed or extended alternative to start from.

But the joy of working with different clients is succeeding within unique and varied constraints. Sometimes, that means having input into foundational decisions like font selection. Other times, it’s about understanding at a high level the thousand decisions that led to this moment, so you can recommend the next best step to take.

In those cases, “CSS tricks” like this can really come in handy.



]]>
https://cloudfour.com/thinks/responsive-letter-spacing/feed/ 2 8252
Testing HTML Light DOM Web Components: Easier Than Expected! https://cloudfour.com/thinks/testing-html-light-dom-web-components-easier-than-expected/ https://cloudfour.com/thinks/testing-html-light-dom-web-components-easier-than-expected/#respond Tue, 18 Nov 2025 16:30:00 +0000 https://cloudfour.com/?p=8430 An HTML custom element, `<light-dom-component>` wrapping three topics designed with green checkmarks to illustrate passing tests: render to DOM, accessibility, events

A recent project of ours involved modernizing a large, decades-old legacy web application. Our fantastic design team redesigned the interfaces and created in-browser prototypes that we referenced throughout development as we built HTML/CSS patterns and HTML web components. For the web components, we used the Light DOM and progressive enhancement where possible, keeping accessibility front and center.

Going in, we weren’t sure how challenging it’d be to write tests for HTML web components. Spoiler alert: It wasn’t too different from testing framework-specific components (e.g., Vue or React), and in some cases it was even easier!

For a project of this scope, a strong test suite was crucial: It enabled us to focus on new features instead of constantly fixing regressions and repeating manual browser testing. These are the patterns that worked well for us.

The web component testing stack we used

Our testing stack consisted of:

Most of these tools are standard. The interesting choice here is using Lit’s html and render() in the tests. We built our first (and one of the most complex) web components with Lit, then switched to vanilla web components for the rest. Since lit was already a dependency, we continued to use its templating features, expressions, conditionals, and directives in our tests. A few of the benefits:

  • Setting up new tests required no manual DOM manipulation
  • A declarative, HTML-like syntax with editor syntax highlighting
  • Shared, parameterizable example/demo Lit templates (used by both tests and Storybook stories), which significantly reduced boilerplate
  • A standardized setup for every test (and Storybook story), easing maintenance
  • Rendered HTML lands immediately in the DOM, so Testing Library queries and standard DOM APIs work without special setup
  • Better TypeScript support within our tests

Overall, a better, more efficient developer experience.

Worth noting: There is also a standalone lit-html library that we could have used instead of the full lit package. It provides html and render, as well as the directive modules. Today I learned. 🙂

Light DOM web components simplified testing

One of the most impactful early architectural decisions we made was to build all web components using the Light DOM instead of Shadow DOM. While we didn’t realize it at the time, it dramatically simplified testing and component composition.

From a testing perspective, it meant we could query anything, anywhere, anytime:

render(
  html`
    <my-amazing-component>
      <my-tree-component 
        data=${JSON.stringify(jsonData())}
      ></my-tree-component>
      <dialog></dialog>
    </my-amazing-component>
  `,
  document.body,
);

With Shadow DOM, we’d need something like:

// ❌ What you'd have to do with Shadow DOM
const component = document.querySelector('my-amazing-component');
const shadowRoot = component.shadowRoot; // May be null!
const button = shadowRoot?.querySelector('button'); // Doesn't cross boundaries
// Or use special testing utilities that pierce shadow boundaries
// And if there are nested shadow boundaries, *eek!* 😬

With Light DOM, it’s much simpler:

// ✅ What we actually do
const component = document.querySelector('my-amazing-component');
const button = component.querySelector('button'); // Just works

And Testing Library queries also just work:

render(validationExample(), document.body);

// screen.getByRole() finds elements INSIDE your components
const emailInput = screen.getByRole('textbox', { name: /email/i });
const submitBtn = screen.getByRole('button', { name: 'Submit' });

// These work because there's no Shadow DOM boundary blocking queries
await user.click(submitBtn);
expect(emailInput).toHaveAttribute('aria-invalid', 'true');
expect(emailInput).toHaveFocus();

Testing Library’s philosophy is “query like a user would.” With Light DOM web components, the mental model matches perfectly.

Testing web component events

Most, if not all, web components we built dispatched custom events with detail data. We used the following Vitest features to confirm the expected event data:

it('Emits change event when "Confirm" is clicked', async () => {
  const user = userEvent.setup();
  render(
    html`<my-tree-component 
      data=${JSON.stringify(jsonData())}
    ></my-tree-component>`,
    document.body,
  );

  // Set up the handler function spy for the 'change' listener
  // Attached to `document` to confirm event bubbles
  const changeHandler = vi.fn();
  document.addEventListener('my-tree-component-change', changeHandler);

  // Click confirm button
  const confirmButton = screen.getByRole('button', { name: /^confirm$/i });
  await user.click(confirmButton);

  // Event should be emitted with correct details
  expect(changeHandler).toHaveBeenCalledWith(
    expect.objectContaining({
      type: 'my-tree-component-change',
      detail: {
        action: 'change',
        selectedEntityIds: ['test1'],
      },
      bubbles: true,
    }),
  );
	
  // Clean up the event listener
  document.removeEventListener('my-tree-component-change', changeHandler);
});

Testing hidden inputs generated by web components

One of our goals was to minimize the need for legacy backend code refactors. The legacy application relied on good ol’ traditional form submission architecture. Some of the legacy UI relied on JavaScript-generated hidden inputs that satisfied the backend code. Our new web components needed to match this behavior.

When testing the hidden inputs feature of a web component, Light DOM web components made it much simpler because any hidden inputs created by the web component are included in the form submission automatically:

render(
  html`
    <form>
      <my-tree-component 
        data=${JSON.stringify(jsonData())}
      ></my-tree-component>
      <button type="submit">Submit the form</button>
    </form>
  `,
  document.body,
);

// Get the form
const form = document.querySelector('form') as HTMLFormElement;

// Hidden inputs are in Light DOM, so form submission includes them automatically
const hiddenInputs = form.querySelectorAll('input[type="hidden"][name="entities"]');
expect(hiddenInputs).toHaveLength(5);

// They participate in form submission naturally
const formData = new FormData(form);
const entities = formData.getAll('entities');
expect(entities).toHaveLength(5);

In some cases, we needed to confirm the hidden inputs rendered in a specific order with specific prefixed values. To test this, Vitest’s toMatch() assertion with regular expressions came in handy:

expect(entities[0]).toMatch(/^OP(AND|OR|NOR|NAND)$/);
expect(entities[1]).toMatch(/^EL/);
expect(entities[2]).toMatch(/^OP(AND|OR|NOR|NAND)$/);
expect(entities[3]).toMatch(/^EL/);
expect(entities[4]).toMatch(/^EL/);

Testing both attribute and property APIs

Most of the web components supported both declarative (HTML attributes) and imperative (JavaScript properties) APIs. We added basic “render” tests for each use case:

it('Renders via `data` attribute', () => {
  render(
    html`<my-data-table
      data=${JSON.stringify(jsonData())}
    ></my-data-table>`,
    document.body,
  );
  
  const tableEl = screen.getByRole('table');
  expect(tableEl).toBeVisible();
});

it('Renders via `data` property', () => {
  render(html`<my-data-table></my-data-table>`, document.body);
		
  // Set the JSON data via the property
  const component = document.querySelector('my-data-table') as MyDataTable;
  component.data = jsonData();
  
  const tableEl = screen.getByRole('table');
  expect(tableEl).toBeVisible();
});

Typing web component references

We wrote all web components and tests in TypeScript. This helped catch API changes whenever we refactored or fixed bugs. In tests where we wanted to access component properties or methods, we needed a type assertion, since querySelector() can return null:

render(
  html`<my-tree-component
    data=${JSON.stringify(jsonData())}
  ></my-tree-component>`,
  document.body,
);

// Use a type assertion since we know the element is in the DOM
const myTreeComponent = document.querySelector(
  'my-tree-component',
) as MyTreeComponent; 

// Now you have full TypeScript support
expect(myTreeComponent.treeNodes).toHaveLength(21);
myTreeComponent.data = newData; // Type-safe property access

Tests and Storybook stories shared Lit html templates

As mentioned earlier, sharing Lit html templates helped reduce boilerplate and standardized how we set up all tests and Storybook stories. Below is an example Lit html template for an input validator component:

// input-validator-example.ts

import { html } from 'lit';
import { ifDefined } from 'lit/directives/if-defined.js';

export interface InputValidatorExampleArgs {
	type: HTMLInputElement['type'];
	validationError?: string;
	required?: boolean;
	pattern?: string;
	ariaDescribedby?: string;
}

/**
 * Used by tests and Storybook stories.
 */
export function inputValidatorExample({
	type,
	validationError,
	required = true,
	pattern,
	ariaDescribedby,
}: InputValidatorExampleArgs) {
	const inputId = `input-${type}`;
	let label = type.charAt(0).toUpperCase() + type.slice(1);

	if (type === 'select') {
		label = `${label} an option`;
	}

	if (type === 'text' && pattern) {
		label = `${label} with regex pattern`;
	}

	let field;
	if (type === 'select') {
		field = html`
			<select
				id=${inputId}
				?required=${required}
				?pattern=${pattern}
				aria-describedby=${ifDefined(ariaDescribedby)}
			>
				<option value=""></option>
				<option value="1">Option 1</option>
				<option value="2">Option 2</option>
				<option value="3">Option 3</option>
			</select>
		`;
	} else if (type === 'textarea') {
		field = html`
			<textarea
				id=${inputId}
				minlength="10"
				?required=${required}
				?pattern=${pattern}
				aria-describedby=${ifDefined(ariaDescribedby)}
			></textarea>
		`;
	} else {
		field = html` <input
			id=${inputId}
			.type=${type}
			minlength=${ifDefined(type === 'password' ? '5' : undefined)}
			?required=${required}
			.pattern=${ifDefined(pattern) as string}
			aria-describedby=${ifDefined(ariaDescribedby)}
		/>`;
	}

	return html`
		<div class="form-group">
			<label for=${inputId}>${label}</label>
			<input-validator validation-error=${ifDefined(validationError)}>
				${field}
			</input-validator>
		</div>
	`;
}

This allowed us to use the same HTML for both tests and Storybook stories:

// InputValidator.test.ts

render(inputValidatorExample({ type: 'tel' }), document.body);
// InputValidator.stories.ts

/**
 * The component supports various `<input>` `type` values. Below are examples
 * of different input types that can be used with the component.
 */
export const VariousInputTypesSupported: Story = {
  render: () =>
    html`${[
      'email',
      'url',
      'password',
      'tel',
      'number',
      'date',
      'time',
      'datetime-local',
      'month',
      'week',
      'search',
      'text',
      'checkbox',
    ].map((type) => inputValidatorExample({ type }))}`,
};

Testing for accessibility

Building accessible user interfaces is something we believe in and strive for as a team. Our web component tests helped reinforce this core value.

Every web component had an accessibility violation test assertion

As a baseline practice, we always included a vitest-axe toHaveNoViolations() test assertion, checking multiple UI states as needed:

it('Has no accessibility violations', async () => {
  render(formValidatorExample(), document.body);
  
  const component = document.querySelector('form-validator') as HTMLElement;  
  const submitBtn = screen.getByRole('button', { name: 'Submit' });

  // Initial form state  
  expect(await axe(component)).toHaveNoViolations();  

  // Invalid form state
  await user.click(submitBtn);
  expect(await axe(component)).toHaveNoViolations();
});

Testing Library ByRole queries as the default query

With all our tests, we’d default to querying the DOM using Testing Library’s ByRole queries with accessible names. If a query fails, the control is not accessible (either an incorrect role or an incorrect/missing accessible name):

const submitBtn = screen.getByRole('button', { name: 'Submit' });

Remember: The “first rule of ARIA” is to prefer native, semantic HTML elements and only introduce ARIA roles as needed. In both cases, ByRole queries help confirm the proper role.

Assertions for ARIA attributes where applicable

In some cases, we’d assert certain ARIA attribute values where it made sense, for example:

expect(input).toHaveAttribute('aria-invalid', 'false');

Testing focus management

Keyboard users rely on proper focus management. For example, forms should move focus to the first invalid field on validation:

it('Validates an empty form on submit', async () => {
  // … setup
    
  // Submit empty form
  await user.click(submitBtn);
    
  // Focus moves to first invalid field
  expect(emailInput).toHaveFocus();
});

Other use cases where focus management assertions are important include testing that the focus returns to the appropriate elements after dialogs close or actions complete.

Tip: As part of our development process, we use our keyboards to navigate through the UI. Did Tab jump to the control we expected? Did the Escape key close the dialog? After submitting an invalid form, is focus on the first invalid input? Manually testing these scenarios with a keyboard instead of a mouse helps guide the assertions we include in our tests.

Test file organization

As our test suite grew, we started organizing test files by feature or concern. This helped us avoid monolithic Component.test.ts files with hundreds of tests. Here are the common test file categories that emerged organically:

Interaction tests: ComponentName.interactions.test.ts

Tests included assertions for user interactions, click handlers, keyboard navigation, and UI state changes in response to user actions.

Event tests: ComponentName.events.test.ts

Tests included assertions for custom event emissions, event bubbling, and event payloads.

Rendering tests: ComponentName.rendering.test.ts

Tests included assertions for initial render, conditional rendering, and DOM structure.

Feature-specific tests: ComponentName.feature.test.ts

For specific features, we named test files after the feature:

  • ComponentName.hidden-inputs.test.ts
  • ComponentName.sorting.test.ts
  • ComponentName.validation.test.ts

Directory structure patterns

We preferred to colocate test files next to the component or feature they cover. This kept the test suite maintainable, discoverable, and fast to run, and made it easier to find and run tests for specific features.

Pattern 1: Tests alongside component

For simpler components:

ComponentName/
├── _component-name.css
├── ComponentName.ts
├── ComponentName.stories.ts
├── ComponentName.interactions.test.ts
├── ComponentName.rendering.test.ts
└── ComponentName.events.test.ts

Pattern 2: Feature sub-directories with tests

For complex features that warrant their own folder:

ComponentName/
├── _component-name.css
├── ComponentName.ts
├── ComponentName.stories.ts
├── validation/
│   ├── use-validation.ts
│   ├── ComponentName.validation.test.ts
│   ├── ComponentName.validation.initial-render.test.ts
│   └── ComponentName.validation.attribute-changes.test.ts
├── single-select/
│   ├── component-name-single-select-example.ts // The `html` template for tests
│   └── ComponentName.single-select.test.ts
├── multi-select/
│   ├── component-name-multi-select-example.ts // The `html` template for tests
│   └── ComponentName.multi-select.test.ts
└── pre-select/
    ├── use-pre-select.ts
    ├── use-pre-select.test.ts
    └── use-pre-select.object-support.test.ts

Pattern 3: Helper/utility tests

Tests for pure functions and utilities:

MyTreeComponent/
├── _my-tree-component.css
├── MyTreeComponent.ts
├── MyTreeComponent.interactions.test.ts
├── MyTreeComponent.events.test.ts
└── helpers/
    ├── flatten-tree.ts
    ├── flatten-tree.test.ts
    ├── generate-node-id.ts
    ├── generate-node-id.test.ts
    ├── set-tree-ids.ts
    └── set-tree-ids.test.ts

Using a for/of loop to run repetitive test assertions

This pattern isn’t groundbreaking, but there were times when we used an array of input names and a for/of loop to run the same assertions against multiple inputs.

For example, without a loop:

const dataTable = document.querySelector('data-table') as DataTable;

expect(within(dataTable).getAllByRole('checkbox')).toHaveLength(6);

const selectAllCheckbox = within(dataTable).getByRole(
	'checkbox', 
	{ name: 'Select all' }
);
expect(selectAllCheckbox).toBeVisible();
expect(selectAllCheckbox).toBeChecked();

const entity01Checkbox = within(dataTable).getByRole(
	'checkbox', 
	{ name: 'Select Entity_01' }
);
expect(entity01Checkbox).toBeVisible();
expect(entity01Checkbox).toBeChecked();

const entity02Checkbox = within(dataTable).getByRole(
	'checkbox', 
	{ name: 'Select Entity_02' }
);
expect(entity02Checkbox).toBeVisible();
expect(entity02Checkbox).toBeChecked();

const entity03Checkbox = within(dataTable).getByRole(
	'checkbox', 
	{ name: 'Select Entity_03' }
);
expect(entity03Checkbox).toBeVisible();
expect(entity03Checkbox).toBeChecked();

const entity04Checkbox = within(dataTable).getByRole(
	'checkbox', 
	{ name: 'Select Entity_04' }
);
expect(entity04Checkbox).toBeVisible();
expect(entity04Checkbox).toBeChecked();

const entity05Checkbox = within(dataTable).getByRole(
	'checkbox', 
	{ name: 'Select Entity_05' }
);
expect(entity05Checkbox).toBeVisible();
expect(entity05Checkbox).toBeChecked();

Using a for/of loop:

const checkboxNames = [
	'Select all',
	'Select Entity_01',
	'Select Entity_02',
	'Select Entity_03',
	'Select Entity_04',
	'Select Entity_05',
];
const dataTable = document.querySelector('data-table') as DataTable;

expect(within(dataTable).getAllByRole('checkbox')).toHaveLength(
	checkboxNames.length,
);
for (const name of checkboxNames) {
	const checkbox = within(dataTable).getByRole('checkbox', { name });
	expect(checkbox).toBeVisible();
	expect(checkbox).toBeChecked();
}

Using the for/of loop felt cleaner and was easier to maintain.

Thinking critically about clicking various controls at once

This is less a pattern and more a reminder to our future selves not to accept every ESLint rule without critical thinking.

Initially, we did the following:

// Expand each of the root nodes.
const tools = screen.getAllByRole('group', { name: /^tool\./i });
for (const tool of tools) {
	await user.click(tool);
}

// Expand each of the second-level nodes.
const services = screen.getAllByRole('group', { name: /^service\./i });
for (const service of services) {
	await user.click(service);
}

However, the no-await-in-loop ESLint rule flagged the await within the for/of loop. Technically, the rule is correct:

Performing an operation on each element of an iterable is a common task. However, performing an await as part of each operation may indicate that the program is not taking full advantage of the parallelization benefits of async/await.

Often, the code can be refactored to create all the promises at once, then get access to the results using Promise.all() (or one of the other promise concurrency methods). Otherwise, each successive operation will not start until the previous one has completed.

The ESLint rule suggested the following:

// Expand each of the root nodes.
const tools = screen.getAllByRole('group', { name: /^tool\./i });
await Promise.all(tools.map((tool) => user.click(tool)));

// Expand each of the second-level nodes.
const services = screen.getAllByRole('group', { name: /^service\./i });
await Promise.all(services.map((service) => user.click(service)));

That makes sense. However, for our use case, we want each successive operation to wait until the previous one has completed. Imagine a user clicking various UI controls: They won’t click all of them at once; that would be impossible! Instead, they’ll click the controls one by one. Our tests should match how a user will interact with our UI. Additionally, if the DOM updates after each click, a race condition may occur, potentially making the test flaky.

We ended up disabling the no-await-in-loop rule on each of those lines, with a comment explaining why:

// Expand each of the root nodes.
const tools = screen.getAllByRole('group', { name: /^tool\./i });
for (const tool of tools) {
	// We want to run the clicks sequentially to avoid UI race conditions.
	// Additionally, this more closely aligns with how a real user would interact with the UI.
	// eslint-disable-next-line no-await-in-loop
	await user.click(tool);
}

// Expand each of the second-level nodes.
const services = screen.getAllByRole('group', { name: /^service\./i });
for (const service of services) {
	// We want to run the clicks sequentially to avoid UI race conditions.
	// Additionally, this more closely aligns with how a real user would interact with the UI.
	// eslint-disable-next-line no-await-in-loop
	await user.click(service);
}

Avoid leaking state between tests

An important detail worth highlighting: We need to reset the document body after each test run to avoid leaking state. In our case, we set this up globally in our vitest-setup.ts config file using Vitest’s afterEach() teardown function. That way, we didn’t have to manually add it to each test or risk forgetting:

// vitest-setup.ts

/**
 * We are using Lit's `render` and `html` functions to render in the tests.
 * We need to reset the document body after each test to avoid leaking state.
 */
afterEach(() => {
  render(html``, document.body);
});

Adding basic dialog functionality to jsdom

Our tests used jsdom, which has a long-open HTMLDialogElement issue. We ended up mocking the HTMLDialogElement show(), showModal(), and close() methods in the vitest-setup.ts file as follows:

// vitest-setup.ts

// Add basic dialog functionality to jsdom
// @see https://github.com/jsdom/jsdom/issues/3294
HTMLDialogElement.prototype.show = vi.fn(function mock(
  this: HTMLDialogElement,
) {
  this.open = true;
});
HTMLDialogElement.prototype.showModal = vi.fn(function mock(
  this: HTMLDialogElement,
) {
  this.open = true;
});
HTMLDialogElement.prototype.close = vi.fn(function mock(
  this: HTMLDialogElement,
) {
  this.open = false;
});

This workaround allowed us to keep moving forward with any tests that included HTML dialog element assertions.

Wrapping up

At the beginning of the project, I was a bit nervous because I wasn’t sure how easy it would be to test HTML web components. Building Light DOM web components was absolutely the right choice, and once we got rolling, testing HTML web components was no different from testing framework-specific components. I’m elated to have gone through this experience, and I must say, I love me some HTML web components. ❤️

More resources



]]>
https://cloudfour.com/thinks/testing-html-light-dom-web-components-easier-than-expected/feed/ 0 8430
Simple One-Time Passcode Inputs https://cloudfour.com/thinks/simple-one-time-passcode-inputs/ https://cloudfour.com/thinks/simple-one-time-passcode-inputs/#respond Tue, 11 Nov 2025 16:45:08 +0000 https://cloudfour.com/?p=8424 If you’ve signed into an online service in the last decade, chances are you’ve been asked to fill a one-time passcode (“OTP”) field with a handful of digits from a text, email or authenticator app:

Screenshot of the Slack interface after attempting a sign-in and being asked for a verification code from email. The code entry is divided into separate steps per digit.
Slack’s OTP entry form

Despite the prevalence of this pattern, it seems to cause plenty of anxiety in otherwise level-headed web developers… especially if they’ve fixated on the current trend of segmenting the input to convey the passcode’s length (a new spin on the ol’ input mask).

Why else would so many tumble down the rabbit hole of building their own <input> replacement, stringing multiple <input> elements together, or burdening their project with yet another third-party dependency?

If you find yourself in a similar situation, I have good news! You can ship a fully functional OTP input today without any CSS hacks or JavaScript frameworks.

All you need is some HTML.

Basic Markup

A single <input> element: That’s where the OTP magic happens!

<input type="text"
  inputmode="numeric"
  autocomplete="one-time-code"
  maxlength="6">

Let’s break down each of its attributes:

  • Even though our passcode will consist of numbers, it isn’t actually a number: A value of 000004 should not be considered the same as a value of 4. For that reason, we follow the HTML spec and set type="text".
  • inputmode="numeric" enables a numeric virtual keyboard on touch devices.
  • autocomplete="one-time-code" adds support for autofill from password managers or via SMS.
  • maxlength="6" prevents visitors from typing too many characters.

We can opt into client-side validation by adding two more:

<input type="text"
  inputmode="numeric"
  autocomplete="one-time-code"
  maxlength="6"
  pattern="\d{6}"
  required>

pattern defines the code we expect, in this case exactly six ({6}) numeric digits (\d). required tells the browser this field must have a value that satisfies the pattern.
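Handy to know if you're writing tests or server-side checks: The pattern attribute must match the entire value, so it behaves like an anchored regular expression. A quick JavaScript sketch of the equivalent check:

```javascript
// pattern="\d{6}" must match the whole value, so it's
// equivalent to this explicitly anchored regex:
const otpPattern = /^\d{6}$/;

console.log(otpPattern.test('123456')); // true
console.log(otpPattern.test('000004')); // true — leading zeros are fine
console.log(otpPattern.test('12345'));  // false — too short
console.log(otpPattern.test('12a456')); // false — digits only
```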

Example: In a Form

Now all our OTP-specific features are accounted for, but an input is meaningless without context. Let’s fix that by building out a full form with a heading, a label, a submit button and a support link in case something goes wrong:

<form action="…" method="post">
  <h2>Verify Account</h2>
  <label for="otp">
    Enter the 6-digit numeric code sent to the number ending in 55
  </label>
  <input type="text"
    id="otp"
    inputmode="numeric"
    autocomplete="one-time-code"
    maxlength="6"
    pattern="\d{6}"
    required>
  <button>
    Continue
  </button>
  <a href="…">
    Try another way…
  </a>
</form>

Note how the label specifies the intended length and format of the passcode. No input mask, icon or visual affordance can match the accessibility and clarity of straightforward text!

And with that, our OTP pattern is functionally complete!

Demo: With Styles

Since we’ve covered all the critical functionality in our HTML, we’re free to style our form however the project dictates.

In this example, I’ve chosen a large, monospaced font with some letter-spacing to keep every digit of the code distinct and readable. I’m also using the :invalid pseudo-class to reduce the visual prominence of the <button> element until the code is valid:

Demo: Enhanced

Having a solid foundation in HTML and CSS alone doesn’t preclude us from leveraging JavaScript, too.

Here’s the same demo as before, but with a simple input mask web component to indicate remaining characters:

Because this builds atop existing patterns instead of replacing them outright, the code is tiny: Less than a kilobyte without any optimization or compression.

Takeaways

  • All critical features of a one-time passcode input are possible using HTML alone.
  • Clear labels and instructive text are more important than any visual affordance.
  • Custom design and behavior can be layered on as progressive enhancements.
  • This approach is quicker to implement and avoids many common performance and accessibility pitfalls.

We’re Cloud Four

We solve complex responsive web design and development challenges for ecommerce, healthcare, fashion, B2B, SaaS, and nonprofit organizations.

See our work

Talking CSS, Web Components, App Design and (gulp) AI on ShopTalk Show
https://cloudfour.com/thinks/talking-css-web-components-app-design-and-gulp-ai-on-shoptalk-show/
Mon, 03 Nov 2025

I had a blast chatting with Chris and Dave on episode 689 of ShopTalk Show, my personal favorite podcast about building websites:

In this episode we sit down with Tyler Sticka to discuss upgrading his project, Colorpeek. We explore the practical applications of web components and CSS, and how they are shaping the future of web development. Tyler shares his experiences with prototyping and the challenges of maintaining simplicity in design.

You can listen right now wherever you subscribe to podcasts, or watch in video form on YouTube.



That time some rando turned me into a meme coin
https://cloudfour.com/thinks/that-time-some-rando-turned-me-into-a-meme-coin/
Thu, 30 Oct 2025

We had barely started dinner when my wife asked what was wrong. “I think someone made one of my tweets into a meme coin.” She said, “I’m not sure I know what those words mean.” I replied, “I’m not sure I do either.”

An innocent DM turns ominous

It started innocently enough. A stranger reached out on LinkedIn with what I thought was a question about a Mastodon server where I’m a moderator. But they didn’t want to talk about Mastodon:

A LinkedIn direct message reads, "I'm a developer working in games, web, and blockchain. One of your recent posts on X (Twitter) went viral and became a meme, generating a lot of engagement.

As a result, a community called 'grigs X community' was formed and someone even created $GRIGS token inspired by it.

I purchased some and sent a portion to you as a gift. 

If you'd be open to dropping a message or two in the community, I'm sure your fans would really appreciate it. 

Thanks for you time, and keep doing what you do—big respect."

Every word in that message is English, but it might as well have been a foreign language. I read it several times trying to decipher what they were saying.

I thought it might be a scam and was tempted to block the person, but I decided to dig in a little further before doing so. I’m glad I did.

The Grigs X Community

If communities existed on Twitter before Elon Musk ruined it, I never used them. So when I opened the “grigs X community,” I was beyond confused.

The grigs community had my name, photo, and tweet at the top along with references to “The first grok (2007).” The rest was gibberish: “send 30,000,0000 B7nchU6SE…pump for @grigs.”

Scrolling through the tweets in the community was even more incomprehensible and disconcerting. There were numerous AI generated images of me. Many of the images included Elon Musk and tagged his account as well. If this was a scam, it was the most elaborate scam I’d ever been the target of.

This tweet from "Gake X" contained an AI generated picture of me standing next to Elon Musk. I'm wearing a maroon polo shirt and holding a green mug like in my profile photo, but instead of the mug saying, "Progressive Web Apps," it said, "Progresshe Web•Apps." Elon Musk wears a black polo shirt. His eyebrows are raised and he looks surprised. Overlayed in white text on the bottom of the image is the word, "$grigs."
A tweet featuring an AI generated cartoon image of me with a fist in the air leading a large crowd of bald people. At the top of the image, it says, "GRIGS CULT." 

The tweet itself says, "$GRIGS Cult assemble."

What in the world was going on?

The $grigs meme coin

Fortunately, I had recently listened to an excellent podcast episode from Planet Money that explained meme coins. After some digging, I found the $grigs meme coin on pump.fun, a website that makes it simple to create meme coins.

A screenshot of pump.fun's market cap graph for the $grigs meme coin. At the top of the graph, it says, "The First Grok (2007) (@grigs)." On the right side, there is a photo of me in a maroon polo shirt holding a Progressive Web Apps coffee mug. This is my standard avatar photo.

The meme coin had been created a couple days earlier and had a market cap of around $12,000. So now I understood what the Twitter community was talking about, but why did they make a meme coin of me?

The First Grok

Planet Money’s meme coin podcast came in clutch again. I remembered part of the podcast where they talked about how important opinion leaders are for getting meme coins off the ground and how everyone wanted one particular opinion leader to notice their coins:

HOROWITZ-GHAZI: Zeke says there is one key leader whose opinion seems to be valued above all others in the world of meme coins, a man who has taken the joke so far as to helm a controversial new government entity named after that original Doge meme.

FAUX: What is really ideal is if Elon Musk will talk about whatever the coin is about.

HOROWITZ-GHAZI: Like, one of the big reasons Dogecoin is still the most valuable meme coin is because Elon Musk started tweeting about it in 2019.

FAUX: But it’s not just Dogecoin. A lot of the most successful coins have had some connection to Elon Musk.

HOROWITZ-GHAZI: Elon Musk is like the meme coin market mover-in-chief.

That explained why so many of the AI-generated images contained Elon Musk and me, and why most tweets tagged Musk as well.

Why were people so bullish that Musk might be interested in the $grigs coin? Because Musk had named his AI chatbot Grok, and I was the first person to use the word “grok” on Twitter back in July 2007 when I wrote, “trying to grok twitter.”

It finally made sense. This is why so many of the tweets and images in the community called me a time traveler, implied that I was secretly the original CTO of Twitter, or, wildly, claimed that my account was actually a Musk burner account.

A tweet from @CryptoStylesUSA says, "Clearly @grigs is a time traveler." Accompanying the tweet is an AI generated image of me in a red polo shirt holding a green mug that says, "Grok." I'm standing in front of a modified DeLorean. The stylized movie title, "Back to the Future" spans the top of the image.
This image repurposes a Scooby Doo Fred Reveal meme. 

On the top part of the image, it shows Fred from the Scooby Doo cartoon starting to remove the mask from a person wearing a cloth ghost costume. The text over the ghost says, "Let me see who is Grok."

The lower image shows the removed mask in Fred's hand. My head has been superimposed over the cartoon character who was under the mask and the label says, "Grigs."

The whole community was trying to get Musk to engage with the coin, but none of them knew that I had blocked Musk long ago.

I’m a meme coin. Now what?

I finally understood what was going on, but I had no idea what to do with this information.

The people behind the meme coin wanted me to promote it. They attempted to transfer coins to me so I would be incentivized to shill my meme coin.

I wanted to do the opposite. Meme coins are scams that rely on finding a greater fool. But would talking about the meme coin, even to badmouth it, just give it more attention and drive up the price?

I searched for articles by people who had been in similar situations, but couldn’t find any information. I reached out to several communities seeking guidance and got some good general feedback. Unfortunately, no one could predict what the impact on the coin’s valuation would be if I spoke up.

I slept on the problem and in the morning, I listened to a second Planet Money meme coin episode called, “The Parable of the Peanut Memecoin.” It was instructive.

Someone made Peanut the Squirrel into a meme coin without the knowledge of Peanut’s owner. Peanut’s owner was given some tokens to promote the meme coin just like I had. Unlike me, he accepted the gift and started promoting the meme coin. The price went up and he sold his coins to help fund his plans for an animal rescue. The community immediately turned on him and attacked him relentlessly. The whole episode is worth a listen.

If I wanted to avoid a similar fate, I couldn’t be silent. I needed to publicly disown the coin that bore my image:

Yesterday, I learned that there is a meme coin based on a tweet of mine from 2007. The coin is using my profile image. I have no affiliation with the coin. I do not own any of the tokens. Someone tried to give me some tokens, but I don’t have a crypto wallet.

The 2007 tweet appears to be of interest because I used the verb grok. They say it is the first time grok was used on Twitter. I haven’t verified that. It seems many aren’t aware that grok is a verb invented by Robert A. Heinlein in 1961’s Stranger in a Strange Land.

Many years after my 2007 tweet, Musk named his AI bot Grok. Because of this, people are hoping that Musk will see my 2007 tweet and respond to it. That’s unlikely. I blocked Musk in 2023 when he changed the algorithm to promote his own tweets.

Here’s the thing. Crypto is a scam. Musk is an assclown. The only good thing that may come of this is that more people will read Stranger in a Strange Land. It is a wonderful and moving piece of fiction. I highly recommend it.

I’d like to think this thread is why the meme coin never took off, but other events likely did the coin in. Less than two days after I disavowed the coin, Musk and Trump were in a full-blown Twitter feud culminating with Musk saying Trump was in the Epstein files.

It seems Musk was too busy to grok the full potential of $grigs coin.



How our dog increased my appreciation for accessibility
https://cloudfour.com/thinks/how-our-dog-increased-my-appreciation-for-accessibility/
Mon, 11 Aug 2025

An illustrated brown and white spotted dog with its tongue hanging out and a goofy look on its face wags its tail. A thought bubble says, “a11y?”

“Ouch! WTF was that, Coco?”

It was Easter morning. I was bent over wiping Sophie’s feet. Sophie is my mom’s one-year-old cocker spaniel. We were dog sitting, and Coco, our husky pitbull mutt, was thrilled. In her excitement, Coco whipped around and headbutted me.

I yelled loud enough that our oldest came running to see what happened. I told them I was fine. I’d have a lump on my head or a black eye, but I was okay.

Or so I thought, until the world started spinning. I sat down on the couch in hopes things would stabilize. I tried to look at my phone to figure out if I could take Tylenol or ibuprofen. The screen was blurry and made me feel sick.

At urgent care later, the doctor confirmed what we already suspected. Coco had given me a concussion.

A dark room with no screens

The doctor’s prescription was simple: give your brain a rest from all stimuli for three to four days. Assuming your concussion isn’t dangerous—and mine wasn’t—that means staying in a dark room with no screens.

While I lay in bed with blackout curtains closed and a mask over my eyes, I learned to appreciate some features of our HomePod Mini that I had never used before:

  • If I asked Siri to text someone, the HomePod would ping when they replied and I could ask to have the message read aloud.
  • Siri could call someone through the HomePod speaker.

These aren’t earth-shattering features, but they helped me feel connected when I couldn’t use screens or earbuds.

My new favorite accessibility features

When I felt well enough to try working again, I found it necessary to make several changes to my computer.

Reduce motion is your friend

Animation and motion would cause pain in my temple. This persisted for weeks after the concussion. I once had to leave a local meetup early because the presenter used animated gifs in their presentation, and it triggered concussion symptoms.

The reduce motion option in the macOS accessibility settings.

I have since returned to normal motion on my iPhone, but I still have reduced motion set on my desktop machine. Too much movement on a larger screen can still be overwhelming. I’ve also turned off auto-play of animated images wherever possible.
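
Sites can honor this preference themselves via the prefers-reduced-motion media query. Here’s one common, minimal sketch (not specific to any site mentioned in this post) that effectively disables animations and transitions for visitors who have opted in:

```css
/* When the visitor has asked the OS to reduce motion,
   cut animations and transitions down to (effectively) nothing. */
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}
```

A gentler alternative is to scope the query to individual animations, keeping essential movement (like loading indicators) while removing purely decorative motion.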

Dark mode isn’t only a fad

I’ll admit that I’ve been pretty dismissive of dark mode. I thought it was something developers came up with because they like working in the dark.

But even though dark mode isn’t listed among the accessibility settings, I now think of it as an accessibility feature. Bright lights were too much for me, and dark mode helped. It has moved up my priority list for the next version of our own site.

As an aside, Gmail has a dark theme, but it doesn’t turn on automatically when someone has prefers-color-scheme set to dark. That seems silly to me. The hard part is developing a dark version of your app or site. Once you have one, don’t make people search around for it.

Gmail’s dark theme can be found by clicking on the gear icon to open settings. Then select the Themes tab and the Set Theme button. You’ll need to scroll down to find the Dark theme. Hover (sigh) over the blocks to see the theme name.
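
For sites that have already done the hard part of designing a dark theme, following the visitor’s setting automatically takes only a few lines of CSS via prefers-color-scheme. A minimal sketch, with illustrative custom property names and colors:

```css
/* Default (light) palette; tell the browser both schemes are supported. */
:root {
  color-scheme: light dark;
  --bg: #ffffff;
  --text: #222222;
}

/* Swap the palette automatically when the OS prefers dark. */
@media (prefers-color-scheme: dark) {
  :root {
    --bg: #1b1b1b;
    --text: #e6e6e6;
  }
}

body {
  background: var(--bg);
  color: var(--text);
}
```

A manual theme toggle can still be layered on top, but the OS preference makes a sensible default so nobody has to hunt through settings.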

You can use Night Shift mode during the day

Apple’s Night Shift mode shifts the color of a display to be warmer in the evening because “studies have shown that exposure to bright blue light in the evening can affect your circadian rhythms and make it harder to fall asleep.”

The slightly warmer colors in Night Shift were easier for my brain to process after the concussion. I left it on all the time.

Accessibility is for everyone

My concussion gave me a greater appreciation for the accessibility features that operating systems and browsers have built. If you worked on those features, I want to thank you for everything you do. If you built your website to honor prefers-reduced-motion and dark mode, I want you to know that your work made a difference to me.

It’s a misconception to think that accessibility is only for people with permanent conditions. I like the way Maria Town, president of the American Association of People with Disabilities, talked about this reality in an interview with Advocate magazine:

Everyone will become disabled if they’re lucky enough. Aging is a privilege. Far too few of us get the opportunity to live to be a ripe old age. And if you do get the opportunity, you will likely become disabled.

This is our reality. Whether it is a temporary injury, a permanent condition, old age, or a literal boneheaded dog giving you a concussion, you will need accessibility features at some point in your life.

So let’s recommit to supporting accessibility in our own work not only to support those who need it now, but also because it is in our own self-interest. You never know when you may suddenly find yourself needing accessibility features.

P.S. Coco was fine. She has a hard head.


