Pixelflips (https://pixelflips.com) - flippin' ideas into creative and clean web and interface designs while keeping a focus on web standards.

Three AIs Missed It. One Human Didn’t
https://pixelflips.com/blog/three-ais-missed-it-one-human-didnt
Mon, 16 Mar 2026

I had done everything right. Or at least, everything the current playbook says to do. Used AI to fill the gaps, reviewed it myself, then handed it off to more AI.

This past week, I got handed a project that required backend work. Not because I have backend experience – I don’t, really – but because “AI can handle the parts you don’t know.” And honestly, that’s not entirely wrong. Claude Code got me through it. I wrote what I could, leaned on it for what I couldn’t, and when the dust settled, the code was in decent shape. I ran my own review. Frontend looked solid. A few tweaks. Tests passed. Manual walkthrough, nothing glaring.

I pushed the branch, opened the PR, and handed it to Cursor for a second pass. A couple of issues surfaced. Fixed them. Then CodeRabbitAI got its turn. A couple more. Fixed those too. Three layers of review, every issue resolved. I tagged my human colleagues, feeling pretty good about the whole thing.

That’s where the confidence ended.

The Comment That Made Me Feel Two Things at Once

My colleague, a full-stack engineer with real backend experience, left a comment on the PR. They knew going in that I didn’t have much backend context, so they didn’t just tell me what was wrong. Instead, they asked me to try something: open the app in two windows, change profiles in one, make updates in the other, try to update and save, and see what happens.

I did it. The profiles fell out of sync.

It was immediately obvious, once I saw it, that it was the kind of thing where you go “oh, of course” the second it breaks. But I never would have thought to test that scenario on my own. I hadn’t spent years debugging backend state. I didn’t have the mental model for what could go wrong when a user has two sessions running simultaneously and starts mixing state between them. That’s hard-won intuition, built from experience I just don’t have yet. No AI tool flagged it either. Not Claude Code, not Cursor, not CodeRabbitAI. The bug just sat there, quiet, untouched by every layer of “smart” review I’d thrown at it.
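For what it’s worth, the class of bug has a name: a lost update, two sessions racing to save the same record. A minimal sketch of the server-side guard that catches it, optimistic concurrency with a version counter (all names here are hypothetical, not from the actual project):

```typescript
// Two sessions load the same profile, then both try to save. Without a
// version check, the second save silently clobbers the first.
type Profile = { id: string; displayName: string; version: number };

const store = new Map<string, Profile>();
store.set("p1", { id: "p1", displayName: "Alice", version: 1 });

function load(id: string): Profile {
  // Hand each session its own copy, like a separate browser window.
  return { ...store.get(id)! };
}

function save(update: Profile): { ok: boolean; reason?: string } {
  const current = store.get(update.id)!;
  // Optimistic concurrency: reject the write if someone else has saved
  // since this session loaded the profile.
  if (update.version !== current.version) {
    return { ok: false, reason: "stale write: reload and retry" };
  }
  store.set(update.id, { ...update, version: current.version + 1 });
  return { ok: true };
}

// Window A and Window B both load version 1.
const windowA = load("p1");
const windowB = load("p1");

windowA.displayName = "Alice (work)";
console.log(save(windowA).ok); // true: first save wins

windowB.displayName = "Alice (home)";
console.log(save(windowB).ok); // false: stale save rejected
```

Real apps do this with a `version` or `updated_at` column and a conditional `UPDATE`, but the shape of the check is the same.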

What I appreciated most was how my colleague handled it. They didn’t write the fix in the comment. They didn’t tell me the answer. They trusted me to find it once I could see it. That’s a specific kind of mentorship, the kind that respects your ability to learn while acknowledging the gap. I was humbled and genuinely grateful at the same time, which is a weird combination but also kind of the best possible outcome of a code review.

The Myth of the Foolproof Stack

I’m not anti-AI. I use it every day, and I’m not going to pretend otherwise. But I want to be honest about what that PR showed me, because I don’t think the experience is unique to me.

We’ve been sold a version of AI-assisted development where layering more tools means fewer things fall through the cracks. More coverage, fewer blind spots, better outcomes. And when you’re in the middle of it, running your code through three different review passes, watching issues get flagged and resolved, it genuinely feels airtight. The pitch is clean. The reality, as it turns out, is messier.

Here’s the part that got me: CodeRabbit, one of the tools I used on that very PR, called 2025 “the year the internet broke.” Change failure rates up 30%. Incidents per pull request up 23.5%. Their own data, about the exact category of tool I was using to feel safe. A separate 2026 Sonar survey found that teams using AI coding tools are 46% more likely to experience production incidents than those that don’t. That’s not a marginal difference.

Amazon’s internal AI agent, Kiro, autonomously decided the fix for a minor issue was to delete and rebuild a production environment. A thirteen-hour outage in AWS. Amazon called it “user error.” Sure, ok. Forrester now predicts that 50% of companies that attributed headcount reductions to AI will quietly rehire for those same roles by 2027. Google’s own numbers show 20% of their 2025 AI engineering hires were former employees they’d previously let go.

That’s not optimization. That’s a very expensive loop.

The Short Stick

I understand AI is getting better. The models are improving, the tooling is maturing, and some of these failure modes will happen less as that continues. I genuinely believe that. I’m not writing this as someone who thinks we should go back to writing everything by hand.

But we’re living in the meantime. And in the meantime, companies are cutting engineers based on anticipated future capabilities, not demonstrated current ones. They’re shipping AI-reviewed code into production, watching things break, and then quietly hiring people back with no acknowledgment that the original bet didn’t pay off. The people who got laid off aren’t getting apologies or reinstatement. They’re getting contractor gigs at lower pay if they’re lucky. The idea that humans are optional is already starting to crack under its own evidence, but the cost of that lesson is being absorbed by the people who can least afford it.

The most valuable thing in that PR wasn’t a tool. It was a colleague who understood what I didn’t know, cared enough to teach me instead of just writing the fix themselves, and trusted me to get there once I could see the problem. That’s not something I can prompt my way to. No model is going to replicate that specific kind of knowledge transfer, built from real experience, offered with actual generosity.

AI caught some bugs on that PR. A human caught the one that mattered.

AI Chose Your UI (Did It Choose Wrong?)
https://pixelflips.com/blog/ai-chose-your-ui-did-it-choose-wrong
Sat, 07 Mar 2026

After years of building, maintaining, and supporting in-house design systems with real tokens, governance, versioning, support, and contribution models, I recently found myself building with Tailwind and shadcn. Thanks, AI!

I’m familiar with these tools but still relatively new to them, so take this for what it is. I’m not anti-AI. I use it every day. But I’ve been watching something happen across the industry that I think is worth talking about: It feels like AI is making our UI decisions for us, and we’re just going along with it. Maybe call me old school, but I think products should have their own soul when it comes to UI and UX. Otherwise, what actually sets us apart?

These days, someone fires up a vibe coding tool and builds a prototype. Could be a manager who’s excited about AI, a designer who started dabbling in code, a dev who wants to move fast. It comes back looking polished. React, Tailwind, shadcn, the whole default stack. It feels fast. It looks professional. And everyone accepts it. The UI direction, the component approach, the framework, the styling, all of it decided by whatever the AI defaulted to that day. By a language model optimizing for the most common output.

From what I can tell, this is happening more and more. And I don’t blame anyone for being excited. These tools can be fun to use. Designers can build something real without waiting on engineering. Decision-makers can see a working prototype in hours instead of weeks. That feels like progress, and in some ways, maybe it is. But there’s a difference between using AI as a tool and letting AI make the decisions. Right now, a lot of people are doing the second thing and calling it the first.

Your product now looks like everyone else’s

When you let AI decide your UI, everything converges. You can spot a shadcn app from across the room. The spacing, the rounded everything, the muted grays, the specific way the buttons and inputs feel. It’s the default aesthetic. Every vibe coding tool out there generates the same stack, so every prototype comes back looking like a cousin of the last one. That’s what happens when a model decides.

And there’s a feedback loop making it worse. AI generates shadcn code. More shadcn code ends up in training data. AI gets even better at generating shadcn code. Round and round. Your product’s look is being decided by what a language model saw the most during training, not by your brand team or your designers. Not by anyone who’s talked to your users. Super inspiring stuff.

If your product looks identical to your competitor’s product, what exactly is your differentiator? UI and UX used to be a competitive advantage. The way your app felt was part of why people chose it. When every SaaS dashboard looks like it came from the same AI prompt, that advantage disappears. Your product becomes a commodity before you even ship it.

shadcn released theming presets to address this. But swapping a color palette isn’t brand identity. It’s a skin. Your UI is part of your brand, and your UX is how people experience it. If your application looks and feels like every other AI-generated output, you’ve handed both to a default setting. Honestly, from a design standpoint, it feels lazy. We used to obsess over the details that made a product feel like ours. Now we’re accepting whatever the AI spits out because it looks clean enough. And it’s hard to even raise the concern because the AI made it look so good out of the box that questioning it feels like you’re the one slowing things down.

Choosing what’s right vs. what’s common

This is the part that gets lost. People see a polished AI-generated UI and assume the tool made a good decision. It didn’t decide at all. It predicted the most likely output based on its training data. shadcn and Tailwind are everywhere in that training data, so that’s what comes out. It’s not a recommendation. It’s a statistical echo.

But people are treating it like a recommendation. A manager vibe-codes a dashboard and thinks, “this is the direction.” A designer builds a prototype and assumes the stack is solid because the output looks professional. Nobody questions whether React is the right framework, whether Tailwind is the right styling approach, or whether shadcn is the right component strategy for their product and users. The AI picked it. It looks good. Ship it.

That’s not a design system either

On top of letting AI make UI decisions, people are also calling the output a design system. It’s not.

AI and shadcn give you components. They don’t give you governance, contribution models, versioning strategy, token architecture, or cross-team documentation. They don’t give you a shared language between design and engineering. What they give you is a folder full of React files you copied into your repo.

A design system is an organizational tool. The hard part is never “make a button look nice.” The hard part is making sure multiple teams use the same button the same way, that it evolves without breaking things, and that someone owns the decision about what “primary” means across your entire product suite. shadcn doesn’t try to solve that. It’s not designed to. And that’s fine for what it is. But calling it a design system? That’s like calling a pile of lumber a house.

215 lines of React for a radio button

I get why shadcn exists. Building accessible UI primitives from scratch is hard. Getting focus management, keyboard navigation, and screen reader support right takes real expertise. That’s the problem shadcn and Radix are trying to solve, and it’s a legitimate one. But the solution has costs that nobody’s weighing because the AI never brings them up.

Paul Hebert recently tore down the shadcn radio button component. 215 lines of React. Seven imported files. 30 Tailwind classes. All to recreate something HTML has done natively for 30 years.

Instead of using <input type="radio">, shadcn renders a <button> with an SVG circle inside it, then uses ARIA attributes to tell screen readers it’s actually a radio button. Read that again. It’s a button pretending to be a radio button and relying on ARIA to cover for the fact that it didn’t just use the native element. The browser already solved this. Decades ago. But the AI doesn’t know that, and nobody asked.
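To make the contrast concrete, here’s a hand-rolled illustration of the two approaches (not shadcn’s actual source):

```html
<!-- Native: focus, keyboard arrow navigation, grouping by name, and
     screen reader semantics all come from the browser for free. -->
<label><input type="radio" name="plan" value="pro" /> Pro</label>

<!-- The pattern being recreated: a button that pretends to be a radio
     via ARIA, which then has to reimplement all of the above in JS. -->
<button role="radio" aria-checked="false" data-value="pro">Pro</button>
```

The second version isn’t wrong by the ARIA spec, but everything the first line gets for free becomes code someone has to write, test, and maintain.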

Zoom out, and the abstraction stack is wild. Tailwind abstracts CSS. shadcn abstracts Radix. Radix abstracts the DOM. React abstracts the DOM. Four layers between your user and a checkbox. The AI chose every single one of those layers for you. It didn’t ask whether your team needs that complexity or whether a simpler approach would work. At enterprise scale, every one of those layers is a maintenance surface and a possible debugging headache. This is complexity cosplaying as simplicity.

There’s another cost nobody seems to talk about. Teams using Tailwind for long enough start forgetting actual CSS. The muscle memory goes away. When something breaks outside the utility class catalog, nobody knows how to fix it. You’ve traded foundational knowledge for convenience, and that trade gets expensive when things go sideways. The AI forgot to mention that part when it generated the code.

You’re locked in now

shadcn is React. Community ports exist for Vue and Svelte, but they’re unofficial, maintained by different people, with different APIs and different release timelines. If your enterprise has teams on Vue, Angular, Svelte, Rails, or plain JS, you’re now maintaining parallel component implementations or forcing everyone onto React whether they chose it or not.

Here’s what nobody notices: the AI chose React. Not your team. The prototype worked, it looked great, and it naturally became the starting point for real production code. Nobody went back to reconsider the foundation. Why would they? It worked in the demo.

Enterprise environments are messy. Legacy apps, acquired products on different stacks, internal tools built by teams who picked their own framework years ago. A design system needs to serve all of them. shadcn can’t.

What happens when React isn’t the dominant framework anymore? It will happen. jQuery was untouchable once. So was Bootstrap. If your UI is coupled to React’s ecosystem and release cycle, you’ve made a bet on one framework’s future. That bet has an expiration date.

The maintenance and governance

When you use shadcn, you copy source code into your repo. You now own every component. Bug fixes, accessibility patches, and breaking changes upstream are all your problem. There’s no upgrade path. When shadcn updates, you manually diff and merge. Across how many apps? How many teams?

Tailwind without strict governance turns into class soup fast. Every team invents its own patterns. One team uses px-4 py-2 button padding, another uses p-3, a third wraps it in a custom class. Multiply that across hundreds of components and a dozen teams. Good luck with consistency.

Without shared tokens, “our blue” is three different hex values in three different repos. Without versioning, a component change in one app silently breaks patterns in another. Without contribution models, nobody knows who owns what or where the source of truth lives. That’s the work that goes into building a real system. None of it comes in the box with shadcn.
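The token half of that problem is at least mechanically simple. A tiny sketch of the idea: one primitive palette, semantic aliases on top, defined once and imported everywhere (all names illustrative):

```typescript
// Primitive tokens: raw values, defined exactly once.
const primitives = {
  blue500: "#2563eb",
  gray900: "#111827",
} as const;

// Semantic tokens: what the values *mean* in the product.
// Every team and repo imports these, so "our blue" has one answer.
const semantic = {
  colorActionPrimary: primitives.blue500,
  colorTextDefault: primitives.gray900,
} as const;

console.log(semantic.colorActionPrimary); // "#2563eb", in every repo
```

The hard part isn’t the file; it’s the governance around it: who owns it, how it’s versioned, and how changes roll out without breaking consumers.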

Nobody budgeted for any of this, by the way. Because nobody planned to adopt shadcn as the foundation. An AI picked it, a prototype got everyone excited, and now multiple teams are building on a decision that was never actually made.

So who’s actually deciding what our products look like?

That’s the question I keep coming back to. Vibe coding makes it easy for anyone to build something that looks production-ready, and that’s exciting. But “looks production-ready” and “is the right UI for your product” are not the same thing. Right now, AI is paving over that gap with polished output that feels like a decision was made when it wasn’t.

AI is great at generating code. It is not great at understanding your product, your users, or how your org works. It optimizes for “most common,” not “most appropriate.” And it will never tell you “yo, this might not be the right approach for what you’re building.” It just gives you shadcn and moves on.

Your UI, your UX, and your design system are business decisions. They touch brand, velocity, how teams work together, and how much technical debt gets carried for years. AI didn’t think about any of that when it decided what your product should look like.

I’m curious what other folks are seeing. Are we just accepting whatever UI the AI gives us now? Should vibe-coded prototypes quietly become our production UI? I might be wrong about some of this. I’d love to hear if others are seeing this and how they are navigating it.

npm uninstall humans
https://pixelflips.com/blog/npm-uninstall-humans
Sun, 01 Mar 2026

In software, a dependency is a risk you accept. It’s a package you didn’t write, maintained by someone you’ve never met, that can break your entire application if it disappears. Good engineering means knowing which dependencies are worth the risk and which ones aren’t. Right now, I’m watching our industry decide that people aren’t worth it. It feels like the teams that build things are just another package to be evaluated, flagged, or uninstalled.

This week, Jack Dorsey’s Block laid off more than 4,000 people. Nearly half the company is gone. The stock jumped 24%. Dorsey told the remaining employees that most companies would reach the same conclusion within a year and make similar structural changes. Before that, Amazon cut 16,000 corporate roles. Chegg gutted 45% of its staff. HP announced plans to cut up to 6,000 positions. Every single one of these moves was framed as an “AI efficiency” play. Significantly smaller teams. Leaner operations. The future.

I’ve been building for the web for a long time. Long enough to have seen multiple cycles of “this technology will change everything.” And some of them did. But I’ve never seen one where the pitch is this blunt: we are removing people and replacing them with software, and we think you should be excited about it. As someone who’s in the codebase every day, who has worked alongside people doing real, solid work only to watch them get let go, this one hits different. And if you’re a web worker right now, I know you’re feeling it too.

Refactoring the Workforce

The language around these cuts is telling. “Significantly smaller teams.” “Restructuring for AI.” It’s the vocabulary of refactoring, applied to people. And it reveals something ugly about how the people making these decisions think about the folks doing the work: we’re not assets to lead, mentor, or invest in. We’re bottlenecks to be optimized away at the first opportunity. Honestly, it’s gross.

But here’s what that framing misses entirely: the people and their teams are the platforms. The companies, codebases, and products are just the artifacts. The real platform, the thing that makes a product actually work, is the people who understand it. They hold the context, and these days, context is everything. They carry the intent. They know why a decision was made, not just what the decision was. Strip them out, and you don’t have a leaner company. You have a company, product, or codebase with no one left who understands it.

When you move the people who help build your product away from the foundational work, that’s not optimization. That’s a platform failure. You’re not reducing friction. You’re removing the people who understood where the friction came from in the first place.

This is the dependency trap. Someone looked at an org chart the way an engineer looks at a bloated package.json file and decided to rip things out. But people aren’t packages. You can’t just swap in an AI module and expect the same output, no matter how good the LLMs are getting.

Frictionless Fantasy

There’s this fantasy going around right now: the frictionless company. Small teams, AI-augmented, shipping at scale with minimal overhead. It sounds clean on a slide deck. In practice, it ignores everything that actually makes a product work.

Institutional knowledge. The connective tissue of company culture. The engineer who remembers why that weird edge case exists, or why the team decided against a particular API pattern three years ago. The person who knows that a feature looks simple but took six iterations to get right. The colleague who mentored you when you were new and didn’t know what you didn’t know. You can’t run npm install for that knowledge. You can’t prompt your way into it. It lives in the people who showed up every day and did the work, and when they are replaced with a fancy autocomplete, it’s gone.

When you treat people as interchangeable npm packages, you lose the thing that made them valuable: they were the platform. They weren’t just writing code or building a product. They were the reason any of it made a damn bit of sense. They carried context that no onboarding doc or AI summary can replicate. You wouldn’t rip a critical package out of production without understanding what depends on it. But that’s exactly what’s happening in this new AI era. And stripping them out in the name of efficiency creates a different kind of debt. Call it human debt. It compounds the same way technical debt does, quietly and then all at once, except nobody’s tracking it on a Linear board.

Intent vs. Output

AI is really damn good at generating output. Code, copy, designs. I use it every day, and I’m not going to pretend otherwise. But output without intent is just loud noise.

AI doesn’t fight for accessibility or correct implementation. It doesn’t push back on a PM about a dark pattern. It doesn’t sit in a meeting and say “this is going to hurt our users” when everyone else is nodding along. It does what it’s told. That’s useful, but it’s not stewardship.

Stewardship is the people who see a feature request and think about the user on the other end of it. The folks who build with empathy baked in, not because a linter told them to, but because they’ve been shipping long enough to understand what’s at stake. A product’s soul doesn’t live in the code. It lives within the people who care enough to fight for it. That intent is the human dependency that holds everything together. It’s the package you can’t find on any open source registry or have a robot generate from a poorly written prompt. And it’s the first thing that disappears when you “refactor” a team down to a skeleton crew.

The Communication Vacuum

What makes all of this worse is how it’s communicated. Or rather, how it isn’t.

There was a time when leadership meant advocating for the people doing the work. Now it feels like it means only auditing them. Reviewing the roster the way you’d review a package-lock.json. What can we flag? What can we drop? What’s the minimum viable team we can get away with? Somewhere along the way, leading people turned into looking down and deciding who’s worth keeping.

4,000 people lose their jobs, and the stock jumps 24%. Record profits in the same quarter you eliminate half your workforce. And the message to those still at their desks? “We’re moving faster now.” No acknowledgment of what was lost. You’re just forced to disagree and commit. Oh yeah, and btw, you need to hurry up.

The anxiety this is creating in our industry goes beyond job security. It’s the slow realization that the people making these decisions don’t see us as people. They see us as a third-party dependency to be evaluated and potentially removed. That changes how you show up every day. It changes whether you fight for the thing that needs fighting for, or keep your head down because you don’t want to be the next module that gets uninstalled. And honestly? That is corrosive to us all. It eats at the exact qualities that made you good at your job in the first place.

Human Dependency

I’m not anti-AI. I think these tools are powerful, and I use them to do better work. But there’s a real difference between using AI to help people improve or perform better and using AI as a justification to remove them entirely. It’s a scapegoat and helps those at the top dodge any real accountability.

The human dependency isn’t a bug. It’s a feature and the entire point. We build products for people, and it takes people to know what that means. The judgment to get it right and the empathy to care whether we do: that comes from being human. From the ones in the trenches. The ones who are actually the entire platform.

So the next time someone frames a layoff as “AI efficiency,” ask yourself what’s actually being optimized away. Because it’s probably not friction. It’s most likely the people who actually gave a damn.

If you’re reading this and feeling the weight of it, just know you’re not alone. A lot of us are sitting with this same knot in our stomachs, watching the industry we love and we’ve helped build move in a direction that doesn’t seem to love us back. It leaves a hole that no robot will be able to fill, and that’s a damn shame.

You’re Automating the Wrong 70%
https://pixelflips.com/blog/youre-automating-the-wrong-70
Sat, 28 Feb 2026

I came across a Medium post titled “AI Will Replace 70% of Design System Work.” The premise is that most design system work (documentation, component building, token management, accessibility audits) is “structurally automatable,” and that the real value lies in governance. The author argues that teams need to move “upward from execution to orchestration” or risk becoming obsolete.

Some of that is true. AI can help generate docs. It can lint tokens. It can draft release notes. I use AI tooling in my own design system work, and it’s useful for certain tasks. But the article’s argument falls apart because it misunderstands what design system teams actually do.

The 70% number is made up

Let’s start with the headline claim: 70% of design system work is replaceable by AI. Where does that number come from? The article never says. There’s no methodology, no survey, no data. It’s a confident assertion dressed up as a finding. That’s not analysis. That’s vibes.

And it matters because numbers like that end up in executive slide decks. They get used to justify headcount decisions. When you throw around “70%” with no backing, you’re not starting a conversation; you’re handing leadership a reason to cut your team.

The whole middle is missing

The article describes two modes of design system work: production (automatable) and governance (not automatable). That’s a clean framework, but it leaves out where most of the actual work happens.

Support. Maintenance. The daily human work of keeping a system alive and useful.

Answering questions in Slack. Pairing with a product engineer who’s trying to use your component in a context you didn’t anticipate. Triaging a bug that only surfaces in one team’s specific tech-stack setup. Writing a migration guide that accounts for six different integration patterns across your org. Helping a designer understand why the system works a certain way so they can make better decisions in their product.

This is humans supporting humans. It’s the thing that actually drives adoption, and it doesn’t show up in the article at all. You can’t automate a relationship. You can’t deploy a governance framework and expect people to trust your system. Trust is built through responsiveness. When someone files an issue and gets a thoughtful reply the same day, that’s what earns buy-in. No contribution policy document does that.

Fast and wrong is expensive

The article mentions that AI-generated output “was not perfect, but it was fast.” That sentence is doing a lot of heavy lifting. Fast and wrong is expensive. Someone still needs to review every AI-generated component API and token structure, and that someone needs deep expertise to know what “right” looks like.

You’re not eliminating the expert. You’re just changing what they do with their hands. The review work that remains requires the same knowledge, maybe more, because now you’re also debugging AI’s confident mistakes alongside your own.

The feedback loop doesn’t automate

Design systems are living things. You ship a component, teams adopt it, they use it in ways you didn’t expect, they file bugs, they request variants, you learn from that and iterate. That feedback loop is the engine of a healthy system.

AI doesn’t have relationships with your consumers. It doesn’t know that one team keeps misusing your modal because their product has a weird user flow. It doesn’t pick up on the pattern that three different teams have asked for the same thing in three different ways. That’s institutional knowledge built through support work, through being present, through paying attention to how people actually use what you’ve built.

Governance without execution is just meetings

The article elevates governance as the strategic, non-automatable layer. Governance matters, I’m not arguing that it doesn’t. But governance without strong execution underneath it is just Confluence pages and meeting invites. You earn the right to govern through the quality and reliability of what you ship. The components have to work. The documentation has to be accurate. The releases have to not break things. That foundation isn’t maintenance you automate away. It’s the thing that makes governance credible.

The real risk

The article says the true risk is that “some design systems never evolved beyond being structured UI libraries.” I’d argue the bigger risk is articles like this giving leadership permission to gut the teams that make systems work. If a VP reads “AI will replace 70% of this function” and takes it at face value, the people who get cut aren’t the governance strategists. It’s the engineers and designers who do the daily work of building, supporting, and maintaining the system.

Design systems aren’t factories, and they aren’t governance frameworks. They’re a service. The value lives in the ongoing, responsive, human relationship between the system team and the people who depend on it. AI is a useful tool in that work. It’s not a replacement for the people doing it.

Your Design System’s Got Skills?
https://pixelflips.com/blog/your-design-systems-got-skills
Sat, 21 Feb 2026

I’ve been tinkering with something lately that I think more design system teams should pay attention to. I’m trying to teach AI how my design system works through Claude skills and Cursor rules.

The problem AI has with your design system

If you’ve used any AI coding tool to generate UI, you’ve seen this. You ask for a card component, and it gives you something that looks right but uses none of your tokens, references components that don’t exist in your library, and ignores every pattern your team spent months establishing. The output is plausible. It’s also wrong in ways that are tedious to untangle.

Your design system has opinions. It has a token taxonomy, component APIs, composition patterns, and a11y requirements. AI doesn’t know any of that. It’s working from general training data, not your system. So it gives you generic Bootstrap-flavored markup and leaves you to clean it up.

That cleanup time adds up fast, especially across a team.

Skills and rules: the 80/20 play

MCP servers for design systems are getting a lot of attention right now, and rightfully so. It’s where I spent my own early effort, building and configuring a server to provide context for the design system I work on. MCP is powerful: it lets AI tools query your system’s components, tokens, and docs through a structured protocol. But it requires infrastructure. You need to build and run a server.

Claude skills and Cursor rules are the simpler path. A skill is a markdown file with instructions, reference material, and optional scripts. A Cursor rule is very similar: a text file that gives the AI context about your project’s conventions. No server, no API. Just documents that describe how your system works, loaded into context when relevant.
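
For a concrete picture, here’s a minimal sketch of what a skill file could look like. The frontmatter follows the SKILL.md convention of a name plus description; the component and token names below are invented for illustration, not from any real system:

```markdown
---
name: ds-component-usage
description: How to generate UI with our design system components
---

# Component usage

- Build cards with the `Card` component; do not hand-roll them from
  `div`s with custom padding.
- `Button` supports `primary`, `secondary`, and `ghost` variants.
  There is no `danger` variant; destructive actions use `secondary`.
- All spacing comes from tokens (`space.sm`, `space.md`, `space.lg`),
  never hardcoded pixel values.
```

A Cursor rule carries the same kind of content, stored in the project (for example, under `.cursor/rules`) so the editor can load it when relevant.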

I’ve started investigating and building a couple of these for my own work, and the payoff has been immediate. Instead of fixing AI output that ignores my component library, the AI starts from the right place. It knows which components exist and which tokens to use, either because the skill points it to the right files or because the explanation lives right inside the skill itself. I’m still in early trials, and it’s not perfect, but the gap between what it generates and what I’d actually ship has gotten a lot smaller.

Types of skills your DS team could build

Out of the gate, I’ve been sketching out a few skills that I think could be useful.

Skills for a DS Team:

- Component usage: which component to use and when; props, slots, do/don’t patterns.
- Token references: semantic vs. primitive, theming logic, multi-brand mappings.
- Accessibility rules: ARIA, focus management, contrast. Catch at the source, not in review.
- Migration/upgrades: map old APIs to new; developer guidance from legacy to current.
- Documentation: standards for component docs; contribution guidelines and requirements.
- Code patterns: file structure, component naming, or any shared knowledge.

A component usage skill is probably the highest-value starting point. Which component to reach for in common scenarios, correct props, slot patterns, and do/don’t examples from your docs. This alone changes the quality of what AI generates against your system.

A token reference seems like the natural companion. Your full taxonomy with guidance on when to use semantic vs. primitive tokens. If your system supports theming or multiple brands, this is where you encode that logic, so AI stops guessing.
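
As a rough sketch, the semantic-vs-primitive guidance inside such a skill might read something like this (the token names are hypothetical):

```markdown
# Token reference

- Prefer semantic tokens in product code. `color.action.primary` says
  what a color is for; `color.blue.500` only says what it looks like.
- Primitive tokens (`color.blue.500`, `space.4`) are the raw palette.
  Semantic tokens reference them; components should not.
- Theming and multi-brand: semantic tokens resolve to different
  primitives per brand, so using a primitive directly breaks theming.
```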

An accessibility skill bakes your a11y standards into generation. ARIA patterns, keyboard navigation, focus management, contrast expectations. Catch it at the source instead of flagging it in review.

If you’ve got a legacy system and a current one (I’m dealing with this right now), a migration skill that knows both APIs and can map between them saves real time. A documentation skill keeps contributions consistent without a style guide that nobody reads. A code patterns skill captures your team’s file structure and naming conventions, the tribal knowledge that usually lives in someone’s head.

Project skills vs. personal skills

Not every skill needs to live in the repo. This is something I’ve been thinking about as I work on more of these.

Some skills belong at the project level, committed to the repo where the whole team benefits. Component usage, token reference, and a11y rules. These are the source-of-truth skills that keep everyone generating consistent, on-system code. When a new developer joins the team and starts using AI from day one, the AI already knows how your team works.

Other skills are just for you. Maybe you’ve got one for how you scaffold new components, or one that matches your PR description format, or shortcuts for tasks you repeat often, like version bump PRs. These can live in your user-level config and make you faster without imposing your preferences on anyone else.

Project skills:

- Component usage: which component, props, patterns
- Token references: taxonomy, semantic vs. primitives
- Accessibility rules: ARIA, focus, contrast guidelines
- Migration guides: legacy-to-new API mappings

Personal skills:

- Component scaffolding: your preferred file structure
- Commits & PR descriptions: your format and conventions
- Daily shortcuts: repeated tasks, personal workflows
- Code review helpers: your review checklist and style

The split matters. Project skills enforce consistency. Personal skills are about your own speed. Mixing them up creates friction, and nobody wants to inherit someone else’s personal workflow baked into the repo.

Why this matters now

Developers are already using AI to write code against your design system. That’s happening whether you’ve prepared for it or not. The question is whether the AI knows your system or is guessing.

I wrote recently about the AI productivity paradox, how AI often increases workload instead of decreasing it because you spend so much time reviewing and fixing output. Skills and rules attack that problem at the source. Give AI the right context upfront, and there’s less to fix on the other end.

We spent years meeting designers in Figma with component libraries. We met developers in their repos with npm packages. Now we need to meet them inside their AI tools. Skills and rules are how you do that without a big infrastructure investment.

What are you building?

I’m still early in this. I’ve got a few skills running that have already changed how I work, but I know there’s a lot more to figure out. Have you built Claude skills or Cursor rules for your design system? Found an approach that works well?

I’d love to hear what people are doing. If you’ve got something working, tell me about it.

]]>
https://pixelflips.com/blog/your-design-systems-got-skills/feed 0 10579
Let Me Reintroduce Myself https://pixelflips.com/blog/let-me-reintroduce-myself https://pixelflips.com/blog/let-me-reintroduce-myself#respond Sat, 21 Feb 2026 00:51:34 +0000 https://pixelflips.com/?p=10596 I came across a post by Cassidy Williams a few weeks ago about LLM discoverability. The gist: she asked ChatGPT some tech discovery questions, noticed it didn’t recommend her, then asked why. The AI gave her a list of things to fix. She fixed them. It worked.

I read that and immediately thought: what happens if I try this?

What Cassidy did

Her experiment was simple. Ask LLMs questions that should surface her work, see if they mention her, and if they don’t, ask them what she could do about it. The AI came back with practical stuff: create an llms.txt file, add structured data, keep messaging consistent across platforms. She implemented the suggestions over a few weeks and started showing up in LLM responses.
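
For reference, the llms.txt idea is just a markdown file served from your site root that summarizes who you are and links to your best content, so models don’t have to guess. A minimal sketch for a site like this one might be:

```markdown
# Pixelflips

> Writing on design systems, UX engineering, and AI tooling,
> from the space between design and engineering.

## Posts

- [Why AI Needs UX Developers](https://pixelflips.com/blog/why-ai-needs-ux-developers)
- [The AI Productivity Paradox](https://pixelflips.com/blog/the-ai-productivity-paradox)
```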

What stuck with me was less the technical recommendations and more the realization that people are using AI to discover people and tools now. Not just Google. If an LLM doesn’t know you exist, a lot of people won’t find you either.

So I tried it

I asked a few LLMs about me. About my work. Design systems people to follow, UX developers worth knowing, folks writing about the intersection of design and engineering. The results were humbling. Mostly blank. A couple got my name right but didn’t have much to say beyond that.

So I did what Cassidy did. I asked the AI: what should I do to show up more? What would make you recommend me?

The answer that kept coming back, across every model I tried: write more. Publish more. Put your thinking out in the open. The models learn from what’s publicly available, and if you’re not publishing, you’re invisible to them.

There’s something kind of absurd about an AI telling you to create more content so it can learn who you are. Like a robot tapping you on the shoulder and saying, “hey, I’d recommend you to people, but I don’t have enough to go on. Help me out here.”

So here we go

I’m taking the advice. Not entirely because an AI told me to. I’ve been meaning to write more for a while now. I’ve got opinions about design systems and AI tooling and the weird space between design and engineering that I’ve lived in for years. I just haven’t been putting them out there consistently.

So here I am, writing. The robots demanded it, and honestly, they’re not wrong.

Let’s see if they notice.

]]>
https://pixelflips.com/blog/let-me-reintroduce-myself/feed 0 10596
The AI Productivity Paradox https://pixelflips.com/blog/the-ai-productivity-paradox https://pixelflips.com/blog/the-ai-productivity-paradox#respond Fri, 20 Feb 2026 23:23:12 +0000 https://pixelflips.com/?p=10553 Satya Nadella said AI would fuel a “creative revolution.” GitHub told us Copilot would let developers “focus on creative and strategic work.” Sam Altman measures ChatGPT’s success by the percentage of human work it can accomplish. So why am I spending more time reviewing and fixing AI output than I ever imagined?

I’m not here to trash AI. I use it every day. But I need to talk about the gap between what we were promised and what actually showed up, because I don’t think I’m the only one feeling it.

The promises they made

The pitch was straightforward. AI handles the mundane stuff, the boilerplate, and the grunt work, and you get to spend your time on the interesting problems. GitHub’s own research claimed developers using Copilot were 55% faster and that 87% felt it preserved mental effort during repetitive tasks. Nadella called AI “bicycles for the mind” and talked about a future where a billion people could create on Microsoft’s platforms.

Who wouldn’t want that? Hand off the boring stuff, keep the fun stuff. More time to think and create. More time to actually use the skills you spent years developing.

That’s not what happened.

What actually happened

AI made producing things fast. Ridiculously fast. Code, documentation, copy, design specs, you can generate a first draft of almost anything in minutes now. The problem is that producing was never the hard part. Thinking was the hard part. Making good decisions was the hard part. AI doesn’t do that for you. It just gives you a pile of “done” that isn’t.

So now you review. Everything. You review your own AI-generated output because you didn’t actually write it. You prompted it, and prompting and writing are not the same cognitive process. You review your teammates’ AI-assisted work because they’re shipping faster too, and somebody has to make sure it all holds together. The volume of stuff landing on your desk went up. The quality bar didn’t move. You became the quality bar.

The research backs this up. A study on arXiv found that AI-assisted programming actually decreases the productivity of experienced developers by increasing technical debt and maintenance burden. Experienced developers reviewed 6.5% more code after Copilot’s introduction but saw a 19% drop in their own original output. A randomized controlled trial by METR found something even wilder: experienced open-source developers were 19% slower when using AI tools. And the kicker? They still believed AI had sped them up by 20%. We can’t even tell it’s not working.

And this isn’t just a code problem. Anything AI generates, whether it’s a blog post, a design comp, a project plan, or even a Slack message, needs a human to look at it before it ships. We didn’t eliminate work. We changed who does what. AI produces. You QA.

Are we losing the muscle?

This is the part that worries me. When I used to build something from scratch, write a component, architect a system, or draft a document, I was exercising a creative muscle. I was making hundreds of micro-decisions along the way, and each one built intuition. The act of producing wasn’t just about the output. It was about what the process did to my brain.

Now I spend a lot of that time reading something else’s work and deciding if it’s good enough. That’s a different cognitive mode entirely. Reviewing is not creating. They’re both valuable, but they’re not the same skill.

I think about junior developers coming up right now. If they’re leaning on AI to produce from day one, when do they develop the instincts that come from struggling through problems yourself? When do they build the taste that comes from making things badly, learning why, and making them better? You can’t shortcut that with a prompt. And if we’re all just reviewing AI output instead of producing our own work, I’m not sure how we keep that muscle from atrophying.

More work, different shape

I’m not anti-AI. I use Claude, I use Cursor, and I use AI tools constantly. They’re useful. But the narrative that AI reduces your workload? That hasn’t been my experience. The workload didn’t shrink. It shapeshifted.

Producing got faster, but reviewing and fixing filled the gap and then some. And the expectation from the outside didn’t adjust. If AI makes you faster, you should be producing more, right? Nobody factors in the review burden. Nobody accounts for the time spent wrestling mediocre AI output into something that meets your standards. The labor moved downstream and became invisible.

I keep coming back to one question: when does the promise actually land? When does AI get good enough that the review burden drops below what the production burden used to be? Maybe that’s next year. Maybe it’s five years out. Maybe the answer is that creative work was never about efficiency in the first place.

I want to know

I don’t have a clean answer here because I don’t think there is one yet. So instead I’ll ask: is this your experience too? Has AI freed up your creative time, or did it hand you a different pile of work? Are you producing more, or just reviewing more? Are you getting better at your craft, or getting better at evaluating someone else’s approximation of it?

I’d love to hear from people who feel like AI has given them their time back. Genuinely. Because I’d like to know what I’m doing wrong.

]]>
https://pixelflips.com/blog/the-ai-productivity-paradox/feed 0 10553
Why AI Needs UX Developers https://pixelflips.com/blog/why-ai-needs-ux-developers https://pixelflips.com/blog/why-ai-needs-ux-developers#respond Wed, 18 Feb 2026 03:17:53 +0000 https://pixelflips.com/?p=10542 The UX developer role has always been hard to explain to everyday folks. “So you design stuff?” Not exactly. “So you code stuff?” Sort of. For years, people who sit between design and engineering have fought for a seat at a table that wasn’t really built for them. Then AI showed up and flipped the whole thing.

Being a UX developer comes with a built-in identity crisis. You’re not designer enough for the design team and not engineer enough for the engineering team. Your title changes every two years depending on which LinkedIn trend is peaking. “UI developer.” “Design technologist.” “Frontend engineer.” The work stays the same. You’re the person translating between two groups that speak different languages, making sure what gets built actually matches what was intended.

For most of my career, that translation work has felt undervalued. Orgs didn’t know where to put us, so they just kept reorganizing until it was someone else’s problem. We’d get shuffled between departments. Left off project kickoffs. Skipped in planning. Then someone would ask us to justify why our role existed at all, because “the designers can just hand off specs and engineers can just build them.” Sure. And you can also throw a football at someone’s face and call it a pass.

What AI actually needs

Here’s what’s changing. AI tools can generate UI code at a speed that would’ve seemed absurd three years ago. You can describe a component in plain English and get something functional back in seconds. That’s impressive, and it’s only getting better.

But generating code isn’t the hard part. It never has been. The hard part is generating the right code, code that respects your design system’s API conventions, uses the correct tokens instead of hardcoded values, follows your accessibility patterns, and fits into the architecture your team has been building for years. AI doesn’t know any of that unless someone teaches it.

That “someone” is the person who already understands both sides. The person who knows why the design team chose a specific spacing scale and how the engineering team implements it. The person who can look at AI-generated output and immediately spot that it’s using the wrong component variant or ignoring a semantic token that exists for exactly this use case.

That’s the UX developer. That’s the bridge role. The one that your org still can’t figure out where to put on the chart.

The new leverage

What’s wild is that the day-to-day work of a UX developer (writing component documentation, defining token taxonomies, building usage guidelines) is now directly feeding AI systems. The documentation you write becomes the context window. The component API you designed becomes the constraint that keeps generated code on the rails. Those design decisions you encoded into tokens? That’s the vocabulary AI uses to make choices.

This isn’t theoretical. I’ve been building MCP servers and Claude Code skills that let AI tools interact with our design system directly. The whole exercise is bridge work. You need to understand the design intent deeply enough to encode it as rules, and you need to understand the engineering architecture well enough to make those rules actually useful in a development workflow. Strip out either side, and the whole thing falls apart.

The UX developer role went from “nice to have” to a force multiplier. When AI can generate code but can’t generate judgment, the person who provides that judgment is steering the ship. You’re not slowing things down. You’re the reason the output is actually usable.

Stepping into it

I’m not writing this to celebrate a victory; it’s obviously still wild out here. The role still carries the same challenges it always has: ambiguity, an awkward spot on the org chart, and the constant need to explain what you do. But the leverage has changed. If you’ve spent years building the connection between design and engineering, you have exactly the skill set that makes AI useful rather than just fast.

Take ownership of it. Define how AI interacts with your systems. Create documentation that serves as the training context. Develop the tools that ensure the generated output aligns with your team’s standards. It turns out we’ve been building this bridge the whole time. The only thing that changed is that everyone, including the robots, now needs to cross it.

]]>
https://pixelflips.com/blog/why-ai-needs-ux-developers/feed 0 10542
Design Systems Are Having Their Moment https://pixelflips.com/blog/design-systems-are-having-their-moment https://pixelflips.com/blog/design-systems-are-having-their-moment#respond Sun, 15 Feb 2026 22:26:42 +0000 https://pixelflips.com/?p=10526 I keep seeing the same conversation play out online. AI is going to replace designers. AI is going to replace developers. AI is going to make everything faster and cheaper, and we should all be terrified or thrilled depending on who you follow on LinkedIn.

But there’s something I think people are sleeping on. AI coding tools are only as good as the constraints you give them. And what’s a design system if not a carefully defined set of constraints? AI doesn’t make design systems obsolete. It makes them necessary.

AI code is only as good as its guardrails

Here’s what I’ve learned from working with AI coding tools daily: they’re fast. Really fast. But speed without direction just means you get to the wrong place quicker.

LLMs generate code based on patterns. When you give them clear boundaries, specific component APIs, defined token values, and documented interfaces, they produce consistent output. When you don’t, they improvise. And LLM improvisation looks like slightly different button styles on every page, spacing that’s close-but-not-quite, and color values that came from who knows where.

A design system is basically a prompt engineering layer for your entire UI. You’ve already done the hard work of defining what “correct” looks like. Now AI can actually use that.

Tokens are the new API

Design tokens have been quietly important for years. In an AI-driven workflow, they become load-bearing.

When an AI agent can read your token taxonomy, your semantic color names, your spacing scale, and your typography ramp, it doesn’t have to guess at your brand. It knows. It’s not picking #3B82F6 because that’s what blue looks like in its training data. It’s picking color.action.primary because that’s what your system defines.
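
To make that concrete, here’s a toy sketch in Python (purely illustrative, not any real token tooling) of how a semantic layer resolves to primitives, so a tool can ask for `color.action.primary` without ever touching raw hex:

```python
# Toy token store: primitives hold raw values, semantic tokens alias them.
# Token names are illustrative; the hex value matches the example above.
TOKENS = {
    "color.blue.500": "#3B82F6",                 # primitive: raw palette value
    "color.action.primary": "{color.blue.500}",  # semantic: aliases a primitive
}

def resolve(name: str) -> str:
    """Follow alias references ({other.token}) until a raw value is reached."""
    value = TOKENS[name]
    while value.startswith("{") and value.endswith("}"):
        value = TOKENS[value[1:-1]]
    return value

print(resolve("color.action.primary"))  # -> #3B82F6
```

The point of the indirection is that product code (and the AI generating it) only ever speaks the semantic vocabulary; rebranding means swapping what the primitives hold, not rewriting the UI.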

We’re already seeing this with tools like Figma’s MCP server, which feeds real component data, styles, and variables directly to AI agents. When those elements map to actual code through something like Code Connect, the agent isn’t hallucinating your UI. It’s assembling it from your real parts.

Tokens aren’t just a way to sync Figma and code anymore. They’re the shared language between humans, tools, and AI. The API your AI agent consumes to build things that actually look like they belong in your product.

Smaller teams, bigger systems

AI coding changes the economics of design systems in ways I don’t think enough people have caught on to yet.

Building and maintaining a component library used to require serious headcount. Dedicated engineers, designers, and documentation writers. A lot of orgs looked at that investment and said, “not right now.” So they shipped without a system and accumulated UI debt instead.

AI changes that equation. The bottleneck isn’t building components anymore. An AI agent can scaffold a component in minutes. The bottleneck is defining the rules. Token naming conventions. Component API patterns. Governance decisions about what goes in the system and what doesn’t. The thinking work.

That’s good news. A two-person team with clear opinions and good token architecture can maintain a system that used to require a squad. Orgs that couldn’t justify a design system team before suddenly can.

The drift problem gets worse without a system

Here’s what should worry you if your org doesn’t have a design system: more people are about to ship more UI code, faster than ever.

Vibe coding is real. Product managers, designers, junior devs, and people who weren’t writing frontend code six months ago are now generating it with AI tools. That’s exciting. But every person generating UI code without shared constraints is another source of inconsistency.

Without a design system, you get ten different interpretations of what a simple component should look like. Ten slightly different button sizes. Ten versions of your brand color that are all close, but none of them match. AI doesn’t fix this. AI accelerates it.

A design system is the immune system. Not blocking people from shipping, but keeping what they ship coherent. The AI generates the code, and the system provides the grammar.

The strongest argument you’ve ever had

I’ve spent years in the design systems space, and I know how hard it can be to justify the investment to leadership. The ROI conversation never gets easier. “Consistency” and “reusability” are real, but they don’t always move the needle in a budget meeting.

AI coding just changed that conversation. Without a design system, your AI tools produce inconsistent output. With one, they produce on-brand, accessible, production-ready UI. That’s a direct line from design system investment to AI tool effectiveness. It’s the clearest ROI story design systems have ever had.

Design systems aren’t competing with AI. They’re the infrastructure AI needs to do its job well. The orgs that figure this out early will ship faster and stay more consistent.

If you’ve been building and maintaining a design system, your work is only getting more valuable. If you haven’t started one yet, AI coding just handed you the best reason to.

]]>
https://pixelflips.com/blog/design-systems-are-having-their-moment/feed 0 10526
Zac Brown Band at Sphere https://pixelflips.com/blog/zac-brown-band-at-sphere https://pixelflips.com/blog/zac-brown-band-at-sphere#respond Wed, 31 Dec 2025 00:39:12 +0000 https://pixelflips.com/?p=10322 Some shows stay with you long after the lights come up. The Zac Brown Band at Sphere was one of them.

Waiting on the show to start. Felt like we were outdoors in GA.

I am relatively new to Zac Brown Band, really only getting into them over the past few years. After following the band’s updates in the months leading up to the Sphere shows, my expectations were extremely high. I had been impatiently waiting and excited for months to see this show, and this past weekend, it was finally time to board the plane and head to Vegas.

Live experiences like this remind me why I value authenticity and intention, themes I’ve been thinking about in my professional work as well.

Even with all that anticipation, the show still exceeded my expectations.

From the opening moments, it was obvious this was not going to feel like a normal concert. The Sphere itself is a huge part of the experience. The sound, the visuals, and the technology are on another level. It is hard to explain how immersive it feels unless you have been there, but it quickly becomes clear that this venue changes what a live show can be.

The Visuals and the Story

Definitely my favorite visuals of the show.

The visuals were easily the standout part of the night, and not just because they looked impressive. They were clearly designed to support a larger story. The entire show followed a narrative around Zac’s personal struggles and how he worked through them, using music as an outlet and a lifeline.

The opening song, “Heavy Is the Head,” was the perfect way to start the night. It is one of my favorite songs, and pairing it with the opening visuals immediately sets the tone. That moment alone pulled me in all the way.

A Phenomenal Experience

Submerged with ZBB in the Sphere

What surprised me most was how much the night felt like a shared and phenomenal experience rather than just a performance. The vibe in the room was incredibly positive. There was a lot of love in the air and a real sense that everyone was fully present and enjoying the moment together.

The story does begin in a heavier place, but that is intentional. As the show goes on, love and light clearly take over. That shift is what makes it work. Nothing about it felt dark or unsettling. It felt honest, human, and uplifting.

I was thrilled the entire time. Fully engaged. Singing along more than I probably should have, and hopefully not ruining it for anyone around me (sorry, wifey). I left the show feeling inspired and energized, and I have been playing the new album nonstop since.

It is safe to say that any future concert I attend is going to have a tough time topping this one.

Online “Demonic” Talk

Internet trolls losing it over this? The place was rocking!

Zac mentioned the online chatter during the show, and afterward, I saw the headlines claiming the visuals were demonic or satanic. Honestly, my first reaction was confusion. I do not understand how someone who watched the show could walk away with that impression.

It’s obvious they did not actually watch or attend the show.

Without the monologue and the complete storyline, it is easy to grab a few out-of-context visuals and turn them into clickbait. But that completely ignores what the show is actually about.

The message is very clear. The story starts in a rough and dark place and moves toward love, light, connection, and healing through music. That is the entire arc of the show. There was nothing sinister about it. The themes were creativity, resilience, and the power of music to pull someone through difficult moments.

The band is incredibly talented, and the production was thoughtful and intentional. Reducing that to outrage says far more about the critics than it does about the show itself. It feels like noise from people who either were not there, were not paying attention, or were simply looking for something to be mad about. Trolls make me sad.

See the Show

Veteran appreciation moment was epic!

This show was phenomenal. The visuals, the music, and the storyline all worked together to create something memorable and genuinely inspiring.

If you get the chance to see this show, go. Do not overthink it. It is easily one of the best live shows I have ever seen.

If the band ever sees this, I hope you all know how supported you are by the people who actually experienced it. We understood what you were doing. We felt it. And I appreciate you for taking the creative risk.

The Sphere is the star of the show!

Context matters. This show deserves to be seen in full. Do yourself a favor: if you enjoy ZBB, don’t miss it.


Lastly, on a side note, to the folks in the row behind us: I hope no one ever has to sit near you at another concert.

Luckily, you were unable to ruin it for my wife and me. A huge thank you to my wife for one hell of a weekend celebrating our 20th anniversary and my birthday. Unforgettable.

]]>
https://pixelflips.com/blog/zac-brown-band-at-sphere/feed 0 10322