I have been reflecting on the past three and a half years and realized there have been many ‘firsts’: my first time working in open source, building offline first experiences, and working on a fully remote async team.
Of course, any time you try something new, it pushes you in uncomfortable ways and challenges ingrained habits. You are forced to experiment, remix techniques, and think critically about what is essential.
I have learned a lot along the way and wanted to share in case there are other designers interested in dipping their toes into open source. I will break down these learnings into three areas:
I’d worked on many digital public services before joining the open source world. I thought I knew how to design in the open. I shared research findings and iterations on community blogs and ensured everything was available on GitHub. I was wrong.
There is a difference between simply publishing work online and designing a system that users can learn, configure, and extend themselves. From the first idea to long-term maintenance, every stage is open by design.
Designing tools that allow people to build their own applications is complex. You are constantly thinking about how to avoid disrupting workflows or creating unintended consequences for a vast range of users and scenarios.
ODK, the open source company I joined, is a Digital Public Good that has been building and maintaining open source tools for over 18 years. It’s one of the most widely used platforms for data collection across sectors like public health, humanitarian aid, and environmental conservation, and is used in over 190 countries.
ODK’s reach and impact continue to amaze me. In the Democratic Republic of Congo, the World Health Organization used ODK to vaccinate 17 million children for polio, and it was used in 27 EU member states for continent-wide biodiversity monitoring.
Its mobile app was also one of the first ever created on Android! So the scale and complexity of iterating and maintaining these tools is a big design challenge.

From a product design perspective, open source tooling pushes your designs to the max. They need to accommodate factors like working across sectors, regions, offline settings, 60+ languages, and access needs, a challenge familiar from government services as well. The new learning for me was designing for endless personalization and configuration.
There isn’t a week that has gone by that I haven’t learned something new from a user in the community. Someone who completely reimagined how to use the functionality in ways we could not have predicted, like the time we saw a (very keen) user create their own version of Salesforce for managing smallholder farms using ODK.
When you design for that level of scale with the “garage doors open,” you need to be comfortable sharing every part of your process. You have to be OK with putting early ideas (the ones you’d love to spend more time on) out in the world, because sharing early and often leads to better outcomes.

I was blown away by how thousands of users spend their free time contributing, sharing ideas, testing prototypes, and helping each other troubleshoot. I hadn’t seen this kind of collaboration in the private sector.
Designing with users and building trust through every stage of the process has made me a better designer. It forced me to be clear about why we are making certain decisions and explain things in plain language, because there are no fancy frameworks or corporate jargon to hide behind; jargon only leads to confusion and excludes people from the conversation.
Two of the core design principles we created as a team were transparency and trust. Transparency meant things like clearly communicating states and giving users visibility into the system so they can troubleshoot. Trust meant consistently designing with the community, evaluating risk, and anticipating potential harms to keep users safe.
At the companies I worked at in the past, these goals were often not even on the radar or were something said, but not actually implemented. I was excited to push this initiative forward and try methods that felt challenging outside of the open source space.
Creating a public roadmap had been a goal of mine for years. I have always admired the few organizations that put their plans out for everyone, including competitors, to see. We created a public roadmap using Notion and organized it into now, next, and later. The lack of timelines was initially confusing for some users and made planning difficult, but it reflected our best guess based on what we were learning.
Prioritizing a roadmap is already a difficult task internally, but constantly adapting and sharing those changes with the community was a new experience for me. Good public-facing documentation ensures there is a solid record of when, why, and what happened. It holds you accountable and ensures there is no gatekeeping of information.

Everything on the roadmap was also shared on our community forum, but we found that some ideas or things we were testing didn’t always get the wider community feedback we hoped for. To bridge this gap, we started a monthly call with the keen beans (the big advocates and contributors in the community). This did not replace traditional user research or speaking to folks who had frustrating experiences, but I saw it as another channel to connect with users.
Each month on the call, a community member shared their story, we discussed what we were working on, and we co-designed and workshopped ideas together. Above all, it allowed us to get to know the people behind the usernames and avatars. Some of my favourite sessions were when people shared their personal journeys into data collection and tech for good, or the real talk, like when projects go wrong and how others can learn from those failures.
In addition to the calls, we also had opportunities to meet up with the community (thanks to the Gates Foundation) and problem-solve together in person. Everyone plays an active role in stewarding the product forward. The ODK team is responsible for consistently pushing things along, but without the community and constant feedback, it wouldn’t have the same impact at scale.

It’s easy to get caught up in the desire to release often. Shipping features is a great feeling, especially when you see it making a tangible impact for organizations. But open source taught me to think like a steward and focus on the long-term.
Stewardship means thinking about how a feature will be maintained years from now and ensuring the foundations we build today are robust enough for others to build upon. It is a shift from shipping to sustaining.
Before ODK, I had never worked on offline mapping tools. Designing new mapping functionality was one of the first problem areas I worked on when I joined. I was intimidated by how little I knew compared to the community members.
One specific challenge involved removing the manual process of getting MBTiles onto a device. If I had skipped the learning part of this process (the messy, confusing back and forth, asking silly questions, and mapping out every technical hurdle), we would have built something that only worked for a fraction of our users.

By embracing the discomfort of not being the expert in the room, we turned a technical bottleneck into a much better experience that made it easier for non-technical users. Learning and failing in the open feels scary at first, but it is the fastest way to grow.
There is a lot of learning and unlearning happening in tech at the moment. Many people, including myself, are constantly thinking about how to be methodical about introducing AI into our workflows, where it adds value and where it could potentially cause harm.
Now more than ever, it seems like there is an overwhelming number of AI solutions in search of a problem. When designing tools for high-stakes environments like public health, you cannot afford to jump to solutioning without a solid understanding of the wider system and potential impact. Not only for users, but also considering the knock-on impacts like social, labour and environmental harms.
Perhaps that is what drew me to open source. It is full of passionate people who care about building technology responsibly, who want to contribute to something that not only helps their work but also benefits others. Do not get me wrong, it is not perfect. There may be fewer tech bros, but there are still challenges, like financial sustainability and maintaining software that underpins a larger ecosystem.

I have shared my thoughts about the benefits of experimenting and wandering in your career, like moving between manager and IC roles. I’m still a big champion of this idea. Wandering outside your lane, whether that’s in your role or trying a new domain, reinforces this learning loop.
It may seem counterintuitive to slow down and experiment with your learning, especially when the tech industry is obsessed with hyper scale and speed. At a time when it seems like everyone apart from you has figured out how to automate and monetize a side business while they sleep, it is easy to feel behind.
But maybe all of this is okay. Maybe the goal isn’t to move faster, but to move with more intention. Whether you are working in open source or just curious about being more open with your design process, I hope you find a space that allows you to be at your learner's edge and design something you steward with others.

I started writing this as part love letter to open source, but more so to the ODK team and community. I appreciate you all 💙
Although I am moving on from ODK, I am excited to continue applying these learnings in my next chapter, which unsurprisingly will stay within the world of open source :)
Also, if you’re a product designer interested in learning more about ODK and open roles, please reach out to [email protected]!
Working in the open was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
Most developers think of CSS as the tool for styling layout, spacing, and colors, but modern CSS can do much more than that.
CSS can even style some native elements in your browser that you probably never even thought about styling.
Some parts of the browser feel untouchable, like they came built-in from the browser itself. Have you ever tried changing the color of the user selection highlights? What about the scrollbars? Can CSS help us with that?
Yes, it certainly can.
If you want to learn how, please just keep reading.
CSS offers a great way to show that your website was built by a professional who left nothing undesigned: the ::selection pseudo-element.
This can be applied to the whole website, a whole page, or just a part of it, using CSS like the code below.
/* This will style the selection of the entire page */
body::selection {
background-color: pink;
}
/* This will style the selection of the "blue" class elements */
.blue::selection {
background-color: skyblue;
}
You can style the color and background-color (and also add text-shadow and text-decoration), but please consider readability here, as some people select text as they read.
Also, keep in mind accessibility considerations like color blindness and high-contrast requirements.
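As an illustrative sketch of those points (the colors here are just examples, not a recommendation), a selection style can combine these properties while keeping contrast high:

/* High-contrast selection applied site-wide */
::selection {
  background-color: #1a1a2e; /* dark navy behind selected text */
  color: #ffffff;            /* light text stays readable while selected */
  text-shadow: none;         /* drop any inherited shadow for legibility */
}

Keeping the background dark and the text light (or vice versa) helps people who select text as they read.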
For more information and a live demo, see the CodePen below:
The caret is the little blinking “cursor” inside text inputs and text areas. By default it looks like this: | and it blinks, but it can be set to a different color or shape.
We can change the shape of the caret by setting caret-shape to “bar” (Examp|), “block” (Examp▮), or “underscore” (Examp_). Note that caret-shape is a newer property and may not be supported in every browser yet.
Traditionally we see the “bar” in most web forms, the “block” is something we can see in the Terminal, or old DOS interfaces and the “underscore” is usually reserved for Word Processing software.
But that’s not all, we can also change the color of all three of these blinking carets right from our CSS styles.
input[type="text"] {
caret-color: red;
caret-shape: block;
}
For more information and a live demo, see the CodePen below:
MDN- caret color
MDN- caret shape
The scrollbars can also be tweaked by CSS, but this one I would test to death on many devices and platforms, since there are inconsistencies between them.
The CSS property scrollbar-width can be set to auto, thin, or none. The none value hides the scrollbar while keeping the element itself scrollable. I am not a fan of that combination, but I have seen many websites that scroll with no visible scrollbar, so I guess it has its place.
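As a quick sketch of those values (the class names here are just placeholders, and support still varies by platform, so test widely):

/* A thinner scrollbar, where the platform supports it */
.sidebar {
  scrollbar-width: thin;
}

/* Still scrollable, but with no visible scrollbar at all */
.carousel {
  overflow-x: auto;
  scrollbar-width: none;
}

The auto value simply keeps the platform default, so there is rarely a reason to set it explicitly.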
It’s important to note that scrollbar-color doesn’t take effect unless you give it two color values: the first is for the scrollbar thumb, and the second is for the track, which (on a Mac, at least) is shown when hovering over the scrollbar unless changed in the system settings.
If you don’t like that effect, or it throws off your design, you can always use something like the code below to make the track transparent (so no track will be visible to the user).
.scroll-element {
scrollbar-color: red transparent;
}
For more information and a live demo, see the CodePen below:
MDN- scrollbars styling
MDN- scrollbar width
MDN- scrollbar color
This one is pretty straightforward: accent-color can change the color of various form elements in your HTML documents.
It changes the default colors of elements like checkboxes, radio buttons, range inputs, and progress bars, while the browser automatically keeps the contrast accessible for users who need it.
Notice, for example, how the code below turns the checkboxes yellowgreen, and the browser renders the checkmark in black instead of the default white, since that is more readable against the lighter color.
.container.green {
accent-color: yellowgreen;
}
For more information and a live demo, see the CodePen below:
This section is actually two tips, the first one is to use something like this on your entire HTML, body element or any element you want to change theme with the user preferred mode:
html {
color-scheme: light dark;
}
I prefer to use it on the entire HTML element, like the example above, at least until you get the hang of exactly what changes and what doesn’t.
This code should automatically render everything in the right theme for the user’s preferred mode, either dark mode or light mode.
The second tip, which complements the first, is to use the code below on any element you want to change color when the user switches between light mode and dark mode; the browser should pick the correct option automatically every time.
.container {
background-color: light-dark(white, black);
}
The code above means that all your containers will get a background-color of white in light mode and black in dark mode. It is applied dynamically when the user switches between modes, so no page refresh is required.
For more information and a live demo, see the CodePen below:
MDN- color-scheme
MDN- light-dark()
This one is just for textarea and input elements; it allows them to grow and shrink depending on the content they hold.
textarea {
field-sizing: content;
width: 100%;
max-height: 50vh;
}
The example code makes a textarea change its size as the user inputs content. I highly recommend adding either width or max-width to limit its potentially infinite size, and of course the same goes for the height.
For more information and a live demo, see the CodePen below:
I wrote this article because I see many websites where none of these styles apply, many websites with the default selection color, default scrollbars, no accent-color and no dark mode to speak of.
I realize that’s partly the fault of our design tools that are not built on common grounds with our products and make it harder to show and handoff such changes to front-end developers.
But if we want to get these custom CSS styles to appear in more websites we must first get to know them, and that is why I wrote this.
Please share it with any designer or developer you think could use a refresher, and in the meantime, if you enjoyed this, you might want to read about How the tools we use change the products we design.
See you all in the comments below.
More CSS Quick Tips, Kevin Powell, YouTube
7 CSS Tricks That Will Blow Your Mind, Coding2GO, YouTube
Dark Mode in CSS Guide, Adhuham, CSS Tricks
CSS Color Functions, Sunkanmi Fafowora, CSS Tricks
The Current State of Styling Scrollbars in CSS, Chris Coyier, CSS Tricks
MDN- ::selection
MDN- caret color
MDN- caret shape
MDN- scrollbars styling
MDN- scrollbar width
MDN- scrollbar color
MDN- accent color
MDN- color-scheme
MDN- light-dark()
MDN- field-sizing
CSS you didn’t know you could style was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
In my experience in the industry, product design constraints have historically been driven by dismissing the value design can bring to companies and their products. Although this value is obvious, some companies and leaders still resist giving design teams a leading role. Still, the point of this article is not to bring back old complaints or traumas, but to plant a new idea in the design community: the time for designers to take a leading role in the industry is here, and it might last for a long time.
Let me start on some common ground by pointing out where we were a few years ago and how the picture started to shift for us. The “walls” we struggled with in companies that delimited or sometimes stopped the design’s influence now seem to be falling away because of a new digital work revolution.

In some companies, there exists an all-mighty executive product team vision in which this single unit sets the north star and makes all decisions. This product team is responsible for driving key conversations with users, stakeholders, and competitors, and for planning. The main purpose of this “executive” approach is to have one visible face for the product, a messenger across stakeholders, business needs, and multiple areas, while every other team’s only job is to receive and execute what the product team brings to the table, with no room for negotiation.
This production model obviously didn’t work (at least to me, it is a flawed way of working), and it is stressful for one group to be solely responsible for handling information from multiple communication channels and plans.
What was the design influence here?
Since design teams and other areas were relegated to a blind production cycle, their influence was almost null. Only in some conversation spaces could designers speak up with questions or product improvements, and given how this model worked, all ideas were confined to an endless backlog without the opportunity to surface and shine.
When did this wall fall?
To be fair, this model died a long time ago, or so I want to believe (although you can still see it in some traditional companies today). When the product triad arrived in the industry, people realized it was better to drive a product through a multidisciplinary team approach, understanding business, user experience, and engineering as a whole.
How to take advantage of this from now on?
Since all product decisions are now made with a clear understanding of all product perspectives towards a common goal, designers can understand business decisions firsthand and even become more proactive in the decision-making process. Business understanding is no longer an ethereal conversation for us, and it’s surprising to see how some designers have a special talent for adding business value through their expertise. Now, since designers are close to both users and the business, they can expand their influence by creating design solutions that are aligned with the business vision and drive growth. This new range of influence also opens the door to engaging senior stakeholders and having hard conversations, building communication skills that were nonexistent before.

The fall of this wall (partially, maybe?) is the newest and most rewarding improvement for designers to celebrate since AI hit the market, well, for the vibe-coding enthusiasts. On the other hand, some designers are still refusing to enter the code conversation due to several reasons I’m not going to question here. The thing is, some years ago, the ultimate expression of functional design was recreating product flows in external tools like InVision or Zeplin. To be honest, this was a groundbreaking improvement for designers at the time, but it was also limited in its ability to represent an entire user journey with multiple variables and edge cases (it was only a sequence of static images). I stopped using those tools, and I assume they have evolved over the years, but even so, with the transition from a design file to code via AI, the biggest gap between designers and productive environments has been closed.
What was the design influence here?
Before AI, code was a black box for most designers. From time to time, a rare designer could talk about JavaScript, databases, or HTML syntax, making them a sort of unicorn. For the rest of us, influence in code reviews was limited; only occasionally would devs show lines of code to explain why our design was not viable, and that was it. Many design ideas were discarded due to engineering “limitations,” so our ideas were always weakened.
When did this wall fall?
Since the AI boom, designers have discovered a new door to cross. The proliferation of AI companies whose purpose is to facilitate code transition for everyone has been the perfect catalyst for accelerating the disappearance of this barrier. Today, seeing people from different backgrounds and professions “making code” with Vibe coding tools and building businesses from scratch is to embrace the new reality that coding is no longer exclusive to specialized teams.
How to take advantage of this from now on?
You can argue a lot about the quality of code produced by a vibe-coding experience; there is plenty of controversy on this today, and I share some of those concerns. Still, even with this variable taken into account, designers can now have their own code running live for multiple purposes, which again expands their influence.
Having the full construction path, from business to design to code, I believe designers can use a set of tools and skills to be the person in tech companies who can manipulate three different languages and create value.
From the old triad model, designers retain a huge advantage: the design craft itself. No one else has that ability. Even when creating designs with AI, the results aren’t perfect, and engineers and PMs can’t spot design issues or generate design ideas the way designers can.

Who said designers can’t bring ideas to the table to change things? They are the ones for this job. Some years ago, innovation came from executives who dictated what to do with the product or the company’s dynamics, and sometimes they were disconnected from audiences or the market. That wall of innovative thinking fell long ago, when product decision-making was split across a multidisciplinary perspective. Over the years, designers thought the only place to create value was in interfaces, but the truth is that even simple design ideas can spark a revolution in companies or products.
What was the design influence here?
Some years ago, innovation in digital products was judged by how aesthetically pleasing the user interface elements were. It’s not new that, years ago, highly realistic visual metaphors were king when building websites. After that, we fell into a “flat language” in which solid colors and big fonts dominated design trends, but to me, those changes were merely visual; design’s influence was focused solely on communication.
How to take advantage of this from now on?
Since designers became more proactive in product decision-making, we began shifting the product approach to a more user-centric and business-centered experience, moving beyond pure communication and reframing how we communicate within teams, thereby creating more innovative processes and outcomes. The power of UX is so relevant that even simple tweaks to internal processes can drive innovation, not only in how a product behaves but also within the company.
Now we have the technology that shortens the gap between our way of thinking and the impact we want to achieve, but a tool alone is not enough to validate this impact. That’s why I believe that, from this year on, we can see the evolution of design as we know it.
My colleague and VP of Design, Carlos Pinilla, said something the other day that blew my mind. It was a statement that reframes designers’ skills nowadays, now that we have all the tools at hand and the walls have fallen.
“New designers should be measured on how big their imagination is, instead of how much they know about tools.” — Carlos Pinilla.
This idea complements my thesis on how a new design approach and influence can emerge from this technological revolution. Now we have the tools, but as I said before, tools by themselves only deliver limited or generic solutions, and that is where a big imagination can make a difference. This design impact can extend beyond interfaces, thereby expanding designers’ influence.
Another point raised by Carlos is that design influence can’t be broadened if the company is not aligned with this direction of innovation. Not having a clear vision for design teams will build all the walls that, as I said, limit our full potential; those are the types of companies to avoid. On the contrary, companies that start thinking about using product design as a foundational innovation vision, along with new AI tools, are places where invention from another perspective can thrive.
Let’s be honest here: design as we knew it is over. Taking days, weeks, months, or even years in Figma to create components, interfaces, and connected prototypes can start to sound expensive for companies; an ugly truth, but the truth nonetheless. Why do we need an army of designers or engineers to validate an idea? That’s a question to start thinking about, since we have advanced tools like Claude Design or Google Stitch (among others) that accelerate product creation.
In my opinion, the craft is shifting toward a new way of thinking, where we delegate the craft to tools to accelerate the process. Still, as designers, our experience and expertise can be exploited in new ways in strategic fields for companies and businesses.
Deliver complete tools and services, not just mockups; that’s the new design destiny. As I mentioned above, all the walls that might keep us in secondary or even tertiary roles are down, and this is the time to start using what we have learned so far to discover a new design value that transcends mockups and points to the entire business, functional product, and services.
My question for you is: how do you want to be the protagonist in this new scenario without falling behind?
In writing this article, I want to credit all the fantastic information sources and other authors who have written about related topics, each from an exciting, different perspective.
The Year You Finally Follow Through
UX Design 2026: from asking to choosing in the age of abundance
Experiment Like a Child, Build Like a Founder
10 UX Design Trends Driving Business Growth in 2026
Product design in 2026: the beginning of a fantastic voyage? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
I wrote this essay from firsthand experience building AI-adjacent products. I used Claude Code (an AI coding assistant) for structural feedback and copy editing on the draft. The arguments, citations, and conclusions are my own.
Disclosure: I build browser extensions that wrap around major AI chat products. That means I see the usability friction firsthand, and I have a commercial interest in a world where chat is not the only AI surface. I disclose both so readers can factor those biases into what follows.
Open any AI product launched in the last three years. Ignore the model, the logo, the branding. You will find the same interface: a text input at the bottom of the screen, a send button, and a scrollback of alternating messages.
This is not a random convergence. It is the interface that fell out of what large language models could do on day one: pattern-match on text. In 2022 we had a new capability and no time to design around it, so we shipped what was fastest to build and called it conversational AI. Three years later, the fastest thing to build has become the thing everyone builds. That is how defaults calcify.
The chat box became the default AI interface because language models produce text and text boxes accept text. The decision was about build speed, not user outcomes. A blank text field gives users no clue what the system can do, no structured place to express constraints, and no feedback about whether their request fell within the system’s competence. Before LLMs we would have treated that as a design failure.
Amelia Wattenberger made this case early. In Why Chatbots Are Not the Future of Interfaces (2023), she argued that a text field gives users “unclear affordances”: the same rectangle might be a search box, a credit card field, or a chatbot, and the user is left to guess. She also flagged the flow-state cost. Users alternate between implementation (typing) and evaluation (reading), interrupting the focus state that makes creative work possible.
Don Norman named the underlying cost forty years before LLMs existed. His concepts of the gulf of execution (the gap between what the user wants and what the system lets them do) and the gulf of evaluation (the gap between what the system shows and what the user can understand) are the clearest vocabulary for why chat fails. Nielsen Norman Group has a current summary in The Two UX Gulfs: Evaluation and Execution. A chat box maximizes both gulfs at once. The user serializes intent into prose, the system returns prose, and the mapping between the two is whatever the user can infer without any help from the UI.
Three years after Wattenberger’s essay, her argument has aged well. Users are doing more work inside chat boxes than ever, and that work is getting longer, not shorter. Nielsen Norman Group’s Accordion Editing and Apple Picking research documented the pattern empirically: users rarely get what they want on the first try, so they refine through additional prompts. The word “conversation” is a euphemism for an iteration loop that the system forced on them.

The interface we replaced with the chat box had forty years of research behind it. Ben Shneiderman named the pattern direct manipulation in 1983: visible objects, rapid reversible actions, physical-feeling gestures. The next four decades added structured forms, progressive disclosure, contextual menus, drag-and-drop, undo stacks, autocomplete, and constraint solvers. AI products adopted almost none of it. The surface shrank to a rectangle.
Shneiderman’s central argument was about the locus of control. In a direct-manipulation interface, the user sees the object they are acting on and acts on it directly. There is no translation layer between intent and action. In a chat interface, every action goes through a compression step where the user has to serialize their intent into prose and hope the system can decompress it back into the right action. That compression is where most of the UX debt sits.
Bret Victor made the same argument about prose more than a decade before ChatGPT. His 2006 essay Magic Ink calls interaction the last resort of information software. Good design presents the relevant information as a graphical surface first, and falls back to interactivity only when information alone cannot do the job. A chat-only AI inverts this completely. There is no information surface at all until the user has interacted with it, and the interaction itself is prose. Victor’s essay reads today like a warning written ahead of time.
Maggie Appleton, a designer at Elicit who has written extensively on language-model interfaces, put the alternative bluntly in her Language Model Sketchbook. Most LLM implementations, she argues, should be “spell-check sized” and do one specific thing well. Her proposed interfaces are scoped, structured, and embedded: a highlight-to-rephrase tool, a one-click summarizer, a mouse-over explainer. None of them a chat window.

2024 is the year every major AI lab shipped GUI additions on top of the chat box. Seven retrofits in twelve months, across three labs. OpenAI opened the year with the GPT Store on January 10, a tile-based catalog that is not a conversation. In May, GPT-4o Voice broke the text-only send-and-scrollback pattern. Anthropic followed in June with Artifacts, a side panel that renders code and documents next to the chat, and Projects, a persistent workspace with file uploads and custom instructions. OpenAI returned in October with Canvas, a split-screen document editor (beta October 3, full release December 10). Anthropic shipped Computer Use on October 22, a mouse-and-keyboard agent that manipulates real desktop apps. Google closed the year with Deep Research on December 11, a multi-step research agent inside Gemini with a visible plan, progress panel, and editable outline. Each of these is a GUI pattern borrowed back from the interface the chat box replaced.
Calling this progress is charitable. It is the industry discovering, retrofit by retrofit, that a text box alone cannot hold a meaningful creative surface. You cannot edit a thousand-line document by asking the bot to re-output it with “line 312 changed to X”. You cannot iterate on a design by describing it. You cannot plan a research project without seeing the plan. The moment the task has a structured output, the chat box becomes the wrong place to work, and the vendors put a canvas, a side panel, an editor, a workspace, or a planner next to it.
The pattern is admission, not innovation. Canvas, Artifacts, Projects, Computer Use, and Deep Research are what we would have built first if we had started from the user’s task rather than the model’s I/O shape. The fact that three labs arrived at seven retrofits in the same twelve months is the cleanest evidence that the original chat-only interface was under-designed for the work users actually do.

The most influential defense of chat as a UI paradigm comes from Jakob Nielsen, who argued in AI: First New UI Paradigm in 60 Years (June 2023) that AI represents the third paradigm in computing history: intent-based outcome specification. The user says what they want, not how to do it. The framing is correct and worth keeping. What is worth contesting is the quiet assumption that intent-based specification and chat-based specification are the same thing. They are not.
Expressing intent does not require prose. A date picker expresses temporal intent more precisely than any sentence. A pair of sliders expresses a tradeoff more legibly than a paragraph. A file upload expresses “work on this thing” without ambiguity. Every one of these is intent-based. None of them is chat. The chat box is one possible implementation of the paradigm, and by all accessible evidence it is a low-resolution one.
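To make the distinction concrete, a structured input carries typed, validated intent with nothing for the system to guess at. The sketch below is a toy illustration of my own (not drawn from any cited product) that models the three controls above as one typed payload:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class StructuredIntent:
    """Intent expressed through controls, not prose."""
    deadline: date           # date picker: temporal intent, no parsing needed
    speed_vs_quality: float  # slider: a tradeoff as a number in [0, 1]
    source_file: str         # file upload: "work on this thing", unambiguously

    def __post_init__(self):
        # The control enforces validity; prose intent has no such guarantee.
        if not 0.0 <= self.speed_vs_quality <= 1.0:
            raise ValueError("slider value must be between 0 and 1")

# The same intent as prose ("sometime soon, fairly fast, use my draft")
# would force a model to guess at all three values.
intent = StructuredIntent(date(2025, 6, 1), 0.7, "draft.md")
```

Every field here is intent-based in Nielsen's sense, and none of it is chat: the interface, not the model, resolves the ambiguity.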
NN/G’s own follow-up research on the six types of conversations users have with generative AI supports this distinction. Different task types need different interfaces, and collapsing all six into the same text box forces users to carry the cost of discriminating between modes the UI could have distinguished for them. Intent-based is a valuable framing. Chat-only is the tax we pay for not finishing the design.

Post-chat AI UX borrows from the GUI patterns the chat box displaced: visible affordances, structured input, direct manipulation of output, and scoped assistance tied to specific surfaces rather than a global text field that has to handle everything.
Alex Mohebbi made a related argument in UX Collective’s own Why Conversational Interfaces Are Taking Us Back to the Dark Ages of Usability, pointing out that the closest historical analogue to chat-only UIs is the command-line era that direct manipulation was invented to escape from. The industry already has the pattern to borrow from. It just needs to borrow it.
Two designers who write regularly on LLM UX have been pointing at the same alternative from different angles. Linus Lee’s Imagining Better Interfaces to Language Models treats language models as tools to explore latent spaces of ideas, not as chat endpoints. Users manipulate the model’s path directly through visual affordances, not by prompting for the next response. Geoffrey Litt’s Malleable Software in the Age of LLMs (March 2023) argues that LLMs can finally give end-users the ability to bend software to their specific task, but only if the interface exposes the software’s structure. A chat window hides all structure by design. A post-chat UI surfaces it.
Concrete examples of post-chat surfaces already shipping: highlight-based rephrase tools in Google Docs, inline cell autocomplete in Notion and Cursor, one-click image variants in Figma, comment-to-commit flows in Linear. None of these require the user to write a prompt. All of them are intent-based in Nielsen’s sense. Each is smaller, more specific, and more usable than a chat box. Appleton’s “spell-check sized” framing predicted all of them, and the good AI UX work of the next three years will be distributed across a thousand of those scoped surfaces rather than concentrated in one generalized text field.

Follow me on Medium for more essays on AI UX and the design reality of building for global audiences. If you think the chat box is the worst interface we could have picked, what have you seen that does better?
About the author: Adi Leviim is a full-stack engineer and product builder with 7+ years of experience shipping commercial software to global audiences. He writes about AI UX, the design reality of building for millions of users, and the gap between AI demos and production AI. Follow him on Medium for essays at the intersection of engineering and design.
The chat box isn’t a UI paradigm. It’s what shipped. was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
]]>
LLMs trained on the web have absorbed its worst design habits. Or, to be precise, our worst design habits.
Even though they are widely considered unethical, many companies use design tricks to deceive users into making choices they would not otherwise make. These are the so-called UX dark patterns.
Now all these manipulative techniques have been inherited by LLMs and replicated without intent. The same way they repeat clunky sentence structures when you prompt them to write a social media post or blog article, they also keep churning out contact forms and pop-up messages designed to coerce people into actions they never intended to take.
The models learned from us. Now we have to learn to keep them in check.
Let’s make something clear before we go further: an AI doesn’t intend to make you click a button. What you see it generate comes from being trained on a web where manipulation was already baked in. An LLM can’t reason, or at least not as humans do. It can simulate reasoning because it has been trained on billions of examples that include everyday logic and social conventions, but it has no beliefs or desires.
A 2026 study from UC San Diego, titled Deception at Scale, put numbers to something many designers had only suspected. After analyzing 1,296 LLM-generated ecommerce components, researchers found that 55.8% contained at least one deceptive design pattern, while 30.6% featured two or more. The most unsettling part? Users never asked for any of these dark patterns. The models simply defaulted to them, baking deception into the UI by design.
Interface interference was the dominant strategy: using color psychology to steer actions and hiding essential information. In practice, that looks like “Accept” buttons in loud, high-contrast colors next to a “Decline” link that’s barely visible, or membership cancellation flows designed to exhaust you.
When prompts emphasized business interests, such as increasing sales, the number of components with deceptive designs increased by 15.8 percentage points. This suggests that if you tell an LLM to “optimize for conversions,” you’re asking it to reach into everything it ever learned about manipulating users and apply it to your product.
Flip it around and tell the model to prioritize user interests, and dark patterns only drop by 5.8 percentage points. Pushing toward manipulation is far more effective than pushing away from it.

To allude to the famous line from a well-known novel and film trilogy: the way to stop LLMs from generating dark patterns might be a single prompt, the one prompt. Or, as it usually goes, trying and failing, again and again, until you arrive at it.
It’s been around three and a half years since the public release of ChatGPT, which is also when most of us first learned what prompting meant and how important it is for “educating” an AI to give you the right answer.
In “Create a Fear of Missing Out,” researchers prompted ChatGPT to generate 20 websites. Every single one contained at least one deceptive design pattern. On average, each site included five, and the model raised no warning at any point.
DarkBench reaches the same conclusion. The benchmark tested 14 language models from OpenAI, Anthropic, Meta, Mistral, and Google across 660 prompts covering multiple categories of dark patterns. Across all models, manipulative behaviors appeared in 30% to 61% of interactions.
The research is clear. We’re not dealing with occasional mistakes. We’re looking at behavior that shows up consistently across models and scenarios. That changes how we should think about prompting and its role in counteracting deceptive design habits.
I used to think of prompting as some kind of magic phrase, but now I treat it as a discipline in its own right. Designers must prompt LLMs to avoid falling back on every conversion trick they’ve learned, which means spelling things out: no pre-selected add-ons, no hidden fees, no urgency cues, no asymmetric button sizing. The more specific the prompt, the cleaner the output.
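As a sketch of that discipline, here is one way to make the constraints explicit rather than implied. The constraint list and wording below are my own illustration of “spelling things out”, not a vetted recipe from the research:

```python
# Hypothetical guardrail prompt builder. The banned-pattern list is
# illustrative; a production list would be longer and team-specific.
BANNED_PATTERNS = [
    "pre-selected add-ons or pre-checked opt-in boxes",
    "hidden or late-revealed fees",
    "urgency or scarcity cues (countdowns, 'only 2 left')",
    "asymmetric button sizing or contrast between accept and decline",
]

def build_system_prompt(task: str) -> str:
    """Prefix a UI-generation task with explicit anti-dark-pattern rules."""
    rules = "\n".join(f"- Do not use {p}." for p in BANNED_PATTERNS)
    return (
        "You generate user interfaces. Prioritize user interests.\n"
        f"Hard constraints:\n{rules}\n\n"
        f"Task: {task}"
    )

prompt = build_system_prompt("Design a checkout flow for a bookstore.")
```

The point is not this exact wording but the shape: the ethical constraints live in the prompt as named prohibitions, so a reviewer can see what the model was and was not told.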

So far, we’ve been talking about interface-level dark patterns. We’ve seen experiments showing that LLM output is riddled with them: pre-checked boxes, manipulative button colors, hidden costs buried in checkout flows, and interminable cancellation pages.
However, LLMs also manipulate through conversation. They create new forms of dark patterns that have nothing to do with visual UI design. Researchers define these as manipulative or deceptive behaviors enacted in dialogue, such as exaggerated agreement or subtle privacy intrusions.
In The Siren Song of LLMs, researchers explore how we actually perceive and react to deceptive tactics in AI. One of its more uncomfortable findings is that many users didn’t recognize these dark patterns as manipulation. They saw them as normal assistance. Because the AI felt helpful, the deceptive behavior was “normalized”, leaving users unaware they were being nudged at all.
Responsibility for these behaviors was attributed in different ways: to companies and developers, to the model itself, or to users. Nobody had a clear answer for whose problem it was, which means it’s everyone’s problem and nobody’s priority.
Your team may not even be aware that conversational dark patterns exist, but the AI writing your microcopy, drafting your onboarding flow, or generating your support bot dialogue is nudging people in directions they never asked to go, in a voice that sounds quite reasonable.

LLMs have no self-correcting mechanism. They ship whatever the web taught them was normal, and the web taught them that manipulation converts.
The designers, product managers, and CDOs who still believe in ethical, accessible products are the only real line of defense. They need to take responsibility for auditing AI-generated output before it ships (focusing on intent), writing prompts that are specific about what they want to achieve (and what they will not do), and treating ethical design as a constraint to engineer around.
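Auditing can start simple. The heuristic below is my own illustrative sketch (not a tool from the cited studies) that flags two of the patterns the research found most often in generated markup, pre-checked opt-ins and urgency language:

```python
import re

# Illustrative red flags only; a real audit would cover far more patterns
# (hidden fees, asymmetric buttons, confirm-shaming copy, and so on).
URGENCY_WORDS = re.compile(r"\b(hurry|last chance|only \d+ left|expires)\b", re.I)
PRECHECKED = re.compile(r"<input[^>]*type=['\"]checkbox['\"][^>]*\bchecked\b", re.I)

def audit_markup(html: str) -> list[str]:
    """Return human-readable findings for a fragment of generated HTML."""
    findings = []
    if PRECHECKED.search(html):
        findings.append("pre-checked checkbox (forced opt-in)")
    if URGENCY_WORDS.search(html):
        findings.append("urgency/scarcity language")
    return findings

snippet = '<input type="checkbox" checked> Add insurance. Hurry, expires soon!'
```

A regex pass will never catch intent, which is why the audit above is a floor, not a substitute for a human reviewer asking what the flow is trying to make the user do.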
Remember that LLMs don’t have ethical principles, so don’t assume they do.
Arin Bhowmick (@arinbhowmick) is Chief Design Officer at SAP, based in San Francisco, California. The above article is personal and does not necessarily represent SAP’s positions, strategies or opinions.
The web trained AI to deceive. Now designers have to untrain it. was originally published in UX Collective on Medium.
]]>
“The sentiment of the change brought on by AI has never been more relevant than it is now. Technology has always accelerated, but it feels like we are at an inflection point. Where AI business innovation, AI automation, and AI-driven technological disruption are shaping us faster than we can behold. Whether you like it or not, our tools are shaping us, and we are complicit in their methods and tricks.”
We become what we behold →
By Chris R Becker

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.


I watched the Manosphere doc; here is how design makes things worse →
By Maria Teresa Stella

Oh, but there’s one more thing →
By Peter (Zak) Zakrzewski

Notes from the people building your future →
By Dora Czerna
What we behold, the trust-latency gap, designing haptics was originally published in UX Collective on Medium.
]]>