
Critique

On elevating craft through critical thinking.

Photo by Seiji Seiji

“The trouble with most of us is that we'd rather be ruined by praise than saved by criticism.”
Norman Vincent Peale

A tool to improve design effectiveness

A design critique is a structured, collaborative session where designers, peers, and stakeholders evaluate a design artifact to provide constructive feedback and improve the design's effectiveness, quality, and alignment with user needs and business objectives. A good design critique is never about personal taste and should always strive to be as objective as possible.

Critiques as necessary collisions

A critique is not a governance mechanism, nor is it a group brainstorming session. It's a necessary collision. It’s the intentional application of adversarial thought to something that isn't finished yet. Its sole purpose is to pressure-test the underlying assumptions. Ultimately, critiques are about injecting constructive doubt into a designer's premature certainty before they build too much or go too far.

Four-panel photo showing the process of sculpting a realistic ear in clay, from an initial rough shape to the final detailed form.

Necessarily friction, necessarily collisions (Photo source)

Not a review, not a jam

Unlike a Design Review (a checkpoint focused on business alignment and binary pass/fail standards), critiques seek friction to elevate the craft and integrity of the solution itself. A critique is not a Design Jam either, which is a co-creation activity focused on shared ownership. This might all sound like semantics, but it's not. It's crucial to get everyone on the same page about what kind of meeting this is before they even walk in. Start with the meeting invite.

Crits are a powerful training ground

Aside from leaving the room full of ideas on how to improve the designs, the benefits of critiques extend far beyond the work itself. A good critique helps designers build organizational capital and executive presence. Think of it as the cheapest, safest stage for a high-stakes failure. It forces the designer to instantly look past the thing they made and focus on the collective goal we're all trying to hit. It's a recurring workout: articulating business value and defending assumptions under friendly fire. That exercise strengthens a designer's ability to communicate conviction and absorb feedback without crumbling. Designers who do this often are the ones who get to drive high-level decisions later on.

Crits help designers decouple from artifacts

The palpable terror felt by junior designers during critique is a direct result of an identity crisis: they have yet to fully decouple the design artifact from the self. To them, the design is a proxy for their professional legitimacy, and feedback is not data but a judgment of their worth. Maturity hits the moment a designer realizes that the critique isn't an exam they have to pass, or a place to get pats on the back. Mature designers look forward to the "hard edges" of feedback because they understand that a critique session is like a controlled burn: it preemptively eliminates costly, hidden flaws in private, allowing them to harden the solution and move toward production with stronger confidence.

A well-facilitated crit is essential to extract value

The critique’s success is measured by the specificity of the resulting feedback, which all rides on the facilitator's ability to control the cognitive load of the room, facilitate the discussion, and follow up. This demands some rigor: setting expectations and the right context, defining the scope, and specifying the type of feedback that would be the most helpful at this stage in the design lifecycle. Whether this is done by the presenter or a third person, a well-facilitated crit is essential to extract value from that time.

Detailed pencil sketches for an animated insect character, including full body, skeletal views, head iterations, and breakdowns of limbs and body in various poses.

Tireless iteration is a feature, not a bug (Photo source)

A critique is a space for questions, not decisions

The greatest threat to a good critique is the designer's impulse to start premature solutioning—or, even worse, defending their personal investment in the thing they made. The session shouldn’t be a place for anyone to prove they are "right." Your job is to absorb the feedback, seek deeper understanding, document the conflict, and display intellectual detachment from the artifact. If you jump into immediate solutioning, you'll starve the room of future input.

“Good design begins with honesty, asks tough questions, comes from collaboration and from trusting your intuition.”
Freeman Thomas

Highlighting what works is as important as calling out what doesn’t

Effective feedback must begin by anchoring success and highlighting what works in the work being presented, as it validates the team's shared definition of “good” and dictates which design patterns should be scaled versus those that shouldn’t. Without that initial anchoring, the critique devolves into a reactive, negative-only exercise. You end up wasting resources fixing only what's broken, without the strategic recognition needed to replicate and leverage what's already proven.

Bring the good and the bad

Don't overcurate the work. Resist the temptation to only bring the ideas that are already destined for success—the ones that make you look good in front of the team. A critique is not a place for a victory lap. It's a low-cost lab for trying new things. Make sure to show the artifacts you are most stuck on or afraid of. This act of deliberate vulnerability is strategic candor: it de-risks the organization by pushing ambiguity into the light when the cost of change is still quite low.

“Remember: when people tell you something’s wrong or doesn’t work for them, they are almost always right. When they tell you exactly what they think is wrong and how to fix it, they are almost always wrong.”
Neil Gaiman

Know when to move on

Failure is not the rejection of an idea, but stubborn insistence on the wrong ones. The better response to a failed idea is to archive the learnings collectively (documenting why it failed now) and move on. Accepting that a design is not solving the problem is, paradoxically, the fastest route to future growth.

Avoid participation theater

Few things kill a critique faster than the inevitable intrusion of low-leverage tactical feedback. "But is the color contrast AA accessible?" is rarely genuine curiosity. This form of what-aboutism is often a form of participation theater, a reflexive urge to demonstrate basic competency when the true strategic friction is too uncomfortable to touch or hard to articulate. This type of intellectual vanity ends up wasting the most expensive hour of cross-functional time on issues that can and should be automated, delegated to async review, or resolved through other mechanisms. Get the easy stuff out of the way and focus on the hard questions to be solved.

Discomfort is a good thing

If you enter a critique feeling entirely comfortable, it means one of two things: the work is not bold enough, or the artifact is being presented so late in the cycle that any major course correction is now painful. Feeling slightly exposed means you are operating at the edge of the team’s risk tolerance. If you are comfortable, you are not trying hard enough.

Two-panel image. Left: a stylized black-and-white sketch of a building complex. Right: a black-and-white photograph of the National Congress of Brazil building, featuring two towers and two domes.

Bold enough (National Congress of Brazil by Oscar Niemeyer, photo by Gonzalo Viramonte)

Crits are like human firewalls

In the age of AI, the human critique session becomes even more important. LLMs can generate ideas in five seconds, but stress-testing those ideas with contextual knowledge, taste, and vision is something you should be better at. As AI accelerates the production of “technically correct” and “aesthetically optimized” work, relying on AI alone creates a real risk of mediocrity. AI is trained to be predictable; crits are all about friction: political, organizational, or strategic.

Crits are how we get better

Critiques don't just improve designs; they develop critical thinking. Each session should build on the previous ones; it’s an opportunity to assess everyone’s skill sets and broaden our understanding of design. If people attend a critique just to share their thoughts, they are not improving their way of thinking. You should only join a crit if you’re open to the new—ideas from others, ways of framing the problem, and paths to personal growth. A crit is a space to reflect on the design, but also on how we think about design.


Why AI is exposing design’s craft crisis

AI didn’t create the craft crisis in design — it exposed the technical literacy gap that’s been eroding strategic influence for over a decade.

Written by Dolphia Arnstein

Drawing of a monkey

At Figma’s Config 2024, CEO Dylan Field stood before 10,000 designers and typed a simple prompt: “A personal portfolio website for a sustainable Middle Earth architect.” Seconds later, a complete UI layout materialized on screen. The crowd watched as Figma’s AI generated wireframes, layouts, and design systems from text prompts.

Field’s pitch was seductive: “In a world where more software is being created and reimagined because of AI, designing and building products is everyone’s business.”

Everyone’s business. The democratization promise we’ve heard before — this time powered by GPT-4o and Amazon’s image generators. Some features got huge reactions. The Rename Layers tool earned cheers. But on social media after the event, a different story emerged.


Dylan Field at Figma Config 2024 (Source: Figma blog)

Designer Sebastiaan de With wrote what many were thinking: “Figma AI will kill most — if not all — of these entry-level design jobs… if someone can simply get that poster (and 10 more like it) generated in a click, people will opt for that most of the time.” The anxiety wasn’t about the technology. It was about what the technology revealed: if AI can do your job with a text prompt, what exactly is your job?

Then came the Apple Weather app incident. Within days of launch, someone noticed Figma’s AI was generating designs that looked suspiciously like Apple’s Weather app. Not inspired by. Resembling. Field disabled the feature and admitted the company had rushed to hit a Config deadline. The feature relaunched months later as “First Draft,” with better guardrails.

But the damage was philosophical, not just technical. The promise of AI democratizing design ran headlong into a paradox nobody wanted to say out loud: these tools don’t make design accessible to everyone. They make design accessible to people who already understand what they’re looking at.

The Paradox Nobody Mentions

Here’s what actually happened when non-technical users got their hands on AI design tools.

In May 2025, accessibility expert Adrian Roselli tested Figma Sites — the company’s new publish-to-web feature that promised to turn Figma designs into live websites. He ran automated tests on Figma’s own flagship demo sites. The results were brutal: 210 WCAG violations on one demo site. Another had 107 violations. Images with no alt text. Contrast ratios failing basic accessibility standards. The HTML output was what Roselli called “div soup” — everything rendered as generic container elements, even headings and navigation.

The technical implementation was worse. Drop caps were created using background-colored underscores, a technique that belonged in 1974 typewriter manuals, not modern web development. One commenter created a satirical video called “Introducing: Webbed Sites.” Another posted a reaction titled “Figma’s New Horrific DIV Generator.”

This wasn’t an AI training problem. This was an expertise problem masquerading as a democratization tool. Figma Sites generated technically valid HTML that was utterly inappropriate for actual use. If you don’t know why that’s a problem, the tool won’t help you. If you do know why it’s a problem, you probably wouldn’t use the tool.
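
To make that concrete: many of the failures Roselli's automated tests flagged reduce to checks a designer can reason about. Below is a minimal sketch of the standard WCAG 2.x contrast calculation; the formula comes from the WCAG definition of relative luminance, while the helper names and sample colors are my own.

```typescript
// Relative luminance per WCAG 2.x: linearize each sRGB channel, then weight them.
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * channelToLinear(r) + 0.7152 * channelToLinear(g) + 0.0722 * channelToLinear(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05). AA normal text needs at least 4.5:1.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Illustrative check: mid-grey text on a white background fails AA.
const ratio = contrastRatio([153, 153, 153], [255, 255, 255]);
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA" : "fails AA"); // about 2.85: fails AA
```

The point is not that designers should write this function. It is that knowing such checks exist, and what they actually measure, is part of being able to evaluate what a tool produced.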

The pattern repeated across every AI design tool that launched in 2024–2025. V0 by Vercel produced code that stopped mid-generation. Users reported the AI losing context and forgetting previous work. Bolt.new earned a 1.5-star rating on Trustpilot, with users reporting they spent over a thousand dollars in tokens just trying to fix code problems. One developer burned through 20 million tokens on a single authentication issue. Critical files went missing. Projects deployed with blank screens.

Claude Code, despite being one of the more capable AI coding tools, experienced performance degradation so severe that GitHub users reported 30–40% drops in development speed. Tasks that previously took one day required two or three. The most telling detail: Anthropic had to publish an official postmortem admitting infrastructure bugs caused the AI to produce broken code with odd characters.

And then there was Builder.ai — the cautionary tale everyone should study. Valued at $1.5 billion with backing from Microsoft and Qatar’s sovereign wealth fund, Builder.ai promised AI-powered app development. The company employed over 1,500 people and raised $445 million. In May 2025, it filed for bankruptcy. The reason? The “AI” was actually 700 Indian engineers manually coding everything. The Wall Street Journal had exposed this back in 2019, but the company limped along for six more years. Enterprise clients experienced apps crashing under basic user loads. A former employee later joked that the “AI” stood for “Another Indian.”

The paradox crystallizes: tools marketed as democratization require more technical knowledge than traditional workflows, not less. You need to understand prompt engineering, debug AI-generated output, recognize when the AI is hallucinating, and know enough about technical constraints to evaluate whether what the AI produces is actually feasible.

We told ourselves AI would level the playing field. Instead, it raised the floor. If you can’t tell the difference between functional code and broken code, between accessible design and inaccessible design, between scalable architecture and technical debt, these tools just let you fail faster.

How we got here

The craft crisis didn’t start with AI. AI just made it impossible to ignore.

Somewhere in the early 2010s, the design industry made a collective decision: designers don’t need to code. Alan Cooper, the father of Visual Basic and interaction design pioneer, wrote definitively that coding wasn’t necessary for top-notch designers. Design Thinking became the dominant framework, emphasizing empathy and process over technical craft. Design bootcamps proliferated with promises of career changes in six months, explicitly advertising that no coding knowledge was required.

Google’s UX Certificate on Coursera states clearly: “UX design requires no knowledge of coding.” For $300 and six months at 10 hours per week, you could become a UX designer. Bootcamps charged $10,000 for 12-week programs focused on wireframing, prototyping, and user research — not technical implementation.

This wasn’t gatekeeping by engineers. This was the design industry telling itself a story about what designers needed to know. The story went like this: designers focus on users, empathy, and problem-solving. Engineers focus on implementation, code, and technical constraints. Clean separation of concerns. Everyone stays in their lane.

What we lost wasn’t the ability to code. What we lost was the ability to participate in technical conversations.

Understanding why an API limitation makes a feature infeasible. Recognizing when performance constraints should influence design decisions. Knowing enough about technical debt to engage in build-versus-buy discussions. Being able to evaluate whether AI-generated code actually works or just looks like it works.

Roger Wong, analyzing the design talent crisis in 2025, put it bluntly: entry-level designers at Big Tech now represent just 7% of all hires — a 50% drop from 2019. How do designers develop taste, craft, and strategic thinking without doing the foundational work? The industry created a pipeline that produces designers who can talk about empathy but can’t evaluate technical trade-offs.

The consequences show up everywhere. UXPin research found that 62% of developers spend significant time redoing designs due to communication breakdowns. Not because designers made bad aesthetic choices. Because designers proposed things that were technically nonsensical, or missed constraints that would have changed the entire approach.

A designer proposes a feature without understanding that it requires a new API endpoint, which requires backend changes, which requires database migrations, which means a three-month timeline instead of a three-week timeline. The feature gets de-scoped or killed. The designer learns their proposals aren’t strategic input — they’re suggestions engineers will simplify into whatever is actually buildable.

This happened so many times, in so many companies, that it became the pattern. Designers create mockups. Engineers override them based on technical constraints the designer didn’t know existed. The designer spends days in “alignment meetings” trying to advocate for the user, but can’t speak the technical language needed to actually influence the decision. Business stakeholders step in to make product calls because they can at least understand the technical constraints, even if they don’t understand users.

Rune Madsen, in his essay “Product Design Is Lost,” captured this perfectly: designers found themselves “in a place between strategy and implementation, yet they are not fully empowered to influence either.” We’re filling our time with checklists instead of focusing on the very thing that makes designers relevant. We’ve been painted into a corner.

The industry did this to itself. And we called it progress.

Why This Matters Now

Here’s the part that makes designers defensive: we lost influence long before AI showed up. AI just exposed the gap.

The “Great Design Handoff” wasn’t a single event — it was a slow erosion of strategic control. Growth teams took over conversion optimization. Algorithms dictated layout and personalization. Business stakeholders made product decisions. Design leaders found themselves defending the existence of their teams rather than shaping product strategy.

The evidence is everywhere. IDEO — the company that popularized Design Thinking — lost half its headcount between 2020 and 2025. Revenue dropped from $300 million to $100 million. Google laid off over 100 designers in November 2024, cutting some cloud design teams in half. IBM eliminated its chief design officer position entirely. Autodesk cut 1,350 employees in December 2024. Across tech, 150,000+ jobs disappeared in 2024 alone.

But here’s the telling detail: only one major company hired a chief design officer in the second half of 2023. PayPal. That’s it. Executive-level design roles are disappearing across corporate America, and it’s not because AI replaced them. It’s because the roles stopped delivering strategic value.

Fast Company reported on a March 2024 conference where design leaders from P&G, Ford, Verizon, GE Healthcare, and other Fortune 500 companies confronted the question directly: “Is Design Dead?” The consensus wasn’t about AI. It was about business priorities. Design has been deprioritized during a business cycle championing technology and marketing.

Netflix provides the clearest case study. Eighty percent of all viewing hours come from algorithmic recommendations. The homepage layout, row ordering, and even the thumbnails users see are determined by algorithms, not designers. Netflix’s personalization saves over a billion dollars per year in subscriber retention. When the business case is that strong, design becomes an execution function serving algorithmic strategy.

Growth teams present a similar challenge. Andrew Chen, a prominent growth expert, notes that growth teams work best when they can run lightweight experiments quickly. Design-conscious companies resist this because their incentives reward large, complex projects rather than many small changes. So growth teams route around design, making decisions that move metrics even when they compromise user experience. Princeton found that 1 in 10 of 11,000 shopping sites used dark patterns. Zurich University found 95% of popular Android apps had at least one dark pattern. The European Commission found 97% of apps used by EU consumers contained dark patterns.

These aren’t isolated bad actors. These are systemic outcomes when growth imperatives override design principles, and designers lack the technical fluency to advocate effectively in those conversations.

The root cause is simple: most product decisions are fundamentally technical. They involve technical constraints, technical trade-offs, technical debt, and technical feasibility. If you can’t participate in technical conversations, you can’t influence product strategy. If you can’t influence product strategy, you end up in alignment meetings explaining user needs to people making technical decisions you don’t fully understand.

This is why the handoff happened. Not because AI replaced designers. Because designers couldn’t articulate why technical decisions should consider user impact in language that resonated with the people making those decisions.

This Isn’t About Coding

Let me be crystal clear: this is not an argument that designers should learn to code production software.

This is about strategic literacy. Understanding enough about technical systems to make informed design decisions and advocate effectively for users when those decisions are being made.

There’s a difference between writing production code and understanding what code does. Between implementing a database schema and knowing why data models influence user workflows. Between configuring a build pipeline and recognizing when technical debt is accumulating in ways that will limit future design possibilities.

Strategic literacy means you can:


• Evaluate whether AI-generated code is functional or broken (see the sketch after this list)
• Understand API constraints well enough to design features that are actually feasible
• Recognize when performance implications should influence interaction patterns
• Participate in technical debt conversations because you understand what’s being traded off
• Spot when “technical constraints” are actually implementation choices masquerading as limitations
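
To make the first point concrete, here is a hypothetical example of the kind of plausible-looking output an AI assistant can produce. It type-checks and reads naturally, but the async predicate means the filter keeps every item; the function and helper names are invented for illustration.

```typescript
// Hypothetical AI-generated snippet: compiles, reads plausibly, and is broken.
// Array.prototype.filter treats the returned Promise as truthy, so nothing is filtered out.
async function keepDeliverableEmails(emails: string[]): Promise<string[]> {
  return emails.filter(async (email) => await isDeliverable(email)); // bug: keeps every email
}

// A working version resolves the checks first, then filters on the resolved booleans.
async function keepDeliverableEmailsFixed(emails: string[]): Promise<string[]> {
  const checks = await Promise.all(emails.map((email) => isDeliverable(email)));
  return emails.filter((_, i) => checks[i]);
}

// Stand-in validity check so the sketch runs on its own; any async check would do.
async function isDeliverable(email: string): Promise<boolean> {
  return email.includes("@");
}
```

Both versions look equally confident. Only one does what was asked, and telling them apart is exactly the literacy this list describes.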

The design engineers who maintained influence through the 2024–2025 crisis weren’t necessarily writing backend services. But they understood technical systems deeply enough to design with constraints in mind, advocate for users in technical conversations, and ship work that didn’t require wholesale engineering rewrites.

Rauno Freiberg at Vercel describes his role as providing “the Design team with Engineering firepower to lead and ship our own projects end-to-end.” Paco Coursey at Linear says design and code don’t feel separate to him. These aren’t people who abandoned design for engineering. They’re people who refused to treat design as separate from implementation.

The counter-argument is always gatekeeping: requiring technical knowledge excludes people from design. But what actually excludes people is creating a professional track that leads to strategic irrelevance. Roger Wong’s research shows entry-level hiring has collapsed. Matt Ström warns that if we don’t train junior designers now, we’ll have no senior designers for complex work AI can’t do. Design Week points out that students aren’t being equipped with essential technical skills.

We told designers they didn’t need technical knowledge. Then we eliminated their jobs when they couldn’t influence technical decisions. That’s not inclusion. That’s malpractice.

The question isn’t whether designers should code. The question is whether designers should understand the medium they’re designing for. Whether they should be able to evaluate their own work. Whether they should have enough technical fluency to advocate for users when technical decisions are being made.

That’s the baseline for strategic relevance.

What Changes

The opportunity here is clearer than the design industry wants to admit: technical understanding is learnable, and the designers who develop it will have influence that others don’t.

Look at what technical literacy enables. At Airbnb, a designer studying the star rating system realized replacing stars with hearts could feel more personal. That design change increased bookings by more than 30%. Not because the designer had great visual taste. Because the designer understood the technical system well enough to identify a high-leverage intervention.

At GitHub, designers contribute production code to the Primer design system. They don’t do this because they want to be engineers. They do it because understanding the technical implementation makes them better designers. They can prototype with real constraints, collaborate with engineers as peers, and ship work that actually functions.

Vercel’s design engineering team operates with “aesthetic sensibility with technical skills,” which allows them to “deeply understand a problem, then design, build, and ship a solution autonomously.” This isn’t a replacement for traditional design — it’s an expansion of what design can influence.

The technical skills that matter most:


• Understanding data models and how they constrain user workflows (see the sketch below)
• Knowing enough about APIs to design technically feasible features
• Recognizing performance bottlenecks that should influence interaction patterns
• Evaluating AI-generated outputs for technical correctness
• Understanding how design systems map to component libraries
• Participating in build-versus-buy discussions with actual technical context

None of this requires a computer science degree. It requires curiosity about how things actually work and a willingness to learn technical fundamentals.
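
As a hedged illustration of the first bullet, here is a minimal sketch of how a stored data model quietly dictates what the interface can and cannot do; all type and field names are invented for the example.

```typescript
// If each review stores a whole-number rating...
type Review = {
  id: string;
  rating: 1 | 2 | 3 | 4 | 5; // integers only: the review form cannot offer half-star input
  createdAt: string;         // ISO timestamp
};

// ...then only the aggregate can be fractional, so a half-star display makes sense
// on the listing page but not on the individual review form.
function averageRating(reviews: Review[]): number {
  if (reviews.length === 0) return 0;
  const total = reviews.reduce((sum, review) => sum + review.rating, 0);
  return total / reviews.length; // e.g. 4.3, which the UI rounds to the nearest half star
}
```

Noticing that constraint early is the difference between mocking up an input the backend cannot store and designing one it can.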

The designers maintaining strategic influence through 2024–2025 aren’t the ones with the best Figma skills. They’re the ones who can participate in technical conversations, evaluate trade-offs, and advocate for users in language that resonates with engineers and product leaders.

This is the path forward. Not back to “designers should code” debates from the 2010s. Forward to “designers should understand the systems they’re designing for” as a baseline professional expectation.

The Reckoning

When Figma’s AI generated Apple Weather app clones, it revealed that many designers couldn’t articulate what made a design original beyond surface aesthetics. When Figma Sites produced 210 WCAG violations, it showed that accessibility knowledge wasn’t widespread enough to catch obviously broken output. Research shows that 81% of homepages have low-contrast text, and when AI trains on this data, it perpetuates systemic exclusion. When Builder.ai collapsed after six years of pretending AI was doing what engineers actually did, it demonstrated how easy it is to fake technical competence in an industry that stopped valuing technical craft.

The designers who maintained influence through this period share one thing: technical depth. Not necessarily coding ability, but deep understanding of technical systems. They can evaluate AI output. They can participate in architecture discussions. They can advocate for users in technical terms that actually influence decisions.

The designers who lost influence — and there were many, given 150,000+ layoffs — often shared a different trait: they treated design as separate from implementation. They created mockups for engineers to “build.” They relied on tools without understanding outputs. They couldn’t evaluate technical trade-offs or participate meaningfully in technical conversations.

The question facing the industry isn’t whether AI will replace designers. It’s whether designers will develop the technical literacy needed to remain strategically relevant in a world where most product decisions are fundamentally technical.

Some will argue this sets the bar too high. That requiring technical understanding excludes people. But what actually excludes people is creating a professional track that leads to irrelevance. Entry-level hiring has collapsed. Design leadership roles are disappearing. The “Great Design Handoff” already happened, and it wasn’t to AI — it was to growth teams, algorithms, and business stakeholders who could at least speak the technical language of trade-offs and constraints.

The industry told designers they didn’t need to understand implementation. Then eliminated their influence when they couldn’t participate in implementation discussions. That’s not AI’s fault. That’s ours.

The technical reckoning is here. The designers who face it with curiosity about how systems actually work will have opportunities the industry hasn’t seen in years. The designers who keep insisting technical knowledge isn’t their job will find that strategic influence isn’t theirs either. 

The choice, as always, is ours to make.


A new navigation paradigm

Is AI truly eliminating navigation, or is it simply shifting its agent and form?

Written by Francisco Nunes

This essay is the result of two encounters. The first took place during a session of It's Book Time, a study group focused on practicing English among Brazilian designers and product enthusiasts. In that meeting, we discussed Andrew Sims’ article, “Do AI Products Even Need Navigation?”—an article that served as the starting point for this reflection on the navigational paradigm introduced by AI.

The second encounter happened at Friends of Figma Porto Alegre, where we exchanged ideas on how we use AI tools in our daily workflows, and what perspectives we see for a future increasingly shaped by artificial intelligence.

While writing this essay (still a work in progress), I recalled another text I had read in Aeon Magazine, written by Bryan Norton, on Bernard Stiegler's philosophy (a reading I believe every designer working with technology should explore, particularly about how our tools shape us). What follows is the result of several days of reflection on the question of navigation in digital products. I hope the reader finds the journey worthwhile.


Black Venus by Mark Bradford, 2005. Mixed media collage; 130 × 196 in © Mark Bradford

Navigation and Navigability

To navigate is to read the world in order to move through it, whether it means scanning a crowd to find a familiar face, deciphering the logic of a bookstore’s layout, or following the stars at sea. This ability has always been mediated by tools (many of them disruptive and transformative). Still, the rise of artificial intelligence presents us with a radical promise: a world where we no longer need maps, because the information or the product “comes to us.”

Faced with this transformation, this essay begins with a central question: Is AI truly eliminating navigation, or is it simply shifting its agent and its form?

My central argument is that AI (or more precisely, AI as a product, rather than a pure technology) does not eliminate navigation. Instead, navigation is delegated to a systemic agent that operates invisibly. From this starting point, I want to reflect on how such delegation may impact our navigational abilities and, in some cases, generate a kind of cognitive debt by depriving us of a formative practice.

Finally, I want to leave you, the reader, with a provocation: perhaps this transformation demands more than individual adaptation; perhaps it requires rethinking, more systemically, how we engage with these new tools of agency.

Agency and the transformation of navigability

In this essay, I use the term navigability (navigation + ability) to define navigation as a human practice that predates the digital world. This clarification matters: I am not referring to navigability as the product's capacity to be navigable, as often seen in product design literature.

To understand AI's impact on interaction and navigation, we must first reclaim navigation from its purely digital sense. At its core, navigation is a form of human technology; a savoir-faire that has always allowed us to go from point A to point B.

Technology here is not understood as something merely “technological,” but rather, as Bryan Norton explains through Bernard Stiegler’s thought, as “[...] an open-ended creative process—a relationship with our tools and with the world.”

It is the craft of moving through the world, a creative, open-ended relationship between ourselves, our tools, and our environments. This practice, which I often refer to as anthropotechnics [1], is not an instinct but a learned ability that emerges from the very experience of “going through”, facing the friction and cognitive effort imposed by the material world.

[1] By anthropotechnics, I borrow the concept from philosopher Peter Sloterdijk, who uses it to describe the techniques and exercises (both physical and mental) through which human beings produce and transform themselves. Navigation is not just an action, it is a formative practice.

This understanding opens up a compelling perspective for thinking about design itself. Here, I consider navigation equivalent to cognition: the cognitive operations we perform, whether intentional or not. Therefore, every product inherently has a navigational function. [2]

[2] During this study, I realized that a product has two kinds of navigational functions: the declared and the undeclared ones. I intend to explore this further.

What does it mean to navigate, essentially?

Historically, we’ve always delegated part of the navigational effort to tools: maps, compasses, astrolabes. It may sound obvious, but one of the defining features of human evolution is our ability to create tools. On this point, Stiegler offers an important insight:

According to Stiegler, the process of making and the use of technology in its broadest sense is what makes us human. Our unique way of existing in the world, distinct from other species, is shaped by the experiences and knowledge that our tools make possible, whether it's a state-of-the-art brain-computer interface like Neuralink or a prehistoric axe used to cut down trees.[3]

The digital era has accelerated this delegation (GPS being a prime example, or the calculator), but the fundamental paradigm described by Sims has remained: we are still the agent operating a tool to read a map. The current software structure still offers us a sense of confidence (or at least it should), a sense of place (or at least it should).

[3] The last few decades have produced many studies seeking to measure the cognitive toll of technological tools. Terms like “brain drain” or “brain-rotted” reflect a growing vocabulary of concern around cognitive degradation.

Keep Walking by Mark Bradford, Hamburger Bahnhof — Nationalgalerie der Gegenwart © Mark Bradford und Hauser & Wirth / Photo: © Staatlichen Museen zu Berlin, Nationalgalerie / Jacopo LaForgia

How does AI impact navigation in digital products?

The shift introduced by AI runs deeper. Sims observes that AI disrupts the spatial model “not by moving things around, but by removing the need to go looking in the first place.” This is where the vector of the process inverts. AI promises that we no longer need to go through the information; instead, the information will now come to us. But what happens when the need to “go looking” is removed?

What occurs is a transfer of agency. Navigation still happens, but its agent is no longer the user; it is now a system operating invisibly. The interface ceases to function as a map we read, and becomes more like a “genie in a bottle” to which we make requests. The apparent immediacy of the response conceals the entire complex path the system has traced on our behalf. [4]

[4] Don Norman and other thinkers from the 1980s and 1990s argued that the problem with the interface is the interface itself. The very interface that facilitates the journey also introduces friction. This led to speculation that hyper-personalized products might not need an interface at all. Today, that vision feels closer to reality.

Structural navigation model by Frank Krausch (Source)

The question is not whether navigation ceases to exist, but what happens to us when we stop practicing it.

If I’m not the one navigating, then who is?

If navigation doesn’t disappear but merely becomes hidden, we must ask: Who benefits from the erasure of this process? On a material level, the answer is clear. In today’s digital product ecosystem, removing navigational friction is not a neutral gesture—it is a strategy of optimization with clear commercial goals. A “seamless” interface that anticipates or instantly satisfies a desire is designed to reduce cart abandonment, accelerate the customer journey, and ultimately increase conversion and profit.

In this model, the AI agent functions simultaneously as a facilitator for the user and an efficient data collector for the company’s business intelligence.

For this optimization to be accepted, it must operate under a deeper ideological function: naturalization [5]. It aims to make this technological mediation feel as natural and obvious as the air we breathe, hiding its constructed nature, programmed biases, and underlying interests. Pierre Bourdieu once said, “The strength of ideology lies in not being recognized as such.” Symbolic violence is precisely the kind that becomes normalized and accepted as part of the natural order of things.

This process, the loss of savoir-faire or the unconscious internalization of technical mediation, is what Bernard Stiegler calls cognitive proletarianization. By erasing the traces of the path, the system disables our ability to question it. Indeed, empirical science is beginning to measure this effect. One example comes from a recent study by the team at Fermat’s Library, which investigated the impact of AI assistants on essay writing. The study found a reduction in participants’ neural connectivity and a lower ability to recall content and feel authorship over their own texts. They called this effect cognitive debt:

“A growing dependence on AI assistants for cognitively demanding tasks may lead to a decrease in memory retrieval effort, harming the consolidation of learning and the sense of authorship.” (Fermat’s Library, Your Brain on ChatGPT, 2023)

This debt is paid with our capacity to internalize, remember, and feel ownership over thought. It is the long-term cost we pay for short-term convenience. By sparing us the cognitive stress that is essential to the journey, the system also strips us of the journey's formative benefits.

We lose the space for reflection that comes from the act of choosing. We also lose the possibility of serendipity: those “epoched” processes that arise during non-linear searches and lead us to unexpected discoveries. More gravely, we risk the atrophy of cognitive skills like researching, comparing, and organizing—which are the foundations of autonomous and critical thinking.

This concern is powerfully explored in Old Man’s War, a novel by John Scalzi. In the story, human soldiers use a brain-integrated AI system that functions like a superbrain, translating languages, performing calculations, and organizing decisions. However, during a conflict with an alien species, this technology is suddenly disabled, and the soldiers cannot act, think, or decide independently.

[5] The concept of naturalization describes the social and ideological process through which historical constructions are made to appear “natural.” It legitimizes structures of power by erasing their arbitrary origins.

Some of John Scalzi's books (Photo)

The metaphor is clear: by outsourcing our cognitive faculties, we may also lose the anchor of our autonomy. When technology fails, what remains is not just the silence of the system, but the emptying out of the subject itself.

Remedy and Poison

The analysis thus far confronts us with a fundamental dilemma: the same tool that offers liberating convenience threatens to atrophy a vital cognitive capacity. How can we make sense of this duality without falling into naïve technophobia or uncritical optimism? Stiegler offers us a conceptual tool for this: the notion of pharmakon.

For the philosopher, every technique is a pharmakon: simultaneously a remedy and a poison. Technology is not something external to us: it shapes and constitutes who we are, for better and for worse. Navigational AI is an excellent example of the pharmakon. Its remedy is the promise of a frictionless life, the optimization of tasks, and the freeing of our attention for other pursuits. As we’ve seen, its poison is cognitive debt — the loss of navigational savoir-faire and our growing dependence on systems whose interests are not aligned with our own.

Recognizing AI as a pharmakon prevents us from offering simplistic answers. The question is no longer “Is AI good or bad?” but rather “How can we care for this poison that is also a remedy?” For Stiegler, the answer is not to reject technique, but to develop a therapeutics: a practice of care, a conscious mode of relating to technology that mitigates its toxic effects while amplifying its benefits. This is precisely what we must strive for: a critical response that allows us to pilot the tool, rather than being piloted by it.


A Treatise on Adulteration of Food and Culinary Poisons, 1820 (Source)

The promise of the dynamic interface

Before we delve into therapeutics, it’s worth returning to Andrew Sims's vision, which catalyzed much of this discussion. For Sims, traditional navigation has long been the “backbone of the experience”—the element that makes software legible and gives users a sense of predictability and place.

In his view, the disruption introduced by AI lies precisely in the promise of a dynamic, conversational interface, where tools “arrive just in time,” eliminating the need to go looking in the first place.

Sims’ description of the user experience is accurate. Indeed, it often feels like we are no longer moving through “rooms in a house.” However, our analyses diverge in the diagnosis of implications. What Sims describes as “the removal of the need to go looking” is, in my view, not an elimination but a delegation.

The search still happens, but an invisible agent carries it out. Sims’ “dynamic interface” is the materialization of a structure that becomes naturalized, and its tools that “arrive just in time” are the very mechanisms that, by optimizing the task, deprive us of reflection and serendipity. In this case, the map does not disappear; it merely becomes hidden. The navigational structure still exists, but it turns invisible, embedded in the algorithmic decisions that determine what we see, when we see it, and how it appears.

This process reflects what some theorists call the algorithmic black box: a system that makes decisions based on data and criteria inaccessible to the user. The interface no longer offers clues about how the information was organized or selected, creating an asymmetrical relationship between subject and tool. We don’t just stop navigating, we stop knowing that navigation is happening at all. [6]

[6] Authors like Safiya Noble, Wendy Chun, and Matteo Pasquinelli develop critical analyses of algorithmic systems that conceal their internal logic under the guise of neutrality. Pasquinelli offers the notion of automated cognitive mapping, showing how algorithms don’t just organize navigation but also produce ways of thinking. This opacity becomes a symbolic and epistemic form of governance, reorganizing access to knowledge, memory, and decision-making.

Asvirus 87 by Derek Lerner (Source)

Is it still… precise… to navigate? [7]

The delegation of navigation to opaque algorithmic systems does not eliminate navigation. It transforms it, shifting its agency and obscuring its criteria. While this transformation is still in its early stages, it foreshadows a deeper reconfiguration: the dissolution of visible maps in favor of systems that guide the user without revealing how (or even where) they are being led.

But what is lost in this process? And what must be reclaimed?

This is not about technical nostalgia or moralistic technophobia. As we’ve seen with Stiegler, technique is always a pharmakon: remedy and poison, condition of possibility and risk. The question is whether we can build therapeutics capable of restoring navigation to its formative value, its power as resistance, and its openness to the unexpected.

These therapeutics won’t be found in a new button but perhaps in new ways of teaching, thinking, and practicing design, in approaches that understand that designing an interface also creates a structure of thought. Choosing what to reveal, what to hide, or how to order things is not merely a functional decision—it is a form of discourse, ideology, and power.

In light of this, some urgent questions remain:

If delegating navigation to AI becomes the norm, what forms of navigational resistance or relearning might emerge outside the commercial circuit?

Can interface design be understood as a politically implicated field in reconfiguring agency, or is it structurally subordinated to the logics of efficiency and capital?

How can we incorporate a critical education on tool usage into design processes without falling into moralism or empty idealism?

This essay does not aim to offer a definitive answer; it is an invitation to continue navigating. After all, like every meaningful journey, it only makes sense if it transforms us along the way.

[7] In Brazilian Portuguese, "preciso" means both "precise" and "necessary" (as in "a necessity"). The phrase is taken up in a poem by Fernando Pessoa, which builds on a saying attributed to the Roman general Pompey (Navigare necesse est, vivere non est necesse). Pessoa played on the idea that navigation needs to be technically precise, but living does not. At the same time, navigation is necessary to move through the world, and, paradoxically, it is also not.

I would like to thank all my colleagues from “It’s Book Time” who helped me think about the theme of this essay.


To grow, we must forget… but now AI remembers everything

AI’s infinite memory could endanger how we think, grow, and imagine. And we can do something about it.

Written by Amy Chivavibul

old family photo album

When Mary remembered too much

Imagine your best friend (we’ll call her Mary) had a perfect, infallible memory.

At first, it feels wonderful. She remembers your favorite dishes, obscure movie quotes, even that exact shade of sweater you casually admired months ago. Dinner plans are effortless: “Booked us Giorgio’s again, your favorite — truffle ravioli and Cabernet, like last time,” Mary smiled warmly.

But gradually, things become less appealing. Your attempts at variety or exploring something new are gently brushed aside: “Heard about that new sushi place, should we try it?” you suggest. Mary hesitates, “Remember last year? You said sushi wasn’t really your thing. Giorgio’s is safe. Why risk it?”

Conversations start to feel repetitive, your identity locked to a cached version of yourself. Mary constantly cites your past preferences as proof of who you still are. The longer this goes on, the smaller your world feels… and comfort begins to curdle into confinement.

Now, picture Mary isn’t human, but your personalized AI assistant.

A new mode of hyper-personalization

With OpenAI’s memory upgrade, ChatGPT can recall everything you’ve ever shared with it, indefinitely. Similarly, Google has opened the context window with “Infini-attention,” letting large language models (LLMs) reference infinite inputs with zero memory loss. And in consumer-facing tools like ChatGPT or Gemini, this means persistent, personalized memory across conversations, unless you manually intervene.


OpenAI CEO Sam Altman introduced ChatGPT’s infinite memory capabilities on X.

The sales pitch is seductively simple: less friction, more relevance. Conversations that feel like continuity: “Systems that get to know you over your life,” as Sam Altman writes on X. Technology, finally, that meets you where you are.

In the age of hyper-personalization — of the TikTok For You page, Spotify Wrapped, and Netflix Your Next Watch — a conversational AI product that remembers everything about you feels perfectly, perhaps dangerously, natural.


Netflix “knows us.” And we’re conditioned to expect conversational AI to do the same.

Forgetting, then, begins to look like a flaw. A failure to retain. A bug in the code. Especially in our own lives, we treat memory loss as a tragedy, clinging to photo albums and cloud backups to preserve what time tries to erase.

But what if human forgetting is not a bug, but a feature? And what happens when we build machines that don’t forget, but are now helping shape the human minds that do?

Forgetting is a feature of human memory

“Infinite memory” runs against the very grain of what it means to be human. Cognitive science and evolutionary biology tell us that forgetting isn’t a design flaw, but a survival advantage. Our brains are not built to store everything. They’re built to let go: to blur the past, to misremember just enough to move forward.

Our brains don’t archive data. They encode approximations. Memory is probabilistic, reconstructive, and inherently lossy. We misremember not because we’re broken, but because it makes us adaptable. Memory compresses and abstracts experience into usable shortcuts, heuristics that help us act fast, not recall perfectly.

Evolution didn’t optimize our brains to store the past in high fidelity; it optimized us to survive the present. In early humans, remembering too much could be fatal: a brain caught up recalling a saber-tooth tiger’s precise location or exact color would hesitate, but a brain that knows riverbank = danger can act fast.

This is why forgetting is essential to survival. Selective forgetting helps us prioritize the relevant, discard the outdated, and stay flexible in changing environments. It prevents us from becoming trapped by obsolete patterns or overwhelmed by noise.

And it’s not passive decay. Neuroscience shows that forgetting is an active process: the brain regulates what to retrieve and what to suppress, clearing mental space to absorb new information. In his TED talk, neuroscientist Richard Morris describes the forgetting process as “the hippocampus doing its job… as it clears the desktop of your mind so that you’re ready for the next day to take in new information.”

Crucially, this mental flexibility isn’t just for processing the past; forgetting allows us to imagine the future. Memory’s malleability gives us the ability to simulate, to envision, to choose differently next time. What we lose in accuracy, we gain in possibility.

So when we ask why humans forget, the answer isn’t just functional. It’s existential. If we remembered everything, we wouldn’t be more intelligent. We’d still be standing at the riverbank, paralyzed by the precision of memories that no longer serve us.

When forgetting is a “flaw” in AI memory

Where nature embraced forgetting as a survival strategy, we now engineer machines that retain everything: your past prompts, preferences, corrections, and confessions.

What sounds like a convenience, digital companions that “know you,” can quietly become a constraint. Unlike human memory, which fades and adapts, infinite memory stores information with fidelity and permanence. And as memory-equipped LLMs respond, they increasingly draw on a preserved version of you, even if that version is six months old and irrelevant.

Sound familiar?

This pattern of behavior reinforcement closely mirrors the personalization logic driving platforms like TikTok, Instagram, and Facebook. Extensive research has shown how these platforms amplify existing preferences, narrow user perspectives, and reduce exposure to new, challenging ideas — a phenomenon known as filter bubbles or echo chambers.


Positive feedback loops are the engine of recommendation algorithms like TikTok, Netflix, and Spotify. From Medium.

These feedback loops, optimized for engagement rather than novelty or growth, have been linked to documented consequences including ideological polarization, misinformation spread, and decreased critical thinking.
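
As a toy illustration of how that loop narrows, here is a hypothetical sketch; the topics, weights, and learning rate are invented, and real recommender systems are far more complex.

```typescript
// Hypothetical sketch of an engagement-driven feedback loop.
type Interests = Record<string, number>; // topic -> learned preference weight

function recommend(interests: Interests, catalog: string[]): string {
  // Always pick the highest-weighted topic: no novelty, no challenge.
  return catalog.reduce((best, topic) =>
    (interests[topic] ?? 0) > (interests[best] ?? 0) ? topic : best
  );
}

function recordEngagement(interests: Interests, topic: string, learningRate = 0.1): Interests {
  // Engagement reinforces the very weight that caused the recommendation.
  return { ...interests, [topic]: (interests[topic] ?? 0) + learningRate };
}

let interests: Interests = { cooking: 0.5, politics: 0.4, jazz: 0.3 };
const catalog = Object.keys(interests);
for (let round = 0; round < 5; round++) {
  const pick = recommend(interests, catalog);
  interests = recordEngagement(interests, pick);
}
console.log(interests); // "cooking" keeps winning and keeps growing; the other topics never surface
```

The loop is optimized to repeat what already worked, which is exactly why it rarely leaves room for anything new.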

Now, this same personalization logic is moving inward: from your feed to your conversations, and from what you consume to how you think.

“Echo chamber to end all echo chambers”

Just as the TikTok For You page algorithm predicts your next dopamine hit, memory-enabled LLMs predict and reinforce conversational patterns that align closely with your past behavior, keeping you comfortable inside your bubble of views and preferences.

Jordan Gibbs, writing on the dangers of ChatGPT, notes that conversational AI is an “echo chamber to end all echo chambers.” Gibbs points out how even harmless-seeming positive reinforcement can quietly reshape user perceptions and restrict creative or critical thinking.

Jordan Gibbs’s conversation with ChatGPT, from Medium.

In one example, ChatGPT responds to Gibbs’s claim of being one of the best chess players in the world not with skepticism or critical inquiry, but with encouragement and validation, highlighting how easily LLMs affirm bold, unverified assertions.

And with infinite memory enabled, this is no longer a one-off interaction: the personal data point that “you are one of the very best chess players in the world” risks becoming a fixed truth the model reflexively returns to, until your delusion, once tossed out in passing, becomes a cornerstone of your digital self. Not because it’s accurate, but because it was remembered, reinforced, and never challenged.

When memory becomes fixed, identity becomes recursive. As we saw with our friend Mary, infinite memory doesn’t just remember our past; it nudges us to repeat it. And while the reinforcement may feel benign, personalized, or even comforting, the history of filter bubbles and echo chambers suggests that this kind of pattern replication rarely leaves room for transformation.

What we lose when nothing is lost

What begins as personalization can quietly become entrapment, not through control, but through familiarity. And in that familiarity, we begin to lose something essential: not just variety, but the very conditions that make change possible.

Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.

While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.

So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional. We must design LLM systems that don’t just remember, but also know when and why to forget.

How we design to forget

If the danger of infinite memory lies in its ability to trap us in our past, then the antidote must be rooted in intentional forgetting — systems that forget wisely, adaptively, and in ways aligned with human growth. But building such systems requires action across levels — from the people who use them to those who design and develop them.

For users: reclaim agency over your digital self

Just as we now expect to “manage cookies” on websites, toggling consent checkboxes or adjusting ad settings, we may soon expect to manage our digital selves within LLM memory interfaces. But where cookies govern how our data is collected and used by external entities, memory in conversational AI turns that data inward. Personal data is no longer just a pipeline for targeted ads; it becomes a conversational mirror, actively shaping how we think, remember, and express who we are. The stakes are higher.

Memory-equipped LLMs like ChatGPT already offer tools for this. You can review what it remembers about you by going to Settings > Personalization > Memory > Manage. You can delete what’s outdated, refine what’s imprecise, and add what actually matters to who you are now. If something no longer reflects you, remove it. If something feels off, reframe it. If something is sensitive or exploratory, switch to a temporary chat and leave no trace.

chatgpt settings screenshot

You can manage and disable memory within ChatGPT by visiting Settings > Personalization.

You can also pause or disable memory entirely. Don’t be afraid to do it. There’s a quiet power in the clean slate: a freedom to experiment, shift, and show up as someone new.

Guide the memory, don’t leave it ambient. Offer core memories that represent the direction you’re heading, not just the footprints you left behind.

For UX designers: design for revision, not just retention

Reclaiming memory is a personal act. But shaping how memory behaves in AI products is a design decision. Infinite memory isn’t just a technical upgrade; it’s a cognitive interface. And UX designers are now curating the mental architecture of how people evolve, or get stuck.

Forget “opt in” or “opt out.” Memory management shouldn’t live in buried toggles or forgotten settings menus. It should be active, visible, and intuitive: a first-class feature, not an afterthought. Users need interfaces that not only show what the system remembers, but also how those memories are shaping what they see, hear, and get suggested. Not just visibility, but influence tracing.

old photography of a person by the ocean

How can we decide what memories to keep?

While ChatGPT’s memory UI offers user control over their memories, it reads like a black-and-white database: out or in. Instead of treating memory as a static archive, we should design it as a living layer, structured more like a sketchpad than a ledger: flexible and revisable. All of this is hypothetical, but here’s what it could look like:

Memory Review Moments: Built-in check-ins that ask, “You haven’t referenced this in a while — keep, revise, or forget?” Like Rocket Money nudging you to review subscriptions, the system becomes a gentle co-editor, helping surface outdated or ambiguous context before it quietly reshapes future behavior.

Time-Aware Metadata: Memories don’t age equally. Show users when something was last used, how often it comes up, or whether it’s quietly steering suggestions. Just like Spotify highlights “recently played,” memory interfaces could offer temporal context that makes stored data feel navigable and self-aware.

Memory Tiers: Not all information deserves equal weight. Let users tag “Core Memories” that persist until manually removed, and set others as short-term or provisional — notes that decay unless reaffirmed.

Inline Memory Controls: Bring memory into the flow of conversation. Imagine typing, and a quiet note appears: “This suggestion draws on your July planning — still accurate?” Like version history in Figma or comment nudges in Google Docs, these lightweight moments let users edit memory without switching contexts.

Expiration Dates & Sunset Notices: Some memories should come with lifespans. Let users set expiration dates — “forget this in 30 days unless I say otherwise.” Like calendar events or temporary access links, this makes forgetting a designed act, not a technical gap. (A minimal sketch of how tiers and expiration could combine follows this list.)

several old photos organized in stacks

We need to design other ways to visualize memory

Sketchpad Interfaces: Finally, break free from the checkbox UI. Imagine memory as a visual canvas: clusters of ideas, color-coded threads, ephemeral notes. A place to link thoughts, add context, tag relevance. Think Miro meets Pinterest for your digital identity, a space that mirrors how we actually think, shift, and remember.
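Treating memory as a living layer also has a natural technical shape. As a purely hypothetical sketch of how the Memory Tiers and Expiration Dates ideas above could combine, the snippet below models memories as data with a tier, a decay window, and an optional sunset date; the class name, fields, and thresholds are illustrative assumptions, not any product’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class MemoryItem:
    text: str
    tier: str = "provisional"               # "core" persists; "provisional" decays
    created_at: datetime = field(default_factory=datetime.utcnow)
    last_referenced: datetime = field(default_factory=datetime.utcnow)
    expires_at: Optional[datetime] = None    # explicit, user-set sunset date

    def is_active(self, now: datetime, provisional_ttl: timedelta = timedelta(days=30)) -> bool:
        """A memory stays active if it is core, not expired, and recently reaffirmed."""
        if self.expires_at is not None and now >= self.expires_at:
            return False                                      # sunset notice reached
        if self.tier == "core":
            return True                                       # core memories persist until removed
        return now - self.last_referenced < provisional_ttl   # provisional notes quietly decay

    def reaffirm(self, now: datetime) -> None:
        """Referencing a memory resets its decay clock (the 'keep, revise, or forget?' nudge)."""
        self.last_referenced = now


now = datetime.utcnow()
trip_note = MemoryItem("Planning a trip to Lisbon in July")
core_pref = MemoryItem("Prefers concise, direct feedback", tier="core")

print(trip_note.is_active(now + timedelta(days=45)))   # False: forgotten unless reaffirmed
print(core_pref.is_active(now + timedelta(days=45)))   # True: persists until manually removed
```

In a sketch like this, forgetting becomes the default for provisional notes and a deliberate act for core ones, which mirrors the design intent above: nothing persists indefinitely unless someone decided it should.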

When designers build memory this way, they create more than tools. They create mirrors with context, systems that grow with us instead of holding us still.

For AI developers: engineer forgetting as a feature

To truly support transformation, UX needs infrastructure. The design must be backed by technical memory systems that are fluid, flexible, and capable of letting go. And that responsibility falls to developers: not just to build tools for remembering, but to engineer forgetting as a core function.

This is the heart of my piece: we can’t talk about user agency, growth, or identity without addressing how memory works under the hood. Forgetting must be built into the LLM system itself, not as a failsafe, but as a feature.

One promising approach, called adaptive forgetting, mimics how humans let go of unnecessary details while retaining important patterns and concepts. Researchers have demonstrated that when LLMs periodically erase and retrain parts of their memory, especially the early layers that store word associations, they become better at picking up new languages and adapting to new tasks, and they do so with less data and computing power.
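The published method is more involved than this, but a toy sketch can show the underlying move: periodically re-initializing the early, token-level layer of a model while the deeper layers keep what they have learned. The tiny model, the layer choice, and the reset schedule below are assumptions for illustration only, not the researchers’ actual setup.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A toy model: an embedding layer (early, token-level memory) plus deeper layers."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)   # stores word-level associations
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, vocab_size))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.body(self.embedding(tokens).mean(dim=1))

def train_with_periodic_forgetting(model: TinyLM, steps: int = 300, forget_every: int = 100):
    """Training loop that periodically 'forgets' the embedding layer while keeping the rest."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(steps):
        tokens = torch.randint(0, 1000, (8, 16))    # stand-in batch of token ids
        targets = torch.randint(0, 1000, (8,))      # stand-in prediction targets
        loss = loss_fn(model(tokens), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (step + 1) % forget_every == 0:
            # Erase the early layer's 'memories'; deeper layers keep their learned structure.
            model.embedding.reset_parameters()

train_with_periodic_forgetting(TinyLM())
```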

illustration of the brain of a person as a library

Illustration by Valentin Tkach for Quanta Magazine

Another more accessible path forward is in Retrieval-Augmented Generation (RAG). A new method called SynapticRAG, inspired by the brain’s natural timing and memory mechanisms, adds a sense of temporality to AI memory. Models recall information not just based on content, but also on when it happened. Just like our brains prioritize recent memories, this method scores and updates AI memories based on both their recency and relevance, allowing it to retrieve more meaningful, diverse, and context-rich information. Testing showed that this time-aware system outperforms traditional memory tools in multilingual conversations by up to 14.66% in accuracy, while also avoiding redundant or outdated responses.
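SynapticRAG’s actual mechanism draws on spike-timing dynamics, so the sketch below is only a simplified illustration of the general principle it shares with other time-aware retrieval methods: score stored memories by relevance and recency together. The exponential decay and the 14-day half-life are assumed values, not the paper’s.

```python
import math
from datetime import datetime, timedelta

def memory_score(query_vec, memory_vec, stored_at, now, half_life_days: float = 14.0) -> float:
    """Combine semantic relevance (cosine similarity) with a recency weight that decays over time."""
    dot = sum(q * m for q, m in zip(query_vec, memory_vec))
    norm = math.sqrt(sum(q * q for q in query_vec)) * math.sqrt(sum(m * m for m in memory_vec))
    relevance = dot / norm if norm else 0.0
    age_days = (now - stored_at).total_seconds() / 86400
    recency = 0.5 ** (age_days / half_life_days)     # weight halves every `half_life_days`
    return relevance * recency

now = datetime.utcnow()
fresh = memory_score([1.0, 0.0], [0.9, 0.1], now - timedelta(days=2), now)
stale = memory_score([1.0, 0.0], [0.9, 0.1], now - timedelta(days=90), now)
print(fresh > stale)   # True: equally relevant, but the recent memory wins retrieval
```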

Together, adaptive forgetting and biologically inspired memory retrieval point toward a more human kind of AI: systems that learn continuously, update flexibly, and interact in ways that feel less like digital tape recorders and more like thoughtful, evolving collaborators.

To grow, we must choose to forget

So the pieces are all here: the architectural tools, the memory systems, the design patterns. We’ve shown that it’s technically possible for AI to forget. But the question isn’t just whether we can. It’s whether we will.

Of course, not all AI systems need to forget. In high-stakes domains — medicine, law, scientific research — perfect recall can be life-saving. However, this essay is about a different kind of AI: the kind we bring into our daily lives. The ones we turn to for brainstorming, emotional support, writing help, or even casual companionship. These are the systems that assist us, observe us, and remember us. And if left unchecked, they may start to define us.

We’ve already seen what happens when algorithms optimize for comfort. What begins as personalization becomes repetition. Sameness. Polarization. Now that logic is turning inward: no longer just curating our feeds, but shaping our conversations, our habits of thought, our sense of self. But we don’t have to follow the same path.

We can build LLM systems that don’t just remember us, but help us evolve. Systems that challenge us to break patterns, to imagine differently, to change. Not to preserve who we were, but to make space for who we might yet become, just as our ancestors did.

Not with perfect memory, but with the courage to forget.

]]>
Articles Design & Society
<![CDATA[The UX butterfly effect]]> https://www.doc.cc/articles/ux-butterfly-effect https://www.doc.cc/articles/ux-butterfly-effect Mon, 06 Oct 2025 23:19:49 GMT

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

An illustration of a black semi-circle resembling a setting or rising sun against a bright yellow background. The semi-circle is surrounded by concentric black rings that gradually fade into the yellow background, creating a ripple effect.

Each minute, millions of teens scroll through videos on social media platforms. These platforms are designed to connect people, but their overuse among young users is leading to serious, unintended consequences.

The impact of social media on teen mental health has received significant media attention. After Facebook became available to American college students, their rates of depression rose by 7% and anxiety by 20%. In Australia, 44% of teens report negative online experiences, including being excluded from events or social groups.

But the effects of endless scrolling go beyond mental health. Consuming video content on social media also takes a toll on the environment. Watching TikTok for just one minute generates 2.63 grams of CO₂ equivalent emissions.

With one billion users spending an average of 46 minutes per day on TikTok, this adds up to 14.7 million tonnes of CO₂ emissions annually—equivalent to flying the entire population of London to New York.

The law of unintended consequences

As designers, we constantly make decisions. Whether we design objects, devices, websites, apps, or policies, we choose one option over another, setting parameters for subsequent actions to unfold.

Designing an object like a chair involves decisions about what materials to use, how the product will be manufactured and transported, how to address cost considerations, how it will be used, and what happens at the end of its life span.

Designing a website includes making decisions that shape how people will use the site and putting the elements in place that influence how much time users spend clicking and scrolling their way through the site.

The law of unintended consequences observes that every decision made can have both positive and negative outcomes that were not foreseen by the person making the decision.

As Jony Ive put it in a recent interview:

“I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very, very involved with, I think there were some unintended consequences that were far from pleasant.”

Visualising unintended consequences with systems maps

Identifying unintended consequences requires us to understand the underlying elements of a product, service, or initiative and how these elements influence each other.

These so-called ‘feedback loops’ create ripples that carry across to the very edges of a system. Systems maps help us visualise the systems behind our designs.

TikTok uses a reinforcing feedback loop to increase user engagement. Based on the user’s interactions, like swiping and watching video content, TikTok’s algorithms build a model of the kind of content a user likes.

Every additional data point (such as likes and comments) is absorbed by the model to generate more relevant content feeds. As the model becomes more and more attuned to the interests of individual users, they will respond by staying longer on the site watching more content.
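As a toy illustration of how such a reinforcing loop compounds (not a model of TikTok’s actual system), the simulation below assumes that each day’s watch time slightly improves the model’s fit to the user, and a better fit earns a longer session the next day.

```python
def simulate_reinforcing_loop(days: int = 30, base_minutes: float = 20.0,
                              learning_rate: float = 0.02) -> list:
    """Each day's watch time trains the model; a better model earns more watch time tomorrow."""
    model_fit = 0.1                 # how well the model predicts what the user likes (0..1)
    minutes = base_minutes
    history = []
    for _ in range(days):
        history.append(minutes)
        model_fit = min(1.0, model_fit + learning_rate * (minutes / base_minutes))
        minutes = base_minutes * (1 + model_fit)    # better fit leads to longer sessions
    return history

usage = simulate_reinforcing_loop()
print(f"Day 1: {usage[0]:.0f} min, Day 30: {usage[-1]:.0f} min")   # watch time ratchets upward
```

Even with made-up numbers, the shape is the point: small daily improvements in the model feed back into usage until the loop saturates.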

A causal loop diagram illustrating the reinforcing feedback loop (R1) of TikTok’s algorithm. The diagram shows four main elements: ‘TikTok users,’ ‘TikTok videos being watched,’ ‘user behaviour to feed into algorithms,’ and ‘satisfaction with feed content being shown.’ Arrows indicate positive relationships between these elements, forming a cycle that reinforces itself.

The systems map and reinforcing feedback loop (R1) behind TikTok’s recommendation algorithm / Adapted from UX Collective

But TikTok’s systems map also includes the infrastructure on which the company’s servers operate — in charge of processing data points, generating models, and selecting video content that is sent to the user’s device. Data centres use as much energy as France, and AI systems are predicted to account for half of total data centre consumption by the end of the year.

A ripple effect of TikTok’s recommendation algorithm is the increased need for more data centres and the greater use of energy to power TikTok’s servers. This increases pressure on the electricity grid and ultimately more raw materials need to be mined to keep up with the demand created by the feedback loop.

A black silhouette of a person looking at a mobile device, with lines going from the device to a depiction of the underlying physical infrastructure made of signal towers, data centres, satellites, and the physical web. Part of the illustration is superimposed in front of a yellow circle.

A systems view reveals the infrastructure that sits behind the UI of social media apps (Source)

Another effect that can be observed through extending TikTok’s systems map is that watching videos reduces time available for socialising and studying. This in turn has a damaging effect on the user’s mental health, academic performance, and quality of life and opportunities.

A complex causal loop diagram showing six reinforcing loops (R1 to R6) and their interconnections. Each loop is represented by labeled circles connected by arrows indicating positive or negative causal relationships. R1 involves TikTok users, satisfaction with feed content, and videos being watched. R2 to R3 link user behaviour, electricity use, data centres, carbon emissions, and environmental quality. R4 to R6 relate time spent on social media to mental health and other social factors.

TikTok’s systems map showing the ripple effects leading to environmental (R2, R3) and societal (R4, R5, R6) unintended consequences (Source)

These are unintended but not necessarily unpredictable consequences. So why aren’t social media companies addressing them from the start? Because their business models offer little incentive to look beyond the immediate feedback loop (R1).

This is where government and regulation come into play—applying pressure on private firms to ensure that products and services do not harm communities, societies, or the systems that sustain human well-being.

Companies that ignore the broader impacts of their design decisions—failing to recognise the systems and feedback loops at play—often face serious consequences. Take Juul, for example: the company paid a $439 million fine for marketing vaping products to teens.

Mapping the impact ripples

The best way to start planning for unintended consequences is by creating a systems map. Once we have captured the elements within a system and how they influence each other, we can turn to tools like the impact ripple canvas to identify intended and unintended consequences.

By placing them on a canvas, consequences become a tangible possibility, and we are able to start thinking about ways to address them. As a tool, the impact ripple canvas helps us to visualise and think about the impacts of our decisions in a multi-level and networked way.

A circular diagram divided into four concentric rings labeled ACTION, DIRECT IMPACT, INDIRECT IMPACT, and BIG PICTURE IMPACT. The DIRECT IMPACT ring includes Ability to share content, Follow & connect with others, Free access to video content, etc. The INDIRECT IMPACT ring includes Less time for socializing, Less time for study, etc. The BIG PICTURE IMPACT ring includes Quality of life, Depression, Social isolation, Spreading of fake news, More data centers in use, Deforestation, etc.

Impact ripple canvas capturing the intended (black) and unintended (white) consequences of TikTok’s business model. Based on a tool developed by Manuela Taboada and Md Shahiduzzaman (Source)

In systems thinking, the indirect (second order) impacts are often delayed consequences of an action or decision. For example, a decision to cut costs in one area of a business may lead to increased costs in another area due to the need to hire additional staff or invest in new technology.

Big picture (third order) effects refer to the additional consequences caused by the second order effects. For example, the decision to cut costs may negatively affect customer satisfaction, resulting in a decline of sales and profits.

Revealing the invisible elements

In organisations, unintended consequences are difficult to identify and consider because they are largely invisible (until their second and third order effects start to show). We can use iceberg visuals to map out the visible and hidden components of a system.

Organisations typically focus on what’s visible, as these components can be controlled and measured. This includes aspects like technology, processes, and policies. Unintended consequences typically emerge from those components that are invisible, such as the culture, values, and beliefs of people or organisations.

A graphic representation of an iceberg divided into two parts. The top portion above the waterline is colored yellow and labeled with visible organizational elements: ‘POLICIES,’ ‘PROCEDURES,’ ‘PROCESSES,’ ‘STRATEGY,’ ‘VISION,’ and ‘TECHNOLOGY.’ The larger bottom portion below the waterline is colored black and labeled with underlying cultural elements: ‘ASPIRATIONS,’ ‘BELIEFS,’ ‘FEELINGS,’ ‘PERCEPTIONS,’ ‘VALUES,’ ‘CULTURE,’ and ‘TRADITION.’

Iceberg visuals reveal the invisible elements that may be responsible for unintended consequences occurring in a system (Source)

The value of the iceberg visual is to highlight the hidden components that are responsible for unintended consequences. When Uber launched in South America, it failed to consider the societal issues that cities like São Paulo faced.

The company only realised they needed to improve their safety mechanisms when the consequence of this oversight — the murder of an Uber driver — was reported in the news. In fact, armed robberies were an unintended consequence of Uber’s attempt to turbo-charge growth in a crucial new market by allowing customers to pay in cash for their rides.

The Uber example is a stark reminder of the law of unintended consequences. Small changes in one part of the system can have big, unexpected impacts elsewhere in the wider system and adjacent systems.

Small changes, big impact—the UX butterfly effect

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

A systems map helps us to quickly situate our organisation and its offering within a broader societal and systemic context. It helps us to see the relationship between what we do, the choices that we make, and the impacts those have.

Just as critical is the conversation that needs to take place in order to create a systems map, impact ripple canvas, or iceberg visual, which requires making relationships and impacts explicit, defining boundaries between what’s ‘in’ and what’s ‘outside’ of our control, and justifying those distinctions.

For organisations looking to innovate in terms of their business model, service delivery, or even purpose, a systems map provides important context.

Additionally, a systems map can help demonstrate second and third order impacts — the indirect and big-picture effects — that can help us and our organisation better understand our role in society.

Acknowledgment & disclaimers

This article is adapted from chapter 5 of our book Designing Tomorrow, which introduces strategic design tactics for changing the planetary impact of design practice and organisations.

No AI tools were used in the writing of this article and the authors take full responsibility for the accuracy of the content. AI use was limited to generating the alt text for the illustrations and diagrams and brainstorming title and subtitle options.

]]>
Articles Design & Society
<![CDATA[The creator and the machine]]> https://www.doc.cc/articles/the-creator-and-the-machine https://www.doc.cc/articles/the-creator-and-the-machine Sun, 07 Sep 2025 11:25:29 GMT

The creator and the machine

Written by Caio Barrocal

photo of Lillian Schwartz at Bell Labs

"In computational art and design, many responses to the questions of what and why continue historic lines of creative inquiry centered on procedure, connection, abstraction, authorship, the nature of time, and the role of chance." ➪ Levin & Brain, 2021

Disruption seems to be the norm for design, especially when it comes to the techniques and tools we employ in our craft. In 2025, the field faces yet another shift. Technical optimism seems to have lost unanimity, and creative professionals are trying to understand their place in a future of economic uncertainty, in which AI seems capable of delivering aesthetic quality with unmatched speed. In more philosophical terms, this topic has become a central discussion once again, raising a set of questions: in a world where machines can create, what is creativity, really? How does generative creation influence our creative processes? And perhaps more interestingly: who is the author of an artifact produced through generative techniques?

My views on technology are those of sociotechnical systems, meaning I don't see how the tech tools our societies create can be considered apart from their organization, motivations, and human elements in general. Therefore, in a moment of pressing climate challenges, economic turmoil, and increasing global tension, I cannot help but see the race for AI as a symptom of a logic that pushes human needs aside, seeks profit at all costs, and disregards long-term environmental and social consequences. Yet this is a trend we need to observe, and shaping a critical view of such a complex topic requires a comprehensive set of skills that a single area of expertise is unlikely to cover. Therefore, this text is first and foremost a call for multidisciplinarity and collaboration. Equally important is the understanding that sharing authorship with machines is not something new, but rather something that has intensified. In fact, designers, artists, and intellectual workers in general have been apprehending computational technologies since their conception while seeking (or being pressured) to explore new mediums and perform their tasks more efficiently.

How did we end up here in the first place?

The first efforts to bring computers to the creative domain began in the 1950s, amid an effervescent and diverse artistic scene. Pioneers like Georg Nees, Frieder Nake, Vera Molnár, and Lillian Schwartz in the US and Europe, Hiroshi Kawano in Japan, and Waldemar Cordeiro in Brazil were among the first to explore what would become known as “computer art.” They began experimenting with computers and algorithms at a time when these machines were still very expensive and limited. Yet, driven by the rapid advancements in computer graphics, they—along with other creators—recognized the computer as a potentially rich medium for expression.

In the beginning, explorers of “computer art” were mainly mathematicians, physicists, and engineers — the ones who could operate giant and complex computers and who were interested in how the machines could help visualize the phenomena and models they were studying. In parallel, there was also significant enthusiasm within the community for exploring graphical synthesis per se and for making the use of computers more appealing. Such efforts in improving graphical user interfaces ultimately made computers easy to operate and accessible to a general audience; however, until very recently, one had to be extremely knowledgeable about programming techniques, mathematical principles, and how digital images are manipulated and rendered in order to harness the full expressiveness of the medium. Interestingly, the grammar of creating with computers mixes technical concepts such as logical thinking, algorithms, programming, and resource consumption with design principles such as proportion, balance, hierarchy, rhythm, usability, and perception.

As I was interested in outlining how computers reached the level of aesthetic relevance they have today, I concluded that explaining it solely as a product of developments in computer graphics wouldn't do it justice. There would still be a missing piece: the one capable of clarifying the creative practice and its motivations. With time, I framed the answer in two components: first, as a continuation and extrapolation of the use of mathematics and rationalization as mediums for art, design, and creativity; and second, as a consequence of the improvements in computer graphics that have occurred since the last century, which essentially enabled computers to synthesize images with ever-increasing quality.

collage of sketches and a fresco by della Francesca

Left: fresco by Piero della Francesca; Right: della Francesca's sketches on proportion.

While using the computer as a medium for creative production is a somewhat recent phenomenon, mathematics, art, and design have met each other many times across history. After the Middle Ages, for example, Brunelleschi (1377–1446) is credited with having rediscovered the concept of depth and formulated an understanding of linear perspective and the vanishing point. The architect and engineer was one of the pioneers of the Renaissance period and is most famous for designing machines to help build the dome of the Cathedral of Santa Maria del Fiore. Brunelleschi influenced many other artists and mathematicians who were seeking to create convincing depth and realism in paintings. Further steps towards creating an aesthetically convincing perspective system were taken by Piero della Francesca (1416–1492), who wrote many mathematical texts discussing geometry, algebra, and the application of perspective principles to art. In the first volume of his On Perspective In Painting (written around the 1460s), della Francesca establishes geometry theorems and then discusses their application to frames. In the other two volumes, the artist examines three-dimensional drawing techniques applied to prisms and deals with more complex shapes such as the human body and architectural ornamentation (Robertson & O’Connor, 2003).

Both artists were deeply invested in their artistic intentions to the point that they turned to the design of their very creative processes, leading them to elaborate machines and concepts to help them realize what they had in mind. For instance, in the introduction of On Perspective In Painting della Francesca explains his process — one in which mathematics was not only a tool for creating with greater quality, but also a creativity engine: “First is sight, that is to say the eye; second is the form of the thing seen; third is the distance from the eye to the thing seen; fourth are the lines which leave the boundaries of the object and come to the eye; fifth is the intersection, which comes between the eye and the thing seen, and on which it is intended to record the object.”

Some years later, Leonardo Da Vinci (1452–1519) built on top of important contributions made by those before him, as he enriched the field with further studies on perspective and the optical principles of the eye. The deep understanding of perspective and the physics of shade and light that Da Vinci developed throughout his life turned out to be his signature and the epitome of a Renaissance man: a multidisciplinary creator in sync with artistic sensibilities and attentive to the techniques of creation. In his notebooks, he recorded ideas, sketches, theorems, prototypes, and mathematical concepts as he tried to represent and design the real world. In Leonardo, perhaps more than any other Renaissance artist, mathematics, art, and creativity were fused in a single concept (Robertson & O’Connor, 2003).

It's important to note that having mathematics and geometry as engines for creativity isn't exclusive to European cultures. In other parts of the world, and in different times, creators relied on such instrumentalization to realize their creative intentions. That is the case, for example, with the Japanese "Sangaku", geometrical problems and theorems carved into wooden tablets and placed as offerings at Shinto shrines or Buddhist temples, and with the abstract mathematical patterns found in Islamic art.

At the dawn of the computational era at the beginning of the 20th century, Europe went through the consolidation of industrialization and the rise of thoughts on modernity and positivism. These times were shaped by technological optimism and an emphasis on human-centered progress, which also saw the rise of rationalism as a normalized way to see life, meaning emphasis on reason, logic, quantification, and systematization. The trends that led to Bauhaus' functionalism, for instance, also influenced many creators and scientists who started to seek ways to formalize, describe, and rationalize aesthetic creation. To these intentions, the computer came as a perfect creative partner and a catalyst of motivations that were around before.

When the computer made art

Using the computer to create and manipulate images became possible in the 1950s when the first displays were made available to exhibit graphics, and the first plotters were created to print them. Although the initial focus was on visualizing and understanding mathematical models and physical phenomena, it didn't take long for artists to see the computer as a potential creative ally and the provider of new languages and means of expression. 

The first artworks of this kind were produced using analog computers associated with cathode-ray oscilloscopes, which served as basic displays. The graphics could then be registered in analog film strips or photographed. One example is the Electronic Abstractions, a series of computer-generated graphics produced by American Ben F. Laposky in 1952 and credited as one of the first pieces of computer art.

examples of abstract electronic art made by Laposky

Ben F. Laposky. Electronic Abstractions. 1952

Distinguishably, Laposky recognized the beauty and artistic capabilities of computers at a time when most discussions were focused on practical, mathematical uses, hence illustrating a first generation of creators of the kind. This is reflected in his descriptions: “Electronic Abstractions are abstract art forms, traced by intricate electrical waves on the screen of a cathode-ray oscilloscope. [...] They are compositions of electrical vibrations in light as pleasing to the eye as compositions of sound vibrations in music are pleasing to the ear. These beautiful visual rhythms and harmonies of electronic abstract art may be recorded by means of photography".

Although not calling it explicitly computer art, Laposky intended for an artistic placement of his oscillons, which is demonstrated not only by his descriptions above but also by how he published and presented the artifacts as artistic material (Laposky, 1952).

examples of abstract electronic art made by Laposky

Ben F. Laposky. Electronic Abstractions. 1952

The term "computer art" came only later, and generally speaking, it refers to a broad set of artistic procedures, acts, and strategies that artists can employ in association with the computer. Authors of the field also crafted categories for classifying the role of a computer in a given creative process, regardless if they remain digital or transferred to another medium. In short, the computer can be seen either as a tool for creating and synthesizing preconceived ideas or as a creative means through which the very concept of computing becomes an artistic subject. Such distinction is important as it results in different levels of autonomy and novelty.

Despite the variety of mediums, it is undeniable that the popularity of computer art as a practice, and the level of aesthetic relevance computers have gained over recent decades, cannot be dissociated from the chain of improvements in computer graphics that has unfolded since then. Although exciting, the first explorations were limited by modest graphic expression, which changed drastically after Ivan Sutherland developed the Sketchpad in 1962 and set the foundations for modern user interfaces and real-time graphics. In another article, “From Computer Graphics to Computer Art”, I discussed in depth how developments in computer graphics enabled high-quality output and sparked the interest of a community of artists who began using computers to create art.

Yet, computer-aided art and design faced a troubled beginning as a valid discipline due to resistance from conservative practitioners and critics who questioned its relevance and legitimacy as an artistic practice. It was argued that the predominantly technological and scientific focus of the first publications on the subject, as well as the difficulty of establishing methodology and definitions for it, should place computationally produced works in a category of “non-art”, unable to find space in exhibitions. The very multidisciplinary nature of making art with a computer also made it hard for the discipline to find its place within the community. From the beginning, the scene was composed of scientists, artists, engineers, designers, physicists, and mathematicians who often have different perspectives on the work to be developed and on which aspects should be emphasized (Taylor, 2014).

The art-historian Frank Popper pointed out that many were the influences that sparked what he called “computer and virtual art”. In his book From Technological to Virtual Art, Popper looks back at some of the art movements of the 20th century and outlines their impact on a growing class of creators who were beginning their experiments with the computer. Among some of the main influences are the luminous aspects of kinetic art, the exploratory nature of Pop Art, and the effervescence of cinema and animation. Mainly though, during the second half of the 20th century, authors of the fields of philosophy, psychology, aesthetics, sciences, and art started exploring the concept of information aesthetics, which was a short-lived but strongly influential movement that sought to create mathematical models capable of evaluating and, thus, quantifying the aesthetic quality of artifacts. Such an effort had Abraham Moles (1920–1992) and Max Bense (1910–1990) as its most notable agitators and resulted in theories that were widely spread among European designers and artists during the 1960s.

The authors of the information aesthetics proposed that "modern aesthetics" should be developed, and they intended to create universal mathematical models capable of describing the aesthetic qualities of all forms. In other words, they aimed to define perception in objective terms, using mathematical principles instead of personal interpretation. The ideas of Moles and Bense ended up finding better adoption in fields of contemporary art that sat closer to mathematical and scientific communities, such as abstract art, concrete art, and the soon-to-be computer art. Despite being a possibly too simplistic and schematic approach for apprehending the vast territory of artistic creation, their ideas undeniably set the scene for the closest of relationships between arts and digital technology. As a matter of fact, the computer was the perfect partner for exploring how quantitative concepts and procedures could become aesthetic production.

Bense's collaborators Georg Nees and Frieder Nake are considered the first to exhibit computer art, during the early 1960s. Both German scientists belonged to a group of pioneering scientist-artists known as The Algorists, a name for those who created their own algorithms to synthesize visual pieces. Max Bense was Nees’ supervisor and also mainly responsible for introducing him to the still-young medium of computer art. After getting in touch with Nees’ computer graphics experiments in 1964, Bense decided to invite him to exhibit such works at his gallery. At that time, the place was mainly dedicated to concrete art and explorations based on the rationalist approaches Bense had been developing. Nees and Bense were also the publishers of the booklet Rot 19. Computer-Grafik (1965), a small publication that is possibly the first one made on computer art, and which contains much of Nees’ work accompanied by explanations of the algorithms behind them. Around the same time, Frieder Nake — another scientist-artist influenced by Max Bense — was experimenting with computer art using the famous Graphomat Z64, a flatbed drawing machine of high precision created by engineer Konrad Zuse. Nake became especially known for his series of colored computer drawings produced through matrix multiplication operations, having contributed to all major exhibitions of computer art (Compart, 2018, 2012).

portraits of nees nake and bense

Left: Georg Nees in 1986 © Alex Kempkens; Center: Frieder Nake; Right: Max Bense in 1964 © Goebel Weyne.

cover of nees book

Cover of the booklet Rot 19; Right: Georg Nees. Andreaskreuz. 1965

Although short-lived, the ideas of information aesthetics and their strong association with digital computers influenced experimental artists and designers around the world. In North America, John Whitney (1917–1995) became a pioneering artist for his works on computer-generated animation, while in Latin America the Brazilian-Italian Waldemar Cordeiro (1925–1973) is usually considered the most notable agitator of the field.

The trajectory of Cordeiro is also a good example of how the creative use of computers evolved as a product of the Modernist intentions of the 20th century. The artist was significantly in sync with the discussions on how technology would impact the arts and design to the point that it's impossible to discuss Brazilian contemporary art without considering his contributions. As he wrote in 1973, computer art could be seen “as a process of objectifying ideas through images, approaching psychological, ethical, sensory, ideological, sensitive, and intellective variables through arithmetic and logical operations”. Cordeiro identified that this new kind of art would have a tendency to create multidisciplinary works by taking advantage of scientific research and discoveries of the time, which for him was a continuation of the trends on concrete art “developed in the historical conditions of the first industrial revolution (suprematism, neoplasticism, constructivism, etc.)”. More interestingly, Cordeiro considered the rationalization of the creative process and the employment of computers a way to reflect on our human creativity, which illustrates the mindset of the creators of his time who were excited about this new type of partnership: “In case the artistic issues can be treated by machines or by teams including a ‘partner’ — computer — we will learn more about how man handles artistic issues.”

In 1971, Cordeiro introduced computer art to Latin America through an initiative and exhibition he called Arteônica. It took place at the Fundação Armando Alvares Penteado in São Paulo and was one of the first events worldwide dedicated to art and technology.

Waldemar Cordeiro photo and art work example

Left: Waldemar Cordeiro ©The Mayor Gallery; Right: Waldemar Cordeiro. A mulher que não era BB. 1973.

A similar path was taken by pioneer Vera Molnár (1924–2024), beginning with her studies in 1947 at the Faculty of Plastic Arts in Budapest. There, she was trained as a traditional painter but developed a style based on the already-mentioned rationalist and mathematical trends that were influencing European art in the 1950s. In her early work, Molnár focused on exploring the aesthetic possibilities of combining simple shapes and colors. Over time, she deepened her reflection on the mechanisms of artistic creation, which led her to study the work of people such as Piet Mondrian (1872–1944), Kazimir Malevich (1879–1935), and the artists of concrete art. Ultimately, she approached the scientific community, and in particular mathematicians, which helped her elaborate her signature style.

Molnár’s engagement with mathematics and geometry led her to incorporate methodical patterns into her work and develop an algorithmic mode of creation. However, she found such iterative procedures to be exceedingly laborious and prone to a lack of precision when done by hand, which motivated her to seek out mechanical alternatives. In 1968, she discovered the computer and the benefits it could bring to her practice. What is particularly fascinating is that Molnár’s ability to work with computers predates most efforts on making them easier to manipulate and program, implying that she and other programming artists of her time were extremely determined.

Vera Molnár in her atelier

Vera Molnár in her atelier. 2017. ©Galerie La Ligne, Zürich.

In an interview in 1975, Molnár described her creative process, referring specifically to the RESEAUTO, a computer program she created to render the artworks below:

“This program permits the production of drawings starting from an initial square array of sets of concentric squares. The available variables are: the number of sets, the number of concentric squares within a set, the displacement of individual squares, the deformation of squares by changing angles and length of sides, the elimination of lines or entire figures, and the replacement of straight lines by segments of circles, parabolas, hyperbolas and sine curves. Thus, from the initial grid, an enormous variety of different images can be obtained”.
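As a rough modern approximation of the kind of program Molnár describes, and not her actual RESEAUTO code, the sketch below draws a grid of concentric squares and exposes a few of the variables she lists: displacement of individual squares, deformation of corners, and the elimination of some figures. The parameter names and values are assumptions for illustration.

```python
import random
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def reseauto_like(rows=6, cols=6, rings=5, displacement=0.15, jitter=0.1, drop=0.1, seed=7):
    """Grid of concentric squares with random displacement, deformation, and omitted figures."""
    random.seed(seed)
    fig, ax = plt.subplots(figsize=(6, 6))
    for r in range(rows):
        for c in range(cols):
            # displace the centre of each set of concentric squares
            cx = c + random.uniform(-displacement, displacement)
            cy = r + random.uniform(-displacement, displacement)
            for k in range(1, rings + 1):
                if random.random() < drop:          # eliminate some figures entirely
                    continue
                half = 0.45 * k / rings
                corners = [(cx - half, cy - half), (cx + half, cy - half),
                           (cx + half, cy + half), (cx - half, cy + half)]
                # deform each corner slightly, changing angles and side lengths
                corners = [(x + random.uniform(-jitter, jitter) * half,
                            y + random.uniform(-jitter, jitter) * half) for x, y in corners]
                ax.add_patch(Polygon(corners, closed=True, fill=False, linewidth=0.8))
    ax.set_xlim(-1, cols)
    ax.set_ylim(-1, rows)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()

reseauto_like()
```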

molnár artwork

Vera Molnár. Structure de quadrilatères. 1988; Right: Vera Molnár. 144 trapezes. 1975.

Essentially, Molnár believed that the computer could satisfy artists’ desires for innovation and, at the same time, encourage the mind to work in ways other than conventional. What is important for our discussions is to acknowledge that for Molnár, Cordeiro, and other like-minded creators of their time, using the computer came as a desire to gain efficiency and precision, but also as a strong push to reflect on their practice and explore new ways to create. Molnár recently passed away in 2024, and her remarkable career was fondly remembered by major art publications and newspapers.

In the second half of the 1960s, computer art began to gain significance in the art world as the computer itself became an irresistible cultural object (Taylor, 2014). The chain of technical improvements, combined with ever-increasing enthusiasm, culminated in one of the most important events for the field: Cybernetic Serendipity, an exhibition of computer-aided art and creativity held at the Institute of Contemporary Arts in London in 1968. The exhibition, credited with being the first of its kind, was curated by Jasia Reichardt and contained not only graphic pieces created with the aid of computers but also music, poetry, dance, and animation. It was a milestone in disseminating the artistic qualities of the computer to the world.

In her book that was published on the occasion of the exhibition, Reichardt wrote two things that caught my attention. First, there is a realization that the potential of the computer is still unknown, as demonstrated by her words:

“Cybernetic Serendipity deals with possibilities rather than achievements, and in this sense, it is prematurely optimistic. There are no heroic claims to be made because computers have so far neither revolutionized music, nor art, nor poetry, the same way that they have revolutionized science”. 

But what is more pertinent is what Reichardt considered the biggest impact of computers on art, design, and creativity: "New media, such as plastics, or new systems such as visual music notation and the parameters of concrete poetry, inevitably alter the shape of art, the characteristics of music, and content of poetry. [...] It is very rare, however, that new media and new systems should bring in their wake new people to become involved in creative activity [...]. This has happened with the advent of computers. [The engineers] have occasionally become so interested in the possibilities of this visual output, that they have started to make drawings which bear no practical application, and for which the only real motives are the desire to explore, and the sheer pleasure of seeing a drawing materialize. Thus, people who would never have put pencil to paper, or brush to canvas, have started making images, both still and animated, which approximate and often look identical to what we call 'art' and put in public galleries”.

Reichardt was naturally referring to the advent of computing, but her point is still very much pertinent to the current debate around generative AI.

scans of article about Computer Dance performance

Computer Dance performance during the exhibition Cybernetic Serendipity in London, 1968; Right: Computer paintings exhibited during the same expo.

It is true that creators had been successfully experimenting with printed computer art since the 1950s, but the field gained another level of expressiveness through the efforts of researchers, engineers, and companies in improving the quality of displays and rendering technologies. What is particularly interesting to observe is that this relationship ran both ways. While many creators were motivated by the ever-increasing improvements and investments in graphic technologies, some were also actively working with universities, laboratories, and companies to push the limits of computer-generated graphics, improving both their quality and expressiveness.

Computer artist Lillian Schwartz, for instance, collaborated with various tech laboratories throughout her career, most famously Bell Labs, where she partnered with engineers to experiment with graphics and animation through programming languages such as BEFLIX, EXPLOR, and SYMBOLICS. Her experiments as a programming artist combined elements of hand painting, digital collage, and digital image processing, resulting in pieces that mix traditional artistic techniques and digital technology. Unlike other pioneers, Schwartz transcended the somewhat rational aesthetics of early works, employing the computer as an artistic medium to develop a fun, vivid, and colorful style. The artist was also known for playing with color perception and sound to create interactive installations.

photo of Lillian Schwartz at Bell Labs

Lillian Schwartz at Bell Labs. ©Lillian Schwartz's Website

Lillian Schwartz and Ken Knowlton. Frames from the Pixillation movie. 1970. ©Lillian Schwartz’s Website

During the 1970s, the field of computer graphics saw most of its formal methodologies developed as well as an increase in popularity due to efforts that took them from research labs to industries, television, and other mass media. It was also during the 1970s that Thomas A. DeFanti developed the GRASS and ZGRASS programming languages, which enabled creators to script 2D animation in an easier way and thus became a hit in the artistic world. In the upcoming years, three-dimensional rendering became more accessible and efficient as researchers and animation companies delivered many improvements. Ultimately, these graphics came to occupy a position of great prominence within the community, which aesthetically influenced many practitioners. It was around the 80s as well that 3D computer graphics acquired a prominent position outside laboratories and studios, catching the attention of general audiences through TV, movies, and advertisements (Jankel; Morton; Leach, 1984). 

From the 1990s on, the very term "computer art” acquired a somewhat nostalgic character and is often used today to describe the initial phase of the discipline in which equipment was less powerful and the aesthetics of results simpler. With greater computational power and diversification, a new generation of artists began to explore bolder aesthetics, interaction, and animation as machines evolved to deliver greater expression. Moreover, computers also evolved to allow for artistic experiments to assume a distributed scope, not necessarily occurring on a single machine, but rather through the internet. At the same time, it also became possible to deploy experiences onto a diversity of new devices such as wearables, projections, IoT, sensors, actuators, mobile devices, and VR equipment. 

Also in the 90s, developments in user interfaces enabled designers and artists to employ computers professionally, making their work more efficient, although mostly through proprietary software. Such new ease of manipulation ultimately implied two types of relationships creators could establish with computers: one in which the computer is seen as an executor of the ideas of a human designer operating it mainly through proprietary software (eg. Adobe, Macromedia, Sketch, Figma…), and one in which computers are seen as co-creators, sharing part of the creative process with a designer that intentionally employs generative creation. 

Over time, and in search of greater focus, the term computer art gave way to many other "sub-areas" that saw computers as co-creators, such as net art, generative art, software art, creative coding, algorithmic art, and, finally, AI-art — all with a certain element of generative creation at their core. Interestingly, this very distinction is open to debate today, as proprietary tools with features enabled by generative AI have been repackaging the approach of generative creation into ready-to-use user interfaces.

Generative creation and the role of abstraction

At first, the introduction of computers to the creative community faced resistance, especially because of their mechanical, mathematical, and multidisciplinary nature. The core of the debate was the comprehension that creating artifacts alongside computational systems involved ceding a part of the creative process — which before belonged entirely to humans— to the machine. Such a shift naturally challenged the conventional conception of ownership, as it raised an intuitive question:

Who should be considered the author of a given piece of intellectual production, art, or design when computers are involved? The human, the machine, or both?

With time, and as computers got more popular, such resistance diminished as creators embraced digital techniques, whether by choice or pressure of the market. In any case, an important remark for our discussion is the realization that sharing authorship with machines is not something new brought by AI, but rather something that has been unfolding throughout the past decades and that was more recently intensified. This is also why we are now seeing a re-edition of the ownership debate I discussed before.

Nonetheless, the possibility of pairing with computational intelligence has since the beginning motivated professionals to explore these machines as a fruitful creative medium.

For designers, for example, accustomed to apprehending current technologies and repurposing them to the tasks at hand, such exploration came with many intents. Some of these were to extrapolate the capabilities delimited by available proprietary software (think of Adobe, Macromedia, Sketch, Figma), to obtain novel and unpredictable aesthetics, to enable the work with parameterization and optimization, or for building artifacts that respond more autonomously in real-time. In recent years, many designers have been leveraging generative creation very literally, building or employing systems that can render flexible visual identities, parametric objects, responsive interfaces, and generative fonts.

In many ways, ceding a part of our creative processes to machines allowed us to design more efficiently, accurately, and with greater—or at least novel—expression.

We say something is generative when it is capable of producing an outcome or reproducing itself autonomously. Therefore, designing with generative creation implies intentionally employing an autonomous element that contributes to the achievement of a certain goal or to the synthesis of desired outcomes. Such autonomy can be granted in several ways: by letting systems make choices based on complex models, by relying on sufficiently smart Gen-AI agents, by designing with genetic algorithms, or simply by designing systems to respond to unexpected interactions.

This is why we say generative AI is generative: such agents are capable of autonomously generating outputs and synthesizing artifacts, regardless of how predictable the input is. Some even learn from previous interactions to respond in smarter ways and produce better results.

Formally, generative creation means employing systems or processes, which are put into execution with a certain degree of autonomy, contributing to or resulting in a complete work. The critical point is that computational intelligence is intentionally used as an active participant in the creative process and not only to support the decisions made by humans (Groß et al., 2018; Grünberger, 2019; Galanter, 2003).

In design, working with such computational autonomy promotes a fundamental change in the creative process as designers become no longer executors of tasks, but conductors. A role that Groß and his collaborators consider to be that of an “orchestrator of decision-making processes” in their book Generative Design (2018). Essentially, bringing generative agents to our work means giving up total control, which is now partially conducted by a form of computational intelligence we need to manage.
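A minimal, hypothetical illustration of this shift from executor to orchestrator: rather than drawing one layout directly, the designer specifies constraints and a preference function, lets a generative routine propose candidates, and then curates the shortlist. The parameters and scoring below are stand-ins for illustration, not any real tool’s API.

```python
import random

def generate_candidates(n: int = 50, seed: int = 3):
    """The generative engine: proposes layout parameters within designer-set constraints."""
    random.seed(seed)
    return [{"columns": random.choice([2, 3, 4]),
             "whitespace": random.uniform(0.1, 0.5),
             "contrast": random.uniform(0.3, 1.0)} for _ in range(n)]

def designer_preference(layout: dict) -> float:
    """The designer's intent, expressed as a score rather than as a direct drawing."""
    airy = 1 - abs(layout["whitespace"] - 0.35)   # generous but not empty spacing
    legible = layout["contrast"]                  # higher contrast reads better
    return airy + legible

candidates = generate_candidates()
shortlist = sorted(candidates, key=designer_preference, reverse=True)[:3]
for layout in shortlist:                          # the human curates the final pick
    print(layout)
```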

To illustrate this fundamental change, Groß proposed a model for the design process around generative creation characterized by an emphasis on abstraction. The main change, according to him, is not only that traditional craft recedes to the background while abstraction and information become the protagonists, but also that designers need to constantly reflect on how to translate their ideas into information that autonomous agents can "understand".

Thus, the relevant question is no longer “How do I draw/sketch/paint?”, but “How do I abstract?”.

Groß's original illustration for the model focused on generative design through coding, but I amended it to highlight the possible role of generative AI agents (in orange).

Groß's illustration for the model

Unlike the conventional process where designers implement ideas directly, generative creation involves a process of abstraction to transform our ideas into pieces that can feed the generative engine. Until recently, to work with such technology, designers would either need to program themselves or partner with engineers skilled in building systems around generative logic. Tools powered by Gen-AI, however, brought new interfaces capable of streamlining such processes, allowing designers to more easily partner with intelligent agents to render their intentions. Prompting in natural language has become the quasi-standard interface; every popular design tool nowadays either has or is developing its own GenAI features, aiming to cater best to how we work (sketching, manipulating images, brainstorming visual ideas…).

Regardless of the approach, bringing generative creation into the design process turns it into a somehow “indirect” one, meaning that designers participate, even if only partially, through activities that happen on a level of abstraction above the craft, such as planning input to AI systems, prompting through natural language, creating algorithms or rules, programming, evaluating the results generated, and refining output until they get a satisfactory result. In fact, when suggesting that ‘How do I abstract?’ is the most relevant question now, Groß is not only illustrating the existence of a layer of abstraction that intermediates the design process but also demonstrating the continuation and intensification of the approximation between design, art, and computing we have been witnessing throughout the last decades.

When implying that ‘How do I abstract?’ is the most relevant question when designing alongside generative creation, Groß is not only pointing to a layer of abstraction that mediates the creative process but also marking the continuation and intensification of the approximation between design, art, and computing we have been discussing so far. For designers, this shift creates the need to understand concepts from both areas in order to interact with such systems more intentionally; the trajectory we have traced in this text can serve as inspiration for that.

In essence, when abstracting, people express an idea in a specific context while suppressing details irrelevant to that context (Beecher, 2016). The ability to abstract, then, is the act of choosing which details to remove from a problem so that it can be better understood or represented. By placing “How do I abstract?” at the center of designing with generative systems, and acknowledging that designers might no longer elaborate the solution directly, Groß highlights the more prominent layer of abstraction that now mediates the creative process, with GenAI tools adding one more layer on top.
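To make that layer of abstraction concrete, here is a minimal sketch of generative creation through code. It is only an illustration of the idea, not part of Groß's model: the designer writes a rule, generates candidates, evaluates the results, and refines the rule, rather than drawing each shape by hand. The grid size, cell size, and probability rule are arbitrary choices.

```ts
// A minimal sketch of designing through a rule instead of direct drawing.
// Each run generates a different SVG pattern; the designer evaluates the
// output and adjusts the rule, rather than placing every rectangle by hand.
// All parameters here are arbitrary, for illustration only.
function generatePattern(cols: number, rows: number, density: number): string {
  const size = 20; // pixels per cell
  const cells: string[] = [];
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      // The "rule": the probability of filling a cell grows from left to right.
      if (Math.random() < density * (x / cols)) {
        cells.push(`<rect x="${x * size}" y="${y * size}" width="${size}" height="${size}" />`);
      }
    }
  }
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${cols * size}" height="${rows * size}">${cells.join("")}</svg>`;
}

// Generate a few candidates, keep what works, tweak the rule, repeat.
console.log(generatePattern(16, 9, 0.8));
```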

At a crucial crossroads for the future of technology, I believe every designer should consider the following: working with generative creation can indeed make our work faster and more efficient, and open our creative processes to paths we wouldn’t consider otherwise. Aesthetically, for example, one of the main strengths of generative creation is its ability to offer new directions to design projects and break with habitual, predictable choices of form and representation — something that occurs because AI agents can be adopted as co-drawers with whom designers need to “negotiate” creation (Agkathidis, 2015).

When it comes to how we design with generative systems, it helps to consider the existence of two possible approaches. In the first approach, which authors Zhang and Funk call concept-based ideation, creative work begins conceptually, in our heads, probably before computational tools are used. Since the main challenge in this approach is to turn an already existing abstract idea into a satisfactory outcome, generative creation comes as a tool that designers employ to materialize concepts or execute tasks according to their expectations.

In the second approach, material-based ideation, these systems are treated as the creative “material” to be experimented with, and it is this experimentation that points to the concepts worth elaborating. In practice, this means the creative work only gradually takes a more concrete shape and direction after a series of experiments with different software tools, AI agents, and systems. In his book Analog Algorithm (2021), Christoph Grünberger acknowledges that this is essentially a practice in which total control is abandoned in favor of results, which can mean greater quality, speed, or aesthetic novelty. Because of this, the author states that designers start behaving mainly as interpreters and curators.

On the other hand, designers should also keep in mind that generative systems, whether powered by AI or not, are not neutral or immaterial entities but rather exist as a complex chain of interests, ownership, capital, and usage of natural resources with very practical and concerning implications. If today’s drive to weave technology into creativity continues a thread that has been unfolding for decades already, embracing this paradigm without thorough criticism would mean simply disregarding the urgency of our times. In the words of authors Brain and Levin: “With the fracturing of civic life after social media, the malignant growth of digital authoritarianism, and the looming threat of environmental catastrophe, the sheen has come off Silicon Valley and the folly of technological solutionism has become clear. To the extent that we continue to prototype new futures within the framework of late capitalism, [...] there is a new urgency for artists and designers to have a seat at the tables where technological agendas are set”.

In this sense, the authors argue that “technologically literate” designers have an essential role to play in “checking society’s worst impulses”. Not only because we can ring the alarms when freedom and imagination are threatened, but also because we are better positioned to make space, in conversations and organizations, for as many perspectives as possible, hopefully including the most critical ones. In fact, many designers and artists have been engaging in political, experimental, and subversive initiatives through generative poetics, aiming to challenge the dominant applications of these systems and to repurpose their use. An example is Lucas LaRochelle's QT.bot, an AI agent that generates speculative queer futures along with possible scenes, defying the "normalizing character" of classification agents.

Another example is the set of projects Design Against the Machine, developed by Professor Boris Müller's students when challenged to explore "speculative future scenarios through creative, experimental websites, co-created with AI". One of the projects, Luminari, uses GenAI to picture a "Solar Renaissance Eco-Utopia" in which humans beat climate change through miraculous improvements in photovoltaic technologies and manage to establish a healthy relationship with the environment. In another, more provocative project, The User Manual, we are invited to consider a world in which machines are made to be operated not only by humans but also by other machines. Who is the user in this case? Who should we have in mind when planning the User Manual?

And as a final note: designers can and should be active, critical agents in this debate, instigating reflections and pushing for positive change. The last few years have been challenging for the design field, marked by a series of layoffs and a social reality plagued by complex problems. Notably, the leaders of companies that have long shaped our profession have shown explicit and concerning complacency towards right-wing extremism. Not surprisingly, our community lives through a moment of apathy as it seeks out its place once more. Yet, if the future is not being designed by those who care, then we’re guaranteed not to have a positive outlook.

“Digital art and design rework technology into culture, and reread technology as culture. What's more, they do so in a concrete, applied way, manipulating the technology itself, with a latitude that admits misapplication and adaptation, rewiring and hacking, pseudofunctionality and accident.” ➪ Mitchell Whitelaw

Works cited
  1. Computational Thinking by Beecher
  2. At the edge of art by Blais and Ippolito
  3. Code as Creative Medium by Brain and Levin
  4. Waldemar Cordeiro by Fernando Cocchiarale
  5. What Is Generative Art? by Philip Galanter
  6. Creative Computer Graphics by Jankel and Morton
  7. Electronic Abstractions by Laposky
  8. Mathematics and Art: perspective by O’Connor and Robertson
  9. From Technological to Virtual Art by Frank Popper
  10. Generative Art by Matt Pearson
  11. Cybernetic Serendipity by Jasia Reichardt
  12. When the Machine Made Art by Grant Taylor
]]>
Articles Design & History
<![CDATA[Interface]]> https://www.doc.cc/syntax/interface https://www.doc.cc/syntax/interface Sun, 31 Aug 2025 18:55:03 GMT

Interface

On connection, multi-modality, and self-expression.

Image credit: Tameem Sankari

“The best interface is no interface.”
➪ Golden Krishna

Interfaces precede computers

At its core, an interface is any system that translates intent into action. Think about human language itself: an interface that converts abstract thoughts into audible words and written symbols, enabling complex communication. The invention of the alphabet served as a standardized interface for knowledge, allowing ideas to be encoded, stored, and retrieved. Beyond language, simple mechanical tools have always functioned as mediators for how we interact with the world. A lever translates human strength into amplified force; a steering wheel translates a driver's hand movements into the precise turning of a vehicle. The concept of interface is timeless: it's a bridge designed to minimize the friction between a person and the world around them.

In computing, interfaces connect two sides of a system

An interface is a shared boundary across which two components of a computer system exchange information. The exchange can be between software, hardware, peripheral devices, humans, and combinations of these. Whenever there is a human involved, we often use the term “user interface” or UI.

Doug Engelbart at an NLS workstation

Doug Engelbart at an NLS workstation | Source: Doug Engelbart Institute

A good user interface is a good conversation

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what's in your head and what the product can do.

An interface makes the invisible tangible

Interfaces take the intricate logic, complex algorithms, and all the data of a system and present them in a way that is understandable and actionable for a human being. Could be pixels on a screen. Could be a simple buzz in your pocket. However it shows up, the interface defines how you experience technology. It's the system, made visible and actionable to human senses.

“Simple things should be simple, and complex things should be possible.”
Alan Kay

Interfaces have always evolved with technology

The first computer interfaces were predominantly text-based and involved command lines. The advent of Graphical User Interfaces (GUIs) changed everything with their windows, icons, menus, and pointers—offering direct manipulation and a visual metaphor for actions. For the past few decades, the word "interface" mostly meant websites and mobile apps; these defined our digital lives for a long time. More recently, with the rise of Large Language Models (LLMs) and AI Agents, interfaces are once again embracing text-based command line interactions, with a whole new level of power. And we should expect the interface to keep evolving as AI models and devices become increasingly multi-modal and ubiquitous. The command line wasn’t the end of computers; today’s chat interfaces aren’t the end of AI.

The way we design interfaces will always change

Design tools come and go. People's habits evolve. New devices are born. The way we designed interfaces five years ago is very different from how we do it now. The only common thread in our work is understanding people. Everything else is just a vehicle for that. The job is to make complex things simple. To make things feel intuitive, no matter what new technology is behind the curtain. That's the one thing that will never change.


image of the morse code alphabet

Morse code translates language into a dots-and-dashes interface

“We become what we behold. We shape our tools, and thereafter our tools shape us.”
➪ Marshall McLuhan

Interfaces are becoming increasingly fluid

They enable users to interact with technology through whatever means feels most natural at a given moment—be it a tap, a swipe, a voice command, or even a subtle gesture. The interface is just a layer of interaction that adapts to you, not the other way around. It’s part of a designer’s job to understand user goals, context, and environment to determine which interface modality is the most appropriate for the task at hand.

Interfaces aren’t only for people

We usually picture interfaces as what a person sees and clicks, but an API (Application Programming Interface) is essentially the same thing: a clear set of rules that lets different software applications talk to each other. It's the point of contact where one program sends requests to another and receives responses, not so different from a human user clicking a button and getting visual feedback. For developers, your API is the "user interface" to all your product's core data, features, and services. Similarly, MCP (Model Context Protocol) is a set of rules that helps AI Agents interface with one another and with other systems without human interference.
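As a rough sketch of that point of contact, consider the exchange below: one program asks, another answers. The endpoint, parameters, and response shape are hypothetical, used only to illustrate the request-and-response conversation.

```ts
// A hypothetical API exchange: this program sends a request to another program
// (a server) and receives a structured response, the programmatic equivalent of
// clicking a button and getting visual feedback. The endpoint and response
// shape are made up for illustration.
type Forecast = { city: string; tempC: number };

async function getForecast(city: string): Promise<Forecast> {
  const res = await fetch(`https://api.example.com/forecast?city=${encodeURIComponent(city)}`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return (await res.json()) as Forecast; // the "response" half of the conversation
}

getForecast("Lisbon").then((f) => console.log(`${f.city}: ${f.tempC}°C`));
```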

Interfaces aren’t always created by people

With Generative UI (GenUI), computers can imagine and build interfaces on the fly—interfaces that adapt fluidly to users, contexts, and devices. Give it a prompt or some context, and the AI can figure out the best interface elements to use to render its response. This isn't about static designs anymore; it's about highly personalized, fluid experiences, delivered exactly when you need them. LLMs started with language but are quickly expanding into other types of inputs and outputs. In this reality, no two people will ever experience the same product.
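One way to picture this, purely as a sketch and not as any particular GenUI framework: the model returns a declarative description of the interface rather than fixed markup, and the client decides how to render it. The element types and renderer below are hypothetical.

```ts
// A hypothetical Generative UI flow: the model emits a declarative spec,
// and the client renders whichever elements fit the moment.
type UIElement =
  | { kind: "text"; value: string }
  | { kind: "chart"; series: number[] }
  | { kind: "button"; label: string; action: string };

function render(spec: UIElement[]): string {
  return spec
    .map((el) => {
      switch (el.kind) {
        case "text":
          return `<p>${el.value}</p>`;
        case "chart":
          return `<svg role="img"><!-- ${el.series.length} data points --></svg>`;
        case "button":
          return `<button data-action="${el.action}">${el.label}</button>`;
      }
    })
    .join("\n");
}

// e.g., asked "how did sales do this week?", a model might return:
console.log(render([
  { kind: "text", value: "Sales rose 12% week over week." },
  { kind: "chart", series: [3, 4, 4, 5, 6, 7, 8] },
]));
```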

The role of designers is drastically changing with ad-hoc interfaces

With interfaces that build themselves and adapt on the fly based on user needs, the focus shifts from meticulously crafting static screens to defining rules, parameters, and intelligent systems that can generate optimal experiences. We'll set the stage for how information and interactions flow—less as an architect and more as a choreographer who is orchestrating a dynamic environment. The real expertise will be understanding human needs and translating them into flexible frameworks that AI can understand. That way, even when interfaces are generated on the fly, they remain intuitive, effective, and centered on people.

diptych of drawing of a dancer next to an architectural drawing

Design is becoming more the work of a choreographer than an architect

Interface modalities are blurring and becoming interchangeable

We're moving past simple multimodal interaction into an omnimodal experience, where different ways of interacting happen simultaneously. Imagine pointing at a screen and speaking a command, with the system understanding both cues instantly. Or pointing your camera at something while asking a question with your voice. This convergence creates a more natural, efficient, and intuitive dialogue with the products we use. It's about seamlessly merging touch, voice, gesture, haptics, and whatever else comes next, all at once.

photo of a person with several sensors in their head

Brain Computer Interfaces (BCI). Sure, looking forward to it.

Brain-computer interfaces are the holy grail—for some

This technology promises to bypass all conventional interfaces, allowing thoughts and desires to become immediate digital commands without the friction of language or the delays of physical movement. They’re the shortest distance between an idea and an action. But the promise of a thought-to-machine connection doesn’t come without its own risks. When your mind can be the input and output, your thoughts aren't just your own anymore—opening up risks around privacy, lack of user control, and other vulnerabilities.

“A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.”
Mark Weiser

In the end, interfaces are also a space for self-expression

The ideal of "no interface" promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It's a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.

]]>
Design & Craft syntax
<![CDATA[The secret of good metaphors]]> https://www.doc.cc/articles/good-metaphors https://www.doc.cc/articles/good-metaphors Sun, 22 Jun 2025 17:27:39 GMT

The secret of good metaphors

Written by Louis Charron

sketch of a neural network by cajal

Here’s something that has always fascinated me: our understanding of the human body, particularly the brain and nervous system, has been profoundly shaped by the tools and technologies of each era.

During the rise of mechanical craftsmanship, we began perceiving the body and brain as hydraulic systems (Descartes, 1600s) and intricate clockwork mechanisms of gears and springs (La Mettrie, 1700s). The industrial revolution brought new perspectives: the telegraph system with its information-carrying electrical wires transformed our view of the nervous system (Helmholtz, 1800s), while the steam engine, with its energy and pressure, became a model for understanding the brain (Freud, 1800s).

Perhaps the most striking examples emerged in the 20th century with the rise of electronics and networks. The brain was first envisioned as a telephone exchange switchboard connecting signals (Sherrington, 1930s), and later conceptualized as a computer with input, output, processing and storage (Miller, Galanter & Pribram, 1960s). And this pattern continues: we now talk about the brain through comparisons to the internet and AI neural networks.

Our bodies and brains are incredibly complex systems. To comprehend them, we naturally turn to what we already understand — the most sophisticated technologies of our time — and we do so through metaphors and analogies. As designers, we dedicate substantial time to crafting the perfect visual metaphors that make the novel and unexpected feel familiar and approachable. But how do metaphors and analogies work? And what makes a good metaphor?

Why we love metaphors

At its core, metaphors perform a simple yet profound function: they bridge the unfamiliar with the familiar. They connect what lies beyond our grasp to what we already know intimately. And what could be more intimate than our direct experience of the world? One of metaphors’ most powerful aspects is their ability to bring distant concepts within reach of our human scale.

diagram of the human experience of space and time in the physical world represented as a small section of the total area of the visualization

Human experience of space and time in the physical world.

Source: Eureka, Physics of Particles, Matter and the Universe

Our brains, shaped by millions of years of evolution, excel at perceiving and understanding the world at the scale of our bodies. Research has demonstrated that the further we move away from our human scale, the less accurate our perception gets. From the microscopic scales of atoms and nanoseconds to the macroscopic scales of galaxies and millions of years, everything outside our human experience seems complex, abstract, ungraspable.

In response, we instinctively map these complex and abstract concepts onto embodied experiences. “We are able to think about how time passes via our implicit understanding of how we and other objects move through space. We are able to think about degree of familiarity and intimacy in relationships in terms of physical proximity.” Metaphors translate abstract ideas into perceptions. They turn what we can think into what we can feel.

Paper, touch and parrots

Let’s look at a concrete example. How do you make a complex machine accessible to everyone? In computing, the answer has always been: find the right metaphor.

In the early days, computers were complex calculators that only specialists could operate, with users submitting punch cards and waiting hours for results. The breakthrough came in 1970 when Xerox — a photocopier company — established PARC, a research center dedicated to exploring the paperless future, and tasked it with an ambitious mission: making computers accessible to everyone, even children.

The team found their answer in a simple metaphor: paper. They noticed how naturally office workers handled documents — moving them, stacking them, filing them away. This observation became the foundation for the modern computer interface. They created visual representations of familiar items: a desktop surface, folders, a trash can, files you can pick up and move, and overlapping windows that mimicked papers on a desk. Drawing from research on how children learn through physical manipulation, they designed an interface that felt as natural as arranging items on a desk. They recognized the power of metaphors to rewire our brain.

The paper metaphor creates clear affordances: when you see a folder, you know you can put things in it; when you see a trash can, you know you can throw things away; when you see a window, you know you can move it around. The metaphor didn’t explain the computer — it made it immediately clear how to use it. And ironically, paper became the primary metaphor for the paperless world.

screenshot of xerox alto

Xerox Alto, the very first computer using the paper metaphor. Screenshot of Smalltalk GUI, copyrighted 1980. Courtesy of PARC.

Today, as Dan Saffer writes: “no one addresses his computer without some metaphoric mediation.” This is still true in smartphones, where we’ve developed a new language of interaction: we pull to refresh, swipe to dismiss, pinch to zoom. The evolution of these metaphors mirrors how our relationship with technology has changed. We’ve moved from the paper metaphor to a physics inspired interaction model.

This shift from professional to physical metaphors parallels computing’s journey from office tools to personal devices. While desktop computers still use the paper metaphor, smartphones have adopted a language stripped of most cultural or professional context. By embracing simple physical metaphors, smartphones achieved something remarkable: they made computing accessible to a much wider audience. This suggests how universal metaphors, when well chosen, can help make complex systems accessible.

Today, we face this challenge with Generative AI and Large Language Models — some of the most abstract and complex technologies of our time. Confronted with these new tools, we instinctively reach for familiar frameworks: some define LLMs as sophisticated copying machines (stochastic parrots or blurry JPEG of the web), others envision them as crowds, inscrutable gods, aliens, or even food. These metaphors help bring seemingly magical technologies within our grasp, but each offers a different way of understanding them. When we see AI as a copying machine, we focus on its limitations; when we see it as a crowd, we think about emergence; when we frame it as an alien intelligence, we contemplate its otherness. As Sean Trott observes: “our choice of framing is exerting a subtle influence on the direction of our thought.”

What is the right metaphor?

To understand computers, smartphones or LLMs we need the right metaphors. As Stephen Jay Gould writes: “We often think naively that missing data are the primary impediment to intellectual progress, just find the right fact and all the problems will dissipate, but barriers are often deeper and more abstract than thought. We must have access to the right metaphors, not only to the requisite information. Revolutionary thinkers are not, primarily, gatherers of facts but weavers of new intellectual structures.” Beyond technical specifications and capabilities, it’s the metaphors we choose that will determine how we understand and use these technologies.

However, some metaphors we assume to be universal are actually deeply cultural. Let’s look at time. English speakers conceptualize time spatially with the past behind us and the future ahead. But Aymara speakers from the Andes use a completely different framework based on visibility rather than direction of movement. For them, “The past, visible, thus stands in front of the speaker, while the future, unseeable, looms behind.” Their gestures match this perspective: they point forward when discussing the past and backward for the future.

Other cultures use entirely different spatial metaphors. Yupno speakers in New Guinea conceptualize time in relation to the mountains around them: the future flows uphill while the past flows downhill. Perhaps most surprisingly, researchers found that Tupi-Kawahíb speakers in Brazil appear to organize time without using spatial metaphors at all, challenging the assumption that time-space mapping is universal.

These examples reveal that metaphors we assume to be universal are often deeply cultural. When creating new metaphors, we should question our assumptions about what’s intuitive or universal, recognizing that different metaphors might serve different communities better.

Finite and infinite metaphors

Let’s go back to our brain metaphors. In the late 1800s, while many scientists were embracing mechanical and electrical metaphors for the nervous system, the Spanish neuroscientist Santiago Ramón y Cajal proposed a radically different vision. Having grown up in the Spanish countryside, he saw the brain not as a fixed machine but as a living garden — neurons were trees that could grow and branch, axons were climbing vines, and dendrites were delicate flowers blooming in the cerebral forest. This wasn’t just poetic language. Cajal actively rejected the dominant telegraph metaphor of his time, arguing that a rigid network of wires contradicted what he observed: the brain’s remarkable ability to change and adapt. His garden metaphor captured something fundamental that mechanical metaphors couldn’t: the brain’s plasticity, its capacity for growth and transformation.

sketch of a neural network by cajal

Purkinje cell drawn by Cajal from the human cerebellum at the back of the head, which regulates balance for walking and standing. Courtesy of Cajal Institute, Cajal Legacy, Spanish National Research Council

The brain is neither truly a garden nor a machine (whatever the most advanced machine of our time is). But while the machine metaphor attempts to map a complex unknown (the brain) to a complex known (the machine), the garden metaphor shifts entirely our vision of what a brain is, and I would argue that it also shifts our vision of what a garden is. This gives us another insight into what makes a good metaphor: they don’t just map one object to another but rather shift our perspectives on both objects.

James P. Carse develops this idea in the fascinating Finite and Infinite Games: “It is not the role of metaphor to draw our sight to what is there, but to draw our vision toward what is not there and, indeed, cannot be anywhere. Metaphor is horizontal, reminding us that it is one’s vision that is limited, and not what one is viewing.” This shift in perspective is the reason why some metaphors are so powerful.

We need metaphors to expand our thinking, to look at the world in new ways. Much like scientific models, I see metaphors as frameworks through which we perceive and analyze the world. Some metaphors might be models yet to emerge, offering glimpses of new ways to structure our understanding.

In conclusion, we often think we use metaphors to explain ideas, but I believe good metaphors don’t explain but rather transform how our minds engage with ideas, opening entirely new ways of thinking. When crafting metaphors or communicating complex ideas, our role isn’t really to explain what exists, but to cultivate spaces where new understanding can bloom.

]]>
Articles Design & History
<![CDATA[Taste]]> https://www.doc.cc/syntax/taste https://www.doc.cc/syntax/taste Sat, 24 May 2025 19:43:49 GMT

Taste

On subjectivity, gatekeeping, and the risk of undefined words.

“There is a time for any fledgling artist where one's taste exceeds one's abilities. The only way to get through this period is to make things anyway.” ➪ Gabrielle Zevin, Tomorrow, and Tomorrow, and Tomorrow

Taste is not purely subjective

People think taste is subjective, until they start to design things. Paul Graham argues that most people keep their thoughts on taste as unexamined impulses, starting from childhood. When they like something, they have no idea why—it could be because their friends like it, because it’s fashionable, or because a movie star uses it. Once they become designers, they start to realize the relationship between taste and good design. We all need to examine taste more objectively.

google home speakers in different pastel colors

Taste is the ability to identify quality

To understand quality, we need to look critically at: materials that are fit for purpose, ergonomics that consider audience needs, effective use of affordances, usability, accessibility, harmonious color choices, aesthetic choices that elicit emotion, intentional visual hierarchy—amongst others. Taste is in the observer, quality is in the object. The concept of taste becomes more productive when framed objectively around quality, and in ways that are measurable or at least comparable.

Taste is also curation

It is the ability to look at a wide scope of possibilities and choose with focus. In a world where Netflix launches hundreds of shows with questionable quality every month, film studio A24 has built their own brand based on supporting fewer movies that have a higher chance of (commercial and artistic) success. Taste, as a skill, is not exclusive to creators.

“My theory is that, as in chess, ‘taste’ is simply the ability to draw on patterns and experience to help us choose better candidates for analysis. An experienced designer doesn’t waste time on clearly ineffective solutions: typographically poor designs, bad colour choice, or unusable interaction metaphors. It follows that taste is learned, not innate. Experience, exposure, and practice give us patterns that suggest which solutions might fit which problems.” ➪ Cennydd Bowles

Taste can be developed

As with any other skill, the best way to develop taste is practice: exploring new paths, taking risks, and making mistakes. Over time, we start to build a repertoire of things that work better than others—and most importantly, why.

Taste is not a matter of personal preference

I might prefer modernist architecture, you might prefer gothic architecture. Personal preference is not that relevant in design—you’re designing for a brand that has a specific aesthetic, and for an audience whose preferences might be different than yours. 

brauns and garmins watches side by side

Sometimes you’ll need to design products that don’t match your personal aesthetic preferences.

Fashion changes, taste doesn’t

Visual trends will always change (not only over time but across cultures). Think about how software design has evolved from skeuomorphic to flat, or even shorter-lived trends such as glassmorphism or bento grids. Someone with a developed taste will know how to appreciate good quality beyond the latest trends.

“Thinking about fashion as language allows you to appreciate the many and varied aesthetics in the world, and negotiate between personalization and ‘following the rules.’ There are rules in grammar, but also a great ability for personal expression (e.g., slang, ee cummings, etc).”
➪ Derek Guy

Taste separates humans from machines

In an AI-powered world, it’s never been easier to produce reasonably well-executed outputs in a short amount of time. When execution becomes commodity, developing taste to know what to create becomes a crucial skill.

Taste should not be used as gatekeeping

Watch out for people using “good taste” and “bad taste” as exclusionary terms. When someone says “you need to have good taste” but doesn’t break down what taste means in an objective manner, they might be using it as an excuse to exclude people. Historically, taste has been an elitist concept and intentionally kept blurry and subjective to prevent people from accessing certain places, groups, or opportunities.

“Everything that is beauty and standard in design is in support of the system. It’s an aid in reinforcing a constantly evolving (but not so much) mono-aesthetic. It participates in a gate-keeping of taste, making us all want and like the same things.” ➪ Margherita Sabbioneda

Taste has to be demystified

As designers, we have to start objectively defining quality: what does quality mean for our company, our team, and our own careers as designers? How can we make design education less focused on process, and more focused on quality? When we eliminate blurry words and subjectivity from the conversation, our individual taste skills can finally be used to reach that shared quality goal.


]]>
Design & Industry syntax
<![CDATA[Craft]]> https://www.doc.cc/syntax/craft https://www.doc.cc/syntax/craft Sat, 24 May 2025 19:41:41 GMT

Craft

On dedication and love for the invisible work.

Cover image from Xemrind

“If you can design one thing, you can design everything.”
Massimo Vignelli

Craft is making something with dedication, skill, and inventiveness

A person with solid craft will unwaveringly pour their time and attention into the thing being made. They will leverage their many years of experience of having made that thing before, and that existing knowledge will make them smarter and more efficient at doing so. But they will also be inventive: they won’t limit themselves to how they’ve done it in the past; they will actively look out for opportunities to push themselves into new territories and new ways of working.

Craft is not pixel-pushing

You might have heard or said before: “I'm not really strong at craft, I'm more of a strategic designer.” Our industry tends to put craft and strategic thinking on opposite sides of a spectrum, sometimes in an attempt to diminish the value of craft. Be careful not to fall into that trap. One can be strategic and deliver work that is well-crafted, polished, and thoughtful—isn't that what everyone should strive for anyway?

Craft can manifest in different areas of product design beyond UI

How you elaborate conceptual work by exploring a wide range of ideas, how you present your design by building clear rationale, how you manage your time by bringing collaboration and decisiveness at the right moment. In fact, you can showcase your craft even before you start showing any designs.

“It appears inevitable that all digital products must eventually trade craft for scale. (...) It’s simply so much easier to keep 10 people in sync than 10,000. Each additional person makes it harder for everyone to stay focused and in sync.”
George Kedenburg III, The Cost of Craft

Craft is hard to scale

Craft is more suited to small, focused groups of people who all share the same vision and the same level of care for what’s being created. Larger organizations put certain incentives in place that tend to deflect folks from focusing their time on craft. But hard is not impossible: in order to make craft scale, you have to be intentional about how you set up your company, your team structure, and your day-to-day process.

Care for craft needs to come from the top

You need people at the highest ranks in the organization to be looking at craft every single day. People who will not let quality drop. People who understand that how you do something is as important as what or when you do it. If company leadership doesn’t care for craft, chances are people at every level will overlook the details.


“When there is too much distance between where decisions are made and the day-to-day delivery, we risk not understanding what is truly needed.”
Rochelle Gold, Head of User Research, NHS Digital

Getting a seat at the table doesn’t mean abdicating your craft

In some companies, shifting to a managerial position (and away from the day-to-day craft) feels like the only path to growth. Senior individual contributors (ICs) are expected to become leaders, leaders are expected to become managers… But designing and managing are two very different jobs. When that happens, great designers see their day taken over by activities that have very little to do with what made them great designers in the first place. Still, another career path is possible.

Craft is noticeable in the tiniest of things

You can see it in every alignment someone chooses not to neglect when they’re building slides for a presentation. In the icons they choose for their user journey map. It’s in the careful choice of words for headlines; it’s in resisting the easy “lorem ipsum;” it’s in proofreading emails one last time before sending them. You can taste it when someone decides to spend an extra minute organizing the links they’ll share with you, so that you can save an extra minute. Regardless of your role, craft is an obsession for making things better—no matter how big or small they are.

“Nothing must be arbitrary or left to chance. Care and accuracy in the design process show respect towards the user.”
➪ Dieter Rams in Principles of Good Design

"It's just a nice little fun detail, so when the dinners see, they know someone spent a little time on their dish"
The Bear, S2 E7

Craft isn’t glamorous

In fact, it can be pretty boring. Craft doesn't require innovation or a big show, but the curiosity to consistently study your materials and practice your techniques. It's knowing that big moments are made of tiny ones. That every second counts. For a chef, it might be peeling the mushroom skin even if clients will barely notice. For a designer, it might be organizing files even if that work is not going to be celebrated. Everything we do informs how we approach our lives.

Craft is iterative

It gets better at every new version of your work, at every refinement you make. It gets better every time you put your solution in front of users and learn from them. The more you iterate, the clearer the solution gets.

You get better, too

When you compound iteration and experience over the years, you get better at your craft. In his famous book Outliers, Malcolm Gladwell repeatedly refers to the “10,000-hour rule,” asserting that the key to achieving true expertise in any skill is simply a matter of practicing it for at least 10,000 hours. Being a craftsperson is not about how much you know, but about how much effort you’re willing to put in.

“Practice means to perform, over and over again in the face of all obstacles, some act of vision, of faith, of desire. Practice is a means of inviting the perfection desired.”
➪ Martha Graham

Craft will never be replaced by AI

Instead, AI will empower us to push our execution to new, unimaginable places—removing a lot of barriers to what was technically possible. If you’re concerned about being replaced by AI, maybe you are still equating the value of your craft merely to the outputs you produce.

Craft is care

When you consistently put in the time, attention, and effort to make something the best version of what it can be, you’re showing how much you care. Craft is dedication and respect for the invisible work.


]]>
Design & Craft syntax
<![CDATA[Simplicity]]> https://www.doc.cc/syntax/simplicity https://www.doc.cc/syntax/simplicity Sat, 24 May 2025 19:41:26 GMT

Simplicity

On removing complexity to add meaning.

Image credit: Roland Shainidze

“I didn't have time to write a short letter, so I wrote a long one instead.”
➪ Mark Twain

Simple is harder than complex


As designers, we often aspire to create designs that are simple, sleek, and clear for users. But we have a long list of requirements to meet and features to add to our products. Adding is always easier than cutting back. We’re working with digital spaces which, in theory, can expand and grow to fit anything. There’s no limit to how much we can add, but there is definitely a limit to how the human brain comprehends things. Simplicity takes real thought.

Simplicity should be ingrained in every layer of our product

From aesthetic simplicity (how it looks), to logical simplicity (how it works), all the way to technical simplicity (how it performs)—those layers are intrinsically connected and work together to create a simple experience for users. If your product looks simple but takes long minutes to load, users likely won’t perceive it as simple.

Simplicity is repeatedly saying no before getting to a yes

Not everything listed in the requirements document needs to be included in the product, let alone become an interface element. Design isn't just making screens for every feature. Simplicity is achieved when you prioritize, merge, organize, reduce, combine, hide, gradually unveil, shorten, compress, and sacrifice. In The Laws of Simplicity, John Maeda talks about the idea of thoughtful reduction: that when in doubt, you should remove, and just be careful of what you remove.

“Living simply makes loving simple.”
bell hooks

Not everything needs to be visible at all times

The beauty of digital products is that we can play with the axis of time; we can build interactions that gradually and contextually reveal themselves to users. The folks at Signal vs. Noise argue that unless you’re making a product that does one thing (like a paperclip, for example), you have to make tough calls about what needs to be obvious, what should be easy, and what should be possible. More than simply removing things or saying no, it’s about HOW you say yes.

frame from the game Hades

Video games are famous for keeping the interface simple while relying on visual hints and low-consequence tutorials to progressively introduce mechanics and more complex operations.

Simplicity is not minimalism, it's coherence

Designers working on feature-rich products (a B2B dashboard, for example) tend to argue that simplicity is impossible. They are thinking of simplicity as a synonym for minimalism instead of coherence. An inconsistent icon, a different use of color, a slightly off-lexicon naming, or a misplaced menu item can do more harm than a multi-step task. With coherence, something can be robust and simple at the same time.

You cannot “add simplicity” to your product

Simplicity isn't a coat of paint. You cannot start your designs assuming the product will be complex, that it will cover a lot of ground. Start with the most important problem and the simplest solution. That is the core of your product; the one thing it needs to be excellent at. Then, and only then, build new features around it.

“A complex system that works is invariably found to have evolved from a simple system that worked.”
John Gall

Familiar is simple

Oftentimes, an interface “feels” complicated because your brain is trying to understand how it works and learn how to use it. Start with what people already know. And when adding something, make it feel like they may have seen it before, even if it's a distant or unexpected reference. Achieving simplicity is building familiarity: the delight of things happening as expected.

Things being “one click away” is not a valid measure of simplicity

That’s just a lousy argument others use to convince you to clutter up your homepage. It's a relic from the dial-up days when navigating from one page to another was a more time-consuming decision. If you’re uncritically accommodating every request from stakeholders, your interface will look chaotic, and the cognitive burden will fall on the user.

Words matter

If you’re aiming for simplicity, don’t overlook your writing. Headings and calls-to-action can always be shorter, more consistent, and less generic. If something is hard to explain in simple terms, that might be the symptom of a bigger issue.

screenshot from one of the first googles homepages

When Google Search launched in 1998, it was so simple and efficient that it changed people’s expectations of how a search engine should work. Over the years, Google’s offerings have expanded so widely that people rarely associate the brand with simplicity anymore.

“Design is as much an act of spacing as an act of marking.”
➪ Ellen Lupton

Simplicity lives in the gaps

In the white space between modules. In the negative shape around a sculpture, in the white page between chapters, in the stillness before a dance move. What you don't design is just as important as what you do.

masp building

MASP, by Lina Bo Bardi. The building's audacious simplicity makes it appear to float and creates a striking negative space beneath it—an area that's then used for public gatherings and special events.

Staying in the problem space for too long can be problematic

Designers can get in their own way by losing focus and getting lost in the problem space. When you try to solve everything, you end up not solving anything. The key to simplicity is finding the essence.

Technology has a cost

Tech is supposed to make our lives simpler, but it often creates complexities of its own. Nothing exists in a vacuum. Everything we add comes with other consequences: operational costs (production, maintenance, technical debt), personal costs (cognitive load, attention span, extra work for users), and societal costs (from energy consumption to the need for regulation). It’s our responsibility as designers to carefully evaluate technology so we don’t sacrifice too much for too little.

crv ad busy with illustrations of what many would consider a full life

This Honda ad positions the car as a simple solution to unlock everything else in life (life's meaning, it seems, hinges on ticking off an overwhelming series of boxes). What it doesn't tell you is that owning a car brings a whole set of complexities to your life (maintenance, inspection, taxes, insurance, parking) and to your community (pollution, noise, accidents, traffic, decline of public spaces, and so on).

“People try to do all sorts of clever and difficult things to improve life instead of doing the simplest, easiest thing: refusing to participate in activities that make life bad.”
➪ Leo Tolstoy

Simplicity is filling everything with meaning

When you put the time and thought to make something simple, you’re showing you care. Simplicity is important in every aspect of your work—not only in how you design, but how you collaborate, how you choose to communicate, how you organize your files, how you lead your meetings. A messy team cannot make a simple product. When you make simplicity a priority, you're not just making things easier. You're giving them meaning.


]]>
Design & Craft syntax
<![CDATA[Consistency]]> https://www.doc.cc/syntax/consistency https://www.doc.cc/syntax/consistency Sat, 24 May 2025 19:41:16 GMT

Consistency

On compounding patterns and the art of divergence.

Image credit: Maya Lin

“Design is the silent ambassador of your brand.”
➪ Paul Rand

Consistency is multifaceted

The word “consistency” gets thrown around quite a lot during design critiques, and it’s every designer’s obsession—especially for designers who work in digital products. Sometimes it's a rigid rule: "Does this match the library?" Sometimes it's high praise: "She's so consistent." Other times, it's just filler feedback: "Let's make it more consistent." We all nod along, but consistency seems to mean whatever the speaker wants it to mean at that moment. Really understanding consistency then becomes imperative to designers.

Consistency makes things familiar

Leveraging known, established UX patterns and sticking to them prevents users from having to learn net-new interactions and build net-new mental models every time they engage with a new product. Think about picking a date: whether it's for grocery delivery or a doctor's appointment, shouldn't that feel roughly the same? By using what people are familiar with, we more quickly meet their expectations of how something should work, saving them the time and effort of learning something new every time. Consistency isn't about blindly following rules; it's about respecting people's time.

A western calendar view is a familiar pattern. But for a date you know (like your date of birth) you don’t need a calendar view—since you’re not browsing for availability.

Consistency is good for business

From a cost perspective, working within the constraints of a design system is what allows products to scale quickly. Design systems are basically the assembly line, but for building software. Mass production hitting the digital world; repeatable, efficient way to build. Not only can products expand more quickly, but users can more easily navigate those ever-expanding spaces.

Consistency doesn't solve for bad experiences

A confusing button label. A form field that trips people up. A workflow that makes zero sense. If your product is consistent in its flaws or anti-patterns, you're not doing anyone any favors. Replicating a bad pattern consistently can hinder the overall product quality and perpetuate existing biases or systemic disadvantages for certain user groups. Being consistently bad is often far worse than being inconsistent.

The real power of consistency isn’t in uniformity; it’s in predictability.

Rigid adherence to patterns can stifle discovery

While a familiar, consistent experience is the goal, it shouldn't be a constraint that supersedes context. Designing around the current context is more important than obsessing over consistency just for the sake of it. A common example is this wave of AI products that still rely heavily on chat-like interactions: sure, chat is a familiar user pattern for a technology that can sometimes feel scary. But when does familiarity and consistency become a barrier to innovation?

A consistent UI isn’t as powerful as consistent principles

Users might be able to navigate around an inconsistent pattern and still get things done in your product, but when your brand’s behavior and principles are not consistent across channels, the experience can feel deeply disconnected. Apple’s settings UI can be inconsistent at times, but everybody recognizes the overall patterns and brand voice that makes Apple, Apple.

Duolingo could have done a simple list of the different lessons of a level. By simulating a boardgame, they leverage a familiar interface, while making the experience more unique and better aligned with their brand.

Consistency can't be bureaucratic

Don’t let discussions around adhering to patterns supersede discussions around solving real user problems. The most consistent thing a product can be is consistently useful.

Consistency is about making room for differentiation

Think about a jazz session: the band starts from a known scale and rhythm. One musician breaks through, improvising on top of that pattern for a few minutes before joining the band again. The band, the audience, everyone knows what is happening, when it starts and when it ends, because the foundation of it all is a consistent melody.

miles davis improvising on so what

Moments of silence and meticulously sparse notes are enough to make a rather methodical base feel unique and differentiated. (Miles Davis improvising on So What)

Choose consistency over intensity

Not only for your designs, but for your career as well. Consistency compounds. It’s about showing up, day after day, making steady progress and delivering value continuously. Consistency makes things last.

Consistency builds your stamina

The best professionals in any discipline have spent an immeasurable amount of time training, and practicing, and slowly improving themselves. This type of endurance shows in the quiet ability to simply keep going, to maintain focus over the long haul when others fade or burn out.

“Clarity trumps consistency. If you can make something significantly clearer by making it slightly inconsistent, choose in favor of clarity.”
➪ Steve Krug, Don't Make Me Think

Consistency is not about sameness

It’s not about churning identical copies of the same idea. It’s about having discipline and structure to how you approach the work so that when things go well (or wrong) you can more easily identify what contributed to it. Consistency is about always improving from the previous version, no matter what.

Consistency creates identity

Being consistent shapes who you are; it is habit-forming. Over time it becomes your brand, what you will be known for. When you frame consistency around your values, it makes you ask harder questions and highlights real opportunities.

Yayoi Kusama has an impressive, prolific body of work. Yet, one could easily identify one of her pieces.

Consistency is reliability

It doesn't mean everything will always look the same or get the same results, but the expectations, the trust, is never broken. Consistency brings calmness.

Consistency is making room for delight

As the user gets familiarized with consistent patterns and elements, they can successfully fill in the blanks: either by the logical association of known elements or by combining known elements in novel ways. As consistency compounds, we as designers start to open space for divergence and exploration.


]]>
Design & Craft syntax
<![CDATA[Concept]]> https://www.doc.cc/syntax/concept https://www.doc.cc/syntax/concept Sat, 24 May 2025 19:31:36 GMT

Concept

On finding the essence of your thinking.

What makes a chair a chair? This philosophy 101 question exemplifies how we are surrounded by concepts that are hard to define but easy to understand.

"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete."
➪ Buckminster Fuller

Conceptual models are abstractions of things in the real world—whether physical, abstract, or social

A map is a subjective representation of geographical space. An org chart is a map of the hierarchical relationships between people. A party is a social gathering of invited guests to celebrate something. A calendar is a visual representation of time. We are surrounded by ideas that we don’t often think about because they’ve been ingrained in our brains since childhood, and we are constantly relying on existing models to navigate the world around us and operate our daily lives.

Concepting serves as the bridge between imagination and execution

Professionals across various industries have historically relied on concepts to communicate and sell ideas. Concepts are very common in advertising, branding, marketing, graphic design, fashion, architecture, and many more creative fields. It all comes down to the human brain's innate attraction to narrative and metaphor and how we can leverage this to push ideas forward.

amazon logo with an arrow linking the a to the z and fedex logo with the arrow crated in the negative space between the E and the X

A strong concept starts with branding. Amazon promises to deliver any product from A to Z; FedEx's arrow conveys speed.

photo showing the costco hotdog for $1.50

A concept is also about business strategy. Costco's $1.50 hotdog shows its commitment to low prices.

redbull can and two pilots sponsored by redbull

Red Bull walks the talk. Their positioning of "pushing the limits" translates into their investment in extreme sports.

fashion show look and a product image from uniqlo showing similar  ideas

Fashion designer Clare Waight Keller explored airy and playful concepts that informed her Uniqlo collection.

"Every great design begins with an even better story."
Lorinda Mamo

A concept is strategy, visualized

Great designers are able to distill the essence of a strategy and transmute it (through a mockup, a storyboard, a sentence, a quote, a metaphor, or a story) into a form that stakeholders can grasp and embrace.

Conceptual models help users (and stakeholders) understand how a system works

In the context of digital product design, a concept can help convey the principles and functionalities of the system it represents. If I tell you I’m designing “a calendar for your emotions,” you immediately start to imagine what that app looks like, the types of actions you can take there, and how things are organized. All concepts start from a previous, familiar idea. Designers are often stitching together established mental models—the building blocks of our everyday understanding—to create something new.

Concepts are a tool for conversation

Although designers are often the ones creating visual representations of a concept, everyone on the team could (and should) participate in defining the conceptual model that the product will follow. This shared understanding is pivotal, as it influences every facet of the product—from feature set to visual language, from tone of voice to technical implementation. As a designer, the more you involve your stakeholders in that process, the higher the chances your product vision will come to life in a cohesive way.

Concepts should be simple to grasp

You know you've nailed a concept when you hear someone else articulating it in their own words, effortlessly and with clarity. The purpose is self-evident, leaving no room for ambiguity. Simplicity is not synonymous with ease, though; distilling complex ideas into their purest form is a skill that requires both mastery and restraint.

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
Antoine de Saint-Exupéry

Concepts build understanding

The visual language and visual affordances in a product play a pivotal role in shaping users' mental models of how they’ll interact with that product. A well-crafted interface leverages visual cues to guide users, making interactions feel natural and predictable.

screenshot of skeuomorphic apple calendar next to the current minimal ui calendar

UI initially leveraged the familiarity of physical objects. As people have grown more comfortable with digital products, visual concepts have evolved to be less skeuomorphic.

screenshot of a gantt chart
screenshot of a calendar with a heatmap showing days with more contributions done

Over time, calendar concepts also evolved to solve different needs.

Concepting is not just executing requirements

A checklist of requirements is not a design; it's a recipe for mediocrity. Design is about critical thinking. It’s about the things you decide NOT to include. Great designers know how to strategically question requirements, say no, and prioritize what really matters. The first step is to have a clear understanding of the problem you’re trying to solve. 

A flashy UI idea is not a concept

Designers often use the word “concept” when referring to fun, Dribbble-esque, motion-heavy UI ideas. But the UI is merely the surface layer. A true concept delves deeper, providing a clear strategic solution to a clear problem. A flashy UI creates the impression of a fully fleshed-out concept where there might not be one.

“There’s nothing worse than a sharp image of a fuzzy concept.”
Ansel Adams

A layout variation is not a concept

A high volume of UI options can create the illusion of choice but can actually hinder decision-making if they lack strategic differentiation. Strong concepts are distinct in their approach, addressing the problem from significantly diverse angles. When ideating on possible paths forward, designers should consider different framings for the problem, not different executions for the same solution. 

lotus temple photo

The Lotus Temple in New Delhi has a simple yet bold architectural concept.

Concepts should be bold

They should help others imagine a future no one thought was possible. They should spark imagination and defy convention. They should be radically simple, so they can force teams to make difficult decisions. If a product is trying to be everything for everyone, it risks becoming diluted and forgettable.

While concepting, focus on… concepts

There’s a time and place for different discussions. Oftentimes, in the absence of strategic arguments, peers might raise tactical commentary when reviewing conceptual work (the classic “have you thought about the empty state” comment). But while it's tempting to address every potential scenario immediately, tackling edge cases prematurely can derail the process and hinder the development of a strong conceptual foundation. Great designers know that they will be able to design for specific scenarios once the concept is solidified.

A concept preserves a product’s integrity

Products that start with a conceptual model have higher chances of remaining coherent as they evolve and as new features are added. Concepts act as a touchstone, reminding the team of the product's fundamental purpose and guiding decisions about future development. If you compromise on the concept, you kill the product.


]]>
Design & Craft syntax
<![CDATA[Process]]> https://www.doc.cc/syntax/process https://www.doc.cc/syntax/process Sat, 24 May 2025 19:31:01 GMT

Process

On building trust, not checklists.

“A bad system will beat a good person every time.”
W. Edwards Deming

Process is predictability

It is a series of actions carried out with the goal of achieving a particular outcome. It functions as a recipe or a set of instructions that guide you from start to finish. In the context of companies, process brings predictability and clarity to day-to-day work.

Process makes thinking visible

Having a structured way of tackling design problems helps increase visibility into how decision-making unfolds over the course of a project. Process also helps multidisciplinary teams create a shared language and define everyone’s roles more clearly.

Process helps companies scale

Having an established process allows heterogeneous teams to work in tandem. From a business perspective, it also helps mitigate risk—since decisions tend to be more thoughtfully documented and consistent. Even when something goes wrong, having a set process allows teams to understand where things derailed so they can course-correct faster.

Certain steps are about advancing the work; others are about achieving alignment

It’s important to understand that distinction and think critically about why the team is taking a certain step. Not everything will be solved with a workshop.

If your company is making strategic decisions based on FigJam sticker voting, something is fundamentally broken about your team’s vision.

Processes are meant to create moments of reflection, not checklists

The best teams are the ones that find a balance between following steps and embracing critical thinking. It’s okay to deviate from the standard path. In fact, many times, following the same steps blindly and without critical thinking means wasting everyone’s time.

Process is a billion-dollar industry

Design thinking, double diamond, agile, scrum, lean UX, design sprint. Humans seek formulas. Formulas sell books, classes, and courses, and feed multi-million-dollar contracts where consultancies come in to fix teams that are misaligned and can’t get work done. Watch out for Trademarked Processes™ that claim to solve all your company’s problems. Instead, find what works for your team and your users.

abstract diagram

It’s not because it worked for one company that it will work for every company

Process is dependent on infrastructure, headcount, team configuration, incentives, and overall company culture. Stop trying to implement Pixar’s process in a financial institution.

What worked in the past might not work now

Technology changes, company priorities change, and people come and go. Insisting on following a process that was defined five years ago by another leadership team is probably not going to fly. Make it a point to regularly get a pulse from the team about how things can be improved, and which parts of the process can simply be left behind.

You need to understand the process to be able to break it

Less experienced designers tend to stay on the known, prescriptive paths. With experience, as you master the process, you will feel more confident about when and how to push things forward differently. Design is not a formulaic discipline, so you will need to balance process with critical thinking to make decisions.

"The creative process is a process of surrender, not control."
Julia Cameron

photo of google founders sharing a single desk

You don’t need a lot of process when your entire company fits in a small office space.

Photo: Google

Beware of performative processes

Company incentives play a big role in how much people are tied to processes. Sometimes people will add “user interviews” or “cross-disciplinary workshops” to the process simply to build a compelling case for their performance review—regardless of how effective those will be.

Process doesn’t matter at all if output and outcome aren’t good

This is a classic problem in design portfolios: lengthy case studies where the designer shares a detailed breakdown of every single step they took and how “user-centered” and “data-driven” everything was—only to land on underwhelming designs and even more underwhelming results. 

Teams that talk too much about process might be avoiding talking about design

Sure, we can all be more diligent about “following the process”—but do we have clarity on where we’re trying to go? Is the vision clear, as defined by company leadership? Are we comfortable questioning the status quo? Beware of designers who hide behind processes to avoid making real decisions. The focus should always be on the work, how to push it forward, and how to make it better.

"You can't use up creativity. The more you use, the more you have."
Maya Angelou

abstract diagram

The process should serve good design, not the other way around

If design in your organization simply means following pre-defined steps and prompts, you could leave it to AI to solve it. Machines are way more efficient than humans at that stuff. Good design requires critical thinking.

The more the team trusts one another, the simpler the process should be

For companies, it’s safer to trust the process than to trust people—they implement strict, detailed processes only when they’re unable to build trust at scale for making important decisions. You don’t build trust by delivering rulebooks; you build trust when you deliver good design.


]]>
Design & Craft syntax
<![CDATA[The unbearable lightness of big tech]]> https://www.doc.cc/articles/the-unbearable-lightness-of-big-tech https://www.doc.cc/articles/the-unbearable-lightness-of-big-tech Thu, 24 Apr 2025 18:11:38 GMT

The unbearable lightness of big tech

Written by Joanna Weber

three men with clouds over their heads

When the Agile Manifesto was inked in 2001, it was supposed to spark a revolution, and it did: by 2023, 71% of US companies were using Agile. The simple list of commitments to collaboration and adaptiveness branched into frameworks such as Scrum and Kanban.

“Agile” was about having a responsive mindset, not about which process you followed, but it became about which process you followed.

Agile was designed for engineering teams but spread to whole companies. Scaled frameworks emerged to coordinate Scrum teams, with a sprawling training and certification industry. In 2022, the enterprise Agile transformation industry was predicted to reach $142 billion by 2032.

When asked their goals for these transformations, 83% of managers said “fast deliveries to customers”—and that’s where the trouble started.

Product teams for end-to-end delivery

Scrum@Scale cites Harvard research that the optimal team size is four or five people, therefore a Scrum of Scrums Team should be four or five teams of four or five people:

“As a dynamic group, the teams composing the Scrum of Scrums are responsible for a fully integrated set of potentially shippable increments of product at the end of every Sprint. Optimally, they carry out all of the functions required to release value directly to customers.”

The Scrum Guide agrees that “Scrum Teams are cross-functional, meaning the members have all the skills necessary to create value each Sprint”, and also describes the team as “small”.

Whether or not the guides specify that the “unit of value” must be a product/feature, it has been widely interpreted as such: in practice, most product teams consist of a product manager, product marketing manager, engineers, and, occasionally, a designer. The expectation is that those five people should be able to do everything that is required to bring that product to market, and that every team member will eventually learn how to do everything.

A 2021 study by the University of Gothenburg said that “tech workers over 35 are considered old in the industry,” but I only know one person who can design and code to anything like a professional standard, and she’s even older than I am.

Learning takes time.

I have an M-shaped profile — deep skills in several areas honed over three decades in a variety of industries. I have had a little bit of training in scripting, designing, and many other business skill areas, but no sensible company would ask me to design a logo for their flagship product, or push code directly to production. There are only two or three skill areas where they would expect me to deliver professional work, and those have usually been the roles I have been hired for: jobs created once management agreed that nobody already there could do the work well enough and/or had the capacity to do it.

Companies, it turns out, need specialists.

Specialists in a generalist world

The most popular frameworks were developed by engineers who evidently didn’t spend much time considering anything that isn’t software development. Perhaps that’s not surprising: in the previous two companies I worked for, engineers had little contact with the rest of the company. At the last, software development was mostly outsourced to teams in Manila, and the UK-based product owner would have daily standups at 7am before a day of meetings with other managers. In other words, the developers had no contact whatsoever with “the business”. At the company before, developers worked with business analysts who would meet with product marketing managers who would coordinate with every other business function. I had been there fourteen years before I ever even met a developer.

It is logical, therefore, that every non-engineering function is described with an airy wave of the hand — “supporting functions go over there” — with the same abstract regard for “everything else” that enterprise companies have had for software development in the past.

Perhaps the only framework to really consider it is SAFe, the widely-reviled rebranding of RUP which does, at least, acknowledge “the business” but frames it in a baffling and overly-prescriptive set of processes.

Where, in the other frameworks, user research is either absent altogether or assumed to be the work of the five team members, SAFe at least has the familiar “double diamond” explicitly in the framework, so we’ve gone...

From:

just shove a few lines of code out there and see who bites

To:

make some attempt to figure out user needs and run a few prototypes before launching it.

None of these frameworks consider the actual things required to run a successful large business for very long.

The things we have forgotten

Strategy guru Roger Martin recently complained that most business schools no longer teach the foundations of business strategy, as defined by Michael Porter. Porter’s teachings can be summarised with a question:

“What can we profitably offer which customers need and that competitors will find hard to imitate?”

Those three core elements of strategy require three sources of knowledge:

01

What customers need — a deep enquiry into the motivations, habits and behaviours of (potential) customers, segmented to discern which would be the most lucrative to target.

02

What else customers might use — a sound understanding of the competitive landscape including comparators (substitute products that fulfil the same need — like how McDonald’s competes with both Burger King and a homemade sandwich to fulfil the need of hunger), and the capabilities of each rival.

03

What we can profitably offer — a strong knowledge of your own capabilities and resources, so that you can choose to offer something that is easy and inexpensive for you to provide but would require heavy investment for a competitor to match, dissuading them from trying.

Porter created two widely-used strategy tools to work these things out: the Five Forces model, and the Value Chain model.

To answer the core questions that inform the Five Forces analysis, an important business function has traditionally been present in companies, but has all but disappeared in the startup world: Market Research.

Given that almost every strategic decision will be based on data about user needs and market dynamics, most companies in the past agreed that this work was too important to be conducted by an untrained generalist. Market research is a highly professionalised industry with nationally-accredited qualifications on a par with accountancy: product managers in large companies don’t do their own corporate taxes, either.

Consumer research, product design and data analytics sit in the lateral ‘technology development’ channel in Porter’s model, just as recruitment and training flow along the HR channel.

In traditional companies, market researchers conducted the bulk of consumer research, which was why it was initially OK for designers without those specialist research qualifications to conduct some evaluative user research: it was low stakes stuff.

When innovation labs sprang up in the wake of The Lean Startup, they were small teams with a limited budget who could freely experiment outside of the usual checks and safeguards because the work they were doing was outside of the products on which the business depended for income. It was OK to fail, and most new products did.

It was only later that the innovation-lab model became a substitute for the traditional value chain (and Lean UX a replacement for market research), and with it came teams conducting work that they are not qualified to do and making business-critical decisions based on the results.

The problem with growth

When you start a company, one of three things will happen:

01

You will fail within the first year. This is the fate of most businesses;

02

You find a stable fit as a boutique company with under 40 employees. According to Daniel Priestley, once you have more than 40 employees, you have to grow rapidly or you will buckle under your own weight;

03

You become a large company. Large companies operate very differently to small companies and, according to Forbes, most companies that grow very rapidly fail.

Look back at the Scrum team diagram at the top of this page and the Value Chain model further down the page. They look very different, don’t they?

When you have only five people in your company, it’s OK to make a few mistakes and be a little bit scrappy — nobody has any particular expectations and, for these fledgling companies, those first few customers don’t mind very much if it’s all a little rough around the edges. You don’t have to worry about big, expensive market research projects because you know your customers by name.

Even so, the disproportionately large number of startup founders with MBAs (Stanford alone has launched over 1,500) have learned about the importance of good-quality customer data, and about “the business” — the winning combination among top unicorns, it seems, is where the founders are engineers but their first hires are “business” people.

When you have five thousand people in your company, customers expect polished performance and security. If you screw up in a five-person startup, you will have already accepted the risk of starting a company. If you screw up in a five-thousand-person company, the mail guy might lose his house.

Venture Capitalists aren’t so concerned with screw-ups if the money is rolling in. Since the failure rate is so high, they push startups for growth-at-all-costs, often crushing them in the process. A few years back, Fortune noted that two-thirds of the fastest-growing companies had failed just five years later, because they didn’t have any system in place for selling new products to new customers.

Hypergrowth companies are rarely profitable for a very long time, and by 2021, “ran out of cash” replaced “no market need” as the number one reason for failure.

Aligning teams around products rather than segments makes it harder to nix the duds. Imagine if Boots No7 had segment-based skincare product teams for women aged 15–25, 25–35, and so on. I don’t know their structure, but they seem quite happy to cannibalise their own products within each segment. In segment-based teams, a single product manager and product marketing manager would look after all skincare products for women aged 25–35, and so on, developing a very deep understanding of that segment.

If you’re aligned around segments, you only need to do a needfinding research project once, and can cover subsequent research with regular lightweight testing, since you already understand the needs of that segment for any other products, unless enough time has elapsed that those needs have substantially changed.

If each individual product has a team, that’s a very expensive way to organise people — especially given that over 90% of products fail. Every time you want to introduce a new product, you have to restructure your teams.

Since maintaining teams is so expensive and so many products fail, the idea is to ship as quickly as possible, to “get value to customers” without any real sense of what that value is.

Product-based teams don’t value good quality user research because nobody has ever told them that they should.

Twenty years ago, “waterfall” project management meant development cycles that could take years. A six-month market research project would establish the user needs, which would be developed into a comprehensive list of requirements, and those customer needs would not be revisited until the product had launched and the satisfaction research took place, which served as the post-hoc attempt at usability testing.

Now, waterfall has become a myth, and it is the needfinding part that’s neglected, with UX focused on evaluative research (often, also, after the product has been built).

With the approach to user testing being “let’s launch the product and see if anyone buys it”, the thing that nobody wanted or needed is shipped in virtually unusable form, and then begins a frantic rush to try to tweak it into a format that works. Six-year cycles became one year, then, eventually, two weeks. Everyone is rushing around with no idea where they’re headed.

That frantic approach can hide a multitude of sins. Airbnb and Uber are hailed as innovations when they didn’t really invent anything: Airbnb started as a company to let out a spare room (my sister did that back in the 90s), which then quietly pivoted to be a holiday cottage rental company (an idea that The Beatles associated with old people). Uber started out as a ride-sharing company (which originated in the 1940s) and then quietly pivoted to be an unlicensed taxi firm: if your startup idea doesn’t work, just scrabble around for an older idea and call it new, after removing any safety checks that might have made it expensive.

Through frantic rushing around, Cyberpunk 2077 managed to claw its way back from disaster, but Mass Effect: Andromeda was not so lucky. Both were multi-year projects where each new part could only be developed once the previous part was done (i.e. waterfall) but with the rudderless, chaotic approach of a modern startup. EA takes things further with The Sims — sticking to the most recent game for a full decade and chucking out new content every few weeks, which is usually buggy, but eh, no worries, we’ll patch it later.

I’ve even seen this attitude on job adverts: “must be willing to ship something imperfect to refine through post-launch patches”. The point of a Definition of Done is to agree, in advance, that the thing ought to bloody work!

No checks, no balances

Risk management, like market research, is largely forgotten.

In 2024, CrowdStrike’s outage is estimated to have cost firms $5 billion. CrowdStrike workers wear many hats — there’s no single user research function, nor a testing one; it’s just part of someone else’s job.

On July 19, a bug in CrowdStrike’s cloud-based testing system — specifically, the part that runs validation checks on new updates prior to release — ended up allowing the software to be pushed out “despite containing problematic content data.” To fix it, 8.5 million individual devices had to be manually reset. CrowdStrike has since introduced the kind of testing procedures that should have been present all along, including more control for customers over software updates.

Just a few days later, Microsoft Azure was brought down by a DDoS attack, which it then admitted had been exacerbated by “an error in the implementation” of the security protocols that were meant to defend it. Sean Wright, head of application security at Featurespace, said that the incident “highlights the importance of testing software thoroughly”.

Delta Air Lines lashed out at both companies, saying that their negligence had cost it $500 million:

“If you’re going to have priority access to the Delta ecosystem in terms of technology, you’ve got to test this stuff,” Bastian said. “You can’t come into a mission critical 24/7 operation and tell us we have a bug. It doesn’t work.”

When risk registers and IT governance are no longer routinely part of the process, it just doesn’t get done properly. I saw a comment on LinkedIn the other day that, after laying off all the software architects, companies are scrabbling around trying to hire them again.

Enshittification, and plain bad service

Cory Doctorow coined the term “enshittification”, which the American Dialect Society deemed its 2023 Word of the Year.

To understand enshittification, we first need to understand customer centricity. According to Wharton’s Peter Fader, “customer centricity” isn’t simply being nice to customers, but understanding their desires and habits so thoroughly that you can work out what’s valuable to them, calculate how much they’re likely to spend, and orient your business around the most profitable customers.

Enshittification is corporate centricity — trapping the customer into a monopolistic system and then taking Porter’s Five Forces model and shafting everyone in that diagram.

Doctorow uses the example of Facebook: it locked end users in via network effects (you’re there because your friends are there), then locked in advertisers by selling user data for cheap ads, then locked in publishers with “recommended” content that was shown to users whether or not they wanted to see it, then squeezed those users and advertisers and publishers to return the money to shareholders. Now everyone has a miserable time — you don’t see your friends’ posts, advertisers pay more for ads that are viewed less, and publishers have their content held hostage: pay up, or we won’t show your content to anyone.

The word “content” says it all. Content. Not art, not education or thoughtful discussion, just vapid filler. In 2016, 64% of web traffic was non-human. In 2023, we talked about “sludge”, or vapid sensationalist filler (the type responsible for last week’s riots). In 2024, a new word, “slop”, was coined for vapid filler generated by AI: much of the internet has been written by bots and is read by bots.

You couldn’t make it up.

Half the time, the makers of these “services” aren’t intentionally being malicious; they just genuinely don’t understand the needs of their users or how businesses are supposed to work. The New Yorker ran a story (July 2024) about Spotify, and how its new user interface is attracting ire.

“Diving deep into a particular artist’s discography requires scrolling through “Popular” tracks, “Artist Picks,” and “Popular Releases.” On Spotify, “the entire concept of an album feels more like a hindrance than anything.” Music on the app is most easily consumed in a disorganized cascade; every song becomes audio “content” separated from a musician’s larger body of work. In short, Spotify does not seem to care about your relationship to “your” music anymore; for long-term users, this has felt like a slow-motion bait and switch.”

When the UI makes it harder to access albums, over time, they become “less important as units of online listening”. If Spotify succeeds at turning us all into passive listeners, then it doesn’t really matter which content the platform licenses. As design professor Jarrett Fuller put it, “It’s about ‘How do you get through as much music as you can so you keep paying for it?’”

Everything is about “engagement”, but not so much “enjoyment”.

The decade-old complex SAFe framework

Milton Friedman has a lot to answer for

Of course, we shouldn’t be surprised by this — for at least two hundred years, a pendulum has swung back and forth between “profit at all costs” and “hey guys, maybe we should stop being such assholes”. Adam Smith’s factory management theories gave way to Keynesian safety nets, which in turn led to Reaganomics.

In 1970, economist Milton Friedman declared that “the social responsibility of a business is to increase its profits”. By 1984, R Edward Freeman had countered this with Stakeholder Theory — the idea that a business is only sustainable long term if every stakeholder feels like they’re getting a good deal.

Exemplifying this approach are B Corp companies — for-profit companies that meet certain social and societal standards. Examples include Ben & Jerry’s ice cream, Patagonia apparel, and Charlie Bigham’s foods.

These companies run counter to the prevailing economic trend, that of laissez-faire capitalism, which spread under the tenure of Chairman of the Federal Reserve, Alan Greenspan. Greenspan was a disciple of Atlas Shrugged author Ayn Rand (darling of both US Conservatives and the Church of Satan) whose philosophy is basically, “screw you guys, I’m going home”.

Under Greenspan, loans were irresponsibly foisted upon people who could ill afford them until the whole system collapsed in 2008, after which the government bailed out the banks and implemented austerity measures on everyone else because, well, screw you guys.

Under this system, wages have been suppressed and America’s 1% have siphoned $50 trillion from the bottom 90%, trapping people into jobs where they work more than almost everyone else in the world despite having lower productivity than almost everyone else because, after more than about 50 hours of work, you’re just too exhausted to do a good job. (Most European countries have settled on around 35 hours for optimal performance.)

How is all this killing companies?

Companies don’t lay off tens of thousands of staff because they’re doing just great.

Eventbrite laid off 11% of its staff in addition to the 8% it laid off in 2024. It posted $85 million in revenue this quarter, an increase, but reported a 6% decline in ticket sales. In other words, revenue was not the most important indicator. Peloton laid off 400 people following worse-than-expected figures since the world “returned to normality” after a pandemic-related home exercise boom. Dell, like many other companies, is laying off workers to rush to AI in a move that most people would call “really stupid”.

I once sat in a user interview where a very eager engineer showed off a new AI tool that hallucinated on its first attempt during the demo, to the bemusement of the customer. It is not good enough, and companies are entrusting it with their futures.

The poor performance of AI is disappointing to companies that saw it as a panacea to their problem of hiring workers — the thing they’d do almost anything to avoid doing — because tech companies have not yet realised, unlike sports teams or almost anyone else, that the only real value they have to anyone, anywhere, is tied up in the people they employ.

We call them “dependencies” or “resources”, but what we mean is “people”. We know Shaquille O’Neal is valuable, we know that Tom Cruise is valuable, but we seem shocked when we find out that the only value that our company holds to consumers is in the quality of people who work there.

When you’re exhausted and scared of losing your job, it’s hard to work to an optimal standard.

An inevitable result is the decline in customer service: almost half of customers report that they feel customer service has gotten worse in the past three years and Qualtrics estimates the cost to companies of bad customer experience as around $3.7 trillion.

What we can do

Traditional waterfall approaches did not consider that user needs might change, or that it was impossible to have all of the real requirements upfront.

Agile frameworks don’t typically consider that any business that survives beyond a year will have more than 5 or even 50 employees, and that the very reason specialist roles exist is because generalists are not good enough at any of those roles to do it well enough to meet the expectations of enterprise customers.

None of the frameworks, new or old, are fit for purpose in the modern age. We need something else, something not seen before, which acknowledges both the work that Porter et al did in mapping out corporate dynamics and acknowledges that plans will (and need to) change at very short notice.

We need a system which acknowledges: this is what we need to know about customers and competitors; this is what we need to understand about our own capabilities, and this is what we need to understand about what it takes to develop something wholly new and genuinely useful within this space.

If Scrum only worked for as long as there were waterfall systems in place to support it, we need to replace both with something that both acknowledges and improves that reality.

Something that provides a win-win-win for all the stakeholders in the equation, rather than ruthlessly exploiting everyone and making them miserable.

Something that acknowledges the value of the humans in the equation, and the systems that they need to have in place to do their best work fearlessly.

We don’t seem to have this right now, but I’m far from the first to call for it.

Learning from the past to change how we work

It's unlikely that a designer will single-handedly change how a company works. However, the current scenario presents an opportunity for us, designers, to rethink how we work by investing in a new way of operationalizing design. Just as startups disrupted bigger companies with their nimble approach, designers can pave the way to experiment and show how we can bring customers back into the equation. How can AI address some needs of the business while opening opportunities to see the market differently? How can a business rethink product-market fit by approaching the design process differently from the status quo?

We can push back on the reckless and sloppy shoehorning of AI into every product just to leap on a bandwagon. We owe it to customers to keep them safe, and we gain satisfaction from meaningful work, so let's commit to leveraging AI to make more meaningful products for our users, adding capabilities only where it genuinely provides value, with some good old-fashioned risk management.

We can use automation to surface information to provide better customer service, to flag risks when certain thresholds are breached, and to visualize patterns in information to guide us in making improvements.

Most companies don’t fully understand their value streams, so let’s start there, by mapping the full end-to-end customer journey, on and off the screen. Commit to quality control and risk management, just as we did in the past. 

Commit to customer centricity, not corporate centricity, since the latter isn’t working out so well for anyone right now.

We changed the world before. We can do it again.

Works cited

  1. Agile Manifesto
  2. State of Agile
  3. Enterprise Agile Transformation by Allied Market Research
  4. Agile adoption by Parabol
  5. Scrum at Scale
  6. Teams by Less Works
  7. Ageism common in the tech industry by U. of Gothenburg
  8. Project management in RUP by Lambertsen
  9. 5 essential questions to craft a winning strategy by Lenny's podcast
  10. What is strategy? by Porter
  11. The Lean Startup by Eric Ries
  12. 24 Assets by Priestley
  13. How to manage a hypergrowth company by Ricard
  14. Why two thirds of the fastest-growing companies fail by Lidow
  15. The top 12 reasons startups fail by CBInsights
  16. The waterfall myth by Alleman
  17. When I'm sixty-four by The Beatles
  18. Dynamic ridesharing by Oliphant and Amey
  19. Uber loses licence to operate in London by BBC (2019)
  20. How Cyberpunk 2077 clawed its way back by Fenlon
  21. The Story Behind Mass Effect: Andromeda by Schreier
  22. The Sims 4 Laundry List by DelGreco
  23. What's the definition of done? by Scrum.org
  24. We finally know what caused the global tech outage by Fung
  25. Remediation and guidance hub by CrowdStrike
  26. Microsoft confirms new outage by O'Flaherty
  27. Delta CEO lashes out at CrowdStrike by Isidore
  28. What is a risk register by Asana
  29. What is governance? by IBM
  30. Who is a software architect? by altexsoft
  31. Enshittification by Doctorow
  32. Customer Centricity by Fader
  33. The rise of sludge content by Prem
  34. The real story of the news website accused of fuelling riots by Spring
  35. Slop by Willison
  36. Why I finally quit Spotify by Chayka
  37. The social responsibility of business is to increase its profits by Friedman
  38. The Top 1% of Americans Have Taken $50 Trillion From the Bottom 90% by Hanauer
  39. The case for a shorter workweek by BBC
  40. Long hours at the office could be killing you by The Conversation
  41. Bad customer service cost more by Hyken
  42. The death of product development as we know it by Julie Zhuo
  43. Declining ROI from UX design work by Jakob Nielsen
  44. Unbearable Lightness of Being image on Wikimedia
]]>
Articles Design & Industry
<![CDATA[Our human habit of anthropomorphizing everything]]> https://www.doc.cc/articles/anthropomorphizing-everything https://www.doc.cc/articles/anthropomorphizing-everything Sat, 08 Feb 2025 15:31:12 GMT

Our human habit of anthropomorphizing everything

Should we be anthropomorphizing AI?

Written by Daley Wilhelm

Art direction by Manoel do Amaral

A grayscale image featuring a robotic hand reaching out to touch a human hand, with the text "HUMANIZING AI" overlaid in white.

Photo (edited)

Humans anthropomorphize everything. We assign human traits and emotions to animals, inanimate objects, and even software.

“Gmail is acting finicky today.”

“I swear my cat threw up on the rug just to spite me.”

“Siri can be so dumb sometimes.”

The reality is that animal behavior doesn’t always correlate to human behavior. Software and AI don’t “behave” at all, but rather function in accordance with their code. Humans, as social animals, find it easy to interpret certain outputs as “behaviors.” Humanizing the tech we use makes it a little bit more understandable.

But anthropomorphizing things can go wrong. Rather than making complex systems like AI more understandable, anthropomorphizing tech can actually contribute to further mystification and misunderstanding.

A dark square frame in the center of a black and white abstract image contains text explaining anthropomorphism. The text discusses the attribution of human characteristics to non-human entities for relatability and familiarity, citing examples like Mickey Mouse and talking cars. The surrounding image is abstract with a white border that has radiating lines, creating a distorted effect. The frame has a bold outline and a smaller square with a symbol in the upper left corner.

ChatGPT helpfully (and ironically) defines anthropomorphism

What does anthropomorphizing look like?

During a qualitative usability study of ChatGPT, the Nielsen Norman Group observed four patterns of user behavior that assigned human traits to the AI.

1. Courtesy

2. Reinforcement

3. Roleplay

4. Companionship

Courtesy:
Most people are “guilty” of treating AI with basic courtesy. “Please” and “thank you” aren’t required, but out of habit and social conditioning, users will often phrase prompts politely. Voice assistants like Siri are designed to be conversational, and conversations typically involve social niceties like “thank you” after a reply to a query. Siri won’t be upset if we do not thank “her” for her help, but common courtesy is still given because of our social conditioning.

Reinforcement:
The Nielsen Norman Group describes reinforcement as praising, or scolding, a chatbot when it gives a correct, or incorrect, answer. In humans, we know that positive reinforcement is important — it reaffirms the behavior or results we want to see. We praise good grades and give awards for excellent work. We scold misbehavior and try to correct mistakes. But that’s human behavior. What’s the point in providing an AI bot with positive or negative reinforcement? In the study, participants described two different motivations behind praising ChatGPT with a “good work!”: first, the idea that positive reinforcement would help the AI replicate similar results in the future, letting it know that it was producing “good” work; second, the belief that AI mirrors human attitudes and behaviors, so a positive user would get a positive, friendly interface in return. This is a step above common courtesy, but can still possibly be attributed to habits created by socialization in human society.

Roleplay:
Happens when users of a product like ChatGPT ask the bot to assume a role. For example, users can ask ChatGPT to “assume the role of an upbeat social media manager and write a newsletter for the release of the new game called…” According to Nielsen Norman, “Assigning roles to the chatbot is a frequently recommended prompt-engineering strategy.” Roleplay prompts ask AI to assume human traits like job titles (social media manager) and attitudes (upbeat). It’s a literal anthropomorphization of a product, but that is sometimes what the task demands. To meet a user’s needs, AI might be required to act more like a coworker than a tool.

Companionship:
The point at which users begin to indeed treat AI like a coworker and fellow human. Users befriend the AI, speaking to it with courtesy and even affection. This doesn’t mean that the user necessarily believes that the AI inherently has human traits like empathy and kindness, but chatbots like ChatGPT often mirror the input style of the users — treating a bot with kindness often means receiving kind replies in turn. Chatting with AI in a companionable way can help alleviate loneliness and be comforting in the same way that readers enjoy fictional characters. Even if they aren’t “real,” the feelings elicited from positive interactions with AI are.

A grayscale image of a segmented arm, transitioning from a robotic design on the left to a realistic human arm on the right, with the robotic portion featuring visible gears and metallic components, and the human portion displaying skin and muscle texture. The arm extends horizontally across a tri-sectioned background, with the left section in solid black, the center in a lighter gray with vertical lines, and the right in solid white.

Image: generated with Visual Electric

Why do we anthropomorphize everything?

Why do we speak to AI as if it is human? Do we want AI to act in a certain way? Do we want it to be more human? Or do we just assign that trait anyway?

“Thus, rumors spread about what makes AI work best, many of which include a degree of anthropomorphism.”

➪ Nielsen Norman Group

As mentioned before, humans assign human traits to inhuman animals and objects so as to understand them through a human lens. AI is especially mysterious, so we do what we can to demystify the technology.

A base misunderstanding of how AI works might actually be the motivation behind anthropomorphizing the technology. The Nielsen Norman study indicated that participants weren’t sure how to interact with platforms like ChatGPT and thus acted on what they’d heard about AI from other sources. “Thus, rumors spread about what makes AI work best, many of which include a degree of anthropomorphism.”

Now that we know why people approach AI the way that they do, namely with anthropomorphism on the mind, should we lean into humanizing AI? Should we encourage users to speak with AI like ChatGPT the way that they might speak with a coworker or friend?

No. In the same way that referring to AI as “magical” can prove problematic, acting as if it is human will lead to frustration and misunderstanding.

The text reads "What is humanized AI?" at the top, followed by four numbered sections with descriptions of humanized AI's capabilities: sensing emotions, understanding basic emotions, responding with context-sensitive solutions, and learning from user reactions. The surrounding abstract pattern features a white border with radiating lines, creating a distorted effect.

Examples of how simple verbiage can contribute to the anthropomorphization of AI (Source)

Anthro is not the answer

Referring to the actions of a cat as having “ulterior motives” can create the understanding that cats are capable of scheming or malice. (It seems that way sometimes, but it’s not true!) Thanks to studies on animal behavior and beyond, we know that this is not the case. Again, animals behave based on instinct. Humans behave based on both instinct and social expectations. Digital products like ChatGPT function based on their code.

Therefore, thinking of AI in anthropomorphic terms can be misleading at best, a total misunderstanding at worst. By assigning human traits to AI, people can form an incorrect idea around how it works. If demystification is the goal–which it should be–then anthropomorphizing AI is working counter to the solution.

Raspberry Pi gives a few examples as to how to refer to AI without the use of anthropomorphic language.

“It listens/it learns” → “AI is designed to…/AI developers build apps that…”

This shifts the focus from AI as an independent entity to the fact that it is a piece of technology designed by humans for specific uses.

“see/look/create/recognize/make” → “detect/input/pattern match/generate/produce”

The initial list of verbs can be applied to people, inherently implying a human quality to AI. More accurate language, like “detect” in the place of “see” helps to establish AI as a technology rather than an entity.

Avoid using Artificial Intelligence/Machine Learning as a countable noun, e.g. “new artificial intelligences emerged in 2022” → Refer to ‘Artificial Intelligence/Machine Learning’ as a scientific discipline, similarly to how you use the term “biology.”

This roots AI/ML in the fact that it was something developed by humans rather than a force that emerged on its own/one that has its own motivations.

A phone showing a chat with an AI personality.

Billie can’t tell you the weather, or the local news, but “she” can… hype you up, girlfriend? (Source)

Leaning into anthro

But wait, what about the companies that are clearly leaning into anthropomorphizing their AI chatbots? Meta has gone beyond referring to its chatbots in human terminology to creating 28 chatbots with unique, and very human, personalities. Some are even based on real celebrities like Snoop Dogg and Kendall Jenner.

Each chatbot is meant to fill a certain role and have a specific area of expertise. This makes it easier to find which bot will help fulfill a user’s goals. For example, “Billie” (Kendall Jenner) is described as a big sister, so users looking for sisterly advice on life and love would turn to her. Other chatbots are based on athletes like Dwyane Wade, which will have more information on exercise and sports than “Billie” would.

This is an obvious example of roleplaying with AI, creeping into companionship. While sequestering information behind friendly faces with welcoming personalities might make sense from a design perspective, this unfortunately contributes to confusion about AI. Does Snoop Dogg endorse everything that the chatbot might say? If there is an inaccurate, or even offensive reply given to a user, they may attribute that to the celebrity personality rather than the shortcomings of the technology.

Robotic and human hands reach out to touch in a black and white image, highlighting themes of technology and connection.

Image: generated with Visual Electric

AI is not magic or human or Snoop Dogg

We don’t think of electricity as magic. Nor do we assume that it has moods, feelings, or preferences. So why should we make those assumptions about artificial intelligence? Doing so causes a base misunderstanding of how the technology works, and can create untenable expectations of AI’s capabilities.

As humans, we are wont to anthropomorphize whatever we’re working with–animals, vehicles, tech, etc–so the anthropomorphization of AI seems unavoidable. This is observable in the Nielsen Norman Group’s ChatGPT study of courtesy, reinforcement, roleplay, and companionship.

But leaning too much into the anthropomorphization of AI can create misunderstandings around how the technology works, and indeed obscure the fact that artificial intelligence is just a technology rather than a sentient being with its own motivations.

There needs to be more robust education around artificial intelligence and machine learning if these technologies are meant to be such a big part of humanity’s future. Otherwise, users will have a basic misunderstanding of how AI works. This is a recipe for frustration in the making, something that designers are meant to alleviate and/or eliminate.

]]>
Articles Design & Society
<![CDATA[It’s time for design to think less and feel more]]> https://www.doc.cc/articles/time-for-design-to-think-less-and-feel-more https://www.doc.cc/articles/time-for-design-to-think-less-and-feel-more Sun, 26 Jan 2025 15:15:08 GMT

It’s time for design to think less and feel more

By embracing sensitivity over logic, designers can learn from old masters and create new solutions that reconnect to us being humans.

Written by Darren Yeo

A two-image collage. The left image shows a circular diagram with text labels for various subjects, including "Study of Materials and Tools," "Nature Study," and "Building Site Testing Design." The right image shows a circular diagram with Chinese characters and symbols, with the character for "thought" written above a horizontal line with a dot on each end.

In a world facing climate change and economic challenges, more technology and productivity aren’t the solutions; instead, we need to embrace emotions and human values, using design philosophies like Bauhaus and Kosei to unlock deeper, meaningful innovations. (image source: Getty; Bauhaus Imaginista)

A message gets lost in an overload of senses

The sound of chatter filled the room as old acquaintances met each other again. It didn’t matter who they were, because these were familiar faces that had known each other over the years. Other noises filled the space, such as the clanking of cutlery and the occasional clink of wine glasses as waiters poured a never-ending supply of alcohol.

Just then, I could hear the distinct knocking on the top of a microphone, which usually signals that someone is about to speak. As it turns out, there was a fireside chat happening while everyone was merrymaking in the same ballroom. Three distinguished guests were up on stage: one was a senior member of parliament, one was a very notable designer, and the last was the president of the organisation that ran the event.

A colorful woodblock print depicting a bustling scene in a multi-story Japanese building, likely a theater or entertainment venue. Numerous figures in traditional Japanese attire are engaged in various activities, including watching a performance, eating, and conversing. The architecture is detailed, showcasing wooden beams, balconies, and a stage-like area.

Like Kabuki, a Japanese theatre form that mixes dramatic performance with traditional dance, your senses are overloaded not only by the stage design and performance, but also by the audience and the entire surroundings. (image source: wikipedia)

The topic presented to them was how design could offer an answer to some of the world’s most pressing issues, such as climate change, and whether countries could unite based on their similarities.

As each speaker attempted to share their thoughts, the noise in the room continued to drown them out. Perhaps it was the intoxication of the alcohol that got the better of the crowd, but it was obvious very few were actually paying attention to the wisdom coming from the stage.

The rowdy group were none other than fellow designers with years of expertise under their belts, and none of them were listening.

The rational mind makes mistakes as the mind can only focus on one thing at a time.

Does rationality trump emotion?

It’s a shame that courtesy, a very humane trait, is so often abandoned when other objectives, such as socialising, take over. To technology, however, all of this is just noise, and it wouldn’t know how to rationalise it: whatever output comes out of the conversation would most likely read as positive discussion. So how would a machine know about the precarious situation in the room? This is why good design doesn’t rely on big data. Good design relies on thick or clean data.

Interestingly, data has a multi-dimensional aspect to it. One can determine how many decibels there are in a noisy room. Range and duration can also play a part, but where it gets interesting is the meaning behind the noise. As words are put together, interpretation comes about through the formation of words into sentences. Responses are generated when a dialogue takes place, and sentiments can be collected depending on the nature of the conversation.

Sentiments. How we evaluate the quality of emotions based on two inputs: whether it is positive or negative. In design, we often see this as a graph with a happy and sad emoji along a customer journey map. Once we pick out the pain points and gains, we map the emotions on a 2-axis graph and mark it as complete.

It’s a shame that the industry feels the same about this too. How often do we see emojis in customer feedback forms and rate our emotions along a 5-point scale? Thus, the worldview of emotions and business is predominantly centred around these 5 circular shapes. At least visibly.

Five theatrical masks, ranging in expression from angry to joyful, are lined up against a black background. The text below the masks reads, "On a scale of 1 to 5, how satisfied are you?"

We see a caricature of emotions as 5 faces in a customer satisfaction survey, missing out on an entire vocabulary of feelings, sensitivity, and emotional meaning. (Yeo, 2024)

Words among noise

With great difficulty, the legendary designer picked up the mic to give his response. Part of the difficulty was the language spoken. Being native Japanese, he had to glance at the script on his phone to share his thoughts in English, which is commendable given that he is fast approaching his seventies. It did require the audience to pay extra attention to each word that he spoke. The following is some of what could be heard, verbatim:

“The rational mind makes mistakes as the mind can only focus on one thing at a time. But feeling is different. It’s simultaneous. The feeling of the chair, the carpet, the drink, the flushness of your face and temperature of the room. It can all be felt at once. And if you cannot feel anything, you cannot be creative. We need to have a shared feeling as a common language. Perhaps design thinking makes us confused. Design feeling is the right word.”

The rise of Taylorism and STEM

The thinking mind is often viewed as a prized possession among the human race. Without doubt, many inventions lie in the complex formulas and know-how that led to the advancement of technologies.

At some point in history, the feeling mind was viewed as an important faculty too. Although confined to the area of the arts, the ability to make deep observations and connect abstract concepts also brings about great innovation. Therefore, the cultivation of the feeling mind is important.

A black and white photo of a machine shop in the Tokyo Industrial Art School. Several students in uniform are working at various machines.

Taylorism is perceived to be the go-to reference for scientific management. With the advancement of technology and mass production, institutions, including education, leveraged its teachings, which tap heavily into analytical thinking (image source: Old Tokyo)

However, our educational system indoctrinates analytical thinking, partly due to the pseudo-scientific management of Taylorism. As mass production kicks in, optimisation and productivity become the rule of thumb, and rigorous thinking takes over. In a previous article, I argued that adding A for Arts and R for Renaissance to form the word MASTER could be a reform of our education system. Evidently, STEM education continues to be the dominant discipline taught to students.

But feeling is different. It’s simultaneous. The feeling of the chair, the carpet, the drink, the flushness of your face and temperature of the room. It can all be felt at once.

A sliver of design exercises

Only in design schools could we understand how the abstract topic of feelings can be expressed in design. Through various design exercises, our design tutors gave us opportunities to express feelings.

One particular exercise was to use squares to express a particular meaning in an abstract world. The thinking mind might pick up each square and label it with an attached association, but misses the point by assuming every square needs a name. The feeling mind might instead assemble the squares in a particular order that expresses an associated word, like ‘serenity’ or ‘danger’.

A black and white sketch of six squares, each containing a different arrangement of black squares.

A sample of the square exercise, which teaches the Gestalt principles, an important visual concept of hierarchy used in modern computing applications (image source: angelamuliu)

In another exercise, we had to pick two distinct objects and show their metamorphosis in 6–7 frames. The thinking mind might pick two objects that are directly related, like a milk bottle and a cocktail mixer, but misses the point of fully expressing the objects through every curve and shift of the two forms. The feeling mind might choose two objects whose elements fit the overall semantics. For example, a playground swing morphing into a howling gibbon is a more interesting choice because of the movement, the anatomy, and the squeak of rusty metal giving way to the gibbon’s loud howl.

And if you cannot feel anything, you cannot be creative.

From Taylorism to Bauhaus

These exercises may jog the memory of any avid designer and link back to the Bauhaus. More than a style, the Bauhaus was a collective of practitioners who viewed design as a pedagogy — almost a way of life, similar to routines and morning exercises.

In fact, preliminary exercises like the examples above are essential to developing the sensitivity needed to make exceptional design. Bauhaus master Johannes Itten advocated for a holistic education that considered mind, body, and spirit, one that could prepare students to create the total work of art. Experimental theatre and play with materials thus became part of the courses before students went into specialisation.

Here’s a point of view from another Bauhaus master, one specialising in textiles:

“We do not want pictures, but rather we want to arrive at the best-possible, ultimate, living fabric! It has to be possible to grasp it with ‘hands’. The value of the fabric is recognised in the tactile; in the tactile value, one has to listen to the secrets of the fabric, follow the sounds of the materials; one has not only to grasp the structure with the brain but also feel it with the subconscious.” — Otti Berger, 1930

A collage of six textile samples featuring a variety of colors, patterns, and textures.

Otti Berger’s work in textile design was groundbreaking for her time, as she explored innovative ways to do more with fabrics. Despite her life’s tragic and untimely end, she left many lessons to her students as a Bauhaus master (image source: fembio.org)

The same could thus be applied to any object: a chair, a music player, a theatrical play, or graphics. But the Bauhaus did so by integrating a community of artists working together. Through practice. Through dining. Through merrymaking. Through learning. There were no gender divides, and it was rare in the 1920s for women to have an equal voice in the studios. That voice became a common language for the Bauhaus movement. It was amplified as apprentices became masters and cyclically influenced new masters, such as Dieter Rams. They also influenced the foreign visiting students, who later brought their learnings back to their own countries. Some even formed schools to adopt a similar practice. One such country is Japan.

We need to have a shared feeling as a common language.

Japanese Bauhaus

After the Japanese students Takehiko Mizutani, Iwao Yamawaki, and Michiko Yamawaki returned from their Bauhaus experience, they took up teaching positions in a newly formed school known as the Shin Kenchiku Kōgei Gakuin (School of New Architecture and Design), founded by the architect Renshichirō Kawakita in 1932. Not having witnessed the Bauhaus movement himself, Kawakita translated manuscripts such as Von Material zu Architektur (1929), a book by the Hungarian Bauhaus master László Moholy-Nagy, and organised meetings to contextualise the Bauhaus in a Japanese setting. Out of these discussions arose Kōsei education, which combined Josef Albers’ “material form education,” Wassily Kandinsky’s “abstract form education,” and Moholy-Nagy’s materials-oriented approach.

Renshichirō Kawakita may not have impacted the architectural communities of his time, but he did shape the art education of Japan by publishing Bauhaus-inspired methods with the help of Fukujiro Gōtō, head of the Art Education Association. In this photo, he shows art teachers his approach using his self-developed Kōsei method. (image source: Bauhaus Imaginista)

The essence of Kōsei was the sharpening of the senses and the observation of daily life. By picking out the structures of everyday life and redesigning them through the examination of human activities, seen through the lens of nature and social connection, everyday problems could be solved from the roots of the Bauhaus principles of technology and phenomenology.

Sadly, although Kōsei was directed towards the architectural disciplines to rouse new ways of thinking, the top-down authoritarian political climate of 1930s Japan meant that the Japanese Bauhaus thrived only for a short period before the Shin Kenchiku Kōgei Gakuin closed its doors. It did, however, gain attention in Japanese art education through the supervision of Fukujiro Gōtō, head of the Art Education Association.

Back to our modern-day design master

It is no wonder that Japanese masters like Naoto Fukasawa became prolific in the design field. In fact, he was that very notable designer sharing his point of view about design feeling at the fireside chat. A guest of honour at a design trade association event, Fukasawa wasted no time the next day, leading various design practitioners through the same Bauhaus and Kōsei exercises. With strips of bamboo and washi, designers were asked to go back to the basics of expressing their feelings through the materials provided to create tabletop lamps, all within 100 minutes. The outcome was fascinating: no two lamps were identical, as each designer expressed a different form and function with what they had.

A design feeling workshop conducted by Naoto Fukasawa.

A design feeling workshop conducted by Naoto Fukasawa, revisiting design feeling through his approach of “without thought”, or affordance. Using strips of bamboo and washi paper, designers from various disciplines produced outcomes that show the limitless potential in coming up with an idea (image source: Tham)

“Without thought”, or applying affordance, has been Fukasawa’s lifelong pursuit as he layers sensory cues onto the subconscious mind. One of his most famous pieces, Muji’s wall-mounted CD player, demonstrates this view perfectly. Somehow, pulling the cord attached to the CD player feels familiar. The memory of cold wind from a fan brushing your face is replaced by the twirling of a CD and the melody of nature from a Muji soundtrack. One could say that it was precisely that design icon that propelled Muji to become a popular global brand.

The allure of that familiar feeling is what makes design so powerful among people, partly because that feeling can be shared, simultaneously and all at once, as a community. The master’s words truly reflect this hidden value from within. Perhaps design feeling is indeed a better phrase than design thinking.

"Perhaps design thinking makes us confused.
Design feeling is the right word."
➪ Naoto Fukasawa

As climate change, geopolitical tensions, and economic stagnation continue to plague the world, endless consumption of new technologies isn’t the answer. Neither is the Taylorism of increasing productivity through better thinking.

We will need a richer “feeling” vocabulary, one where the sharpening of our senses and emotions goes beyond rudimentary 5-point scales of satisfaction. We often box in emotions in a corporate setting. We prescribe behaviours rather than observing from the start. And so we find it hard to unpack concepts like love, charity, and honesty, or to expand on values beyond numbers. We also find it hard to capture such attributes in our objects.

Design exercises, such as those of the Bauhaus and Kōsei, prompt us to look beyond the squares, curves, and pictures. Because amidst the noise of the world, there is the silver lining of words spoken by a design master, waiting for the next generation of masters to offer their world-changing design solutions.

]]>
Articles Design & Culture
<![CDATA[Beautiful, boring, and without soul]]> https://www.doc.cc/articles/beautiful-boring-and-without-soul https://www.doc.cc/articles/beautiful-boring-and-without-soul Mon, 20 Jan 2025 13:52:53 GMT

Beautiful, boring, and without soul

What could be more important in life than how you make people feel?

Written by Steyn Viljoen

Some time ago I found myself strolling through a quiet, less touristy place in London. In typical style, it was overcast and gloomy but not raining.

I decided to make one last stop in a bookshop. It was quirky, almost like the one William owned in Notting Hill, but not quite. While browsing through some books, I felt a deep sense of sadness coming over me. It was the longest I’d been away from home since our kids were born and I was missing them deeply: their laughter, playing dinosaur, and kicking ball.

In an attempt to elevate my spirit, I decided to make my way back to a small park I walked by earlier.

A black and white photo of a grassy area with trees and a building in the background.

Clapton Square, Hackney, London

As a gardener, I was holding onto what Robert Harrison said in his book Gardens: An Essay on the Human Condition.

“When we are deprived of green, of plants, of trees, most of us (though evidently not all of us) succumb to a demoralization of spirit which we usually blame on some psychological or neurochemical malady, until one day we find ourselves in a garden or park or countryside and feel the oppression vanish as if by magic.”
➪ Robert Harrison

However, sitting in the park did nothing for me. The park was lifeless and the only thing that kept me from sinking deeper into sadness was a lone dad playing with his 2 boys. I have always believed in a garden or park’s ability to restore the soul, but in that moment, I felt like that wasn’t true anymore. What Harrison said in his book was only an intellectual or philosophical statement.

I sat on the bench in silence while the dad’s partner joined them with their baby. The boys continued to kick ball while more people, including a man with his dog, slowly made their way into the park. I could hear someone doing woodwork in a nearby building, while the clouds finally made way for the sun to pierce through.

I felt lighter.

The park wasn’t anything automatically, and it didn’t lift my spirit simply by being there. It was through intentional design, and through how it is integrated into the community, that it became something more than merely a green space.

It would be unfair to completely dismiss Harrison’s statement; green spaces do have some ability to inspire life. I’ve experienced this myself. I wrote about it in the context of our lawn, waterfall and the pergola I built. But in that moment on the park bench, I wasn’t asking for green grass and bright flowers. I was looking for something else. I wanted to feel wholesome and alive.

The park, as simple as it was, graciously offered that because the people who designed it cared enough about how it would make others feel.

Similarly, a digital product isn’t anything automatic. It doesn’t become beneficial because it’s usable. No need to argue the point. However, if a product does end up being used by millions, perhaps billions, it’s still not anything automatically. Even if it’s a beautifully designed and usable product.

Why? If you’ve been to the gardens of Versailles, you’ll know what I mean: perfectly manicured symmetrical layouts, visited by millions each year, yet devoid of any soul and as boring as a grazing buffalo.

Versailles garden with a few trees and a building in the background.

Garden of Versailles by Theodore Poncet

A short history will help put it in context. The idea of Versailles came to King Louis XIV in the mid-1600s, when his finance minister, Nicolas Fouquet, unveiled his magnificent gardens at an extravagant banquet. Overtaken by jealousy, the king decided to cast his finance minister into a dungeon for life, accusing him of embezzlement and mismanagement of state funds. Besides, only the king was entitled to such magnificence.

To assert his dominance, the king began planning a garden that would be far more magnificent than his finance minister’s. Dominance often accompanies destruction, and the king’s garden project was no different.

Robert Harrison paints a scene as gloomy as the London weather I experienced:

“…the architect of Versailles (André Le Nôtre) seems to have first sent in an army of human bulldozers to clear away whatever grew here, reducing the grounds to a flat, empty plane on which to project the master design. One cannot help but feel a tremor of anxiety, if not dread, before this complete domination of nature.

That is of course exactly the kind of reaction the gardens are designed to provoke — an almost cowering sense of trepidation in the face of the power that imposed this form on them. Everything about the gardens insistently reminds one of their monarchic creator.”

He continued, and couldn’t have been more insulting of the king’s vision:

“Versailles, where even wandering is centrally controlled, is a masterpiece of representational garden art…”

Beautifully designed products without soul are just that: representational garden art. In all of our history, we’ve never had more money to design beautiful, functional, and usable products. The bar is higher than ever, yet products have never been more boring and devoid of soul.

What’s missing is a deeper sense of beauty and wholeness, an intentionality to make people feel something specific. We need to ask ourselves: What is it that we want people to feel when they use our products? Whatever that is, intentional or not, the depth of that emotion is going to grow and blossom from what we, the builders, feel first.

King Louis felt a deep sense of pride, envy and a hunger for dominance and that’s what he imprinted on Versailles. Yes, when people visit Versailles, they are in awe of its scale and the engineering that powered the fountains, but rarely would anyone say they feel whole or alive.

What might Versailles have felt like if King Louis was also a gardener at heart?

A gardener is not someone who grows flowers but one who cultivates the soil. You need to, as Harrison puts it, delve into the ground’s organic underworld to appreciate the soil’s potential for fostering life.

So, to create something truly alive and nourishing, we must be willing to immerse ourselves in the rich, fertile soil of our own humanity — the depths of our emotions, experiences, and connections to the world around us.

We can only build from what’s within. The challenge, though, is that we have become so used to making and consuming boring and soulless products that we simply don’t have a good reference for what wholesome products feel like.

Fortunately, we have architecture as a reference.

Most of us have experienced buildings that either uplift or depress us. The uplifting ones make you feel whole, alive and inspired. The depressing ones, on the other hand, are boring and soulless.

It’s the difference between, for example, the Sagrada Família and The Telephone Exchange building. Notice what you feel when you see these 2 buildings.

A two-image collage. The left image shows the interior of the Sagrada Familia church with tall, ornately decorated columns and a large rose window. The right image shows the exterior of the same church, with its towering spires and intricate facade.

Sagrada Família. Right photo by Mehmet Kirkgoz

A black and white photo of a large, boxy building with many windows. The building is several stories tall and has a flat roof. There are trees and other buildings in the background.

Telephone Exchange Building

Chances are that buildings like the Telephone Exchange were inspired by Charles-Édouard Jeanneret (aka Le Corbusier).

Le Corbusier was one of the most influential architects of the 20th century, and his ideas and designs significantly contributed to the development of the International Style, which emphasised simplicity, functionality, and the use of modern materials like steel and reinforced concrete.

Unsurprisingly, he is crowned as the king of boring. While his designs may be functionally efficient, they are devoid of the warmth and character that make a house feel like a home. His philosophy is perhaps best summed up by his own words:

“A house is a machine for living.”

Le Corbusier wanted to become the builder of cities and he did, although only indirectly by inspiring millions of architects, city planners and other professionals to create oceans of boring, machine-like buildings.

Most product makers, except for a few, blissfully followed suit, and what we have to show for it are products that are beautiful but boring and without soul. Today, products are mostly machines for living (although I would hardly call consumption “living”).

To build products with soul, products that make people feel wholesome and alive, we need to build them with emotional depth. Without that, they will simply remain machine-like.

Today we’re building boring products because we struggle to go much deeper than “I’m alright”. Going deeper is the difference between experiencing the Sagrada Família and the Telephone Exchange Building. One, with organic, flowing forms and intricate details, evokes a sense of awe and wonder that connects us to something greater than ourselves. The other, with its stark, utilitarian design, leaves us feeling uninspired and disconnected from our deeper humanity.

It probably goes without saying, but just as skills without emotional depth result in machine-like experiences, deep emotions without skill result in products that feel disjointed. You need both.

Fortunately, we have no shortage of skills. What we now need are makers who dare to look beyond the surface and tap into the wellspring of emotion and meaning that resides in the depths of their being.

By tending to the soil of our own hearts and minds with care and intention, we can cultivate products that not only serve a function but also make us come alive. In doing so, we open the door to a future where technology becomes a canvas for expression, a catalyst for connection, and a testament to the enduring power of the human heart.

After my visit to the park, I met up with one of my friends at a nearby pub. It was lively, but with a rather bleak entrance and no signage, I had no idea how people discovered it. Nevertheless, we talked about technology, crypto, and our jobs. The conversation meandered to an urban garden I had walked by earlier in the week but hadn’t yet visited.

The pub wasn’t quite our vibe so we finished our beers and decided to make our way to the garden. It was lively as well, but different. I could sense that people from all walks of life have wrestled with things here before.

A cozy park full of people who are socializing among the trees.

Curve Garden, Dalston, London

Brick building with a large mural depicting a jazz band. The mural is in the center of the image, and the building is on the right side. There is a brick wall on the left side of the image, and a tree in front of it.

Entrance to Curve Garden

We settled down on wooden logs while kids played around us and adults chattered, with lights strung between the trees and chill music creating a different vibe from the pub. The tone of our conversations was also deeper, shifting from our jobs and technology to family and philosophy.

Neither the pub nor the garden was perfectly manicured but they both had soul. People have evidently poured their hearts into them in an attempt to make others feel wholesome, alive and deeply satisfied.

What could be more important in life than how you make people feel?

Acknowledgements
]]>
Articles Design & Society
<![CDATA[Story of a button]]> https://www.doc.cc/articles/button https://www.doc.cc/articles/button Sun, 29 Sep 2024 10:53:43 GMT

Story of a button

Navigating change and finding your click.

Written by Francesco Pini

Contributions by Fabricio Teixeira

This is Sam. How did she get there?

In the heart of the Button Factory, where dreams were woven into clicks, lived Sam. 

Sam was an “OK” button. She had been working at the Button Factory for a few years and had gotten pretty good at being a button.

She started as a Tertiary button, a “Maybe later” link. That’s where all buttons start at the Button Factory. With each cheerful click she inspired, Sam grew in skill and stature, until one day her dedication was rewarded. Her continuous compliance with the design system and her joy in making people click ultimately got her promoted to Primary button.

Sam was OK.

illustration of Sam, a blue button with the label OK and a smile

One crisp morning, as the first rays of sunlight peeked through the Factory windows, Sam set off for her shift, accompanied by her colleague Kim. 

Kim was a “Delete” button and—honestly—had quite a temper. She had very limited patience for user errors and always wanted to delete everything. 

They were going down the assembly line.

illustration of Sam and Kim the delete button in an abstract conveyor belt

But as the conveyor belt whirred to life, Sam found herself alone. Kim had vanished! What happened to Kim? Sam tried to reach out to Kim on Slack, but noticed that Kim had been Deactivated.

“Kim must have received that email from the Button Factory's HR Team, the one whispered about in hushed tones,” Sam thought. “What a terrible thing.”

Sam started to feel ticklish, and deeply sad. Her color, once so vibrant, seeped away, leaving her feeling gray, inactive, and lifeless. 

The once mighty "OK" button was no more.

Sam getting lost
sam with a sad face and greyed out
Sam label now reads not ok

Sam was NOT OK.

A burning question ignited within her: Who could be so heartless, so cruel, as to deactivate a button as talented as Kim?

“I know, of course!” Sam's voice echoed through the Factory, filled with fury. “It was probably one of those evil designers who decided to change Kim’s state to Deactivated! I'll show them! I'll infiltrate the Core Library, the very heart of the system, and wreak havoc upon their cruel designs!"

She took a deep breath and started her revenge journey. 

She ran beneath the cascading wireframe waterfall, where Product Managers poured forth a torrent of requirements. Sam's tiny form slipped through the rushing waters, her resolve unwavering.

busy waterfall of geometric shapes

Next, she tiptoed into the vibrant color room, a place where all shades of red were meticulously washed and prepared for the next generation of "Delete" buttons. 

Sam navigating between circles with a red pigment

At last, Sam arrived at the final obstacle: the imposing figure of Sven, the Paywall. He stood guard before the hallowed door leading to the Core Library, the sacred repository of all artboards and components.

Sven's presence was chilling. He looked so scary with all his cogs and mysterious black holes.

the Paywall was formed by abstract cogs in many colors and formats

Sam asked, “Can you let me into the Core Library, please?”

Sven's reply was swift and cold, "Only if you subscribe to Premium.”

"But I don't have any money," Sam protested, "I'm just a button."

A sly grin spread across Sven's face. "You don't need to pay right away. All I need is your credit card."

Sam hesitated, then relented, "Fine."

Sam got in straight away.

Abstract clouds over a white rectangle. Sam is approaching from below.

One last obstacle. Clouds were all around the Core Library. Sam could barely see through them. Their color went from white to gray, then suddenly yellow, lit up by lightning bolts.

Hurry up, Sam!

The abstract clouds are changing colors to yellow

Sam took courage, closed her eyes, and ran as fast as she could straight into the Core Library.

Hundreds of canvases untouched by buttons, colors, or purpose. For the first time, she glimpsed the hidden truth that had always existed, lurking beneath the menus, the images, the text.

empty mobile frames. Sam is in one of them in the middle.

The Core Library revealed the boundless potential of choices yet to be made, the exhilarating freedom of owning those choices. 

All the possibilities. Right there.

Was this monochrome expanse, this "nothingtoseeness," the essence of beauty? A concept that had eluded her grasp until this very moment.

A profound sense of peace settled upon Sam. The desire for revenge dissolved, replaced by a newfound appreciation for the vast possibilities that lay before her. 

“I can be any button I want,” she whispered. 

“I was created to do incredible things.”

After that transformative experience, Sam took some time to reflect. The world of buttons seemed new, brimming with possibilities she had never dared to imagine.

A few weeks later, Sam ventured next door to the Yoga App Corp. Here, amidst the gentle flow of mindful clicks, she discovered a sense of purpose and serenity. She understood that she had choices, that she could shape her own destiny.

Dressed in a resplendent shade of green, Sam radiated contentment. 

Sam was OK again.

Sam is again a happy ok button, but now with a green color over a pink background from the Yoga App
]]>
Articles
<![CDATA[To be a designer is to be a facilitator]]> https://www.doc.cc/articles/facilitator https://www.doc.cc/articles/facilitator Fri, 16 Aug 2024 01:05:55 GMT

To be a designer is to be a facilitator

Written by Marielle Sam-Wall

circular window showing a garden

Photo by Joey Cheung

Close your eyes and picture what a facilitator looks like. What image comes to your mind?

For many of you, I bet it’s a person standing in front of a group, effortlessly guiding people through a series of creative exercises. AKA someone running a workshop.

While, yes, that is a facilitator, when we do design work in collective and community spaces, the image we hold of a facilitator needs to expand.

After speaking to other professionals in social impact design and drawing from my own experiences in the industry, I believe it’s time to shift facilitation from an event-specific role to a principal aspect of how we design.

So what does it mean to “be a facilitator”?

For context, “Be a Facilitator” is the first principle of Design for Collective Spaces (DCS), a design approach that helps people create projects that care. And I’m currently on a mission to collaboratively refresh DCS’s 12 principles, which you can read more about here.

When DCS first came up with this value in 2021, what a facilitator “is” was not directly defined. We only went into what facilitation looked like, mainly talking about designers using their creative abilities to spark discussion rather than imposing their own ideas.

This time around, I want to correct that lack of context and give you a definition of a facilitator. This is just my opinion, formed from experience and conversations, but I see it as a definition that applies beyond design as well:

A facilitator is someone or something that supports others to accessibly understand, mediate, collaborate, and/or create.

By having a broad definition of what a facilitator is, a diverse set of interpretations of and relations to this role are valued. It takes a page out of Relational Design, acknowledging that no two challenges, communities, systems, or ecosystems are the same. Therefore, how we approach facilitation must be accessible for that situated context and community.

What I like about this definition is that a facilitator or others involved can be anyone or anything. It deconstructs the colonial notions that center humans as the only knowledge sharers and includes decolonial / anticolonial knowledge that values non-human relationships.

conceptual images showing the previous principle and the updated one

Image by the author

When I bring it back to DCS’s design approach, which looks specifically at the project work we do in community and collective spaces, it’s a call to shift the focus from the end product to the entire journey. It becomes more about the value it provides to the people we are collectively working with and the care we show each other.

And because this project is about refreshing DCS’s design principles, we will be updating our first principle to be defined as:

Be a Facilitator

Support others to accessibly understand, mediate, collaborate, and/or create. Bring a critical eye, draw connections, and encourage various perspectives. Don’t impose, create spaces for creative discovery.

With this, facilitation becomes more of a mindset we step into. Being mindful throughout the design process, we are intentionally creating ways for people to accessibly interact with our work. It validates that, outside of the inevitable workshop, there are other forms of facilitation that are needed when tackling complex social projects.

Note: If you read the original principle, you see that we used the phrase “bring an outside perspective”. After consulting with other practitioners, I decided to remove it because being an outsider on a challenge doesn’t always impact our ability to facilitate how we tackle it.

Why “be a facilitator”?

When we first wrote this value, it was a direct challenge to the idea of “designer as an expert” that is common throughout the industry. There is this narrative in design that often goes like this:

A community, often an underserved one, is struggling with a problem. A designer sees this, observes them for a short period, comes up with an idea in the studio, and then comes back with a solution to the bewilderment and excitement of the community. The problem that no one has been able to figure out before now is solved! All thanks to the designer!

OK, this story might seem a little exaggerated, but it has long been a documented one. I know many of you reading this have likely seen it play out in your work or can name an example of it happening. While it can be difficult to find exact case studies, check out these examples: PlayPump, One Laptop per Child, the infamous Kony 2012, and arguably many social services.

photos of the different projects mentioned in the article

Credits for photos: PlayPump, One Laptop per Child, Kony 2012

Projects like these often do little to help the communities they target, but instead raise the profile of the designer(s)/agency(ies) that made them. And years later, when the projects inevitably go wrong, the publicity has already been gained and communities are left with few resources to hold anyone accountable.

This same critique can also be made of the current rendition of “co-design”. sahibzada mayed (صاحبزادہ مائد) wrote in “The buzzwordification of Co-Design” that “what claims to be a way to democratize our design practices and share power ends up a reproduction of whiteness and coloniality”. And as someone who has spent years in the design education space, I can tell you that what is being taught is often just a different flavour of “designer as an expert” mistakenly rebranded as “co-design”.

This flavour of co-design is often extractive, and really only used in the research and ideation stages, often with little credit to the communities who come up with the ideas. Or worse, people have their lives turned into a soulless, “empathy”-inducing persona.

But when it comes to big decisions and recommendations, those are left up to the designers.

As you can probably see, I have a lot of feelings about “designer as an expert” — probably because I myself have been taught to think this way. But when we apply this thinking to social innovation and community work, it is deeply patronizing and colonial, often hurting communities more than helping them.

I find this mindset especially egregious as many of these projects have white designers at the helm, going into under-served Black, Brown, and Indigenous communities to tell them what they should and shouldn’t be doing. It exemplifies both white saviorism and the idea that “well, if these people are living with the problem, they probably don’t know how to solve it”.

In reality, people often just don’t have equitable access to the resources, time, or aid needed to tackle their challenges.

This is why the shift for designers to “be a facilitator” is so greatly needed. It is time for diverse stakeholders in communities to guide what projects will look like — a designer is just there to help them make sense of it all. And potentially help them get resources.

How do you “be a facilitator”?

OK, gripes about the industry aside, let’s get into some practical tips on how you can “be a facilitator” in your work.

Before writing this article, I put out a call asking other designers and facilitators what facilitation looks like in practice. Below is a snapshot of their answers from the DCS’s Miro Board.

screenshot of the miro board showing virtual post its

I would recommend checking out the Miro board, as there are many more tips, tools, and examples that I couldn’t figure out how to share here in this article.

Reflecting on people’s answers and informed by my own experience, below are six practical tips that you can use in your next project to really embrace being a facilitator in your work:

1. Build a shared understanding of the problem space

Knowing the context of your problem space is essential. The history, stakeholders, power dynamics, and current challenges are all interconnected and can inform how you do your project. Not only do you need to understand it, but so do the people you are designing with.

What it can look like:

• Holding a workshop at the beginning AND end of the project to create a visual map of the problem space. (it’s cool to see how your understanding changes over time)

• Documentation that your team can refer to that states the scope of your work, the stakeholders involved, people’s roles, possible opportunities, and challenges. These can be great for people to reference once the project is over, as well.

• Reflection prompts and open-ended discovery questions to aid co-designers in better understanding the challenge.

quote from albert einstein “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

Solène Aymon from makesense shared this quote in a talk I went to recently — I’ve never resonated so deeply with a quote!

2. Use accessible open communication

Ensure people actually understand what you’re doing and can easily access information about your project. “Plain language is community building” is a quote from writer Imani Barbarin that I think perfectly captures why accessible communication is important. Remember, accessibility can be context-specific — communicate in ways that work best for your audience.

What this can look like:

• Setting up multiple informal and formal points of contact for people to provide feedback. This can even look like a non-judgemental gossip sesh. (“gossip” has long been a way to share important information)

• Sending out outlines and slides before meetings. Having debrief sessions for people afterwards can also do wonders!

• Giving people time to reflect alone and in a group.

3. Harness the Power of Active Listening & Observation

From my experience, communities have great ideas for solving their own challenges — it is just a matter of actually listening to them. People often won’t tell you outright what they need, but over time, as you hold space for discussion, words and ideas begin to emerge. It’s a matter of being perceptive to what people are saying, doing, and NOT doing.

What this can look like:

• Spending extended time interacting, helping, and actually connecting with people. As you understand each other more, you build a shared language and trust.

• Recruiting someone on your team who knows the community inside and out. They will already know the context and can introduce you to the people you need to talk to and REALLY listen to.

• Observing how people are already navigating the problem. Take a page out of DIY Urbanism: people often have makeshift solutions to a problem. How could you expand on what they are already doing?

scene from tv show abbot elementary

“The Panel”, an episode of Abbott Elementary, is a great example of how listening created a project that actually supported people. A teacher, Gregory, noticed his students kept using his office as a place to hang out — often seeking his advice and company. He then realized they needed a space to keep them busy and created the Garden Goofballs. Image from IMDb.

4. Keep People on Task, but Foster Creative Discovery

Creativity is messy; we need time and space to discover and then pull insights. But the one thing about creativity is, well, it means you have to create. One great skill every facilitator needs is keeping people on task by providing “timeboxes” to get things done. But be gentle with this: being too strict can alienate people, so allow for flexibility.

What this can look like:

• Agreeing on a pace together at which your co-designers feel they can properly contribute. A fast pace on these types of design projects can really damage communities and burn bridges.

• Giving people multiple trial runs to practice creating. It is important that people internalize creating first, then analyzing!

• Being clear about flexible deadlines vs. strict deadlines.

5. Be Aware of Power Dynamics and Ethical Concerns

Power dynamics have real implications for whether people truly share their opinions and ideas. Know what concerns and dynamics exist, and be transparent with others about them. We cannot mitigate all harm and concerns, but we can do our best to be proactive and create safer spaces.

What this can look like:

• Having separate spaces for staff/managers/community members to discuss problems and solutions

• Holding meetings in spaces that can help minimize hierarchy (ex. Going to a neutral location, like a garden or rented space)

• Allowing for anonymous feedback

6. Don’t just Guide, Facilitate Learning & Connections

Under this model, facilitation isn’t just about being in a workshop; it’s about passing on skills beyond one event. Build design capacity within the community: it helps people move projects forward without you (this is a good thing). The connections you help foster really do have ripple effects.

Remember, as designers we are often coming from places of privilege. Because this industry often aligns with capitalistic incentives and requires post-secondary education, we have more access to funds, knowledge, and connections. If we are serious about equitable change, we have to facilitate passing on these resources.

What this can look like:

• Providing the reasoning, resources, and references behind your work. Create ways for people to access knowledge during and after your project.

• Embrace the one-on-one session for people curious to learn more. One connection can do wonders for a person; help facilitate that.

• This is a little controversial, but embrace strategic communication. Only tell funders what they need to know (don’t lie, just be strategic), as often they aren’t really into transformative change.

graphic showing all the 12 principles

A graphic of all of the DCS’ principles. You can check out this PDF that goes in depth.

Learning More and Getting Involved.

This article is part of a project I am doing to reexamine and refresh the principles within Design for Collective Spaces.  I recommend checking out our collaborative Miro Board for more ideas, questions, and case studies surrounding what it means to “be a facilitator”.  

Building a shared understanding of complex issues is key to doing work in the social impact space. Shifting facilitation from an event-specific role to a principal aspect of how we design can be our first step.

Acknowledgments

I would like to thank Tracy Chen, Eliza Lim, Corrina Tang, Zoey Li, and the Impact Cohort. Thank you and acknowledgement to the Black, Indigenous, Trans, Queer, and Disabled activists who paved the way in social movements. This discipline and how I approach it would not exist without them.

Works cited
  1. Relational Design by JASC
  2. Encyclopedia of Human Geography by James Tyner
  3. The PlayPump by Daniel Stellar
  4. Why one child per laptop didn’t work by Tom Kelleher
  5. Remember #Kony2012? by Dipo Faloyin
  6. Social design, whitening and epistemicide by Sanchez and Sanchez
  7. The buzzwordification of Co-Design by sahibzada mayed (صاحبزادہ مائد)
  8. Design thinking was supposed to fix the world by Rebecca Ackermann
  9. The help-yourself city by 99% Invisible
]]>
Articles Design & Society
<![CDATA[Design without process, or the form factor trap]]> https://www.doc.cc/articles/form-factor-trap https://www.doc.cc/articles/form-factor-trap Sat, 22 Jun 2024 18:18:33 GMT

Design without process, or the form factor trap

Written by Pavel Samsonov

abstract image of a ball rolling in a circuit

Cover image by Bogdan Dreava

Visuals are a core part of the design process, but they can also conceal incomplete thinking. Without establishing conceptual fidelity through tools like the primary user benefit, designers risk creating negative value for their teams.

One complaint has persistently followed me through the graphic design, UX, product, and consulting phases of my career. I heard it over and over, directed not just at me but at my functional and cross-functional colleagues:

The process is taking too long. When can we get the deliverable?

It never gets less frustrating to hear that. Part of that frustration is with the stakeholder. But part of it is with myself — for not bridging the gap between the stakeholder’s mental model of design, and how design actually provides its value.

Like many disciplines (for example, product management), design is conceptually an iceberg: the simplicity of the final output hides the complexity of the work that went into producing it. It has to be that way for the artifact to be useful to the people downstream from you. Gathering all the details and then purging those that aren’t relevant to the final decisions is a critical part of the job.

And like many disciplines (and again product management is a good example) design gets conflated solely with that output, because the work that comes before it isn’t visible. Like PMs, whose job gets boiled down to “the person who makes the roadmap and the Jira tickets,” designers get defined as “the person who makes the design deliverables.”

As soon as design roles are framed in these terms, the rigor of the design process begins to erode. Only a very small part of it is necessary to develop the visual fidelity of design artifacts, after all — and as teams shed the tools and methods meant to develop the conceptual fidelity of their ideas, they fall into the jaws of the form factor trap.

The road to the form factor trap is paved with optimizations

This deliverable-driven framing of design’s value comes from an understandable place: business people are used to measuring things, and design is hard to measure — unless you focus on deliverables. The more deliverables, the more value. The faster the deliverables arrive, the faster that value can move through the dev cycle and get to the customer.

Executives bringing design into their orgs could feel that they were “optimizing” the design process by jumping straight to what they could see; the artifact they could understand was the only one that they could appreciate. From the outside, the part that got optimized out — the part that stakeholders always feel is just taking too long — didn’t seem all that important. After all, we got rid of it, and the designers are still producing designs, right?

Unfortunately, that part had an important function: it was responsible for the difference between designing an app, and designing something that merely looks like an app.

This is where the form factor trap has been set and, with the design process abrogated, sprung.

Consider a common product vision: a dashboard, a single source of truth, a one-stop shop for consuming all of the information available to a given user’s role and taking every possible action. We’ve been building dashboards — first in hardware, then in software — for half a century. The design pattern is so common that product stakeholders reach for it instinctively.

As every discipline develops tools for the problems it encounters, designers developed all kinds of tools to resolve this impasse, and deliver something. Lorem ipsum lets us mock up the content. Design systems give us a collection of widgets that we can lay out in the shape of a dashboard. Now we even have AI, trained on boilerplate templates and capable of generating something that matches our stakeholders’ rudimentary understanding of a dashboard, and even render it in HTML so that the output is truly “pixel-perfect.”

And yet, the implementation of this beautiful dashboard always seems to miss the mark. The user never quite becomes “empowered to receive relevant information that lets them take the right action at the right time” because we optimized out the part of the process that asks “what does that actually mean and how do we tell if we are doing it?”

Having visual artifacts doesn’t mean design was done

In the short term, learning to reach the peak of visual fidelity in the complete absence of conceptual fidelity was a very useful capability. But in doing so, we compromised the very thing that made design valuable.

The value of design isn’t actually in producing visual artifacts, but in the process that leverages those artifacts. Documenting a design decision in a visual artifact — that can be disseminated among appropriate audiences — is what makes that decision tangible and testable. Cycling through that feedback loop between design decisions, visual documentation, and the audience — and increasing the fidelity of both your thinking and your artifacts — is how design works.

From the inside (fastest) outwards (slowest), the design process is a series of loops involving different audiences. Each loop poses a new question with the associated artifacts and activities — and the answer serves as an input for the next loop up.

When that loop is taken apart, and stakeholders pick and choose what they want designers to do, the value evaporates. It’s not so much a matter of those stakeholders being wrong (the loop of the design process embraces being wrong) but about the inputs being incomplete.

And incomplete inputs lead to incomplete outputs. Not visually incomplete, but conceptually incomplete: artifacts that superficially look like those produced via a complete design process, and yet don’t carry any meaningful information within them, because the decisions they were supposed to document were never made.

The design process is how a team establishes useful problem framings and sets an objective quality bar for the final output. A design team that discards rigorous process loses the ability to tell whether or not the resultant artifact is fit for purpose. By taking this shortcut to providing value, design has learned to create negative value: not only will no benefit be delivered to the user, but it is now much more difficult to notice that this is what’s happening, and interrupt it.

No amount of productivity optimization, tooling, or designerly craft can get a team out of this trap. The faster the deliverables arrive, the nicer they look — the more stakeholders believe that their thinking had sufficient rigor, and that value was created.

But there is a way out. Investment in design process — not just making a deliverable, but obtaining and assimilating the data that informs it — can interrupt that cycle, and break the spell of the form factor trap.

Empathy for our users, empathy for our colleagues

There’s a line of thought in UX that tries to justify this phenomenon. It goes something like this: “there will always be conflict between design and the rest of the org because the designer is the champion of the user.” Designers are uniquely imbued with empathy for the user, and everyone around them is a money-grubbing capitalist.

This perspective is both wrong and unproductive. Design will never be able to own “doing the right thing for the customer” because everyone thinks they are doing the right thing for the customer. Designers weren’t born inherently more noble or insightful. The desire that underpins Agile — delivering software more quickly so we can make sure what we are building is valuable — was already embraced by programmers back when “design” in software showed up mostly in the phrase “big design up front.”

Developers have never stopped wanting this. Few of our colleagues get more job satisfaction from closing out tickets than from seeing their work tangibly improve someone’s life. But as methods matured, designers found themselves further upstream in the development cycle. Through low-fidelity methods that let them test ideas much quicker than the build-measure-learn loop, designers were now the first to call out that a particular product idea does not provide end-user value, and also the first to get called out for “designing the vision wrong” when that happened.

The hardline stance of “I alone know what the user needs” worked for Steve Jobs and pretty much nobody else. But because our colleagues want to do the right thing, we can take another path. Through the mechanism of the design process, teams can see when their ideas are conceptually incomplete, and collectively trace the logic that connects the thing they want to build to the value they want to provide.

JJG’s famous diagram showing that design outputs can only be created on a foundation of other fundamental outputs

A product team that has taken even one step towards adapting the design process to their needs is already highly resilient against the form factor trap, because they are no longer limiting their ideation to the form factor phase and outsourcing “implementation details” like “how is this actually going to help” to some other, downstream role. Instead, they start by defining the opportunity (what are we trying to achieve?) and measuring ideas against it (how would this help us achieve that?).

This approach is already extremely powerful. Not only can these teams measure the effectiveness of their solution before they ship, they can also test their understanding of the problem before committing to solving it.

But there’s another step within the design process that makes this cycle even more form factor trap proof: the primary user benefit. Design methods can help your team define it — and perhaps more importantly, identify that you haven’t.

Don’t ideate solutions — brainstorm primary user benefits first

Aligning on a shared problem framing is a critical step in building a good product, but it’s far from the only necessary step. I have worked with many teams who, despite agreeing on what the problem is, had no consensus on how they would go about solving it. Conversations about what solution to prioritize ended up being resolved by management fiat — the highest title in the room, rather than the best understanding of the user’s needs, would win every time.

Having alignment on the problem is not always enough to get clarity on what the solution should be — the possibility space is usually too large to test every existing option. Primary user benefit is the missing intermediate stage between problem and solution that can act like a bridge between these conversations, and prevent teams from trying to compare apples to oranges.

The primary user benefit is the layer between the problem or opportunity, and the solution.

In the example strategy pyramid above, the primary user benefit of a hypothetical online grocery store introducing a social feed is enabling better discovery of existing products. It’s far from the only solution that might do that — a search function or recommendation algorithm might be able to do the job even better. But by identifying the primary user benefit of these features, the team will never get stuck arguing between building a social feed and, say, a discount code system, because while these might both be solutions to the same business problem (order size) they provide different user benefits and cannot be directly compared.

Framing the conversation in terms of the primary user benefit is a great example of how design process actually saves both time and money: before spending time on building out different features to run experiments with, the team can use low-fidelity methods to cheaply test whether the user benefit those features are intended to provide is something that will solve the user’s problem in the first place. Thinking in terms of the primary user benefit allows us to sidestep conversations about feature checklists and zero in on the core value proposition.

But the benefit of the design process isn’t just the success state of arriving at a validated primary user benefit. It can also be from a failure state — identifying that we don’t have one and hitting the brakes.

One of my favorite design critique questions to ask is “what are some other ways of providing this user benefit?” A designer who has developed sufficient conceptual fidelity for their solution will be able to answer immediately, not only with what other solutions could exist, but why they discarded them in favor of this superior option.

But in the event of our dashboard example above — with high visual fidelity but low conceptual fidelity — there is no clear benefit to the user beyond “the user will have a dashboard.” An alternative solution for the same user benefit, by definition, can’t really exist. “The user will have a dashboard” tells the team that the idea isn’t ready, without even needing to build and test it.

By contrast, asking the UI design team to “be flexible” and draw up the idea would make it seem falsely complete; having a visual to look at conceals the fact that something else (the value) isn’t really there. The completed artifact misleads; the interrupted design process reveals the gap.

You don’t need to be a designer to practice this method. You don’t even need to think like a designer. But if you’re upstream of design and can get far enough through the thinking to identify that the primary user benefit isn’t defined, then you can reframe the conversation from “design is slowing us down again” to “design is going to help us figure this out.”

]]>
Articles Design & Product 
<![CDATA[ The power of beauty in communicating complex ideas]]> https://www.doc.cc/articles/power-of-beauty https://www.doc.cc/articles/power-of-beauty Sun, 02 Jun 2024 20:01:09 GMT

The power of beauty in communicating complex ideas

Written by Louis Charron

photo strip with images of the moon and earth

When creating visuals to explain complex ideas, cutting-edge innovations, or new scientific research, we often focus on the images’ ability to convey information. But data visualization, infographics, or even schematics have a hidden power we rarely discuss: beauty.

Obviously, not all images are beautiful. As designer Alberto Cairo writes, “Beauty is not a thing, or a property of objects, but a measure of the emotional experience of awe, wonder, pleasure, or mere surprise that those objects may unleash.”
Beauty is an experience.

So, can we defend pursuing beauty when communicating science or innovation?

Beauty gets our attention

Curiosity is the greatest source of motivation. It is usually triggered by creating an information deficit or by promising to teach the audience something new. In both cases the audience expects an answer, at best to learn something interesting, and at least to have their curiosity satisfied.

But beauty triggers curiosity differently. It draws us into the content not with a promise of knowledge but with a promise of visual satisfaction. We feel touched by the work before getting to the knowledge. Beauty makes it easier for us to look deeper, to wander around the content, to gather knowledge while already being satisfied. Beauty makes the learning process pleasant for our brains.

Accurat, the Italian design agency specializing in data visualization, pursues beauty as one of its main design principles:

“Beauty is not a frill. We know how to engage and motivate people to dig deeper and take time to explore the intricacies of a visual data analysis. We deploy our rigorous methods to achieve the ideal balance between familiar visual motifs and unexpected aesthetics, a powerful combination that leverages studies on perception to trigger curiosity and interest, and creates indelible images in the minds of users.”

infographic diagram

My reign for a (solved) paradox, Accurat, La Lettura, 2013

Their series of visualizations for the newspaper La Lettura leverages beauty as a way to get people’s attention on very specific topics. For example, the visualization above explores the 80 most important mathematical problems in history. That visualization makes me want to know more about it. In the words of Giorgia Lupi (cofounder of Accurat): ‘I like the idea of making people say “Oh that’s beautiful! I want to know what this is about!”’

Beauty triggers emotions

Beauty is fundamentally an emotional experience. Emotions have the power to create deep connections with the audience. They help viewers relate to the subject matter on a personal level. A great example of beauty creating an emotional connection with abstract ideas can be found in The beauty of mathematics video. The video shows three levels of abstraction, each with its own visual language. The beauty of the video lies in the connection between the three scenes. Moving from one level of abstraction to the next allows us to understand each of them; it connects our vision of reality to the maths and physics behind it. Watching it, we feel touched by having such an understanding of our world.

frame of the video showing a triptych math equation, its chart visualization and a lamp on

The Beauty of Mathematics, Yann Pineill & Nicolas Lefaucheux

There is something very satisfying for our brains in the fusion of an emotional experience and a rational experience. Emotional and rational experiences both play a fundamental role in our cognition and our decision-making process. As neuroscientist Antonio Damasio writes, “emotion is integral to the process of reasoning.” In other words: through emotions, facts resonate within us.

Wind Map by Fernanda Viégas and Martin Wattenberg is a great example of a beautiful data visualization. It represents wind patterns — a very large dataset at the scale of the United States — in an instinctive way. Using only black and white, the visualization makes us feel the wind. It connects with us on an emotional level while delivering precise and complex data. I really like Eli Holder’s words about this visualization:

“Data can be texture. Data isn’t limited to profound insights or crisp takeaways. It can also work in the background to create visual interest or give a more diffuse sense of which way the breeze is blowing.”

Beauty creates a satisfaction that lies as much in the movement of the fine white lines as in the feeling of understanding a complex system at a glance.

Wind Map, Fernanda Viégas and Martin Wattenberg, 03.08.2024

Beauty creates cultural significance

The story behind NASA’s famous Earthrise photo is quite interesting. It was taken during the Apollo 8 mission, the first to travel around the moon. The crew was equipped with cameras to take photos of the lunar surface. They took about 900 photos, most of them showing in detail the grey craters of the moon. During the fourth lunar revolution, astronaut Bill Anders noticed, through a foggy window, the Earth rising behind the moon. He was struck by the beauty of the scene. He reached for the camera and asked for a color film. His colleague Frank Borman replied: “Hey, don’t take that, it’s not scheduled.” The photo was not part of the plan. Without Anders’ intuition and photography skills, that photo would not exist.

Back on Earth, the photo circulated widely. The 900 photos of the lunar surface were certainly valuable to scientists, but the one truly beautiful image struck the public around the globe, igniting a global movement of environmental awareness. Looking at the photo today makes us reflect on our planet, on our home, on ourselves. As the science historian Lorraine Daston writes: “It’s an uncomfortable emotion. You don’t have wonder. Wonder has you.”

Earthrise, Apollo 8, NASA, 1968

This photo gets our attention, it tells us everything there is to know about our planet, it provokes emotions, but maybe more importantly, it creates a collective moment of reflection at a global scale. This is a third function of beauty in communicating complex ideas: beauty has the power to create culturally significant objects. Objects that resonate with us at the individual scale and at the collective scale. Objects that transcend their context to create profound cultural significance.

As designers creating images to communicate complex ideas, we rationalize our processes, we bring objectivity to our craft, we want our clients to think that our decisions are based on reasoning. However, we should also defend our intuitions, our subjectivity. We should also defend pursuing beauty as it is one of our most powerful tools.

Works cited
  1. Data Visualization in Society by Martin Engebretsen and Helen Kennedy
  2. Data visualisation: A handbook for data-driven design by A. Kirk
  3. Finding the plot in science storytelling by Martinez-Conde and Macknik
  4. Read the room by Eli Holder
  5. Seeing Science by Marvin Heiferman
  6. The photography behind Earthrise by Phil Edwards 
  7. Mercury, Gemini and Apollo Digital Image Archive by NASA

]]>
Articles Design & Craft
<![CDATA[Ethics in times of growth design]]> https://www.doc.cc/articles/ethics-in-times-of-growth-design https://www.doc.cc/articles/ethics-in-times-of-growth-design Mon, 15 Apr 2024 13:20:00 GMT

Ethics in times of growth design

Can growth design be truly ethical?

Written by Mary Borysova

Art direction by Manoel do Amaral

abstract sculpture with balancing shapes

Cover image by Holger Kilumets

In an ideal world, designers create user experiences that drive business growth while following ethical principles. However, creating and maintaining this balance between growth and ethics is one of the biggest challenges a designer can face, especially when “growth” rather than “ethics” is the word in their job title.

Companies hire designers to help their businesses grow by creating seamless user experiences, optimized flows, and value in their products through differentiated capabilities. When the market is expanding, the company (and the designer) focuses on innovation: finding new markets, experimenting with different value propositions, and creating new product lines. At times like now, when the market is contracting, the focus drastically shifts to optimization: how we can make the most, or squeeze the most, from the market base we already have. This extreme focus on marginal growth is when designers are pushed to the limit of this challenge of balancing growth and ethics.

Shaping ethical solutions

Growth hacks, or so-called psychology tricks, have a negative reputation — and it’s no surprise. Think about the last time you stumbled upon such a “smart” hack — maybe when you were trying to unsubscribe from newsletters you didn’t mean to sign up for, or when booking a low-cost flight became complicated due to numerous upsells.

ryanair modal upselling a product

Multiple upsells in Ryanair checkout

The more marketers and people responsible for company growth learn about growth hacks, the more unpleasant some user journeys seem to become.

For example, monetization projects may introduce more points of entry to the paid features to promote them more, disrupting the user flow and potentially making users sign up for something by mistake.

Engagement and retention growth projects can include creating obstacles to cancel subscriptions or opt out of emails, and excessive emailing to bring users back to the product. Is this bad by default, or can it be justified?

calm app email with a discount promotion

Graphics are courtesy of Rosie Hoggmascall and are part of her article

Imagine you’re working on a growth design project that involves limiting free access to certain features. How might you ensure users understand the value of the paid features while still respecting their experience as unpaid users?

Growth design projects can and should be ethical as well, and this can be achieved if these promotions are clear, respectful, and provide genuine value to users. Let’s look at some examples that managed to find a balance between business and providing value to the world.

Making job search and networking accessible

LinkedIn offers premium subscriptions that give users access to enhanced networking features, insights, and professional development courses. It is a good example of how a product can provide extra services to users who require them, while still keeping core functions accessible to all users.

Journalism and free access to information

The New York Times has a subscription model that gives users free access to a limited number of articles each month and requires a subscription for full access. This approach supports quality journalism by ensuring a revenue stream while still permitting users to engage with some content for free.

new york times paywall

Article in The New York Times

Language learning with upgrades

Duolingo’s core language courses are available for free, and users can access additional features and content through upgrades. This way, all users can learn a language without financial barriers, but those who are interested in extra features, such as pronunciation exercises or no ads, can buy the Premium version. However, one can argue that its gamification and notification systems sit in a greyer area of ethical design.

duolingo app upsell

Duolingo application

Shaping valuable ethical design solutions in growth user experience projects is definitely possible. There is one caveat: it’s possible as long as the team is aligned and willing to follow ethical design principles.

And this leads us to the next point.

Right team culture

When working in product design, we often have to juggle requests from stakeholders in the company with user needs and preferences. When working in growth design, we have to advocate for users, keep in mind requests from all stakeholders, and also make sure the project solution is well aligned with the company goals and that the experiment plan is well designed.

While product design projects by nature are aimed at improving user experience, adding new functionality, or introducing innovations, growth design projects often aim to impact specific metrics the company deemed essential for this period.

So, how can you prevent compromising the user experience while also contributing to the business goals?

Ethical by design

Some companies have a design culture that promotes making ethical design decisions more than others. We all know about Patagonia which is committed to responsible consumption, uses recycled materials, promotes fair labor practices, and encourages customers to repair and recycle their clothing.

You can often tell if a company cares about ethical design by judging its user interface, but sometimes you will need to dig deeper and understand how decisions are made.

For example, a Twitter (now X) employee has flagged that:

“…revenue-enhancing methods (such as increased personalization) would lead to ideological filter bubbles, open up methods of algorithmic bot manipulation, or inadvertently popularize misinformation.”

Ask the team during job interviews how they currently make decisions, what criteria they consider, and whether they employ ethical design principles.

Communicating your ethical design decisions

If you were lucky enough to join a company that leans towards the “ethical design” end of the spectrum, you might have the opportunity to present and advocate for your ethical solutions, and your voice will be heard.

I have adapted the framework developed by the Markkula Center for Applied Ethics at Santa Clara University and will explain how I usually present my design solutions.

1. Who will be affected? How?

  • Describe the audience that will be harmed if the less ethical solution is chosen. Add actual portraits of the people; these can be screenshots from interview sessions or usability studies. If applicable, share the user testing videos or at least clips from these sessions with emotional insights.

2. What are other alternatives?

  • Be actionable — venting about unethical designs and issues will not help finish the project effectively. Brainstorm alternative designs you feel may work better.

3. Which option treats all parties fairly and with respect?

  • Respect covers various aspects, such as valuing users’ time and resources, considering their mental health, and acknowledging their unique characteristics.
  • Consider assessing mental load, clarity of the policies and extra fees, data privacy, and accessibility.

4. Which option will yield the most good and the least harm?

  • Do the hard work for the stakeholders, prepare the designs, and list the pros and cons of each of them. Highlight the one that you think will be optimal for business and ethical for the community. Decisions should be logical, well-described, and easy to approve.

5. Can I test this solution before rolling it out for 100% of the audience?

  • Some technical solutions are impossible to test using just a design prototype, for example, because they involve algorithms (e.g. a personalized feed or suggestions). Yet to minimize possible harm, you can collect and analyze feedback from a small number of users first; this may help you avoid harming a larger number of users later.

6. Would I be proud to share my decision-making process with the public?

  • At the end of the day, your contribution to both ethical and unethical design projects will leave a mark on the world. Keep this in mind, and consider your decisions from a long-term perspective.
  • Jaycee Day shared her experience of advocating for a more transparent design solution during Config 2023.

Going beyond the screens and flows

A mature designer or design manager has to carefully evaluate both the ethical aspects and benefits of the solution for the company and find a balance. Successful designers meet the business department in the middle and understand their needs to deliver the optimal product and drive company growth.

Steve Johnson, VP of Design at Netflix, once said:

Steve Jobs was a designer and a business person combined. He used design to help run his business. He used simplicity, beauty, ease of use, empathy and reducing complexity as a competitive advantage against Microsoft, Dell, etc.

Preparing for our roles as designers


Some short-term growth strategies may lead to undesirable long-term effects, which could be more costly for the company or even society in the long run.

Consider Uber’s case: Uber’s drive to attract drivers by offering big sign-up bonuses and incentives resulted in an influx of drivers. However, this strategy also led to an excess of drivers in specific areas and ultimately discontent among them. This market disruption, while arguably giving more people access to transportation, also impacted the jobs of taxi drivers and even policies around better public transportation.

The impact of certain businesses on society is complex to measure and, at times, doesn't have a direct causal relationship. That doesn't excuse designers from striving to find the optimum solution without compromising their ethical values. The best way to prepare for this job is to make an active effort to learn, research, and re-educate ourselves about the technological, juridical, and societal developments happening around us.

]]>
Articles Design & Society
<![CDATA[The aura of care in UX]]> https://www.doc.cc/articles/the-aura-of-care-in-ux https://www.doc.cc/articles/the-aura-of-care-in-ux Thu, 14 Mar 2024 03:54:37 GMT

The aura of care in UX

Written by Stephen Farrugia

Art direction by Manoel do Amaral

abstract black and white illustration of an aura

In November 2022, Brian Chesky, CEO of Airbnb, began a tweet thread with “I’ve heard you loud and clear” in response to a customer backlash over the way they hid additional costs until the checkout page. “You feel like prices aren’t transparent…starting next month, you’ll be able to see the total price you’re paying up front,” he said about a change that could be made urgently in a day, or carefully over a few.

When he said “I’ve heard you loud and clear,” he was also telling his User Experience (UX) researchers and designers they were ignored, if they were heard at all. The deceptive pattern was no mistake. It was intentionally designed to deceive and to benefit from excited holiday planners and their potential to give in to the sunk cost fallacy. Instead of addressing the ridiculous additional fees, the company chose to trick customers into paying them. That’s not empathy; at best it’s apathy, at worst it’s hate. The decision to fix it only came after the balance of business value and public relations started to tip the wrong way. Chesky presented himself as a model CEO doing right by his customers as if he wasn’t responsible for wronging them in the first place. People bought it too. He demonstrated how bright a performative aura of care can shine to hide questions about the business activity or even questions about the business’s legitimacy to exist.

In April of 2022 Twitter added the option to write short descriptions of the images you attach to a tweet. Those descriptions help vision-impaired people that rely on synthesised voice software to read out the contents of a page. The thing about image descriptions is that the World Wide Web Consortium’s (W3C) standards for HTML — the document structure language of the web (and Twitter) — have required them since 1999. When Twitter went live, that requirement was already seven years old, and twenty-three years old by the time they obeyed it completely. To praise Twitter for recognition of vision-impaired people is like praising a heavy drinker for taking a hip flask to their kid’s school play instead of skipping out to the pub. They did the bare minimum, reluctantly, despite having UX researchers and designers on deck. For this they deserve no more than a collective why the fuck did it take so long?
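(For readers unfamiliar with the standard in question: an image description in HTML is simply the alt attribute on an img element, and checking for its absence is trivial. The snippet below is a minimal Python sketch added here for illustration — it is not from the original article, and the sample markup is hypothetical.)

  # Flag <img> tags missing the alt descriptions the HTML standard expects.
  # Uses only Python's standard library; sample markup is made up.
  from html.parser import HTMLParser

  class MissingAltChecker(HTMLParser):
      def __init__(self):
          super().__init__()
          self.missing = []  # sources of images without a non-empty alt attribute

      def handle_starttag(self, tag, attrs):
          if tag == "img":
              attr_dict = dict(attrs)
              if not attr_dict.get("alt"):
                  self.missing.append(attr_dict.get("src", "<unknown source>"))

  sample = '<img src="chart.png"><img src="moon.jpg" alt="Earth rising over the lunar surface">'
  checker = MissingAltChecker()
  checker.feed(sample)
  print(checker.missing)  # ['chart.png'] — the image a screen reader cannot describe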

Goodness in a product’s design tends to make more sense as a convenient side-effect of a business case. For Twitter, crowd-sourced image descriptions written for free can make a nice data set to sell for machine learning.

If we look at industry-wide examples, we can see how intrinsic care, once replaced with business incentives, leads to low-quality black-and-white photocopies of the original ideas. Everything becomes optimised to meet business requirements and any surviving sense of care that remains is there by chance.

Since the beginning of the web, writing W3C compliant HTML has been highly regarded among developers. Standards-compliant code makes the web accessible but the design philosophy of prioritising accessibility also led to the unique quality of HTML being forgiving if the standards are ignored. Showing something in a web browser is more accessible than showing nothing, so a web page will still look right if the code is not perfect. In the early days, this meant that the quality of the HTML wasn’t factored into timelines and budgets because it was extra work that didn’t change how the site looked. If a site was built with standards compliant code it was because the developers wanted it that way and did it on their own time.

That all changed in the early 2000s when Search Engine Optimisation (SEO) arrived. The techniques for improving the visibility of a site in Google’s search results included rules for the structure of the HTML. These rules took some W3C standards and tied them to a tangible business case of heightened search visibility. I remember the surreal experience of an SEO consultant presenting these rules to my web development team. We already knew everything they said because we understood web accessibility, but they were retelling these things as novel techniques for getting more sales leads from search.

Responsive Web Design (RWD) — a design philosophy for building sites that work for everyone regardless of the device they use or their connection speed — gained commercial adoption in a similar way, well after developers and designers had already seen its value as an empathetic design philosophy. Google announced that mobile-friendly sites would be preferred in search results and some, not all, RWD techniques became convenient. Now responsiveness in commercial web apps focuses mostly on being visually accessible to devices used by a target demographic. Anything outside is considered an edge case and ignored, or again, supported by developers and designers taking the initiative in their own time. That’s why some sites will crash the browser on your parents' iPad, use up your mobile data before anything renders, and fail basic accessibility tests. Browsing the web has become a reason in itself to upgrade a device.

abstract black and white illustration of an aura

And yet… User Experience has become part of the everyday lexicon. Normal people who don’t make tech products say they prefer a product “for the UX”. Normal people who do make tech products say their product “has great UX”. It’s generally accepted as a measure of how easy something is to use, how little it gets in the way. Like usability before it, it takes something that was a core concern of commercial product design — for as long as companies have sold products — and treats it like some novel modern add-on. But the real innovation is making it seem like the ease of use, the user experience, is the only thing that matters, because sometimes a product doesn’t offer much else.

Notion is a popular cloud-based product that is marketed with no purpose more specific than productivity or collaboration. It takes existing products like wikis, project management tools, and document editors and mashes them together into one window. Notion is the sum of products that were already legitimised as being useful by themselves. Despite inheriting all of its usefulness from other useful things, Notion’s success is the result of good usability design that makes it easy to use those things in one place. For Notion the UX is the product.

Productivity and collaboration might seem like vague purposes to an individual, but to a tech company they are compelling, concrete purposes. Businesses are sold corporate subscription plans for Notion and other products, like Slack and even Figma, which are imposed on staff as essential tools. For employees, these products are universal tools of nothing in particular. Each collaboration feature makes the anxiety of productivity ubiquitous. Little floating heads always watching over the document you’re working on, a perfect simulation of what we used to call micromanagement. They are virtual open-plan offices where everything you create becomes littered with comments and conversations you didn’t ask for.

The thing they all have in common is how strikingly easy they are to use. Part of which comes from very good usability design and part of which comes from the fact that you use them for a purpose you define yourself. When they say it’s for productivity instead of doing your taxes, they are benefiting from such an abstract criterion for failure that it doesn’t really mean anything. If you want to use it to do your taxes you can go ahead. But if it can’t help with some obscure tax calculation, you’re an edge case. For a UX designer at Notion the concern is that it can be used easily, not how well it does a specific task for a specific expertise.

And, look, I know how obvious and easy it is to dismiss this as how capitalism works. The problem is that the aura of care surrounding UX pretends that capitalism can be coaxed into giving a shit. It chugs along as if UX designers and researchers are the ones who are going to cause a revolution of socialist CEOs who consider users beyond their money and their data. But the inside secret of commercial UX is that the empathy is just a posture, and the businesses benefit from the aura of care without having to entertain it. In non-profit, government, or volunteer-based open source projects, the posture can, and usually does, match the reality, but in commercial tech it’s always contingent on the strength of a business case. The Google UX design course that says it will help you “empathise with users” is attracting the best-intentioned people and setting them up for a future of despair.

That’s why UX can help legitimise products that are intrinsically bad for people who use them. Tell someone that cigarettes are easy to use and they’ll ask about the reasons for using them, but tell them about the user experience of cigarettes and they’ll ask what makes the experience good.

Search Twitter for “FTX UX” and you’ll find no shortage of “it had a great UX” tweets published well after the fraud was exposed. It doesn’t matter which fraud or how obvious the scam was beforehand, the same search will yield the same results. The UX aura of care shines brighter.

The posture is strengthened by a UX community that seems open in its contradictions. The discipline is detached from the substance of the underlying products it is applied to, so empathy for users is mixed in with discourse of psychological exploits for increasing user engagement. There are Laws of UX that use psychology to design better products & services, and at the top of most UX book lists you’ll find Nir Eyal’s Hooked to learn how to build habit-forming products. Nir says he wants to see people hooked on products that promote healthy habits, but of course the ones getting rich from a product are going to believe their own bullshit when they say it’s harmless, healthy, or going to save the world. Another seminal UX book is Steve Krug’s Don’t Make Me Think, which has popularised the relentless removal of “friction” from user interfaces for over two decades. When you’re trading crypto with your life savings, you do want to think about everything you do, no matter how much the product is designed to prevent it.

abstract black and white illustration of an aura

Marketing is about attracting new customers and retaining existing ones and commercial UX is concerned with removing the barriers that prevent these. UX is powerful because it doesn’t seem like marketing and the practitioners don’t see themselves as being in the marketing business.

Like the sales tough guy that demonstrates his versatility by saying he can sell you this fucking pen. UX doesn’t care what you hit with the baseball bat, it just makes sure you don’t get splinters from it. Web3, NFTs, and blockchain products need this product-agnostic approach that keeps everything in the realm of experience because blurry, uncertain, or non-existent usefulness is a form of friction itself. Consider FTX and all the other centralised crypto exchange, trading, and lending platforms that turned out to be massive scams. Centralised crypto products come from a community-wide UX need to obscure necessary complexity rather than create usefulness that is concrete enough to justify it. Complexity justified by usefulness is obvious in products like Blender where a terrifying interface hasn’t stopped it from becoming an industry standard. The evidence that gaining the expertise to use it will pay off is overwhelming.

It is no wonder that crypto, metaverse, and now AI pushers are obsessed with UX. They talk about the user experience as a final barrier to adoption as if people are clambering behind a reinforced wall for a prize they can see and know they need. UX ignores questionable usefulness and the bright aura of care distracts from real questions of ethics and harm. It hides the real intentions of the business, not just behind a posture, but behind UX professionals who have a genuine sense of care. UX researchers and designers talk about empathy because they are empathetic people. In a commercial context there is tension between that empathy and viable business activity, so the role becomes usability design by another name.

UX ignores questionable usefulness and the bright aura of care distracts from real questions of ethics and harm.

UX seniors working outside commercial constraints don’t help the situation. They push the fight for the user rhetoric in Medium articles, tweets, and LinkedIn posts. They goad young UX starters to push for empathetic values without acknowledging how few contexts they are compatible with. For most, choosing where you work is a luxury. It’s going to be the commercial UX roles that pay the best every time. Designing socially beneficial products is something to strive for, but not something that should weigh on the shoulders of a junior UX designer while their manager is asking them to draw a dark pattern in Figma.

UX needs to make clear distinctions between commercial design work and design as a social good so the aura of care is not just an aura. Until that happens we’ll continue to see the worst companies hire the best people to help them make the worst things.

Works cited
  1. Airbnb is adding cleaning fees to a new 'total price' by Britney Nguyen
  2. A quick update by Airbnb Design
  3. Deceptive design pattern on Wikipedia
  4. Twitter image sharing on Wikipedia
  5. HTML standard by WHATWG community
  6. Alt attribute on Wikipedia
  7. SEO on Wikipedia
  8. RWD on Wikipedia
  9. Rolling out the mobile-friendly update by Google
  10. Edge case on Wikipedia
  11. Google UX Design Certificate by Google
  12. Laws of UX by Jon Yablonski
  13. Hooked by Nir Eyal
  14. Don’t make me think by Steve Krug
]]>
Articles Design & Society
<![CDATA[The snake that eats its tail]]> https://www.doc.cc/articles/the-snake-that-eats-its-tail https://www.doc.cc/articles/the-snake-that-eats-its-tail Thu, 14 Mar 2024 03:50:22 GMT

The snake that eats its tail

Written by Rachel M. Murray

Art direction by Manoel do Amaral

illustration of a snake eating its tail forming an infinite symbol

“So the truth is, I never would have questioned this religion, I never would have looked deeply at this belief system because it gave me so much pleasure if it hadn’t been for the pictures.” (Daisey 2014) (Source)

The fragility of attention — a commodity under threat

I once had a friend whose twelve-year-old daughter designed movies on her tablet, informing everyone she’d be either a fashion designer or an app designer. The feminist in me applauds her interest in technology. Still, the friend in me is overwhelmed and drowning in a deluge of data and screens from dawn until sleep. I wonder how much a twelve-year-old should engage with technology, knowing it may overtake her life just as it has mine. I wonder about her experiences as a woman in tech — by the time she’s my age, will she be worn out by the culture, the constant drumbeats of speed, the rush to consume, the need to be always on and never not connected? Many of us are already engaged in a life of screens — TVs, cellphones, and even GPS devices account for about 8.5 hours a day for adults in the U.S., according to Council for Research Excellence research (Stelter 2009). From the shadows on the wall in Plato’s allegory of the cave to Neil Postman’s warning that we are ‘amusing ourselves to death’ (Postman 2006) with entertainment, concerns over media, distraction, and attention linger in our public discourse.

What is the effect of constant exposure to information on our attention? Who benefits from a dramatic change in our attention, and when and why has this happened? One hypothesis is that companies have created an ecosystem to encourage continual consumption of information, and they profit from our addiction to information consumption and change in values and behaviors around attention and engagement. This ecosystem includes ‘consumer friendly’ products and services, predictive analytics, intelligent agents, artificial intelligence, sensor networks, the Internet of Things (IoT), personalized content, and platforms — these all work together with notifications to continually engage us. Alvin Abraham has called this a ‘ubicomp system’ (ubiquitous computing systems). Designers contribute to this system by tacitly encouraging people to accept the wholesale convenience of connectivity through deceptive intentions. We have long lived in what Herbert Simon called the ‘Attention Economy’ (according to the Berkeley Economic Review). That economy has been reinforced by a complex technical infrastructure and tech solutionism to keep the infrastructure growing.

This technological ecosystem is a feedback loop, or Ouroboros, a snake that eats its tail: content is delivered via well-designed devices that act as a lure. These devices create data fed back into the content distribution systems that use predictive analytics to shape the design and delivery of more content. The platform ecosystem of devices additionally encourages the design, development, and pricing of the supporting technology infrastructure of apps to keep it growing.

This is further compounded by much of the workforce being in sedentary ‘knowledge economy’ professions that prioritize engagement with data. As we move towards an increase in content and devices, we will continue to be pushed towards more engagement and consumption of goods — what design theorist Tony Fry calls “the ongoing creation of ever more things” at the cost of the planet — reinforced by messages that keep us distracted from identifying and protesting against the “Weapons of Mass Distraction”:

Capitalist culture organizes people as buyers of commodities and services (and) …transform(s) information and knowledge into commodities…The corporate conglomerates of the culture industry have created a global public sphere that does not offer any scope for discussing the social and cultural consequences of the ‘free flow of information’ they organized. The fusion of trade, politics, and communication has brought about the sophisticated one-dimensional character of our symbolic environment, which is at least as menacing as the pollution of the natural environment (van Toorn as cited in Dinot 2009:180)

Are we aware this is happening? Is this desirable? What can and should be done to address it if it isn’t?

These are questions for designers, too. If this system diverts attention and creates engagement to profit those in power, how can designers prevent that engagement when our work is centered on creating engagement? We know that ‘addictive design’ is hugely problematic — Actionable Design Magazine notes that ‘design[ing] for addiction and excessive use raises concerns about the impact on mental health and well-being.’ Addictive design is inevitable when we choose to use Deceptive Patterns (known as Dark Patterns) in our design work.

Yet designers should consciously challenge the idea that we design and treat attention in the UI layer alone. We need to look at larger ecosystems that shape products and services — the data, features, and distribution systems far beyond a user interface, even if they’re not on our immediate roadmaps of products. Here we are going to explore how a larger ecosystem around attention works using three product case studies. Apple’s Watch, Humanyze’s workforce optimization software, and SeeClickFix’s civic technology software all manage attention differently. For the Watch, the individual is a consumer, centered on ‘buy’ as the verb; for Humanyze, the individual is an employee, centered on ‘perform,’ and for SeeClickFix, the individual is a citizen, centered on ‘report.’ Wendy Brown and others in this essay will illuminate how attention ultimately shapes the actions (labor), the values (politics), and the way these technologies are created (design). In this last arena, we as designers must also be critical. As designers, we can become distracted by the lure of improvement — how Android is better because it is an open system. Instead, we must utilize a more realist perspective to understand the omnipresent nature of this Ouroboros technological ecosystem rather than be distracted by shiny lures, ultimately allowing us to challenge the system itself.

The challenge of design isn’t simply to design better for consumers, employees, and citizens — but to acknowledge this complex system, how it shapes attention, and the limitations of the idealist perspective for better design. These companies purposely design engagement for consumers as applicable to citizens and employees — the Quantified Self moves into Quantified Employee and Quantified Citizen — and a blending of these roles continues. There is a need for balance in our Attention Economy, not to seek a return to a mythical nostalgic era before technology, but rather to understand our relationship to these technologies and this blending, case by case.

The Apple Watch and the biochemical power over attention

The individual as a consumer can be seen in the case study of the Apple Watch. Behind the happy bing sound from a notification, the Apple Watch is a celebration of action — active action (a user consciously opening an app to engage in ‘info snacking’ on petite bites of data) or passive action (notifications pushed from applications). Because the screen is small, it is limited in what it can display. But brilliantly, Apple has created the perfect plate because the consumption of notifications is facilitated by their simple, practical design, which encourages repeated information-checking behavior and continual information grazing.

photo of apple watch

Image of Apple Watch courtesy Apple Web site

The Watch can be a conductor of a symphony of instruments — bells, tweets, alarms, on-screen boxes, a cacophony demanding your attention. Pulling out a phone at a dinner table is considered rude, but the option of interacting discreetly with your Watch shows that Apple recognizes that rudeness and plays with the social convention, knowing that users will still graze nonetheless. Alyssa Bereznak describes a change in her behavior that arose because of notifications and the need to respond:

When I first got my Apple Watch last month, that’s what I was most looking forward to: a tool that would keep me connected yet help me break from the magnetic pull of my cellphone — that thing I kept glancing at whenever there was a pause in my life, whether I was at an intimate dinner or on a productive conference call. I wanted to be less distracted, less obsessed with notifications. I wanted a gadget to save me from gadgets. Soon after I snapped the Watch (steel case, black classic buckle) to my wrist, however, I felt exactly the opposite effect. The notifications poured in, and with them, a new feeling of organization and efficiency. But with that productivity came a new sense of conflict between digital life and real life. I was becoming a more adept person, but also a more horrible one. I credit this change to the Watch’s clever-yet-distracting notification system.

Every time something you’ve deemed worthy of interrupting your life occurs, you feel a subtle, human-like tap on your wrist. Being summoned with these little nudges throughout the day is the most intimate experience I’ve had with a computer. The buzz of a phone, though attention grabbing, is easy to physically ignore. I can silently acknowledge that I got a text and make a note to check my messages later without flinching, or breaking eye contact with the person I’m talking to. But my reflexive reaction to a robot flicking my arm is almost always to look down at it. And look down, I did.

The Watch plays into the insecurity created by observing and comparing the self to others on social networks — the ‘fear of missing out’ (FOMO) on a life others are assumed to be experiencing and documenting on social media. Tapping into a stream of notifications, one can feel they are participating by consuming data. The Watch symbolizes a worldview that celebrates hyper-connectivity between person, data, objects that create the data, and the world. For Apple, constant connectivity and engagement with data are critical to a productive life, and the Watch facilitates this always-on, always-connected hyper-connectivity. Like a guard dog watching over a house, the Watch silently waits to notify you of content to consume.

The role presented is Master Commander of the Data Seas, sailing the waves of data from the Internet of Things (IoT). The worldview is a celebration of the data deluge and of a user’s supposed mastery over data. The Watch speaks a narrative of control as one uses it to curate a digital lifestyle aggregation of online consumption delivered by an endless array of potential new apps. Apple conveys that there is always another app to download, another call, email, message, video, reel… and more to consume.

Creating infrastructural hooks like APIs (application programming interfaces) increases demand for applications with notifications. Services like If This Then That promise to connect devices via the Web and the IoT and create more possible notifications to pay attention to. The Watch and the IoT then create expectations about time, content, and the objects to deliver them. We now ask why my x (product, service, experience) doesn’t connect with my y (input device, product, service, experience) as we become aware these connections are possible. Integrating with other technology companies benefits multiple masters and makes the ‘business case’ for connecting more technology. Humanyze and SeeClickFix also speak of integration with existing systems, “whether it’s sales KPIs from Salesforce, call data from Communicator, or email and chat data” so that engagement can continue uninterrupted.

So, why do we keep on engaging? Biochemistry is a complicated milieu, but the relationship between the neurotransmitter dopamine and technologically motivated behavior is well understood. According to Psychology Today, dopamine “helps control the brain’s reward and pleasure centers […] emotional responses, enabling us not only to see rewards but to take action to move toward them.” Dopamine, as Susan Weinschenk notes, “makes us curious about ideas and fuels our searching for information.” We get a spike in dopamine after performing information-seeking behavior like checking social media accounts for new activity. Repeated dopamine-seeking behavior highlights the downside of seemingly innocent actions like notification consumption. We might search for one data point, only to fall down a well of multiple searches that cause a mild dissociative hyper-focus where we lose sense of how much time we’ve spent.

That information-seeking behavior cycle can be triggered even when we receive a notification but are not engaged with technology. Knowing we have a notification starts the cycle of dopamine-seeking behavior again.

Information-seeking behavior starts to appear in everyday life without our awareness. We bite at the apple in the Apple logo but return for another bite because we are never satiated and always need that following bite/app/content and dopamine hit.

This is more significant than one device. It is a love affair with ubiquitous computing personal devices and the digital lifestyle aggregation that lives on them. The Watch offers a way to manage this life and maintain control over time, information, and our attention through notifications. We attempt to control our data via notifications — as if to say, ‘Acceleration is here; time can’t be controlled, but here is a notification about it.’ As Lazzarato notes:

In contemporary capitalism, control means paying attention to events whether they are taking place in the ‘market’ or the workshop; it means paying attention to being able to act, to anticipate, and ‘being up to it’. It demands learning from uncertainty and mutations; it means becoming active in the face of instability and collaborating in ‘communicational networks’

(Lazzarato 2004:192)

The Watch positions users as being in control of their data when, in reality, ownership is less clear. The customer chooses that worldview when purchasing Apple’s brand and technical apparatus (Apple ID as a billing system and iTunes as a content distribution platform) and the political and economic tenets of that worldview. One may argue they don’t buy into the Cupertino worldview — they’re not an ‘Apple Person.’ Still, one cannot use any Apple product without the accompanying infrastructure. As Barclay Sloan describes it, you’re in a walled garden of an operating system, the “walls of dependency that Apple has intelligently built around those living inside this Apple ecosystem to keep you inside — products and services not being compatible outside the ecosystem.”

From Dick Tracy’s watch to our smart objects, a message is communicated by technology companies that a data deluge is coming and that data will be ubiquitous and unavoidable. In response to de Pompadour’s edict ‘après nous le déluge,’ Apple and other technology companies assert there is no end to the deluge — we can only use objects to consume rather than create dams to stop it. This data also bleeds into infrastructures in our cities. As we move towards the surveillance opportunities within smart cities, continually connected smart devices and consumerist narratives like the ‘Quantified Self’ and the Internet of Things lend to the Ouroboros technological ecosystem, encouraging us to stay engaged rather than analyze the engagement itself.

Whether the Watch concerns action at a personal level is not the question — though there is little to tie the actual political action as a citizen to the message that we control our data. Instead, the question is whether we are controlling our constant information-seeking behavior or whether it is controlling us. This example is unique from the others, as the individual becomes self-monitor and master, with little need for a group to monitor us when we install a Panopticon made of shiny steel on our wrists to monitor ourselves.

I am the eye in the sky, looking at you: your attention at work

What is the state of attention and engagement for the individual as an employee? In the U.S., workplace surveillance allows for managing and optimizing employees within the company. Humanyze’s “people analytics and workforce optimization” software is one such surveillance apparatus, a platform of interconnected parts that monitors and analyzes sociometric data — data from sociometry, the science of social relationships.

humanyze ceo photo

Ben Waber, CEO of Humanyze, wearing one of the company's sensors. (Robin Lubbock/WBUR)

The badges have a microphone and sensors to track data such as “real-time voice processing to uncover variables like tone of voice, volume […] and how frequently you contribute to meetings.” This data is analyzed and measured according to productivity and other key performance indicators. Most workplaces in the U.S. have workplace surveillance across computers, Wi-Fi, and employee access badges. Humanyze specifically monitors voice data, which can track elevated heart rate (among other data) to indicate and “measure your body language.” For the entire Humanyze system to work, employees must continually wear the badges as, according to the company, “continuous wear is a commitment by a company to continuous improvement […] it is at its most powerful when adopted company-wide.”

(Editor’s note: the Humanyze badge was discontinued; as of 2021, the company is focused on digital collaboration data.)

Action for employees is wearing badges, but for employers, action is paying attention to tracked data from employees. Humanyze places managers as surveillance masters to use attention to yield benefits in the name of productivity. Humanyze encourages managers to “proactively understand disruptions to their teams [… ] and be warned of potential project failures based on communication gaps” (Humanyze, 2016), such as employees not participating during meetings even if they may not feel they have anything valuable to contribute. This system then rewards ‘false participation’ and can create a ‘gap’ where one may not exist. Humanyze also notes employees can benefit from monitoring ‘goals and benchmarks’ and self-tracking of attention, similar to Apple’s emphasis on self-tracking.

Tracking progress is a form of self-mastery as well as competition with others (“See how you compare to your team or other roles in your organization” — Humanyze 2016), and it also creates free content (Berardi’s Semiocapital) for the company to use to monitor. Humanyze reduces human behavior to data points — in contrast, human-centered design uses mixed methods of qualitative and quantitative data to understand human behavior and capture a full range of human experiences. When Wodiczko spoke of Interrogative Design, he spoke of integrating designers working with people worldwide. Still, his words could also speak to the dangers of reducing people to data points without context to explain subtleties of human behavior that data alone can’t capture:

Design should not be conceived as a symbolic representation but as a performative articulation. It should not “represent” (frame ironically) the survivor or the vanquished, nor should it “stand in” or “speak for” them. It should be developed with them and it should be based on a critical inquiry into the conditions that produced the crisis

(Wodiczko 1999:17)

When Humanyze speaks of actions (“Get a look into what everyday actions contribute the most to your creativity, collaboration, and productivity” — Humanyze 2016), users become analysts and self-monitors, similar to the Watch and SeeClickFix. For Apple, optimizing the self is the critical action, and for SeeClickFix, ultimately, citizens optimize the city; with Humanyze, we have an optimization of self and others to benefit both the company buying the service and Humanyze. Berardi could view optimization as a natural goal of “the capture of work inside the network […] the dissemination of the labor process into a multitude of productive islands formally autonomous, but actually coordinated and ultimately dependent” (Berardi 2009:88). When he speaks of “human terminals […] all connected to the universal machine of elaboration and communication” (Berardi 2009:76), those ‘terminals’ are part of a system which uses attention and monitoring via surveillance to optimize.

CEO Ben Waber sees Humanyze’s ‘smart badge’ and platform as desirable, and sees it as inevitable that this technology becomes ‘commonplace’ — “within about four years, every single company ID badge is going to have these sensors, whether it’s ours or somebody else’s, right?” (Humanyze 2016). Humanyze profits from the change in how we value attention and uses surveillance and the lure of optimization to make this shift seem natural.

Attention in the body politics: are we supporting each other or selling the act of feedback?

How do attention and engagement fare for citizens? SeeClickFix is an app that aims to “connect citizens with local government, fixing problems and building trust in your community” (SeeClickFix 2016) for non-emergency service requests about broken public infrastructure, and the company frames action as citizens using their attention and engagement to become surveillance agents who report, track, and monitor that infrastructure. There are many other similar apps today that, with different approaches, have similar goals and, consequently, similar issues — NextDoor and CitizenApp, to name a few.

SeeClickFix app

SeeClickFix (source)

Surveillance for SeeClickFix is turned outward towards the world to monitor infrastructure. It is also turned inward towards public sector employees who are ‘monitored’ by the system. Government officials are notified via a private SeeClickNow Web-based request management system (for which they pay SeeClickFix). They can embed a SeeClickFix map ‘widget’ on municipal Web sites for citizens to pay attention to. Whereas the Apple Watch has notifications managed by individuals who opt into use, SeeClickFix sends notifications to civic servants who do not opt in — only their employer does. The employer can then use the completion of the tickets as another metric to measure employee performance. Not only are employees monitored for completing their primary job, but they can also be monitored for completing tickets in the SeeClickFix system. The citizen’s attention then becomes another form of employee monitoring. Does this become a new kind of citizenship/ad hoc participation without a governmental body’s organization of the commons? Contrast this with more extensive, sustained efforts of community engagement via community boards, participatory budgeting, and participatory design, all in combination:

Public engagement initiatives such as San Francisco’s Urban Forest Map, Chicago’s GO TO 2040, New York’s PlaNYC 2030, Los Angeles’ SurveyLA, and Washington D.C.’s Capital Bikeshare Survey have been successful in bringing the public into a participatory role in data collection, city planning, and vision setting activities. Such projects are examples of undertakings that would most likely be too big and cost-prohibitive for a city to carry out and continue on its own. Yet by asking citizens to participate, not only does it keep costs low but empowers citizens, brings together new ideas, and increases mass collaboration (Bradford 2010).

This new lifecycle of production, monitoring, and analysis of data also shifts responsibility onto citizens to provide services as they become monitoring agents in Smart Cities. SeeClickFix’s site says, “A better neighborhood starts with you. Let’s get to work.” (SeeClickFix 2016). When Wark speaks of a language of commanding, he speaks of how recycling becomes an extension of the consumption cycle with OBEY!/BUY!/ RECYCLE! (Wark 2013:3). The next part of this cycle will then be for citizens to REPORT — or else not really contribute to an invisible metric of civic involvement. We also find an ideology that promotes data and finance over people, similar to the narrative about Smart Cities prioritizing data over qualitative metrics of happiness. Will what’s happened with our consumer data, marketing, and privacy be carried over to our governments? Does analytics — and citizen attention — become a new kind of monitoring of workers now measured by SeeClickFix tickets answered? Do some infrastructures get fixed sooner because of vocal citizens where ‘this pothole got the most likes’, and what are the repercussions for civic engagement in areas without potholes watched by SeeClickFix?

Are policies and processes simplified in this solution, or does this add a bandage layer to existing technologies rather than a systemic solution? SeeClickFix has chosen citizen reporting and ticket management as a part of everyday life — fixing a pothole “or something like it — happen[ing] all the time in the disintegrating spectacle [falls] short of any project of transforming it” (Wark 2013:8). By focusing on simplicity rather than complexity, SeeClickFix misses a chance to address complex systemic issues. Fry notes that “the actual organizational means to engage problems of defuturing with some chance of success must come from a broader and more informed understanding of causality and a sense of the relational complexity. Such means need to have the ability to undermine bio-political and technological inscribed networks of power” (Fry 2011:32). This oversimplification of the system then shapes expectations of how quickly problems should be solved — on-demand problem solving and the prioritization of speed assume one can ‘SeeClickFix — see something, click on it, fix it.’

This is part of a larger narrative of neoliberal technosolutionism that views new technology as a solution to political and social issues by privatizing public services. Designing solutions for the public sector is complicated. Yet the civic technology space continues to grow as more private sector companies enter to provide services. Free market forces have worked to erode the public safety net of services traditionally offered by the government, a “dramatic worsening of social protections, determined by thirty years of deregulation and the elimination of public structures of assistance” (Berardi 2009:80). What Wendy Brown identifies as a “financialization of everything […] to enable a radical reduction in welfare state provisions and protections for the vulnerable; privatized and outsourced public goods [so that] democracy can be undone, hollowed out from within, not only overthrown or stymied by anti-democrats” (Brown 2015:17).

Those free market forces which advocate eliminating public services often seek entry into the public sector by using data and analytics to ‘optimize’ and ‘improve’ the city, often with the goal of a Smart City that gathers data which can be monetized and used for surveillance. SeeClickFix views the government as its competition — inefficient, antiquated, and in need of fixing.

“For me this whole website started because I was trying to report graffiti on a neighbor’s building,” Berkowitz said. “It wasn’t attractive graffiti, and it was in a place that was not a public space.” He reported the graffiti to the New Haven government but he said nothing happened. “I got the idea that my neighbors were reporting similar things, but there was no accountability and no collaborative discussion,” said Berkowitz (Harless 2013)

Berkowitz notes, “With many of the things we want government to help us with, it really makes sense to try to do it on our own […] at first, we thought of calling it Little Brother, like ‘Little Brother is Watching,’ but then we realized we needed to be a bit more kind to government” (Business Innovation Factory 2016). For SeeClickFix, technology expresses ideology and power to fix a broken government and profit from the destruction. ‘On-demand government’ is created and materialized, and this idea becomes “not merely a concept but a social reality” (Geuss 2008:46) that can become yours for a $10,000 license and monthly subscription fee. Municipalities pay SeeClickFix and have citizens become field agents under an indentured Servitude As a Service business model — and as Berardi notes, everyone with a cell phone becomes a perpetual worker and “labor is the cellular activity where the network activates an endless recombination. Cellular phones are the instruments making this recombination possible.

Every info-worker can elaborate a specific semiotic segment that must meet and match innumerable other semiotic fragments to compose the frame of a combinatory entity that is info-commodity Semiocapital” (Berardi 2009:89). SeeClickFix effectively outsources the maintenance of governance to us via our attention and engagement, while profiting from our work for them.

A way forward with awareness and action: practical methods to protect our attention

So, que faire — what is to be done to protect our attention? How can designers preserve agency, build awareness of the issues, and create resistance? This is our work, for both those who design and those who consume: to together make a realist response of resistance, a “rupture that declares the situation and creates practical possibilities in that declaration” (Badiou 2012:8). Will we still be ‘patzers’ in Larry Lohmann’s concept of realist politics (Lohmann 2012:295), simply designing more human-centered ethical Band-aids? Perhaps. But not to attempt a response is not to think; if, as Hannah Arendt asserted, the only reliable source of light in dark times is found in the activity of thinking (Berkowitz 2010:5), then we must think, because thinking is the start of a realist political response about attention and any area where our agency is threatened — the only reliable “safety net against the increasingly totalitarian or even bureaucratic temptations to evil that threaten the modern world” (Berkowitz 2010:5).

We can take action by becoming conscious, holding people accountable, taking responsibility, and creating alternatives. We must hold multinational corporations responsible for the products and byproducts — like data — that they create. A company may want to use tools like Humanyze to improve performance, but the answer must also be to advocate for consent, transparency, security, and privacy before adopting such tools.

We can support Trebor Scholz and others in the Platform Cooperativism movement as they scale up their work. There have been open-source mobile technologies, but none have gained traction, primarily because of collusion between mobile companies and telecommunications carriers — which is why alternative systems like Platform Cooperativism are vital. We can involve citizens in monitoring broken infrastructure and encourage meaningful engagement and dialogue between citizens and public officials. ‘Working with, not for’ is the battle cry of Laurenellen McCann, whose 5 Modes of Civic Engagement in Civic Tech (McCann 2015) offers a sophisticated analysis of where civic technology can increase civic engagement, echoing Ezio Manzini’s emphasis on co-design as a dialogic conversation, where actors are “willing and able to listen to each other, to change their minds, and to converge towards a common view; in this way, some practical outcomes can be collaboratively obtained” (Manzini 2016:58).

Governments can use tools without contributing to the privatization of the commons. One example is Open311, an open-source technology that public sector agencies can share and build upon without paying a private sector company. We can also learn to design in ways that put people before feature sets and technological capacities:

The way that data and algorithms are presented can be confusing or meaningless, and there is a risk that these numbers remain far-flung idealistic pinnacles which convey no grounded meaning. IoT needs to remain human to remain relevant. Designing products and experiences should always implement a human layer (rehabstudio 2016)

We can also consciously design to protect our attention and rethink the idea of engagement. While we want products that allow us to complete tasks and experience delight, we can explore what Jean-marc Buchert describes as a more “sober, uncluttered, and optimal experience to help users dive into their flow” — choosing to remove extra content and UI elements that may distract users. The most prominent example might be the Light Phone, which dramatically pares down the experience by simplifying the user interface altogether.

We must question why and how products change our behavior. It is up to me to teach younger designers like twelve-year-old Maddie that she has the power to step away from the screens, too. Technologies can be redesigned, but we can also rethink our use and respect each other’s attention, time, and agency. As consumers, we can question our relationship with these companies and bring conscious contemplation to our use. Still, it also lies with designers to take responsibility for our role, heed the call that Victor Papanek and others have sounded, and take responsibility for designing attention and engagement. Ironically, anti-pattern designs can be a great wake-up call, showing that citizens and designers can unite toward a response that respects consciousness and designs for a world that values mindfulness. As Carl Jung noted, “One does not become enlightened by imagining figures of light, but by making the darkness conscious” (Jung 2014:265).

Works Cited
  1.  Metapolitics by Alain Badiou and Jason Barker
  2. How the Apple Watch Turned Me into a Hyper-Efficient, Horrible Human Being by Bereznak
  3. Thinking in Dark Times by Roger Berkowitz
  4. Government by the People: The Importance of Public Engagement by Garrett Bradford
  5. The Purdue OWL Family of Sites by The Writing Lab 
  6. Undoing the Demos:  Neoliberalism’s Stealth Revolution by Wendy Brown
  7. How to Restore Your Users’ Attention by Jean-marc Buchert
  8. The New Yeoman Farmer by Business Innovation Factory
  9. Ben Berkowitz’s Formula by PBS
  10. Psychology and Alchemy by Carl Gustav Jung
  11.  The Agony and the Ecstasy of Steve Jobs by Mike Daisey
  12. Dopamine Primer by Emily Deans
  13. Deceptive Patterns
  14. UX Design Ethics: Striking the Right Balance by Actionable Design
  15. Ethics? Design? by Clive Dinot
  16. The Soul at Work: From Alienation to Autonomy by Franco Berardi
  17. Design as Politics by Tony Fry
  18. Philosophy and Real Politics by Raymond Geuss
  19. Drive Performance & Retention with A.I.-Powered Workforce Analytics by  Humanyze
  20. Critical Vehicles: Writings, Projects, Interviews by Krzysztof Wodiczko. 
  21. Tradición Clásica: The Expression by Gabriel Laguna
  22. Capital-Labour to Capital-Life by Maurizio Lazzarato
  23.  Theory and Politics in Organization by Ephemera
  24. Beyond Patzers and Clients — Strategic Reflection on Climate Change and the Green Economy by Larry Lohmann
  25. Design Culture and Dialogic Design by Ezio Manzini
  26. 5 Modes of Civic Engagement in Civic Tech by Laurenellen McCann
  27. Amusing Ourselves to Death by Neil Postman
  28. U.S. Workers Spend 6.3 Hours a Day Checking Email by Patricia Reaney
  29. The Internet of Useless Things by rehabstudio
  30. The Apple Ecosystem by Barclay Sloan
  31. Paying Attention: The Attention Economy by Berkeley Economic Review, 31 Mar. 2020.
  32. 8 Hours a Day Spent on Screens, Study Finds by Brian Stelter
  33. The Spectacle of Disintegration by McKenzie Wark
  34. 100 Things You Should Know about People by Susan Weinschenk
  35. Can UX design actually be ethical? by Beth J
  36. Notifications require some much-needed attention by Canvs
]]>
Articles Design & Society
<![CDATA[Why do users prefer certain designs? Insights from the landscape theory]]> https://www.doc.cc/articles/landscape-theory https://www.doc.cc/articles/landscape-theory Thu, 14 Mar 2024 03:48:59 GMT

Why do users prefer certain designs? Insights from the landscape theory

Written by Dejan Blagic

joshua tree photo black and white

For most of history this has been the main UI for all humans.

The physical environment around us has been the main interface for most of humanity’s existence: grasslands, forests, mountains, river valleys, deserts. In order to survive and reproduce, humans evolved perceptual tools that allow them to interact with their landscapes efficiently: where to find food, water, and shelter, and, just as importantly, what to avoid.

According to the work of Environmental psychologists Stephen and Rachel Kaplan:

Humans prefer landscapes that provide the most useful information for their survival

Humans have evolved to instinctively prefer environments that allow them to assess and gather knowledge from their surroundings easily. Seeking and “eating” information is so important to our survival that this drive has led us to prefer some environments over others.

These preferences can be developed through two cognitive functions: understanding and exploration.

Understanding is a cognitive function through which humans gather information from their environment in order to make sense of it and create a context in which they can interact with it. With a good understanding of the environment, humans are able to avoid dangers and take advantage of opportunities for food, reproduction, and shelter.

On the other hand, exploration is a cognitive process that involves cultivating an interest in obtaining new information. It is closely related to curiosity and represents a desire to learn something new.

Four predictor variables of preference

The Kaplans realized that understanding and exploration may sometimes occur immediately, but can also require time and inference.

Information that the brain can process without additional effort is considered “immediate”, while “inferred” information requires additional mental effort and time to process and to classify elements into categories.

By crossing these two dimensions — cognitive function (understanding versus exploration) and processing mode (immediate versus inferred) — the Kaplans developed a preference matrix, which forms the core of their landscape preference theory. The four resulting cells correspond to four variables: coherence, legibility, complexity, and mystery.

Table showing the preference matrix: understanding (making sense of the environment) calls for coherence when immediate and legibility when inferred; exploration (extracting new information) calls for complexity when immediate and mystery when inferred.

Preference Matrix

Coherence refers to the immediate understanding of an environment. When an environment is coherent, a person can quickly understand what is present in it and how it looks. These types of environments have a clear order and predictability, with elements that are often repeated in recognizable patterns and structures. This helps a person easily understand the information that is present. Coherence is closely related to the Gestalt laws of visual perception.

Legibility, on the other hand, requires more time and effort to understand and involves movement. While coherence is a passive understanding of a scene, legibility occurs when a person starts moving and perceiving a changing environment. Good legibility provides good orientation and a clear path to move forward and back. If legibility is poor, a person may feel lost and unsafe in the environment. In user experience design, an example of legibility is how users navigate through a site and how user flows are made seamless and comprehensible with actions to go back and undo mistakes. It’s also related to a few heuristics and cognitive effects like Goal-Gradient Effect and Peak-End Rule.

Complex environments are those that allow for immediate exploration due to the large number of diverse elements present. These types of environments motivate exploration because there is so much to discover and learn. In UX design of enterprise and B2B applications we often deal with complex interfaces that cannot be reduced, but which are also preferred by users because they offer more control and flexibility. One of the cognitive effects related to this is Tesler’s law.

Mysterious environments, on the other hand, require additional effort to explore as they contain hidden information. These environments have various elements with hidden information that must be collected and connected. Mysterious environments can be challenging, but they can also be exciting and engaging as people try to uncover hidden information and piece it all together. Although hidden information and functionality on the UI can be dangerous, it can also provide a positive experience if there are hints, indicators, tooltips, Easter eggs and “white rabbits” on the UI that invite users to discover new functionalities. If it does not have negative consequences, a mysterious user journey can provide a positive experience if it ultimately reveals something new to the user.

An environment with a clear, legible path and engaging, complex foliage that adds mystery. It strikes a good balance between being legible and involving. (Source)

New virtual environments, same mental models

The dawn of digital interfaces and the Internet shifted our places of being from the physical environment to a virtual one. We spend more time looking at the screens of computers and phones than we do looking at nature. Rightly or wrongly, we rely more on digital interfaces than on natural landscapes to live our lives. The user interface of computers has become a main part of our societal environment, but we use the same perceptual tools that our ancestors developed over millions of years.

Evolution has shaped our visual perception to prefer certain landscapes, and these preferences can be applied to the new virtual 2D landscapes we encounter on screens.

User interface perception shares the same mental models as landscape perception

Some cognitive scientists like Donald Hoffman argue that our reality is actually a user interface and we don’t see the true reality any more than we see the processes in the transistors and chips in our computers.

When we interact with a computer’s user interface, we use the same cognitive functions as when we interact with natural landscapes. We try to understand the information the UI provides and make sense of it, and we also take action on the environment and explore it.

There is a feedback loop between these two functions, as exploration leads to a change in the environment that prompts the process of understanding in our minds, and once we understand it, we take action and explore the next state of the environment.

An evaluation framework based on our landscape perception  

The landscape theory of environment preference can be applied to evaluate user interfaces and understand how users perceive them. By considering the four variables of coherence, legibility, complexity, and mystery, we can understand why users prefer certain UI elements or design approaches.

This can inform the development of a framework for evaluative UX research and usability studies and help us understand how users process and understand the information presented in an interface.

Table showing the related concepts and emotions for coherence, legibility, complexity, and mystery, as described in the text below.

UI preference framework.
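
To make the matrix concrete, here is a minimal, hypothetical sketch of how such an evaluation could be recorded in code. Only the four dimension names come from the Kaplans’ framework; the 1–5 scale, the ScoreCard shape, and the aggregation into understanding and exploration scores are illustrative assumptions, not part of the theory or of any published framework.

```typescript
// Hypothetical score card for one interface, rated 1 (low) to 5 (high)
// on the four Kaplan dimensions. The scale and aggregation are assumptions.
type ScoreCard = {
  coherence: number;  // immediate understanding: order, predictability
  legibility: number; // inferred understanding: navigation, orientation
  complexity: number; // immediate exploration: richness, control
  mystery: number;    // inferred exploration: hidden features, discovery
};

// Roll the four variables back up into the two cognitive functions.
function summarize(card: ScoreCard) {
  return {
    understanding: (card.coherence + card.legibility) / 2,
    exploration: (card.complexity + card.mystery) / 2,
  };
}

// Example: a dense enterprise dashboard that is rich but hard to orient in.
const dashboard: ScoreCard = { coherence: 3, legibility: 2, complexity: 5, mystery: 4 };
console.log(summarize(dashboard)); // { understanding: 2.5, exploration: 4.5 }
```

A sketch like this is only a way to make findings comparable across usability studies; the qualitative observations behind each score matter more than the numbers.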

By analyzing how users perceive and navigate an interface, we can determine whether they are able to understand the information presented to them immediately or if they need more time to process it.

We can also assess whether they are able to easily navigate and orient themselves within the interface, or if they experience fear and uncertainty. Understanding these factors can help us design more effective and user-friendly interfaces.

The current practice we have in UX design is aligned with understanding, as we want to create intuitive, frictionless designs so users don’t get confused and feel lost.

In addition to understanding how users perceive and understand an interface, we can also research how they interact with it and explore it. We can determine whether they are able to immediately take action and explore the interface, or if they become stuck and try to uncover hidden elements. However, there are also products with mysterious characteristics, like ChatGPT, that require users to explore and uncover hidden features, creating a sense of amazement and excitement.

We can also analyze the complexity of the interface and consider whether it is effective or not. It’s important to note that complexity does not necessarily mean a bad design. Complexity can provide more control and flexibility for users to perform tasks in different ways in the same way that a complex environment can provide different sources of food and security.

A complex interface may offer a range of options and features, allowing users to approach a task in a way that best suits their needs and preferences. This can be beneficial as it allows for greater customization and adaptability. On the visual side, complexity can provide a richer aesthetic and remove the potential boredom we might experience with simpler user interfaces.

It’s worth considering how exploration can positively impact user experience. While we have mostly focused on the understanding side, our users may need an adventure of discovery. There are good reasons why open-world games are so popular and why many players spend their time hunting for Easter eggs. If we craft it carefully, even web and mobile apps can offer a positive experience of exploration; otherwise, our users will feel lost, unsafe, and overwhelmed. It is important to strike a balance and consider the context and goals of the product when deciding on the appropriate design approach.

The world is ever-changing, our cognition remains the same

Landscape theory can help us understand how people prefer certain environments in nature and why. The reasons for this lie in the theory of evolution, because we have evolved to prefer those environments that provide us with the necessary information for survival and reproduction.

Today we live in a technological civilization, but we still have the same cognitive functions of understanding and exploration that our ancestors had, and these cognitive functions influence which user interface we will prefer.

Therefore, using the landscape theory of Stephen and Rachel Kaplan, we can analyze and explore the preferences our users have regarding design. A new preference framework can be built on this in further user experience research.

Works Cited
  1. Landscape theory by Stephen and Rachel Kaplan
  2. Why do we prefer one environment over another? by Georjeanna Wilson-Doenges
  3. Laws of UX by Jon Yablonski 
  4. Easter eggs, little delights in UI/UX design by Eleana Gkogka
  5. Why Open-World Games Tend to Be So Popular by Allison Stalberg
  6. The power of Easter Eggs in tech by Silvia Venditti
  7. Measuring the Intangible. Usability Metrics by Masha Panchenko
  8. Usability Metrics by Jakob Nielsen 
  9. Design for Coherence, not Consistency by Carlos Yllobre
  10. Controls are Choices by Dan Saffer 
  11. Accelerators Allow Experts to Increase Efficiency by Aurora Harley
  12. Case against reality by Donald Hoffman 
]]>
Design & Society Articles
<![CDATA[How design is governance]]> https://www.doc.cc/articles/how-design-is-governance https://www.doc.cc/articles/how-design-is-governance Thu, 14 Mar 2024 03:48:08 GMT

How design is governance

Written by Amber Case

Art Direction by Manoel do Amaral

diagram elements of different shapes in different colors

I’m writing this post from a brand-new coffee shop in a busy area of Los Angeles. It seems like it is opening day, or soon after. From a distance, the seating and tables look charming, and the setting is well-lit by a large picture window at the entrance. It’s clearly designed with the intention of being an inviting place for people to meet up or hang out on their laptops doing work.

The problem is that the coffee shop looks great as long as you don’t try to actually use it.

Once I go in, the details start to fall apart. The wifi password is printed in a tiny font at the front counter. The space is clearly not designed for people working on their laptops, with most outlets located in places inaccessible from the cafe’s tables. The man across the room, also working on a laptop, has to make do with battery power. Luckily for me, after I discuss the situation with two women at a nearby table, they laugh and help me plug in my power cord across the room. I have to drag my table closer to them to even make that possible.

After a while, all of the customers in the shop instigate a small mutiny, agreeing that the cafe furniture is totally in the wrong place. We end up moving the chairs and tables to positions that better suit our purposes, then sit down and resume our meetings.

Design is governance

At a fundamental level, all design is governance. We encounter inconveniences like this coffee shop every day, both offline and in the apps we use. But it’s not enough to say it’s the result of bad design. It’s also a result of governance decisions made on behalf of the customers during the design process.

Michel Foucault talked about governance as structuring the field of action for others. Governance is the processes, systems, and principles through which a group, organization, or society is managed and controlled.

Design not only shapes how a product or service will be used, but also restricts or frustrates people’s existing or emergent choices, even when they’re not a user themselves. My neighbor at the cafe, who now has a Mac power cord snaked under her feet, can attest to that.

In a coffee shop, we’re lucky that we can move chairs around or talk with other customers. But when it comes to apps, most people cannot move buttons on interfaces. We’re stuck.

When we create designs, we’re basically defining what is possible or at least highly encouraged within the context of our products. We’re also defining what is discouraged.

To illustrate, let’s revisit this same cafe from a governance perspective.

outline illustration of coffee mugs with arrows around

Cafe design as case study in design as governance

A good cafe owner will understand the flow of customers, think it through themselves, demo it with friends, and figure out the furniture placement, either with this information in mind or with an experienced interior designer who has a deep understanding of flow. This particular cafe owner didn’t seem to understand their customer base at all. They made decisions on the customers’ behalf that didn’t match their needs.

Design is not only what is fixed but also what is left free to be modified. A big part of governance involves distinguishing between the rules and the rules for modifying the rules. This cafe example provides a physicalized incarnation of that difference — the ability to rearrange the tables! Over the course of the afternoon, I watched the customers rearrange the tables in a way that suited their needs. Thankfully they were lightweight!

This “re-arrangability” is also related to governance — as long as there are norms about it being okay to rearrange the tables. From a cybernetics perspective, this actually increases the variety of the system and empowers people to participate in governance. The moveability of the tables introduces a governance surface for the cafe clients.

With requisite variety and re-arrangability, opening days or pre-openings can be soft and also learning experiences for the cafe owner. A soft launch with friends and family provides opportunities to fix obvious issues. When there’s an expectation that things won’t be perfect, customers can change things up a bit to suit their needs.

The problem is that the cafe owner has a brittle idea of what a cafe will look like and isn’t working from an idea of how the cafe might feel, or how they can improve or disrupt actions over time. Without the requisite flexibility or understanding that they need to meet customers in the middle, a well-intended system can become stressful and annoying to the humans it serves, or unnecessarily uptight and restrictive.

A good cafe owner can quickly adjust to meet the needs of their customers in a kind of cybernetic dance that focuses on positive feedback, and spirals upward from there. (More on that dance from “Thinking in Systems” by Donella Meadows.)

Perhaps this particular cafe owner stubbornly pre-decided who their customers would be, assuming that they were mainly in the market for a pleasant place to chat or read, but not work on their laptop — thus making the cafe experience frustrating to the cafe’s true customers — the ones who did come there to work. Or, more likely, an analysis of cafe patron types didn’t cross their mind. They neglected to think through all of the scenarios. And therefore, their idea of a cafe governed everyone who came through it.

There exists a significant naivete and lack of experience with systems that are made up of both structural and behavioral elements. A coffee shop owner can control structure but they can only influence behavior. While structures can guide behavior — by defining what behaviors are possible (as well as some incentives around those behaviors) — the truth is that ultimately individuals always choose their own behaviors.

diagram elements in different colors

Design choices impact operations after the design is implemented and deployed. In other words, this particular design choice turned the baristas into technical support. If the original goal was to nudge customers into ordering an item before getting free Wi-Fi, a better implementation might be posting log-in details and instructions on a large sign above a table carrying cream, sugar, and other items for customers.

The reason I feel this was a lack of experience rather than intent is that the cafe’s design discouraged customers from using it as a workspace while also providing free wireless — but with a WiFi password displayed in a tiny font on a small sign near the cashier. This placed an added burden on employees to give out the password, and I saw multiple customers interrupt the staff’s drink-making to ask for WiFi instructions.

When customers have similar design problems with online/digital apps, we can’t simply move chairs and tables around, or give direct feedback to the owner.

The implicit feudalism of digital product design

In digital spaces, self-governance is enabled and circumscribed by the architecture of the platform on which people interact. This architecture determines the rules of engagement, and governs the interaction between separate user-generated institutions.

Design is governance, and in digital products, it’s often what media studies professor Nathan Schneider calls implicit feudalism.

Many digital products are actually platforms defining and enabling interactions between users. Even though designers are not making people do specific things, they’re deciding what kind of things can be done, and making many other actions impossible to perform.

Users might want (and sometimes need) to do things the platform doesn’t allow. In this way, the governance aspect is even more pronounced because the platform product determines what kinds of interactions its user can and cannot engage in and with each other. Little or no representation is allowed.

This implicit feudalism usually exists even when the designers themselves have the best intentions for their users in mind; often it’s simply not in their action space to do the consumer research they may want to do. There may have been some “public” input during early marketing and testing, but almost everything else about the design is locked in by then. Conflicting timelines and pressure from management to quickly release the product can create “solutions” that aren’t really thought through. That complexity of use then gets externalized to the end users — and likely to the frontline staff as well, with our cafe barista providing wifi support along with drinks.

Compare this to civil engineering, where the design must support all people and is very carefully considered. The processes for ongoing operations and maintenance of “the solution” are part of the solution.

Product requirements prioritize the needs of a subset of stakeholders, generally those associated with financial outcomes such as VC investors for a startup or executive management accountable to a board for a later-stage company.

In practice, this means that designers also tend to be subjects of this dictatorship. Many of them might want to make a useful app that does what it says it does, but their ability to act and design is in conflict with the incentive model and funding of the company they work for. In many cases, the design of online apps is scoped to maximize engagement, and is thus aimed at fulfilling the needs of advertisers, not consumers.

All this culminates in a consumer experience where little can be fundamentally changed. And it’s nigh impossible to seek redress with the app developer. When angered by a poorly designed app, customers are trapped in a space that reminds me of the title of a classic Harlan Ellison short story: “I Have No Mouth, and I Must Scream”.

There are many obvious problems with designing for engagement, but a less appreciated challenge is that a user’s attention is a finite resource that gets used up over the course of their day.

Attention is actually a rivalrous resource, leaving us less and less of it to budget for the things we actually want or need to focus on.

Despite all this, design frameworks can be scalable in ways that don’t necessitate dictatorship. If we really want to broadly change the field of action to fundamentally shift our experiences in the world, we need a design philosophy that can change what is optimized in the governance space.

diagram elements of different shapes in different colors

Better governance through calm design

When the architecture of a platform is designed with Calm Technology principles in mind, it enables users to self-govern in a way that is efficient and effective.

Flow

With Calm Tech, we optimize for a goal that isn’t just engagement. In the cafe example, we’re optimizing for the right information at the right time (Wifi password), a sense of flow in the cafe that allows people to order and then find a seat, and if they choose, settle in with a laptop and have a business meeting.

Pass-through

Calm Tech optimizes for a sense of pass-through, like a window designed to let you focus on the scene outside, not the window itself. Calm Tech tries to optimize in the same way the classic lightswitch is optimized. We don’t have to think about the switch until we need it, we don’t have to be a licensed electrician to use it, and we don’t have to understand the complexity behind the scenes that makes it work. And we certainly don’t need to install and set up a smartphone app to control it. Instead, we just switch it on/off. Calm technology requires the smallest possible amount of attention and doesn’t disrupt the user’s environment or current task.

In this way, users are able to organize their own social and political institutions without being overwhelmed by the technology that is supposed to enable them. The principles of Calm Technology can help create an environment that is conducive to self-governance.

diagram doodle

Creating calm technology together

Over the next few months, I’ll be announcing the launch of a Calm Technology standards body. The first of its kind — and a very frequently-requested resource — it will offer comprehensive information on how to build systems with Calm Tech principles in mind. My hope is that through design philosophy, we can fundamentally shift the way people and technology interrelate, and enhance humanity’s field of action in a pretty significant way.

But social good won’t be this body’s only goal; the business argument to Calm Tech is just as important. Instead of thinking about temporary value and short-term market share, we can start designing products and services that create value for their customers for a lifetime — who in turn reward them with word of mouth and a lifetime of loyalty.

People who take an enthusiastic interest in minor details of political policy assume everyone wants to talk about governance as much as they do, but in reality people want organizations to care about them and respond effectively to their needs. People just want things that work — and by “work”, they mean things that don’t stop working unexpectedly and, more importantly, that behave the way they expect and do what the product said they would do. Governance is important to have when it matters.

There also needs to be a way for customers to call for change — simply and seamlessly. My work with Superset supports the Calm Tech standards body’s higher goal.

This is why I’m working on expanding the role of Calm Technology in the world. I’m excited to work towards a culture that values pass-through over junk engagement and gives people a sense of human-centered time back, so they have a choice over what they want to engage with. With a few guidelines and case studies, I think we can make a huge difference in the world.

Much more soon. And if you’re interested in joining this standards body, please get in touch!

Thanks to the Metagovernance Project, Michael Zargham and James Au for discussions that led to the inspiration behind this article.

Works cited
  1. Governmentality by Michel Foucault on Wikipedia
  2. Variety (cybernetics) on Wikipedia
  3. Thinking in Systems by Donella Meadows
  4. The Implicit Feudalism by Nathan Schneider
]]>
Articles
<![CDATA[Spatial computing: What designers can learn from Italo Calvino’s book Invisible Cities]]> https://www.doc.cc/articles/spatial-computing-calvino https://www.doc.cc/articles/spatial-computing-calvino Thu, 14 Mar 2024 03:46:52 GMT

Spatial computing

What designers can learn from Italo Calvino's book Invisible Cities

We are still at the dawn of a new digital era: artificial intelligence, virtual worlds, augmented reality, blockchain, and other technical and societal changes are reframing the world we live in and creating new fictions.

However, from a spatial design perspective, these new virtual worlds have so far been lame and ordinary. Without the constraints of the physical world, how do we draft the urban blueprints of the metaverse? I believe planners and designers of these new worlds can find inspiration in Italo Calvino’s Invisible Cities, in which he revealed a poetic and mathematical approach to “urban planning” in imaginary worlds.

About Invisible Cities

”What is the city today, for us? I believe that I have written something like a last love poem addressed to the city, at a time when it is becoming increasingly difficult to live there. It looks, indeed, as if we are approaching a period of crisis in urban life; and Invisible Cities is like a dream born out of the heart of the unlivable cities we know.”

— Italo Calvino

Invisible Cities is a novel by Italian writer Italo Calvino, published in 1972. It is a short book, like a piece of jewellery made with fragments of dreamland. You can start reading it from any page, and each chapter is like a dream, short, bizarre, and traceless but with endless aftertaste. The book consists of brief prose poems, describing a series of verbal reports that the traveller Marco Polo makes to emperor Kublai Khan, telling fantastical stories about the cities that he’s visited.

Across eleven thematic groups, Marco describes a total of 55 fictitious cities, all bearing women’s names, giving rise to a reflection that holds for all cities in general. With poetic imagery and geometric rigor, Calvino intertwines various elements of the catalog — “Cities and Memory”, “Cities and Desire”, “Cities and Signs”, “Thin Cities”, and “Trading Cities”… Through these means, the cities form a huge, intricate labyrinth with countless intertwined alleys and intersections, and readers are caught in this vortex, unable to extricate themselves. By rearranging the elements, like combinations and permutations, you can construct dozens of cities with different characteristics.

Let’s travel through Calvino’s cities to discover the meanings of cities, and get inspired while building virtual spaces. 

Cities and desire

The appearance of a city is shaped by our desires.


“From there, after six days and seven nights, you arrive at Zobeide, the white city, well exposed to the moon, with streets wound about themselves as in a skein. They tell this tale of its foundation: men of various nations had an identical dream. They saw a woman running at night through an unknown city; she was seen from behind, with long hair, and she was naked. They dreamed of pursuing her. As they twisted and turned, each of them lost her.

After the dream they set out in search of that city; they never found it, but they found one another; they decided to build a city like the one in the dream. In laying out the streets, each followed the course of his pursuit; at the spot where they had lost the fugitive’s trail, they arranged spaces and walls differently from the dream, so she would be unable to escape again.”

- Zobeide, Cities and Desire 5

Zobeide, a city with streets as puzzling as a maze, is shaped by men’s desires. In order to pursue the naked woman in their dreams, they turned the city into an ugly trap. If we take a look at the layouts of some metaverse platforms, we can see the opposite pattern. They adopt a grid system with plots of land distributed on a horizontal plane. This allows for property to be easily parcelled and sold. The maps of metaverse platforms are also shaped by desires. With speculation and profit in mind, are we only going to see big and small boxes in the metaverse and in augmented reality? Are we going to keep creating artificial barriers on the Internet for a profit?

Cities are not necessarily built on (limited) “lands”

“If you choose to believe me, good. Now I will tell how Octavia, the spider-web city, is made. There is a precipice between two steep mountains: the city is over the void, bound to the two crests with ropes and chains and catwalks. You walk on the little wooden ties, careful not to set your foot in the open spaces, or you cling to the hempen strands. Below there is nothing for hundreds and hundreds of feet: a few clouds glide past; farther down you can glimpse the chasm’s bed.

This is the foundation of the city: a net which serves as passage and as support. All the rest, instead of rising up, is hung below: rope ladders, hammocks, houses made like sacks, clothes hangers, terraces like gondolas, skins of water, gas jets, spits, baskets on strings, dumb-waiters, showers, trapezes and rings for children’s games, cable cars, chandeliers, pots with trailing plants.

Suspended over the abyss, the life of Octavia’s inhabitants is less uncertain than in other cities. They know the net will last only so long.”

- Octavia, Thin Cities 5

Octavia is a city built on a spider web. In the book Invisible Cities, there is Armilla, a city made only of water pipes; there is Baucis, a city built on the clouds; there is Isaura, a city built upon deep vertical wells; there is Valdrada, a city of mirrors and reflections; there is Olinda, a microscopic city which gradually spreads out until one realizes that it is made up of lots and lots of concentric cities which are all expanding…

Archigram, an avant-garde architectural group formed in the early 60s, proposed cities that moved. Its Walking City imagines a future in which borders and boundaries are abandoned in favour of a nomadic lifestyle among groups of people worldwide. Inspired by NASA’s towering, mobile launch pads, hovercraft, and science fiction comics, Archigram envisioned parties of itinerant buildings that travel on land and sea. Like so many of Archigram’s projects, Walking City anticipated the fast-paced urban lifestyle of a technologically advanced society in which one need not be tied down to a permanent location. The structures are conceived to plug into utilities and information networks at different locations to support the needs and desires of people who work and play, travel and stay put, simultaneously. By means of this nomadic existence, different cultures and information are shared, creating a global information market that anticipates later Archigram projects, such as Instant City.

Cities are not necessarily built on “lands”. Why do we need “lands” in the metaverse, and on the Internet as a whole? The limited supply of virtual lands created a man-made scarcity that drives the price to soar, increasing the bar for the general public to participate in the creation of a metaverse. Beyond the more direct analogy of the virtual world with the one we physically inhabit, what's the meaning of a city, made of borders and public space, in a remote, interconnected world?

Cities and memory

City archetypes

“There is still one of which you never speak.”
Marco Polo bowed his head.
“Venice,” the Khan said.
Marco smiled. “What else do you believe I have been talking to you about?”
The emperor did not turn a hair. “And yet I have never heard you mention that name.”
And Polo said: “Every time I describe a city I am saying something about Venice.”

In one key exchange in the middle of the book, Kublai prods Polo to tell him of the one city he has never mentioned directly — his hometown. Polo’s response: “Every time I describe a city I am saying something about Venice.” Invisible Cities pays close attention to the ways in which travel and experiencing new things influence how a person sees the world, ultimately suggesting that a person’s perception of their surroundings is subjective and individualized, informed entirely by their memories, perspective, and experiences — in Marco’s case, his memories of Venice.²

“Thus there are psychoanalytical critics who have found the deep roots of the book in Marco Polo’s evocations of Venice, his native city, as a return to the first archetypes of the memory. ”³ If all the imaginary cities in the novel are just iterations of “Venice”, how do we design the archetypes of the metaverse? How can we expand our imagination so it's not just linked with the bias we already have? How would we design a world if we had a blank canvas?

Let’s go back to the real world for a second to discuss the archetype of a city. Manhattan, through the advent of elevators and the steel frame and the concentration they brought, was the archetype of the Metropolis, fusing a “culture of congestion.” In Las Vegas, the vacuum effect of the automobile and highway created “vast expansive texture: the mega texture of the commercial landscape,” making it the archetype of the American suburb.⁴

To create a new archetype in an imaginary world, you can always heavily modify and combine existing cities and urban configurations, following the example set by Half-Life 2. Famously, even Terry Pratchett’s Ankh-Morpork was based on Tallinn and Prague, with bits of 18th-century London, 19th-century Seattle, and early 20th-century New York thrown in.⁵

Familiarity and surrealism

In his 2012 book, Building Imaginary Worlds, media theorist Mark J.P. Wolf says that fictional worlds often “use Primary World [ie real world] defaults for many things, despite all the defaults they may reset”. In other words, because everything in the metaverse is built from scratch, technically you don’t actually have to reference the real world in your designs. But many people choose to do so anyway. They plump for familiar architectural characteristics in their virtual buildings because it makes it easier for participants to feel immersed.⁶

Are users satisfied with just the familiar objects in the metaverse? I believe users are looking for something beyond everyday life, something they can relate to but different, fresh, weird, and confusing…

Art movements like Surrealism provided us with a formula, composing real, ordinary objects in strange, unexpected ways, just like in a dream. For a moment, the mind is confused, and the power of surrealism is that it breaks the mind.
For example, take Dalí’s Persistence of Memory: a barren landscape covered in a bunch of melting clocks… and possibly a platypus…

Let’s try this dreamwork approach:

1. Manifest Content: Sure, we can certainly recognize the objects in this painting — real things like a tree, clock faces, a pocket watch, ants — and yet they are illustrated in very strange ways, making them feel at once real and unreal.

2. Latent Content: Beneath the surface of these objects lies the symbolism — the clocks seem to be melting, distorting their faces. Perhaps this could represent how memory becomes distorted over time.⁷

“The mind loves the unknown. It loves images whose meaning is unknown, since the meaning of the mind itself is unknown.”
— René Magritte

Cities and signs

Ways to convey meanings

Finally the journey leads to the city of Tamara. You penetrate it along streets thick with signboards jutting from the walls. The eye does not see things but images of things that mean other things: pincers point out the tooth-drawer’s house; a tankard, the tavern; halberds, the barracks; scales, the grocer’s.

Statues and shields depict lions, dolphins, towers, stars: a sign that something — who knows what? — has as its sign a lion or a dolphin or a tower or a star. Other signals warn of what is forbidden in a given place (to enter the alley with wagons, to urinate behind the kiosk, to fish with your pole from the bridge) and what is allowed (watering zebras, playing bowls, burning relatives’ corpses)

Your gaze scans the streets as if they were written pages: the city says everything you must think, makes you repeat her discourse, and while you believe you are visiting Tamara you are only recording the names with which she defines herself and all her parts.

However the city may really be, beneath this thick coating of signs, whatever it may contain or conceal, you leave Tamara without having discovered it.

- Tamara, Cities and Signs 1

Tamara reminds me of Las Vegas, a city built from scratch in the middle of the desert. Las Vegas was regarded as a “non-city” and as an outgrowth of a “strip”, along which were placed parking lots and singular frontages for gambling casinos, hotels, churches and bars. According to the book Learning from Las Vegas by Robert Venturi, Denise Scott Brown, and Steven Izenour in 1972, “Passing through Las Vegas is Route 91, the archetype of the commercial strip, the phenomenon at its purest and most intense. We believe a careful documentation and analysis of its physical form is as important to architects and urbanists today as were the studies of medieval Europe and ancient Rome and Greece to earlier generations. Such a study will help to define a new type of urban form emerging in America and Europe, radically different from that we have known”. With the rise of Las Vegas, we see the return of symbolism and the rise of pop culture references in architecture and city planning.

Las Vegas is a city built on desire and inclusiveness. Its most distinctive feature is the display and pursuit of wealth. In Las Vegas, the precepts of modern architecture like space and form are not the main consideration; the priority is communication — signs and symbols that convey information directly to potential customers to stimulate them to pursue wealth, consume, and gamble! Las Vegas has no historical roots; it is an inclusive city in which there is no value authority, no religious authority, no academic authority, and the freedom and will of individuals and groups are maximized. This sounds a lot like the metaverse, doesn’t it?

So far, many metaverse platforms are grid systems parked with signs (logos). How do we communicate with potential users and build a presence intelligently in this virgin territory? In order to convey meaning, architecture before the modern movement used decoration, often in a profound way. Modernists eschewed such ornament, relying only on form, volume, or structural elements, while buildings in Las Vegas rely on imagery and signs. Designers today still struggle with whether or how to use ornamentation in contemporary architecture. The metaverse could learn from the past or create a new system to communicate with its users.

Identity of “elsewhere”

If on arriving at Trude I had not read the city’s name written in big letters, I would have thought I was landing at the same airport from which I had taken off. The suburbs they drove me through were no different from the others, with the same little greenish and yellowish houses. Following the same signs, we swung around the same flower beds in the same squares. The downtown streets displayed goods, packages, signs that had not changed at all.

This was the first time I had come to Trude, but I already knew the hotel where I happened to be lodged; I had already heard and spoken my dialogues with the buyers and sellers of hardware; I had ended other days identically, looking through the same goblets at the same swaying navels.

Why come to Trude? I asked myself. And I already wanted to leave.

“You can resume your flight whenever you like,” they said to me, “but you will arrive at another Trude, absolutely the same, detail by detail. The world is covered by a sole Trude which does not begin and does not end. Only the name of the airport changes.”

— Trude, Continuous Cities 2

Trude, the undifferentiated city which is steadily covering the surface of the earth, is everywhere. Marco Polo’s travelogue was about the continents of the “elsewhere”; now there is no longer any “elsewhere” in the world, and the whole world is becoming more and more uniform (and for the worse).

All cities have a soul, a character, that is expressed in the symbolic dimension. Because the variety of their symbols and symbol carriers is abundant, very different configurations of symbols or symbolic patterns can be found in cities. These symbol carriers can be as diverse as layouts, statues, monuments, landmarks, street names, murals, graffiti, rituals, festivities, and so on.⁸ The identities of the metaverse can likewise be distinguished through symbol carriers that it is up to us to define: recognizable urban layouts, meaningful storytelling based on specific user groups, memorable social events, public engagement…

Trading not just NFTs

You do not come to Euphemia only to buy and sell, but also because at night, by the fires all around the market, seated on sacks or barrels or stretched out on piles of carpets, at each word that one man says — such as “wolf,” “sister,” “hidden treasure,” “battle,” “scabies,” “lovers” — the others tell, each one, his tale of wolves, sisters, treasures, scabies, lovers, battles.

And you know that in the long journey ahead of you, when to keep awake against the camel’s swaying or the junk’s rocking, you start summoning up your memories one by one, your wolf will have become another wolf, your sister a different sister, your battle other battles, on your return from Euphemia, the city where memory is traded at every solstice and at every equinox.

- Euphemia, Trading Cities 1

Euphemia, the city where memory is traded at every solstice and at every equinox, ultimately represents the essence of trading urban centres in connecting people and ideas, rather than just goods.

Euphemia reminds me of Paris in the 1920s. At the time it was the epicentre of culture, embracing extravagance, diversity, and creativity. Artists, poets, writers, musicians, and dancers flocked to Paris from all over the world. One of the messages of the movie Midnight in Paris is that almost everyone wants to escape from the present and needs to find a golden age of their own; virtual worlds could act as time machines to take us to any “golden age” in history, or to create our current golden age and facilitate rich social, artistic, and cultural collaborations.

Current metaverse platforms are mostly just showcases to display “goods”. “Seeing people use an environment for something, towards something, accomplishing a goal that impacts real life, I think that would be my dream,” said metaverse architect untitled,xyz, who envisions the metaverse as “a space where you can have a protest… where you can create art rather than just showcase it. I think that would be my hope, that these environments, it’s not just a static thing that then hosts something you visit once and you leave… But if it’s a true public space where you can go, like Union Square or Barclays Center, and have a protest or something, that has real-world implications, I think that’s something that can enhance everything.”

Structure, pattern, variety and wholeness

A mathematical approach to constructing a city

Kublai Khan had noticed that Marco Polo’s cities resembled one another, as if the passage from one to another involved not a journey but a change of elements. Now, from each city Marco described to him, the Great Khan’s mind set out on its own, and after dismantling the city piece by piece, he reconstructed it in other ways, substituting components, shifting them, inverting them.

“And yet I have constructed in my mind a model city from which all possible cities can be deduced,” Kublai said. “It contains everything corresponding to the norm. Since the cities that exist diverge in varying degree from the norm, I need only foresee the exceptions to the norm and calculate the most probable combinations.”

“I have also thought of a model city from which I deduce all the others,” Marco answered. “It is a city made only of exceptions, exclusions, incongruities, contradictions. If such a city is the most improbable, by reducing the number of abnormal elements, we increase the probability that the city really exists. So I have only to subtract exceptions from my model, and in whatever direction I proceed, I will arrive at one of the cities which, always as an exception, exist. But I cannot force my operation beyond a certain limit: I would achieve cities too probable to be real.”

The dialogue between Kublai and Polo reveals how they interpreted cities, and suggested a mathematical approach to constructing a city. The cities could be dismantled into elements and reconstructed through combination and permutation. The cities could also be calculated as probabilities diverged from the “norm”, by tweaking the “abnormal elements”.

What are the elements in the cities? In the classic urban design book The Image of the City, American urban theorist Kevin Lynch introduces and describes five elements — nodes, paths, districts, landmarks and edges — that give shape to the mental representation of the city. The pattern or structure of the interrelations among these five elements shapes the identity or character of a city.

Elements of the cities, Kevin Lynch

•••

The study of the interrelations of elements, and of how a ‘complex visual whole’ is organized, could be considered a “pattern”. According to Christopher Alexander, in A Pattern Language: Towns, Buildings, Construction, patterns describe a problem and then offer a solution. Alexander claims that ordinary people, not only professionals, can use the pattern language approach to successfully solve very large, complex design problems. In software, Alexander is regarded as the father of the pattern language movement.

“A person with a pattern language does not need to be an “expert”. The expertise is in the language. He/she can contribute to planning and design because they know relevant patterns, how to combine them, and how the particular piece fits into the larger whole.” — Christopher Alexander, A Pattern Language

Design systems for the metaverse

Kublai was a keen chess player; following Marco’s movements, he observed that certain pieces implied or excluded the vicinity of other pieces and were shifted along certain lines. Ignoring the objects’ variety of form, he could grasp the system of arranging one with respect to the others on the majolica floor. He thought: “If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”

With elements (components) and patterns in mind, many tech companies have come up with design systems to facilitate design and development prototyping for their digital products. A design system is a collection of reusable components, guided by clear standards, that can be assembled to build any number of applications. It’s about reusability, product identity, guidelines, and best practices.

Design systems like Material Design are great examples for the 2D internet: designers and developers can use the predefined components so they don’t need to start from scratch while building applications for different devices. When it comes to the metaverse, how might we envision design systems for the 3D internet?
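To make “reusable components, guided by clear standards” concrete, here is a minimal sketch in TypeScript. The token values and the button component are hypothetical, invented purely for illustration; they are not taken from Material Design or any real design system.

// Design tokens: the shared standards every component draws from.
const tokens = {
  color: { primary: "#1a73e8", surface: "#ffffff", text: "#202124" },
  spacing: { sm: 8, md: 16, lg: 24 },
  radius: { button: 8 },
} as const;

// A reusable component described as data: any application can assemble it
// without re-deciding colors, spacing, or corner radii from scratch.
interface ButtonSpec {
  label: string;
  background: string;
  padding: number;
  cornerRadius: number;
}

function makePrimaryButton(label: string): ButtonSpec {
  return {
    label,
    background: tokens.color.primary,
    padding: tokens.spacing.md,
    cornerRadius: tokens.radius.button,
  };
}

// Two different products reuse the same component and stay visually consistent.
console.log(makePrimaryButton("Sign up"));
console.log(makePrimaryButton("Buy now"));

The point is not the specific values but the mechanism: components read from shared standards, so consistency becomes a property of the system rather than of each individual screen.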

The game industry has developed many techniques for worldbuilding, creating the rules and structures of an imaginary world. Townscaper, one of my favourite indie games, is a great example of a design system: users can click randomly and generate beautiful cities based on the constraints each tile carries, much like solving a Sudoku puzzle.

The Matrix Awakens, an open-world demo developed by Epic Games, generated a city of 15.79 square kilometres, larger than downtown Los Angeles, with 7,000 buildings and many other assets, using procedural techniques such as shape grammars and wave function collapse.
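As a rough illustration of the constraint-solving idea behind Townscaper-style tools and wave function collapse, here is a toy sketch in TypeScript; it is a drastically simplified version of the idea, not the algorithm either of them actually ships. The tile names and adjacency rules are invented for the example, and a real implementation would propagate constraints further and handle contradictions.

type Tile = "water" | "grass" | "house";

// Which tiles may sit next to each other (symmetric, made-up rules).
const allowed: Record<Tile, Tile[]> = {
  water: ["water", "grass"],
  grass: ["water", "grass", "house"],
  house: ["grass", "house"],
};

const SIZE = 4;
// Every cell starts with all tiles still possible (a "superposition").
const grid: Tile[][][] = Array.from({ length: SIZE }, () =>
  Array.from({ length: SIZE }, () => ["water", "grass", "house"] as Tile[])
);

function neighbours(x: number, y: number): [number, number][] {
  const candidates: [number, number][] = [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]];
  return candidates.filter(([nx, ny]) => nx >= 0 && ny >= 0 && nx < SIZE && ny < SIZE);
}

// Collapse one cell to a single tile, then narrow its neighbours' options.
function collapse(x: number, y: number): void {
  const options = grid[y][x];
  const pick = options[Math.floor(Math.random() * options.length)];
  grid[y][x] = [pick];
  for (const [nx, ny] of neighbours(x, y)) {
    grid[ny][nx] = grid[ny][nx].filter((t) => allowed[pick].includes(t));
  }
}

// Repeatedly collapse the most constrained undecided cell (the lowest "entropy").
function step(): boolean {
  let best: [number, number] | null = null;
  for (let y = 0; y < SIZE; y++) {
    for (let x = 0; x < SIZE; x++) {
      const n = grid[y][x].length;
      if (n > 1 && (best === null || n < grid[best[1]][best[0]].length)) best = [x, y];
    }
  }
  if (best === null) return false;
  collapse(best[0], best[1]);
  return true;
}

while (step()) {}
console.log(grid.map((row) => row.map((cell) => cell[0]).join(" ")).join("\n"));

The rules live in the adjacency table rather than in any individual placement, which is exactly what lets a player click randomly and still end up with a coherent town.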

Design systems for the metaverse are still a new topic. Given the richness of 3D worlds, we will likely see a variety of systems with different themes, for different use cases.

Thanks for reading this far. Let’s revitalize the missing poetic life in the new worlds we are building, to dream the impossible dream, to reach the unreachable star.

“Cities, like dreams, are made of desires and fears, even if the thread of their discourse is secret, their rules are absurd, their perspectives deceitful, and everything conceals something else.”
— Italo Calvino
]]>
Articles Design & Culture
<![CDATA[Accepting design]]> https://www.doc.cc/articles/accepting-design https://www.doc.cc/articles/accepting-design Thu, 14 Mar 2024 03:45:15 GMT

Accepting design

How can we redefine our expectations to match our purpose?

Written by Caio Braga, Fabricio Teixeira

Art Direction by Manoel do Amaral

text reads accepting design surrounded by illustration of tools and interface elements from digital tools

Of promised lands and broken hearts

Two decades ago, designers from all corners of the world were captivated by a simple, unifying vision: the internet’s immense potential to connect individuals and to provide access to information, entertainment, and services to everyone. It all felt like a higher calling; all stars aligned to create a limitless universe for exploration. In the years that followed, many of us embraced this visionary outlook and decided to become the designers of these new digital platforms. 

It didn’t feel like we were joining an industry, a career, or a job, for that matter.

It felt like we were joining something bigger than ourselves.

collage of photos from famous tech leaders and CEOs

We looked up to the pioneers who embodied the epitome of digital leadership. We saw ourselves in them, projecting our aspirations onto those individuals who had successfully navigated the uncertain terrain of innovation and emerged as leaders. We sought to emulate their ingenuity, creativity, and entrepreneurial spirit, believing that by doing so, we could also help shape the future of the internet and “leave a little dent in the universe.”

“At Apple, people are putting in 18-hour days. We attract a different type of person—a person who doesn’t want to wait five or ten years to have someone take a giant risk on him or her. Someone who really wants to get in a little over his head and make a little dent in the universe.”
—Steve Jobs talking about the type of talent he was looking for at Apple

None of those big tech idols were designers by trade, but we took the bait anyway: we fetishized the role of design (and the designer) in all this.

Some of us did put in 18-hour days to make that happen.

And the motto of moving fast and breaking things, coined by Facebook but followed by every other company in that same era… well… did end up breaking things. Social networks, designed for “engagement,” have transformed from global connectors to platforms rife with misinformation and shallow content. Digital banks make grand promises of “bringing access to financial services to excluded populations” but employ deceptive UX tactics to convert more users.

At some point, those same designers from all corners of the world finally came to the unpleasant realization that design, in corporate settings, is just another investment to grow businesses.

photo of Mark Zuckerberg testifying in court with the motto move fast and break things next to it

Mark Zuckerberg testifying in court

Are designers even able to influence anything?

As our romanticized view of the industry started to fade, these types of questions became inevitable. Isn’t money ultimately what informs the decisions that will shape these products? Although designers have some degree of influence over the user experience, we understand that our power is limited to the how, not necessarily the what, why, or how much.

Think about the addictive nature of products like TikTok or the negative impact of Instagram on teenage self-esteem. While designers can certainly create features to mitigate some of these issues, the question remains: can design fundamentally alter the underlying incentives that drive the development of these platforms?

Some argue that this influence is gradually diminishing, especially now that larger tech companies are entering what Tim Carmody defines as their “decline phase.” The term refers to a stage in the lifecycle of big tech when innovation starts to stagnate or diminish, often linked to these companies’ inability to adapt to changes in the market, as well as the inherent challenges of scaling beyond their original vision.

Tangibly, this decline leads to a shift in priorities, including within design teams. For a long time, investing in design meant investing in R&D and innovation, terms that resonated well with investors and venture capitalists while the market was expanding and making new, bold bets on this promising future. But as these companies switched their focus to profitability and optimization, design moved further away from the core of their business.

Maybe we've been overestimating our power as designers all this time.

Or maybe our power lies somewhere else.

Designers who design, bakers who bake

“With the advent of industrialization, furniture-making became a factory-line process. This meant that more furniture could be produced for less, making it more accessible to a wider range of people. Instead of furniture makers designing chairs, a smaller number of industrial designers working for furniture brands like Knoll, Habitat, and later IKEA created the designs, and large factories populated by lower-skilled workers produced them en masse. Although there is still a need for furniture makers, especially at the high end, the number of local furniture makers has dropped significantly. The same fate may await the design industry.” (Andy Budd, The end of design as we know it?)

A lot of product designers out there share an appreciation for other hands-on, craft-based activities. Some even dream of switching careers to become furniture makers, leather artists, pottery experts, or pâtissiers: waking up early to do what they enjoy, putting in the time to master the craft, and learning to appreciate the process as much as the output. Then starting over every single day. And being ok with that idea.

Why can't we be ok when it comes to digital product design?

As designers become more cynical about their impact and the future of Tech, there lies an excellent opportunity to reflect and understand what really motivates us: What do we really appreciate about design? Why are we here? Why now?

It was easier to ignore these questions when designers were growing in their careers and unlocking opportunities everywhere. For many with the privilege of working in Big Tech over the past decade, it was a matter of taking a deep breath while fantasizing about a different career without the stress, politics, and Kafkaesque ambiguity of big corporations.

When all that Big Tech energy starts to dissipate, we find ourselves standing before the mirror, heart pounding, confronted with the only question left for us: Are we willing to accept design?

Other types of designers have fully embraced their discipline for what it is. Good furniture designers will design chairs that are both beautiful and usable—perfectly balancing form and function. Great designers will design chairs that are unique, memorable, and work particularly well for certain environments. Thoughtful designers will consider materials, sustainability, ergonomics, and many other factors that will make a positive impact beyond the chair itself.

None of these designers have ambitions to change the world with just a chair — but they can definitely find joy in that process. How might we redefine our expectations to make sure the work we do aligns with our purpose?

01

Finding joy in our craft

There has been a long-standing disdain for manual work and craft in our society, perpetuated by the bourgeois idea that the only valuable form of capital is intellectual. But isn’t craft what brings us the most joy as designers?

Think about that feeling when the perfect color or font combination just feels right. Or when that first draft of an idea comes together into a prototype. When we watch users interact with our design and see it having a positive impact on their experience. Solving a particularly challenging design problem and feeling a sense of satisfaction. Discovering a new trick that will make our workflow more efficient. Collaborating with other designers, feeling inspired by their creativity. Finding inspiration from other places and industries and incorporating those references into our work.

Are we able to enjoy our process as much as the outcome of our work?

illustration of a hammer

02

Impact is impact — at any scale

You don’t need to design a product for 2 billion people for it to be a successful product or for you to be a successful designer. Working in big corporations is not the only way. 

In fact, there are more design opportunities in smaller, underserved sectors. You can strive for high impact on a local business. Or with a niche audience. You can specialize in designing a pretty specific type of experience. Aim to meaningfully change one person’s life versus creating a suboptimal product that tries to serve every human on Earth.

How can we start thinking about our career beyond Big Tech and our value beyond big numbers—and focus on becoming a better designer every day?

illustration of a triangle ruler

03

It’s ok to work for money

Money is a necessary part of our lives: we want to pursue our passions, but we also need to survive, support our families, and meet our needs. Charging for our time and work product doesn’t make us greedy or selfish. It’s a fair exchange of value.

With very few exceptions, every company, brand, or organization’s goals revolve around money — and it’s likely that your design will help them reach those goals. Our personal goals, however, might go beyond making money. Deliver great work to make an impact for your employer, but do not expect your job to be the only thing that fulfills you in life.

Define who you want to be as a person, set the proper boundaries with your work, and don’t forget to send them the invoice.

illustration of a c clamp

04

Accepting change

Humans will always try to automate repetitive tasks. Think about mass manufacturing before assembly lines, or accounting before spreadsheet software. Automation will always change what’s expected from jobs. 

Design is not any different. The concept of being a designer (and the types of tasks we do every day) changes every couple of years. That has always been the case. With AI, our job is changing again—hopefully, renaming layers will be a thing of the past.

A curious mindset is a big part of being a designer. This feels like a great time to learn new things as we define what's next.

illustration of a plier

05

People are the real legacy

We are designing interfaces that last a few years in their current form; if all stars align, they might last a few more. We’re in such a nascent industry that our concept of “long-term” is quite open to interpretation. 

Projects change, products get redesigned, jobs come and go. But people, and the relationships they build, stay. The ideas we share, the values we spread, or how much we help others learn and grow in their craft — that is the legacy that goes beyond the products we put out. 

We should be paying attention to our values and how they influence our colleagues as much as we do to our outputs and how they influence our end-users. 

illustration of a chisel

As designers revisit the role we want to play in the world and question where we can go from here as a profession, the answer might be more obvious than we all want it to be: learning to separate the craft of making digital products from the industry we’re in.

]]>
Articles Design & Industry
<![CDATA[Can a bridge be unethical?]]> https://www.doc.cc/articles/can-a-bridge-be-unethical https://www.doc.cc/articles/can-a-bridge-be-unethical Thu, 14 Mar 2024 03:44:13 GMT

Can a bridge be unethical?

Written by James Reith

Art Direction by Manoel do Amaral

ethic the letter h in the shape of a bridge

During a press conference on 8 November 2021, American transport secretary Pete Buttigieg alluded to the story of Robert Moses and his racist overpass.

It goes a little something like this:

Moses, the so-called “master builder” of mid-20th century New York, was very concerned that white-only spaces in America (like beaches) might eventually be opened to all people. When designing the Grand Central Parkway in the late 1920s — a road intended, in part, to give New Yorkers “an easy way to reach Jones Beach” — Moses purposely made the bridges over the parkway too low for buses to travel under. At the time of the parkway’s construction, Black Americans predominantly relied on public transport. This made it very difficult for them to access Jones Beach.

“Legislation can always be changed. It’s very hard to tear down a bridge once it’s up.”
Sid Shapiro, Moses’ former aide

photo of low height Moses bridge

Source: Bloomberg

In response to Buttigieg’s press conference, “some right-leaning Twitter users immediately cried foul” about the Moses story, as The Washington Post put it. Originating in Robert Caro’s 1974 Pulitzer-winning Moses biography The Power Broker, this supposed “myth” or “urban legend” has, apparently, since been debunked.

Some pundits, such as Fox News’ Tucker Carlson, took this a step further and responded with the broader, philosophical claim that “inanimate objects, like roads, can’t be racist.” As always, the general and the particular get jumbled: as if debunking this story also debunks structural racism as a concept. The Washington Post article shows that the Moses story hasn’t been debunked outright (it’s just tricky to prove). But sociologist Ruha Benjamin’s Race After Technology is full of examples of structural racism - from AI beauty pageant judges preferring lighter skin to predictive policing algorithms using racial profiling - alongside the Moses story. It is “a narrative tool,” in Benjamin’s words; one that has taken on an almost fabular role within design.

And if you’re thinking “James, this is getting a bit political,” you’re right. Because within design theory that’s exactly what the Moses story has come to illustrate: that designs have ethical and political properties independent of their designers; that designs “materialise morality,” as philosopher Peter-Paul Verbeek puts it. 

“...the artifacts that designers make, and the practices and processes they engage in, are not politically and ethically neutral.”
— Ahmed Ansari 

abstract illustration of bridge

Intent and consequence
 

Interstate 20 in the US state of Georgia was built to be racist, but failed. “In Atlanta, the intent to segregate was crystal clear,” writes historian Kevin M. Kruse. The highway was plotted through 1950s Atlanta as a “boundary between the white and negro communities,” according to then mayor William Hartsfield. It didn’t work. It just created the dreadful traffic jams Atlantans still suffer with to this day. Racists, however, are rarely as candid as Hartsfield.

As they stand today, Moses’ bridges are low. Lower than other bridges built in the same era. Why they are low, however, is contested. Some Moses apologists argue that his bridges are low due to road widening and other modifications that were not part of his original design. Others acknowledge the low bridges but, instead, shift his motivation from racism to classism or claim his bridges were low for aesthetic purposes, so that they conformed to “a specific politics of nature.” These critiques, however, focus on Moses’ original intent and vision; it is a discourse based around him and not the effect of his designs.

We will never definitively know Moses’ motivations. As philosopher Kate Manne notes in Down Girl: The Logic of Misogyny, prejudicial attitudes are “very difficult to diagnose” and often “epistemically inaccessible” to other people. The impossibility of proving what lies in the hearts and minds of others is a silencing strategy: an endless, unresolvable debate that looks like dialogue while protecting prejudice. 

This isn’t to say that intention doesn’t matter at all: so much of modern design is about iterating until execution matches aim. Intending to produce a prejudicial product, such as Interstate 20, dramatically increases the likelihood of it existing. But these aims are the concern — or, rather, responsibility — of the designer. As a user, as a citizen, it doesn’t matter what you meant but what you made.

abstract illustration of bridge

The moral of the story, or; the problem with trolley problems

You are standing by a train track. Five people are tied to it and a train is approaching. There is a switch. You could pull it and divert the train. But a single person is tied to the diversion track. What do you do?

This is the infamous trolley problem from moral philosophy. It typifies how ethics is usually understood: as an answer to the question “what should I do?” But that ‘I’ is doing an awful lot of work in the trolley problem.

For a moment, think about this not as a moral but a design dilemma. Why was there no fail-safe mechanism? What kind of security system lets someone tie six people to diverging train tracks? This sounds like pedantry, and obviously goes against the spirit of the trolley problem. But this framing of ethics as individual choice has real influence on how we think about ethics in design, one that ignores how the very creation of such a choice is itself a moral failure. (‘Should a self-driving car kill the baby or the Grandma?’ is a very real article from the revered MIT Technology Review.)

“This is an injunction to design and re-design situations in such a way as to prevent moral dilemmas”
— Jeroen van den Hoven

Moral dilemmas like the trolley problem ignore:

• choice architecture
• design histories
• the responsibility of designers

The choice architecture of the trolley problem is a binary choice between killing one person and killing five. The choice architecture of Moses’ low-hanging overpass is, in fact, a denial of choice in all but name. By making something so inaccessible that few would choose to do it, you can still blame people for making that choice. Social scientist Langdon Winner says that Moses’ bridges embody “a systematic social inequality, a way of engineering relationships among people that, after a time, becomes just another part of the landscape.” That we do not recognise their influence on our choices is part of their power.

“Indeed, many of the most important examples of technologies that have political consequences are those that transcend the simple categories of ‘intended’ and ‘unintended’ altogether.”
— Langdon Winner

It’s a tidy story: racist man builds racist bridge. But most discriminatory design is unintended. The inaccessibility of much architecture and technology, for Winner, “arose more from long-standing neglect than from anyone’s active intention.” It’s doubtful that the Google engineers who produced a racist AI did so intentionally. But they still made it. They failed in their responsibility to prevent racism from occurring in their work. It is our duty as technologists to take responsibility for the values we inscribe in our designs; to make doing so a routine, boring part of the design process. Like closing JIRA tickets.

So can a bridge discriminate?

Debating whether Moses’ bridges discriminate is an unhelpful distraction. But can a bridge be discriminatory? Absolutely.

The central thesis of The Power Broker is that Moses, an unelected city planner, wielded more power in New York than democratically elected officials. Our liberties and choices are increasingly mediated through technical systems which we, as designers, help create and maintain. This too is power: we need to wield it knowingly.

“We want to believe not just that designers are neutral, but that the discipline of design itself is neutral. We want to believe that there’s something about the processes of design that produces neutrality. This presumption of neutrality furthers the colonial lens.”
— Shana Agid 

abstract illustration of bridge

But that’s the problem: designs have ethical consequences, yet ethics are not a part of the design process. At best, ethics are a responsibility shouldered by individual designers; at worst, they aren’t considered at all. And if designs can have unintended ethical consequences, then we need to make ethics an intentional part of our process. Not just to avoid unintended harm, but also to do good. Design has the power to reinforce hierarchies, yes. But that also means, as Ruha Benjamin points out, it has the power to subvert them too.

What is to be done?

abstract illustration of bridge

Draw out assumptions in our methods


“Design proceeds by way of critique.”
– Cameron Tonkinwise



In his article The Interaction Design Public Intellectual, design theorist and critic Cameron Tonkinwise notes a problem at the heart of design practice. Designers are “reflective practitioners” who critique the work of fellow designers, internalise that critical dialogue and attempt to predict design outcomes before they happen. But they also rely on evidence to validate their designs. A tension occurs, however, when designers do not apply that critical reflection to the very methods they use to validate their designs. “Without higher-level critique of overall directions,” says Tonkinwise, “the reflective practitioner is at risk of validating, through tight action research cycles, a response to a situation that works but is heedless of wider consequences.” As an example, he considers a wearable health device. Conventional UX design might produce a ‘delightful’ product that meets user needs, but never even considers “the ecological impact of the e-waste that all wearables become at the end of their use life.” 

Or consider the Decolonise Design movement, which argues that design is not neutral: instead it masks Anglocentric thinking as neutral. This not only silences non-Western ways of thinking but, as design methods spread throughout the world, risks damaging non-Western cultures by importing Western ideas as universal truths; covert colonialism. This very article, with its deference to Western philosophy and the classical essay, risks doing the same. That isn’t to say that Western thought doesn’t have value. But it is a tradition, not the tradition, and working within it is a choice, not a necessity. One with its own benefits and limitations. Acknowledging the limitations of our methods opens us up to new possibilities. 

abstract illustration of bridge

Bring ethics into the design process

Just because something is a certain way doesn’t mean it ought to be. There is a gap between the two. Philosophers call this the is-ought problem, and sometimes broaden it to the fact-value distinction. It means no decision is made on facts alone. That’s why when you present the same evidence to two different people they may make different decisions; they are interpreting evidence through their, often unspoken, values. In design, we love to make evidence-based decisions. But if the fact-value distinction is true, this means only half of our decision-making process is documented or articulated.

Value Sensitive Design is a complementary method to user-centred design. It makes values a conscious part of the design process and provides a host of techniques to help you do so, from value heuristics to mapping value tensions (like the common conflict between security and privacy). It has been criticised for being “descriptive rather than normative,” as design theorist Sasha Costanza-Chock says: “it urges designers to be intentional about encoding values in designed systems but does not propose a particular set of values.” I too thought of this as a major flaw; a paradoxical symptom of how neutrality is so fetishised within technology that even a design method based around values won’t tell you what values to use. But I also think this criticism harkens back to a much older sense of how ethics operate, one at odds with both contemporary philosophy and design.

abstract illustration of bridge

Bring design into ethics


“The ethical is both the tracing of problems and the inventiveness that it engenders…” 
— Levi R. Bryant



There is a tension between design and morality, at least in how they are traditionally understood. Design is process-oriented and evidence-based; morality is rule-oriented and value-based. By this logic, morality will simply operate as a set of absolute limits on design. But just as Tonkinwise argues that this simplistic characterisation of design is wrong, many modern thinkers would not recognise this characterisation of ethics. Maria Puig de la Bellacasa, for example, describes ethics as “a hands-on, ongoing process,” a “thick, impure, involvement in a world where the question of how to care needs to be posed.” And Jeroen van den Hoven acknowledges that traditional, rule-based morals haven’t made the world any better. Which is why he and other applied moral philosophers are turning to design: a field comfortable navigating the messy ambiguity that Bellacasa describes. 

Designer Cennydd Bowles claims “ethical issues are like design briefs,” in that there is rarely a single answer to an ethical question. I prefer to think of them as problem statements, for recognising something as a problem is itself an ethical act. (To dismantle the patriarchy, for example, you first need to see it as a problem.)  Ethics then is, perhaps, less about finding the right principle and more about defining the right problems. 

abstract illustration of bridge

Embrace uncertainty

In articles like this, designers often want practical tips, models or methods they can apply to their own work. Unfortunately, Ethics doesn’t quite work like that. “Ethics and politics,” as Jacques Derrida put it, “start with undecidability.” If moral problems could be solved by following a simple process, they would, by definition, not be moral problems. For Derrida, Ethics are not a matter of knowledge but responsibility. I hope in this article to have outlined ways of recognising the ethical responsibility we all hold as designers, and some methods for considering and articulating it.

As a subject, Ethics is as necessary as it is infuriating. The philosopher Ludwig Wittgenstein once claimed that if anyone “could write a book on Ethics which really was a book on Ethics, this book would, with an explosion, destroy all other books in the world.” Critics have variously interpreted this as a comment on the futility of ethics, or how belief in absolute moral rules can hinder other kinds of knowledge. But too little attention has been paid to the medium in the metaphor: writing. Prescribing moral rules may be a dead-end, but Wittgenstein never said anything about design. 

]]>
Articles Design & Society
<![CDATA[Inclusivity should not be W.E.I.R.D: Sparkling long-lasting inclusive technology]]> https://www.doc.cc/articles/inclusivity-should-not-be-weird https://www.doc.cc/articles/inclusivity-should-not-be-weird Thu, 14 Mar 2024 03:43:09 GMT

Inclusivity should not be W.E.I.R.D: Sparkling long-lasting inclusive technology

Written by Luis Berumen

Art Direction by Manoel do Amaral

arrows in different sizes together

Technology replicates the biases of its creators.

Technology shines bright lights and casts long shadows. When this spotlight is used correctly, it augments our senses, expands our minds and connects us in pulsating collectives of endless possibilities of what we can achieve.

It is easy to be blinded by the flickering lights of the next innovation. A new version is released every couple of months, better and brighter than the last one, giving us a sense of progress and comfort in small agile increments. Why question our processes if we have come so far in such a short period?

The spectrum of what technology can illuminate is biased by its creators. Historically, some communities and individuals have been relegated to the second plane of existence, the shadow of technology. They go about their lives facing unheard challenges, knowing the available options were not exactly created for them: from non-inclusive terms and a lack of representation in imagery to straight-up tools and systems that don’t work for specific groups.

In a world where domination is signalled by user adoption, technology still needs to deliver solutions for everyone. Weird, isn't it?

a cursor being used as a bridge between 2 cliffs

Historically, we have listened to and favored only a few: people who are Western, Educated, and from Industrialized, Rich, and Democratic countries.

When things get W.E.I.R.D.

It is estimated that 60% of psychology and social studies are conducted on university campuses, and that their participants are Western, Educated, and from Industrialized, Rich, and Democratic countries (W.E.I.R.D.).

Before jumping to conclusions, we must understand that a W.E.I.R.D. population is easy to target in Universities and Research facilities. Their time is abundant, compensation is cheap (if any), and most of the time, the test is conducted by facilitators with similar backgrounds only a few buildings away. (source)

In 2008, a study of over 4,000 articles published over 20 years found that around 95% of behavioral science research subjects come from the U.S.A., Europe, and English-speaking countries like Australia. These countries only comprise about 12% of the world's population, but they are used as a reference by the rest of the world (source).

Although this bias has been well documented for over a decade, we still use most studies without hesitancy about their testing methods and the implementation of their findings. This is very important because the practice of User Experience Design and Research has been heavily inspired by Psychology and Sociology.

a pattern made of cursors

It is common to see W.E.I.R.D. companies build W.E.I.R.D. products

People in tech are W.E.I.R.D.

The technology we consume, and that regulates and enables our lives, is created by a small set of people, and its development cycle carries similar biases that go undetected by its testing process. That is why it is common to see W.E.I.R.D. companies build W.E.I.R.D. products.

In 2021, the H.R. Tech Group surveyed 171 tech employers in B.C. about the gender identity of their workforce (source).

• 66.6% self-identify as a man

• 33.2% self-identify as a woman

• 0.2% self-identify as non-binary or other

• Women fill 31% of executive-level roles.

At the same time, despite big claims from well-known companies like Apple, I.B.M., Google, and Tesla about ditching the university degree as a requirement, the stats tell a different story. In real life, most companies still require one. Most active software developers worldwide have a bachelor’s (41%) or master’s degree (21%), and even if the degree is from a different field (a journalist working as a designer, an economist working as a developer), the barrier of higher education remains.

Chart showing that most software developers have at least a bachelor degree

Level of education for software developers, worldwide, 2022 (source)

Funnily enough, the word “university” comes from the Latin “universus,” meaning “entire” or “the whole.” Still, this high level of education and experience is not universal, and most of the learning material is focused on specialization and standardization. In 2018, Canada ranked first as the most educated country in the world (source), yet only 30% of Canadian adults hold a university degree. Universities are not universal.

Beyond our jobs, if you look at the authors and publishers on your bookshelf, you will find a similar pattern. Keep looking around. From Design Leaders to Processes & Methods, they all share identical influences and backgrounds.

We got comfortable with W.E.I.R.D., but it should not be the norm.

three stacks of cursors with different sizes

What if the processes we use to create tech products are far from what the user needs?

W.E.I.R.D. product development does not work.

This "weirdness" of individuals affects the product development cycle. According to Clayton Christensen, over 30,000 new products are introduced to store shelves annually, and 95 percent fail. On a similar scale, more than two-thirds of start-ups fail. Is the failure rate normal, or are we just bad at playing this game?

If the product development practice were a baseball player, it would have a batting average of .050 (a 5 percent success rate). To get an idea, the worst professional baseball player in history, Bill Bergen (1901–1911), had a recorded average of .170.

It sounds like the odds of a product hitting a home run are almost none, yet new start-ups are created daily, big companies kick-start new initiatives, and new product teams get together and swing for the fences. Never questioning our processes seems risky. How often are users placed on the other side of the fence, while many communities and demographics are not invited to the stadium?

I have been in situations where a team in Eastern Europe made decisions that would impact users in India based on information gathered from German users. It was easy to predict that the product would end up failing. The hardest part is understanding that the people involved were competent professionals who made decisions based on the information and experience available, without considering what they lacked. They never had anybody in the room questioning whether designing for Germans was a stylistic choice based on motivations other than the needs of users on a completely different continent.

Imagine the room where a pen exclusively designed for women was conceived. Spoiler alert: they said women were involved. (Learn more)

product shot bic for her with stereotypical colors

"Bic for Her," when a pen leaves people speechless.

"Bic for her" sounds like the pitch for an S.N.L. sketch. Check the amazon reviews are 10x funnier. Another stylistic choice with no fundament on user needs.

cursors point at a white dot

You cannot help the average user because the average does not exist.

The Greater Good Theory needs to be better.

Investing in products for the average user is a fallacy based on the theory of the greatest good. Investing most of the money in a way that most users benefit is different than striving for the middle ground or the commonplace.

We must question initiatives for digital products and services that aim only at the middle of a population’s distribution. We can play it safe: most people are not early adopters, and that market most likely already has existing solutions we can learn from and iterate on.

standard deviation chart example

Using a standard deviation for decision-making would consciously leave out marginalized people.

Most impactful products go for the long tail of the curve, solving problems for 0.1% at the end of the spectrum. That small population is the critical driver of value and innovation. They used to be the first ones to use zip belts in cars, ask for closed-captioned tv shows or smile at band-aids that match their skin color.

Inclusiveness is the fundamental value proposition of a Blue Ocean Strategy, which is based on achieving high product differentiation at low cost. Its principles are the following:

• Create uncontested market space

• Make the competition irrelevant for a vital segment of the population or user archetype.

• Create and capture new demand

• Break the value-cost trade-off

• Align the whole system of differentiation and low cost.

The opposite of a Blue Ocean Strategy is a Red Ocean Strategy, where large companies fight for hegemony in a highly competitive market with very mature products. For example, Microsoft launched Teams and killed off its own product, Skype for Business.

We have the technology to level the playing field and create shared spaces.

Focusing on co-creation as a strategy

Implementing inclusive co-creation as a strategy might be an excellent way to compensate for biases, but it has its limitations.

You can work hard to bring a diverse pool of users to test your product and give feedback. Sadly, that feedback still needs to be prioritized, funded, and implemented by people who might not fully understand it, and who are often too far from the user to fully empathize with it.

Real impact looks at the full organizational chart. Co-creation should be part of a more extensive set of initiatives focused on bringing in new voices, broadening perspectives, and shedding light on areas traditionally hidden from roles that do not necessarily understand their influence on the user experience, like accounting.

This inclusive co-creation also licenses stakeholders, partners, community members, and subject-matter experts to get involved, learn, and provide insights more freely. Sadly, some companies still focus their efforts on hiring with D.E.I. in mind only as long as new hires comply with the existing status quo.

Inclusive co-creation is a transformative experience for everyone involved. It implies breaking silos, transforming how a company communicates internally, and challenging “culture fit.” 

Co-creation is not, however, the silver bullet for all issues. As Alva Villamil puts it in her article: “[you can’t] codesign your way to justice. Certain institutions and design ideas are fundamentally oppressive, and the only way to achieve radical transformation at scale is with collective action and policy change.”

two cursors clashing and opening a crack

Despite many best intentions, people still fall between the cracks.

Progress is slow and fragile.

There will always be external forces that can justify reverting any progress gained. If there is something to learn from the COVID pandemic and the most recent tech layoffs, it is that a crisis has adverse effects on minorities.

During the worst of COVID, one in four women considered leaving the workforce, vs. one in five men. “The pandemic’s gender effect” affected working mothers, women in senior management positions, and Black women the most (source).

chart showing that women were more affected in the workplace by COVID

Women in the Workplace 2020 (Source)

While the latest tech layoffs are still shocking, the most recent numbers show they disproportionately affect minorities.

A study of more than 800 companies by Harvard Business Review revealed that organizations experience as much as a 22 percent reduction in Black, Hispanic and Asian men on their management teams when they cut positions rather than evaluate individual workers.

chart with data from 2016 showing that layoff affect minorities in management while white men representation usually grows during the same time

Downsizing affects Diversity (source)

The same study shows that when companies take a “last hired, first fired” approach to layoffs, they practically wipe out any D.E.I. progress made in the last couple of years. They lose nearly 19% of their share of white women in management and 14% of their share of Asian men.

At a time when specific communities suffered the most, companies’ decisions cut the very individuals who could empathize with them the most and, therefore, serve them better. How do you think this will impact innovation in the long term?

cursors of different colors aligned partially overlapping

Designing technology has always been about people.

The dawn of a light that casts no shadows

Being inclusive in tech is not a matter of apologizing for privilege or providing good optics that fit into a “feel-good” corporate video. Rather, we can start by acknowledging that our life experiences (assuming we fit the W.E.I.R.D. profile at least partially) are likely outliers that do not necessarily reflect the needs, circumstances, and wants of everyone.

Inclusiveness also requires thinking about the long-term implications of our decisions and bringing back implementable ideas we can start today. Thinking in terms of hiring quotas is a fragile solution if we do not have a way to measure performance and revise the mechanisms used to lay off people.

Having a concrete frame of reference helps us understand where we are falling short or overdoing it. According to the U.S. Census Bureau, 50.5% of the American population is female, and 24.2% of the country is considered part of an under-represented group. Now let’s look at leadership roles in tech companies. Only Netflix comes close to matching those percentages, with 48% women in senior leadership (source) and 24.9% minorities in leadership roles (source). The stats also show that women form 49.6% of Netflix’s overall workforce but only 37% of its tech roles, meaning there is a gap in that department and overrepresentation elsewhere. Other companies are lagging by double digits.

Wealth in countries should not be measured by G.D.P. but by the prosperity of their most fragile populations. If good fortune, well-being, and health are to be the norm, factors like gender, age, or postcode should not determine whether people can develop their full potential.

The integration of marginalized individuals and communities brings exponential economic benefits and multigenerational wealth. It also removes shameful barriers to seeing history for what it is:  an invitation to be better today.

Inclusive technology is important for the business, the users, and the world. To make it long-lasting, make no mistake: it needs to start today. Not acting is the only guarantee that we won’t succeed.

Regardless of our role, we have to look at the world, and at what we make, through the lenses of restoration and justice.

As professionals, we need to go beyond just being nice and be comfortable in uncomfortable situations if we want to change the status quo.

As designers, it's imperative we start giving a damn about it in the work we deliver, even if we don't get everything right at first – we need to start somewhere;  

But above all, we need to care. Caring is not a glamorous product launch or an ever-growing metric. Care is about being consistent and persistent, finding space to influence and, even if we don’t win all the time, increasing our batting average; we can do much better than the 90% of startups that fail. Long-lasting inclusive technology can start today, but changing tomorrow will be an ongoing effort.

]]>
Articles Design & Society
<![CDATA[Dying to be online]]> https://www.doc.cc/articles/dying-to-be-online https://www.doc.cc/articles/dying-to-be-online Thu, 14 Mar 2024 03:41:27 GMT

Dying to be online

Written by Isra Safawi

Art Direction by Manoel do Amaral

abstract place with an eerie vibe

Data surpassed oil as the world’s biggest commodity in 2017. Our data is constantly being ‘harvested, collected, modelled and monetised’.

 We live in a hyper-connected world where things don’t seem to have happened unless you post about them. 

 An emotion hardly seems validated until it’s been shared with others online. On average, we spend a quarter of our lives online. 

For people we have never met in person, all they know of us is the digital self formed from our data spread across the internet.

Our online activity creates a peculiar portrait of ourselves that will unavoidably outlast our lifespan.

So with life being lived increasingly online, how is it that we have thought so little about our “digital death”?

It’s time we started giving our digital assets as much importance as we do our physical ones. We need to focus on building systems that support and respect the bereaved and the different ways people grieve and deal with death; systems that shine light on how technology is used at the end of a user’s life and on how one’s data rights, ownership, privacy, and control should continue after death.

Virtual Cemeteries

We don’t know what happens to our data when we die. Who gets to own the data?  At some point, many digital platforms are going to become virtual cemeteries: Facebook could have more dead members than living ones in as little as 50 years. But modern technologies are not designed to effectively acknowledge the inevitable death of a user. 

• Until 2015, Facebook had no provision for users to manage their data after they pass. It has since introduced a ‘Legacy Contact’ feature that allows people to assign a trusted contact who can manage their profile to a certain extent.

• Google launched an Inactive Account Manager in 2013 that lets you set a period of inactivity after which a trusted contact is given partial access to your data, and choose whether you want all your data deleted.

• More recently, Apple released a Digital Legacy feature in iOS 15 with similar functionality and a similarly confusing flow.

 

These initiatives are far from being enough.

To start with, most people (myself included until I researched this topic) don’t know that these features exist. They are so hidden within these products that one might not learn about them until it’s too late to act.

abstract minimalistic stairs with an eerie vibe

Our rights and choices should outlast our lives

These features also carry the same privacy issues we have with data in general: with our fingerprints all over the internet and our data sold and stored by third parties, it becomes almost impossible to track and manage our online presence.

While GDPR and the California Privacy Act have improved certain aspects of this, it remains obscure and hard to define how much of our own online content we own and what tech companies can do with it. Simply erasing a Google or Apple account can mean erasing an entire life of photos, contacts, emails, memories, documents, and so on. Keeping it in their services raises questions of ownership (who should be able to access it), financial responsibility (if a deceased person was making income through online videos, should that continue?), and privacy: should family members be able to see emails and private messages from a deceased loved one? Technology companies already have a large influence on the way we live our lives, and it is clear that in the digital age, our cultural experience of death is also being dictated by them.

Death is still a taboo

Most importantly, there are also cultural and personal perspectives and taboos around death. Each culture and each religion has a unique way of grieving or celebrating the deceased. Each person might also want to remember or celebrate their departed loved ones in different ways: for some, it may foster a feeling of connection; for others, the responsibility bestowed upon them to make decisions on behalf of their loved one can be daunting. In a case study, Facebook stated that it tried to remove what it deemed ‘unnecessary’ reminders of the deceased, such as notifications or birthday reminders. However, in several cultures, people like to remember their deceased loved ones.

 

In general, Western societies don’t talk much about what comes after life. If estate planning is already a hard conversation, our digital legacy can be an even trickier one to have.

abstract minimalistic stairs with an eerie vibe

Taking Control

How can we empower people to take control of their data while they are alive? How do we support communities who are impacted by a loved one’s death and are grieving? How do we strike a balance between respecting the needs of deceased account holders and those of the grieving community they have left behind?

The lack of control the grieving community had over memorialised profiles pre-legacy contact impacted them in various ways. People grieve in different ways — privately, collectively, by compartmentalising. The internet makes it possible to eliminate geographical boundaries, it allows a larger group of people to experience loss together. People grieve on a public platform because it makes them feel as though they are not alone in their pain. There is a psychological need within the grieving process to feel as though pain is not merely isolated to the person experiencing it. The continual existence of the deceased eases the pain of those involved because it causes them to feel as though their messages can still be received, and a part of their relationship can continue. At the same time, since the internet is forever it can mean that the mourning process may never come to a natural end.

The internet is forever, we aren’t. And if we don’t start making decisions about our digital deaths, then someone else will be making them for us.

Works Cited
  1. Legacy Contact: Designing and Implementing Post-mortem Stewardship at Facebook by Jed R. Brubaker & Vanessa Callison-Burch
  2. I Called Off My Wedding. The Internet Will Never Forget by Lauren Goode
  3. How to Arrange for your Digital Legacy  by Barbara Krasnoff 
  4. Death Online: Planning your Digital Afterlife by Why’d You Push That Button (Podcast)
  5. What happens to our online identities when we die? By Amelia Tait
  6. Cover photo by Friedrich Siever
  7. Installation by James Turrell
  8. M Awards by Flolin Xu

]]>
Articles Design & Society
<![CDATA[The aesthetics of our new fictions]]> https://www.doc.cc/articles/the-aesthetics-of-our-new-fictions https://www.doc.cc/articles/the-aesthetics-of-our-new-fictions Thu, 14 Mar 2024 03:40:16 GMT

The aesthetics of our new fictions

“Everyone, deep in their hearts, is waiting for the end of the world to come.”
― Haruki Murakami, 1Q84

150 written in different fonts over abstract background

150. 

“Even today, a critical threshold in human organisations falls somewhere around this magic number. Below this threshold, communities, businesses, social networks and military units can maintain themselves based mainly on intimate acquaintance and rumour-mongering. There is no need for formal ranks, titles and law books to keep order. A platoon of thirty soldiers or even a company of a hundred soldiers can function well on the basis of intimate relations, with a minimum of formal discipline. (...) But once the threshold of 150 individuals is crossed, things can no longer work that way. (...)

How did Homo Sapiens manage to cross this critical threshold, eventually founding cities comprising tens of thousands of inhabitants and empires ruling hundreds of millions? The secret was probably the appearance of fiction. Large numbers of strangers can cooperate successfully by believing in common myths.” (Yuval Harari)

We live in a world of fictions

Imagine trying to cross a busy Manhattan street without the existence of traffic lights. Or trying to pay for your groceries by singing an a cappella rendition of Imagine, by John Lennon, instead of using cash or a credit card. The world around us is filled with invisible social contracts that we all accept and subscribe to without thinking.

From the moment we as humans started to organize ourselves into larger groups, following social contracts to ensure a certain level of order became crucial.

“People easily acknowledge that ‘primitive tribes’ cement their social order by believing in ghosts and spirits and gathering each full moon to dance together around the campfire. What we fail to appreciate is that our modern institutions function on exactly the same basis.” (Yuval Harari)

What we are calling “fictions” here are social contracts—money, ethics, laws, countries, religions, flags, totems, fashion, music, arts—  rules we create to be able to agree on social norms and expectations. Some may be nearly universally accepted, while others may vary by region or socioeconomic context. Most of us accept, believe, or at least abide by multiple social contracts at any given time. Because they are so powerful and omnipresent, we often don’t acknowledge that they are not laws of nature, but fictional constructs.

screenshot of tweet saying pakistan was artificially created and is not a natural country

“The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” — David Graeber

These fictions take different forms throughout centuries and locations. Fashion is a good example: what’s elegant in one time and place may be edgy or unacceptable in another. Modern concepts of countries and nationalities are also fictions, suggesting that all people within imaginary territory lines should share the same values, rituals, mores, and beliefs. 

Stewart Brand named these different types of fictions and their relationships with each other "pace layering." He identified fashion, commerce, infrastructure, governance, culture, and nature as distinct layers that move at different paces but affect and inform one another. What’s cool on TikTok might be different week over week, but changes to hegemonic structures of government and dominant culture might take years to happen.

from faster to slower paced layers diagram fashion commerce infrastructure governance culture nature

Stewart Brand coined the term “pace layering” to illustrate how different parts of society move at different speeds.

Social fictions can also expand beyond their original context: A cornerstone of globalization, for example, is the imposition of fictions from a few powerful people and countries to the diverse majority—controlling the fictions to control the narrative. When the United States imposes a global definition of Free World, or when an American cultural export like Friends becomes one of the most popular TV shows in the world, the United States is influencing different layers of other societies.

The result is the oppressive replacement of a wide spectrum of cultures and ways of life with a single set of fictions—a monoculture in which cultural layers are uniform and connected, and in which the demise of certain fictions can have seismic consequences across the globe.

chart showing over fifty percent loss on bitcoin price year to date

Bitcoin, one of the emerging (and dying?) fictions that disguises a (fictional) attempt at global standardization as beneficial financial decentralization.

Power both creates and depends upon the shared fictions of the monoculture. Power self-perpetuates through control of these fictions, which must be coded as legitimate and immediately recognizable through their symbolism, language, and design.

The role of aesthetics in building fictions

Symbolism and other aesthetic indicators help make social fictions feel legitimate, trustworthy, and familiar. These visual codes instill trust and aid recognition when a fiction expands to different contexts, places, or eras. 

Let’s look at money (in paper form) as an example.

Obverse of the United States one dollar bill

When you look at a U.S. one-dollar bill, you know exactly what it is and where it is accepted, and you have a rough idea of its value. 

The one dollar bill has the oldest design of all U.S. currency currently being produced. The obverse design of today’s dollar debuted in 1963 when it was first issued as a Federal Reserve Note. The reverse design debuted in 1935. Around 12.7 billion dollar bills are in circulation worldwide.

The design of the dollar bill is full of meaning. Each symbol builds on older symbols and social fictions. By design, many of the symbols on the dollar bill are familiar to the people who use it, who are bound by the social contract of U.S. currency.

highlights from the obverse of the dollar bill

[1] An image of George Washington, the first U.S. President, based on the Athenaeum Portrait by Gilbert Stuart (1796).

[2] The Federal Reserve District seal. The name of the Federal Reserve Bank that issued the note encircles a capital letter (A–L), identifying it among the twelve Federal Reserve Banks. The sequential number of the bank (1: A, 2: B, etc.) is also displayed in the four corners of the open space on the bill. 

[3] The Treasury Department seal. The scales represent justice. The chevron with thirteen stars represents the original thirteen colonies. The key below the chevron represents authority and trust; 1789 is the year the Department of the Treasury was established.

[4] The signatures of the Treasurer of the United States (left), and the Secretary of the Treasury (right), along with the series date.

Reverse of the United States dollar bill.

highlights from the reverse dollar bill

[1] A barren landscape dominated by an unfinished pyramid of 13 steps, topped by the Eye of Providence within a triangle. At the base of the pyramid are engraved the Roman numerals MDCCLXXVI (1776), the date of American independence from Britain. Above the pyramid is the Latin phrase "ANNUIT COEPTIS," meaning "He favors our undertaking." At the bottom of the seal, a semicircular banner proclaims "NOVUS ORDO SECLORUM," meaning "New Order of the Ages," a reference to the new American era. To the left of this seal, a string of 13 pearls extends toward the edge of the bill.

[2] The inclusion of the motto "In God We Trust" on all currency was required by law in 1955; the motto first appeared on paper money in 1957.

[3] The American eagle flying free, holding 13 arrows of war in its non-dominant left talon and an olive branch for peace in its dominant right talon. The banner in the eagle’s beak reads "E Pluribus Unum," meaning “Out of Many, One.” The shield's horizontal blue band represents Congress uniting the original 13 colonies, which are represented by 13 red and white vertical stripes.

[4] The Great Seal, originally designed in 1782 and added to the dollar bill's design in 1935, is surrounded by an elaborate floral design. The renderings used were the typical official government versions used since the 1880s.

“Financial documents are designed to look trustworthy by using a style of writing or typography that is consistent, legible, and official. Documents are provided with a seal protecting the message contents, but also carry the symbol—and thus the authority—of the sovereign. State seals, national coats of arms, or bank logos still have a similar function. Finally, the date and the signature make these documents legally binding.” (Ruben Pater)

The dollar bill design has evolved over time. Each iteration represents a move toward a more credible, trustworthy, and believable version of the same fiction: from the choice of typography, to the decorative elements that surround key focal points, to the color green—added to dollar bills in the 1860s as an anti-counterfeiting measure and now associated with money in many countries, including ones that do not use the United States dollar as their main currency.

The new bills circulated by the U.S. government starting in the 1860s became known as “greenbacks” because their backsides were printed in green ink. This was an anti-counterfeiting measure designed to prevent photographic knockoffs, since the cameras of the time could only take pictures in black and white. (History.com)
previous dollar bill design
previous dollar bill design
previous dollar bill design
previous dollar bill design

The symbols of power and authority we see on the modern dollar might not have appeared on the first dollar bill, but many of its visual elements draw on centuries of design evolution. Coins, seals, certificates, and other official documents from various regimes and cultures have contributed to a collective visual lexicon of symbols that convey legitimacy and trustworthiness. 

coins and a bank note

Greek coins from Syracuse, 415–405 BC. First European banknote, the Swedish daler banknote, 1666. (CAPS LOCK)

The same visual lexicon shows up in designs created well after the dollar bill, like bank checks or credit cards: their design elements are often a mix of historical references and a reflection of the time they were invented. Over time, the aesthetic of the fiction evolves. It takes a new format with a slightly different meaning.

credit cards

First version of the American Express card, 1958. Print ad for American Express Gold, 1968. (American Express)

To this day, when someone wants to reference the dollar bill in a design, it’s noticeable how certain visual cues remain—even when the medium and the level of fidelity are completely different. Leveraging mental models from previously established fictions helps create a feeling of familiarity, which in turn helps people believe in the new, emerging fiction.

CAPS LOCK, by Ruben Pater, cites a plethora of visual examples to demonstrate the inextricable link between graphic design and capitalism, and the designer’s role in shaping them both
illustration bank app with dollar bill icon

PicPay Mobile App

The aesthetics of our many fictions continue to evolve, every single day and at every step the world takes towards believing them more—or believing them less.


Aesthetics are representative of their era, beliefs, and culture

Money is just one example of a social fiction we interact with every day, but fictions are ingrained in multiple aspects of society.

Think about the concept of Artificial Intelligence (AI), and all the visual metaphors that have been used over the decades to explain to people what it is and how it works. AI began as an academic discipline in 1956, but it wasn’t until the 1980s that the term and the concept started making more public appearances in pop culture, movies, and TV shows. 

At the time, such a novel concept and technology had to be portrayed in a way that could get people excited about an unknown future while building on visual symbols and aesthetics that were already familiar. The novelty needed to look natural, human, and acceptable.

still from movies

Metropolis (1927), RoboCop (1987), The Terminator (1984), Tron (1982), Westworld (1973)

Fast forward to the 2020s and the way our society visually represents Artificial Intelligence has changed quite a lot. The concept itself isn’t as new to many people. Not because everyone has experienced or interacted with AI themselves in a memorable way, but because of how much the topic has come up in the news, movies, TV, and pop culture. Aesthetic choices now focus less on explaining AI and normalizing interacting with machines, and more on trying to make that technology look human, natural, or invisible. 

still from movies and voice assistant

Westworld (2016), Her (2013), Google Home

Movies like Her normalized the acceptance of ubiquitous computing and created their own aesthetic based on a romanticized relationship between humans and machines, making space for the establishment of a service like Google Home, which taps into the same idea presented in the film.

Confusing symbols for confusing times

Every century sees its share of disruptions, but we are now experiencing major changes in several layers of society. Truth (as a universally accepted concept) is eroding, institutions that seemed steadfast are being questioned, and scientific credibility is at an all-time low. Anyone can now create new symbols that share the stage with century-old institutions. 

flat earth society logo using symbols associated with respected institutions

Anything can look credible when you use the right aesthetic.

fake news tweet from a verified user with the twitter checkmark icon

Is the Twitter Verified Badge an icon of trust? Many Twitter accounts that spread misinformation are verified, causing additional confusion about what is true—and who holds the truth.

president bolsonaro waving the brazilian flag and singer anitta wearing the brazilian flag colors

In recent years, national flags have been heavily associated with far-right movements and leaders around the world, including Brazilian president Jair Bolsonaro. In response, pop artists like Anitta are reclaiming their national colors to re-appropriate and re-signify their meaning for the Brazilian people.

american flag and alt-right extremist wearing the american flag colors and a viking helmet

The juxtaposition of symbols shown here is representative of our times: the American flag mixed with Nordic symbols, invading a building that is supposed to be a stronghold of democracy. The “Viking at the Capitol” image was imitated across social media, reinforcing this emerging, extremist worldview.

The process of questioning, antagonizing, and creating new fictions is natural, but the scale, speed, and intensity of social media radically changed how quickly this process takes place. 

Any action or misstep can be easily tracked and exposed. The powerful fiction of infallible beings and institutions is easily questioned by any side. In turn, this fuels alternative “facts” and extremist agendas, supported by exaggerated and decontextualized information.

“When trust in government, media, and science declines, disinformation thrives because many people seek alternate facts. As a result, public resistance to rumor, conspiracy, hate, and lies weakens.” – Livingston, Bennett

As old fictions start losing their place, space opens for new fictions to emerge in parallel. New narratives, new truths, and new worlds are being created.

It might seem that we have no control over this chaotic, contradictory, and overwhelming process. But we must remember that within this transformational period, there is space for new, positive changes. As designers, we have a small but important part to play by shaping symbols and their structures.

Emerging aesthetics for emerging fictions

Visual symbols are powerful ways to communicate complex systems of meaning. Designers create them all the time—sometimes successfully, other times not as much. 

Let’s look at some of the emerging products and services known as “web3.” The term is rising in popularity among technologists and investors, and it’s meant to represent the idea of a new iteration of the World Wide Web, incorporating concepts like blockchains, smart contracts, cryptocurrency, virtual reality, the metaverse, and NFTs. It's a new gold rush in a digital landscape. 

All of this exciting new technology creates an atmosphere of positivity about the future for those who are invested in it. A similar feeling existed back in the '60s, when President Kennedy gave the famous speech "We choose to go to the Moon." Futurism references were at an all-time high. The Jetsons, the first color show broadcast on ABC, captured that optimism well. Cellphones, 3D printers, robots, natural voice interfaces, smartwatches, tablets, you name it. That optimism influenced the creation of a future that had been assumed impossible. A similar positivity and enthusiasm are helping to create the fictions we see arising around web3 and blockchain technology.

cover from book 1975 and still from tv show jetsons

The Jetsons (TV show) and its future-inspired theme predicted many technologies we now take for granted, like the smartwatch and the cellphone, as well as others that have yet to be developed, like the food printer. The Jetsons was heavily inspired by the book 1975: And the Changes to Come by Arnold B. Barach, and captured the optimism of its era.

What symbols does web3 currently rely on?

01. Gambling on game aesthetics

The aesthetics of many blockchain-based companies can be traced back to environments that once were the source of inspiration and fantasy: arcades. Low-resolution displays, oversaturated neon lights, and the vibrance of gaming are influencing a lot of the look & feel of these emerging services (as well as their business model) to attract new users.

still from videogames showing arcades

02. Borrowing credibility from old money metaphors

Many of these new web3 services reference old metaphors to legitimize their currency system. In economic theory, money has four main functions: to be a store of value, a unit of account, a medium of exchange, and a standard of deferred payment. To convey all these functions, web3 services need to rely on the universal metaphor of the coin as a symbol of money. An interesting paradox if you think about it: they aim to build the future and disrupt the market, yet use established symbols as their foundation. A revolution that is unrecognizable is probably meaningless.

screenshots from crypto wallet websites with coins money wallet and credit card imagery

03. Neon colors that hint at a brighter future

Neon seems to be web3’s official color palette, bringing forward gradients that are vibrant and bold. This color palette choice works as a double statement: it declares war on the unsaturated, monotone web of the past, and it announces the boldness of what is to come.

screenshots from blockchain-based products websites using neon colors

04. 3D forms that deny the flatness of web2

To convey disruption and innovation, the ideas brought by web3 need to surpass the two-dimensionality of the web. The use of 3D creates a more tangible environment to compensate for some of the abstract concepts of web3, while enticing exploration for those who dare to wonder.

screenshots from blockchain-based products websites using 3d shapes

05. A futurist take on the 8-bit style

While web2 keeps trying to make technology feel more human, web3 tries to make technology more… technologic. The use of 8-bit typefaces is an attempt to disrupt the neutral, sans-serif Big Tech style, while reminding us of the origins of computer displays.

screenshots from web3 and metaverse websites using 8 bit visuals

06. Reign of abstract forms and anti-personalism

There is no place for flat, geometric illustrations of humans on this new web. Inspired by science fiction, web3 tries to shift the focus from “this is a product that fits your life” to pitching the idea of an exciting future. Instead of reminding us of the issues of our current world, abstract forms distance us from our reality and entice us to explore a new one.

screenshots from web3 and metaverse websites with abstract sterile shapes

While the web3 aesthetic is an emerging visual trend, it’s far from being a widespread reality beyond the design bubble. Much of it will evolve as these aesthetics make their way from niche cultures (sci-fi books, movies, video games) to mainstream businesses.

However, that doesn't mean the dystopian aesthetic of sci-fi movies is a guide to be followed. 

As designers, with every new project we tend to leverage existing symbols and reinforce their meaning to be able to benefit from mental associations people will naturally make. But we also have the power to modify and repurpose those symbols, should that be our intention. From redefining the role that gender plays in our society and our designs, to understanding the unintended consequences of our actions in the world—every decision we make can have an impact. 

Are we still replicating eurocentric models from 5 decades ago to define our future? What are some of the other futures we can design?

The perils of designing for scale

“We are in the midst of a major social transformation — moving many of our day-to-day activities from physical places to information-based places that we experience on our phones and computers. The central question here is: How can we design these information environments so they serve our social needs in the long term?” (Jorge Arango)

The apps we are creating are reaching thousands, millions (sometimes billions) of users, and with that, it becomes progressively harder to control all the ways in which people will interpret and use our products. In the same way the human brain wasn’t ready to build healthy relationships with communities larger than 150, our design brain alone wasn’t shaped to deal with the massive reach and impact of the experiences we’re creating — which brings along a big opportunity to rethink how we work.

The first step is to be aware of which fictions are being reinforced by our designs.

Works Cited
  1. 1Q84 by Haruki Murakami
  2. Sapiens by Yuval Harari
  3. The Utopia of Rules by David Graeber
  4. Pace Layering by Stewart Brand
  5. Friends Took Over the World by Rob Picheta
  6. Why is American currency green? by Elizabeth Nix
  7. CAPS LOCK by Ruben Pater
  8. How Science Lost the Public’s Trust by Tunku Varadarajan
  9. Disinformation, Democracy and Conflict Prevention by Livingston and Bennett
  10. Play-to-earn Gaming by Luke Winkie 
  11. Principles of Economics by University of Minnesota
  12. Essay on Illustration by Rachel Hawley
  13. Genderless Design Is a Myth by August Tang
  14. The Designers Gaze by Srishti Mehrotra
  15. Living in Information, Responsible Design for Digital Places by Jorge Arango
Special thanks to Sophia Costomiris and Rafael Frota

]]>
Articles Design & Society
<![CDATA[Returning to craft]]> https://www.doc.cc/articles/returning-to-craft https://www.doc.cc/articles/returning-to-craft Thu, 14 Mar 2024 03:39:05 GMT

Returning to craft

How returning to the craft taught me to be a better leader

Illustrations from the book "Focus" by Evan M. Cohen

I recently made a few "fresh starts" in my life; moving countries and changing roles in my career were two of the most significant. With my job, I decided to move away from design leadership and return to being a product designer. However, I didn't anticipate that going back to the craft would also teach me to be a better leader.

I went back to the craft because I missed delivering products and services for people. The transition away from management (where I mainly designed slide decks) back to designing products wasn't an easy switch; many of the tools and practices have changed over the years. For a while, I asked myself daily if I still “had it", whatever exactly "it" is. 

In sharing my reflections and learnings, I hope to spark a bit of curiosity in others, so they too might play with the idea of practicing again.

Your technical skills don't disappear; they morph as you grow in other areas.

Recognizing knowledge gaps

First off, your technical skills don't disappear; they morph as you grow in other areas. When you move into a leadership role, the scale in which you interact with problems shifts.

Diagram: higher level, organization making profit; medium level, leadership supporting people and organization goals; lower level, practitioner delivering products and services

As a practitioner, you address mostly product-level problems, responsible for defining a customer experience that delivers on the overall strategy. In a leadership role, the focus shifts to higher-level strategic decisions and to supporting teams to execute on them. Going back to delivering the work requires an entirely different mindset, and it isn't easy to switch off the management tendencies.

I genuinely enjoy managing and creating scaffolding for teams to do their best work. However, over the past few years, I've had this lingering feeling that I need to better understand what it takes to deliver products and services in the current environment to show up as an authentic leader. The higher the level of the manager, the harder it is for them to stay on top of all the new tools and practices needed to deliver in a space that is constantly evolving. As a result, a leader's practitioner skills naturally depreciate over time.

By focusing on the bigger picture, a manager might also miss the intricacies of how their product actually works. As digital products become more sophisticated, understanding their trade-offs, complexities, and dependencies becomes harder from a distance. As Rochelle Gold articulates in their article, it's a risk when there is a gap between where decisions are made and what happens on the ground.

“When there is too much distance between where decisions are made and the day-to-day delivery, we risk not understanding what is truly needed.” Rochelle Gold, Head of User Research, NHS Digital

Beyond the tooling and capabilities, our field has also matured in how to better serve the users impacted by our products. There are simply better ways to design, build, and manage services for the business and for the customers: design justice, equity, ethics and bias in tech, accessibility, disrupting power, liberating technology from capitalism, and trauma-informed design are just some examples.

Understanding each of these areas theoretically is different from actively reflecting and applying learnings in practice. As a manager, it’s one thing to speak about what we should or shouldn’t do and another to know what it takes to implement and deliver the work. For example, I’ve talked about why designing responsible technology is essential, but I hadn’t designed a product or service in years so it felt removed. I wanted to be grounded in the craft to better understand how to build responsible technology in today’s environment.

It's a pendulum: from management back to practice

Charity Majors, co-founder and CTO of Honeycomb, writes about the individual contributor and management pendulum. Specifically, the breadth and strength that accrue to practitioners who go back and forth between the two:

“The best frontline engineer managers in the world are the ones that are never more than 2–3 years removed from hands-on work, full time down in the trenches. The best individual contributors are the ones who have done time in management. And the best technical leaders in the world are often the ones who do both. Back and forth. Like a pendulum.”

Going back to the practice was an exciting opportunity to grow and sharpen my skills. However, I had been out of practice as a product designer for about four years, so the learning curve was steep. I knew there would be a significant amount of relearning and unlearning that I would need to do. A pendulum approach to team management and to your career can be a great way to keep growing as a designer and for the team to deliver their best work.

Re/unlearning the tools

Even with my disciplined approach to documenting learnings, some of my skills were utterly dormant, so it took hard work to get up to speed again.

flower blooming then disappearing in particles

Illustrations from the book "Focus" by Evan M. Cohen

There was also a massive shift in my day-to-day activities. No more back-to-back meetings, from 1:1 coaching to recruiting to managing stakeholders. The context switching as a manager had left me in a perpetual state of feeling behind. Going back to the practice didn't remove that pressure, but the intensity of the work became more focused and less scattered.

As a practitioner, you’re accountable for producing the work, despite how rusty you might feel. You’re responsible for clearly articulating what needs to get done and then doing it. It took constant self-encouragement to push through doubting my abilities.

“Practice is betting on myself: It’s being brave enough to be a beginner at something, and loving enough that I let go of the shame, rushedness and defeatism that makes practice unsustainable.” — Annika is Dreaming

In the beginning, I felt like I was constantly unlearning and relearning on the fly. Cyndi Suarez, the author of The Power Manual, writes about the "learning edge," or learning outside of your comfort zone, where you leave behind old identities and practices that no longer serve you or others.

It's easy for leadership to forget what it feels like to deliver a live product or service. When I was in a management role, I sometimes found myself slipping into a "been-there-done-that" mindset. Seeing things from both viewpoints expanded my empathy. I understand the pressures leadership needs to manage. Equally, going back to the craft restored my empathy for practitioners, and my energy to learn as if I were seeing things for the first time.

Becoming a better leader by becoming a better designer 

Learning product design tools and practices again made me look at management differently. It made me think of how I can better support people.

flower transforming in two holding hands

Illustrations from the book "Focus" by Evan M. Cohen

1. Slowing down to reflect and grow as a team

There is a lot of pressure to rapidly produce work and not enough time to learn. From a business perspective, there are always competing pressures: teams are expected to be hyper-productive, and taking time for personal development is read as being underutilized. The value of learning and growing as a team often isn't considered a worthwhile investment for either practitioners or managers.

What would it look like if we spent more time fostering spaces to grow as a team?

HmntyCntrd is modeling what it looks like for a company to slow down to reflect meaningfully. At their conference in 2021, founder Vivianne Castillo said, "We believe rest is a form of resistance." Everyone at HmntyCntrd spent the last 45 days of the year not taking on any new clients or consulting work. Instead, they took time to rest, read, create and breathe.

2. Creating an environment to challenge "best practices"

What once was best practice may not be relevant or appropriate anymore. Moving fast and delivering work at any cost, without regard for consequences, no longer works, and it never did.

Healthy work cultures are built in environments where it's safe to challenge and contribute. I saw this image of Saielle DaSilva speaking about the importance of psychological safety — highlighting four levels of maturity for stronger product culture: Inclusion, Learner, Contributor, and Challenger Safety.

Being a designer again, I was reminded of what it feels like to challenge best practices when using a design system that wasn't fit for purpose. Questioning design patterns or mental models that are not working requires reflection and a safe environment to explore other options.

It can feel vulnerable to step back and apply a learning mindset as a leader. However, the reward of making space to challenge best practices, or models that no longer serve the team, outweighs the fear of potentially feeling exposed. More importantly, modelling critical reflection as a leader fosters a "learner approach" to addressing problems, which can also start to dismantle unhelpful power structures.

3. Listening first before giving your opinion

I don’t know many people who enjoy being micromanaged or told how to do things. The feeling of not being fully trusted impacts the team's overall health. It also pushes practitioners to do what they think is expected rather than what is best for the work or the team.

It's easy to swoop in as a leader and give your opinion on how to handle a particular challenge before giving others the space to share their views. It's probably more time-efficient, too, when you've done something so many times before and know how to handle it.

Looking back, I wonder what learning opportunities I took away from others by not letting them figure things out on their own, or how I might have created a dynamic where people didn’t think I trusted their abilities.

Continuing to experiment and wander

Exploring the design practice again has also given me the space to figure out what I want to spend my time on, reflecting on what brings me joy and allows me to be creative. It also helps me determine my boundaries on what I'm willing to fight for, grounded in current practice.

There are very few roles that give space to managers who want to explore going back to the craft. There are distinct phases that most practitioners experience in their careers. It's like once you're on the management track, you can't get off. I particularly like the approach Charity Majors takes: “Flip the idea you have to choose a ‘lane’ and grow old there. I completely reject this kind of slotting.”

person becoming flower

Illustrations from the book "Focus" by Evan M. Cohen

There is so much value in wandering outside your lane and experimenting to expand your thinking. The longer you're away from delivering the work, the further you remove yourself from the realities of designing and building products. I wonder how much could be gained from a multidisciplinary leadership team rooted in today’s practices?

I would love to see moving back and forth between being a manager and being a contributor become more common and accepted in our industry. I would love to see companies take that into consideration in their career ladders, so a manager could go back to the craft without feeling they are moving backwards in their careers or closing a door on their future.

As this beautiful quote from Helen Tran says: 

“Beginners, don’t lose your mind. Veterans, don’t lose your edge.”

Finding common threads

Hearing about similar transitions is still not very common: most content, guides, and even mentorships are framed around becoming a manager or growing as an individual contributor.

I would love to hear more from folks in similar transitions: share your stories, your perspectives, and your learnings. Those who are considering changing tracks: reflect on what you expect from your next career move and how you plan to keep in touch with the practice that made you a designer in the first place. Your career is not made of linear, premeditated steps, so don't limit it by what is said out there. Shape your career on learning and moving forward toward what you want to do in this world.

Acknowledgments

Shout out to Lucy Stewart, Emma Parnel, and Alessandra Canella for our wandering chats about the ups and downs of freelance and going back to the craft.

Works cited
  1. How NHS Digital is developing user-centred design maturity by Rochelle Gold
  2. The engineer/manager pendulum by Charity Majors
  3. The Power Manual by Cyndi Suarez
  4. HmntyCntrd Critical UX Conference 2021 by Vivianne Castillo
  5. Jackie Bavaro on Twitter
  6. Hot Streaks in Your Career Don’t Happen by Accident by Derek Thompson
  7. Helen Tran on Twitter
  8. Focus by Evan M. Cohen
]]>
Articles Design & Craft
<![CDATA[Genderless design is a myth]]> https://www.doc.cc/articles/genderless-design-is-a-myth https://www.doc.cc/articles/genderless-design-is-a-myth Thu, 14 Mar 2024 03:37:26 GMT

Genderless design is a myth

How to deconstruct the gender binary in design and make space for genderfluidity.

Written by August Tang

Cover Image by Manoel do Amaral

animation of genderless written with different fonts


Design can never truly be free of culture, gender, and bias. Our pursuit of a neutral and universal design may bring us to modernism, minimalism, and the Apple-esque aesthetic, but these design schools, inspired by Eurocentric standards, carry qualities associated with masculinity. How can we go beyond the stereotypical binary view of masculine and feminine and create more universal, inclusive designs? Genderfluidity can give us a hint about the future of design and culture.

Beyond universal design

Universal design was a concept that originated in the context of architecture, asserting the importance of accessible environments for everyone, regardless of ability, age, or status. Within graphic design, universal design has become synonymous with The International Typographic Style—or Swiss Design—due to its emphasis on objective clarity and supposed neutrality. The impact of Helvetica on our culture is well documented in film, can be noted in our everyday life, and is one of the best examples of the values of the Swiss Design school.

photo of a subway sign in new york

Photo by Mick Haupt on Unsplash

However, in today’s world, where we’re building products for multifaceted audiences, universal design and Swiss Design cannot always be a reliable answer. And although its founders might like to believe so, Swiss Design is not free of meaning and bias.

Historically, systems were put in place to serve the privileged (i.e. white / men / cisgender / heterosexual / wealthy / able-bodied / etc). As time progressed, design has shifted towards the idea that everyone deserves access: we no longer want to design solely for the privileged class, we want to design for everyone. However, the sentiment of designing for everyone still falls short when serving unique, multifaceted individuals.

Instead of designing for the universal, we need to embrace a mindset that no group of people is a monolith. 

Even communities that we group together have nuances within them. Thus, we have to move from the idea of a universal design to the idea of designing for a pluriverse. Designing with a pluriversal mindset (versus a universal mindset) means that as designers, we become intimately familiar with the users we’re serving. With sensitivity to race, culture, class, sexuality, gender, ability, and more, we can make design solutions that celebrate and acknowledge user differences, rather than erase them with a single solution that will inevitably carry the bias and values of the status quo.

Diagram showing that design for the norm excludes, design for everyone imposes a standard, and design for the pluriverse allows differences to coexist, represented by different colors and overlapping circles

Designing for the pluriverse diagram, based on a reference by Mauricio Mejía

Genderless design is a myth

 

The concept of genderless, universal design is an illusion. The reality of Swiss design is that it perpetuates the idea that masculinity is the norm; masculinity equates to neutrality. 

A parallel example can be seen in the fashion industry. Androgyny in the context of clothing is often synonymous with menswear. Upon googling “gender-neutral fashion,” most of what surfaces are t-shirts, formless sweaters, button-ups, and hoodies. I and Me, a denim and lifestyle brand from London, UK, is a prime example, claiming on their website, “The design process is gender-neutral, it will always be about fabric and style before ‘his and/or hers’; this is where the story begins with every garment — on a neutral playing field undefined by seasonal trends.”

I and Me’s Better With U Collection

As much as well-intentioned designers might aim to neutralize gender in their work, attempts to degender a product or space often mean defaulting to a plain style leaning towards masculinity. Even if the original designer claims neutral intent or says that gender doesn't play a role in their work, it’s inevitable that cultural norms will still impose meaning and gender onto objects. It only means, then, that they are accepting and replicating the status quo. Instead of falling back on norms and universal rules, designing for the pluriverse gives us the opportunity to move beyond the gender binary into a more sophisticated space.

How gender presentation shows up in design

Eurocentric standards and norms have historically perpetuated the gender binary. Framing women as the opposite of men allowed patriarchy to thrive and created the dichotomies that define gender stereotypes today. In a patriarchal society, masculinity is synonymous with power, superiority, dominance, and control. Therefore, by opposition, femininity represents weakness, inferiority, submission, and servitude.

example of poster using a masculine aesthetic with sharp edges and bold fonts

Art direction example for masculinity.

example of poster using a feminine aesthetic with soft edges and delicate fonts

Art direction example for femininity

Gender stereotypes show up in design predictably. Art directions that are bold, blocky, emotionally restrained, and conservative in color are associated with masculinity. Within the construct of the gender binary, femininity in design is the opposite. Expressive typography, delicate forms, and toned color schemes are typically perceived as feminine. The gender binary is harmful in many ways. Not only does it perpetuate a power structure between masculinity and femininity, it also erases the reality that many cultures, including indigenous ones, have celebrated and accepted nonbinary genders for centuries.

The gender binary is a relatively recent and Eurocentric perspective that oversimplifies reality. 

The future of design is inherently genderfluid. Acknowledging the masculine or feminine roots of a design, instead of erasing or ignoring them, unlocks the potential for multifaceted, nonbinary solutions. We cannot make designs that are devoid of gender. As designers and creatives, what we can do instead of attempting to detach gender from design is engage in what I like to call “gender fuckery.”

How to deconstruct the gender binary in design

Design that acknowledges a genderfluid reality has more potential to be rich with culture, meaning, and purpose. Gender fuckery is the practice of subverting traditional binary gender norms through mixing, blending, and bending gender expressions. By adopting this approach and applying it to a design practice, we not only create more multifaceted solutions, we begin deconstructing the gender binary through our work.

page example from book with a mix of colors and styles

Sample spread from the book design for Be Gay Do Crime

The image samples of Be Gay Do Crime are an editorial example of how we can begin to introduce genderfluidity into design work. Due to the inherently queer nature of the project, I was highly experimental with the combinations of imagery, illustration, typography, and layout. In combining an expressive serif, a monospaced type, and a handwritten font, the typography itself is an uncommon showcase. The type styles, paired with soft, colorful gradient backgrounds, photo collages, and rainbow trackpad illustrations make the book a prime instance of gender fuckery.

another page example from book with a mix of colors and styles

Another sample spread from the book design for Be Gay Do Crime

The concept of mixing, blending, and bending gender in design can be pushed to a further extreme than this one example I provided. However, in a corporate setting and non-queer spaces, the application of genderfluidity may face more resistance. I have worked with brands in the past where clients were hesitant to change, in fear of alienating their dominant user group. For example, a stereotypically masculine product, such as a grill, might be marketed with a bold, sporty typeface. We could expect this brand to showcase lots of meat / men in the outdoors / colors like charcoal and orange. However, a few ways to subvert the look and feel of the brand could be to introduce a serif typeface, lighter tones of colors, and rounded edges. When working with an existing brand that is highly polarized and gendered, there are still subtle ways to tweak and evolve. For reference, here are a few starter areas for intentionality when it comes to design choices:

Typography: where does type family come from? What does it convey? What font pairing can balance it out?

Colors: what is the meaning behind these colors? What's the story behind this color palette or trend?

Photography: how can we go beyond the typical stock photo?

Illustration style and graphic elements: what does it convey? How is it associated with stereotypical views of masculine and feminine?

The future of design business is genderfluid

The gender binary perpetuates harmful and inaccurate stereotypes, as well as upholds power structures. Instead of attempting to erase this reality, we can evolve beyond the gender binary to reach new audiences. Being intentional about how to use and break from the stereotypical views of femininity and masculinity is key to arriving at a gender-inclusive solution. Embracing gender fuckery in our practice and moving past a rigid, normative approach is a form of resistance against the patriarchy, challenges stereotypes, and allows design to resonate with more people. Genderfluid design avoids alienating the non-dominant gender and instead appeals to a broader spectrum of users and untapped markets: there is a great business case for brands wanting to stay relevant, expand their markets, and differentiate from the monotonic mainstream discourse. Design as a profession serves a business. But it can also intentionally subvert norms to open new opportunities, for people and for businesses. Gender fuckery is always relevant.

The future of design is genderfluid.

My perspective on this topic is ever-evolving. Feedback, comments, thoughts, and callouts are welcome.

Works cited
  1. Swiss style by PrintMag
  2. Helvetica movie review by New York Times
  3. LinkedIn post by Mauricio Mejía
  4. Be Gay, Do Crime by August Tang

]]>
Articles Design & Society
<![CDATA[A brief history of the numeric keypad]]> https://www.doc.cc/articles/a-brief-history-of-the-numeric-keypad https://www.doc.cc/articles/a-brief-history-of-the-numeric-keypad Thu, 14 Mar 2024 03:35:34 GMT

A brief history of the numeric keypad

Written by Francesco Bertelli

Art Direction by Manoel do Amaral

keypad illustration

Picture the keypad of a telephone and calculator side by side. Can you see the subtle difference between the two without resorting to your smartphone? Don’t worry if you can’t recall the design. Most of us are so used to accepting the common interfaces that we tend to overlook the calculator’s inverted key sequence. A calculator has the 7–8–9 buttons at the top whereas a phone uses the 1–2–3 format.

telephone and calculator side by side

Telephone (left) and calculator (right) keypads

Subtle, but puzzling since they serve the same functional goal — input numbers. There’s no logical reason for the inversion if a user operates the interface in the same way. Common sense suggests the reason should be technological constraints. Maybe it’s due to a patent battle between the inventors. Some people may theorize it’s ergonomics.

With no clear explanation, I knew history and the evolution of these devices would provide the answer. Which device was invented first? Which keypad influenced the other? Most importantly, who invented the keypad in the first place?

Typewriters, Cash Registers, and Calculators

Looking at the key arrangement, I was curious to learn when the system of using keys was introduced in the history of machines. The keyboard came about sometime between the first and second industrial revolutions (from 1820 to 1920). Some inventors had already begun experimenting with machines similar to pianos in the late 18th century.

However, it wasn’t until 1844 that a Frenchman by the name of Jean-Baptiste Schwilgué came up with the first working prototype of a key-driven calculator machine. This machine used the first numerical keyboard with a single row of keys that increased from 1 to 9 (Dalakov, 2018).

In all fairness, though, we have to mention two predecessors that could claim to have invented the key-based interface. In 1834, Luigi Torchi reportedly showed a prototype of a wooden calculator with a design similar to a typewriter. In 1822, author James White’s New Century of Inventions showed a key-based device with nine numeric keys. Neither one stood the test of time, and no proof was given that they weren’t just fantasy (Durant, 2011).

adding machine and keypad diagram

Jean-Baptiste Schwilgué’s adding machine keyboard (1844)

Still, White’s machine, even if it was only a proof of concept, could certainly be regarded as the earliest example of a modern “direct-manipulation” interface: one that allows users to focus on the input without having to operate bare mechanisms such as the Pascaline or the array of Arithmometers, which used drums, clocks, and unfriendly levers (Dalakov, 2018).

However, these “ideas” still don’t provide an explanation as to why modern calculators use the reverse 9–0 arrangement.

Theories include the suggestion that the calculator was based on the cash register design. Think about it: the currencies used at the time meant the number 0 was often the most pressed key. So, it would make sense to keep that number at the very bottom to ensure it was within hand’s reach (Durant, 2011).

While there is some truth to the explanation, it’s still riddled with factual errors, and the hand’s-reach argument is weak. This is especially so since early cash registers (until 1893) had no separate 0 key, no drawer, and no workers standing behind the register.

For the argument to be valid, it’s important to look at the birth of cash registers.

In 1879, James Ritty owned a saloon in Dayton, Ohio, where he found some of his employees were stealing his profits. After seeing a tool that counted the revolutions of a steamboat’s propeller, he invented a machine that featured a clock-like device and a set of numeric keys (Dalakov, 2018). The predecessor to today’s cash register was not meant for calculation but to record a sale and let the manager know with a ring.

Until 1893, the early register models had buttons commonly arranged in one or two horizontal rows, which displayed preset values — 10, 15, 20, 30, 35, etc. These corresponded to the price, in cents, of items sold in stores and saloons. The introduction of the three vertical rows of digits didn’t happen until 1894 when the NCR Model 79 became available.

machine photo and keypad diagram

Ritty’s first cash register, invented in 1883

Still, there is even earlier evidence that suggests vertical columns were already invented.

In 1884, Dorr Felt had a bright idea for a machine capable of solving operations with large numbers. The idea was based on the Pascaline’s mechanism, the layout of Thomas Hill’s machine, and a macaroni box. It became known as the Comptometer, a device with eight columns of keys that ranged from 9 (on top) to 1 (on bottom), where each column represented a decimal position. Keep in mind that 0 was still not a part of the key sequence. History shows that it was a 9-to-1 sequence (Dalakov, 2018).

Cash registers were still in the process of catching up.

machine photo and keypad diagram

Comptometer’s layout, invented in 1885 (right)

Here is where the story gets interesting. Why did Felt choose to display the numbers in a 9-to-1 sequence? It wasn’t a widespread notion at the time; after all, knowledge of arithmetical devices was not common.

A reasonable answer could be tied to mechanical decisions, possibly related to the method of complements and the fact that keys pressed levers connected to rotating drums (Durant, 2011). A longer lever meant a longer rotation, which corresponded to the number 9, as opposed to the number 1, which required a shorter rotation — a suggestion from an older concept by Parmelee.

Another interesting explanation — from a modern design standpoint — goes beyond the mechanical reasons. According to the Comptometer manual, operators were meant to input numbers using the lowest values on the keyboard. For instance, in order to enter “9 cents”, the operator was not supposed to press the 9 key in the rightmost column. Instead, they were to press, in sequence, the 4 and 5 keys. The machine would do the math. Reaching for the “9” key was discouraged because it decreased figuring speed if users had to move their right hand from the bottom. Felt was all about efficiency, which meant keeping commonly used keys within the fingers’ reach. It seems this need for efficiency led to this user-centered layout, but it was still not considered a user-friendly interface (Meehan, 1952).
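To make the technique concrete, here is a hypothetical sketch (not taken from the Comptometer manual; the exact splits operators were taught may have differed) of how a digit could be broken into two lower key presses in the same column, letting the machine do the addition:

    // Hypothetical sketch of the "lower keys" idea described above: a digit
    // can be entered as two smaller presses in the same column, since the
    // Comptometer simply adds whatever is keyed in (e.g. 9 cents = 4, then 5).
    function lowerKeyPresses(digit: number): number[] {
      if (digit === 0) return [];    // there was no 0 key; nothing to press
      if (digit === 1) return [1];   // 1 cannot be split into smaller keys
      const first = Math.floor(digit / 2);
      return [first, digit - first]; // e.g. 9 -> [4, 5], 7 -> [3, 4]
    }

    console.log(lowerKeyPresses(9)); // [4, 5]

The point of the split was ergonomic: both presses stay near the bottom of the column, so the operator's hand never has to travel up to the 9 key.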

The Comptometer and its competitors required highly-trained users to attain maximum productivity. It was also difficult to do with one hand, especially when it came to multiplications.

In 1902, the Dalton went on to become one of the most popular 10-key adding machines of the time, rendering multi-column calculators obsolete. The Dalton was a miniature version of a typewriter and had two rows of five buttons with an odd arrangement — 24579 at the top and 13068 at the bottom. What was different about this arrangement compared to everything seen up until then? (Durant, 2011)

That’s right! The 0 finally appeared in a sequence.

The Dalton was a major improvement, combining the printer and calculator into a smaller size and adding a new kind of keyboard that went beyond literal arrangements for decimals. Bookkeepers around the world rejoiced at the development of the Dalton (Dalakov, 2018).

The quest for further development continued.

In 1914, David Sundstrand, a Swedish-born American, filed patent №1198487 under the name Sundstrand Corporation. The goal was to push the usability of these adding machines further. He rearranged the keys into a more “logical, natural configuration.” It was based on a 3x3 layout, beginning with 789 at the top and a larger 0 at the bottom. It could be operated with one hand, which made it “the fastest keyboard of all adding machines.”

The layout became the standard for calculator keypads — even 100+ years later.

machine photos and keypad diagram side by side

Dalton’s 10-key adding machine, 1902, and Sundstrand’s 10-key adding machine, 1914

From Calculators to Telephones

Does the evolution of calculators prove its influence on modern phones? Possibly, but there’s no straight answer. Bell Telephone Company, the company that helped invent and popularize long-distance calling, was already experimenting with push-button phones in 1887. This was before the rotary dial was invented — a device usually attributed to Almon Brown Strowger in 1892 and commercialized by Western Electric in 1919. Those early push-button phones never gained popularity because their buttons were shortcuts not tied to numbers.

It wasn’t until the 1950s that direct distance dialing expanded to a significant number of communities. Local numbers (usually six digits or less) were then expanded to a standard seven-digit named exchange. A toll call to another area resulted in 11 digits, with the number 1 being the first number dialed (Durant, 2011).

With the increasing length of phone numbers, the number of misplaced calls rose, which led AT&T engineers to wonder whether it was due to the keysets toll operators were using (image below).

keypad diagram

The 1955 study Expected Locations of Digits and Letters on Ten-Button Keysets, followed by the 1960 Human Factors Engineering Studies of the Design and Use of Pushbutton Telephone Sets, offered multiple insights that would lead to the modern phone design. AT&T was about to move to a new tone-based signaling system called Touch Tone, which was meant to be used by push-button devices. It was important to determine which configuration would be best for users (Deininger, 1960).

different keypad diagram from AT and T study

Key arrangements tested in the 1960 study.

The company tested 15 layouts, using odd-shaped diagonal, pyramidal, circular, and horizontal arrangements, and included formats found on existing devices such as calculators and punch-card machines like the IBM Model 011. Surprisingly, the calculator layout didn’t do so well, and users preferred a left-to-right, top-to-bottom layout (Deininger, 1960).

In particular, the two-rows-of-five horizontal version (5–5-H) performed about as well as the modern 3x3+1 layout; the difference between them was only marginal. AT&T opted for the 3x3+1 layout, perhaps due to its compact format and versatility.

Now ‘perhaps’ is the keyword here. Neither study gave a final, matter-of-fact answer. And the UK adopted the 5–5-H layout, again perhaps due to patent reasons.

machine photos and keypad diagram side by side

British telephone with 5–5-horizontal layout, 1960s, and IBM Model 011, one of the early 10-key punch machines, 1940s

Something interesting to mention regarding both studies: letters never played a part in how the configuration was laid out. While people wanted the left-to-right order for numbers, they demonstrated more speed and accuracy independent of the letter arrangements (Lutz & Chapanis, 1955). Theories that put the alphabetical order first were proven wrong, which is why the layout is the way it is today.

Design Decisions and Conventions

A multitude of factors go into these design decisions: technology and its limitations, ergonomics, user perception, and familiarity with existing formats. The latter appears to be the strongest criterion, as it’s the most common choice people make in the digital age. There are no physical constraints, other than screen real estate, limiting designers’ creativity. Look at your Android or iPhone apps. You’ll notice that both the phone and calculator layouts are similar to the ones invented a century ago.

Why is that? The only real explanation for why digital apps still adhere to conventions is that people would rather interact with familiar interfaces than learn new ones. Possibly these interfaces have reached the maximum optimization an interface can have.

In fact, it’s quite interesting to notice that both Android and Apple iOS, in their early versions, used the phone keyset as the default interface when users were prompted to input numbers in a web text field (see screenshots below; the most recent version of iOS prompts the special-characters board instead). On the other hand, the Oculus Go adopts the calculator layout for any numeric input (I tested it on a web application).

keyboard screenshots from google and apple

Android 6 left, iOS 9 right (http://inputtypes.com)

photo from VR keyboard

Oculus GO Keyboard (2018) — Source
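A minimal sketch helps show how thin that choice is for a web page today (this is an assumption about current mobile-browser behavior, not something covered by the studies cited above): a page can only hint at which keypad it wants through the input's "type" and "inputmode" attributes, and the platform supplies whichever layout it has inherited.

    // Minimal sketch, assuming current mobile-browser behavior; exact keypad
    // layouts vary by OS and browser version.
    const phoneField = document.createElement("input");
    phoneField.type = "tel"; // typically brings up the phone-style 1-2-3 keypad, letters and all

    const amountField = document.createElement("input");
    amountField.inputMode = "decimal"; // typically brings up a digits-plus-separator keypad

    document.body.append(phoneField, amountField);

In other words, developers ask for a category of keypad rather than a specific layout, which is exactly the reuse of inherited patterns discussed below.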

So, why did Apple and Google choose the phone layout over the other — even keeping the letters underneath the numbers? Why not create a special numeric keypad optimized for thumb input on phones, or a keyboard easier to use with pointing devices on VR headsets? Considering that neither of the two historical layouts has a clear speed advantage, the main benefit is readability, and most likely the reason lies in low maintenance and the reuse of existing patterns within the software: smartphones keep the legacy of phones; Oculus and Xbox keep the legacy of desktop applications.

Timeline

  • 1642: Blaise Pascal’s calculator
  • 1822: James White’s concept of a key-driven calculator machine
  • 1844: Schwilgué’s machine, first numeric keyboard in history
  • 1857: Thomas Hill’s machine, ancestor of the comptometer
  • 1874: E. Remington & Sons began to manufacture and market a subsequent model of the Sholes & Glidden Type Writer
  • 1879: Ritty’s first cash register in history
  • 1885: Comptometer, first columns 9-to-1 layout adding machine
  • 1887: Early prototypes of push-button mini-phones
  • 1894: Cash register NCR Model 79 is born — vertical columns
  • 1902: Dalton, first 10-key machine (now includes the zero)
  • 1914: Sundstrand’s first 10-key machine with “3x3+1” layout
  • 1919: Western Electric & AT&T introduce rotary-dial telephones
  • 1940: Olivetti Dividisumma introduces divisions
  • 1940: IBM 10-keypunch card machine, 123 format on top
  • 1955: AT&T start testing push-button telephones
  • 1963: Bell introduces 10-key push-button telephones
  • 1963: Canon prototypes first electronic calculator with a luminous display
  • 1966: Sharp/Facit commercialized electronic calculator with a luminous display
  • 2007: Apple introduces iPhone, along with a calculator app
Works cited
  • Bellis, M. (2013). The History of the Computer Keyboard — From an Inventor Perspective. [online] Theinventors.org. Available at: http://theinventors.org/library/inventors/blcomputer_keyboard.htm [Accessed 9 Jun. 2018].
  • Bellis, M. (2018). Who Invented the Cash Register? [online] Thoughtco.com. Available at: https://www.thoughtco.com/cash-register-james-ritty-4070920 [Accessed 9 Jun. 2018].
  • Durant, W. (2011). [1912 Dalton Adding, Listing and Calculating machine]. [online] The Portal to Texas History. Available at: https://texashistory.unt.edu/ark:/67531/metapth969/ [Accessed 9 Jun. 2018].
  • Dalakov, G. (2018). History of Computers, Computing and Internet. [online] Available at: https://history-computer.com [Accessed 9 Jun. 2018].
  • Lutz, M. C., & Chapanis, A. (1955). Expected locations of digits and letters on ten-button keysets. Journal of Applied Psychology, 39(5), 314–317.
  • Deininger, R. L. (1960). Human factors engineering studies of the design and use of pushbutton telephone sets. The Bell System Technical Journal, 995–1012.
  • Meehan, J. R. (1952). How to Use the Calculator and the Comptometer. Gregg Publishing Division, McGraw-Hill Book Company, Inc., 1–4.
]]>
Design & History Articles
<![CDATA[Portugal by Sebastião Rodrigues’ pencil]]> https://www.doc.cc/articles/portugal-by-sebastiao-rodrigues-pencil https://www.doc.cc/articles/portugal-by-sebastiao-rodrigues-pencil Thu, 14 Mar 2024 03:34:20 GMT

Portugal by Sebastião Rodrigues’ pencil

Sebastião Rodrigues wearing thick glasses

The recent history of design in Portugal in general, and of graphic design in particular, is certainly still to be written. In other words, it is a story that is written every day, thanks to the work, often in difficult contexts, of thousands of professionals and teams. It is a story still searching for its own narrative, its borders and styles still being explored, still dealing with the question of identity: whether or not there is a unique “national style” of Portuguese graphic design.

In addition to the identity debates around Portuguese graphic design, there is also a challenge of memory. History, whatever it may be, has the mission not only of preserving thought but also, especially in an area such as design, of inspiring the present.

Among the many historical readings that can be made of graphic design in Portugal, one will undoubtedly be based on the personal journeys of Portuguese designers, many of whom worked at a time when the profession was not even recognized by that name. When talking about this history, its authors, and its movements, still being uncovered, one of the unavoidable names is that of Sebastião Rodrigues.

A journey that is intertwined with the country’s history

Sebastião Rodrigues lived through a time of great political change. Born in Dafundo on 28 January 1929, a year after Salazar joined the Council of Ministers as Minister of Finance, he was the son of a middle-class family. Like many others at the time, Sebastião attended primary school, starting in 1936. There he would also find one of the figures he himself mentions as among the most important of his life, his primary school teacher, D. Clotilde. As a result of the economic difficulties that plagued practically the whole country at the time, Sebastião Rodrigues started working at the age of 13 in the office of a Swedish company, Electro-Lux, while also helping his father in the advertising department of the newspaper “A Voz”.

In 1945, just as the Second World War, from which Portugal had managed to stay away, was ending, Sebastião Rodrigues was invited to work at APA — Artistic Advertising Agency. It was also at APA that Sebastião met Manuel Rodrigues, who would become practically his constant workmate. The closeness between the pair lasted more than twenty years, a relationship interrupted only by Manuel Rodrigues’s death in 1965.

The National Secretariat for Information

Sebastião Rodrigues could easily be labeled a designer of the regime. While he did collaborate with the National Information Secretariat, the SNI, he was not involved in politics, maintaining his focus as a designer on solving the graphic problems posed to him. This close collaboration with the SNI was also the opportunity for Sebastião Rodrigues to embrace, body and soul, the craft that until then had been a free-time occupation: graphic design. More on this later in this article.

Poster promoting tourism to Portugal

Portugal by Sebastião Rodrigues

The Almanac

One of the most significant works of Sebastião’s career was the magazine “Almanaque”, for which he was responsible for the graphics between 1959 and 1960. “Almanaque” was a small monthly magazine with an aura of both boldness and intervention, playing with its readers through written and drawn subtlety. The publication ran for 14 issues over the course of its life.

The importance of this magazine within Sebastião Rodrigues’s overall body of work can easily be compared to the importance of the figures also involved in the project, such as Figueiredo Magalhães, José Cardoso Pires, Vasco Pulido Valente, Sena da Silva, Pilo da Silva, Luís Sttau Monteiro, Augusto Abelaira, Alexandre O’Neill, José Cutileiro, Eduardo Gageiro, and João Abel Manta, the latter also a great figure in the history of Portuguese graphic design in the second half of the 20th century.

Nine covers of the Almanaque magazine designed by Sebastiao Rodrigues

Almanaque by Sebastião Rodrigues

Trips through the north of Portugal

While involved in the “Almanaque” project, Sebastião Rodrigues also became a fellow of the Calouste Gulbenkian Foundation, which led him to travel for six months through the north of Portugal, carrying out research on popular graphic material that would prove decisive in the construction of all of his graphic work. Parallel to this extremely creative period for Sebastião Rodrigues, Portugal saw one of its most striking conflicts, the Overseas War, begin in its African colonies in 1961.

International recognition

Beyond the national panorama and all the collective and individual exhibitions in which Sebastião Rodrigues had participated until then, the year 1962 definitively marked recognition abroad for what was already one of the greatest references of Portuguese graphic design. That year, Sebastião Rodrigues saw his work recognized and published in Who’s Who in Graphic Art and in the Graphics Annual International Year Book of Advertising Art.

The Calouste Gulbenkian Foundation

Sebastião Rodrigues was always a creator closely linked to cultural dissemination and promotion. In 1963, he started one of the collaborations in this field that would bring him the most recognition: graphic production for the Calouste Gulbenkian Foundation, with the Portuguese designer in charge of the entire output of catalogs, posters, leaflets, exhibitions, and more.

There are clients who categorically mark Sebastião Rodrigues’s career. The role the designer played in the design of many of the foundation’s communication materials at this time is, given its complexity and multiplicity of applications, without a doubt one of the most important milestones of that career.

Poster designed by Sebastiao Rodrigues for a theater company that references the Japanese flag aesthetic

Fundação Calouste Gulbenkian by Sebastião Rodrigues

The Portuguese Association of Designers

Practically ten years later, on an April morning in 1974, Portugal saw one of the longest days in its history begin. April 25, 1974 is the date of Portugal’s Carnation Revolution. A new chapter in the history of Portugal started there, just as in 1976 a new path opened for Portuguese design with the creation of the Portuguese Designers Association — which had Sebastião Rodrigues as one of its founding members.

That year, 1976, also closed a troubled transition period for Portuguese society after the 25th of April. It was precisely in 1976 that the new Portuguese Constitution was promulgated, the result of a free electoral process that had taken place, symbolically, a year earlier, on April 25, 1975. The Constitution promulgated in 1976 is the same one still in force today, having undergone several specific revisions over the years.

The Monastery of Batalha

In 1977, Sebastião created one of the graphic icons of the “25 de Abril”, a poster alluding to the date as a symbol of Portuguese patriotism, in which the red of passion and the green of hope intertwine and give way to illuminating lettering. It is the most explicit demonstration that Sebastião Rodrigues never had political ambitions of any kind, and that the only thing that moved him was the enormous passion he had for his craft.

The end of the seventies and the whole of the eighties were, for Sebastião Rodrigues, the reverse of what they would be for Portugal. While Portugal sought to recover from the backwardness and isolation suffered under the previous regime, aiming to resume its pace of development, Sebastião Rodrigues enjoyed one of the most fertile phases of his career. He continued to develop a large part of the graphic communication of the Calouste Gulbenkian Foundation, while also working for another reference client in the field of heritage, the Monastery of Batalha. In addition to these two milestone clients, the Portuguese designer also took on a flurry of other graphic projects, which make his work a remarkably complete and diversified collection, spanning book covers, page layouts, exhibitions, and much more.

Poster designed for a cultural event showcasing sculptures on the wall of an old cathedral
Poster designed for a cultural event showcasing the wall of a classic Portuguese building

Mosteiro da Batalha by Sebastião Rodrigues

National recognition

Before dying in June 1997, Sebastião Rodrigues still had time to see his work recognized both by the international design community, through ICOGRADA (International Council of Graphic Design Associations), which granted him its Award of Excellence in 1991, and by civil society, through the 1995 award of the Medal of Grand Officer of the Order of Merit, presented by President Mário Soares on the Day of Portugal.

Sebastião Rodrigues died in 1997, at the age of 68, the victim of a prolonged illness, leaving behind an immense body of graphic work spread across several political, social, and cultural periods of 20th-century Portuguese society. He left a legacy of unquestionable value in the construction of the identity of contemporary Portuguese graphic design, directly and indirectly influencing many designers of the following generations.

Four simple ideas in a work that is all plural

Sebastião Rodrigues carried out almost all of his professional activity in Lisbon. A Portuguese in the broadest and most symbolic sense of the word, he had a very special fascination with Portuguese culture and folklore. The designer born in Dafundo was one of the most tangible examples of how a creator can and should be influenced by his ethnographic, archaeological, and popular origins without ever losing a sense of modernity and graphic freshness, well ahead of his time.

Characterizing Sebastião Rodrigues’s graphic work is not a simple exercise. Given its multifaceted qualities and inspirations, it is a challenge to reduce this legacy to a restricted set of definitions. Even so, there are a few premises running through all of his work that are entirely decisive.

01
The Portuguese inspiration

Much of Sebastião Rodrigues’s work was developed during the Estado Novo, more specifically with the SNI (National Secretariat of Information) as a client. Naturally, all these graphic creations had to reflect the values commanded by the Estado Novo, yet even within that frame Sebastião Rodrigues always knew how to imprint a contemporary, intelligent, and creative freshness on the popular identity of the country of Camões.

Sebastião Rodrigues was extremely well informed about everything going on internationally. Although Portugal was turned in on itself, the designer always knew how to break through that barrier and keep up with the evolution of the world. To think of Sebastião Rodrigues’s work without his graphic symbols and metaphors drawn from Portuguese popular culture is to think of anything except the work of one of the most important figures in the history of Portuguese graphic design.

Sebastião constantly sought to investigate and understand the world and culture in which he lived. It can almost be said that he lived with his atelier in Lisbon but thought in Portugal, mainly the so-called deep Portugal, through which he traveled more than once in search of inspiration and graphic material that would later become the basis of many of his creations.

The height of this pursuit came between 1959 and 1960, when, as a fellow of the Calouste Gulbenkian Foundation, he traveled through the north of Portugal for six months, seeking out and discovering influences and illustrative graphic material of a popular and ethnographic nature.

02
The design of the illustration

To think of Sebastião Rodrigues’s work is automatically to think of the smooth, strong forms of his works for Verde Gaio. It is to think of the linear, direct cover designs of the magazine “Almanaque”. It is to think of geometry transformed into form through simple figures on the covers of many of Sá da Costa Editora’s books, and of all the characters that populated his posters, flyers, and leaflets.

Sebastião Rodrigues, like any other creator, had several distinct moments in his work. He began working while still influenced by Victor Palla’s neo-realist ideas, but quickly started to look for a language of his own. From the fifties onward he developed much of his language of illustration, which later, from the sixties, would reach a state of purification and almost absolute simplicity.

Looking at the 1953 poster “Visitez le Portugal” and the 1960 poster “Portugal Ikofa 1960”, one can clearly see two creative moments in the designer’s journey. Without a doubt, the language of the second poster is the one that most characterizes the Portuguese designer: a language that lives on geometry, transposed into simple forms and smooth colors, absolutely immediate in its communication yet executed with great humor and rigor.

This work of thinking through illustration is undoubtedly the most important milestone in the graphic designer’s output, and even later, from the mid-seventies, it would evolve toward incorporating and exploring photography, as shown by many of the creations for the Calouste Gulbenkian Foundation and the Batalha Monastery in the eighties.

Poster for Portugal Ikofa 1960 event with colorful illustrations of a figure that resembles a bird

Portugal Ikofa 1960 by Sebastião Rodrigues

03
Functional typography

In the typographic field, too, Sebastião Rodrigues underwent a transformation over time. He evolved from a purely functional use of typography, common in many of his first book covers, to the use of typography as image, as in the retrospective catalog of Eduardo Viana’s work. His titles and typographic pieces largely reflected his functional vision of communication; that is, typography was used without elaborate compositions, but always in play, as he used to say, with all the elements of the piece.

In some cases we may even find a fusion of image and typography, as in the poster for the exhibition “Pintura Portuguesa de Hoy”, held in 1973 at the University of Salamanca, or in the graphic composition in which Sebastião Rodrigues depicts a friendly conversation between two letter f’s, closely watched by an angel at the top of an iconic maze.

From a purely informative, functional use, Sebastião Rodrigues evolved over time toward an image-like use, in the sense that the letter and the word formed graphic marks, strengthening and in some way interacting with the message, which was always clear and immediate.

His mastery of typography and its continued exploration also owed much to his awareness and deep knowledge of the reproduction techniques of the time, when the designer’s work was built between photocomposition and the manual drawing of every element.

04
The exquisite technique

Sebastião Rodrigues worked for some time with his father in the advertising department of the newspaper “A Voz”. There he would do small jobs composing the metal type that would later print the newspaper’s various editions. It was certainly not the method invented by Gutenberg, but in its genesis it was, in fact, a very similar one.

This knowledge and mastery of most reproduction techniques, which at the time were transitioning from photolithography to offset, gave Sebastião Rodrigues an unusual but decisive expertise, which would later translate into the rigor his works are recognized for. Beyond being an atelier designer, Sebastião Rodrigues was also a workshop designer, one who, alongside the master printers, composed, retouched, and adjusted every piece exhaustively, down to the smallest detail. His unfinished training in a professional course in mechanical locksmithing contributed greatly to this workshop approach to what would become his profession of body and soul: graphic design.

Beyond all the graphic and visual exuberance that Sebastião Rodrigues’s works emanate, his creations also pushed the printing techniques of the time to their limits. The hands-on quality and proximity to the workshop that these processes implied, and the fascination they provoked in Sebastião, were among the main reasons the designer never took the new possibilities of the computer into account in his creations, although these only came into focus at a stage when Sebastião’s creative output had already decreased.

All the ease that the Mac revolution would bring to graphic design never seduced this designer, who, beyond the act of creation itself, was always fascinated by the craft of reproduction, instilling an almost mathematical and scientific rigor in all of his communication work.

A designer made by drawing

In Sebastião Rodrigues everything is drawing. More than the act of tracing a shape, in this Portuguese designer’s graphic work everything is drawing and mental process, from the gathering of material and the idea to the finalization and printing of the piece. Everything is, and everything must be, under the control of the designer, who in the true sense of the word has no vocation to be a mere decorator. As Sebastião himself maintained: “the only thing that matters is the paper; everything else is play, but always play with the paper.”

Sketches for graphic projects ideas by Sebastiao Rodrigues
Sketches for graphic projects ideas by Sebastiao Rodrigues

Sketches by Sebastião Rodrigues

Sebastião Rodrigues’s work spans a whole series of social changes in Portugal. He knew better than anyone how to keep his work free during the censorship of the Estado Novo. He knew like no other how not to subordinate graphic design to values of very little human worth. He was always a figure of great character and courtesy, which translated into a close relationship with all those who had the pleasure of his company.

In Sebastião Rodrigues everything is drawing. Typography, illustration, photography: everything passes through a creative process that both translates and is inspired by the ethnographic and popular origins of Portugal. Beyond his graphic work, Sebastião knew how to “draw Portugal” like no one else through his posters, leaflets, and exhibitions. He knew how to respect the identity of all the people who were the theme of many of his works.

Everything that Sebastião Rodrigues represents is much more than the history of Portuguese graphic design. It is the answer found by someone who saw in his own time traces of the future of his profession, a graphic design that would, over time, “transform” into communication design. To a certain extent, Sebastião Rodrigues can be considered a visionary; if he were not, his work would not be the object of study, and of fascination, that it is today.

Works cited

  1. Sebastião Rodrigues by Wikipedia (2013)
  2. Fundação Calouste Gulbenkian
  3. Almanaque magazine by Hemeroteca Digital de Lisboa
]]>
Articles Design & History
<![CDATA[Beyond form and function: Design is poetry]]> https://www.doc.cc/articles/design-is-poetry https://www.doc.cc/articles/design-is-poetry Thu, 14 Mar 2024 03:33:36 GMT

Beyond form and function: Design is poetry

Written by Maximilian Speicher

Art direction by Manoel do Amaral

Last year, my brother gave me Mortimer J. Adler and Charles Van Doren’s classic How to Read a Book (Adler & Van Doren, 2014) as a birthday present (a must-read for everyone who wants to truly read). While the book touches on pretty much every kind of literature there is and how to intelligently read it, one part especially resonated with me—it was the one about poetry. I myself have been interested in poetry since I was in elementary school, and I have read my fair share of lyric works, from Emily Dickinson to Erich Fried to Derek Walcott, always in search of an understanding of what its essence is. What constitutes a poem? Now, Adler and Van Doren (2014, p. 222) write:

“Poetry [...] is a kind of spontaneous overflowing of the personality, which may be expressed in written words but may also take the form of physical action, or more or less musical sound, or even just feeling. There is something to this, of course, and poets have always recognized it. It is a very old notion that the poet reaches down deep into himself to produce his poems, that their place of origin is a mysterious ‘well of creation’ within the mind or soul. In this sense of the term, poetry can be made by anyone at any time, in a kind of solitary sensitivity session.”

In this sense, really anything can be a poem, as long as it is the result of the above-mentioned “overflowing of the personality,” of someone “reach[ing] down deep into [themselves]” and touching their “‘well of creation’” (Adler & Van Doren, 2014). And while they only specifically mention written words, physical action, musical sound, and feeling, such a result can just as well be a design. And why shouldn’t it? Poems are not just form; they always also have a function—as does design. They intend to convey a deeper truth. They want to evoke feelings that (hopefully) result in a certain experience—as does design. In this sense, poems serve as an interface between author and reader who engage in an asynchronous dialog (Adler & Van Doren, 2014)—as does design, just with “users” instead of readers. Moreover, design can hardly be decoupled from art anyway, despite certain initiatives in industry to make things purely data-driven and KPI-focused (Speicher, 2022); and at least visual and graphic design already border on visual poetry sometimes (see, e.g., ctrl + v journal, https://www.ctrlvjournal.com). Maeda (2019, pp. 79-80) writes:

“Hearing how arts classes in public schools were being rationed from hours down to just minutes per week, and owing much of my technology career’s success to the arts, drove me to wonder how to reveal the limits of a STEM-only approach. [...] I wanted to understand how companies like Apple and Airbnb managed to fully leverage the combined capabilities of STEAM [editor’s note: STEM + arts] in their products and services. What did I find? That design-infused companies fully understood how art is the science of enjoying life, and thus, in order for their customers to enjoyably live with their products, they needed to involve artists in how their products were made.”

poetry typographic art

I repeat, “art is the science of enjoying life”! And I would argue that a design that is not art doesn’t have a soul (cf. Buckley, 2021).

In a recent team meeting, colleague A presented a design they had been working on. I can’t exactly remember what it was anymore, but it was nothing where you’d say you can go completely wild with creativity. I think it had something to do with placing flags or labels on campaign teasers and product pictures in the e-commerce shop we’re working on. Still, colleague A came up with a design that prompted colleague B to say something along the lines of “I like how you managed to still bring your personality into this.” And this is by and large what I’m trying to get at with this essay. I believe that at least for the vast majority of designs, no matter how trivial or mundane they may seem, it should be possible to dip into your individual “well of creation” and give them some personality, some soul, some poetic meaning (if you want, that is). Of course, you still have to adhere to established design principles, underlying design systems, a human-centered approach, and the identity of the brand you’re designing for. But in most cases there’s some leeway, some tiny—or not so tiny—part of a design into which you can channel your inner poet. A brand’s personality can (or should) always be the first, “obvious” layer of meaning. The designer’s poetry can then occupy the subconscious, second layer of meaning. In digital product design, for instance, this can manifest when the designer sees beyond standards, guidelines, and patterns to orchestrate a serendipitous user experience—a flow that mirrors what the user is thinking and enables them to discover a deeper meaning through their interactions, maybe topped off with a whimsical micro-interaction that makes an interface enjoyable and memorable.

How to channel that inner poet? That’s a good question, and I think it’s very difficult, if not outright impossible to give specific guidelines or a how-to on that. The best general guidance probably is: Read poetry (a lot), learn about what the essence of great poetry is, learn how to use metaphors and other stylistic elements, and start writing poems yourself. This is always also a (self-)editorial process. Once you have a draft, you revisit it closely and think about where a clever metaphor might be missing or where it needs more vivid imagery. Then, at some point, apply all of this to your designs instead of written words. Why shouldn’t carefully selected colors, a certain arrangement of shapes, or some white space serve as metaphors? Why shouldn’t it be possible to introduce a very subtle critique of capitalism into an e-commerce checkout, without using any words? Whenever you design something, identify the part of it that leaves room for artistic interpretation, then touch your “well of creation” and fill that room. Now, since designing for a digital product is rarely a solo endeavor, part of the challenge in a design team is to have everyone uncover their “well of creation” and, just like in art collectives, find a shared language, a shared purpose, and message—the parts of design that are hard to capture through guidelines, but need to be built with a deeper team connection. 

Photo by Theodor Vasile on Unsplash.

While I can’t give specific instructions on how to bring poetry into a design, I at least want to give examples of designs that are undoubtedly at the same time poems. The first one is Casa Batlló. One of Antoni Gaudí’s masterpieces located on the Passeig de Gràcia in Barcelona, I recently had the pleasure of taking part in a guided tour there. Casa Batlló was a commissioned work for the Batlló family: the renovation of an existing house that took place from 1904–1906 (Wikipedia contributors, 2022). As per our guide, the entire building is an homage to nature—e.g., Gaudí worked almost exclusively with flowing, natural shapes (Wikipedia contributors, 2022)—and, more specifically, to the sea and life in the sea. Among other things, the main staircase resembles the backbone of an underwater “beast,” along with “a series of skylights that look like turtle shells” and “sinuously vaulted walls” (Wikimedia Commons contributors, 2022). At the same time, the building is highly functional, and innovatively so. It was the first residence in Barcelona to have two patios instead of just one, in order to achieve more balanced lighting throughout the house. Additionally, it features an air ventilation system that was radically new for the time, for which Gaudí used the metaphor of fish’s gills (which can be opened and closed). According to Wikipedia, another “common theory about the building is that the rounded feature to the left of centre [sic], terminating at the top in a turret and cross, represents the lance of Saint George [...], which has been plunged into the back of the dragon” (Wikipedia contributors, 2022). All of these are just some of the metaphors and, subtle or not so subtle, references to nature and the sea that constitute the building. And while Gaudí was, to be perfectly honest, given almost completely free rein by the Batlló family, it is a prime example of how function, innovation, and poetic elements can be combined to achieve an exceptionally unique experience and, in the end, a poem turned design. I can’t even begin to describe what it feels like to walk through the building. That’s something everyone must experience on their own.

The second example of a design that is also a poem is the video game Proteus (Key & Kanaga, 2013), where the player has to explore an island, at first seemingly without a purpose and without a goal. There is no introduction, no instructions, no apparent storyline. You simply wake up(?) on an island and start wandering around, the music and ambient sounds and noises changing depending on where you go and whether it’s day or night. In his review of the game, Keith Stuart (2013) provides an interpretation of what Proteus might mean: “Throughout the history of literature the island has been used as a liminal zone between reality and the imagination; a place of mystery and isolation, where the rules of society no longer apply, and where stranded travelers can truly discover themselves. [...] As in Lost, it is somewhere beyond time with its own physical laws – and as in Lost, you arrive washed ashore as a stranger.”

The third example I want to give is a project that was displayed by the Museum of Modern Art, “Wind Map” (HINT.FM, 2022; MoMA, 2022). It’s a live rendering of wind currents in the U.S. and changes based on, well, whatever mood the weather is currently in. The map can be calm and soothing on one day and seem menacing on another (HINT.FM, 2022). The fact that it’s rendered entirely in black, white, and greyscale gives it an ethereal, otherworldly quality. The functionality of the project is undeniable as it has been used for a variety of practical applications (HINT.FM, 2022). But no matter how the wind currently flows, the deeper meaning that underlies the design is always that “‘[a]n invisible, ancient source of energy surrounds us, energy that powered the first explorations of the world, and that may be a key to the future’” (MoMA, 2022).

As one last example that’s closer to our field, I want to use Apple’s dynamic island. Being probably the major novelty of the new iPhone 14, it’s being widely discussed, with opinions ranging from “shameless gimmick” to “the next renaissance in UX design” (Zhang, 2022). But, regardless of whether it’s placed in a good or bad spot to reach easily with your thumb, maybe—just maybe—it also serves as a metaphor for the ubiquity of push notifications and multi-tasking, which are relentlessly holding our society hostage?

Now, you might ask yourself, why in the world does it matter whether a design is poetry or not? And I wouldn't necessarily argue that it does in the sense that everyone must perceive any design as a potential poem now. But it can matter. It matters to the designer, just like to any other craftsperson: if a designer is open to this thought, it allows them to think differently about their work, because they can add a second layer of meaning and thereby elevate a design beyond just function, form, and aesthetics. Creating a well-crafted poem is a highly satisfying experience. Nowadays, it's getting more and more difficult to get the same experience from crafting a design in the proverbial daily job at an arbitrary tech company where everything is predetermined by proxy KPIs, design systems, and brand identities (what Michael Buckley described as soul-less). Thinking about designs as poetry might help rekindle the flame.

On the other hand, it also matters a lot for companies because of what John Maeda (2019, p. 80) has found: “That design-infused companies fully understood how art is the science of enjoying life, and thus, in order for their customers to enjoyably live with their products, they needed to involve artists in how their products were made.” Ultimately, providing designs that go beyond just the “regular” form, function, and aesthetics yields a competitive edge, because it presents users with more enjoyable, more unique, and soulful products.

If you're searching for more meaning in your work as a designer, try to treat your designs, or at least parts of them, as poetry. Find the spaces for creative freedom, no matter how small they might seem, and fill them with a color, a shape, or a certain arrangement of elements that carries a message going beyond pure functionality and plain aesthetics. Even if no-one except you ever discovers that deeper message, your designs will be more meaningful. Self-fulfillment and happiness come from the ability to make things and to do so in a self-directed manner (Graeber, 2018). Designers already do the former on a daily basis, and if they treat design more like poetry, they can also have more of the latter.

There’s not enough (good) poetry in this world, and every designer can help change that.

Works Cited
  1. How to read a book by Adler and Van Doren
  2. Contemporary design has lost its soul by Michael Buckley
  3. Bullshit Jobs by David Graeber 
  4. Proteus by Key, Ed and David Kanaga
  5. How to speak machine by John Maeda
  6. Wind Map by Fernanda Bertini Viègas
  7. Wind Map on MoMA
  8. Listen to Users, but Only 85% of the Time by Maximilian Speicher
  9. Proteus – review by Keith Stuart
  10. Casa Batlló photos on Wikipedia
  11. Is the iPhone 14’s new Dynamic Island plain stupid or the next revolutionary UX pattern?  By Leon Zhang
Copyright © 2022 by Maximilian Speicher ● Originally published by DOC
]]>
Articles Design & Craft
<![CDATA[Lenses to peek into the metaverse]]> https://www.doc.cc/articles/lenses-to-peek-into-the-metaverse https://www.doc.cc/articles/lenses-to-peek-into-the-metaverse Thu, 14 Mar 2024 03:32:55 GMT

Lenses to peek into the metaverse

Written by Karina Nguyen

People immersed on their VR headsets in a messy dystopian room

Illustrations by deathburger

The virtual reality space is far from being exempt from risks to our society. Peeking into it through philosophical, technological, ethical, and cognitive lenses can help designers and engineers be aware of the risks and traumas it can cause.

people talking to an audience in a dystopian setting

Illustrations by deathburger

Philosophical lens:
Humans are imitative. Virtual reality fits the bill.

Plato’s Republic, one of the philosopher’s earliest works, posits the idea that art is imitative. Plato utilized this theory to dismiss art as being secondary or derivative, but also to suggest that because art imitates life, life also imitates art, in so far as we mirror the emotions of characters on stage.

Aristotle, Plato’s student, argued against this idea in his Poetics. He suggested that art was a venue wherein people can experience in a controlled way potentially overwhelming emotions, and learn thereby to gain better control over these emotions.

“Tragedy: a representation of an action that is heroic and complete and of a certain magnitude — by means of language enriched with all kinds of ornament, each used separately in the different parts of the play: it represents men in action and does not use narrative, and through pity and fear it effects relief to these and similar emotions.” (Aristotle)

So we see here two opposing views from the ancients on the relationship between art, emotion, and ethics. In the nineteenth century, inspired by the moral philosophy of Adam Smith, the novelist George Eliot offered a slightly different view for how art can affect our feelings and therefore our social relationships:

“The greatest benefit we owe to the artist, whether painter, poet, or novelist, is the extension of our sympathies…. a picture of human life such as a great artist can give, surprises even the trivial and the selfish into that attention to what is apart from themselves, which may be called the raw material of moral sentiment… Art is the nearest thing to life; it is a mode of amplifying experience and extending our contact with our fellow men beyond the bounds of our personal lot.”

This theory suggests that these emotions bring us closer to others and make us sympathize with them. I would argue that Eliot’s idea about what art is for, emotionally and socially, is still very prominent in our era, though we are much more likely to make the argument about novels than about video games.

Brecht found in Chinese acting a model of self-conscious, meta-artistic art that might interrupt the imitative tendency of the theater. He suggests that “the transference of these emotions to the spectator, the emotional contagion, does not take place automatically.” So the alienation effect keeps us from emotional immersion and contagion and therefore allows us to think and critique society.

Because each of these theories is grounded in the question of art as a technology of imitation, you might note that the central concerns in each are exacerbated the more realistic the art becomes. On one hand, this is about realism — how close to reality does this art seem? On the other hand, this is about the experience — how similar to the experience of life is my engagement with this art?

Virtual reality and the metaverse are no different.

person working on computer in a dystopian setting

Illustrations by deathburger

Technological lens:
Mirror, theatre, video game. 

It’s therefore no surprise that virtual reality is plagued by the same opposing views about its effects. In the movie The Matrix, virtual reality is compared to a mirror. In Striking Vipers (Netflix’s Black Mirror), we see it as the mechanism for playing a video game, in the most literal sense.

Virtual reality is like mimesis in its focus on imitating life; it is like the theatre and the game in its interactive engagement. Its aim is to create a version of reality that sheds its frame, is unmediated. 

The ‘virtual reality effect’ is the denial of the role of signs (bits, pixels, and binary codes) in the production of what the user experiences as unmediated presence. This requires the disappearance of the computer and of language itself.

Joshua Rothman’s New Yorker essay “Are We Already Living in Virtual Reality?”, is about the next frontier in the virtual reality effect, toward less mediation or rather, toward greater inhabitation. It entails, as much of Rothman’s work for the New Yorker does, a combination of a profile — usually of one or more professors in philosophy or cognitive science — and a meditation on the implications of their research. Rothman uses, throughout his piece, analogies to the various artistic media.

He discusses “a tracking hardware which allows your virtual body to accurately mirror the movements of your real head, feet, and hands — and a few minutes of guided, Tai Chi-like movement before a virtual mirror” (the technology also frequently uses mirrors). The obsession with the “out of body experience” that leads the main subject of the piece, the philosopher of mind Thomas Metzinger, to develop virtual embodiment is compared to a theatrical stage set:

“In 1983, the psychologist Philip Johnson-Laird had published a book called ‘Mental Models,’ in which he argued that people often think not by applying logical rules but by manipulating models of the world in their minds…. Reality, as we experience it, might be a mental stage set — a representation of the world, rather than the world itself. Metzinger started to think about how such a model might be constructed. Some internal mental system must function as an invisible, unconscious set dresser.”

In the memoir ‘Dawn of the New Everything,’ the V.R. pioneer Jaron Lanier recalls evangelizing the technology by describing a virtual two-hundred-foot-tall amethyst octopus with an opening in its head; inside would be a furry cave with a bed that hugs you while you sleep. Later, the ‘Matrix’ movies imagined a virtual world so accurate as to be indistinguishable from real life. Today’s most advanced V.R. video games conjure visually rich space stations (Lone Echo), deserts (Arizona Sunshine), and rock faces (The Climb). The goal is to convince you that you are somewhere else.

These three technological and artistic analogies — mirror, theatre, video game — once again ground Rothman’s analysis of virtual reality and emphasize its mimetic and experiential affordances. And once again, the spectre of how those affordances might affect society, emotionally and ethically, hovers over the essay.

cowboy cyborg reading book law of the machine

Illustrations by deathburger

Ethical lens:
Why do we need a code of ethics for VR?

Although Virtual Reality has been around in more or less the form it is now for thirty, forty years, with head-mounted displays, and so forth, there’s nobody who’s spent hours and hours, week after week, month after month, in virtual reality.

When that happens, we should be prepared. As Rothman puts it: ‘There’s going to be a moral panic around V.R. as it spreads and spreads, just as there was with comics, with television. It’s going to be the root of every evil, and there’s going to be a huge campaign against it. I hope the companies realize this, because they have to be prepared for it.’

In their V.R. code of ethics, Metzinger and Madary predict that the ‘risk of users suffering psychological trauma will steadily increase as V.R. technology advances.’ Metzinger worries about scenarios that encourage the character traits that psychologists refer to as “the dark triad”: narcissism, Machiavellianism, and psychopathy.

This V.R. code of ethics is necessary because of the widespread panic that video games habituate people — especially young, disaffected men — to commit acts of violence and cruelty. Anna Anthropy’s optimism in Rise of the Videogame Zinesters (a book I highly recommend reading!) about the hacking of video games must be contrasted with its darker uses, as in the video game Grand Theft Auto, which gamers hacked in order to allow their avatars to sexually assault the avatars of other players in the game. Rothman gives an example of how some games — many games, in fact — build this violence in, no hacking necessary: in 2015, the video-game company Capcom released Kitchen, a virtual-reality horror scenario in which the player is tied to a chair while a deranged woman plunges a knife into his thigh.

In the V.R. game Surgeon Simulator, players use power drills, bone saws, and other tools to vivisect a humanoid alien that writhes in pain on the operating table. As in most V.R. video games, players in Kitchen and Surgeon Simulator move in fanciful ways and are, at best, semi-embodied. Even so, in the book ‘Experience on Demand,’ Jeremy Bailenson, a leading V.R.-embodiment researcher at Stanford, reports that after performing virtual vivisection he ‘simply felt bad. Responsible. I had used my hands to do violence.’ The first-person V.R., like the first-person shooter game, makes us not just inhabit another person, it can make us complicit with their actions.

Reproducing a feeling is independent of realism

Reproducing a feeling is independent of realism; inhabitation is far more crucial. “On some level, the brain doesn’t know the difference between real reality and virtual reality.” Embodied simulations seem to slip beneath the cognitive threshold, affecting the associative, unconscious parts of the mind. ‘It’s directly experiential… It’s not “I know.” It’s “I am.”’ ‘It’s not real, but it doesn’t matter,’ Slater said, watching me. ‘In some sense, it’s a real experience.’ One way to put it is that virtual embodiment is not epistemological — about what we know — but phenomenological — about what we experience. And that’s what we all need to understand about the future of VR as designers, researchers, and engineers.

futuristic concert with vr headsets

Illustrations by deathburger

Cognitive lens:
Not somewhere else, but someone else.

The virtual embodiment Metzinger decided to recreate intensifies this process, not by convincing you that you’re somewhere else, but by convincing you that you are someone else. “This doesn’t require fancy graphics. Instead, it calls for tracking hardware — which allows your virtual body to accurately mirror the movements of your real head, feet, and hands — and a few minutes of guided, Tai Chi-like movement before a virtual mirror.”

Rothman notes that this improves not the mimetic reality but the experiential reality of embodiment: “My virtual self didn’t feel particularly real. The quality of the virtual world was on a par with a nineteen-nineties video game, and when I leaned into the mirror to make eye contact with myself my face was planar and cartoonish.”

Du Bois may have derived this concept of double consciousness from the research of his professor, William James, the founder of modern psychology, who discussed the “Double or Secondary Personality” in his book The Principles of Psychology — a proto-theory of what we now call multiple personalities, and sometimes schizophrenia. Jung and Freud both developed this idea in their work as well, though usually in metaphoric terms.

The notion that seeing oneself from the outside will produce a remarkable shift in consciousness feels natural to us now. But virtual embodiment defamiliarizes the platitude of “getting to know yourself” — and makes it all too literal.

Virtual reality and the rise of the metaverse have a deep connection with the human imitative nature, working as a mirror for the projected ideas we have of ourselves and of our society. However, this same mirror can expose and augment problematic issues that we as designers need to be aware of to prevent and address.

]]>
Articles Design & Society
<![CDATA[Graphic notation: a brief history of visualising music]]> https://www.doc.cc/articles/graphic-notation https://www.doc.cc/articles/graphic-notation Thu, 14 Mar 2024 03:31:49 GMT

Graphic notation: a brief history of visualising music

Written by David Hall

music notation in spiral format

George Crumb, Spiral Galaxy (1972)

Since the output of a designer’s work is not just the product itself — it also comprises mockups, journey maps, flowcharts, and prototypical representations of how the product will render on a user’s device — documentation is an incredibly important tool to rally our teams toward delivering an excellent product experience.

When documenting designs, we often follow templates and standards, annotating things in a way that is both conventional and clear. We want our audience (i.e., the developers building the product) not to have to re-learn our design intentions at every stage of the design.

Similarly, a sheet of music is not the music itself. Music notation is a set of instructions for someone who is “building” or “playing” the song. The reality is that design and music intersect in many areas: fashion, art, filmmaking, set design. Yet one relatively obscure but staggeringly creative area is the design of graphic notation used by composers. Graphic notation remains relatively unknown outside the sometimes rarefied world of orchestral and experimental music. It is a great example of how breaking from a conventional system, whether in sheet music or a pattern library, can open up creative possibilities.

Composers have always grappled with ways to express themselves, and in the twentieth century, several began using this radical graphical approach to writing scores. It was a two-fingered salute to the prevailing musical establishment.

traditional music notation and graphic music notation

traditional music notation (left) and graphic music notation (right)

Graphic notation functions in the same way as traditional musical notation but uses abstract symbols, images, and text to convey meaning to the performers. A few of these composers incorporate traditional notation and then bend it in unique ways.

The visual comparison between traditional and modern graphic notation can be striking. Traditional notation is linear and rigid. Modern graphic notation is open, can offer flexibility, and allows the performer to interpret the composer’s ideas.

one of the first western music notations

Aurelian of Réôme (840 C.E.), one of the first western music notations

It all started around 840 C.E., when a former monk named Aurelian of Réôme created one of the first examples of Western musical notation. This was an early attempt at a treatise on music theory, called Musica disciplina.

By the Baroque era in Europe, composers wanted to set down their work with greater consistency and leave less interpretation open to performers. Now musical language was becoming codified. Yet various composers like Beethoven, then Gustav Mahler in the late nineteenth century, strained to break free of the traditional boundaries. Their orchestral scores are full of scribbles, footnotes, and marks, as if sticking to the rules was too much for them.

music notation by beethoven

We can feel Beethoven trying to break free from the constraints of conventional notation in his score from his Piano Sonata

In the early twentieth century, composers such as Henry Cowell began experimenting with notation and his New Musical Resources (1930) was a radical attempt to change musical notation. Increasingly throughout the twentieth century and following the horrors of the Second World War, there was a growing feeling among composers that traditional Western notation was inadequate for expressing their musical ideas.

The earliest example of full-blown graphic notation within a score is Morton Feldman’s Projection 1 (1950) for solo cello. It features an entirely original notation, which looks more like a circuit diagram. It sounds and looks ahead of its time. Listen to it here.

Throughout the 50s and 60s, a new generation of heavyweight post-war composers like Krzysztof Penderecki, Karlheinz Stockhausen, John Cage, and Roman Haubenstock-Ramati started using graphic notation as a serious and necessary alternative to traditional forms of notation.

graphic notation by Cardew using circles and lines

Arguably the greatest musical score ever designed, and a pinnacle of graphic notation, is Cornelius Cardew’s Treatise (1963–1967). The piece comprises 193 pages of highly abstract score. This is the Sistine Chapel of notation. Cardew’s training as a graphic designer is obvious; he even drew on principles of cognitive psychology, which are central to design.

Cardew’s motivation was to inspire creativity and interpretation of the performer. The score gave no specific instructions on how to play the piece, not even what instruments to use. It’s a dense piece, allowing multiple explorations and interpretations. Following along with the score is a rewarding experience. Listen and watch the score unfold.

As the complexity and abstraction of music increased, so too did the scores. Many of the pieces that these scores are referencing are obtuse to the point of incomprehensibility, but there remains genuine beauty in them.

graphic notation by Brian Eno

Brian Eno is one of the more well-known contemporary musicians using graphic notation. Eno is very open about not having a formal musical education and thus being unable to notate in an orthodox way. He has used graphic scores out of necessity and has made it a normal part of his process.

Brian Eno told an interviewer that ‘quite a lot of what I do has to do with sound texture, and you can’t notate that anyway… That’s because musical notation arose when sound textures were limited.’ Eno gave the musicians on the recording of Music for Airports a lot of latitude in interpreting the score, with instructions such as ‘play the note C every 21 seconds.’ I can see the score on the back of my faded copy of Music for Airports.

As the interfaces and experiences we design become more complex and nuanced, just like the musical textures Eno introduced, designers should ask themselves whether the notation in place is the best way to convey what they want to communicate and achieve for their users and business.

graphic notation by John Cage with abstract lines and shapes

John Cage, Fontana Mix (1958)

graphic notation using a diagram made of geometric lines and shapes

Toru Takemitsu, Study for Vibration (1962)

graphic notation mixing geometric shapes and graphisms

Cathy Berberian, Stripsody (1966)

graphic notation using a diagram made of geometric lines and shapes

Andrzej Panufnik, Universal Prayer (1968–69)

music notation in spiral format

George Crumb, Spiral Galaxy (1972)

music notation with circles, shapes, and lines

Roman Haubenstock-Ramati, Konstellationen (1972)

graphic notation in shape of mushroom cloud

Albert Bernal, Impossible music #9 (2006-)

Works cited
  1. Manuscrits de la Bibliothèque de Valenciennes by Bibliothèque nationale de France (2012)
  2. Notations 21 by Theresa Sauer (2009)
  3. Cardew’s Treatise: the greatest musical score ever designed by David Hall (2020)
  4. Morton Feldman by Paul Griffiths (1995)
  5. New Again: Brian Eno by Glenn O’Brien (2016)

]]>
Design & Culture Articles