I mentioned before how the old-fashioned pixels on CRT screens have little in common with pixels of today. The old pixels were huge and imprecise, blending with each other and requiring a very different design approach.
Some years ago, the always-excellent Technology Connections also had a great video about how, in the era of analog television, pixels didn’t even exist.
But earlier this month, MattKC published a fun 8-minute video arguing that for early video games it wasn’t just pixels that were imprecise. It was also colors.
What was Mario’s original reference palette? Which shade of blue is the correct one? Turns out… there isn’t one.
Come to learn some details about how the American NTSC TV standard (“Never The Same Color”) worked, stay for a cruel twist about PAL, its European equivalent.
In the video linked in the previous post, one of the hosts mentions at one point:
The biggest rebuttal is that the greatest audio engine of all time, the one baked into all Apple products, has 16 volume steps. And no one has ever been like, “My iPhone doesn’t have enough granularity to the volume.”
But of course they have. And the solution is easy: on both the iPhone and Mac you can grab one of the many volume sliders and immediately get a lot more precision:
(Can’t help but notice this volume control has a nice set of notches, too!)
But if I told you that you can actually increase the precision from 16 to 64 steps using the volume up/down keys, would you know how to do it?
Occam’s Razor: it must be a modifier key. So let’s go through them all.
Pressing ⌥ and brightness up/down opens the Displays settings pane, and consequently, pressing ⌥ and any of the three volume keys gives you the Sound settings pane. (This convention, however, isn’t followed for other keys. ⌥ and Mission Control only opens the top level of Settings, and ⌥ with other function keys like Spotlight, Dictation, or media transport does nothing. My guess is that someone simply forgot about this over time, which is a pity, because one of the best ways to teach people about a power-user shortcut is to make it as transferrable as possible, to allow motor memory to blossom.)
So ⌥ is out. ⌃ and the brightness keys change the brightness of the external display, and even though that doesn’t really apply to volume, it’s safe to stay away.
⇧ + volume keys reverses the meaning of this toggle below, making ping sounds if the toggle is off, or suppressing them if the toggle is on. This is nice.
That only leaves Fn/Globe, which already turns top-row keys back into function keys, and ⌘. But ⌘ is inert. Instead, the combination to add precision is ⌥ + ⇧ + volume keys. (Same with brightness, which can be useful e.g. on a very dark plane.)
I don’t understand this, and I wonder how it got this way. Modifier keys are generally tricky, but this doesn’t follow any of the go-to rules I would try in this situation:
Reuse an existing convention for consistency: I don’t think anywhere else ⌥⇧ means “precision.”
Follow naturally from existing UI building blocks: ⌥ and ⇧ do different things and this is not an intuitive combination of what they do independently.
Use mnemonics: This doesn’t feel like it’s doing that at all.
Failing everything else, make it pleasant to press: ⌥ and ⇧ is possibly the least ergonomic two-modifier-key combination.
This shortcut has another problem, which is that it is the only two-modifier-key option here. If you don’t use it often, you might only remember it as “two modifier keys” without further detail, which actually ends up being 10 possible combinations of keys! So if you’re like me, you always awkwardly button mash a bunch of them before rediscovering ⌥⇧.
My recommendation for a small tweak here?
⇧ and brightness/volume: Secondary display/Add pings (both are most important; Shift is nice to press and the “default” modifier key).
⌃ and brightness/volume: Add extra precision (as that gives you more control).
⌥ and brightness/volume/other keys: Open the relevant Settings pane.
Obviously, I might not have all the information that led to the current situation (and it’s possible I don’t even understand it fully), plus changing any long-existing shortcuts is hard. But as above, ⌥⇧ is so peculiar, and it also misses out on the last important consideration: I don’t think anyone would ever discover it by mistake or out of curiosity.
MKBHD’s Waveform podcast (audio or video) sometimes has a fun “Did they even test this?” section. This week, for the first 12 minutes, the team was ranting about various volume controls – a meandering conversation that I also found just very enjoyable.
But then some research revealed that about 20 years ago, Chrysler decided to try to find the perfect volume interval, one that would result in meaningful difference in sound level without going too far. After much experimentation, they decided that 38 discrete volume settings provided the perfect amount of adjustability—not too fine, not too coarse. So the decree went out across the company that all stereos should go to 38.
However, no citation is given, and I couldn’t find any more information about it.
The one thing the group missed in their discussion is: why even show a number at all? I think it helps people remember their preference, especially if they share a car with someone else. Remembering that “my volume number is 17” can be helpful, even if it feels a bit clunky.
When volume controls were physical, I believe that if they didn’t have a number, they at least had a certain number of notches, so you could remember the nearest notch you liked:
Keynote is an app that could use something like that. At this very moment, I am trying to unify the volume of various clips across slides for an upcoming presentation, and I have to use environmental cues like “between Edit Movie below and the rewind button above”:
I have at times been frustrated by cute placeholder text in places, most notably Dropbox Paper, which still puts it in a just-created doc…
…and in new to-do items:
This bothered me for two reasons.
First was a potential tone mismatch. What if you are writing a layoffs announcement, a project cancellation doc, or something personal and heartfelt? At Medium back in the day, at some point we added a fun celebratory dialog after publishing that said something like “Now, shout it out from the rooftops!” We took it down very quickly as people made us realize Medium is used to write many kinds of things we didn’t anticipate, and in those situations the cutesy message really failed to read the room.
But the other half of my frustration with Paper was that it felt like the app was making itself too comfortable in my space, in effect shouting all over my inner voice and distracting me. I felt like any app giving you a creative canvas should back off of that canvas unless it’s explicitly invited to participate.
The researchers asked participants to fill in an online survey with questions about hot-button social and political issues. Some were prompted with an AI autocomplete answer that was deliberately biased toward one side of the issue. For example, participants who were asked whether they agreed that the death penalty should be legal might receive an AI suggestion that disagreed.
Across all the different topics in the survey, participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all. Overall, the study participants who saw the biased AI text shifted their positions toward those espoused by the AI.
Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study.
…and elaborates on how adding warnings didn’t really help:
The Warning and Debrief messages failed to significantly reduce the attitude shift, which is concerning because they were also inspired by those used in real AI applications. AI tools such as ChatGPT show brief and general statements about AI’s propensity to hallucinate false information (e.g., “ChatGPT is AI and can make mistakes. Check important info.”), similar to the messages used in our interventions.
I know on this blog I often focus on the mechanics of interactions, but the job of every designer is to think of more than that. I keep coming back to both pull-to-refresh and infinite scroll mechanics. Both can be put to good use and feel “delightful,” but both started being abused so much that it led to their respective creators disowning them.
Nice, clear, simple copy in ClarisWorks from 1997:
No “Maybe later.” No “Not now.” Thirteen characters. Now, Later, Never.
(Can’t help but notice that Esc and ⌘. – the classic Mac’s equivalent of Esc – still map to Later, however. Also, this breaks the rule of button copy being fully comprehensible without having to read the surrounding strings first, perhaps most well-known as the “avoid «click here»” rule. Never Register/Register Later/Register Now would solve that problem, but wouldn’t look so neat.)
One of my favorite bits of trivia about the 1983 movie WarGames is that all the computer typing scenes have been faked in a clever way: The actors (many of whom might have never typed before, as home computers were only slowly becoming popular) were allowed to press any key they wanted, but the interface would still proceed as if the correct letter was typed.
This allowed the computer to respond to keystrokes, making it all feel real, but also reduced the burden on actors to type things properly – and made it easier to achieve proper sight lines, as the actors didn’t have to constantly look at the keyboard.
WarGames used it really well, showing all sorts of face reflections in the CRT screens, as if people literally talked to the machines, which must have been hell to film:
I have never seen this demoed or mentioned outside of the anecdote. However, yesterday, Cathode Ray Dude released an excellent video about the challenges of filming computer screens. The whole video is worth watching, although at this point mostly off-topic for this blog. But starting at 1:32 and ending around 1:37, there’s an actual demo of a similar piece of auto-typing software used in the 1996 movie Scream:
You might think this is just a piece of old-computer trivia, but I’ve actually used it in at least two of my talks, for some of the same reasons! I run most of my talks from HTML/CSS/JS; it’s nice for the audience to see things being typed and responding properly to (audible, and occasionally visible) key presses – but it’s also nice as a speaker not to worry about messing things up under pressure.
For extra realism, make sure Backspace goes back in the script – you might occasionally press it instinctively – and for extra extra verisimilitude, actually bake in a typo or two into the predefined sequence. (And an escape hatch if you actually change your mind and want to go manual.)
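If you want to try this yourself, the core of such an auto-typer is tiny. Here’s a minimal sketch in plain JavaScript (all the names here are mine, not anything from the movies’ actual rigs, and the `#screen` element is an assumption): any key reveals the next character of a predefined script, Backspace steps back, and Escape is the hatch back to manual typing.

```javascript
// Hypothetical auto-typer: every keypress advances through a
// predefined script, no matter which key was actually pressed.
const script = "Shall we play a game?";

function makeAutoTyper(text) {
  let pos = 0;       // how much of the script is revealed
  let manual = false; // true once the presenter bails out
  return {
    press(key) {
      if (key === "Escape") { manual = true; return text.slice(0, pos); }
      if (manual) return null; // caller falls back to real typing
      if (key === "Backspace") pos = Math.max(0, pos - 1);
      else pos = Math.min(text.length, pos + 1);
      return text.slice(0, pos);
    },
  };
}

// Wire it to the page when a DOM is available.
if (typeof document !== "undefined") {
  const typer = makeAutoTyper(script);
  document.addEventListener("keydown", (e) => {
    const shown = typer.press(e.key);
    if (shown !== null) {
      e.preventDefault();
      document.querySelector("#screen").textContent = shown;
    }
  });
}
```

For the typo trick mentioned above, you’d simply bake the typo and its correction into the `script` string itself.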
Then, of course, there’s a classic 2011 piece of software called HackerTyper. Did someone already marry this idea with an LLM? Seems like a logical next step.
I know some of you are whispering, “he’s posting all of these hour-long YouTube videos, when am I supposed to find time to watch them?” I hear you loud and clear and I’m going to make it better…
Seriously, though, this is an extremely enjoyable deep dive into Disney’s failed Galactic Starcruiser hotel.
I don’t know much about Disney, but it was engrossing, as half of the failures were actually software-related: from the flawed UI in various spaces in the hotel and screen-laden space windows in the rooms, to poor integration with physical elements of the scenery, an “immersive” interactive game that felt untested and gave you poor feedback, and the general trends of laziness and cheapness that could never fully be remedied by the performers going above and beyond.
What Nicholson does a lot is try to debug what actually happened to make her experience so miserable, and it’s really refreshing to see debugging in a different context than usual.
There are fun things you can do in software when it is aware of the dimensions and features of its hardware.
iPhone does a cute Siri animation that emanates precisely from the side button:
A bunch of Android phones visualize the charge flowing to the phone from the USB port…
…and even the whole concept of iPhone’s Dynamic Island is software cleverly camouflaging missing pixels as a background of a carefully designed, ever morphing pill.
But this idea has value beyond fun visuals. The iPhone telling you exactly where to press twice to confirm a payment helps you do that without fumbling with your phone to locate the side button:
Same with the iPad pointing to the otherwise invisible camera when it cannot see you:
Even the maligned Touch Bar did something similar for its fingerprint reader:
The rule here would be, perhaps, a version of “show, don’t tell.” We could call it “point to, don’t describe.” (Describing what to do means cognitive effort to read the words and understand them. An arrow pointing to something should be easier to process.)
You could even argue the cute MagSafe animation is not entirely superfluous, as over time it helps you internalize the position of the otherwise invisible magnets on the other side of your phone:
In a similar way, as it moved away from the home button, iPhone X introduced light bars at two edges of the screen – one very aware of the notch – as swiping affordances:
And under-the-screen fingerprint readers basically need a software/hardware collab to function:
One of my favorite versions of this kind of integration is from much earlier, when various computers helped you map the “soft” function keys to their actual functions, which varied per app…
…and the famous Model M keyboard moving its function keys to the top row helped PC software do stuff like this more easily:
(And now I’m going to ruin this magical moment by telling you the cheap ATM machine that you hate does the same thing.)
The last example I can think of (but please send me your nominations!) is the much more sophisticated and subtle way Apple’s device simulator incorporates awareness of the screen’s physical size and awareness of the dimensions of the simulated device. Here’s me using the iPhone Simulator on my 27″ Apple display. If I choose the Physical Size zoom option, it matches the dimensions of my phone precisely. The way I know this is not an accident is that it remains perfectly sized if I change the density of the whole UI in the settings.
Why am I thinking about it all this week?
The new MacBook Neo was released with two USB-C ports. Only one of the ports is USB 3, suitable for connecting a display, an SSD, and so on. The other port’s speeds are lower, appropriate only for low-throughput devices like keyboards and mice.
To Apple’s credit, macOS helps you understand the limitations – since the ports look the same and the USB-C cables are a hot mess, I think it is correct and welcome to try to remedy this in software. It looks like this, appearing in the upper right corner like all the other notifications:
I think this is nice! But it’s also just words. It feels a bit cheap. macOS knows exactly where the ports are, and could have thrown a little warning in the lower left corner of the screen, complete with an onscreen animation of swapping the plug to the other port – similar to what “double clicking to pay” does, so you wouldn’t have to look to the side to locate the socket first.
“Point to, don’t describe” – this feels like a perfect opportunity for it.
I enjoy little lists like these, and the presentation here is also delightful. From design engineer Jakub Krehel, Details that make interfaces feel better. A few of these stood out to me:
Make your animations interruptible. […] Users often change their intent mid-interaction. For example, a user may open a dropdown menu and decide they want to do something else before the animation finishes.
Yes. Never make the user wait for your animation to finish, unless the animation itself is meant to cause friction and slow the user down (which is very rare).
Make exit animations subtle. Exit animations usually work better when they’re more subtle than enter animations.
I love asymmetric transitions. My go-to analogy for this is “in real life, you don’t open the door the same way you close it.”
Add outline to images. A visual tweak I use a lot is adding a 1px black or white (depending on the mode) outline with 10% opacity to images.
This is very nice and (both literally and figuratively) sharp. In some contexts, I bet you could even try to go for 0.5px.
In my head, some bugs belong to categories that feel important, and yet remain hard to define and quantify: embarrassing bugs, dumb bugs, flow killers.
Somewhere in the hard-to-explain space is another tricky category: UI decisions that feel cheap.
The examples of cheapness that come to my mind readily will, I bet, be different for each one of you reading this:
using emoji instead of iconography
using text and typography instead of graphic design elements as UI (except in terminal/text-based interfaces)
excessive centering
obvious misalignments and overflows
accidentally mismatched fonts and unspecified fallback fonts
reflow and bad loading states that do not match the eventual UI
selectable user interface elements that betray the “bad webiness” of the UI
typos
But my absolute #1 go-to example is definitely this:
Computers could pluralize nouns basically for free already in the 1970s, and sure, there are objective arguments for why this is bad, but there’s also this: I wince so hard every time I see something like this.
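And the fix really is nearly free. A sketch of the classic one-liner (the helper name is mine), plus the built-in Intl.PluralRules for languages where a simple one/other split isn’t enough:

```javascript
// Hypothetical helper: the two-line fix for "1 items".
function countLabel(count, singular, plural = singular + "s") {
  return `${count} ${count === 1 ? singular : plural}`;
}

// For languages with more than two plural forms, the platform can
// do the categorizing: Intl.PluralRules ships in browsers and Node.
const en = new Intl.PluralRules("en-US");
// en.select(1) === "one", en.select(3) === "other"
```

You still have to supply the word forms yourself, but deciding which form to show has been a solved problem for half a century.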
I think it’s important for every designer to notice when they wince, and teach others how to wince and notice, too.
(I stole the brilliant title from this short post by Joe Leech in 2018, in which Leech uses the word “lazy” rather than “cheap” – they’re related!)
Many of you have probably heard the repeated story of the first Moon landing in 1969 almost getting undone by a bunch of onboard computer glitches:
There could not be a worse time in the flight to have computer problems. At the time, the press gleefully reported how Armstrong seized manual control from a crippled and failing onboard computer and managed to heroically and single-handedly land the spaceship on the surface of the Moon against all odds.
Robert Wills argues against this narrative in this 2020 talk, wanting to shine a spotlight away from Neil Armstrong and toward the people who designed the software (among them Margaret Hamilton), and mission control’s Steve Bales, who made the decision not to abort the landing as the 1201 and 1202 errors were piling up.
The argument: the computer was working as intended, it fixed itself over and over again owing to its clever software, and it actually helped Buzz Aldrin understand (at least subconsciously) what led to the seemingly random and distracting computer errors.
The above is more of a traditional talk than the videos I usually share – a bit more technical, taking up an entire hour, and with generic slides – but it’s buoyed by Wills’s enthusiasm and knowledge.
Besides, it’s the lunar landing! Did you know about DSKY and its fascinating keyboard and UI? Did you know the spacecraft’s window was part of the interface, too? Or that its software was woven into the hardware? Or that Apollo 11 had a… guillotine in it?
An unsung hero of the decision not to abort the landing is Richard Koos, a NASA simulation supervisor who […] 11 days before the launch of Apollo 11, put the team of controllers including Bales […] through a simulation that intentionally triggered a 1201 alarm. […] Unable to figure out what the 1201 was, Bales aborted that simulated landing. He and Flight Director Gene Kranz were dressed down for it by Koos, who put the team through four more hours of training the next day specifically on program alarms. When the 1202 and 1201 alarms occurred during the actual landing, Garman, Bales, and even Duke recognized them immediately.
This iPhone UI for dark/light theme is doing something clever:
Ostensibly, there are two modes here:
automatic, for when you want the theme to match the time of day
manual, for when you want to keep one of the themes forever
But check out what happens when I am in automatic mode, but toggle the theme by hand anyway:
More rigid or less thoughtful interfaces would either disable manual changes when you’re in automatic mode, or understand a manual theme switch to mean “I want to turn off automatic.”
But here, iOS is quietly putting me in a temporary hybrid mode: a manual theme override until the theme catches up with what automatic mode would do, at which point it snaps back (I’m resisting very hard calling this rubber banding) to automatic mode.
What I think is clever is that this isn’t presented as a third mode – which could be more confusing than helpful – but the design simply reuses the existing Options field to set the expectations.
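As a sketch of the mechanic as I understand it (the names and the hour-based schedule below are my assumptions, not Apple’s actual implementation): a manual toggle while in automatic mode becomes a temporary override, and the override evaporates the moment the schedule catches up with it.

```javascript
// Hypothetical model of the hybrid mode — not Apple's code.
// scheduleAt(hour) → "light" | "dark" is the automatic schedule.
function makeThemeState(scheduleAt) {
  let override = null; // a manual choice, if any
  return {
    theme(hour) {
      const scheduled = scheduleAt(hour);
      if (override !== null && override !== scheduled) return override;
      override = null; // schedule caught up: snap back to automatic
      return scheduled;
    },
    toggle(hour) {
      // flip away from whatever automatic mode currently wants
      override = scheduleAt(hour) === "dark" ? "light" : "dark";
    },
  };
}
```

The nice property is that there’s no third mode stored anywhere; the override is self-erasing, which matches how the UI never has to present it as a separate option.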
One has to be careful designing in shades of gray; once you enter the space you really have to commit to it and see it through. My go-to analogy is symmetry vs. asymmetry. Symmetry in visual design is usually easier and safer. If you venture into asymmetry you have to make an effort to make it work. The highs of asymmetry will be higher than anything symmetry can provide, but getting to those highs can be arduous and sometimes might even be impossible.
I thought this particular example was really nicely done and the team found a great balance. (I think Apple’s previous shade of gray – “Disconnecting Nearby Wi-Fi Until Tomorrow” – ended up slightly less successful.)
Do we need yet another person crashing out about Apple’s design decisions? Am I doing it only because it’s fashionable to be on Apple Design Hate Train these days? I’ll be honest: I don’t know.
But I have been bothered by Apple’s approach to this aspect of its keyboard design for a while, because it starts breaking what I think is really important in using a computer well: keyboard shortcuts.
I hope it’s also a fun visual history of the most tricky of modern modifier keys.
(And if you like it, I linked to a few of my other keyboard essays in the footer.)
RuneScape is a popular MMORPG that reached its peak popularity in the late 2000s.
In the game, combat – colloquially known as PvP, or player vs. player – is limited to a specific map area (called the Wilderness) and to people’s houses.
On 6/6/6 (sic!) a bug in RuneScape made it possible for a few players to start killing others outside of designated areas, without them being able to defend themselves. One of these players, Durial321, gained a lot of notoriety:
A player called Cursed You had invited some friends to his in-game house once he had maxed his construction skill, but decided to eject them all from the premises. Things turned sour, however, as a group of players marked as PvP in the house didn’t lose this PvP flag when ejected, allowing them to storm through Falador and massacre whoever they pleased. The most notorious of these players was named Durial321.
This event went down in internet infamy; it meant that many players lost their items when killed, and that those involved were banned.
I don’t have any context on RuneScape, and I found it really funny to learn about this event from different retellings of the story.
Several others were able to use this glitch, but Durial321 abused it the most. His rampage lasted for about an hour, starting at Rimmington, where the house party was, then proceeding to Falador and subsequently Edgeville. At Edgeville, he gave Voodoolegion the green partyhat, who never gave it back to him. Soon after, he finally encountered a Jagex Moderator, Mod Murdoch, who disconnected him and locked his account. Durial321 was later permanently banned from RuneScape. In a 2006 interview, he said that player killing outside of the Wilderness was exciting, although he felt bad for the players who lost their belongings.
The 2006 incident later became known as the Falador Massacre.
There is also this more modern retelling that feels like scary story time by the campfire:
Reactions from players were initially kind of incredulous. Plenty of people were shocked and found the whole incident quite funny. Durial had essentially broken the game, after all. Some players wanted to be like him, whipping strangers to death and taking their items. But soon, as more players started hearing about what had happened and seeing the video, the mood shifted. Players wanted Durial321 hung, drawn and quartered, with his head displayed on a pike outside Lumbridge Castle.
Without spoiling too much, the bug was a classic Swiss cheese situation involving a new untested item, a race condition, peculiar timing, and a player with an unusually high uptime and a whole lotta luck.
I’ve learned recently that “rubber banding” can mean at least three different things in the context of UI/UX design:
whatever happens at the edges of your scroll container when you’re using elastic scrolling, which started on the first iPhone and has spread more widely since
in videogames, balancing the difficulty in real-time so that inexperienced players stand a chance and good players are not bored (a classic example in any racing game is computer-controlled cars slowing down if they are running too far ahead, as if held by a rubber band, to give you a chance to catch up)
in multiplayer experiences (mostly videogames, too), the experience of snapping back and forth (example) during gameplay when your connection speed is low and the game has to reconcile your predicted position with your real one
Each one is interesting in its own way. (Each one is also controversial, although for a different reason!) But what I understand they all have in common is – well, obviously – the specific mechanics of rubber banding.
I imagine many reading this are familiar with basic interpolation between A and B using curves like ease in, ease out, and so on. But in gaming and I think increasingly in UI design, that’s not enough. When coding stuff related to movement – imagine dragging an elastic scrolling view near its edge – the challenges compound:
the object might already be in motion
its destination might also be in motion
the load or framerate can vary, so calculations have to take that into account
With that in mind, I found these two videos helpful and informative:
The videos together start with basic lerp (linear interpolation), then move to lerp smoothing, and then arrive at frame-independent lerp smoothing. There’s light math/physics here, but that’s to be expected, as all these experiences are meant to feel like real-life objects would.
I found lerp smoothing – where you feed a lerp into itself – particularly conceptually beautiful.
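To make that concrete, here’s a small JavaScript sketch (the names and constants are my own, not from the videos’ code): calling `lerp(x, target, 0.1)` every frame feeds the lerp into itself, but that ties the decay speed to the framerate; replacing the constant factor with an exponential of the frame time keeps the feel identical at any framerate.

```javascript
// Plain linear interpolation between a and b.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Frame-rate-independent lerp smoothing: rate is a tuning constant
// (higher = snappier), dt is the frame time in seconds. Two 1/120 s
// steps land exactly where one 1/60 s step would.
function smoothTowards(current, target, rate, dt) {
  return lerp(current, target, 1 - Math.exp(-rate * dt));
}
```

A dragged elastic scroll view would call `smoothTowards` every frame with that frame’s `dt`, and the feel would survive a framerate dip.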
I mentioned VisiCalc not long ago and Lotus 1-2-3 just this week. Yesterday, a new issue of Stone Tools came out, nicely tying the story together.
Stone Tools is a project by Christopher Drum where he grabs old productivity apps, spools up the correct emulator, and writes a review from today’s perspective. I like the emulation part – Drum even provides specific instructions so you could do it, too – and the fact that he’s actually putting the tools through their paces.
The reviews can probably be a bit intense if you are unfamiliar with the territory, but you will be rewarded with a lot more detail than just casual understanding of these apps. Reading about VisiCalc first and 1-2-3 second really drives home how “VisiCalc walked so 1-2-3 could run” and it’s fun to see the beginnings of spreadsheet conventions that we take for granted today, for example $ absolute addressing:
In VisiCalc I’m prompted for a “relative or fixed?” decision for every cell reference in every target cell. Replicate a formula with 5 cell references across a column of 100 cells and be ready to answer 5 x 100 prompts. Unfortunate and unavoidable.
Like always, one can find inspiration in surprising places. In the review of Lotus 1-2-3, there’s this interesting tidbit:
The more I encounter [the horizontal menu-bar], the more I wonder if we gave up on it too soon. This could be “blogger overly immersed in their subject matter” brain, but I’m growing to oftentimes prefer two-line horizontal menus over modern GUI menus. […]
It also provides something GUI menus don’t: an immediate explanation of a menu item before committing its action to the document. If a menu item is not a sub-menu, line two describes it. It’s easy to audit features in an unknown program.
I have just been pondering that maybe we moved away from status bars and question mark (Windows)/balloon (Mac) help too soon – pretty much everything these days relies on tooltips – and this slotted right into that.
Anyway. Drum seems to be having fun with the project, and I appreciate that. There are little custom visuals and jokes in every post. As an example, you can download an absolutely delightful recreation of VisiCalc called PicoCalc and run it on your Mac. I never expected a spreadsheet to be so cute:
And it’s not just the most well-known tools. What astonished me in the review of Scala Multimedia in January is how absolutely gorgeous the software (which I’d never seen before) looked:
⌘T is a very important shortcut in Slack. It allows you to quickly talk to someone just by typing in their name. I use it probably dozens, if not hundreds of times a day.
⌘T is right next to ⌘R, which reloads Slack. Occasionally, on the way to ⌘T, my fingers graze ⌘R. Fingers being fingers, I immediately realize something went wrong and wince, and within a second or two I witness Slack completely reloading. It’s not a big deal – no data is lost, and the reload only takes 5 to 10 seconds – but when you move fast, it feels like an eternity.
⌘O is a very important shortcut in Finder. It opens the selected file in the correct app. I use it probably dozens, if not hundreds of times a day.
⌘O is right next to ⌘P, which prints the file I’m pointing to. Curiously, and in contrast with most apps, the print function is not gated in any way by a confirmation dialog box, or an intermediate print settings window.
So, occasionally, on the way to ⌘O, my fingers graze ⌘P. Fingers being fingers, I immediately realize something went wrong and wince, and within a few seconds, the lights in my old apartment dim for a second. Then, far away, I hear the recognizable sound of my laser printer spitting out a page.
Gamers used to deride the Windows key for automatically ejecting them from the game to the desktop, before an option to disable it started appearing on gaming keyboards. (Some professional gaming leagues were very strict about how a player could use their keyboard.)
Similarly, professional Excel competitors started physically removing keys: in Excel, F1 (right next to the often-used F2) opens the help dialog and slows you down.
I served as a judge for the ModelOff Financial Modeling Championships in NYC twice. On my first visit, I was watching contestant Martijn Reekers work in Excel. He was constantly pressing F2 and Esc with his left hand. His right hand was on the arrow keys, swiftly moving from cell to cell. F2 puts the cell in Edit mode so you can see the formula in the cell. Esc exits Edit mode and shows you the number. Martijn would press F2 and Esc at least three times every second.
But here is the funny part: What dangerous key is between F2 and Esc? F1.
If you accidentally press F1, you will have a 10-second delay while Excel loads online Help. If you are analyzing three cells a second, a 10-second delay would be a disaster. You might as well go to lunch. So, Martijn had pried the F1 key from his keyboard so he would never accidentally press it.
I enjoyed this essay that presents prying off the key as a rite of passage:
Removing the F1 key from the equation is just the beginning. By embracing the keyboard-centric approach, you have the opportunity to become an Excel Wizard!! Okay, maybe that’s not a technical term, but it perfectly captures the essence of those who navigate Excel solely using the keyboard.
And I particularly liked this tongue-in-cheek answer telling people they could construct their own homemade molly guard to protect against “fat-fingering”:
Here’s an alternative snippet that can be used:
Use bits of plastic or cardboard to make a tiny box that fits around your F1 key.
Affix this box with duct tape, so that the F1 key is guarded.
Fool-proof, works on any key, and can easily be reversed if needed!
Obviously, none of this can help me with my ⌘R and ⌘P woes, so, two final thoughts:
If your app has a well-trafficked shortcut, it’s worth thinking of the shortcuts immediately adjacent to that one. Could they cause any inadvertent damage or confusion?
Apps and operating systems should very easily allow you to unset a keyboard shortcut, in addition to setting or changing it. (Unfortunately, this is not as common as it should be.)
This video from Marblr about adding fall damage to Overwatch is really intense – 45 minutes long and a lot of footage of frantic gameplay – but really informative, too.
It’s a great case study of how something seemingly really simple – deducting health from the player as they fall from height – can be a complicated thing to figure out in all the detail.
I never played Overwatch and rarely play videogames anymore, but many of the lessons here are more universal, applying to any sort of UI and system design:
You will have to introduce tactical inconsistencies for the system to feel consistent, but be careful: there might be a point where those inconsistencies start to outweigh the benefits.
Wanna learn how you and others feel about something? Overcrank it to make the feelings come out more easily. (And to find bugs.)
There will always be tensions between what the data says and how you feel about something. (I was surprised how often the word “intuitive” entered the picture.)
Also, it’s just a really well-made video, filled with little presentation and storytelling details that elevate it. I wish more videos like this existed for UI mechanics.
But maybe the most important takeaway? You don’t have to choose between rigor and fun. You can have both.
It’s hard to imagine it now, but during iPhone’s first year, no emoji were available at all. It took four years until 2011’s iOS 5 gave everyone an emoji keyboard.
But in between 2008 and 2011, there existed a peculiar interregnum where emoji were only available on Japanese iPhones. The situation had to be carefully explained and caveated:
Eventually, an enterprising developer realized that enabling emoji outside Japan was as easy as toggling a UI-less preference with a great name, KeyboardEmojiEverywhere, hiding inside the innards of the iPhone:
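For flavor, the incantation must have been roughly of this shape – a hedged sketch only; the preference key comes from the story above, but the plist file name and the plutil route are my assumptions about how such a toggle would be flipped, not documented history:

```shell
# Hypothetical sketch: set the hidden boolean in a copy of the device's
# Preferences plist, then push the edited file back to the iPhone.
# The plist file name here is an assumption.
plutil -replace KeyboardEmojiEverywhere -bool true com.apple.Preferences.plist
```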
Except, “easy” is in the eye of the beholder. This was still a few too many hoops to jump through for an average iPhone user. So, developers figured out that there could be an app for that: the above preference incantation wrapped inside an application with an easy UI, and put in the burgeoning App Store.
The interesting part is that Apple initially fought some of these efforts, by rejecting a Freemoji app and likely a few others. (Not sure if this was about emoji specifically, or more principally about losing control.)
The developers had to get sneaky, and started hiding emoji enablers inside other apps. A $0.99 “RSS reader for a Chinese Macintosh news site” called FrostyPlace unlocked emoji by “simply pok[ing] around in it for a minute or so by tapping in and out of an article and playing with the two buttons at the bottom of its screen. That part is important, so be sure to do some genuine tapping.”
The author called it an “easter egg” and even wrote candidly at the end of instructions that “you can also delete Spell Number if you don’t want it, the setting will still be here.” (The number also had to change from 9876543.21 to 91929394.59 at some point, perhaps to evade… something?)
Eventually, Apple seemingly gave up – Ars Technica has a fun 2009 interview with someone who renamed their app from Typing Genius to “Typing Genius – Get Emoji” and got away with it:
Ars: As the screenshot at the start of this post shows, you haven’t been shy about advertising the Emoji support over at App Store. Are you worried that adding Emoji to your application might have negative consequences? Are you worried about Apple pulling it from App Store?
Fung: I’m obviously taking a risk here by advertising Emoji directly on iTunes. That being said, I’m not the first. Worst case scenario, I’ll update the application with Emoji support removed. I’m hoping that Apple will turn a blind eye to this because I can’t see any harm done in allowing users to use Emoji.
Not quite “I am ready to do some time for the good cause,” but close enough.
Yet, it still took until 2011 for emoji support to be universally available with iOS 5, and even then you had to enable the keyboard in settings.
I like this little story of a mysterious latent cool new thing hiding inside your device, a thing that you could unlock only if you followed some seemingly nefarious instructions that never fully made sense but that actually worked.
An interesting tidbit: At least early on in 2008, for emoji to work both the sender and the recipient had to follow the instructions. So the toggle wasn’t just about adding a keyboard, but also enabling the decoding and rendering. (And complicating things further, iPhone’s Japanese keyboard had emoticons, and that keyboard was widely available without any hacks. The difference between emoji and emoticons was not obvious to many people, leading to a lot of extra confusion.)
Lastly, a fun sidebar: I asked an old internet buddy, Steven Troughton-Smith, about all this. I remembered him from my GUIdebook days, and he still routinely posts fun hacks and discoveries about Apple platforms on Mastodon. I thought “Steven might remember that story; he seems like the kind of person who’d at least know how to find an answer.” Turns out, my hunch was better than I thought: Steven was the enterprising developer who actually discovered how to give emoji to any iPhone, all the way back in 2008.
I just stumbled upon a nice little power-user innovation in Chrome’s Web Inspector.
In Safari, and previously in Chrome, when editing CSS properties, you’d get a usual editing typeahead for the property name, and then the same on the other side for the property value.
In newer versions of Chrome, the typeahead menu works as before on the right side. However, the menu on the left side now also offers complete “property: value” pairs from the right side.
I think this is really clever in this context – not just to speed you up, but also to aid understanding. Just like the inert mouse up and down in the previous post could serve as a safe “peek” into the values, this new interaction can quickly allow you to explore the CSS space if you are curious, or if you only lightly remember part of the name, or even just one of the values.
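The idea is easy to sketch as a toy completion function that matches your query against both property names and “property: value” pairs. (The property table below is a tiny made-up subset for illustration, not Chrome’s actual data or implementation.)

```python
# A toy subset of CSS properties and their allowed values.
CSS_PROPS = {
    "display": ["block", "flex", "grid", "none"],
    "position": ["static", "relative", "absolute"],
}

def complete(query):
    """Return completions matching the query against names AND values."""
    results = []
    for prop, values in CSS_PROPS.items():
        if query in prop:
            results.append(prop)
        # The new twist: value matches surface as full "name: value" pairs,
        # so typing "flex" can lead you to "display: flex".
        results += [f"{prop}: {v}" for v in values if query in v]
    return results
```

So `complete("flex")` surfaces `display: flex` even though you never typed a property name – which is exactly the exploratory quality described above.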
This blog is authored in Apple Notes, and some time ago Notes added quick linking via typing >>, and that has a similar effect: The interactions are so nimble and precise that it is very easy to link to something, but a nice side effect is that it also feels very welcoming just to type a few letters to remind yourself of a title of an article, and then cancel out.
The downside of the Chrome change is, well, more stuff matching, but I think the audience for this UI is going to be okay with that.
I know we’re probably collectively a bit tired of talking about macOS Tahoe, but I just noticed something that I think is a good example of how small details can ladder up to bigger things.
This is macOS Sequoia (the pre-Tahoe release) and a typical pop-up button:
One clever thing macOS has been doing since basically the dawn of GUIs is that upon clicking on a button like this, the currently selected row will be in the same place as before you clicked. (As opposed to, for example, the entire menu appearing below like it would from a top menu bar.)
This has interesting and often underappreciated consequences. It allows you to orient yourself quicker since you don’t have to find the selected option again. And, it saves you movement overall: the next or previous option will always be at the absolutely shortest possible distance. (Of course, the approach also has some challenges, for example if the button is positioned close to the top or bottom of the screen.)
There’s another clever thing that happens throughout macOS: All the menus work using a classic click-to-open and click-to-select sequence, but they are also usable via the slightly more advanced, but faster mousedown-drag-mouseup gesture.
These building blocks work together and mean that selecting the next option can be as simple as a little flick of a mouse.
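The geometry behind the trick is almost embarrassingly simple – a sketch with hypothetical names, ignoring the screen-edge challenges mentioned above:

```python
def menu_origin_y(button_y, selected_index, row_height):
    """Vertical position for a pop-up menu that keeps the selected row put.

    Shift the menu up by the selected row's offset, so that row
    `selected_index` lands exactly where the button (and your cursor)
    already is. The selected option never moves under you.
    """
    return button_y - selected_index * row_height
```

With the first option selected the menu opens aligned with the button; with the fourth, it opens three rows higher – and in both cases the neighboring options sit one flick of the mouse away.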
Now, check out macOS Tahoe (current release):
You will notice that iCloud Drive, upon clicking, is now misaligned both horizontally and vertically.
On the surface, this feels just like a visual blemish – slightly embarrassing, but without much consequence. But check out what happens if you hold your mouse button at a certain position, and then release it without moving:
The stability of macOS’s interface and the thoughtful set of aforementioned rules allowed for an emergent fast behaviour: mouse down and up meant you could “peek” into a menu safely, or you could change your mind right after seeing what’s inside. In a bigger sense, it created a certain trust between you and the operating system: it’s worth learning those gestures, as they will be rewarded.
In Tahoe, some of that learned behaviour – by the way, I see it in all of these buttons, not just this one – will now work against you. Now, you can accidentally change an option without intending to do so.
Is it a big deal? No, not really. This likely – hopefully! – simply fell through the cracks in a rush to get Liquid Glass out the door, rather than no one being there to care, or no one understanding that all these gestures add up in aggregate, creating a GUI that feels fast, trustworthy, and catering to your motor memory in a way that elevates your experiences with the interface in the long run.
But I’d feel better if it wasn’t almost half a year since the release, and if we hadn’t already seen other things exactly like it.
The year is 1981. Your IBM PC is equipped with a tragic speaker that sounds awful for anything except occasional beeps. (Those beeps sound awful, too.)
You can’t afford a sound card and besides, sound cards for your PC have not been invented yet. You can’t even afford a floppy drive, so you’re one of the rare people who actually uses an audio cassette player as a storage device – a technique usually reserved for more primitive machines that have half the bits your new PC does.
But there’s a silver lining. Your cassette player has a little relay that controls its motor. You can engage and disengage the relay at will.
So, someone figured out that toggling the relay kind of sounds like a metronome. Like percussion. It’s a hack, but in the sonic landscape inhabited solely by your sorry speaker, it’s a breath of fresh air (scroll to 7:26 if you don’t land there automatically):
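In spirit, the trick was nothing more than a timed loop. Here is a toy sketch; `toggle_relay` stands in for whatever port write actually drove the cassette relay on a real IBM PC:

```python
import time

def relay_metronome(toggle_relay, bpm, beats):
    """Click the cassette motor relay once per beat: engage, release, wait."""
    interval = 60.0 / bpm
    for _ in range(beats):
        toggle_relay(True)    # relay engages with an audible clack
        toggle_relay(False)   # ...and releases
        time.sleep(interval)
```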
The year is 2026. Your computer itself is the size of an audio cassette, fits in your pocket, has better storage, graphics, sound, pretty much everything compared to a 1981 PC. It even has a special haptic motor. Except, that motor can only be controlled by native apps, and there is no official API to do it from a browser.
But there’s a silver lining. Tapping any checkbox on a site generates a haptic pulse. And that apparently works even if the checkbox is hidden and if the computer is doing the tapping.
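As I understand it, the hack amounts to something like this – a sketch, not a tested implementation; the `switch` attribute is Safari’s switch-style checkbox, and whether a hidden, programmatic click still vibrates is exactly the surprising part:

```javascript
// Sketch: trigger a haptic pulse in iOS Safari by "tapping" a hidden
// switch-style checkbox on the page's behalf.
function hapticPulse() {
  const box = document.createElement("input");
  box.type = "checkbox";
  box.setAttribute("switch", "");  // Safari's switch-style checkbox
  box.style.position = "absolute";
  box.style.opacity = "0";         // keep it out of sight
  document.body.appendChild(box);
  box.click();                     // the "tap" that produces the pulse
  box.remove();
}
```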
I love these kinds of hacks, and I wonder what’s going to happen to this one. Will it fly under the radar, or will some websites start abusing it? If so, will Safari clamp down on it, or will it actually give people a proper API for haptics?
This blog is about craft, but sometimes the answer to craft is not skill or taste or awareness or effort, but it’s creating conditions for craft to flourish. Workday looks like Workday, and your banking app looks like your banking app, not because there aren’t enough designers and engineers around that know how to do it better.
This is a thoughtful post by Anil Dash about Apple’s recent announcement of introducing video podcasting, warning how the conditions set up right now will lead to enshittification, and proposing changes:
This will also start to impact content. You don’t hear podcasters saying “unalive” or censoring normal words because there is no algorithm that skews the distribution of their content. The promotional graphics for their shows are often downright boring, and don’t feature the hosts making weird faces like on YouTube thumbnails, because they haven’t been optimized to within an inch of their lives in hopes of getting 12-year-olds to click on them instead of Mr. Beast — because they’re not trying to chase algorithmic amplification.
It’s worth reading even if you don’t care much about podcasts.
One of the most mysterious keys on the PC keyboard has always been Scroll Lock, joining Caps Lock and Num Lock to create the instantly recognizable LED triumvirate:
Scroll Lock was reportedly specifically added for spreadsheets, and it solved a very specific problem: before mice and trackpads, and before fast graphic cards, moving through a spreadsheet was a nightmare. Just like Caps Lock flipped the meaning of letter keys, and Num Lock that of the numeric keypad keys, Scroll Lock attempted to fix scrolling by changing the nature of the arrow keys.
This is normal arrow key usage in Lotus 1-2-3, doing what you’d expect, if likely a bit slower:
And this is Lotus 1-2-3 with Scroll Lock enabled. Here, the arrows do not move the cursor, but move the spreadsheet:
In time, scrollbars helped with the problem, then mice with wheels solved it in one direction, and then trackpads in both. (Although: even though my 2025 Windows laptop doesn’t have a Scroll Lock key, its onscreen keyboard does, and the key still works in Excel.)
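In code, the arrow-key flip is tiny – a toy model of the behavior, not how 1-2-3 actually implemented it:

```python
def press_down(cursor_row, viewport_top, scroll_lock):
    """One press of the down arrow, returning (cursor_row, viewport_top).

    Without Scroll Lock, the arrow moves the cursor within the sheet;
    with it, the arrow scrolls the sheet and the cursor stays put.
    """
    if scroll_lock:
        return cursor_row, viewport_top + 1
    return cursor_row + 1, viewport_top
```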
But, I grew to believe that UI problems never fully die, and often come back dressed up in new clothes.
This is the TV app on my Apple TV, doing movement as you’d expect:
But Netflix a while back picked a different approach – scrolling almost as if Scroll Lock was on:
More recently, I saw that approach spread to HBO Max and YouTube apps as well:
Is this good? To me personally, the Scroll Lock-esque approach feels strange and claustrophobic. I see the (hypothetical) value of keeping the selection in one place, but the downsides are more pronounced: things feel lopsided, going back in this universe is flying blind, and the system creates strange situations at the edges, where Scroll Lock struggled as well.
And yet, given I just dated myself by reminiscing about Lotus 1-2-3, I’m curious how it feels to others.
From May last year, a 21-minute video by Linus Boman about font piracy, specifically during the era of personal computing and early internet:
The nuances of what separates font piracy from non-pirated revivals or general inspiration are too much even for me, but I liked how the video moved on from the obvious and cheap “haha, you wouldn’t pirate a font” story to cover a few of the more complex issues with panache.
My small contribution to the discourse is that I just scanned an interesting booklet from 1979 called Typeface Analogue, which catalogs various names different phototypesetting manufacturers used for their “replica” fonts – a sort of a translation table between once-relevant parallel type ecosystems.
Some are pretty uninspired: CS for Century Schoolbook, OP for Optima, Eurostyle for Eurostile, and so on. Others are more interesting: a version of Palatino called Patina, American Classic becoming Colonial, or Futura renamed to Twentieth Century. Absolute fav? Helvetica becoming Megaron.
The display fonts you see on this blog are my vector conversion and slight improvement (kerning pairs!) over a bitmap PC/GEOS font called University, which itself was inspired by the original Macintosh’s Geneva. Inspired or downright stolen? You decide:
Those terms confused me back in the day, and occasionally they still do, so I thought it might be nice to write it all out. This is how I understand them:
deterministic: whenever you do something, it gives you identical results as any previous or next time you do it
idempotent: if you do something twice, or thrice, or more times, it will be the same as doing it once
But “deterministic” in UI design might mean something more specific. Let’s take pressing ⌘B to bold, for example:
Every time you press ⌘B on an identical selection, it will behave the same. But, pressing ⌘B doesn’t guarantee something will get bold. If it already is bold, the command will reverse its meaning. In this sense, ⌘B is non-deterministic.
It’s not hard to imagine a deterministic version of bolding. Make ⌘B bold the text, and make another shortcut – say, ⌘U – unbold it. This way, you can always press either and be absolutely confident you will get a predetermined result without worrying about anything else. It’s a boon for motor memory, but it is more complex to explain, and it adds more UI surface.
There is also another, more interesting way: you can make ⌘B always bold first, and unbold second. This way, your fingers can remember ⌘B is for bolding, and ⌘BB is for unbolding. But this also gets tricky: for already fully bolded text, it might seem the feature is broken, because the first keystroke does nothing!
Only the second of these three approaches is idempotent, meaning you can invoke it many times and it will always give the same result:
⌘B toggles bolding from current state: non-deterministic, non-idempotent
⌘B bolds, ⌘U unbolds: deterministic, idempotent
⌘B toggles bolding, always starts from bold: deterministic, non-idempotent
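The three behaviors above are easy to model if we reduce “bold” to a boolean – a sketch, with the third variant keeping its own press counter:

```python
def toggle_bold(bold):
    """1. ⌘B toggles: the result depends on the current state
    (non-deterministic in the sense above), and pressing twice
    differs from pressing once (non-idempotent)."""
    return not bold

def force_bold(bold):
    """2. ⌘B always bolds, a separate ⌘U unbolds: deterministic, and
    idempotent, since force_bold(force_bold(x)) == force_bold(x)."""
    return True

def make_bold_cycle():
    """3. ⌘B bolds on the first press and unbolds on the second,
    regardless of the starting state: deterministic, but not idempotent."""
    presses = 0
    def press(bold):
        nonlocal presses
        presses += 1
        return presses % 2 == 1  # odd press bolds, even press unbolds
    return press
```

Note the quirk described earlier: with already-bold text, the cycling variant’s first press “does nothing,” because bolding bold text is a no-op.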
One of my favorite idempotent concepts is the Clear key present on many calculators, and still on some larger Mac keyboards.
The idea behind Clear is simple. Let’s say you’re a professional keypad user – maybe an accountant? – typing in numbers for hours a day. You just made a mistake. Pressing Backspace will remove the last digit, but are you sure you only made a mistake on that last digit? What if your fingers brushed another key and you typed in two digits by accident?
Instead of using the non-idempotent Backspace key where you’d have to look at the screen to confirm, it’s easier just to press the idempotent Clear which will always remove the entire number, and then start from scratch without even having to look anywhere (as gamers would say, “no scope!”).
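The contrast fits in a few lines (a sketch treating the entry as a string of digits):

```python
def backspace(entry):
    # Non-idempotent: every press removes one more digit, so you must
    # watch the screen to know when to stop.
    return entry[:-1]

def clear(entry):
    # Idempotent: one press or five, the result is the same empty entry.
    return ""
```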
And, for people who are moving fast, it feels safe just to press the shortcut or a button instinctively, for ease of their mind, even if nothing has to be done. Some people might choose to press it a few times, just in case. The Esc key often has that property – isn’t it just nice to slam Esc many times? – and Jeff Jarvis in his 2014 essay talked about another shortcut that felt that way:
Since I don’t need ⌘S anymore, I can now appreciate how much it had become a part of my ritual of writing and even of thinking. I used to hit ⌘S not just as data insurance — hell, I’d often hit it after having not made a single change in my text since the last time I’d hit it. I hit ⌘S as a break, a psychic, semiotic semicolon. It gave me a moment to search for the right word, to plan the structure of where I would go next, to commit to what I’d written, or to wonder whether I had the courage to erase what I’d written and try again.
(In Figma, where ⌘S wasn’t necessary, we used to show this – but we only showed it once every fortnight, since some people would press the shortcut instinctively like Jarvis, and find the message distracting and maybe even patronizing.)
All of these options have pros and cons. The beauty of determinism and idempotence is that they free you from paying attention. I always get a bit nervous when someone tells me that in their country, you can press the elevator button again to unset it. Even if you don’t make a button a toggle, visually disabling it or showing a message (“Nothing to delete!”) when it has nothing to do could feel like a friendly gesture toward newcomers, but its non-idempotence will grate people who know what they’re doing. Determinism and idempotence are good for motor memory to develop, but – just like the above bolding example – might be initially more confusing.
The approaches can coexist; browsers give you ⌘⌥←→ to move between tabs (non-deterministic, non-idempotent) and ⌘1/2/3 to switch to a specific tab (deterministic, idempotent). Some places even offer a choice. In macOS, you can say whether you want clicking on a scrollbar chute to be deterministic or not:
…although usual choice-giving caveats should apply.
I think it’s good to think about those things, especially for interfaces used professionally. Magical things happen if you can trust your fingers and sometimes, if you worry too much about novice users, that might make it hard for pro users to emerge.
As a former ISP employee I occasionally like dipping my toes into some networking stuff, and this 25-minute video from The Serial Port is a good retelling of the day in 2014 when one of the internet’s important routing tables crossed the threshold of 512K entries, which caused all sorts of trouble:
What I appreciate about The Serial Port is that they always seem to actually test the vintage hardware or rebuild the old software they’re commenting on, and this time was no exception: they grabbed a classic unsung hero of ISPs, a Cisco Catalyst 6500-series router, and then recreated “The 512K Day” in their studio.
This was a nice comment under the video:
Have absolutely no knowledge about networking, but watched this video as if a thriller movie. Thanks for opening my world of tech to networking.
Yeah, the video is kind of nerdy and intense, but maybe you’ll enjoy it; even a classic aging piece of hardware with an arbitrary ticking-bomb limit deserves some respect.
Also, the funniest comment:
I had a 2.4k day a couple days ago when I realized Farm Sim 22 only allows a max of 2400 bales. Couldn’t load into my saved game. Had to go into items.xml and temp remove a hundred bales.
I read Erika Hall’s Just Enough Research. I’m not going to review the entire book as it feels a bit off-topic for this blog, but the chapter about surveys had me nodding my head so much I’d love to excerpt a few things:
The questions can be asked in person or over the phone, or distributed on paper or collected online. The proliferation of online survey platforms has made it possible for anyone to create a survey in minutes.
This is not a good thing.
Surveys are the most dangerous research tool — misunderstood and misused. They frequently blend qualitative and quantitative questions; at their worst, surveys combine the potential pitfalls of both. […]
If you ever think to yourself, “Well, a survey isn’t really the right way to make this critical decision, but the CEO really wants to run one. What’s the worst that can happen?”
Brexit.
Hall highlights that surveys are much harder to debug than other methods:
It’s much harder to write a good survey than to conduct good qualitative user research—something like the difference between building an instrument for remote sensing and sticking your head out the window to see what the weather is like. Given a decently representative (and properly screened) research participant, you could sit down, shut up, turn on the recorder, and get useful data just by letting them talk. But if you write bad survey questions, you get bad data at scale with no chance of recovery. It doesn’t matter how many answers you get if they don’t provide a useful representation of reality. […] Surveys are the most difficult research method of all.
[…] Bad code will have bugs. A bad interface design will fail a usability test. A bad user interview is as obvious as it is uncomfortable. […] A bad survey won’t tell you it’s bad.
And that they might be seductive because they feel like hard data:
Designers often find themselves up against the idea that survey data is better and more reliable than qualitative research just because the number of people it is possible to survey is so much larger than the number of people you can realistically observe or interview. [… But] unless you are very careful with how you sample, you can end up with a lot of bad, biased data that is totally meaningless and opaque.
There’s also this hilarious bit:
Managers love NPS because it was designed to be loveable by managers. It’s simple and concrete and involves fancy consultant math, which makes it seem special. But is this metric as broadly applicable and powerful as it claims to be?
Nah.
NPS is not a research tool. I shouldn’t even be talking about NPS in a research book.
The entire book is worth a read, with a lot more to offer than the pithy quotes I excerpted above. I really liked its pragmatic approach to research that understands the realities of the industry.
Mac allows you to assign keyboard shortcuts to menu items, but the interface is clunky – you have to select the app even if you just came from it, and then type in the menu item name by hand without any assistance:
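(As an aside, those System Settings entries land in each app’s `NSUserKeyEquivalents` defaults domain, which you can also write from the command line. The bundle ID, menu title, and key combination below are just an example:)

```shell
# Assign ⌘⌥P to Finder's "Show Package Contents" menu item.
# In the key string, @ stands for Command, ~ for Option, $ for Shift.
defaults write com.apple.finder NSUserKeyEquivalents \
  -dict-add "Show Package Contents" "@~p"
```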
Other tools, like Keyboard Maestro, do something similar. You either have to type it again, or you can point to it, but in a replica of the menu of the app shown in a very different style and orientation:
But this week I learned of another app, KeyCue, that approaches this differently. You simply point to the menu item and hold the desired key for a while:
Okay, this is not a universal endorsement. The feature works clunkily, and KeyCue as a whole is way too comfortable adding itself to login items without asking.
But as far as singular interactions go, this is great and eye-opening. It made me realize that the previous things I’ve shown – System Settings, Keyboard Maestro – are really not GUIs, and they don’t practice direct manipulation. They’re still partially command line interfaces dressed up in GUI clothing.
We kind of lightly made fun of Jony Ive going angelic on “staying true to the material” and things being “beautifully, unapologetically plastic.” And there is, of course, value in command line and those kinds of approaches. But this part of KeyCue at least is unapologetically a graphical user interface, and it is nice to still be surprised in this space.
Before computer graphics, movies relied on matte paintings to extend or flesh out the background. This is perhaps my favourite matte painting, from the end credits of Die Hard 2:
Turns out, videogames do something similar, except the result is called a skybox, since it has to encompass the player from all sides. It’s another way to use cheap trickery to pretend the world is larger than it is.
This 9-minute video by 3kliksphilip shows a few more advanced skybox tricks from Counter Strike games using the Source engine:
I particularly liked two discoveries:
In the real world, you wouldn’t style backfacing parts, since the player will never be allowed to see them from the other side. Here, you don’t even have to render them.
Modern skyboxes have layers and layers of deceptions: more realistic 3D buildings closer to you, and completely flat bitmaps far away. It almost feels like each skybox contains the history of skybox technology that preceded it.
On the other hand, seeing clouds as flat bitmaps was really disappointing.
Beagle Bros was a 1980s software company making apps for the Apple II, still remembered fondly for their personality.
The company had a hobbyist slant, selling various small tools and collections with fun names like Beagle Bag (in the “Indoor Sports” collection) and DOS Boss and Utility City – similar perhaps to Norton Utilities on the PC side, but with a lot more fun and charisma. This is one of their loading screens, also showing both their recognizable logo and their endearing quirkiness:
How do you understand a man who has three clocks on his wall, showing the time in three different cities – San Diego, Fresno, and Seattle – all, of course, showing the same time (“If anything changes in those cities, we’ll know about it”)?
(I find the anachronistic combination of hedcuts and dot matrix printer typography particularly fascinating.)
Some of their software was more serious; Beagle Bros released many useful tools and even text editing and presentation apps. They also made practical posters:
But other stuff…? It was just goofing off:
How does this relate to craft and quality?
There is this interesting question about how much product and marketing and vibes and lore correlate. Did we forgive Sierra On-Line the numerous flaws of their games because we liked the company? Do we love Panic because we like what they do, or because of how they do it? Did Google put doodles on its homepage to distract people from more nefarious things, or because it just felt like a fun way to celebrate things? Is there such a thing as pure selflessness? What is the nature of free will?
Those are, perhaps, topics for future posts.
But Beagle Bros must have been doing something right if there is still a living, elaborate catalog of their works online, 40+ years later. Jeff Atwood also argued in 2015 that it was more than just fun – or that “fun” itself can give back in great ways:
Here were a bunch of goofballs writing terrible AppleSoft BASIC code like me, but doing it for a living – and clearly having fun in the process. Apparently, the best way to create fun programs for users is to make sure you had fun writing them in the first place.
But more than that, they taught me how much more fun it was to learn by playing with an interactive, dynamic program instead of passively reading about concepts in a book. […]
One of the programs on these Beagle Bros floppies, and I can’t for the life of me remember which one, or in what context this happened, printed the following on the screen: “One day, all books will be interactive and animated.”
I thought, wow. That’s it. That’s what these floppies were trying to be! Interactive, animated textbooks that taught you about programming and the Apple II! Incredible.
Steven Frank, the co-founder of Panic, wrote this in 1999, with similar themes:
You never knew exactly what you were going to get. I remember one program listing printed on the side of a bird that, when run, produced a series of wild chirping noises from the Apple’s speaker. And this was from a program that was only five to ten lines long. As a neophyte BASIC programmer myself, I was stunned and amazed. How could you make something this cool with this small amount of code? […]
Beagle Bros’ tools were fantastic. They literally let you do the (allegedly) impossible, like change the names of operating system commands. And they always packed the disks full with extra stuff. Demos of their other products, and strange graphics hacks that existed for no reason other than the fact that they were cool, and because there was spare room on the disk. Beagle Bros. had a lot to do with why I ever wanted to learn programming in the first place. […]
I’ll never forget the book. […] The book was a huge compilation of all around interesting stuff. Weird Apple II tricks that were pointless, but endlessly fascinating. Like the fact that there were extra offscreen pixels of lo-res graphics memory that you could write to, that never got displayed. Or how to put “impossible” inverted or flashing characters into your disk directory listing. Or how to modify system error messages. Not very useful, but really fun to know and really, really cool to mess with. My dad was convinced I was going to somehow break the computer with all this hacking, but a simple reboot always fixed everything.
(I swear I did think of Panic above as a spiritual successor to Beagle Bros without knowing that their work literally inspired one of the Panic’s founders!)
The subtlety: They had utilities which produced formatted Basic listings, and they would give example output of these utilities in their ads and catalogs. It was quite a while before I realized that most of those examples were not program excerpts, but complete programs which of course contained the Beagle Bros signature weirdness. And then there were the seemingly innocent hex dumps. My favorite was from the cover of one of their catalogs, which had a classic picture of this fellow sitting in a chair. On the floor next to him is a handbag with a piece of tractor paper sticking out. On the paper is a hex dump: 48 45 4C 50 21 20 and so on, which are ASCII codes that spell out the message: “HELP! GET ME OUT! I’M TRAPPED IN HERE!----SOPHIE”
After the work the company had done on AppleWorks 3.0, Simonsen felt ready to jump into the Macintosh market with a “Mac AppleWorks” of their own – they called it Beagle Works. Unfortunately, other companies – giants in the Mac market such as Microsoft, Claris, and Symantec – had the same idea. Their resources were far greater than Beagle Bros had imagined, and the race was costly.
The gamble killed the company. It’s likely that the changing software market would have anyway.
But the years before seem to still inspire some people. Check out the Beagle Bros Repository – the homepage is a bit confusing (I think it prominently shows last-updated or last-added things for some reason?), but just use the nav at the top. Maybe it will inspire you, too.
In short, the way I think about software quality is the amount of meaningful problems. […]
There are problems in Finder — resizing columns, renaming or deleting files synced with a FileProvider-based app, and different views not reflecting immediate reality. There are problems with resizing windows. AirPlay randomly drops its connection. AirDrop and other “continuity” services do not always work or, in an interesting twist I experienced a couple days ago, work fine but display an error anyway. The AirPlay and AirDrop menus shuffle options just in time for you to tap the wrong one. […]
These are the products and features I actually use. There are plenty of others I do not. I assume syncing my music collection over iCloud remains untrustworthy. Shortcuts seems largely forgotten. Meanwhile, any app augmented by one of Apple’s paid services — Fitness, News, TV — has turned into an upselling experience.
As I’m reading this and thinking about my own Apple usage patterns and a similar litany of problems, I keep returning to Apple TV, which feels by far like the most stable and least troubled platform. I wish I had a better explanation for it: Is Apple magically really good at TV interfaces? Are they benefitting from it being a “hobby project”? But I think the Occam’s Razor here is this: tvOS is just a lot simpler.
And just like that, a thought appears: Is what we’re seeing overall really just Apple losing the battle with complexity?
Apple won once, in the late 1990s, when on the hardware side all the Performas and Newtons and LaserWriters were cut ruthlessly, and on the software front Mac OS X pushed Classic away as the operating system. The situation was different then, however, because there was no other choice. Today, Apple seems successful on paper, so the pressure needs to come from inside, from someone high up enough to recognize that what Apple is doing vis-a-vis software quality is not sustainable and hasn’t been for some time now. That the bill already came due on all of the decisions where systems thinking and deep testing and focus and preventative maintenance and paying off design debt have been deprioritized in favour of another shiny launch event that stretches the teams and platforms even thinner.
When thinking about complexity, a different go-to framework I have is “can I explain a situation in a short paragraph?” This can help separate regular bugs (where the explanation is typically: I am doing the thing that used to work and it’s no longer working, so something broke), from bigger problems that require some serious long-term system-thinking approach. Off the top of my head, there are many things I can no longer explain:
I cannot explain Apple’s widget strategy
I cannot explain what is going on with the Fn/Globe key
I cannot explain the long-term thinking surrounding icons in Tahoe menus
Of course, it’s not me who should be explaining those things. And I haven’t done this exercise before, so I don’t know for sure if things are getting worse here. It feels like it, though. I wonder if Apple just hit some sort of limit of being able to deal with complex things, and the first course of action should be: don’t throw even more complex things on your plate.
It’s probably impossible to tell the upper echelon of Apple that it’s breaking revenue records in spite of its software design, not because of it. I hope the next regime knows better.
When you start a new game in SimCity 2000 (you can try it in the browser yourself), as the city is generated, you see a few messages fly by: Creating Hills, Tracing Rivers, Smoothing. Among them, for a bit, one can see “Reticulating Splines”:
“Reticulating splines” is a giant pulling of our legs. Will and some others made up the phrase because they thought it looked and sounded as if it meant something. It might: the word “reticulate” means to divide something so that it looks like, or appears to be, a net or a network generating, perhaps, from a single point; a “spline” can be an irregular curve or the approximation of a curve. Individually the terms have meaning. Together – in the case of SimCity 2000 – they don’t. It’s just a prank and a joke.
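(For the curious: a spline in practice is just a piecewise polynomial curve. Here’s a minimal sketch of one common variety – a single Catmull-Rom segment, evaluated in plain Python – with no claim that this resembles anything SimCity actually computes:)

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Evaluate one Catmull-Rom spline segment at t in [0, 1]:
    # a smooth curve that passes exactly through p1 and p2,
    # with p0 and p3 shaping the tangents at the ends.
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# The curve interpolates the inner control points exactly:
ys = [1.0, 3.0, 2.0, 4.0]
assert catmull_rom(*ys, 0.0) == 3.0  # t=0 lands on p1
assert catmull_rom(*ys, 1.0) == 2.0  # t=1 lands on p2
```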
In some versions of the game, there was also a seductive woman’s voice saying the phrase out loud, which presumably made it even more memorable.
The phrase moved to other Maxis games, notably The Sims…
I’ve heard the argument that it wasn’t just Reticulating Splines – that Will Wright’s joke was the beginning of the habit of putting “cute” loading messages in apps, including actual not-game and definitely-not-cute applications. I am 100% sure there are some earlier examples of “funny” loading or error states, but I also see how this one attained a certain critical mass and influence.
I hate these cute loading strings with passion. I think I’m in the minority. It’s a topic for a future time, but it was fun at least to trace some part of its history, sifting through hundreds of pages earnestly explaining the concept of “reticulating splines” to people. Whether they’re in on the joke, I am not sure.
Also, okay. Fair enough. I chuckled just now when I saw this:
This was a fun 15-minute architectural video from Stewart Hicks (absolutely worth a follow otherwise) that mapped precisely into the same kind of tension and internal debate I sometimes feel when talking about minimalism in UX design: Minimalism is good! Until it’s not!
One interesting lesson here is that the famous “less is more” was actually – surprise! – perverted from the original poem, where it meant “less technical perfection means more emotional impact.”
I wasn’t fully sure why Hicks decided to incorporate a commentary into his own story this way – maybe he was afraid that the sarcasm of “steel wanting to share its joy” and “lessness” and “simplificity” wouldn’t land well? Or perhaps it was just the introduction that didn’t quite work for me, as it muddled the entire joke.
But it was fun to watch it twice anyway. Those stories are never easy. I am not ready to draw too many parallels between architecture and UX design, even if Hicks lightly does so at the end. There’s no gentrification and displacement when Liquid Glass takes over Aqua, although I think a lot of people would love to see Apple’s recent design decisions meeting the business end of a wrecking ball.
My favourite recent saying to replace “less is more” is this, by Paul Valéry (another poet!):
Everything simple is false. Everything complex is unusable.
You can see it as unsolvable, cynical, maybe even nihilistic. I do too, on a dark day. But more often, I see it as a great challenge. “Less is more” has this simplistic seductiveness that feels naïve. “More” is not an option, but often in my work on complex systems “less” is neither, and a lot of UX design is finding the perfect shade of gray.
I keep thinking about this very good 11-minute Not Just Bikes video about traffic calming. In it, a simple argument is made: the posted speed limit of any given street or road doesn’t really matter. What matters is how the street feels. Generously wide and separated lanes, sparse traffic lights, and the road being straight past the horizon will make you unconsciously speed up. Reducing the posted speed limit or adding flashing YOUR SPEED signs won’t help:
The truth is that many drivers will not slow down because of signs or speed limits. They’ll slow down either because they don’t feel safe, or because they’re afraid of damaging their car.
The only answer is redesigning the street for the desired speed limit – narrowing the lanes or joining them, creating choke points and speed bumps, adding posts and planting trees close to the road, and even adding visual cues like “dragon’s teeth.”
One of the great things about driving in the Netherlands is that it’s rarely necessary to look at the speed limit. The road design takes care of that for you.
There is an app I use a lot called Forklift, a souped-up Finder, with one of its functions being syncing files to a remote server.
In its version 3, the syncing window looked like this:
This is a pretty straightforward and dependable function – and I’ve depended on it for years.
I recently updated to version 4 to check it out, particularly since it promised faster syncing. But I was taken aback by how it randomly deteriorated:
It’s not just that there are some UI challenges: the new icons make it harder to understand hierarchy, and one of the switches starts with “Don’t,” in contravention of the rule against double negatives.
No, the worst part is this:
This is a new temporary state meant to help me understand the details of what’s changing.
On the surface, it’s a thoughtful thing. But it’s done in the worst possible way for this kind of a power-user interface: It’s very slow to invoke and slow to cancel. I often activate it by accident – it makes large swaths of UI a minefield where you can no longer rest your cursor safely. It also changes the hierarchy of the output in a way that’s confusing – and it even animates the text wrapping in a distracting way. Then, if you press Esc instinctively to get rid of whatever happens, the window closes altogether.
It’s a “delightful,” luscious transition that is completely out of place. I think this is how many people misunderstand craft – that it’s only about “high polish” without any thought underneath. Here, the effort was spent on executing something that couldn’t be saved this way and needed a more serious rethink. It seems like its creators forgot who’s using the app and for what, and embarked on accidental UI calming.
There are other challenges along the same lines, both downgrades from version 3:
when the app analyzes the differences, I can no longer press the Sync button and walk away
even when the button becomes active, I can no longer press Enter to activate it – I have to use the mouse
In version 3, I could invoke Sync, immediately press Enter, and get on my merry way, with syncing continuing in the background. It was exactly what I wanted. Version 4 slows me down by requiring me to pay constant attention to the interface: it matters where I rest my mouse, it matters when I click the button, it matters what input device I use to commit.
It’s okay to think about friction, and sometimes transitions are indeed very helpful for UI calming, to avoid drastic movements or accidental activations. But here, this isn’t great at all; the creators of Forklift promised me faster syncing and achieved the opposite.
I thought Safari’s release notes are pretty good – exhaustive, direct, something you won’t ever read for pleasure, but something you can actually learn a lot from even if you are just scanning.
I wish either MDN or Can I use… integrated them in some way (and, of course Chrome’s and Firefox’s as well), so that looking up a certain feature or property – for example, will-change – would show you the recent changes in browsers in reverse chronological order, which could help you understand the details of the feature evolving, and help with debugging.
A really interesting 28-minute video by daivuk about making QUOD, a first-person shooter game that fit in just 64 kilobytes:
I found watching it strangely enthralling and even nerve-racking. The creator keeps adding stuff that seemingly has no chance of fitting into such a small space – textures! sound effects! music! his own language! – and somehow finds a way to squeeze them all in.
This is inspiring, but also practically useful: even though you and I are maybe never likely to need such high optimization, some of these techniques alone could be useful in some tight quarters like a load-bearing CSS file, or embedded software.
As an example, the author wrote his own “music tracker,” which is a clever way to reduce the weight of music: instead of the tune being one big audio file, only the instruments are sampled, and then arranged in repeating patterns.
Except in his case, there were no instruments… just audio effects already existing in the game. And audio effects themselves were generated in a similar way, by combining smaller waves and effects.
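The general tracker idea can be sketched in a few lines – the sample data and pattern layout below are made up for illustration, not daivuk’s actual format:

```python
# A toy tracker: store tiny samples once, then describe the music as
# repeating patterns of (sample, tick) events instead of one big audio file.
samples = {
    "kick": [0.9, 0.5, 0.1],  # short placeholder waveforms
    "blip": [0.2, 0.4, 0.2],
}
pattern = [("kick", 0), ("blip", 4), ("kick", 8)]  # one bar of events
song = [pattern, pattern, pattern]                 # repetition is nearly free

def render(song, ticks_per_pattern=12):
    # Mix every event into one output buffer by summing sample values.
    out = [0.0] * (ticks_per_pattern * len(song))
    for i, pat in enumerate(song):
        for name, tick in pat:
            start = i * ticks_per_pattern + tick
            for j, value in enumerate(samples[name]):
                out[start + j] += value
    return out

audio = render(song)  # 36 ticks of mixed audio from ~10 stored numbers
```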
The same was done for textures: the creator wrote a bespoke texture editor that saves each texture as smaller pieces and combination instructions – a sort of “PDF” of a texture rather than a more costly scan of the printed page – and re-generates it on entry.
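Here’s a toy sketch of the same recipe idea for textures – the operation names are invented for illustration, not the game’s real instruction set:

```python
# Store a texture as a short recipe of operations, and re-generate the
# full bitmap at load time instead of storing every pixel.
def build(recipe, w, h):
    tex = [[0] * w for _ in range(h)]
    for op, *args in recipe:
        if op == "fill":            # flat base color
            value, = args
            tex = [[value] * w for _ in range(h)]
        elif op == "checker":       # alternating two-color pattern
            a, b = args
            tex = [[a if (x + y) % 2 == 0 else b for x in range(w)]
                   for y in range(h)]
        elif op == "brighten":      # adjust every pixel by a constant
            amount, = args
            tex = [[v + amount for v in row] for row in tex]
    return tex

# A few bytes of instructions expand into a full 4×4 bitmap:
wall = build([("fill", 10), ("checker", 10, 14), ("brighten", 1)], 4, 4)
```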
Lastly, this debug view of “cost” was really interesting. (Good debug views, in my opinion, are generally underrated.)
As you progress in your UI design career, you learn that there are quite a few unsolvable challenges:
do you write My Items or Your Items in UI?
do you put hand cursors over buttons?
for a boolean item (especially in the menu), do you talk about the present state or the future state?
do you try to solve for change blindness or change aversion?
I was reminded of one of those today: how do you sort items in a bottom-aligned menu?
One school of thought is to keep it in the same order as you would a regular top-aligned menu:
On the positive side, this allows you to build a consistent understanding of how menus are structured: the most important thing is at the top, Quit is always at the bottom. But the downsides are obvious, too – now the most important item is furthest away from where your cursor started, and you have to awkwardly cross all the other items on the way to it.
iOS’s springboard went, literally, the other way:
Here, the bottom-aligned menu reverses its item order. This tripped me up today. The dock in macOS was actually more defensible upside down because there, every menu was always going the same way. Here, the inconsistency starts rearing its ugly head.
Of course, the best way to not face an impossible choice is to avoid it altogether. Not sure how one could accomplish it here, though. Placing the menus consistently below would make some of them scrollable, or basically invisible for bottommost icons. You could also slide the entire screen up to make room for the menu, but that would probably feel disorienting.
So, I can’t say this is a wrong solution. The inconsistency might only bother people who use this often, and maybe no one uses this often? Or, perhaps, it was really important to allow resizing widgets and make that item as easy to tap as possible? But still, I think I would have done it the other way – align as needed, but items always in the same order.
An interesting crowdsourcing effort from Mimestream that asks users who want snoozing to pick up a phone and dial their representative, so to speak – to put pressure on Gmail to add that feature to the API.
I wonder how much of a chance of succeeding this has. The issue has been open since 2018, and was reopened in 2023. It has over 5,100 stars. Those dates seem old and the numbers seem huge, but I don’t have a full frame of reference. Casual search shows there are only two more bugs that have been starred by more people.
I feel like macOS these days is starting to resemble Windows in the 1990s, where occasionally some core component of it breaks, and a reboot is necessary to restore it to full functionality again.
But even with that in mind: this happened literally right after the reboot, with nothing much happening and no other signs of the system in distress.
It’s hard for me to even understand what would make this kind of thing pop up. Trash feels like one of the core tenets of a GUI – like undo, or copy/paste, or windows gaining focus. You don’t expect it to just… stop working, especially with a circular error message like the above.
The breathing light – officially “Sleep Indicator Light” – debuted in the iconic iBook G3 in 1999.
It was originally placed in the hinge, but soon was moved to the other side for laptops, and eventually put in desktop computers too: Power Mac, the Cube, and the iMac.
The green LED was replaced by a white one, but “pulsating light indicates that the computer is sleeping” buried the nicest part of it – the animation was designed to mimic human breathing at 12 breaths per minute, and feel comforting and soothing:
Living through that era, it was interesting to see improvements to this small detail.
The iMac G5 gained a light sensor under the edge of the display in part so that the sleep indicator light wouldn’t be too bright in a dark room (and for older iMacs, the light would just get dimmer during the night based on the internal clock).
In later MacBooks, the light didn’t even have an opening. The aluminum was thinned and perforated so it felt like the sleep light was shining through the metal:
And, for a while, Apple promoted their own display connector that bundled video and power – but also a bit of data, which allowed it to do this:
Back when I had a Powermac G4 plugged into an Apple Cinema Display, I noticed something that was never advertised. When the Mac went to sleep, the pulsing sleep light came on, of course, but the sleep light on the display did too... in sync with the light on the Mac. I’ve tested that so many times, and it was always the same; in sync.
Just a little detail that wouldn’t sell anything, but just because.
To do this I shifted the first Gaussian curve so that its domain starts at 0 and remains positive. Since the time domain is 5 seconds total and the I:E ratio is known, it was trivial to pick the split point and therefore the mean. By manipulating sigma I was able to get the desired up-take and fall-off curves; by manipulating factor “c” I was able to control for peak intensity.
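Based on that description, the curve can be sketched like this – the 5-second period follows from 12 breaths per minute, but the specific I:E ratio and sigma values below are my own guesses, not the quoted author’s:

```python
import math

def gaussian(t, mu, sigma, c):
    # A scaled Gaussian: factor c controls peak intensity.
    return c * math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

PERIOD = 5.0          # 12 breaths per minute → 5 seconds per breath
SPLIT = PERIOD / 3.0  # an assumed 1:2 inhale:exhale ratio sets the peak

def breathe(t):
    # Two Gaussians sharing a peak at the split point: a narrower one
    # for the inhale (faster up-take), a wider one for the exhale
    # (slower fall-off). The sigmas are arbitrary illustrative picks.
    t = t % PERIOD
    if t < SPLIT:
        return gaussian(t, SPLIT, 0.6, 1.0)
    return gaussian(t, SPLIT, 1.1, 1.0)

# Sample one full breath to drive an LED's brightness:
brightness = [breathe(i * 0.1) for i in range(50)]
```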
But at that point, in the first half of 2010s, the breathing light was gone, victim to the same forces that removed the battery indicator and the illuminated logo on the lid.
I know each person would find themselves elsewhere on the line from “the light was overkill to begin with” to “I wish I could have seen what they would do after they introduced that invisible metal variant.”
I know where I would place myself.
This blog is all about celebrating functional and meaningful details, and there were practical reasons for the light to be there. This was in the era where laptops often died in their sleep – so knowing your computer was actually sleeping safe and sound was important – and the first appearance of the light after closing the lid meant that the hard drives were parked and the laptop could be moved safely.
The breathing itself, however, was purely a humanistic touch, and I miss that quirkiness of this little feature. If a save icon can survive, surely so could the breathing light.
By definition, security and usability coexist uneasily, so it was nice someone thought about allowing me to do this at an opportune time, rather than at a random moment that might be extremely untimely or stressful.
I think a lot about bugs or design decisions that make software appear dumb.
At some point in my career I started fitting everything against two principles: “don’t make your app treat your user as if they’re dumb” and “don’t make your app itself feel dumb.”
To wit:
This is, very obviously, my website. I have made it from scratch. I have visited it a million times. And yet, at some point my friend Noah shared a link to it with me, so now Safari occasionally announces that with glee when I check it out.
Me having visited something many times should outweigh someone sharing it with me once.
There is a close box here, although you have to hover over the bar to see it. (And, after closing, it seems to come back after a few days!) I can also right click and choose Remove which does… absolutely nothing.
I believe this whole feature is called Shared With You. Elsewhere, on occasion, I find it useful. But its tentacle right here makes Safari appear just… kinda obtuse.
Also, speaking of obtuse: Can you spot a grave typographical mistake I made on this screenshot? (I already fixed it in production.)
Over the past few years, I’ve found myself relying on TextEdit more as every other app has grown more complicated, adding cloud uploads, collaborative editing, and now generative A.I. TextEdit is not connected to the internet, like Google Docs. It is not part of a larger suite of workplace software, like Microsoft Word. You can write in TextEdit, and you can format your writing with a bare minimum of fonts and styling. […]
I trust in TextEdit. It doesn’t redesign its interface without warning, the way Spotify does; it doesn’t hawk new features, and it doesn’t demand I update the app every other week, as Google Chrome does.
But I get the feeling that Chayka would be better served switching from TextEdit to Apple Notes for most of these things he’s creating. Saving a whole pile of notes to yourself as text files on your desktop, with no organization into sub-folders, isn’t wrong. The whole point of “just put it on the desktop” is to absolve yourself of thinking about where to file something properly. That’s friction, and if you face a bit of friction every time you want to jot something down, it increases the likelihood that you won’t jot it down because you didn’t want to deal with the friction.
Part of me agrees with this vehemently – for casual text wrangling, Notes is by far the best iteration of what both the old Stickies app and TextEdit attempted.
But Notes is still evolving. The UI keeps changing. I’ve had a note shared by a friend hanging alongside my own notes for years, without me asking for it. I remember the moment when tags were introduced, and suddenly copy/paste from Slack started populating things in the sidebar. Then there was this scary asterisked dialog that slid so well into planned obsolescence worries that it felt like a self-own:
And the attendant warning, ostensibly well-intentioned, adorned my notes for months, just because I had an older Mac Mini I barely touch doing menial things in a dusty closet:
On top of that, the last version of Apple Notes on my macOS occasionally breaks copy/paste (!), which led to some writing loss on my part. (If you cut from one note intending to paste in another, and realize nothing was saved in the clipboard, you lost the text forever.)
These are not show stoppers. But they too are friction that has to be juxtaposed with what Gruber lists in his essay. They’re also friction of the unexpected, new, stochastic flavour. TextEdit’s challenges, on the other hand, are known knowns. In this context, TextEdit is in that rare – and maybe increasingly treasured – place where it no longer gets updates, but it doesn’t feel abandoned, or falling apart, or at the risk of outright cancellation. (I think on the inside of tech companies this is called being “maintenanced” – not actually staffed to be improved, but still eligible for fixes to breaking bugs and for security updates.)
We need to normalize declaring software as finished. Not everything needs continuous updates to function. In fact, a minority of software needs this. Most software works as it is written. The code does not run out of date. I want more projects that are actually just finished, without the need to be continuously mutated and complexified ad infinitum.
Personally I would be very happy to live in a postcapitalist world where it was 100% FINE that desktop operating systems had “stopped evolving” because they were good enough to meet basically everyone’s needs, and there was no stock price to crash from an old monopoly having clawed its way to the top with nowhere else to go. “Let [certain] software be finished” has always felt to me like oblique pining for humanity to outgrow our current political-economic system.
The fact that in the 10+ years I’ve been using it, there’s only been a single major overhaul update is a feature, not a bug to me.
I have seen this sentiment grow in recent years, as AI is seemingly shoved into every crevice of everything whether or not it even had crevices to begin with. Liquid Glass on the Mac side and incessant ads plus bugs on the Windows side add to the malaise.
But I’ve also been in technology so long that even outside of tensions of capitalism, it’s hard for me to imagine software not changing. Code does run out of date even if you try very hard. So I don’t know yet how to square all this.
Bear is not finished/“maintenanced,” but it seems to not be changing the same way some other software is changing, either. I’m excited reading its blog – even if there are features or updates that do not pertain to me, they don’t bother me, and make me excited for others benefitting. Its innovation feels considered, not reckless.
In a week when I’m praising products I didn’t expect to praise, I feel similarly about Lightroom Classic. When Adobe in 2017 forked Lightroom Classic out of the newly-refreshed Lightroom, a lot of us got worried about the “Classic” tag having “dead man walking” connotations. But nine years later, Lightroom Classic is still being lightly updated with fixes, camera presets, and – occasionally – feature changes that largely feel welcome. Lightroom Classic appears, to once again use industry jargon, “stable.”
Maybe the answers are somewhere in this post: celebrate and fund “maintenanced” apps, fork apps into “stable” and “modern” paths, or encourage and practice slow, considered growth. I bet there are other approaches and altogether new ideas to try, too. (There used to be a tradition, when software was physical, to list all the new stuff at the back of the box. What if we started writing out the things we didn’t add?) But I like at least talking about it to begin with. There are apps in my life I want to feel like TextEdit, there are apps that I want to feel like Notes, and there are ones I’m happy to put on the cutting edge/beta/canary path, where bugs are a promise, and motor memory a distant dream.
I yearn for a software ecosystem that allows all of these types of apps to blossom.
As a Mac user I naturally focus on that platform, but Windows 11 has had its own share of problems – and that list has grown so vast it’s hard to know where to start.
So let’s pick it up at random, with a post by Thom Holwerda with a great title “You can actually stop Windows Explorer from flashbanging you in dark mode”:
One of the most annoying things I encountered while trying out Windows 11 a few months ago was the utterly broken dark mode; broken since its inception nine years ago, but finally getting some fixes. One of the smaller but downright disturbing issues with dark mode on Windows 11 is that when Explorer is in dark mode, it will flash bright white whenever you open a new window or a new tab. It’s like the operating system is throwing flashbangs at you every time you need to do some file management.
I find the videogame-inspired nickname darkly – I’m sorry! – funny, but the problem is real. It looks like this (video via windowscentral.com):
It’s not a problem unique to Windows 11 – just the other night I saw this on Wikipedia on my iPhone, exacerbated by the delayed reaction of Liquid Glass buttons spastically adapting to the changing background:
But there is something about this that feels a notch more important than other visual and layout issues.
I think this is because dark mode is a contract – we’ll lower the brightness, and we’ll let your eyes rest. There’s a physiological part to it: a sudden flash of light when your eyes are not expecting it can actually be physically painful. I think it’s worth thinking about it and futureproofing and sanding dark-mode views especially at their edges: loading states, error messages, sign-in and log-off areas. The “flashbang” analogy is very apt, and especially so on bigger screens.
I have been enthralled with this tiny feature in Google Sheets called “Show edit history,” which premiered in 2019:
Mind you, it’s not unconditional love. The execution feels a bit clunky, showing the edit values in a pop-up rather than in situ, with formatting that feels too heavy, and an awkward “No more edit history” state rather than just disabling the button.
But! Just its very presence here is delightful. Version history is often this huge, comprehensive, perhaps disorienting mode you enter that by design deals with the entire file. It always feels like a longer trip:
But edit history reimagines the feature from the perspective of the cell. You can just peek inside, quickly and effortlessly. Right click menu, a few arrows, I learned what I needed, and I barely even moved my hand. It’s a perfect example of the rule “to make something feel faster, make it smaller.” It’s like picking your newspaper at your doorstep in your pajamas rather than having to dress up to go to the newspaper store.
(…he said, dating himself and perhaps also thinking of The Sopranos for some reason.)
This kind of reimagining of something that already exists (see: undo send in Gmail) can be really hard, and I don’t even imagine Google Sheets was the first with this idea – but for me seeing this remix was eye-opening, and it inspires me to this day.
I’m not going to spoil the surprise. Am I fully supportive of the approach? Not sure. PlayStation’s region protection complicates my feelings, and any sort of DRM-esque approach eventually backfires when it comes to software preservation. But you can’t deny what Spyro developers did is a really fascinating and weird approach.
The quote in the title of this post refers to the hackers who eventually did conquer Spyro’s copy protection system. I guess – and I apologize in advance – game recognize game.