<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Aether Mug</title>
        <link>https://aethermug.com/</link>
        <description>The brain is a tool.</description>
        <lastBuildDate>Thu, 19 Mar 2026 06:55:41 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Aether Mug</title>
            <url>https://aethermug.com/favicon/favicon.ico</url>
            <link>https://aethermug.com/</link>
        </image>
        <copyright>Creative Commons CC BY-SA 4.0</copyright>
        <item>
            <title><![CDATA[AeMug Chat #2 / Experience.Computer Interview]]></title>
            <link>https://aethermug.com/posts/ae-mug-chat-2-experience-computer-interview?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/ae-mug-chat-2-experience-computer-interview</guid>
            <pubDate>Thu, 19 Mar 2026 01:54:56 GMT</pubDate>
            <description><![CDATA[Scientific or Manifest or...?]]></description>
            <content:encoded><![CDATA[<p>Following the success of the <a href="/posts/ae-mug-chat-1-aether-mug-evolves">first installment</a> (thanks for all the succulent comments!), here is another tough thinking question for you. Before that, a couple of updates.</p>

<p>Last week I had a lot of fun being interviewed by Jay Springett of <a href="https://experience.computer/" rel="nofollow noopener noreferrer" target="_blank">Experience.Computer</a>. Jay runs a truly unique podcast, focused specifically on an activity that I feel people should do and share more: &quot;curious introspection&quot;, or the careful examination of what your consciousness and perception are like and feel like from the inside. I care about it so much that I maintain a <a href="/posts/a-list-of-introspective-descriptions">living list of my and others&#x27; introspective descriptions</a>.</p>
<p>The moment you start reading/listening to this kind of mental spelunking you realize how deep, varied, and mysterious it all is. We (including our scientists) have only scratched the surface in our understanding of these matters.</p>
<p>Experience.Computer is specifically about what it&#x27;s like to <em>imagine</em> things—images, motions, places, sensations. Jay has <a href="/posts/aphantasia">aphantasia like me</a>, and <a href="/posts/i-do-not-remember-my-life-and-it-s-fine">SDAM like me</a>, so his podcast starts with a long and intriguing series of questions to his guests about what really happens in their mind when they <a href="/posts/normality-and-surprise-in-an-image-free-mind">imagine concrete things</a> like balls rolling on tables, or remember events from the past. The kind of questions that people would never think or bother to ask—until they realize that everyone has a different answer!</p>
<p>Apparently I was the first aphantasic to join the show as a guest, so Jay and I had fun comparing our experiences. It turns out that even between aphantasics the way the mental world is experienced can vary a lot. &quot;A lack of imagery&quot; is too reductive a term for this strange trait.</p>
<p>If this kind of exploration intrigues you, you can listen to the podcast <a href="https://experience.computer/p/marco-giancotti" rel="nofollow noopener noreferrer" target="_blank">here</a> or on the major podcast platforms.</p>
<p><a href="https://bsky.app/profile/thejaymo.net" rel="nofollow noopener noreferrer" target="_blank">Jay</a> was extremely kind and curious throughout, and he knows how to nudge you to dig deeper and find words to express the inexpressible. His <a href="https://thejaymo.net/" rel="nofollow noopener noreferrer" target="_blank">other</a> podcast episodes and blogs are well worth checking out, too.</p>

<p>As I mentioned <a href="/posts/ae-mug-chat-1-aether-mug-evolves">last time</a>, I&#x27;m working on creating a buffer of blog posts to free myself from the curse of the weekly deadline, and that&#x27;s why you&#x27;re seeing fewer posts from me. I&#x27;m working on some very exciting projects that I can&#x27;t wait to share with you here—the temptation is to publish them as soon as they&#x27;re finished, but I will resist. I. Will. Resist.</p>
<p>Now for the open question to the community: share your thoughts in the comments below, and don&#x27;t hesitate to respond to other people&#x27;s ideas. Who knows what new thinking tools will come out of the mix?</p>
<h1>Q: What Is Left?</h1>
<p>My question is about this famous quote from Philip K. Dick:</p>
<blockquote>
<p>Reality is that which, when you stop believing in it, doesn&#x27;t go away.</p>
<p>— Philip K. Dick, <em>I Hope I Shall Arrive Soon</em></p>
</blockquote>
<p>This is compelling. The first time I read this sentence, I thought it was very clever. If you stop believing in a tree standing right in your path and walk on, you&#x27;ll get a bruise on your forehead. Using that as proof that the tree is real sounds reasonable and pragmatic.</p>
<p>But (unsurprisingly, for long-time Aether Mug readers), things aren&#x27;t that straightforward when you look at them from a different <del>angle</del> framing.</p>
<blockquote>
<p>I realized that it is not only the physical world that differs from the particular way we see it; that all reality is perhaps equally dissimilar from what we believe ourselves to be directly perceiving, which we compose with the help of ideas that do not reveal themselves but are functioning all the same, just as trees, the sun, and the sky would not be the way we see them if they were perceived by creatures with eyes differently constituted from our own, or with organs other than eyes, which fulfilled the same purpose and conveyed equivalents of trees and sky and sun, but not visual ones.</p>
<p>— Marcel Proust, <em>In Search of Lost Time - The Guermantes Way</em></p>
</blockquote>
<p>So reality is there, but it&#x27;s different depending on all the details of how you look at it? That sounds problematic.</p>
<p>Philosophers have thought about what parts of reality are &quot;really real&quot; vs those that are &quot;kinda real&quot;. One way to draw this boundary is explained concisely by philosopher Bas C. van Fraassen:</p>
<blockquote>
<p>Wilfrid Sellars presented us with a clear dichotomy: the world as described by science, which he called the Scientific Image, and the world as it appears to us, the Manifest Image. Not that the dichotomy was so novel: Sir Arthur Eddington&#x27;s famous example of the two tables is an obvious precedent. The table we see is solid, it is mostly material even if there are some small pores and little gaps in the wood. The table science describes, however, is mostly empty space, filled with small electrically charged particles frantically whirling around in the void. So the Scientific Image is astonishingly different from how things appear to us. Yet science is meant to represent the very same world in which we live—and there is the rub.
...</p>
<p>Wilfrid Sellars argued that the two world pictures are in irreconcilable conflict, and that the infinitely superior Scientific Image must eventually displace the Manifest Image altogether.</p>
<p>— van Fraassen, <em><a href="https://philpapers.org/rec/VANTMI-2" rel="nofollow noopener noreferrer" target="_blank">The Manifest Image and the Scientific Image</a></em></p>
</blockquote>
<p>But Daniel Dennett had a different take on the difference between the scientific image and the manifest image.</p>

<blockquote>
<p>Sometimes the sweeping negative claims encompass all of the manifest image: the items in the official ontology of the scientific image really exist, but solid objects, colors, sunsets, rainbows, love, hate, dollars, home runs, lawyers, songs, words, and so on, don’t really exist. They are useful illusions, perhaps, like the user-illusion of the desktop icons. The patterns of colored pixels on the computer screen are real, but they portray entities that are as fictional as Bugs Bunny and Mickey Mouse. Similarly, some would say, the manifest image has some reality as a collection of images—the movie we live in, perhaps—but it is a mistake to think of the “things” we interact with and manipulate, and fall in love with, as reality.</p>
<p>That’s a defensible position, I suppose. In fact, it’s a <em>version</em> of what I have said about the manifest image of each species: a user-illusion brilliantly designed by evolution to fit the needs of its users. My version differs only in being willing and eager to endorse these ontologies as ways of carving up <em>reality</em>, not <em>mere</em> fictions but different versions of what actually exists: real patterns. The more shocking way of saying it—“we live in a fictional world, in a dream world, in unreality”—tends to cause confusion because it suggests we are somehow victims, duped by some evil force.</p>
<p>— Daniel Dennett, <em>From Bacteria to Bach and Back</em></p>
</blockquote>
<p>Come to think of it, the scientific image seems a bit suspicious, too. Take the scientific concept of &quot;force&quot;, introduced by Newton as a fundamental mechanism in the laws of nature. To this day, every schoolchild studies Newton&#x27;s Three Laws about how physical objects must behave in the presence or absence of forces. Nothing is more worthy of being enshrined in Sellars&#x27; scientific image than force, right?</p>
<p>But the idea of force as a fundamental &quot;thing&quot; was rendered obsolete in the 19th century when Faraday and Maxwell introduced the much more powerful concept of &quot;field&quot;. Now things actually move not because of forces—invisible arrows driving their acceleration—but because they react to the presence of invisible fields that pervade all space. <em>That&#x27;s</em> the scientific image!</p>
<p>Then general relativity and quantum field theory arrived, and we reframed the reason things move yet again: gravity is actually spacetime curvature and the other three fundamental forces are actually exchanges of virtual gauge bosons. Both forces and fields are still taught in school and used regularly by physicists, but they&#x27;re now understood to be &quot;just&quot; simplified models of those more sophisticated <em>real scientific image</em> concepts.</p>
<p>Is this the end of the process? Will we ever have to stop believing even in spacetime and bosons?</p>
<p>That was a rhetorical question. My question for you is about Philip K. Dick&#x27;s initial quote:</p>
<p>When you stop believing, what is left of reality? What <em>doesn&#x27;t</em> go away?</p>
<p><a href="https://aethermug.com/posts/ae-mug-chat-2-experience-computer-interview#comments">Comment on this post&#x27;s page</a></p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/ae-mug-chat-2-experience-computer-interview/526666ldsdl.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[AeMug Chat #1 / Aether Mug Evolves]]></title>
            <link>https://aethermug.com/posts/ae-mug-chat-1-aether-mug-evolves?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/ae-mug-chat-1-aether-mug-evolves</guid>
            <pubDate>Thu, 26 Feb 2026 02:38:16 GMT</pubDate>
            <description><![CDATA[I'll be posting less often and more in depth]]></description>
            <content:encoded><![CDATA[<p>In this post I&#x27;m trying something different. Instead of writing about <em>my</em> ideas, I&#x27;m asking <em>you</em> a question—a tough, Aether-Mug-style question!—for you to discuss with other readers in the <a href="#Comments">comments</a> section below.</p>
<p>Several people have told me that they don&#x27;t know of any other places on the internet to read and talk about these topics. I can very much relate, and it&#x27;s part of why I started AeMug in the first place.</p>
<p>It would be remiss of me not to allow you to connect with other people who are interested in <code>[whatever you want to call this mix of interests]</code>.</p>
<p>If you&#x27;re eager, you can skip the Announcements section below and jump straight to the <a href="#how-do-mental-holes-work">question</a> and comments.</p>
<h2>A Couple of Announcements</h2>
<p>Recently I asked you how often and how deeply you want me to post, and you answered.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ae-mug-chat-1-aether-mug-evolves/donut_chart.webp" alt="" title="Donut chart of reader survey results: deeper/longer posts monthly 39%, deeper/longer posts biweekly 36%, no preference 11%, short posts weekly 7%, short posts less than weekly 7%"/></figure>
<p>About 75% of respondents asked for longer/deeper posts and more than 80% asked for a <em>lower</em> posting frequency. Over a third wanted monthly posts.</p>
<p>These results blew my mind!</p>
<p>I&#x27;ve been pushing myself to keep a weekly pace as much as possible, and mostly succeeded. But that came at a cost, because I&#x27;ve been (metaphorically) &quot;living hand to mouth&quot; with my posts, writing each new one just before publishing it. This limited how much time I could spend on each post. I have a huge and growing pile of cool post ideas that I always shy away from because I couldn&#x27;t possibly finish them in a week.</p>
<p>You&#x27;re telling me that the common <a href="/posts/don-t-think-and-write-they-say">don&#x27;t think and write</a> advice doesn&#x27;t apply as strictly as I had feared. This is <em>excellent</em> news. I will take your votes as permission to relax that constraint and bite into meatier subjects.</p>

<p>Here is what I&#x27;ll do:</p>
<p>First, I will switch to publishing every two weeks.</p>
<p>Second, while I&#x27;m at it, I&#x27;ll spend the next 2-3 months building a &quot;post buffer&quot; of ready-to-publish pieces, so that in the future I&#x27;ll have the peace of mind of tackling longer, more research-heavy ideas. This means that, for a while, I&#x27;ll be spending <em>more</em> time than usual on AeMug, but you&#x27;ll see <em>less</em> content than usual. Please bear with me!</p>
<p>This is where these &quot;chat&quot; posts come in: if I see a good response (even a few brave commenters are enough!) to this post, I&#x27;ll make more in the coming months to keep the pulse going.</p>
<p>As always, I invite you to also reply to this newsletter&#x27;s emails to share your thoughts.</p>
<h1 id="how-do-mental-holes-work">Q: How Do Mental Holes Work?</h1>
<p>The Stanford Encyclopedia of Philosophy has <a href="https://plato.stanford.edu/entries/holes/" rel="nofollow noopener noreferrer" target="_blank">a whole page</a> about holes.</p>
<p>Should we consider a hole to &quot;be a thing&quot; in the same sense that street lamps and airliners are things? The philosophers are split on this.</p>
<p>My question is not a philosophical one, though. I&#x27;m interested in how the human mind <em>actually</em> treats concepts like holes, regardless of whether that treatment is philosophically sound or not. It&#x27;s a question about psychology.</p>
<p>In case you&#x27;re dying to know, here is my current take on holes: holes are &quot;things&quot;—<a href="/posts/a-black-box-view-of-life">black boxes</a>—in our <a href="/posts/a-framing-and-model-about-framings-and-models">mental framings</a> exactly like everything else. Since they&#x27;re black boxes, we usually don&#x27;t worry about their deep, internal workings, and are perfectly content using them in our mental models based on some properties we take to be intrinsic: like most physical objects, they are countable, perceivable, localized, bounded, and structured, and they come in variants (hollows, cavities, tunnels, etc.); unlike most other black boxes, they are also immaterial, parasitic, fillable, etc.</p>
<p>In other words, holes have their own laws of <a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">virtual physics</a> that we use to make predictions—e.g. &quot;stepping in that hole would be dangerous,&quot; or &quot;I&#x27;ll wait for the gaps between pops to get longer and then take the popcorn off the heat.&quot;</p>
<p>In this sense, holes are exactly like every other concept in our minds. Mental framings are maximally democratic.</p>
<p>The problem is about their boundaries. Our minds don&#x27;t seem to have any issues with holes in everyday life, but something is weird here. As the philosophers remind us, it&#x27;s not so clear what the boundary of a hole should be. If your answer is &quot;it&#x27;s the boundary of the object around it&quot; you&#x27;re talking about <em>another</em> black box, not the hole. The dough of the doughnut, not the hole. If you say that the boundary is where the space inside the hole ends, okay, but you can then remove the stuff around it and the same space will still be there. Does it become a hole within a hole?</p>
<p>To put the philosophical implication another way: you need to draw the shores in order to depict a recognizable lake.</p>
<p>Yet the mind seems to handle this strange kind of overlapping, &quot;subordinate&quot; boundary just fine. So here are my psychological questions for you:</p>
<ol>
<li>Is the mind missing something important by pretending holes have normal boundaries like everything else?</li>
<li>Is this naïveté sometimes the cause of problems in our daily lives?</li>
<li>Are there more examples, other than holes, where this strange boundary phenomenon happens?</li>
<li>Is this a case in which the &quot;boundary&quot; concept (itself a black box) breaks down and needs to be replaced with a better framing?</li>
</ol>
<p>I honestly don&#x27;t know the answers. Leave a comment, reply to others, see what emerges from the collective mind!</p>
<p><a href="https://aethermug.com/posts/ae-mug-chat-1-aether-mug-evolves#comments">Comment on this post&#x27;s page</a></p>
<p>(I will moderate the comments to keep the atmosphere constructive and pleasant.)</p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/ae-mug-chat-1-aether-mug-evolves/527898ldsdl.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[I Witnessed the Birth of a Tiny World]]></title>
            <link>https://aethermug.com/posts/i-witnessed-the-birth-of-a-tiny-world?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/i-witnessed-the-birth-of-a-tiny-world</guid>
            <pubDate>Thu, 19 Feb 2026 04:21:58 GMT</pubDate>
            <description><![CDATA[How the mind creates and improves its framings on the fly]]></description>
            <content:encoded><![CDATA[<p><em>This post is part of my <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>. I&#x27;m always accepting &quot;curious introspection&quot; links and submissions from everyone, regardless of the kind of inner experience covered.</em></p>
<p><em>Also: This blog now has comments! You can add yours <a href="#comments">at the end of the page</a>.</em></p>

<p>To think better, you need to understand how thought works. That&#x27;s easier said than done, but there are gentler ways to get started. For example, you could do worse than to figure out what the <em>building blocks</em> of thought are like, and how they emerge, evolve, and die.</p>
<p>This is what I&#x27;m doing on this blog when I write about &quot;<a href="/posts/a-framing-and-model-about-framings-and-models">framings</a>&quot; (and their little siblings, &quot;mental models&quot;). But it&#x27;s one thing to theorize about them, and another to see them in action, in real time. It&#x27;s like learning to ride a bicycle: a couple of minutes on it teach you more than a whole textbook could. If you can learn to <em>notice</em> how your mind uses framings in your everyday life, it will be hard for you to un-notice it. But everyday life is busy, multitasking, and chaotic. It&#x27;s hard to stop and take an hour or two in an isolated, contemplative state to observe what your mind is doing.</p>
<p>So, earlier this week, I gallantly did that for you.</p>
<p>This wasn&#x27;t exactly planned. I simply happened to find myself in the right conditions to see my thoughts in action, with no distractions and plenty of time to mull over it all.</p>
<p>As some of you know, I have <a href="/posts/aphantasia">aphantasia</a>—no mental images, no &quot;picturing things&quot; in my head—and I&#x27;ve been offering my brain to neuroscientists for fMRI experiments, hoping to help shed light on this unusual way to think. When I wrote <a href="/posts/boxed">Boxed</a>, I had been in the MRI machine for a total of about 30 hours. The tally is now well over twice that. While inside that big and noisy box, I&#x27;ve done all sorts of tasks: watching thousands of random images pass in front of my eyes, solving memory puzzles, listening to stories being read, watching movies, and so on.</p>
<p>This week I had another 3-hour session, and I realized, midway through, that while doing the actual experiment I could also learn something about what my mind does while working. A meta-experiment, if you will.</p>
<p>What I found turned out to be simple, but still quite illuminating. That&#x27;s the topic of this short post.</p>
<h2>The Setup</h2>
<p>The experiment was as follows.</p>
<ol>
<li>While lying in the MRI, I&#x27;m shown an image made of triangles arranged in a specific way.</li>
<li>After a few seconds of looking at it, the image disappears and I stare at a blank screen for about 8 seconds.</li>
<li>Then I hear a beep, which means it&#x27;s time for me to &quot;picture&quot; the same image again with my mind&#x27;s eye. During this time, the screen remains blank, and I&#x27;m supposed to mentally project the image onto it as accurately as possible: the size, the arrangement and orientation of the triangles, whether they were black or white, etc.</li>
<li>Then the next image comes, and everything is repeated.</li>
<li>Many.</li>
<li>Times.</li>
</ol>
<p>In short, a cycle of &quot;see → wait → imagine&quot;.</p>
<p>Now, having aphantasia means that I never &quot;see&quot; anything at all in my mind, which is precisely why I&#x27;m doing this: to see what goes on in my brain while I try.</p>
<p>But I also have <a href="/posts/i-do-not-remember-my-life-and-it-s-fine">abysmal memory</a> (I think this is connected to aphantasia, but I&#x27;m not sure). Unless I use the special tricks I&#x27;m about to explain, I tend to forget the last image in those eight or so seconds between the moment it disappears from the screen and the moment the beep tells me to imagine it again. I go, &quot;was it a big black triangle, or four small white ones?&quot; (It is a tad embarrassing.)</p>
<p>This poor memory is nothing new to me, so during tasks like this I do <a href="https://www.sciencedirect.com/science/article/pii/S0732118X25000716" rel="nofollow noopener noreferrer" target="_blank">what all aphantasics do</a>: I unconsciously use alternative strategies to make up for it. Instead of toiling to conjure images that won&#x27;t come, I talk to myself. In the case of this experiment, I convert the image into words while I&#x27;m still looking at it, then I repeat those words to myself a few times until it&#x27;s time to try (and fail) to mentally replicate the image. (This &quot;talking to myself&quot; doesn&#x27;t involve any &quot;mental sounds&quot;, but that&#x27;s a story for another day.)</p>
<p>This week&#x27;s experiment was mind-numbing and not exactly exciting. But it proved to be the perfect setup to witness a simple framing popping into existence.</p>
<h2>The Birth of a Framing</h2>
<p>Take this image:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/i-witnessed-the-birth-of-a-tiny-world/bowties-diamonds_1.webp" alt="" title="Simple arrangement of four black equilateral triangles. Two triangles are aligned vertically with respect to each other, each with an angle pointing at the other; these are not at the exact center of the frame, but slightly shifted to the right of and above of the center; the other two triangles form the same shape but are shifted to the right of and below the center."/></figure>
<p>The first time I saw this, I described it to myself as &quot;two pairs of triangles, each pair pointing at each other&quot;.</p>
<p>Then this other one came up,</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/i-witnessed-the-birth-of-a-tiny-world/bowties-diamonds_2.webp" alt="" title="The exact same shape as the previous one, but rotated by 90 degrees."/></figure>
<p>and I realized I had to make my descriptions more complex to distinguish the two images. I expanded it to &quot;...each pair pointing at each other <em>vertically/horizontally</em>&quot;, depending on the case.</p>
<p>But this wasn&#x27;t great, because those rather long and complicated descriptions are cumbersome and easy to fumble. If I get distracted for a moment (especially likely after a couple of hours in the MRI), I might mis-repeat a detail and end up playing a solo telephone game, possibly screwing up the experiment.</p>
<p>So my descriptions evolved. After a while, I realized I had simplified the description of the first image to &quot;two hourglasses&quot;, and the second image to &quot;two bowties&quot;. These analogies are shorter, easier, and more evocative. I was able to use them because there were no other arrangements that looked anything like those two objects. But the important thing is that these neat naming ideas came to me instinctively. It seems to be a built-in capacity of the brain.</p>
<figure><img src="https://aethermug.com/assets/posts/i-witnessed-the-birth-of-a-tiny-world/bowties-diamonds_3.webp" alt="Two more arrangements of four triangles, again one a simple 90-degree rotation of the other. In general, the arrangement is similar to the previous two images, but instead of having angles pointing at each other, each pair has edges facing each other, in parallel."/><figcaption>I made similar mental descriptions for all other shapes, like these. At first, I called these &#x27;two pairs of triangles pointing away from each other vertically/horizontally&#x27;, but they evolved into &#x27;standing/sideways diamonds&#x27; after a while.</figcaption></figure>
<p>After a few repetitions, my descriptions further simplified into just &quot;hourglasses&quot; and &quot;bowties&quot; (plural), because I realized that they always came in pairs anyway, so there was no need to specify the number &quot;two&quot; every time. By the end of the experiment I was just repeating to myself &quot;black bowties,&quot; &quot;white diamonds,&quot; &quot;white hourglasses&quot;...</p>
<p>In other words, my mental framing for this task changed as follows:</p>
<table><thead><tr><th>Phase</th><th>Black Boxes</th><th>Virtual Physics</th></tr></thead><tbody><tr><td>1: <strong>Images are novel</strong></td><td>&quot;triangle&quot;</td><td>Can appear in any number and size, each oriented in any direction and at any distance from the others</td></tr><tr><td>2: <strong>Some patterns emerge</strong></td><td>&quot;hourglass&quot;, &quot;bowtie&quot;, &quot;diamond&quot;, ...</td><td>Each can appear in small numbers, with fixed orientation</td></tr><tr><td>3: <strong>A few fixed patterns confirmed</strong></td><td>&quot;hourglass<u>es</u>&quot;, &quot;bowtie<u>s</u>&quot;, &quot;diamond<u>s</u>&quot;, ...</td><td>Each always appears only as a pair with fixed orientation, never mixing</td></tr></tbody></table>
<p>This is as simple an example of a mental framing as they come.</p>
<p>In these images, the &quot;moving parts&quot; that make up the framing—I call these <a href="/posts/a-black-box-view-of-life">black boxes</a> but a psychologist might simply call them <a href="https://iep.utm.edu/theory-theory-of-concepts/" rel="nofollow noopener noreferrer" target="_blank">concepts</a>—are the ontology I chose: the things that I assume to <em>exist</em> in the context of the task.</p>
<p>Also part of the framing are the ways those moving parts can arrange and orient themselves with respect to each other (the <a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">Virtual Physics</a>). This is all that&#x27;s needed to define a &quot;tiny world&quot;, a restricted reality in which mental models can be created to make predictions and simulate events. That tiny world is the framing.</p>
<p>Like everyone else, I go through this kind of process constantly while awake, but this is the first time I had the chance to analyze it so closely. Isn&#x27;t it strange? How does this work? I think the answer has to do with compression and meaning-making.</p>
<h2>Compression</h2>
<p>As you can see in the table above, my framing gradually acquired more black boxes (from just &quot;triangle&quot; to several names), and at the same time its Virtual Physics became simpler—from &quot;triangles can do basically anything&quot; to &quot;each composite object always appears in the same way in an image&quot;.</p>
<p>The tiny world I created in my mind to remember what I had seen grew in <em>variety</em>, but its Virtual Physics shrank. By coming up with <a href="/posts/a-fundamental-framing-of-human-language">names</a> for those objects, I was drawing new boundaries to streamline my mental models.</p>
<p>When I started writing this post, I hadn&#x27;t realized that this is what happens, but it makes sense. Perhaps it is easier for our brains to store lots of pre-built archetypes for things—e.g., people, objects, ideas—each known to do only a few things reliably, than it is to track undifferentiated &quot;particles&quot; that can mix and interact in an infinity of ways.</p>
<p>When you turn a large file into a smaller zip archive, the operation your computer does is called <em>compression</em>. In rough terms, it works by finding predictable patterns in the file&#x27;s binary sequence, like big swaths of repeating zeros or ones, and replacing them with a <em>description</em> of that pattern: compression algorithms essentially do a (much) more clever version of replacing &quot;000000000&quot; (a sequence of nine zeros) with &quot;9x0&quot; (a shorter way to convey the same thing).</p>
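<p>The run-length idea above can be sketched in a few lines of Python. (This is a toy illustration only; real compression algorithms like DEFLATE find far subtler patterns than repeated characters.)</p>
<pre><code>def rle_compress(s: str) -&gt; str:
    """Toy run-length encoding: '000000000' -&gt; '9x0'."""
    out = []
    i = 0
    while i &lt; len(s):
        j = i
        # Extend j to the end of the current run of identical characters.
        while j &lt; len(s) and s[j] == s[i]:
            j += 1
        out.append(f"{j - i}x{s[i]}")
        i = j
    return "".join(out)

print(rle_compress("000000000"))  # -&gt; 9x0
print(rle_compress("0001111"))    # -&gt; 3x04x1
</code></pre>
<p>Note that a lone character gets <em>longer</em> (&quot;0&quot; becomes &quot;1x0&quot;), which is why real compressors only rewrite a pattern when doing so actually saves space.</p>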

<p>The idea that data can be compressed is central to computer science, but it extends well beyond machines and email attachments. In particular, it is well known that <a href="https://fchampalimaud.org/news/brain-uses-data-compression-decision-making" rel="nofollow noopener noreferrer" target="_blank">the brain uses compression</a> to cope with the enormous amount of data that flows in all the time from the senses, as well as to manage internal processes like decision making. But that&#x27;s for the low-level neural encoding of raw sensory information. What about higher-level &quot;thoughts&quot;?</p>
<p>Here the science is still fuzzy, but <a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008598" rel="nofollow noopener noreferrer" target="_blank">there are</a> <a href="https://elifesciences.org/articles/84376" rel="nofollow noopener noreferrer" target="_blank">indications</a> that some form of compression happens even in the &quot;language of thought&quot;, closer to consciousness. It seems like the hourglasses-bowties framing I developed in the MRI is another case of the same process: as I noticed more patterns on the screen, I progressively compressed the information into reusable components, and managed to express in a single word what had initially taken many words to describe, and as a result succeeded in memorizing those funky arrangements.</p>
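<p>The description-shortening I went through in the scanner can be mimicked with a simple lookup table. (The phrases and labels below are taken from this post; the dictionary and function are my illustrative sketch, not a claim about how the brain actually stores them.)</p>
<pre><code># Toy model of the naming step: recurring long descriptions collapse
# into single evocative labels, and anything without a label stays verbose.
lexicon = {
    "two pairs of triangles, each pair pointing at each other vertically": "hourglasses",
    "two pairs of triangles, each pair pointing at each other horizontally": "bowties",
    "two pairs of triangles pointing away from each other vertically": "standing diamonds",
}

def name_for(description: str) -&gt; str:
    """Return the short label if one exists, else the full description."""
    return lexicon.get(description, description)

long_form = "two pairs of triangles, each pair pointing at each other vertically"
print(f"{len(long_form)} chars -&gt; {len(name_for(long_form))} chars")
</code></pre>
<p>Each entry in the lexicon is a small act of compression: a phrase that had to be rehearsed word by word becomes a single reusable token.</p>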
<h2>Meaning</h2>
<p>The compression approach seems to have worked well enough for me, but how did I do it? What criteria allowed me to converge on those few black boxes for my framing? An even better way to ask that question is this: <em>what guides the drawing of meaningful boundaries?</em></p>
<p>I answered this in <a href="https://planktonvalhalla.com/20251219-carving-nature-at-your-joints/" rel="nofollow noopener noreferrer" target="_blank">the final episode of my Purpose series</a> on Plankton Valhalla: I could only do it because I had a goal in mind—to correctly remember and distinguish those arrangements for the sake of the experiment.</p>
<p>That goal naturally led to a distinction between what is meaningful and what isn&#x27;t. By looking at many images, I (mostly unconsciously) noticed that there were some <em>differences that made the difference</em> for my goal, and chose new boundaries and names to capture those.</p>
<p>Most differences did <em>not</em> make a difference for my goal, though: potential orientations and positions of the triangles that never occurred, the exact spacing between triangles, etc. These were meaningless in that context, so the new names I chose didn&#x27;t capture them. I didn&#x27;t need to create categories for &quot;hourglasses with 3-millimeter necks&quot; or &quot;diamond triplets&quot;, because those differences, although present or possible in theory, never mattered or occurred in the dataset. I locked those irrelevant details up in my newly named black boxes, and allowed myself to disregard them.</p>
<p>This was me making meaning, guided by a clear goal.</p>
<p>Now think about the things in your life that involve novelty: things like reading a book, meeting new people, having a conversation, learning to paint. In all these activities, you are presented with new sensory information that may initially overwhelm you. There are too many moving parts, too many details that may or may not be relevant, and it seems like anything could happen. You are in the &quot;it&#x27;s all triangles&quot; phase.</p>
<p>Then, as you spend more time doing whatever you&#x27;re doing, your compression system kicks in: you find patterns, recurrences, fixed forms, and begin treating them as special. You may give them new names, or find analogies with things you&#x27;re already familiar with, or simply treat them implicitly as &quot;things&quot; of their own—you don&#x27;t need a special name for &quot;the curious way white paint mixes with cobalt&quot; for that pattern to start &quot;being a thing&quot; in your mind.</p>
<p>All of this is always a function of the goal you have at the time. Different people, with different goals, will do the same things but form framings different from yours. That&#x27;s how you end up talking past each other, or puzzling at the strange ways some people seem to think. You rarely realize that you&#x27;re each living in a different tiny world. ●</p>
<p><a href="https://aethermug.com/posts/i-witnessed-the-birth-of-a-tiny-world#comments">Comment on this post&#x27;s page</a></p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/i-witnessed-the-birth-of-a-tiny-world/kaitlin-duffey-BrI5b8rcY3w-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[A List of Introspective Descriptions]]></title>
            <link>https://aethermug.com/posts/a-list-of-introspective-descriptions?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-list-of-introspective-descriptions</guid>
            <pubDate>Thu, 26 Sep 2024 01:47:29 GMT</pubDate>
            <description><![CDATA[Please contribute your own!]]></description>
            <content:encoded><![CDATA[<p><em>This is a living blog post. I will keep adding links to the list as I write new introspective descriptions or find good ones by others.</em></p>

<blockquote>
<p><strong>TL;DR</strong></p>
<p>The way you experience reality and life inside your head differs more than you would think from the way the next person experiences it. We should think and talk more about this very important truth.</p>
<p><strong>At the bottom of this page is a list of deep introspective writing curated to uncover just such differences.</strong></p>
<p><a href="#list">Jump directly to the list</a>.</p>
</blockquote>
<h2>Maybe We&#x27;re All Naked and We Know It</h2>
<p>I had to live more than thirty-five years and go through an existential epiphany about my own brain before I learned to do any kind of introspection. Prior to learning about my aphantasia (the total lack of mental images), I never even considered the <em>quality</em> of my consciousness. The idea that consciousness could vary substantially from person to person simply didn&#x27;t cross my mind. The way I felt seemed obvious to the point of transparency.</p>
<p>But my experience is, of course, different from others in many ways. Everyone else, too, seems to be unique in their combination of inner experiences. I&#x27;m talking not only about well-known spectra like autism and ADHD, but also traits like synesthesia (the mixing of senses), savant tendencies, different abilities with and approaches to words and numbers, and many other more-or-less-studied variations.</p>
<p>The sense I get from talking about this stuff with many people is that we&#x27;ve only scratched the surface of how exquisitely alien the world would feel if only one could engage with it out of a different skull. Even science is far from uncovering it all.</p>
<p>Why did it take me so long to realize this, and why is everyone so surprised (sometimes outright suspicious) when they first hear about people without mental imagery, or episodic memories, or an inner monologue?</p>
<p>Perhaps we&#x27;re in a special case of pluralistic ignorance, also known as the <a href="https://andersen.sdu.dk/vaerk/hersholt/TheEmperorsNewClothes_e.html" rel="nofollow noopener noreferrer" target="_blank">Emperor&#x27;s New Clothes</a> effect. Even if you sense that a certain inner experience of yours—say, the way the color red contrasts with green—is different from other people, if no one around you ever mentions the possibility of such difference, your social instincts will nudge you to assume that it doesn&#x27;t really exist. So you will not talk about it, for fear of sounding absurd or mad, and in doing so you will perpetuate the effect for everyone else, too.</p>
<p>It&#x27;s not a coincidence, I think, that even color-blindness, something easy to detect with ultra-low-tech tests, was never mentioned in writing until 1798, when Dalton wrote <em>Extraordinary Facts relating to the Vision of Colours</em>. Compare that to differences in more subtle phenomena like imagination and dreams, and it&#x27;s easy to see how it could still take centuries before we manage to verbalize the whole kaleidoscope of human experience.</p>
<p>If we really are in a group illusion of this scale, then it&#x27;s a very important topic to study and shed light on. Who knows how many conflicts, big and small, could be averted if we learned, as a species, to account for these unconscious differences in worldview? How many endless debates and misunderstandings and ideological fractures would disappear if we all knew to adjust and clarify our assumptions based on who we&#x27;re talking to?</p>
<p>The only way to expose the emperor&#x27;s farcical parade is to shamelessly declare his nakedness to the masses. We need more people writing about what goes on inside their heads. That&#x27;s especially hard for introspection about mental processes, because <a href="/posts/a-fundamental-framing-of-human-language">we don&#x27;t even have a vocabulary</a> yet to express that kind of variation. Nevertheless, the conversations I&#x27;ve joined about aphantasia and other neurodiverse traits give me optimism. The vocabulary emerges naturally as more and more people do this kind of exercise.</p>
<p>These are not inherently indescribable topics: they&#x27;re simply under-described.</p>
<h2>The (Growing) List of Introspective Descriptions</h2>
<p>Below I&#x27;ll curate, on an ongoing basis, a collection of all my attempts at describing my own inner experience, as well as links to other people&#x27;s great introspective statements. (It has very few entries now, but stay tuned for more in the next months.)</p>
<p>The expectation is that each of these descriptions will provoke at least one of three reactions in each reader: some will find them unbelievably strange, others will find the content banal, and still others will feel as if they have found a sibling they never knew existed. Any of these reactions is useful, because it&#x27;s the differences and similarities that we need to map.</p>
<p>Please share your own experience! If I get enough pertinent testimonies, I will share them (with due permission and anonymization) on this blog. I&#x27;m also accepting links to other notable depictions, to be added to the list.</p>
<p>You can write me by replying to any of Aether Mug&#x27;s emails, or by writing to: introspection@ + this website&#x27;s domain.</p>

<h1 id="list">The List</h1>
<h3>Written by Me</h3>
<ul>
<li><a href="/posts/aphantasia">Aphantasia</a>: brief description of what it&#x27;s like to imagine without pictures.</li>
<li><a href="/posts/aphantasia-is-no-creativity-killer">Aphantasia Is No Creativity-Killer</a>: on self-perception of creativity with aphantasia.</li>
<li><a href="https://nautil.us/my-brain-doesnt-picture-things-407444/" rel="nofollow noopener noreferrer" target="_blank">My Brain Doesn&#x27;t Picture Things (Nautilus Magazine)</a>: more about aphantasia and participating in MRI experiments, as well as the related science.</li>
<li><a href="/posts/boxed">Boxed</a>: things I learned after lying in an MRI machine for 30 hours.</li>
<li><a href="/posts/reading-blood-meridian-with-aphantasia">Reading Blood Meridian with Aphantasia</a>: what reading fiction is like without mental imagery.</li>
<li><a href="/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes">An Aphantasic&#x27;s Observations on the Imagination of Shapes</a>: what happens when I try really hard to picture a shape?</li>
<li><a href="/posts/i-do-not-remember-my-life-and-it-s-fine">I Do Not Remember My Life and It&#x27;s Fine</a>: what it&#x27;s like to have a poor ability to recall past episodes.</li>
<li><a href="/posts/i-witnessed-the-birth-of-a-tiny-world">I Witnessed the Birth of a Tiny World</a>: witnessing a mental framing emerge in real-time during an MRI experiment.</li>
</ul>
<h3>Written by Others</h3>
<ul>
<li><a href="/posts/my-head-as-a-lake">My Head as a Lake</a>: reflection from someone who noticed as a child that her imagination worked differently.</li>
<li><a href="https://youtu.be/Cj4y0EUlU-Y?t=135" rel="nofollow noopener noreferrer" target="_blank">Feynman&#x27;s interview</a>: about the differences in how people count mentally.</li>
<li>The chapter titled &quot;Persistence of vision: a journal&quot; in Oliver Sacks&#x27; book <a href="https://www.oliversacks.com/oliver-sacks-books/the-minds-eye/" rel="nofollow noopener noreferrer" target="_blank">The Mind&#x27;s Eye</a> describes his experience with a melanoma in his eye, and the strange visual effects that followed his operation.</li>
<li><a href="https://www.youtube.com/watch?v=qd1LG_2Fthk" rel="nofollow noopener noreferrer" target="_blank">Consciousness, Qualia and Internal Monologues</a>: video essay about consciousness in general, but Roper describes his subjective experience roughly between 5:26 and 12:00.</li>
<li><a href="https://youtu.be/7Fa3Ydtng3o?feature=shared" rel="nofollow noopener noreferrer" target="_blank">Is Everyone Conscious in the Same Way?</a>: philosophical video essay, but Roper describes his inner imagery and lack of an inner monologue in the first 19 minutes.</li>
<li><a href="https://hurlburt.faculty.unlv.edu/codebook.html" rel="nofollow noopener noreferrer" target="_blank">Codebook for Descriptive Experience Sampling nomothetic uses</a>: comprehensive academic description of the various modalities of inner speech.</li>
<li><a href="https://www.reddit.com/r/Aphantasia/comments/1f38bb3/an_animation_showing_what_a_thought_looks_like/" rel="nofollow noopener noreferrer" target="_blank">An animation showing what a thought looks like for someone without Aphantasia</a> and <a href="https://www.reddit.com/r/Aphantasia/comments/1f642e4/example_2_of_visualisationimagining_an_apple/" rel="nofollow noopener noreferrer" target="_blank">followup post</a>: someone who is able to visualize, trying to convey what it&#x27;s like to those who aren&#x27;t.</li>
<li>Many interesting descriptions scattered in the comments on subreddits like <a href="https://www.reddit.com/r/Aphantasia/" rel="nofollow noopener noreferrer" target="_blank">r/Aphantasia</a> and <a href="https://www.reddit.com/r/Synesthesia/" rel="nofollow noopener noreferrer" target="_blank">r/Synesthesia</a>. Too many to list, but <a href="https://www.reddit.com/r/Aphantasia/comments/1fp6dor/question_from_person_without_aphantasia/" rel="nofollow noopener noreferrer" target="_blank">here is an example</a>, and <a href="https://www.reddit.com/r/Synesthesia/comments/1f627y0/do_you_see_nonexistant_colors/" rel="nofollow noopener noreferrer" target="_blank">another one</a>.</li>
<li><a href="https://www.goodreads.com/book/show/74812.Born_on_a_Blue_Day" rel="nofollow noopener noreferrer" target="_blank">Born on a Blue Day</a>, by Daniel Tammet: excellent description of what life is like as an autistic savant with Asperger&#x27;s.</li>
<li><a href="https://www.goodreads.com/book/show/103408.Thinking_in_Pictures" rel="nofollow noopener noreferrer" target="_blank">Thinking in Pictures</a>: another deep demonstration of what it&#x27;s like to experience the world as an autistic person.</li>
<li>(Video) <a href="https://www.youtube.com/watch?v=_pZDgL6QMN0" rel="nofollow noopener noreferrer" target="_blank">How Different Are Our Inner Monologues?</a></li>
</ul>
]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-list-of-introspective-descriptions/madonna.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Culture Is the Mass-Synchronization of Framings]]></title>
            <link>https://aethermug.com/posts/culture-is-the-mass-synchronization-of-framings?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/culture-is-the-mass-synchronization-of-framings</guid>
            <pubDate>Thu, 12 Feb 2026 04:26:37 GMT</pubDate>
            <description><![CDATA[What exists is a matter of public opinion]]></description>
            <content:encoded><![CDATA[<h2>1</h2>
<p>If you descend onto the Marunouchi Line platform in Ikebukuro Station on any weekday morning, you will witness an unusual train-boarding ritual. Like in any other Japanese station, people wait in line at the two sides of where each train&#x27;s door will open. This is called <em>seiretsu jousha</em> (整列乗車, orderly boarding), and is a universal standard in Japan. <em>Unlike</em> most other stations in Tokyo, though, on Ikebukuro&#x27;s Marunouchi platform people will form not one but <em>two</em> queues on each side. One of these queues, the one closest to the doors, is the <em>senpatsu</em> (先発, earlier departure) line, and will board the next train that comes; the other, shorter, queue, is called <em>kouhatsu</em> (後発, later departure) and is waiting to take the place of the <em>senpatsu</em> line: they&#x27;ll skip the next train, and board the one after that instead.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/20260211_ikebukuro_lines_1.webp" alt="" title="Diagram of the double-queue system at Ikebukuro Station: four queues flank a train door, with the inner teal queues labeled &#x27;next train queue&#x27; (with comic balloons saying &#x27;I&#x27;m in a rush!&#x27;) and the outer pink queues labeled &#x27;next next train queue&#x27; (with balloons saying &#x27;I want to sit!&#x27;)."/></figure>
<p>When the new train arrives, first everyone waits for the passengers to get off (the &quot;orderly boarding&quot; common sense), then the people in the <em>senpatsu</em> queue all get in, and finally the people in the <em>kouhatsu</em> queue shift laterally on the platform to become the new <em>senpatsu</em>. A new <em>kouhatsu</em> queue immediately starts forming in its now-vacated place.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/20260211_ikebukuro_lines_2.webp" alt="① People get off. ② Senpatsu line gets on. ③ Kouhatsu line becomes the new senpatsu line." title="Diagram of the boarding sequence: passengers exit the train first, then the &#x27;next train queue&#x27; boards, and finally the &#x27;next next train queue&#x27; shifts sideways to take their place"/><figcaption>① People get off. ② Senpatsu line gets on. ③ Kouhatsu line becomes the new senpatsu line.</figcaption></figure>
<p>This is a rather strange way to do things. Why not simply form one kind of line, and use the age-old first-come-first-served approach? Why would anyone ever choose <em>not</em> to try boarding the next train? And why is this procedure used in the Marunouchi Line in Ikebukuro and not in the many other lines in the same station, or (for that matter) on most other lines and stations in Tokyo?</p>
<p>The key to it all is the observation that Ikebukuro Station is a terminal of the Marunouchi Line, so all trains always start empty on that platform. This double-queueing ritual gives passengers a tradeoff that would not be available in most other cases: speed vs comfort.</p>
<p>If you&#x27;re in a hurry, you can directly join the <em>senpatsu</em> crowd and be (almost) guaranteed a spot on the very next train, but forget about sitting down—be ready to stand squeezed like a sardine. If, on the other hand, you have plenty of time, you may decide to get in the shorter <em>kouhatsu</em> queue—which will become the front of the <em>senpatsu</em> once the next train leaves—and you&#x27;ll be (almost) guaranteed a comfy seat in your long commute.</p>
<p>For an Italian like me, this whole process is nothing short of a miracle. I grew up in a city where metro train boarding during rush hour feels like a prelude to the apocalypse.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/20260211_ikebukuro_lines_3.webp" alt="① Go for it!!!" title="Humorous illustration of chaotic Italian-style train boarding: a disorderly mass of colorful figures all pushing toward the door at once, with speech bubbles shouting &#x27;Let me get off!&#x27;, &#x27;Vaffanculo!&#x27;, and &#x27;Guys don&#x27;t push, for Christ&#x27;s sake!&#x27;"/><figcaption>① Go for it!!!</figcaption></figure>
<p>Many Italians can come up with the idea of waiting for passengers to get off before boarding themselves, but most crowds there lack the restraint to apply it with any kind of regularity. When it comes to the strategy of directly aiming for the <em>next next train</em>, though, I wonder if it has ever even occurred to anyone south of the Alps.</p>
<p>The miraculous thing about the Japanese method is that there is no authoritative &quot;director&quot; standing next to each door and yelling at people where to stand. There are &quot;<em>senpatsu</em>&quot; and &quot;<em>kouhatsu</em>&quot; signs on the ground, but no detailed instructions or explanations. I doubt it is taught at school or anywhere else, either. People just seem to know, and to naturally implement the whole process without exchanging so much as a word with each other.</p>
<p>Are these people human?</p>
<p>Live in Japan as a foreigner for a while, and you&#x27;ll see miracles of this kind everywhere. No one steals, even when people leave their purses and smartphones and wallets unattended in plain sight for half an hour at a time; no one litters; no one disturbs fellow train passengers by talking loudly or making phone calls; and people are extremely polite and go out of their way to help you if you ask. In Japan, you will only witness restraint and patience, even in the face of rudeness and selfishness from strangers. What kind of DNA compels them to behave in such a coordinated and <em>collectively useful</em> manner?</p>
<p>Of course, I know that there is nothing innate in the miraculous &quot;Japanese Way&quot; because expats living here quickly adapt to the same behaviors.</p>
<p>It&#x27;s not just the ethnic Japanese that correctly follow the <em>senpatsu</em>/<em>kouhatsu</em> queueing system, for instance. All the long-time expats I know in Japan are—at least in public—just as polite, restrained, and rule-following as the average Japanese, regardless of their nationality. I wrote that no one steals unattended wallets in Japan, not that <em>no native Japanese</em> steals.</p>
<p>In fact, the sure-fire way to spot a tourist in Tokyo is not by their appearance or the language they speak, but by how loudly they talk in public, or how they stand in spots that inconvenience other people. They&#x27;re not trying to disrupt, they simply haven&#x27;t had enough time to assimilate the local behavioral patterns.</p>
<p>Those miraculous scenes have nothing to do with the Japanese DNA: it&#x27;s their culture. And culture is, by and large, random, arbitrary, and self-reinforcing.</p>
<h2>2</h2>
<p>You can go and look at the history of Japan, their institutions past and present, their religious philosophies and military values, and you can point to many things that seem to &quot;explain&quot; why today&#x27;s Japanese are polite, orderly, and ultra-civilized. This is a mistake, though, because all it does is kick the can a little farther down the road. Why were those institutions and philosophies like that? Why did the first samurai become so honorable?</p>
<p>Simply going farther back in history only repeats the mistake. You won&#x27;t find a final answer, because the answer is not at the beginning, it&#x27;s in the ongoing process itself: chance and contingency. People behave the way they do <em>because</em>, period.</p>
<p>If that seems implausible to you, think about simpler cases you might witness anywhere in the world. When a corridor is being traversed by crowds of people moving in both directions, two or more lanes will form spontaneously: the first two people trying to avoid each other&#x27;s path will randomly dodge either left or right; the people behind them will find it more convenient to follow the path of those walking ahead, and very quickly everyone is walking in a line on &quot;their side&quot;. Whether those going northward walk on the left and those going southward on the right, or the other way around, doesn&#x27;t matter, and no one really cares. It&#x27;s just arbitrarily become the easiest thing to do, and it stays that way as long as there are enough people in both directions.</p>
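<p>The lane-formation feedback loop can be sketched as a toy simulation (my illustration, with made-up function and parameter names, not something from the studies mentioned here): each walker usually copies whichever side the majority ahead of them chose, and the crowd locks arbitrarily into one side.</p>

```python
import random

def simulate_lane_convention(n_walkers=200, follow_prob=0.95, seed=42):
    """Toy model of a self-reinforcing walking convention.

    Each walker copies the majority side chosen so far with probability
    follow_prob, otherwise picks a side at random. Which side wins is
    arbitrary; that one side wins is almost inevitable.
    """
    rng = random.Random(seed)
    sides = []
    for _ in range(n_walkers):
        if sides and rng.random() < follow_prob:
            # Follow the crowd: copy the side most walkers have taken.
            sides.append(max(set(sides), key=sides.count))
        else:
            # The first walker (or an occasional maverick) picks at random.
            sides.append(rng.choice(["left", "right"]))
    return sides

sides = simulate_lane_convention()
last = sides[-50:]
dominant = max(set(last), key=last.count)
print(f"Dominant side among the last 50 walkers: {dominant} "
      f"({last.count(dominant)}/50)")
```

<p>Run it with different seeds and &quot;left&quot; wins about as often as &quot;right&quot;: the outcome is pure contingency, but the lock-in is robust, which is the whole point.</p>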
<p>Sometimes there <em>is</em> a good initial reason behind a cultural standard, but that reason becomes irrelevant later on. The QWERTY layout of English keyboards started as a clever design for typewriters—it helped minimize jamming of the mechanisms—but is now completely meaningless and even sub-optimal for modern digital keyboards.</p>
<p>If these things are simply cultural and arbitrary, why can&#x27;t people change them, then? Well, have you tried changing the rhythm of mass applause by clapping in a specific way? Or typing on a Dvorak keyboard?</p>
<figure><img src="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/KB_United_States_Dvorak.webp" alt="Diagram of a Dvorak keyboard layout, showing the alternative key arrangement designed for typing efficiency"/><figcaption>E... F... F... I... C... I... E... N... C... Y... !</figcaption></figure>
<p>Once a self-sustaining feedback loop has started, <em>how</em> it started ceases to mean anything. <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">Mindless forces emerge</a> that suck you in a specific direction.</p>
<h2>3</h2>
<p>So far, it sounds like what gets &quot;synchronized&quot; between people living in the same culture is their behavior and habits. This is true, but I don&#x27;t believe it&#x27;s the whole, or even the main, story. What I&#x27;m talking about is not a unification of actions but of the thinking patterns from which those actions arise. Culture is the mass-synchronization of framings.</p>
<p>A mental model is a simulation of &quot;<em>how things might unfold</em>&quot;, and we all build and rebuild hundreds of mental models every day. A framing, on the other hand, is &quot;<em>what things exist in the first place</em>&quot;, and it is much more stable and subtle. Every mental model is based on <em>some</em> framing, but we tend to be oblivious to which framing we&#x27;re using most of the time (I&#x27;ve explained all this better in <a href="/posts/a-framing-and-model-about-framings-and-models">A Framing and Model About Framings and Models</a>).</p>
<p>Framings are the basis of how we think and what we are even able to perceive, and they&#x27;re the most consequential thing that spreads through a population in what we call &quot;culture&quot;.</p>
<p>You&#x27;re forced to learn this (at least in the abstract) when you begin noticing some apparent contradictions in the collective behavior of Japanese crowds. Non-residents tend to think that the core tenet of Japanese culture is to &quot;obey the rules&quot; or &quot;do things properly&quot;, but that is absolutely not the case. How do you explain the fact that some rules are <em>ignored</em> by literally everyone here?</p>
<ul>
<li>People in Japan never follow the written rule to switch off their phones in the &quot;priority seats&quot; area of each train carriage (it&#x27;s a precaution for passengers with pacemakers).</li>
<li>People <em>always</em> actively climb escalators, despite incessant written and vocal requests that people stand still for safety reasons.</li>
<li>Flows of people in corridors very often form lanes that go opposite those indicated by the signs on the floor.</li>
</ul>
<figure><img src="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/parking_annotated.webp" alt="Photograph of a Japanese sidewalk with several bicycles parked right next to red cones bearing &#x27;no bicycle parking&#x27; signs, with yellow annotations highlighting the ignored signs"/><figcaption>I only had to walk 30 seconds from the cafe I wrote this post in to take this picture. People don&#x27;t mind parking all around the very explicit &quot;no bicycle parking&quot; signs.</figcaption></figure>
<p>The list goes on. The more you pay attention, the more collective infractions you&#x27;ll notice.</p>
<p>Sure, these are mostly small transgressions of little consequence, and they are not enforced in any strong way. But if following the rules were a core value of Japanese culture, why would that matter?</p>
<p>The <em>real</em> core value of Japanese culture (or one of them) is something like &quot;never stand out or make a fuss&quot;. Nowhere in that principle is a strict requirement to follow the rules. In fact, it&#x27;s perfectly fine, in Japan, to break the rules <em>as long as that&#x27;s what everyone does and expects you to do</em>. In terms of framings, the Japanese culture has acquired—by arbitrary and unimportant means—a definition of the concept of (or a &quot;<a href="/posts/a-black-box-view-of-life">black box</a>&quot; for) &quot;standing out&quot; that differs from its equivalent in many other cultures: instead of being generally neutral, it is seen as intrinsically unpleasant and embarrassing.</p>
<p>The behavior that stems from employing this ontological &quot;thing&quot; (this particular flavor of &quot;standing out&quot;) in your mental models is what you see manifested on the train platforms, on the escalators, etc.</p>
<p>The Italian culture has the concept of <em>simpatia</em>, which translates awkwardly to English as &quot;being a mix of likeable and/or charming and fun to be around&quot; and doesn&#x27;t even exist in Japan. I do believe that having this compact and convenient idea of <em>simpatia</em> makes Italians more conscious of the importance of being <em>simpatico</em>, and leads them to seek that property in others. It drives their behavior in more or less explicit ways.</p>
<p>Similarly, English (like most Western languages) has a cultural black box for what we call &quot;sarcasm&quot;, but this black box is largely absent from the Japanese cultural framing: sarcasm is simply not a thing in Japan, and people aren&#x27;t (I&#x27;m tempted to say <em>can&#x27;t be</em>) sarcastic. It just doesn&#x27;t occur to them.</p>
<p>Each culture is made of shared framings—ontologies of things that are taken to exist and play a role in mental models—that arose in those same arbitrary but self-reinforcing ways. Anthropologist Joseph Henrich, in <em>The Secret of Our Success</em>, brings up several studies demonstrating the cultural differences in framings.</p>
<p>He mentions <a href="https://archive.org/details/arewegettingsmar0000flyn" rel="nofollow noopener noreferrer" target="_blank">studies</a> that estimated the average IQ of Americans in the early 1800s to have been around 70—not because they were dumber, but because their culture at the time was much poorer in sophisticated concepts. Their framings had fewer and less-defined moving parts, which translated into poorer mental models. Other studies found that children in Western countries are brought up with very general and abstract categories for animals, like &quot;fish&quot; and &quot;bird&quot;, while children in small-scale societies tend to think in terms of more specific categories, such as &quot;robin&quot; and &quot;jaguar&quot;, leading to different ways to understand and interface with the world.</p>
<p>But framings affect more than understanding. They influence how we <em>take in</em> the information from the world around us. Explaining <a href="https://www.pnas.org/doi/epdf/10.1073/pnas.1934527100" rel="nofollow noopener noreferrer" target="_blank">this paper</a>, Henrich writes:</p>
<blockquote>
<p>People from different societies vary in their ability to accurately perceive objects and individuals both in and out of context. Unlike most other populations, educated Westerners have an inclination for, and are good at, focusing on and isolating objects or individuals and abstracting properties for these while ignoring background activity or context. Alternatively, expressing this in reverse: Westerners tend not to see objects or individuals in context, attend to relationships and their effects, or automatically consider context. Most other peoples are good at this.</p>
</blockquote>
<p>How many connections and interrelations you consider when thinking is in the realm of framings. If your mental ontology treats most things as largely independent and self-sufficient, your mental models will tend to be, for better or worse, more reductionist and less holistic.</p>

<h2>4</h2>
<p>The definition of &quot;framing&quot; that I&#x27;m adopting on Aether Mug is more precise than what people use in general, and for this reason I don&#x27;t know of any study that specifically tested how framings evolve in social interactions. But I don&#x27;t think I&#x27;m making a bold leap by saying that we experience, at a deeper level, the same form of synchronization between framings that we can trivially witness between surface behaviors.</p>
<p>It might take longer, but if everyone around you talks and acts based on the assumption that concepts A and B exist with certain properties, and no one ever mentions concept D or acknowledges it with their behavior, you will gradually shift to think in terms of A and B and not so much in terms of D. Given enough time, the ontological status of D in your mind might atrophy and vanish in the background, while A and B&#x27;s status solidifies.</p>
<p>Somehow, the commuters on Ikebukuro&#x27;s Marunouchi platform have acquired clear and distinct concepts for &quot;<em>senpatsu</em> queue&quot; and &quot;<em>kouhatsu</em> queue&quot;, both of which are absent from the framings of Italian commuters. The &quot;miraculous&quot; part is not that any of them can conceive the idea—any Italian would have no trouble understanding it and following it if those around them did the same—but that feedback loops emerged to reinforce them in the whole commuter culture.</p>
<p>Like in the emergent walking lanes in a corridor, once these recursive forces have gained traction, it&#x27;s almost trivial for newcomers to learn them as &quot;rules&quot; and comply. As is often the case, here the shared framing led to the rules, not the other way around.</p>
<p>In this case, the emergent cultural rules have clear advantages for everyone: more choices, less stress, everyone wins. But it is not true, in general, that all framing synchronizations lead to better behaviors.</p>
<figure><img src="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/955225ilsdl.webp" alt="A 1920 pen-and-ink political cartoon showing an elephant, a donkey, and a camel labeled &#x27;Prohibition&#x27; walking in a line, each imitating the posture of the one ahead"/><figcaption>Imitation is the sincerest flattery, William Henry Walker</figcaption></figure>
<p>The basic force behind all culture formation is imitation. This ability is innate in all humans, regardless of culture: we are extraordinarily good imitators. Indeed, we are <em>overimitators</em>, sometimes with unfortunate consequences.</p>
<blockquote>
<p>Overimitation ... may be distinctively human. For example, although chimpanzees imitate the way conspecifics instrumentally manipulate their environment to achieve a goal, they will copy the behavior only selectively, skipping steps which they recognize as unnecessary [unlike humans, who tend to keep even the unnecessary steps]. ... Once chimpanzees and orangutans have figured out how to solve a problem, they are conservative, sticking to whatever solution they learn first. Humans, in contrast, will often switch to a new solution that is demonstrated by peers, sometimes even switching to <strong>less</strong> effective strategies under peer influence.</p>
<p>— <a href="https://plato.stanford.edu/entries/psychology-normative-cognition/" rel="nofollow noopener noreferrer" target="_blank">The Psychology of Normative Cognition</a>, Stanford Encyclopedia of Philosophy, emphasis theirs.</p>
</blockquote>
<p>We have a built-in need to do what the people around us do, even when we know of better or less wasteful ways. This means that we can&#x27;t even explain culture as something that, while starting from chance events, naturally progresses towards better and better behaviors. That&#x27;s what <a href="https://planktonvalhalla.com/20230905-mass-producing-the-mistake-minimizer/" rel="nofollow noopener noreferrer" target="_blank">science</a> is for.</p>
<p>Once the synchronized behaviors are in our systems, when we are habituated to certain shared ways of doing things, these behaviors feed back into our most basic mindsets, which guide our future behaviors, which further affect each other&#x27;s mindset, and so on, congealing into the shared framings we call culture, i.e.: <em>whatever happens to give the least friction in whatever happens to be the current shared behavioral landscape.</em></p>
<p>This is why, often, formal rules and laws do indeed take root in a culture: not because they&#x27;re rules, but because the way they are enforced creates enough friction—or following them creates enough mutual benefits—that, like in the corridor lanes, crowds will settle into following them. This is also why, perhaps even more often, groups will settle into the easy &quot;unruly&quot; patterns.</p>
<p>Maybe Japanese culture tends to have more extreme examples of this than others because of its meta-cultural framing: not only is imitating others a natural human tendency, but here it has also become a self-reinforcing loop in itself. Imitate the imitating. ●</p>
<p><a href="https://aethermug.com/posts/culture-is-the-mass-synchronization-of-framings#comments">Comment on this post&#x27;s page</a></p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/culture-is-the-mass-synchronization-of-framings/536968ldsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Year Two]]></title>
            <link>https://aethermug.com/posts/year-two?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/year-two</guid>
            <pubDate>Thu, 05 Feb 2026 08:21:47 GMT</pubDate>
            <description><![CDATA[Toddlin' on]]></description>
            <content:encoded><![CDATA[<style>
@keyframes rainbow {
  0% { color: #ff6b6b; }
  14% { color: #feca57; }
  28% { color: #48dbfb; }
  42% { color: #1dd1a1; }
  57% { color: #5f27cd; }
  71% { color: #ff9ff3; }
  85% { color: #54a0ff; }
  100% { color: #ff6b6b; }
}
.animate-rainbow {
  animation: rainbow 3s linear infinite;
}
</style>
<p>Aether Mug has just turned two years old.</p>
<p><a href="/posts/year-one">Three hundred and sixty-four days ago</a> I did a &quot;reflections post&quot; about my blogging experience. There will be a <em>tiny</em> bit of that at the end of this post, too, but I&#x27;ve already done a lot of looking back and a lessons-learned roundup <a href="/posts/nsoott">last week</a>, so this one will be more practical: some announcements and new stuff for you—yes, You, my friendly and uncommonly attractive readers!</p>
<p>Spoilers: new ways to customize your subscriber experience, to give me feedback, to interact with me and other readers, and even to buy me a 🍵 matcha drink!</p>
<h2>Reading Suggestions</h2>
<p>I write all my blog posts to be evergreen: no &quot;news of the week&quot; or trendy topics, only ideas that will (hopefully) remain (mostly) relevant (if not &quot;right&quot;) five, ten, even fifty years from now. This means that, in theory, you can go back and read any of them at any time and still get something out of them.</p>
<p>Now, AeMug has had an influx of new readers this year! But it occurred to me that many of you might not know where (whether?) to start reading older posts. If you&#x27;re one of them, here are some ideas.</p>
<h3>The Most Popular Aether Mug Posts Until Now</h3>
<p>These are the ones that generated the most buzz online, in chronological order:</p>
<p>Year One:</p>
<ul>
<li><a href="/posts/aphantasia">Aphantasia</a> (5-minute read): this actually predates AeMug; it was the Twitter thread that led to my writing for Nautilus magazine.</li>
<li><a href="/posts/the-beautiful-dissociation-of-the-japanese-language">The Beautiful Dissociation of the Japanese Language</a> (20 min)</li>
<li><a href="/posts/borges-on-chaos-theory">Borges on Chaos Theory</a> (8 min)</li>
<li><a href="/posts/boxed">Boxed</a> (12 min)</li>
</ul>
<p>Year Two:</p>
<ul>
<li><a href="/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes">An Aphantasic&#x27;s Observations on the Imagination of Shapes</a> (9 min)</li>
<li><a href="/posts/i-do-not-remember-my-life-and-it-s-fine">I Do Not Remember My Life and It&#x27;s Fine</a> (16 min)</li>
<li><a href="/posts/human-stigmergy">Human Stigmergy</a> (3 min)</li>
<li><a href="/posts/i-used-to-know-how-to-write-in-japanese">I Used to Know How to Write in Japanese</a> (9 min)</li>
<li><a href="/posts/hi-no-youjin">Hi No Youjin</a> (7 min)</li>
<li><a href="/posts/famous-cognitive-psychology-experiments-that-failed-to-replicate">Famous Cognitive Psychology Experiments that Failed to Replicate</a> (7 min)</li>
<li><a href="/posts/process-world-object-oriented-mind">Process World, Object-Oriented Mind</a> (13 min)</li>
<li><a href="/posts/steinbeck-on-teleology">Steinbeck on Teleology</a> (11 min)</li>
<li><a href="/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable">Linear Algebra Explains Why Some Words Are Effectively Untranslatable</a> (13 min)</li>
</ul>
<h3>The Not-So-Popular-But-Foundational Posts</h3>
<p>These ten didn&#x27;t get much attention online, but they treat core ideas that come up over and over on this blog, so they&#x27;re probably good places to start.</p>
<ul>
<li><a href="/posts/embedded-prophesy-devices">Embedded Prophesy Devices</a> (7 min)</li>
<li><a href="/posts/a-framing-is-a-choice-of-boundaries">A Framing Is a Choice of Boundaries</a> (12 min)</li>
<li><a href="/posts/a-fundamental-framing-of-human-language">A Fundamental Framing of Human Language</a> (11 min)</li>
<li><a href="/posts/language-is-a-bottleneck-for-thought">Language is a Bottleneck for Thought</a> (13 min)</li>
<li><a href="/posts/a-black-box-view-of-life">A Black-Box View of Life</a> (8 min)</li>
<li><a href="/posts/a-framing-and-model-about-framings-and-models">A Framing and Model About Framings and Models</a> (17 min)</li>
<li><a href="/posts/rationality-fails-at-the-edge">Rationality Fails at the Edge</a> (12 min)</li>
<li><a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">Living in a Real World, Acting in Imaginary Ones</a> (10 min)</li>
<li><a href="/posts/water-lilies-water-lilies-everywhere">Water Lilies, Water Lilies Everywhere</a> (11 min)</li>
<li><a href="/posts/a-list-of-introspective-descriptions">A List of Introspective Descriptions</a> (6 min)</li>
</ul>
<h3>The Posts I Personally Wish You Would Read</h3>
<p>The title says it all. These are some of the posts closest to my heart, painfully limited to only ten items:</p>
<ul>
<li><a href="/posts/is-there-anything-untranslatable">Is There Anything Untranslatable?</a> (14 min)</li>
<li><a href="/posts/you-don-t-have-time">You Don&#x27;t Have Time to Read Books That Won&#x27;t Change Your Life</a> (4 min)</li>
<li><a href="/posts/philosophy-is-the-battle-against-the-bad-framing-kraken">Philosophy Is the Battle Against the Bad Framing Kraken</a> (2 min)</li>
<li><a href="/posts/reading-blood-meridian-with-aphantasia">Reading Blood Meridian with Aphantasia</a> (18 min)</li>
<li><a href="/posts/jack-of-all-trades-master-of-something">Jack of All Trades, Master of Something</a> (12 min)</li>
<li><a href="/posts/ballistic-moments-of-weightlessness">Ballistic Moments of Weightlessness</a> (6 min)</li>
<li><a href="/posts/whatever-a-man-or-woman">Whatever a Man or Woman Was Able to Accomplish Before, You Can Accomplish, Too</a> (32 min)</li>
<li><a href="/posts/strive-as-it-might">Strive as It Might</a> (1 min) (It&#x27;s a poem.)</li>
<li><a href="/posts/cow-ontologies">Cow Ontologies</a> (6 min) (It&#x27;s a short story.)</li>
<li>The whole <a href="/posts/darwin-the-fun-loving-young-fellow">Darwin in His Own Words</a> series (20 min for the first post)</li>
</ul>
<h2>Practical Stuff</h2>
<p>Until now, the only things <em>you</em> could do with Aether Mug were reading and sending me email replies. Not anymore! Starting today, there are a few other things you can do thanks to the amazing functionality of Buttondown, the newsletter platform I&#x27;m using to send emails.</p>
<p>First of all, if you&#x27;re an email subscriber you can do the following:</p>
<ol>
<li>On your Buttondown profile page <strong>you can opt out of blog post topics you don&#x27;t want to receive</strong>. By default, all subscribers receive AeMug emails weekly-ish. But if you really don&#x27;t ever want to read my long disquisitions about, say, the Japanese language, or about mental imagery, or [insert other recurring topic here], you can now turn them off. The weeks when I write <em>primarily</em> about those topics, you won&#x27;t receive the email in your inbox. (I may still <em>touch</em> on those topics in other posts you do get, so this is not an air-tight system.)</li>
<li><strong>You can also set your first name</strong> to go along with your email by going to your Buttondown profile page. I&#x27;m not tracking you! This is fully optional, and only helps me know what to call you when I send you emails. It&#x27;s a nice way to get to know each other. Hi! I&#x27;m Marco.</li>
</ol>
<p>Pro tip: if you&#x27;re currently following with a feed reader like Feedly or Readwise Reader—I respect you!—and you want to take advantage of this customizability, you&#x27;re in luck: many modern readers allow you to subscribe to newsletters with special dedicated email addresses. Check your reader app&#x27;s docs, and consider registering it via the form below. Then you&#x27;ll be able to customize in the same way.</p>
<p>Second, you can now show your appreciation by treating me to a matcha latte every now and then: I&#x27;m enabling pay-what-you-want subscriptions to this newsletter. <em>I will not lock my blog posts behind a paywall</em>, and (at least for now) there are no special perks you get by becoming a paid subscriber. Please consider becoming one only if and when you feel comfortable with it, as a way to help Aether Mug keep the lights on as long as possible.</p>
<p>At the moment there is no straightforward way to send one-time donations, due to Buttondown&#x27;s limitations. If you only want to donate once, please consider starting a paid subscription, and then cancelling before the second payment (or not). I will send paying subscribers regular reminders that they can stop the donations any time. Sorry for this inconvenience.</p>
<p>And finally, I want to talk about community.</p>
<h2>The Year of Connections</h2>
<p>I spent Year One mostly figuring out that <a href="/posts/year-one">the work is the reward</a>. Year Two is when I learned how to <a href="/posts/nsoott">stay in the day</a> most of the time. I want to make Year Three about connections.</p>

<p>For starters, I think of Aether Mug as halfway between a blog and a <a href="https://maggieappleton.com/garden-history" rel="nofollow noopener noreferrer" target="_blank">digital garden</a>. While I don&#x27;t go back and update most older posts (except for lists like <a href="/posts/a-list-of-introspective-descriptions">this</a> and <a href="/posts/famous-cognitive-psychology-experiments-that-failed-to-replicate">this</a>), I do try to keep them well linked up bidirectionally. You&#x27;ll be seeing more and more links from older posts to newer ones, both in the text and in this brand new &quot;You might also like...&quot; component <span class="xl:hidden">above</span><span class="hidden xl:inline">to the right</span>.</p>
<p>Another kind of connection I&#x27;m working on is with other blogs and social networks. I deliberately chose not to use a platform like Substack because I want the freedom to do random things like <span class="animate-rainbow font-semibold">this</span>, but this choice means AeMug doesn&#x27;t get all the benefits of recommendation algorithms, &quot;similar newsletters&quot;, and other network-effect opportunities. The discoverability of this site is rather abysmal, and it can feel a little like I&#x27;m floating on a raft in the middle of the Pacific Ocean, with the occasional kind visitor swimming by to say hi.</p>
<p>So, at least as a first step, I&#x27;ve linked up the website to the <a href="https://indieweb.org/" rel="nofollow noopener noreferrer" target="_blank">Indieweb</a> and the <a href="https://en.wikipedia.org/wiki/Fediverse" rel="nofollow noopener noreferrer" target="_blank">Fediverse</a>. People can now follow Aether Mug posts on Bluesky (<a href="https://bsky.app/profile/aethermug.com" rel="nofollow noopener noreferrer" target="_blank">@aethermug.com</a>) and Mastodon (<a href="https://mastodon.social/@aethermug.com@aethermug.com" rel="nofollow noopener noreferrer" target="_blank">@aethermug.com@aethermug.com</a>)—these are separate from my personal accounts. I&#x27;ll get webmention notifications when someone in the network links to my website and vice versa. I will also be posting condensed versions of my new and old posts on my personal <a href="https://x.com/marco_giancotti" rel="nofollow noopener noreferrer" target="_blank">Twitter/X</a> and <a href="https://bsky.app/profile/marco-giancotti.bsky.social" rel="nofollow noopener noreferrer" target="_blank">Bluesky</a> accounts. I haven&#x27;t been very active on social media recently, but I&#x27;ll be a better netizen/neighbor this year.</p>
<p>But by far the most interesting connection, for me, is the one with you and, why not? <em>between</em> you, AeMug readers.</p>
<p>The biggest surprise I&#x27;ve had since starting this website has been how many of you have written me emails to share your thoughts, ideas, links, disagreements, and kind words. I&#x27;m slow to answer (expect anything between 2 days and 2 weeks), but I read and respond to them all. Write me more, seriously! It makes it all worth it, all over again.</p>
<p>Direct one-on-one conversations are great, but what if... readers could talk with other readers about these topics? I haven&#x27;t been able to find a space on the internet to discuss this specific mix of ideas (send me links if you have!) and it would be greedy for me to lock your inputs in my inbox alone. So <strong>I will experimentally enable comments on the blog post pages</strong>. You can comment at the bottom of this one!</p>
<p>The website is still largely unknown, so there may be very few or no comments for a good while, but I guess the less friction there is, the better. Commenting requires being subscribed with a verified email address.</p>
<p>If you use a feed reader app and haven&#x27;t subscribed to the newsletter, you can get the best of both worlds in one of two ways: either switch to following the emails via the feed reader (see the previous section) or register your email and immediately opt out of all the topic tags (also explained in the previous section), so that you won&#x27;t get any more emails but you can still comment on the threads.</p>
<p>With AI bots flooding the internet with slop, it&#x27;s a little scary (which is why I haven&#x27;t enabled comments until now), but I hope the humans among you will take the lead.</p>
<h2>Conclusions: This Blog is a Toddler</h2>
<p>The <a href="/about">Aether Mug About page</a> still says that this blog &quot;is about <em>something</em>&quot;, which is intentionally but vexingly vague. I may have to update the page soon, because the fog is clearing. The blog is no longer a collection of disparate body parts; the <a href="/posts/year-one">invisible elephant&#x27;s overall shape</a> is becoming apparent.</p>
<p>One framing above all others helped me keep learning Japanese back in the &#x27;00s: <em>never compare your L2 abilities with those of adult native speakers; the right comparison is with a baby born the day you started learning.</em> I think this might apply to something like a blog, too. At two years old, AeMug is now deep in toddler territory.</p>
<p>Most children speak in simple words and fragmentary sentences until about 18 months of age, then their vocabulary explodes, and they begin making more and more sense. Maybe this is what&#x27;s going on with this website, too. Through all the words that I write on it, it is beginning to speak its own intelligible language. This language is not simply mine, because a blog post always turns out quite different from what I initially intended for it, and because I listen to what those around me have to say. The blog&#x27;s language is the product of my thoughts and the readership&#x27;s response, of the rest of the blogosphere and newsletter-o-sphere, and so on.</p>
<p>I have mountains of ideas for what <em>I</em> want to write about this year. I can&#x27;t wait to see what <em>it</em> will say next.</p>
<p>Thank you so much for teaching the baby how to speak! ●</p>
<p><a href="https://aethermug.com/posts/year-two#comments">Comment on this post&#x27;s page</a></p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/year-two/506787ldsdl.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Never Slide Out of the Day]]></title>
            <link>https://aethermug.com/posts/nsoott?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/nsoott</guid>
            <pubDate>Thu, 29 Jan 2026 11:52:07 GMT</pubDate>
            <description><![CDATA[Looking back on how I've been looking back]]></description>
            <content:encoded><![CDATA[<p>Roughly three years and three months ago I executed what I call my Master-Switch Plan: I resigned from my stable, well-paid position at an excellent up-and-coming Japanese startup and hired myself. That same company immediately became my client, but now I was working for them only two days a week instead of five. I did not take other clients: the whole point, from the beginning, was to take charge of those three days per week to do what I wanted with them. All I wanted was to write and to research interesting questions with no strings attached.</p>
<p>I&#x27;ve been doing that this whole time, and I couldn&#x27;t be happier with my choice. My life is richer, more exciting, and less stressful than ever.</p>
<p>It wasn&#x27;t easy to get into the current state of mind, though. I haven&#x27;t figured it all out, and there is no amazing story of breakout success here, but I consider myself successful beyond my dreams at finding a good place inside me. I feel I can share something about how I did that.</p>
<p>This is not to encourage you to make your own Master Switch with your career. Even assuming the idea attracts you, there are other things to consider before attempting something like that—questions about your finances, your relationships, your goals. I was lucky to have a viable combination of those, so I took the plunge.</p>
<p>Instead, I think these simple lessons I&#x27;m about to share might be useful to anyone, regardless of career situation, at least if they&#x27;re in a specific but rather common kind of predicament: <em>thinking about the future makes them shit their pants.</em></p>
<p>In my case, the pivot won me an enormous amount of freedom and control over my life, and freedom is scary. Add the fact that there is no clear path to stability in front of me outside of &quot;write the best stuff I can until someone maybe hopefully perhaps decides to give me money for it&quot;, and you can probably imagine the state of mind I&#x27;m talking about. Many people in creative jobs surely feel the same. My gut feeling is that even people in more &quot;predictable&quot; careers might feel like this, because they might feel stuck in predictably uninteresting careers.</p>
<p>So, if you want something big and fuzzy in your life, but getting there feels like being lost in a pristine jungle with only the sun and stars for guidance, this is my modest contribution: never slide out of the day.</p>
<h2>The Retros</h2>
<p>The first thing I learned after descending into freedom in the middle of a career jungle is that I am my own predator. For months, I had no idea how to manage my open-ended projects. All the task-management skills I had honed during fifteen years of company work became irrelevant overnight. I was fighting against myself for time. When you can do <em>anything you want</em>, the choices can crush you. Should you spend time building a shelter? Should you climb the tallest tree in the vicinity to figure out what direction to go? Should you be making weapons? Or stock up food before everything else?</p>

<p>The answer is, probably all of those things and a bunch more, and most of them will turn out to have been a waste of time. But which should you spend more time and energy on? How important is your comfort today and tomorrow relative to the big goal of getting back to civilization? There is no end to this Amazon River of questions, and that&#x27;s no good for your stress levels. Sometimes it can make you miss your ex-boss.</p>
<p>Instinctively, I decided early on that I would have to focus on <em>sustainable progress</em>. The only thing I knew for sure was that any kind of success would take a long time to come, assuming it ever came. So it was important that I didn&#x27;t hate the journey, or I would probably burn out and quit at some point.</p>
<p>For a while, I didn&#x27;t worry too much about these complexities. I was just getting started, everything was new and exhilarating, and I was learning a lot anyway. The lack of visibility into the future didn&#x27;t oppress me <em>that</em> much. I swept it all under the rug of my optimism. I would figure it all out, someday.</p>
<p>Then the rug became a small cloth on top of a mountain of dust. Reflecting on how I felt a few months after the Master Switch, I wrote something a little deranged in my journal (forgive the metaphor swap):</p>
<blockquote>
<p>They&#x27;re here. I knew all along that I would have to face them and slay them early on, and it hasn&#x27;t taken them long to appear in front of me. The four demons that dwelled in the shady recesses of my soul are now out in the open, and they&#x27;re after me. Perfectionism, Imposter&#x27;s Syndrome, Loneliness, and Impatience, in all their ugly fanaticism. In order to open the path ahead of me, I was forced to unshackle them, set them free from their black cells and out of my control.</p>
</blockquote>
<p>It goes on, but I&#x27;ll spare you the most embarrassingly dramatic bits.</p>
<p>For long periods, a slow-burning existential dread took hold of me. Was I making the greatest mistake of my life? Would I have to return to the job market poorer, humiliated, and less employable? Was I wasting the most important years of my life?</p>
<p>When your head gives no answers, your gut is usually more cooperative. About one year in, I felt I needed a change of air and some healthy solitude. I booked a cheap hotel (frugality is part of the Plan) in Morioka, a minor city in northeastern Japan, and spent a few days away from home. While there, I stopped doing what I had been doing for months and did one thing only: I sat in a cafe and wrote a long debate with myself about this whole situation. I wrote a paragraph or two about the things I had done well, the lessons I learned, the progress I made, then I wrote a paragraph or two of rebuttal, tough questions, and lists of failures. And so on.</p>
<p>The critical voice was ruthless, not pulling any punches, and the optimistic voice answered back just as vehemently, pointing out the silver linings, the opportunities hidden behind every failure, the lessons to take stock of. Fortunately, the two voices were constructive enough to eventually settle for an uneasy truce, with several ideas about changes I should make to improve my chances of survival.</p>
<p>It felt good. I went back home revitalized, motivated all over again to keep going. These things seemed to help:</p>
<ul>
<li>Being away from home, in an unfamiliar place</li>
<li>Being alone and a little lonely</li>
<li>Having no other purpose than reflecting on the past few months and the future direction—there wasn&#x27;t much to see or do in Morioka, which made it perfect for the exercise</li>
</ul>
<p>In other words, I needed to get out of my everyday environment. Attempting the same in the usual places, for some reason, didn&#x27;t feel right. How curious.</p>
<p>In any case, it felt so good that I decided to do it again four months later, in another minor city. Since then I&#x27;ve turned those trips into a thrice-a-year ritual, every January, May, and September, each time in a different place in Japan, each carefully chosen to give me no choice other than to look into myself. I&#x27;ve taken to calling these &quot;retro(spective) trips&quot;. Every time, I emerge a different man. I&#x27;m more focused, more optimistic, and super eager to start <em>doing</em> stuff.</p>
<p>And it&#x27;s not a temporary illusion either: I have the records of all those retrospective sessions, and I can see that each of them was a permanent step up in my maturity, peace of mind, and productivity. Each time my harvest included long lists of practical ideas on how to refocus, course-correct, optimize, and avoid obstacles, most of which I put into practice the very next day.</p>
<p>The act of thinking earnestly about my life and my future, which used to make me feel queasy every day, instead makes me feel very good during my retro trips. Really, <em>why though</em>?</p>
<p>It was a while before I figured out what was going on. I was only looking at half the picture. It turns out that the positive effects of those trips were even better than I realized at the time.</p>
<h2>The Other Half of the Picture</h2>
<p>The other half of the picture was the whereabouts of my mind on all the days <em>other</em> than the trips. Before, I did okay-ish on most days, but I had frequent periods of anxiety about it all. I was doing the same work, but in those periods I felt lost, drifting through the jungle in a random direction, my star-reading skills suddenly seeming very suspicious.</p>
<p>Then, gradually, I began to notice that I felt good on the days when every thought was about the immediate tasks at hand, and none beyond that horizon. When, in the early days, I tried to consciously avoid thinking about my long-term prospects, I found it almost impossible. Not thinking long-term felt like an unstable equilibrium, with slippery slopes all around me ready to take me back into anxiety with a brief swooshing sound. It was like trying not to think about a pink elephant, with the smell of elephant dung filling the room and a trunk wrapped around my neck.</p>
<p>Here&#x27;s where my retro trips helped: they took care of the elephant for me, and that made it much easier to avoid sliding into a constant re-evaluation of my life.</p>
<p><a href="https://planktonvalhalla.com/20251204-purpose-from-first-principles/" rel="nofollow noopener noreferrer" target="_blank">Purposes form webs of dependencies</a> between each other. A plan is a connected hierarchy of goals and subgoals and sub-subgoals. In general, every small action is an intermediate step toward something a little bigger, which in turn is a step toward something even bigger, and so on until you get to your deepest and most mysterious goals and desires. Remove a higher-level goal, and the subgoals that were instrumental to its achievement become obsolete and meaningless—no point in getting halfway there after you&#x27;ve given up on going all the way.</p>
<p>To accomplish anything non-trivial, you need to be sure what it is you&#x27;re really trying to do, and you need to be sure how the smaller acts connect to <em>that</em>. Lose track of the higher goals, and your means will become your new goals, and may even rebel against your own interests.</p>
<p>Unfortunately, keeping track of the higher goals <em>while</em> taking your smaller steps toward them is also the perfect recipe for analysis paralysis, constant doubts, and a sense of running in place. This is where our good friend Existential Dread resides. Here be demons.</p>
<p>By making retro trips a routine exercise, I unwittingly freed myself from having to think about those scary goal hierarchies all the time. I&#x27;m separating the <em>zoom in</em> periods of daily work from the <em>zoom out</em> bursts of life planning. This was the key to both my peace of mind and the huge improvement in productivity I&#x27;ve seen in these past two years.</p>
<p>My Obsidian vault has a motto in a big font on its landing page: <em>Never Slide Out of the Day</em>. Each day is small and smooth, with few handles and slippery edges. The risk of falling into the same patterns of self-doubt is always there. But, on closer inspection, each day is exactly big enough to contain my whole being comfortably. I just need to take care not to slide out of it, not to spill myself all over the vast expanse of days ahead and behind me.</p>
<p>I&#x27;ve learned to recognize the right state of mind every day. If I&#x27;m &quot;in the day&quot;, I don&#x27;t think about the past, I don&#x27;t think about where I&#x27;m heading, and I suspend all worries about my life path. It&#x27;s just me and today&#x27;s work. Each of those days, modest and unassuming as they are, is where I find all my moments of unmitigated happiness doing what I love. If there is a need to course-correct, if I&#x27;m doing something wrong or wasteful or misguided, I will eventually find out during my next retrospective trip. For today, I live my life as I want and need it.</p>
<p>The result is that these years have become, for me, a period of my life that I will never regret living, regardless of where I end up later.</p>
<h2>How I Do It Today</h2>
<p>Nowadays, my life consists of 98% in-the-day days, and 2% of retrospective work &quot;outside&quot;. This balance feels just right for me. The mindset is one of constant experimentation. I don&#x27;t know what will work, but I know what <em>might</em> work, so each time I commit to a list of changes to make in the following four months, and next time I&#x27;ll assess how those went.</p>
<p>For example, in the last retrospective I decided I wanted to squeeze one more hour of work out of my brain in my daily schedule, but faced the reality that my morning-person brain takes a dramatic plunge in IQ after about 3 PM. Writing about it in my retro work (nowadays it&#x27;s not a self-dialog but more like a monologue), I noticed that different kinds of cognitive tasks require very different amounts of mental energy. I did some AI-aided research, found which tasks have a higher cognitive load, and decided to try optimizing the <em>order</em> in which I do things: heavy first, gradually lighter as I get deeper into the afternoon. This simple experiment worked <em>magnificently</em>, as I confirmed when taking stock this morning (I&#x27;m in the middle of my January retro trip right now!).</p>
<p>Other times I&#x27;m forced to note that an experiment didn&#x27;t work at all (case in point: I still spend much more time than I&#x27;d like writing Aether Mug blog posts; I&#x27;m open to suggestions on how to contain these beasts). In those cases, in the retro I&#x27;ll think of something else that might work based on what I learned in the previous failed attempts.</p>
<p>After a few iterations of this, I&#x27;ve gotten quite good at guessing which experiments might actually work <em>and</em> be sustainable for me in the long run. My initiatives value MX (Marco Experience) highly, because I can&#x27;t fool myself into doing something I don&#x27;t want to.</p>
<p>This whole process creates a virtuous cycle. When I&#x27;m focused on the here and now, knowing that the time to retro- and intro-spect will come soon enough, I&#x27;m not only happier, but also more productive and creative. I achieve more with a tenth of the stress. At the same time, I look forward to the trips as the time when solutions are found. Those trips, in turn, remain optimistic and constructive because, despite tackling the hard questions, I enter them already in a low-stress, forward-looking state of mind.</p>
<p>Externalizing and structuring things like this makes it very easy to surface and disentangle the key issues in my mind, which in turn makes it easy to come up with ideas to experiment with. There&#x27;s no need to conjure the perfect solution on the first attempt: good enough is more than good enough. And if a better idea exists, my gut will work on it and make it pop in my mind soon, <em>without me consciously struggling with it, or sliding out of the day</em>. Rinse and repeat, good <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">Water Lily</a>.</p>
<p>Greetings from the jungle! ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/nsoott/yanhao-fang-rNpxTNTsOjw-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Darwin El Naturalista]]></title>
            <link>https://aethermug.com/posts/darwin-el-naturalista?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/darwin-el-naturalista</guid>
            <pubDate>Thu, 22 Jan 2026 08:10:47 GMT</pubDate>
            <description><![CDATA[In His Own Words (Episode 5)]]></description>
            <content:encoded><![CDATA[<p><em>This is the final episode in the series of curated quote collections &quot;Darwin in His Own Words&quot;.</em></p>
<ol>
<li><em><a href="/posts/darwin-the-fun-loving-young-fellow">Darwin the Fun-Loving Young Fellow</a></em></li>
<li><em><a href="/posts/darwin-the-voyager">Darwin the Voyager</a></em></li>
<li><em><a href="/posts/darwin-the-man-of-his-times">Darwin the Man of His Times</a></em></li>
<li><em><a href="/posts/darwin-the-witness">Darwin the Witness</a></em></li>
<li><em>Darwin El Naturalista</em></li>
</ol>

<p>At last, it&#x27;s time to talk about young Charles Darwin&#x27;s scientific side during his voyage on the <em>Beagle</em> (in his own words, of course). To be honest, though, so much has been written already about the origin of his <em>Origin</em> (pun detected) that it would be boring to repeat the same old stuff. I want to share with you the kind of quotes that don&#x27;t typically get paraphrased in textbooks and epitaphs. The rest is history, so this post is about the <em>non</em>-history part. Not the widely known history, at least.</p>
<p>What you&#x27;ll find in this slice of the book is a collection of almost-there-but-not-quite ideas. My main point is that Darwin—or <em>El Naturalista Don Carlos</em>, as his passport identified him—was already doing <em>something</em> right even in his early twenties, decades before he announced his theory of evolution by natural selection. You can see it in what he pays attention to, and you can see it in the questions that torment his youthful brain. From those, I think, you can glean all you need to understand how he managed to become what he later became.</p>
<h2>Rocks and Shifting Earth</h2>
<p>Darwin was recruited into Captain FitzRoy&#x27;s crew as an all-around &quot;naturalist&quot;. His mentor and protector Prof. John Stevens Henslow recommended him for the position, informing him of this helpful job description: &quot;<em>collecting, observing, &amp; noting anything worthy to be noted in Natural History.</em>&quot; The sciences were not so strictly separated at the time, so Darwin had learned the basics of most disciplines.</p>
<p>But, even then, everyone had a favorite discipline, and so did Darwin. When he boarded the ship in December 1831, he probably saw himself, first and foremost, as... a geologist.</p>
<p>Today we associate him with biology, but the love for wild flora and fauna was more of a hobby to him at the time. He collected beetles, shot birds, and liked to take long walks in the woods, and that was more or less it. Geology, on the other hand, must have looked like the most promising path forward for his uncertain career. Just before receiving his life-changing invitation from FitzRoy, he had been assisting a professor on a month-long field trip to study the rock formations and geological strata of North Wales. He learned useful skills in this survey, and his notes at the time suggest that he was especially eager to study the hell out of those South American rocks. That he would learn more about how Life works than anyone before him may not have crossed his mind.</p>
<p>Darwin did stick to his resolution, and duly surveyed the mountains and mineral formations of every area he visited over the five-year voyage. But he did not limit himself to describing what he saw. He interpreted and, crucially, he confirmed.</p>
<blockquote>
<p>Daily it is forced home on the mind of the geologist that nothing, not even the wind that blows, is so unstable as the level of the crust of this earth.</p>
<p>— Voyage of the Beagle (all quotes below from the same book unless otherwise stated)</p>
</blockquote>
<p>A sentence like this may sound ordinary today, but it was quite unorthodox in his day. Darwin had been struck by Charles Lyell&#x27;s controversial new theory of how the Earth&#x27;s surface came to be as it is: not with sudden catastrophes that lifted mountains and tore canyons across the land once and for all, as was thought at the time, but by very slow and gradual changes. Where the geological orthodoxy believed that, once formed, all features of the land remained otherwise static, Lyell showed ample evidence that most of it had happened—was <em>still</em> happening—through continuous, &quot;uniform&quot; forces.</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-el-naturalista/lyell.webp" alt="19th-century engraving of a deep V-shaped ravine with steep eroded walls, a tiny human figure standing at the bottom for scale, surrounded by rolling hills with scattered trees"/><figcaption>A drawing of a spot near Milledgeville, Georgia, US, used by Lyell to demonstrate the variability of landscapes: &quot;when the trees of the forest were cut down, cracks three feet deep were caused by the sun&#x27;s heat in the clay; and, during the rains, a sudden rush of water through the principal crack deepened it at its lower extremity, from whence the excavating power worked backwards, till, in the course of twenty years, a chasm, measuring no less than 55 feet in depth, 300 yards in length, and varying in width from 20 to 180 feet, was the result.&quot;</figcaption></figure>
<p>Young Darwin was won over by Lyell&#x27;s uniformitarianism well before it became widely accepted among the &quot;respectable&quot; scholars. He made Lyell&#x27;s framing his own even before Lyell was finished publishing his series of books on the topic.</p>
<p>It&#x27;s not that he didn&#x27;t feel the tension between intuition and scientific evidence, though:</p>
<blockquote>
<p>As often as I have seen beds of mud, sand, and shingle, accumulated to the thickness of many thousand feet, I have felt inclined to exclaim that causes, such as the present rivers and the present beaches, could never have ground down and produced such masses. But, on the other hand, when listening to the rattling noise of these torrents, and calling to mind that whole races of animals have passed away from the face of the earth, and that during this whole period, night and day, these stones have gone rattling onwards in their course, I have thought to myself, can any mountains, any continent, withstand such waste?</p>
</blockquote>
<p>This is what a good scientist must do: <a href="/posts/steinbeck-on-teleology">neither ignoring nor blindly following</a> his first instinct, Darwin takes note of it and goes on to collect more data. He knows that the scientific method, and not any jump to conclusions, is the <a href="https://planktonvalhalla.com/20230905-mass-producing-the-mistake-minimizer/" rel="nofollow noopener noreferrer" target="_blank">shortest path</a> to a potential resolution of that tension.</p>
<p>Each <a href="/posts/a-framing-and-model-about-framings-and-models">framing</a> is a lens capable of showing you the world in a different way, so this early bet on a mostly-correct new theory by Lyell was a major win for Darwin. In a sense, it made him the first scientist ever to look at the geology of South America and Oceania from a modern perspective.</p>
<p>So what? His contributions to geology have been eclipsed by the success of his theory of evolution. Ironically, though, his progressive view of geology played a major role in the development of his <em>biological</em> theory.</p>
<h2>Predictions and Discoveries</h2>
<figure><img src="https://aethermug.com/assets/posts/darwin-el-naturalista/rhea.webp" alt="Naturalist illustration of a rhea standing in profile, showing its brown and gray mottled plumage, long bare neck, small head, and long legs against a sparse landscape"/><figcaption>Darwin&#x27;s rhea, one of the species first described by the man.</figcaption></figure>
<p>Although researching rocks and minerals seemed like his career of choice, Darwin was diligent enough to also study the living things of every place he visited. In total, he brought back to Britain 1529 specimens in bottled spirits and 3907 dried specimens of plants and animals, including dozens of new species of birds, fish, and insects. These contributions were clearly appreciated by his fellow scholars, but I think they were a bit less sensational than they might sound today. South America hadn&#x27;t been thoroughly explored by scientists yet, and even today it is full of undiscovered species, so it wasn&#x27;t <em>too</em> hard to stumble on yet-uncataloged creatures.</p>
<p>More important (though still not earth-shattering, I think) was his discovery of long-extinct megafauna fossils.</p>
<blockquote>
<p>October 1st.—We started by moonlight and arrived at the Rio Tercero by sunrise. ... I stayed here the greater part of the day, searching for fossil bones. Besides a perfect tooth of the Toxodon, and many scattered bones, I found two immense skeletons near each other, projecting in bold relief from the perpendicular cliff of the Parana. They were, however, so completely decayed, that I could only bring away small fragments of one of the great molar teeth; but these are sufficient to show that the remains belonged to a Mastodon.</p>
</blockquote>
<p>He unearthed bones of ancient relatives of horses, giant sloths, and other large mammals unlike any alive in his time. Overall, these findings created more questions than they answered, but they were immediately useful for at least one thing: they put the mystery of the extinction of species on the forefront of Darwin&#x27;s mind. They also forced him to puzzle over the peculiar ways these animals were distributed across the world:</p>
<blockquote>
<p>As so many species, both living and extinct, of these same genera inhabit and have inhabited the Old World, it seems most probable that the North American elephants, mastodons, horse, and hollow-horned ruminants migrated, on land since submerged near Behring’s Straits, from Siberia into North America, and thence, on land since submerged in the West Indies, into South America, where for a time they mingled with the forms characteristic of that southern continent, and have since become extinct.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-el-naturalista/skeleton-of-mylodon-darwinii-from-the-ken-welsh.webp" alt="19th-century scientific illustration of a giant ground sloth skeleton in a rearing position, supported on its hind legs and massive tail, with large clawed feet and distinctive skull"/><figcaption>Skeleton of a Mylodon darwinii, the extinct giant sloth discovered by Darwin. It was 3 to 4 m (10 to 13 ft) in length.</figcaption></figure>
<p>This prediction was surprisingly accurate. Well, he did get the details of the <em>direction</em> of the migrations wrong (for example, today we know that horses originated in North America and moved <em>out</em> of it via the Bering Strait), but the idea that temporary land bridges could explain how species redistributed themselves was quite novel—a fruit of his adoption of Lyell&#x27;s framing of ever-shifting landforms.</p>
<p>On the more strictly geological side of things, Darwin was not afraid to advance bold ideas about how mountains form:</p>
<blockquote>
<p>From many reasons, I believe that the frequent quakings of the earth on this line of coast are caused by the rending of the strata, necessarily consequent on the tension of the land when upraised, and their injection by fluidified rock. This rending and injection would, if repeated often enough ... form a chain of hills</p>
</blockquote>
<p>Again, the details weren&#x27;t <em>exactly</em> right (magma injections were an effect of plate tectonics, not a cause in themselves), but he was doing pretty radical work all the same. Others had proposed a link between earthquakes, volcanoes, and orogeny (mountain formation) before, but Darwin was the first to provide direct and compelling evidence for it. He made a connection between his observations of the Concepción 1835 earthquake (<a href="/posts/darwin-the-witness">Ep. 4</a>) and its aftermath, where the <em>Beagle</em>&#x27;s surveyors measured land uplift of up to 3 meters, and his discovery of marine fossils high up in the Andes; from these, he correctly concluded that, given enough time, a sequence of events like that could gradually have produced the majestic mountain ranges of the region. Again, Lyell&#x27;s new theory at work, merely a couple of years after its publication.</p>
<p>But Darwin&#x27;s biggest scientific discovery in the short term, brilliant enough to earn him a strong reputation long before his other, bigger theory, happened in the latter half of his voyage.</p>
<p>For some time, he had been curious about those bizarre shapes formed by corals in the tropical seas:</p>
<blockquote>
<p>It is remarkable how little attention has been paid to encircling barrier-reefs; yet they are truly wonderful structures. The accompanying sketch represents part of the barrier encircling the island of Bolabola in the Pacific, as seen from one of the central peaks.</p>
</blockquote>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/darwin-el-naturalista/coral.webp" alt="" title="Line drawing of a volcanic island with a jagged central peak and palm trees at its base, encircled by a distant ring of palm-covered reef visible as a broken line across the sea"/></figure>
<blockquote>
<p>... Usually a snow-white line of great breakers, with only here and there a single low islet crowned with cocoa-nut trees, divides the dark heaving waters of the ocean from the light-green expanse of the lagoon-channel.</p>
</blockquote>
<p>Darwin knew, as did his contemporaries, that those rings were made by corals, but no one knew the mechanism behind those strange patterns:</p>
<blockquote>
<p>What has caused these reefs to spring up at such great distances from the shores of the included islands? It cannot be that the corals will not grow close to the land.</p>
</blockquote>
<p>And yet the same pattern pops up over and over in the oceans of the world (they are examples of what I call <a href="/posts/water-lilies-water-lilies-everywhere">Water Lilies</a>). Darwin was the first to propose a model that was both convincing and supported by his empirical observations. In simple terms, they form when islands slowly sink over geological ages: the corals initially form close to the central island&#x27;s shore, but as their reefs are dragged down by the rocky base, they build upward in their drive to remain close to the surface of the sea. In this way, the reef builds on top of itself and retains its original diameter, even while the central island&#x27;s diameter shrinks.</p>
<p>This theory was both correct and original, and became his first scientific monograph after his return to England. He cleverly built on top of Lyell&#x27;s ideas of gradual geological change to explain something about living creatures: almost a trademark move for him at this point.</p>
<p>The <em>Voyage</em> is full to the brim with Darwin&#x27;s open questions and his hypothetical answers. A large number of them turned out to be wrong, of course, but several were surprisingly accurate and ahead of their time. How could a green twentysomething fresh out of college be so good at theorizing, and so often right?</p>
<p>I think the answer is clear: he was not afraid to make connections between fields that at the time seemed utterly unrelated. His conjectures about megafauna migrations and coral reefs, for example, were good conjectures thanks to his putting together geological ideas with data about animal behaviors.</p>
<h2>Unbounded Curiosity</h2>
<p>I don&#x27;t want to overstate Darwin&#x27;s direct discoveries during those years of travel. He didn&#x27;t make <em>that</em> many discoveries, and he didn&#x27;t change the world with them anyway (with the one obvious exception). In the grand scheme of things, those results were somewhat impressive, but not super important.</p>
<p>What I found most interesting, while reading his book, was not the number of correct theories he formulated, but the number of great questions he posed. The <em>Voyage</em> is thickly sprinkled with countless expressions of its author&#x27;s curiosity about the amazing kaleidoscope of natural phenomena he witnessed.</p>
<blockquote>
<p>I was often interested by watching the clouds, which, rolling in from seaward, formed a bank just beneath the highest point of the Corcovado. ... Mr. Daniell has observed, in his meteorological essays, that a cloud sometimes appears fixed on a mountain summit, while the wind continues to blow over it. The same phenomenon here presented a slightly different appearance. In this case the cloud was clearly seen to curl over, and rapidly pass by the summit, and yet was neither diminished nor increased in size.</p>
</blockquote>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/zEf9OrBcKrc?start=12&amp;end=25" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<p>Besides all the wonders I quoted back in <a href="/posts/darwin-the-voyager">Episode 2</a>, Darwin also wondered about the lifecycle of worms living in saline mud (&quot;<em>what becomes of these worms when, during the long summer, the surface is hardened into a solid layer of salt?</em>&quot;), about the self-organization of krill spawn into long red streaks in the sea (&quot;<em>how do the various bodies which form the bands with defined edges keep together? ... what causes the length and narrowness of the bands?</em>&quot;), about the fossilization of trees (&quot;<em>how surprising it is that every atom of the woody matter ... should have been removed and replaced by silex so perfectly that each vessel and pore is preserved!</em>&quot;), about the habits of birds and reptiles, the distribution and differences between species, the meaning of geological layers, and too many more things to attempt a full list.</p>
<p>I need to make a little detour: out of the questions Darwin posed to himself, I was most struck by those about diseases. I knew that germ theory would only become widely accepted a few decades after the <em>Beagle</em>&#x27;s trip, so of course I should have expected nothing else, but quotes like these still managed to bewilder me:</p>
<blockquote>
<p>In all seasons, both inhabitants and foreigners suffer from severe attacks of ague. This disease is common on the whole coast of Peru, but is unknown in the interior. The attacks of illness which arise from miasma never fail to appear most mysterious.</p>
</blockquote>
<p>Why does miasma just &quot;pop up&quot; here and not there? Also:</p>
<blockquote>
<p>The plain round the outskirts of Callao is sparingly covered with a coarse grass, and in some parts there are a few stagnant, though very small, pools of water. The miasma, in all probability, arises from these ... Miasma is not always produced by a luxuriant vegetation with an ardent climate;</p>
</blockquote>
<p>and</p>
<blockquote>
<p>From these facts it would almost appear as if the effluvium of one set of men shut up for some time together was poisonous when inhaled by others; and possibly more so, if the men be of different races.</p>
</blockquote>
<p>Reading these after all those other clever theories, it finally dawned on me, with full force: oh boy, <em>they had no idea</em>! And yet they were <em>so close</em>:</p>
<blockquote>
<p>Mysterious as this circumstance appears to be, it is not more surprising than that the body of one&#x27;s fellow-creature, directly after death, and before putrefaction has commenced, should often be of so deleterious a quality that the mere puncture from an instrument used in its dissection should prove fatal.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-el-naturalista/tucotuco.webp" alt="Photograph of a tuco-tuco emerging from sandy soil, showing its grayish-brown fur, prominent orange incisors, whiskers, small dark eyes, and thin tail"/><figcaption>Darwin was intrigued by the apparent blindness of the tuco-tuco, a burrowing rodent. He writes, &quot;considering the strictly subterranean habits of the tucu-tuco, the blindness, though so common, cannot be a very serious evil; yet it appears strange that any animal should possess an organ frequently subject to be injured.&quot; Then he adds, with a hint of humor, &quot;Lamarck would have been delighted with this fact, had he known it.&quot; (Wikimedia Commons)</figcaption></figure>
<h2>Premonitions</h2>
<p>You probably remember Darwin&#x27;s Galapagos finches, each with a slightly different beak adapted to a different island in the archipelago. The story goes that those birds were what got him started with his big <em>Origin of Species</em> project. Well, the story was a bit more nuanced than that. The naturalist had been collecting, and <em>noticing</em>, all the pieces necessary for his theory over the course of several years.</p>
<p>The <em>Voyage</em> is an unwitting documentary on how his most consequential ideas slowly coalesced from their primordial soup, despite the fact that he still hadn&#x27;t figured the theory out at the time of the chronicle&#x27;s publication.</p>
<p>First, to avoid some common misconceptions, let me summarize in a paragraph where Darwin was starting from: the dominant theory about species was that somehow they were created—by God, presumably—all at once a few thousand years ago, and remained unchanged after that. But some scholars had been discussing for a while the idea of evolution, or variation of species (e.g. Lamarck and Charles&#x27; own grandfather Erasmus Darwin). Also, the facts that some species went extinct and others could &quot;colonize&quot; new regions were, by the 1830s, hard to refute. Lyell was now championing the idea that the world is much older than initially thought, and that it never ceased transforming. Finally, Humboldt and others had noted that animal and plant species always distribute themselves in specific parts of the world where they are well-adapted to the challenges of the local environment.</p>
<p>In other words, while the idea of evolution was still highly controversial, the old creationist theory was already showing deep, deep cracks. The problem was that no one had figured out how evolution might possibly work, and few even dared to try.</p>
<p>Darwin himself, for all his youthful progressivism, didn&#x27;t try to question the creationist view in the <em>Voyage</em>. In an amusing &quot;Darwin before Darwin&quot; moment, he notes that different species exist on the two sides of the Cordillera, but waves it away noncommittally:</p>
<blockquote>
<p>These mountains have existed as a great barrier since the present races of animals have appeared; and therefore, unless we suppose the same species to have been created in two different places, we ought not to expect any closer similarity between the organic beings on the opposite sides of the Andes than on the opposite shores of the ocean.</p>
</blockquote>
<p>And yet, his inexhaustible curiosity and attention to detail made him... notice things.</p>
<p>He noticed firsthand how well organisms seem to be adapted to the places they live in. For example, about the Galapagos lizards he notes:</p>
<blockquote>
<p>Their limbs and strong claws are admirably adapted for crawling over the rugged and fissured masses of lava which everywhere form the coast.</p>
</blockquote>
<div class="col-start-1 col-end-1 m-10"></div>
<p>He also noticed that animals adapt differently to different <em>climates</em>:</p>
<blockquote>
<p>The lowest point to which the thermometer fell was 41.5° ... Yet with this high temperature, almost every beetle, several genera of spiders, snails, and land-shells, toads and lizards, were all lying torpid beneath stones. But we have seen that at Bahia Blanca, which is four degrees southward, and therefore with a climate only a very little colder, this same temperature, with a rather less extreme heat, was sufficient to awake all orders of animated beings. This shows how nicely the stimulus required to arouse hybernating animals is governed by the usual climate of the district, and not by the absolute heat.</p>
</blockquote>
<p>That is to say, even relatively similar animals in relatively close regions will behave differently depending on the temperatures typical of where they live.</p>
<p>If that weren&#x27;t enough, he noticed how species adapt <em>to each other</em>, i.e. to what today we would call their ecosystems.</p>
<p>For example, he found that those marine lizards in the Galapagos were very agile in the water and almost defenseless on land, and yet they very much preferred staying dry even when a British guy kept tormenting them up there (remember <a href="https://aethermug.com/posts/darwin-the-fun-loving-young-fellow" rel="nofollow noopener noreferrer" target="_blank">Ep. 1</a>?). About this paradox he notes:</p>
<blockquote>
<p>Perhaps this singular piece of apparent stupidity may be accounted for by the circumstance that this reptile has no enemy whatever on shore, whereas at sea it must often fall a prey to the numerous sharks. Hence, probably, urged by a fixed and hereditary instinct that the shore is its place of safety, whatever the emergency may be, it there takes refuge.</p>
</blockquote>
<p>Indeed! Another time, after describing the remarkable ability of a crab to split open a rock-hard coconut, he adds,</p>
<blockquote>
<p>I think this is as curious a case of instinct as ever I heard of, and likewise of adaptation in structure between two objects apparently so remote from each other in the scheme of nature as a crab and a cocoa-nut tree.</p>
</blockquote>
<p>Until this point, okay—perhaps God simply created all these creatures with these adaptations from the start. But other things caught Darwin&#x27;s attention, things that weren&#x27;t so easily reconciled with creationism.</p>
<p>For one thing, ecosystems themselves are always changing! Emus and kangaroos were being decimated in Australia, for example, and something similar had happened in Argentina:</p>
<blockquote>
<p>According to the principles so well laid down by Mr. Lyell, few countries have undergone more remarkable changes, since the year 1535, when the first colonist of La Plata landed with seventy-two horses. The countless herds of horses, cattle, and sheep, not only have altered the whole aspect of the vegetation, but they have almost banished the guanaco, deer, and ostrich. Numberless other changes must likewise have taken place.</p>
</blockquote>
<p>And what about this parallel between humans conquering other humans and what happens between different animals?</p>
<blockquote>
<p>The varieties of man seem to act on each other in the same way as different species of animals—the stronger always extirpating the weaker.</p>
</blockquote>
<p>Call it &quot;survival of the strongest&quot;, if not yet the <em>fittest</em>.</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-el-naturalista/marine_iguana.webp" alt="Detailed black and white scientific illustration of a marine iguana viewed from above, showing its scaly body, spiny crest along its back, long curved tail, and splayed clawed feet"/><figcaption>The kind of marine iguana from the Galapagos that Darwin repeatedly threw into the water to see if it had had enough of the land.</figcaption></figure>
<p>Finally, and for good measure, the <em>Voyage</em> shows Darwin reasoning rather sharply about inheritance of behaviors. Remember how he would casually kill ultra-naive birds &quot;<em>with a switch, and sometimes, as I myself tried, with a cap or hat</em>&quot; (<a href="/posts/darwin-the-fun-loving-young-fellow">Ep. 1</a>)? Well, he didn&#x27;t do that for fun, really. He was learning some very important truths about nature:</p>
<blockquote>
<p>From these several facts we may, I think, conclude, first, that the wildness of birds with regard to man is a particular instinct directed against him, and not dependent upon any general degree of caution arising from other sources of danger; secondly, that it is not acquired by individual birds in a short time, even when much persecuted; but that in the course of successive generations it becomes hereditary.</p>
</blockquote>
<p>All these &quot;noticings&quot;, to me, paint an interesting picture. Almost all of the pieces were right there under his nose, dots almost amusingly easy to connect for us readers of the future. Unfortunately, he didn&#x27;t get it yet at the time of the <em>Voyage</em> (he hadn&#x27;t read about Malthus&#x27; idea of the eternal struggle for limited resources), and could only conclude that his observations might hopefully &quot;<em>assist in revealing the grand scheme, common to the present and past ages, on which organised beings have been created.</em>&quot;</p>
<p>He didn&#x27;t get it, but he was deeply puzzled by all of it. The finches, and the careful re-examination of his specimens back home, were not the spark that gave him the idea. They were the last drop in a bucket already full of questions.</p>
<h2>Who Was He?</h2>
<p>To recap, reading Darwin&#x27;s innocent debut publication reveals a man endlessly curious about everything around him; a man eager to make new connections between different fields; a still-nameless scientist brave enough to advance hypotheses considered almost heretical by most of his peers; a man capable of acknowledging the tensions between compelling ideas, and of refusing to be satisfied with them; and a man with the patience and drive to pursue the truth, even when he knows it could be unpleasant, as demonstrated by his &quot;golden rule&quot;:</p>
<blockquote>
<p>Whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.</p>
<p>— Autobiography</p>
</blockquote>
<p>In short, he was already a great scientist before even making a single discovery.</p>
<p>As an extra gift, a reading of <em>The Voyage of the Beagle</em> will also show you a fun-loving, adventurous, spoiled, in some ways narrow-minded, frequently wonder-struck, and observant young man. So much gold in just one book! And, if you&#x27;ve read this far, you know what to read next. ●</p>

<p><em>The End</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/darwin-el-naturalista/Charles_Darwin_Lawrence.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Darwin the Witness]]></title>
            <link>https://aethermug.com/posts/darwin-the-witness?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/darwin-the-witness</guid>
            <pubDate>Thu, 08 Jan 2026 07:24:24 GMT</pubDate>
            <description><![CDATA[In His Own Words (Episode 4)]]></description>
            <content:encoded><![CDATA[<p><em>This is the fourth episode in the series of curated quote collections &quot;Darwin in His Own Words&quot;.</em></p>
<ol>
<li><em><a href="/posts/darwin-the-fun-loving-young-fellow">Darwin the Fun-Loving Young Fellow</a></em></li>
<li><em><a href="/posts/darwin-the-voyager">Darwin the Voyager</a></em></li>
<li><em><a href="/posts/darwin-the-man-of-his-times">Darwin the Man of His Times</a></em></li>
<li><em>Darwin the Witness</em></li>
<li><em><a href="/posts/darwin-el-naturalista">Darwin El Naturalista</a></em></li>
</ol>

<p>It should be clear by now that Charles Darwin&#x27;s <em>Voyage of the Beagle</em> is much more than a log of scientific notes. Descriptions of nature and its mysteries abound—we&#x27;ll see them in the next episode—but his book is also <a href="/posts/darwin-the-fun-loving-young-fellow">a window into his own character</a>, <a href="/posts/darwin-the-voyager">an adventure story</a>, and a reminder of the <a href="/posts/darwin-the-man-of-his-times">glaring ethical blind spots</a> that are possible even in well-meaning, brilliant people. There is one other major aspect of that narrative I haven&#x27;t done proper justice to yet: it&#x27;s a tantalizing snapshot of those thriving and tormented lands during an interesting period of their history.</p>
<p>Some of the places surveyed by the <em>Beagle</em> had been wild, near-uninhabited lands until very recently (roughly 3 years for the Galapagos, mere decades for New Zealand and Australia). Others had been colonies for up to three centuries, but were in periods of turbulent political change: Brazil had just become an independent empire, while Argentina was a maelstrom of conquest, persecutions, and civil wars unfolding under the British crew&#x27;s very eyes. Half-unwittingly, Darwin immortalized a fast-transforming world—customs, political situations, and ways of life that were both new and just about to vanish into mostly-unwritten history.</p>
<p>While reading the <em>Voyage</em>, I was bewitched by these testimonies. It feels like catching vivid but fragmentary glimpses of the past, as if through a shaky old telescope. In this post, I&#x27;ll share with you several excerpts to whet your appetite, but I can hope to do little more than that. Darwin&#x27;s own descriptions are necessarily a sparse and reductive view into those cultures and times, and my selection reduces even more. This won&#x27;t give you anything near a &quot;full picture&quot; of life in the Southern Hemisphere in the 1830&#x27;s. But, if these glimpses interest you, do read the whole book! You might find yourself, like me, wanting to read even more sources about that historical period.</p>
<h2>Argentina: Life on the Plains</h2>
<p>The <em>Beagle</em> remained in Argentina for well over one year in total, and during that time Darwin was always roaming around. His scientific studies led him to travel long distances inland. Sometimes he was away from FitzRoy&#x27;s ship for as long as two months straight. In August 1833, for example, the <em>Beagle</em> dropped him off at Río Negro and he traveled on horseback through the Argentinian plains with some gauchos (basically cowboys), through Bahía Blanca, across the Pampas to Buenos Aires, surveying rivers—he found several important fossils there—and collecting specimens until October that year. During that time, he often followed the ways of the gauchos, something the wealthy young Brit didn&#x27;t dislike at all.</p>
<blockquote>
<p>This was the first night which I passed under the open sky, with the gear of the recado [saddle] for my bed. There is high enjoyment in the independence of the Gaucho life—to be able at any moment to pull up your horse, and say, “Here we will pass the night.” The deathlike stillness of the plain, the dogs keeping watch, the gipsy-group of Gauchos making their beds round the fire, have left in my mind a strongly-marked picture of this first night, which will never be forgotten.</p>
<p>— Voyage of the Beagle (all quotes below from the same book unless otherwise stated)</p>
</blockquote>
<p>Whenever possible, though, they would seek shelter in the local <em>estancias</em>, large cattle and horse ranches typical of the Argentine plains. Inevitably, he learned a lot about the peculiar ways of life of the locals.</p>
<blockquote>
<p>I was confined for these two days to my bed by a headache. A good-natured old woman, who attended me, wished me to try many odd remedies. A common practice is, to bind an orange-leaf or a bit of black plaster to each temple: and a still more general plan is, to split a bean into halves, moisten them, and place one on each temple, where they will easily adhere. It is not thought proper ever to remove the beans or plaster, but to allow them to drop off, and sometimes, if a man, with patches on his head, is asked, what is the matter? he will answer, &quot;I had a headache the day before yesterday.&quot;</p>
</blockquote>
<p>The naturalist wrote about customs like that frequently and vividly.</p>
<blockquote>
<p>Many of the remedies used by the people of the country are ludicrously strange, but too disgusting to be mentioned. One of the least nasty is to kill and cut open two puppies and bind them on each side of a broken limb. Little hairless dogs are in great request to sleep at the feet of invalids.</p>
</blockquote>
<p>It sounds like the population density in those regions was quite low, and traveling through them wasn&#x27;t easy—especially when transporting something with legs:</p>
<blockquote>
<p>It is very difficult to drive animals across the plains; for if in the night a puma, or even a fox, approaches, nothing can prevent the horses dispersing in every direction; and a storm will have the same effect. A short time since, an officer left Buenos Ayres with five hundred horses, and when he arrived at the army he had under twenty.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-witness/gaucho.webp" alt="Sepia photograph of a bearded gaucho in traditional dress: wide-brimmed hat, poncho draped over shoulders, striped chiripá, and leather boots, with a decorative coin-studded belt and knife at his waist"/><figcaption>A gaucho with his trusty knife, 1868 (United States Library of Congress&#x27;s Prints and Photographs division)</figcaption></figure>
<p>Darwin had great respect for the gauchos, the true masters of that land, and for their skills with the lazo and bolas. Even when accounting for their roughness and laziness:</p>
<blockquote>
<p>The Gauchos, or countrymen, are very superior to those who reside in the towns. The Gaucho is invariably most obliging, polite, and hospitable: I did not meet with even one instance of rudeness or inhospitality. He is modest, both respecting himself and country, but at the same time a spirited, bold fellow. On the other hand, many robberies are committed, and there is much bloodshed: the habit of constantly wearing the knife is the chief cause of the latter. It is lamentable to hear how many lives are lost in trifling quarrels. In fighting, each party tries to mark the face of his adversary by slashing his nose or eyes; as is often attested by deep and horrid-looking scars. Robberies are a natural consequence of universal gambling, much drinking, and extreme indolence. At Mercedes I asked two men why they did not work. One gravely said the days were too long; the other that he was too poor. The number of horses and the profusion of food are the destruction of all industry.</p>
<p>— From Darwin&#x27;s Diary</p>
</blockquote>
<h2>General Rosas: Genocidal Crusades</h2>
<p><em>Content warning: this section contains some gruesome bits.</em></p>
<p>Argentinian history has never been tranquil, but the period witnessed by Darwin was as violent as they get. In his 1833 excursion to Bahia Blanca, he met the troops of the infamous <a href="https://en.wikipedia.org/wiki/Juan_Manuel_de_Rosas" rel="nofollow noopener noreferrer" target="_blank">General Juan Manuel de Rosas</a> in the act of exterminating all indigenous tribes daring enough to try defending their ancestral territories. Rosas was then already a prominent political and military figure, having served as Governor of the Buenos Aires province until the previous year. Out of office, he had given himself the mission of &quot;clearing the lands&quot; from the &quot;problem&quot; of the &quot;Indians&quot;. If successful, he would not only extend the reach of the Argentinian government, but also build the loyal following and reputation he needed to regain a position of power.</p>
<p>Rosas showed kindness toward Darwin, promptly granting him the travel permits he needed. Perhaps a bit naïve in his youth, the Englishman was swayed by the General&#x27;s charisma, noting that he was very fair in his application of the law, and &quot;also a perfect horseman&quot;.</p>
<blockquote>
<p>By these means, and by conforming to the dress and habits of the Gauchos, he has obtained an unbounded popularity in the country, and in consequence a despotic power. I was assured by an English merchant, that a man who had murdered another, when arrested and questioned concerning his motive, answered, &quot;He spoke disrespectfully of General Rosas, so I killed him.&quot; At the end of a week the murderer was at liberty. This doubtless was the act of the general&#x27;s party, and not of the general himself.</p>
</blockquote>
<p>At the time, Darwin believed Rosas to be &quot;<em>a man of an extraordinary character, and has a most predominant influence in the country, which it seems he will use to its prosperity and advancement.</em>&quot; By the time of his book&#x27;s 1845 edition, however, he had to add a footnote to this sentence: &quot;<em>This prophecy has turned out entirely and miserably wrong.</em>&quot;</p>
<p>In fact, Rosas returned to power two years after meeting Darwin, and remained a totalitarian tyrant for almost two brutal decades. It was definitely a dark period of the country&#x27;s history.</p>
<p>But let&#x27;s return to 1833&#x27;s genocidal campaign. Don Carlos was not as impressed with the troops as he was with their commander.</p>
<blockquote>
<p>They passed the night here; and it was impossible to conceive anything more wild and savage than the scene of their bivouac. Some drank till they were intoxicated; others swallowed the steaming blood of the cattle slaughtered for their suppers, and then, being sick from drunkenness, they cast it up again, and were besmeared with filth and gore.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-witness/rosas.webp" alt="Painting of three uniformed officers on horseback surveying a battlefield strewn with fallen indigenous bodies, one pointing into the distance across rolling hills"/><figcaption>General Rosas&#x27; expedition against the Indians (Calixto Tagliabúe)</figcaption></figure>
<p>Considering what their job was, this kind of behavior is not surprising. Whenever Rosas&#x27; warriors were able to ambush an enemy camp, &quot;<em>the Indians, men, women, and children ... were nearly all taken or killed, for the soldiers sabre every man.</em>&quot; In open battle, no mercy was contemplated:</p>
<blockquote>
<p>One dying Indian seized with his teeth the thumb of his adversary, and allowed his own eye to be forced out sooner than relinquish his hold. Another, who was wounded, feigned death, keeping a knife ready to strike one more fatal blow. My informer said, when he was pursuing an Indian, the man cried out for mercy, at the same time that he was covertly loosing the bolas from his waist, meaning to whirl it round his head and so strike his pursuer. &quot;I however struck him with my sabre to the ground, and then got off my horse, and cut his throat with my knife.&quot; This is a dark picture; but how much more shocking is the unquestionable fact, that all the women who appear above twenty years old are massacred in cold blood! When I exclaimed that this appeared rather inhuman, he answered, &quot;Why, what can be done? they breed so!&quot;</p>
</blockquote>
<p>In some cases, local tribes chose to ally with him and sold him information about the other tribes, and sometimes even fought them directly.</p>
<blockquote>
<p>The general, however, ... thinking that his friends may in a future day become his enemies, always places them in the front ranks, so that their numbers may be thinned.</p>
</blockquote>
<p>Eventually, the <em>Beagle</em> left the shores of Argentina, and that was all Darwin could witness. Fortunately, he notes in his book that &quot;<em>since leaving South America we have heard that this war of extermination completely failed.</em>&quot;</p>
<h2>Falklands: Atlantic Wild West</h2>
<p>The <em>Beagle</em> visited the Falkland Islands twice over its 5-year journey, finding them in a very transitory state. Darwin summarizes in one paragraph the archipelago&#x27;s whole history:</p>
<blockquote>
<p>After the possession of these miserable islands had been contested by France, Spain, and England, they were left uninhabited. The government of Buenos Aires then sold them to a private individual, but likewise used them, as old Spain had done before, for a penal settlement. England claimed her right and seized them. The Englishman who was left in charge of the flag was consequently murdered. A British officer was next sent, unsupported by any power: and when we arrived, we found him in charge of a population, of which rather more than half were runaway rebels and murderers.</p>
</blockquote>
<p>Captain FitzRoy was especially disappointed by the sorry state of the place. In his own volume of the <em>Voyage</em>, he writes:</p>
<blockquote>
<p>A few half-ruined stone cottages; some straggling huts built of turf; two or three stove boats; some broken ground where gardens had been, and where a few cabbages or potatoes still grew; some sheep and goats; a few long-legged pigs; some horses and cows; with here and there a miserable-looking human being – were scattered over the fore-ground of a view which had dark clouds, ragged-topped hills, and a wild waste of moorland to fill up the distance.</p>
</blockquote>
<p>The &quot;unsupported&quot; British officer had taken with him a handful of gauchos from the Argentinian mainland, as his only armed forces. But the gauchos didn&#x27;t like it there. Again, FitzRoy:</p>
<blockquote>
<p>The gauchos wished to leave the place, and return to the Plata, but as they were the only useful labourers on the islands, in fact, the only people on whom any dependance could be placed for a regular supply of fresh beef, I interested myself as much as possible to induce them to remain, and with partial success, for seven staid out of twelve.</p>
</blockquote>
<p>Only months after the <em>Beagle</em>&#x27;s first visit, an eight-person uprising killed several senior members of the settlement, thrusting the place into (teacup) chaos once again.</p>
<p>Apart from some fossils, a now-extinct species of fox, and some geological observations, there wasn&#x27;t much of interest for Darwin in that &quot;desolate&quot; place. I found this story that he heard from a local tantalizing, though:</p>
<blockquote>
<p>At the Falkland Islands, when the Spaniards murdered some of their own countrymen and all the Englishmen, a young friendly Spaniard was running away, when a great tall man, by name Luciano, came at full gallop after him, shouting to him to stop, and saying that he only wanted to speak to him. Just as the Spaniard was on the point of reaching the boat, Luciano threw the balls [bolas]: they struck him on the legs with such a jerk, as to throw him down and to render him for some time insensible. The man, after Luciano had had his talk, was allowed to escape.</p>
</blockquote>
<p>Apart from showing the gaucho bolas in action, this passage isn&#x27;t that historically important. But the curiosity is killing me: what did Luciano have to say so urgently to the Spaniard? Darwin never tells us, and this mystery seems to be forever lost to oblivion.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>Chile and Peru: Forms of Peace</h2>
<p>After spending a couple of years on the tempestuous eastern coast of South America, and after delivering the Fuegians to their home in Tierra del Fuego (see <a href="/posts/darwin-the-man-of-his-times">Ep. 3</a>), <em>HMS Beagle</em> moved on to the more peaceful western coast, moving up and down the islands and cities of Chile and Peru. I&#x27;ve already quoted many passages from the <em>Voyage</em> about the natural wonders of the place, but his narrative also offers glimpses of the day-to-day life in those places.</p>
<p>These were not wealthy people—at least not the non-Europeans. Commerce usually consisted of barter:</p>
<blockquote>
<p>At Caylen [now Chaullin, part of the Chiloé Archipelago], the most southern island, the sailors bought with a stick of tobacco, of the value of three-halfpence, two fowls, one of which, the Indian stated, had skin between its toes, and turned out to be a fine duck; and with some cotton handkerchiefs, worth three shillings, three sheep and a large bunch of onions were procured.</p>
</blockquote>
<p>Darwin also noted the clever practices of the local shepherds:</p>
<blockquote>
<p>My companions were Mariano Gonzales, who had formerly accompanied me in Chile, and an &quot;arriero,&quot; with his ten mules and a &quot;madrina.&quot; The madrina (or godmother) is a most important personage: she is an old steady mare, with a little bell round her neck; and wherever she goes, the mules, like good children, follow her. The affection of these animals for their madrinas saves infinite trouble. If several large troops are turned into one field to graze, in the morning the muleteers have only to lead the madrinas a little apart, and tinkle their bells; and although there may be two or three hundred together, each mule immediately knows the bell of its own madrina, and comes to her. It is nearly impossible to lose an old mule; for if detained for several hours by force, she will, by the power of smell, like a dog, track out her companions, or rather the madrina, for, according to the muleteer, she is the chief object of affection.</p>
</blockquote>
<p>The technique of using a mare to control mules was common across the world, but this was my first time reading about it. Mule psychology!</p>
<p>I said that the place was peaceful, but I meant that only in a relative sense: we&#x27;ll see a large-scale exception at the end of this post, and the other exception was crime and political instability. This tragicomic episode from Chapter XVI should give you an idea of the atmosphere in 1835 Peru:</p>
<blockquote>
<p>A short time before, three French carpenters had broken open, during the same night, the two churches [in the town of Iquique, now part of Chile], and stolen all the plate: one of the robbers, however, subsequently confessed, and the plate was recovered. The convicts were sent to Arequipa, which though the capital of this province, is two hundred leagues distant, the government there thought it a pity to punish such useful workmen who could make all sorts of furniture; and accordingly liberated them. Things being in this state, the churches were again broken open, but this time the plate was not recovered. The inhabitants became dreadfully enraged, and declaring that none but heretics would thus &quot;eat God Almighty,&quot; proceeded to torture some Englishmen, with the intention of afterwards shooting them. At last the authorities interfered, and peace was established.</p>
</blockquote>
<p>Fortunately a happy ending there.</p>
<h2>Pacific Life</h2>
<p>The last major phase of the British voyage around the globe was the year or so spent in the islands of the Pacific and Oceania. Here, too, Darwin gives us far too many fascinating peeks into local customs and living conditions to cover fully in a blog post, but here is a medley.</p>
<p>The Galapagos islands had only barely been reached by mainland colonizers. Just three years before the <em>Beagle</em>&#x27;s visit—<em>after</em> the ship had set sail from England!—Ecuador had claimed the islands for the first time and made them into a penal colony (a common fate for islands at the time). When FitzRoy&#x27;s crew arrived in 1835, there was almost no European presence, only a handful of prisoners and a few huts. Life on the islands was simple and exploitative.</p>
<blockquote>
<p>The inhabitants, although complaining of poverty, obtain, without much trouble, the means of subsistence. In the woods there are many wild pigs and goats; but the staple article of animal food is supplied by the tortoises. Their numbers have of course been greatly reduced in this island, but the people yet count on two days&#x27; hunting giving them food for the rest of the week. It is said that formerly single vessels have taken away as many as seven hundred, and that the ship&#x27;s company of a frigate some years since brought down in one day two hundred tortoises to the beach.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-witness/galapagos.webp" alt="1841 Admiralty nautical chart of the Galapagos Islands showing the archipelago with detailed topographical shading of volcanic terrain, compass rose on the left, and inset maps of specific bays and coves around the border"/><figcaption>Admiralty Chart No 1375 Pacific Ocean - Galapagos Islands surveyed by Captn. Robt. Fitz Roy R.N. and the officers of H.M.S. Beagle 1836. Published 1841</figcaption></figure>
<p>The settlers (there were no natives) killed many more tortoises than would have been sustainable, which eventually led to the extinction of some of their species. They didn&#x27;t always kill outright, though not out of any conservationist spirit:</p>
<blockquote>
<p>The flesh of this animal is largely employed, both fresh and salted; and a beautifully clear oil is prepared from the fat. When a tortoise is caught, the man makes a slit in the skin near its tail, so as to see inside its body, whether the fat under the dorsal plate is thick. If it is not, the animal is liberated; and it is said to recover soon from this strange operation.</p>
</blockquote>
<p>Tahiti, on the other hand, was rather densely populated, and its natives impressed Darwin more than any other group. Unfortunately, missionaries had been active in the islands for decades already, and had been largely successful in &quot;civilizing&quot; the Tahitians: they were Christian now, and quite familiar with the English language. Much of what the naturalist praised in them was about this high level of development, rather than their original ways (see <a href="/posts/darwin-the-man-of-his-times">Ep. 3</a> for more). Still, he described the decorations of the Tahitians&#x27; bodies in detail:</p>
<blockquote>
<p>Most of the men are tattooed, and the ornaments follow the curvature of the body so gracefully, that they have a very elegant effect. One common pattern, varying in its details, is somewhat like the crown of a palm-tree. It springs from the central line of the back, and gracefully curls round both sides. The simile may be a fanciful one, but I thought the body of a man thus ornamented was like the trunk of a noble tree embraced by a delicate creeper.</p>
<p>Many of the elder people had their feet covered with small figures, so placed as to resemble a sock. This fashion, however, is partly gone by, and has been succeeded by others. Here, although fashion is far from immutable, every one must abide by that prevailing in his youth. An old man has thus his age for ever stamped on his body, and he cannot assume the airs of a young dandy. The women are tattooed in the same manner as the men, and very commonly on their fingers. One unbecoming fashion is now almost universal: namely, shaving the hair from the upper part of the head, in a circular form, so as to leave only an outer ring. The missionaries have tried to persuade the people to change this habit; but it is the fashion, and that is a sufficient answer at Tahiti, as well as at Paris.</p>
</blockquote>
<p>We&#x27;ve already seen that Darwin did not much care for New Zealand: the area he saw—the Bay of Islands—was too plain, and the settlers too enterprising for his tastes. The aborigines also seemed less attractive and interesting to him compared to his beloved Tahitians. Nevertheless, he did manage to observe some traditional customs in the short nine days he was there. For example, I love this aboriginal greeting (we&#x27;ll see if I can get it to catch on in Japan):</p>
<blockquote>
<p>On coming near one of the huts I was much amused by seeing in due form the ceremony of rubbing, or, as it ought to be called, pressing noses. The women, on our first approach, began uttering something in a most dolorous voice; they then squatted themselves down and held up their faces; my companion standing over them, one after another, placed the bridge of his nose at right angles to theirs, and commenced pressing. This lasted rather longer than a cordial shake of the hand with us, and as we vary the force of the grasp of the hand in shaking, so do they in pressing.</p>
</blockquote>
<p>Darwin also provides an important description of the funerary rites of those tribes.</p>
<blockquote>
<p>The daughter of a chief, who was still a heathen, had died there five days before. The hovel in which she had expired had been burnt to the ground: her body, being enclosed between two small canoes, was placed upright on the ground, and protected by an enclosure bearing wooden images of their gods, and the whole was painted bright red, so as to be conspicuous from afar. Her gown was fastened to the coffin, and her hair being cut off was cast at its foot. The relatives of the family had torn the flesh of their arms, bodies, and faces, so that they were covered with clotted blood; and the old women looked most filthy, disgusting objects. On the following day some of the officers visited this place, and found the women still howling and cutting themselves.</p>
</blockquote>
<p>The final stop in that corner of the world was Australia. Besides all the comments I&#x27;ve already shared about slavery and land theft—delivered with that same contradictory attitude, at once appreciative and condescending—he gives a description of an aboriginal festival:</p>
<blockquote>
<p>When both tribes mingled in the dance, the ground trembled with the heaviness of their steps, and the air resounded with their wild cries. Every one appeared in high spirits, and the group of nearly naked figures, viewed by the light of the blazing fires, all moving in hideous harmony, formed a perfect display of a festival amongst the lowest barbarians.</p>
</blockquote>
<h2>Epochal Destruction</h2>
<p>Darwin was no anthropologist (the discipline itself would not be properly established for another half a century), but I&#x27;m sure that his reports of the local peoples are valuable material for the real anthropologists that came after. By far the most impressive historical event he witnessed and recorded, though, was of a very different nature.</p>
<p>Without planning to, he provided a dreadful foreshadowing of what would happen later. It was February 1835, while the vessel explored the Chilean coast.</p>
<blockquote>
<p>On the night of the 19th the volcano of Osorno was in action. At midnight the sentry observed something like a large star, which gradually increased in size till about three o&#x27;clock, when it presented a very magnificent spectacle. By the aid of a glass, dark objects, in constant succession, were seen, in the midst of a great glare of red light, to be thrown up and to fall down. The light was sufficient to cast on the water a long bright reflection. Large masses of molten matter seem very commonly to be cast out of the craters in this part of the Cordillera. I was assured that when the Corcovado is in eruption, great masses are projected upwards and are seen to burst in the air, assuming many fantastical forms, such as trees: their size must be immense, for they can be distinguished from the high land behind S. Carlos, which is no less than ninety-three miles from the Corcovado. In the morning the volcano became tranquil.
...
I was surprised at hearing afterwards that Aconcagua in Chile, 480 miles northwards, was in action on the same night; and still more surprised to hear that the great eruption of Coseguina (2700 miles north of Aconcagua), accompanied by an earthquake felt over a 1000 miles, also occurred within six hours of this same time. This coincidence is the more remarkable, as Coseguina had been dormant for twenty-six years; and Aconcagua most rarely shows any signs of action. It is difficult even to conjecture whether this coincidence was accidental, or shows some subterranean connection.</p>
</blockquote>
<p>There was no accepted explanation for volcanic eruptions at the time, but Darwin couldn&#x27;t help but notice the coincidence and suspect that something of an enormous scale was involved. No one, however, anticipated what would happen the next day.</p>
<p>On the morning of February 20th, &quot;<em>the most severe earthquake experienced by the oldest inhabitant</em>&quot; hit the Chilean coast, centered near the city of Concepcion. The shaking was felt over an area spanning almost 700 kilometers of the continent, and reached Darwin in Valdivia.</p>
<blockquote>
<p>I happened to be on shore, and was lying down in the wood to rest myself. It came on suddenly, and lasted two minutes, but the time appeared much longer. The rocking of the ground was very sensible. ... A bad earthquake at once destroys our oldest associations: the earth, the very emblem of solidity, has moved beneath our feet like a thin crust over a fluid;—one second of time has created in the mind a strange idea of insecurity, which hours of reflection would not have produced. In the forest, as a breeze moved the trees, I felt only the earth tremble, but saw no other effect.</p>
</blockquote>
<p>(Side note: he undoubtedly used that &quot;like a thin crust over a fluid&quot; as a metaphor, without knowing how scientifically accurate he was!)</p>
<p>The <em>Beagle</em> was not far from the epicenter of the earthquake, and in a couple of weeks they reached Concepcion, the area of maximum devastation. A local who welcomed them there recounted</p>
<blockquote>
<p>&quot;That not a house in Concepcion or Talcahuano (the port) was standing; that seventy villages were destroyed; and that a great wave had almost washed away the ruins of Talcahuano.&quot; Of this latter statement I soon saw abundant proofs—the whole coast being strewed over with timber and furniture as if a thousand ships had been wrecked. Besides chairs, tables, book-shelves, etc., in great numbers, there were several roofs of cottages, which had been transported almost whole. The storehouses at Talcahuano had been burst open, and great bags of cotton, yerba, and other valuable merchandise were scattered on the shore.</p>
</blockquote>
<p>Darwin notes the geological effects of the cataclysm, how &quot;<em>the ground in many parts was fissured</em>&quot; and &quot;<em>the superficial parts of some narrow ridges were as completely shivered as if they had been blasted by gunpowder.</em>&quot; But the locals had more intense stories to tell.</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-witness/concepcion.webp" alt="Engraving of the ruined Concepción Cathedral after the 1835 earthquake, its stone walls partially collapsed with rubble below, makeshift wooden shelters lining the muddy street in the foreground, and a solitary figure with a dog"/><figcaption>Remains of the Cathedral of Concepción after the earthquake, engraved by John Clements Wickham, first lieutenant on HMS Beagle</figcaption></figure>
<blockquote>
<p>The first shock was very sudden. The mayor-domo at Quiriquina told me, that the first notice he received of it, was finding both the horse he rode and himself, rolling together on the ground. Rising up, he was again thrown down. He also told me that some cows which were standing on the steep side of the island were rolled into the sea.</p>
</blockquote>
<blockquote>
<p>As shock succeeded shock, at the interval of a few minutes, no one dared approach the shattered ruins, and no one knew whether his dearest friends and relations were not perishing from the want of help. Those who had saved any property were obliged to keep a constant watch, for thieves prowled about, and at each little trembling of the ground, with one hand they beat their breasts and cried &quot;misericordia!&quot; and then with the other filched what they could from the ruins. The thatched roofs fell over the fires, and flames burst forth in all parts. Hundreds knew themselves ruined, and few had the means of providing food for the day.</p>
</blockquote>
<p>Ever the science communicator, Darwin uses familiar terms in the <em>Voyage</em> to convey the horror and destruction involved:</p>
<blockquote>
<p>Earthquakes alone are sufficient to destroy the prosperity of any country. If beneath England the now inert subterranean forces should exert those powers ..., how completely would the entire condition of the country be changed! What would become of the lofty houses, thickly packed cities, great manufactories, the beautiful public and private edifices? ... England would at once be bankrupt; all papers, records, and accounts would from that moment be lost. ... The hand of violence and rapine would remain uncontrolled. In every large town famine would go forth, pestilence and death following in its train.</p>
</blockquote>
<p>He also compares the area afflicted in those few days to places in Europe, which is impressive even today:</p>
<blockquote>
<p>It will give a better idea of the scale of these phenomena, if ... we suppose them to have taken place at corresponding distances in Europe:—then would the land from the North Sea to the Mediterranean have been violently shaken, and at the same instant of time a large tract of the eastern coast of England would have been permanently elevated, together with some outlying islands,—a train of volcanos on the coast of Holland would have burst forth in action, and an eruption taken place at the bottom of the sea, near the northern extremity of Ireland—and lastly, the ancient vents of Auvergne, Cantal, and Mont d&#x27;Or would each have sent up to the sky a dark column of smoke, and have long remained in fierce action. Two years and three-quarters afterwards, France, from its centre to the English Channel, would have been again desolated by an earthquake, and an island permanently upraised in the Mediterranean.</p>
</blockquote>
<p>Today, Darwin&#x27;s direct report of the Concepcion earthquake of 1835 <a href="https://doi.org/10.1785/0220120072" rel="nofollow noopener noreferrer" target="_blank">is considered</a> highly valuable both as a historical document and as scientific data for seismological studies. The <em>Beagle</em> was well equipped to make accurate observations of the local geography, and the crew measured exactly how much the land had been uplifted (roughly 3 meters on the island of Santa Maria). Their observations became useful material for an early understanding of tsunamis, and helped improve predictions of future earthquakes as recently as 2009.</p>
<p>The Concepcion area sees major earthquakes like that only once every century or so—previous occurrences were in 1570, 1657, 1751, and the next one would hit only in 1939. It was an incredible coincidence that such a competent and prepared crew as FitzRoy&#x27;s, and such an articulate chronicler as Charles Darwin, were in the right place at the right time to preserve that memory for posterity. The same is true for all those other testimonies of peoples and places—treasures they kindly brought back from their great circumnavigation of the world. ●</p>

<p><em>Next, the last episode: <a href="/posts/darwin-el-naturalista">In His Own Words Ep. 5 - Darwin El Naturalista</a></em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/darwin-the-witness/osorno.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Darwin the Man of His Times]]></title>
            <link>https://aethermug.com/posts/darwin-the-man-of-his-times?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/darwin-the-man-of-his-times</guid>
            <pubDate>Thu, 01 Jan 2026 11:21:43 GMT</pubDate>
            <description><![CDATA[In His Own Words (Episode 3)]]></description>
            <content:encoded><![CDATA[<p><em>This is the third episode in the series of curated quote collections &quot;Darwin in His Own Words&quot;.</em></p>
<ol>
<li><em><a href="/posts/darwin-the-fun-loving-young-fellow">Darwin the Fun-Loving Young Fellow</a></em></li>
<li><em><a href="/posts/darwin-the-voyager">Darwin the Voyager</a></em></li>
<li><em>Darwin the Man of His Times</em></li>
<li><em><a href="/posts/darwin-the-witness">Darwin the Witness</a></em></li>
<li><em><a href="/posts/darwin-el-naturalista">Darwin El Naturalista</a></em></li>
</ol>

<p>If you&#x27;ve read the previous two posts, you might have formed the impression that Charles Darwin&#x27;s voyage aboard the <em>HMS Beagle</em> was all adventure and levity. In this episode, I want to temper that impression a little. After all, if you want a three-dimensional picture of someone, you need some dark spots to create contrast.</p>
<p>TL;DR: Darwin in many ways represented the best English society of the 1830s had to offer—smart, good-hearted, well-educated, and a fervent science-lover—but he was very much <em>not</em> free of the worse features of that same culture.</p>
<p>The period of the late 18th and early 19th century was a time of deep contradictions—that is, only a little more so than most other times. The Enlightenment had brought forth the birth of modern science and some new moral values that are still foundational today. Political upheavals were paving the way for the spread of democracy. Technology was accelerating fast and transforming society. All of these things were good at a high level, but terribly messy if you look only a little bit closer. I&#x27;m not going to write a history lesson here. Let&#x27;s just say that, in that period, people had begun to see and tackle the deepest systemic problems of their times, but they were not yet ready to fully grasp the reach of those problems, nor to commit to complete solutions.</p>
<p>Two prominent examples were the American Declaration of Independence (1776) and the French Declaration of the Rights of Man (1789): fantastic intellectual advancements implemented rather poorly and contradictorily. Both proclaimed the equality of all people, but the former was written by slaveholders, the latter denied poor people citizenship, and both failed to give women equal rights.</p>
<p>Darwin&#x27;s voyage happened half a century after that, but the situation wasn&#x27;t much better then. From today&#x27;s perspective, he had a strange mixture of novel insights and antiquated thinking, and this showed up in almost everything he said. For example, he was ahead of his time when it came to (what would later be called) the field of ecology. Inspired by his idol and precursor Alexander von Humboldt, he was able to see Nature as a dynamic, deeply interconnected web of delicate relations—something you can&#x27;t say of most of his contemporaries.</p>
<p>He understood the risks of invasive species, and saw natural resources as finite and endangered by human interference.</p>
<blockquote>
<p>A few years since this country [Australia] abounded with wild animals; but now the emu is banished to a long distance, and the kangaroo is become scarce; to both the English greyhound has been highly destructive. It may be long before these animals are altogether exterminated, but their doom is fixed.</p>
<p>— Voyage of the Beagle (all quotes below from the same book unless otherwise stated)</p>
</blockquote>
<p>(Fortunately he was wrong on both counts, but many other Australian species <a href="https://wwf.org.au/blogs/10-extinct-animals-in-australia-and-why-we-must-act-now/" rel="nofollow noopener noreferrer" target="_blank">weren&#x27;t so lucky</a>.)</p>
<p>At the same time, he was disarmingly passive about all that.</p>
<blockquote>
<p>It is said that the common Norway rat, in the short space of two years, annihilated in this northern end of the island the New Zealand species. In many places I noticed several sorts of weeds, which, like the rats, I was forced to own as countrymen. A leek has overrun whole districts, and will prove very troublesome, but it was imported as a favour by a French vessel. The common dock is also widely disseminated, and will, I fear, for ever remain a proof of the rascality of an Englishman who sold the seeds for those of the tobacco plant.</p>
</blockquote>
<p>The &quot;fear&quot; and &quot;doom&quot; he wrote about were apparently not enough to make him lift a finger to protect those ecologies. Darwin showed concern for endangered species because he valued their variety and beauty, but he was never a conservationist.</p>
<p>I found the same kind of paradox all over the pages of his <em>Voyage</em>, often in much worse forms. This is interesting, because it reveals complexities in today&#x27;s value systems that I hadn&#x27;t even considered before. After reading that book I feel like I understand a bit better why it took us centuries to arrive at today&#x27;s still-imperfect ideas about human rights, environmentalism, and morality in general.</p>
<p>Before diving deeper into the murkier side of this, let&#x27;s dip our toes. One thing I thought was amusing about Charles Darwin&#x27;s attitude as a writer is that he felt authorized to judge everything rather peremptorily. The things he liked, like the jungles and mountains of South America, he praised profusely (see Episode 2). Other things, he condemned with an air of objectivity.</p>
<p>New Zealand, for example, did <em>not</em> impress him:</p>
<blockquote>
<p>In the afternoon we stood out of the Bay of Islands, on our course to Sydney. I believe we were all glad to leave New Zealand. It is not a pleasant place.</p>
</blockquote>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Australia fared only marginally better:</p>
<blockquote>
<p>Farewell, Australia! you are a rising child, and doubtless some day will reign a great princess in the South; but you are too great and ambitious for affection, yet not great enough for respect. I leave your shores without sorrow or regret.</p>
</blockquote>
<p>This judgmental stance was very conscious. In the following passage, he pulls off a funny contortion: at once humble—he admits his own fallibility—and self-entitled to judge whole populations:</p>
<blockquote>
<p>From the varying accounts which I had read before reaching these [Polynesian] islands, I was very anxious to form, from my own observation, a judgment of their moral state,—although such judgment would necessarily be very imperfect. First impressions at all times very much depend on one&#x27;s previously acquired ideas.</p>
</blockquote>
<p>I cringed while reading his many comparisons between people of different regions, as if they were species of finches:</p>
<blockquote>
<p>Looking at the New Zealander, one naturally compares him with the Tahitian; both belonging to the same family of mankind. The comparison, however, tells heavily against the New Zealander. He may, perhaps be superior in energy, but in every other respect his character is of a much lower order. One glance at their respective expressions brings conviction to the mind that one is a savage, the other a civilised man.</p>
</blockquote>
<p>All of this reeks of the entrenched (pre-) Victorian British worldview, where it is natural and appropriate for the &quot;more civilized&quot; to look down on the &quot;savages&quot; of &quot;lower order&quot;, as if a moral ladder like that were as clear as daylight.</p>
<p>You should be warmed up enough now. It&#x27;s time to bring forth the heavy artillery.</p>
<h2>The Good: Abolitionism</h2>
<p>Darwin came from a family with a tradition of anti-slavery sentiment. Both his grandfathers had been prominent figures in the British movement for the abolition of slavery, and Charles himself was quite outspoken about it. You can find statements like this peppered all over the <em>Voyage</em>:</p>
<blockquote>
<p>On the 19th of August we finally left the shores of Brazil. I thank God, I shall never again visit a slave-country. To this day, if I hear a distant scream, it recalls with painful vividness my feelings, when passing a house near Pernambuco, I heard the most pitiable moans, and could not but suspect that some poor slave was being tortured, yet knew that I was as powerless as a child even to remonstrate.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-man-of-his-times/brazilian_family.webp" alt="19th century illustration of a wealthy Brazilian family dining at a table while enslaved Black servants attend to them, one fanning with a large feather fan and enslaved children sitting on the floor"/><figcaption>A Brazilian family in Rio de Janeiro. (Jean-Baptiste Debret, Public domain, via Wikimedia Commons)</figcaption></figure>
<p>Another time, in Australia:</p>
<blockquote>
<p>There are many serious drawbacks to the comforts of a [Sydney] family, the chief of which, perhaps, is being surrounded by convict servants. How thoroughly odious to every feeling, to be waited on by a man who the day before, perhaps, was flogged, from your representation, for some trifling misdemeanour.</p>
</blockquote>
<p>The topic of slavery was the cause of occasional rifts and disputes between Darwin and the captain of the <em>Beagle</em>, Robert FitzRoy—who were otherwise good friends. In contrast to Darwin&#x27;s liberalism, FitzRoy was a conservative Tory, and saw slavery as a necessary evil. In their first serious quarrel, FitzRoy</p>
<blockquote>
<p>defended and praised slavery, which I abominated, and told me that he had just visited a great slave-owner, who had called up many of his slaves and asked them... whether they wished to be free, and all answered &quot;No.&quot; I then asked him, perhaps with a sneer, whether he thought that the answer of slaves in the presence of their master was worth anything? This made him excessively angry, and he said that as I doubted his word we could not live any longer together. I thought that I should have been compelled to leave the ship; ... But after a few hours Fitz-Roy showed his usual magnanimity by sending an officer to me with an apology and a request that I would continue to live with him.</p>
<p>— Autobiography</p>
</blockquote>
<p>During his <em>Beagle</em> voyage, Darwin collected a huge number of animal specimens. In order to preserve them for later study, he stuffed them and periodically shipped them back to England, where his colleagues could examine and store them. He had learned the skills to quickly and accurately stuff animals in his teenage years, when he took private lessons in bird taxidermy from a man called John Edmonstone, someone he considered a &quot;<em>very pleasant and intelligent man</em>&quot;. Edmonstone was a Black man from Guyana, and had been a slave on a timber plantation from birth until around the age of 27. During this time, a visiting naturalist had taught him the science of preserving bird skins from decay, and this became his profession when he was finally freed in 1817. We can&#x27;t know the details, but one thing is clear: Darwin got the first strong evidence for evolution years after he had returned to England, when an ornithologist re-examined his collection of stuffed finches from the Galapagos islands. In this sense, Edmonstone&#x27;s teachings were instrumental for the development of the theory. I mention this because it shows that Darwin literally put his money where his mouth was, and he didn&#x27;t forget or try to hide it.</p>
<p>Darwin was also dispirited by the way his countrymen were usurping the lands and resources of the indigenous peoples, although his denunciation of this theft was not as strong as his anti-slavery stance. In Australia, for example, he wrote that</p>
<blockquote>
<p>The aborigines are always anxious to borrow the dogs from the farmhouses: the use of them, the offal when an animal is killed, and some milk from the cows, are the peace-offerings of the settlers, who push farther and farther towards the interior. The thoughtless aboriginal, blinded by these trifling advantages, is delighted at the approach of the white man, who seems predestined to inherit the country of his children.</p>
</blockquote>
<p>Here the waters begin to get a little murky. The sentiment in this excerpt is mostly relatable: white men were indeed taking advantage of the locals to take over their territories. My snag is with the term &quot;thoughtless aboriginal&quot;. Had he written &quot;unsuspecting&quot;, for example, it would have been accurate. But &quot;thoughtless&quot; implies that the locals had the necessary information and context to uncover the ploy, and simply failed to do so. I don&#x27;t believe that is true. Together with that last &quot;predestined&quot; sentence—as if there was nothing to be done about the matter—this passage shows a (rather soft version of a) pattern that pops up all over the <em>Voyage</em>. (We&#x27;ll get there.)</p>
<h2>The Bad: Arrogance of the Civilized</h2>
<p>The ownership of slaves was finally banned in most of the British Empire in 1833, while the <em>Beagle</em> was midway through its circumnavigation of the globe. We owe that milestone to the activism and political pressure of people like Darwin&#x27;s family, and for this they deserve our respect and gratitude. It took the courage of people like Darwin to advocate and fight for those causes at a time when so much was at stake (financially) for so many.</p>
<p>This makes it all the more interesting to read the <em>Voyage</em> closely, because—at least for me—it reveals a moral contrast that comes off as very unpleasant today. I probably should have expected this, but I didn&#x27;t. The thing is, all this talk of the right to freedom and appreciating the qualities of all people still came safely embedded in a deeply flawed framing that today we would definitely call racist.</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-man-of-his-times/slaves_resting.webp" alt="Lithograph of enslaved African people at a resting station in Brazil, groups sitting around a fire in the foreground while overseers on horseback watch from a covered structure"/><figcaption>Recently bought slaves in Brazil on their way to the farms of the landowners who bought them. (Johann Moritz Rugendas, Public domain, via Wikimedia Commons)</figcaption></figure>
<p>The implicit assumption in all of Darwin&#x27;s writing goes something like this: rich Westerners are the pinnacle of human achievement, which we call &quot;civilization&quot;, and the populations who are less civilized are inherently inferior. Seen in this context, Darwin&#x27;s progressive stance wasn&#x27;t an assertion of the equality of all people, but a much weaker form: all people have the <em>potential</em> to be equal to the Westerners <em>if</em> they were educated and civilized in the British way. Ignorance was somehow morally degrading. Even when he advocated magnanimity and protection, his stance was condescending.</p>
<p>My pleasure in reading the <em>Voyage</em> is sometimes soured by reading passages like this:</p>
<blockquote>
<p>Landing at midday, we saw a family of pure Indian extraction. ... This party could muster but little Spanish, and talked to each other in their own tongue. It is a pleasant thing to see the aborigines advanced to the same degree of civilisation, however low that may be, which their white conquerors have attained.</p>
</blockquote>
<p>To be clear, this wasn&#x27;t only Darwin&#x27;s attitude. It was the <em>dominant</em> attitude pervading the way of thinking of everyone in Darwin&#x27;s slice of society. As a matter of fact, that mindset was so entrenched that it might have been the main cause of that whole voyage. The story is quite crazy.</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-man-of-his-times/weaving.webp" alt="Pencil sketch of a woman from Chiloé sitting at a traditional floor loom weaving inside a wooden structure"/><figcaption>Woman weaving in Chiloé, an island in Chile. From the sketchbook of Conrad Martens, an artist who was part of Beagle&#x27;s voyage. (CC BY-NC 4.0)</figcaption></figure>
<p>On paper, the <em>Beagle</em>&#x27;s mission was to take a large number of accurate measurements of South America&#x27;s geography and other physical and natural features. In truth, however, the voyage happened because Captain FitzRoy <em>really</em> wanted to go back to Tierra del Fuego. The Lords Commissioners of the Admiralty of the British Navy weren&#x27;t particularly eager to send FitzRoy on a new mission like that. It took a lot of negotiating and cajoling on the captain&#x27;s part, using the wealth and political influence of his rich family to force their hand. He wanted to go to Tierra del Fuego so badly that he went so far as to charter a merchant ship with money from his own pocket. That was a waste of money, though, because soon after that he was finally appointed to command the <em>Beagle</em>, and the voyage happened as it did.</p>
<p>Why was FitzRoy so adamant about returning to that forsaken place at the edge of the world? He wanted to take &quot;his Fuegians&quot;—natives of Tierra del Fuego he had taken to England with him in his previous voyage—back home.</p>
<p>The affair had begun one year earlier, in 1830, when the <em>Beagle</em> was in Patagonia for its first expedition (no Darwin that time). One morning, the ship&#x27;s crew found that one of their whaleboats had been stolen by the indigenous people they called the Fuegians. Hoping to convince them to give the boat back, the Englishmen retaliated by taking hostages from the Fuegians. They captured two young men, a teenage boy, and a 9-year-old girl. Then, when all their attempts to recover the boat failed, FitzRoy decided that he would keep the captive Fuegians anyway. It would be an interesting experiment, he thought, to see how much they could be &quot;civilized&quot; back in Britain.</p>
<p>In plain words, the captain kidnapped four innocent kids and took them with him to the other side of the world for three years. In his mind, though, he was doing them a favor. In London, they were taught the English language and manners, paraded in front of the King, and converted to Christianity. The last part of the &quot;experiment&quot; was the most important: once properly civilized (reasoned FitzRoy), the Fuegians should be taken back to their original tribes, where they could spread the magic and make life better for everyone. By teaching English and the &quot;plainer truths of Christianity&quot; to their peers, the repatriated Fuegians would not only contribute to advancing their way of life, but would make it easier for future voyagers to communicate with and understand the locals. What could possibly go wrong?</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-man-of-his-times/fuegian.webp" alt="Watercolor portrait of an indigenous person from Tierra del Fuego, standing barefoot with dark hair and a headband, wearing a fur cloak over the shoulders. A canoe rests behind them, with forested mountains and a smoking shelter in the distance"/><figcaption>A watercolor of a native from the Tierra del Fuego, painted by Conrad Martens when he and Charles Darwin visited the area</figcaption></figure>
<p>Here&#x27;s how it went: one of the captives died of smallpox very early on. As for the other three, called York, Jemmy, and Fuegia, Darwin writes:</p>
<blockquote>
<p>It was quite melancholy leaving the three Fuegians with their savage countrymen; but it was a great comfort that they had no personal fears. York, being a powerful resolute man, was pretty sure to get on well, together with his wife Fuegia. Poor Jemmy looked rather disconsolate, and would then, I have little doubt, have been glad to have returned with us. His own brother had stolen many things from him; and as he remarked, &quot;What fashion call that:&quot; he abused his countrymen, &quot;all bad men, no sabe (know) nothing&quot; and, though I never heard him swear before, &quot;damned fools.&quot; Our three Fuegians, though they had been only three years with civilised men, would, I am sure, have been glad to have retained their new habits; but this was obviously impossible. I fear it is more than doubtful whether their visit [to Britain] will have been of any use to them.</p>
</blockquote>
<h2>The Ugly: Contradictions All the Way Down</h2>
<p>The matter would be easy to dismiss as antiquated bigotry if all instances were so black and white. But evil rarely comes in its pure form. A lesson I learned reading this book is that a bad <a href="/posts/a-framing-and-model-about-framings-and-models">framing</a> can lead to some pretty powerful reality distortions like this. When FitzRoy defended slavery in his arguments with Darwin, they both held the same flawed worldview of the primacy of civilization, and simply interpreted it in different ways: Darwin inferred that helping people improve is always preferable to enslaving them, while the captain concluded that slaves should be grateful for the teachings and protection of their masters.</p>
<p>The reality distortion comes up in most remarks that Darwin, the good guy of the situation, made about these topics. In the same breath, he is able to bring together kindness and prejudice.</p>
<p>So about the slave-owning lifestyle in Brazil he could write things like this:</p>
<blockquote>
<p>As long as the idea of slavery could be banished, there was something exceedingly fascinating in this simple and patriarchal style of living: it was such a perfect retirement and independence from the rest of the world.</p>
</blockquote>
<p>After witnessing the massacre of indigenous tribes at the hands of the colonizers, he is livid about it:</p>
<blockquote>
<p>Every one here is fully convinced that this is the most just war, because it is against barbarians. Who would believe in this age that such atrocities could be committed in a Christian civilised country? The children of the Indians are saved, to be sold or given away as servants, or rather slaves for as long a time as the owners can make them believe themselves slaves;</p>
</blockquote>
<p>...but still he can&#x27;t help closing the matter with &quot;<em>but I believe in their treatment there is little to complain of.</em>&quot;</p>
<p>In Rio, he complains of &quot;<em>very nearly being an eye-witness to one of those atrocious acts which can only take place in a slave country</em>&quot;: a slaveowner had decided to sell away his slave women and children, separating them from their men. The plan was canceled, but not out of the goodness of the owner&#x27;s heart.</p>
<blockquote>
<p>Indeed, I do not believe the inhumanity of separating thirty families, who had lived together for many years, even occurred to the owner. Yet I will pledge myself, that in humanity and good feeling he was superior to the common run of men. It may be said there exists no limit to the blindness of interest and selfish habit.</p>
</blockquote>
<p>His ability to compartmentalize things is impressive. This last quote is about Tasmania, then called Van Diemen&#x27;s Land:</p>
<blockquote>
<p>All the aborigines have been removed to an island in Bass&#x27;s Straits, so that Van Diemen&#x27;s Land enjoys the great advantage of being free from a native population. This most cruel step seems to have been quite unavoidable, as the only means of stopping a fearful succession of robberies, burnings, and murders, committed by the blacks; and which sooner or later would have ended in their utter destruction. I fear there is no doubt that this train of evil and its consequences originated in the infamous conduct of some of our countrymen.</p>
</blockquote>
<p>To him, the forced deportation of natives from their own land is cruel but &quot;unavoidable&quot;. This is perhaps the most telling quote, because it shows just how hard he was banging against the <a href="/posts/a-framing-is-a-choice-of-boundaries">boundaries in his mind</a>. He hates that such things are happening, he wishes they didn&#x27;t have to, but <em>it simply doesn&#x27;t occur to him that stopping colonization is the answer</em>. It is not part of the calculus of his mind.</p>
<h2>Conclusions</h2>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-man-of-his-times/medallion.webp" alt="Ceramic medallion in a silver frame showing a kneeling enslaved Black man in chains with raised hands, surrounded by the text &#x27;Am I Not a Man and a Brother&#x27;"/><figcaption>Darwin&#x27;s grandfather Josiah Wedgwood created the emblem that became a well-known symbol of the anti-slavery movement in Britain. Even while advocating in favor of the freedom of all, the African slave is portrayed as weak and submissive, asking for pity. While propaganda has (unfortunately) always tended to appeal to strong emotions by distorting the truth somewhat, this choice of framing is typical of the contradictions of the period.</figcaption></figure>
<p>Why bring all of this up? Isn&#x27;t it unfair to call out Darwin for racism and imperialism, out of so many worse offenders in his and later times?</p>
<p>Yes, it <em>is</em> a little unfair, and that&#x27;s the point. I am a fan of Darwin—I mean, I&#x27;m here writing a whole series about his first book—and I think he was genuinely a kind, honest, and well-meaning guy. He was also humble and open-minded, very much ready to admit his weaknesses and mistakes, and to entertain new ideas for improvement. Those virtues are what allowed him to follow through with this multi-decade project to write the <em>Origin of Species</em>. And, precisely <em>because</em> he was so good, it is useful to take a close look at the ways his thinking failed.</p>
<p>Charles Darwin is the perfect example of how <a href="https://aethermug.com/posts/rationality-fails-at-the-edge" rel="nofollow noopener noreferrer" target="_blank">rationality fails at the edge</a>: his fault, and the cause of all those discriminatory and conceited remarks, was not a problem with his thinking ability, but with his most basic assumptions. He was standing on the shoulders of the wrong giants, and had to work from what he could see from there.</p>
<p>Those excerpts where he manages to simultaneously express sympathy and condescension towards slaves and indigenous peoples are not hypocrisy, they are contradictions that he was utterly unable to notice.</p>
<p>I&#x27;ll go out on a limb and say that Darwin might have been able to see those mistakes and feel ashamed about them, if only <a href="/posts/bad-framings">someone had provided a better framing</a> for him to adopt. I know it&#x27;s idle speculation, but there were several passages in the <em>Voyage</em> that showed him perfectly capable of appreciating, even looking up to the <em>original</em> ways of the native tribes, even when they were unaffected by the Western teachings. I gave some examples to that effect in the previous episode. Here is another one:</p>
<blockquote>
<p>Their [The Aborigines&#x27;] countenances were good-humoured and pleasant, and they appeared far from being such utterly degraded beings as they have usually been represented. In their own arts they are admirable. A cap being fixed at thirty yards distance, they transfixed it with a spear, delivered by the throwing-stick with the rapidity of an arrow from the bow of a practised archer. In tracking animals or men they show most wonderful sagacity; and I heard of several of their remarks which manifested considerable acuteness.</p>
</blockquote>
<p>His choice of learning a key skill for his work from John Edmonstone, a freed slave, and his later openness in owning that choice, is another hint that he didn&#x27;t have a strong cognitive dissonance on these matters.</p>
<p>But that &quot;better framing&quot; was not available at his time. It would take more than 100 difficult years to overthrow the paternalist worldview of the Europeans with any degree of success.</p>
<p>Simply &quot;canceling&quot; Darwin today for his other warty statements and forgetting about it would mean missing an important lesson. The <em>Voyage</em> provides a useful cultural snapshot of that shifting intellectual period: in the middle of a transition to better ideals, but before they could reach the more profound ideas of universal human rights and true equality.</p>
<p>But, more than that, I am humbled by these examples—the whole lot, good, bad, and ugly included. If a man as great as Charles Darwin could have such formidable blind spots, if <em>he</em> could fail to see the deep flaws in his own assumptions, what am <em>I</em> missing when I think about today&#x27;s issues? Why would I ever stop trying to think better, to improve my mental framings and models, and to question what I and everyone around me has always taken for granted? ●</p>

<p><em>Next: <a href="/posts/darwin-the-witness">In His Own Words Ep. 4 - Darwin the Witness</a></em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/darwin-the-man-of-his-times/east_india.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Darwin the Voyager]]></title>
            <link>https://aethermug.com/posts/darwin-the-voyager?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/darwin-the-voyager</guid>
            <pubDate>Thu, 25 Dec 2025 08:00:06 GMT</pubDate>
            <description><![CDATA[In His Own Words (Episode 2)]]></description>
            <content:encoded><![CDATA[<p><em>This is the second episode in the series of curated quote collections &quot;Darwin in His Own Words&quot;.</em></p>
<ol>
<li><em><a href="/posts/darwin-the-fun-loving-young-fellow">Darwin the Fun-Loving Young Fellow</a></em></li>
<li><em>Darwin the Voyager</em></li>
<li><em><a href="/posts/darwin-the-man-of-his-times">Darwin the Man of His Times</a></em></li>
<li><em><a href="/posts/darwin-the-witness">Darwin the Witness</a></em></li>
<li><em><a href="/posts/darwin-el-naturalista">Darwin El Naturalista</a></em></li>
</ol>

<p>Charles Darwin&#x27;s <em>Voyage of the Beagle</em> was more than a series of fun episodes. It was, first and foremost, a scientific expedition, and this job occupied Darwin full-time for his five years of travel (minus the many days he had to spend lying down sick). But the <em>Voyage</em> is also exactly what it says on the tin: the retelling of a grand journey around the globe, passing through some of the wildest and most remote places then known to humankind. In fact, remove all the scientific descriptions and erudite disquisitions from the book and you&#x27;ll basically have an adventure diary. This is what I want to focus on with this week&#x27;s quotes.</p>
<p>That great journey deeply affected Darwin for the rest of his life. Throughout his narrative, the emotional impact of what he saw, what he experienced, and the people he interacted with seems to push the boundaries of what he was able to express in words. He would later recollect those times with fondness. In a letter to FitzRoy in 1840, he wrote that he often had &quot;<em>the most vivid and delightful pictures of what I saw on board the Beagle pass before my eyes.– These recollections &amp; what I learnt in Natural History I would not exchange for twice ten thousand a year.</em>&quot; (Clearly, Charles Darwin didn&#x27;t have <a href="/posts/aphantasia">aphantasia</a>.)</p>
<p>What <em>did</em> he see out there, then?</p>
<h2>Adventure</h2>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-voyager/1871075i00221385-7.webp" alt="Historical engraving showing a natural stone bridge spanning a mountain gorge in Uspallata Pass, with small figures of travelers visible beneath the arch and steep rocky peaks rising in the background"/></figure>
<p>You can&#x27;t have, today, the kind of adventure Darwin experienced back then: going to places that take weeks of wearisome sea or land travel to reach, to grand and treacherous sites that have been barely touched by human feet, let alone written about for others to read.</p>
<blockquote>
<p>In vain we tried to gain the summit: the forest was so impenetrable, that no one who has not beheld it can imagine so entangled a mass of dying and dead trunks. I am sure that often, for more than ten minutes together, our feet never touched the ground, and we were frequently ten or fifteen feet above it, so that the seamen as a joke called out the soundings. At other times we crept one after another, on our hands and knees, under the rotten trunks.</p>
<p>— Voyage of the Beagle (all quotes below from the same book)</p>
</blockquote>
<blockquote>
<p>In our descent we followed the line of ridges; these were exceedingly narrow, and for considerable lengths steep as a ladder; but all clothed with vegetation. The extreme care necessary in poising each step rendered the walk fatiguing. I did not cease to wonder at these ravines and precipices: when viewing the country from one of the knife-edged ridges, the point of support was so small that the effect was nearly the same as it must be from a balloon.</p>
</blockquote>
<p>Many of those remarkable places are still there. You can go there if you want to. But it wouldn&#x27;t be the same thing. In the 1830s, there were no tourists or geared-up wilderness aficionados crowding those areas. There were no online guides, no rescue forces, no safety guidelines. Most of those places weren&#x27;t even properly charted when <em>el naturalista Don Carlos</em> was there. We can only imagine what it must have felt like.</p>
<p>After two decades of reading about Nature, Darwin was finally connecting with it at a deeper level.</p>
<blockquote>
<p>It has been said that the love of the chase is an inherent delight in man—a relic of an instinctive passion. If so, I am sure the pleasure of living in the open air, with the sky for a roof and the ground for a table, is part of the same feeling; it is the savage returning to his wild and native habits. I always look back to our boat cruises, and my land journeys, when through unfrequented countries, with an extreme delight, which no scenes of civilisation could have created.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-voyager/518439ldsdl.webp" alt="Painting of Andean mountain valley with snow-capped peaks glowing pink at sunset, travelers with pack animals on a winding path through rugged terrain"/><figcaption>A Scene In The Andes, Chile, Thomas Jacques Somerscales</figcaption></figure>
<p>There are many earnest snippets like this scattered throughout the <em>Voyage</em>—short passages that hide within them the force and beauty of wilderness. But Darwin was also capable of weaving mystery and storytelling into his descriptions.</p>
<blockquote>
<p>One day we accompanied a party of the Spaniards in their whale-boat to a salina, or lake from which salt is procured. ... The water is only three or four inches deep and rests on a layer of beautifully crystallised, white salt. The lake is quite circular, and is fringed with a border of bright green succulent plants; the almost precipitous walls of the crater are clothed with wood, so that the scene was altogether both picturesque and curious. A few years since the sailors belonging to a sealing-vessel murdered their captain in this quiet spot; and we saw his skull lying among the bushes.</p>
</blockquote>
<p>Why did the sailors murder their captain? What dreadful scenes unfolded in that untamed landscape? We are left to wonder.</p>
<p>Other times the story does find its resolution, like in the episode of the mysterious abandoned camp:</p>
<blockquote>
<p>A strong desire is always felt to ascertain whether any human being has previously visited an unfrequented spot. A bit of wood with a nail in it is picked up and studied as if it were covered with hieroglyphics. Possessed with this feeling, I was much interested by finding, on a wild part of the coast, a bed made of grass beneath a ledge of rock. Close by it there had been a fire, and the man had used an axe. The fire, bed, and situation showed the dexterity of an Indian; but he could scarcely have been an Indian, for the race is in this part extinct, owing to the Catholic desire of making at one blow Christians and Slaves.</p>
</blockquote>
<p>Who might have slept here, and why? The answer comes unexpectedly in the next day&#x27;s log:</p>
<blockquote>
<p>In the evening another harbour was discovered, where we anchored. Directly afterwards a man was seen waving his shirt, and a boat was sent which brought back two seamen. A party of six had run away from an American whaling vessel, and had landed a little to the southward in a boat, which was shortly afterwards knocked to pieces by the surf. They had now been wandering up and down the coast for fifteen months, without knowing which way to go, or where they were. What a singular piece of good fortune it was that this harbour was now discovered! Had it not been for this one chance, they might have wandered till they had grown old men, and at last have perished on this wild coast. Their sufferings had been very great, and one of their party had lost his life by falling from the cliffs. They were sometimes obliged to separate in search of food, and this explained the bed of the solitary man.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-voyager/540164ldsdl.webp" alt="Expedition party with pack animals traversing a treacherous mountain pass through towering rock formations under stormy skies"/><figcaption>Pass through the Andes, Lt. William Smyth</figcaption></figure>
<p>Some of the situations in Darwin&#x27;s chronicle are downright surreal. Upside-down frozen horse statues? He&#x27;s got you covered.</p>
<blockquote>
<p>In the valleys there were several broad fields of perpetual snow. These frozen masses, during the process of thawing, had in some parts been converted into pinnacles or columns ... On one of these columns of ice a frozen horse was sticking as on a pedestal, but with its hind legs straight up in the air. The animal, I suppose, must have fallen with its head downward into a hole, when the snow was continuous, and afterwards the surrounding parts must have been removed by the thaw.</p>
</blockquote>
<p>There was also a time in which a living force of nature blotted out the sun and reddened the fields:</p>
<blockquote>
<p>Shortly before we arrived at this place we observed to the south a ragged cloud of a dark reddish-brown colour. At first we thought that it was smoke from some great fire on the plains; but we soon found that it was a swarm of locusts. They were flying northward; and with the aid of a light breeze, they overtook us at a rate of ten or fifteen miles an hour. The main body filled the air from a height of twenty feet to that, as it appeared, of two or three thousand above the ground; &quot;and the sound of their wings was as the sound of chariots of many horses running to battle:&quot; or rather, I should say, like a strong breeze passing through the rigging of a ship. The sky, seen through the advanced guard, appeared like a mezzotinto engraving, but the main body was impervious to sight; they were not, however, so thick together, but that they could escape a stick waved backwards and forwards. When they alighted, they were more numerous than the leaves in the field, and the surface became reddish instead of being green: the swarm having once alighted, the individuals flew from side to side in all directions.</p>
</blockquote>
<h2>Contact With the Locals</h2>
<p>The natural world was Darwin&#x27;s main professional interest, but he was also endlessly captivated by the people of South America and of the other lands touched by the <em>Beagle</em>. The fascination was often reciprocal.</p>
<blockquote>
<p>On the first night we slept at a retired little country-house; and there I soon found out that I possessed two or three articles, especially a pocket compass, which created unbounded astonishment. In every house I was asked to show the compass, and by its aid, together with a map, to point out the direction of various places. It excited the liveliest admiration that I, a perfect stranger, should know the road ... to places where I had never been. ... I was asked whether the earth or sun moved; whether it was hotter or colder to the north; where Spain was, and many other such questions. The greater number of the inhabitants had an indistinct idea that England, London, and North America, were different names for the same place; but the better informed well knew that London and North America were separate countries close together, and that England was a large town in London!</p>
</blockquote>
<p>He encountered a good deal of skepticism, too.</p>
<blockquote>
<p>My geological examination of the country generally created a good deal of surprise amongst the Chilenos: it was long before they could be convinced that I was not hunting for mines. This was sometimes troublesome: I found the most ready way of explaining my employment was to ask them how it was that they themselves were not curious concerning earthquakes and volcanos?—why some springs were hot and others cold?—why there were mountains in Chile, and not a hill in La Plata? These bare questions at once satisfied and silenced the greater number; some, however (like a few in England who are a century behindhand), thought that all such inquiries were useless and impious; and that it was quite sufficient that God had thus made the mountains.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-voyager/1871077i00161523-7.webp" alt="Indigenous canoe approaching thatched huts near Cape Horn, with dramatic mountain peaks under stormy skies"/></figure>
<p>An exchange that left me perplexed is the following, which happened during a long ride across the Brazilian countryside:</p>
<blockquote>
<p>On first arriving, it was our custom to unsaddle the horses and give them their Indian corn; then, with a low bow, to ask the senhôr to do us the favour to give us something to eat. “Anything you choose, sir,” was his usual answer. For the few first times, vainly I thanked providence for having guided us to so good a man. The conversation proceeding, the case universally became deplorable. “Any fish can you do us the favour of giving ?&quot;—&quot;Oh no, sir.&quot;—&quot;Any soup?&quot;—&quot;No, sir.&quot;—&quot;Any bread?&quot;—&quot;Oh no, sir.&quot;—&quot;Any dried meat?&quot;—&quot;Oh no, sir.” If we were lucky, by waiting a couple of hours, we obtained fowls, rice, and farinha. It not unfrequently happened that we were obliged to kill, with stones, the poultry for our own supper. When, thoroughly exhausted by fatigue and hunger, we timorously hinted that we should be glad of our meal, the pompous, and (though true) most unsatisfactory answer was, “It will be ready when it is ready.” If we had dared to remonstrate any further, we should have been told to proceed on our journey, as being too impertinent. The hosts are most ungracious and disagreeable in their manners; their houses and their persons are often filthily dirty; the want of the accommodation of forks, knives, and spoons is common; and I am sure no cottage or hovel in England could be found in a state so utterly destitute of every comfort.</p>
</blockquote>
<p>Rather than a lack of hospitality or a cultural barrier, this seems to be a case of self-entitlement and ingratitude on the part of the travelers. Couldn&#x27;t they offer proper payment instead of &quot;timorously hinting&quot; things? It&#x27;s hard to judge without the full context—maybe that kind of hospitality was universal and taken for granted everywhere else in the world?—but this smells like a case of the British arrogance of the time (more on that in the next episode).</p>
<p>At least, not all social hurdles were fraught with resentment. Once, in New Zealand, Darwin asked an aboriginal chief to let him hire a local man to act as his guide across the plains, at which the chief heartily offered to guide him personally. Darwin appreciated the help, but the chief had quite the personality:</p>
<blockquote>
<p>Although the scenery is nowhere beautiful, and only occasionally pretty, I enjoyed my walk. I should have enjoyed it more, if my companion, the chief, had not possessed extraordinary conversational powers. I knew only three words: &quot;good,&quot; &quot;bad,&quot; and &quot;yes:&quot; and with these I answered all his remarks, without of course having understood one word he said. This, however, was quite sufficient: I was a good listener, an agreeable person, and he never ceased talking to me.</p>
</blockquote>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Still, so many of the naturalist&#x27;s descriptions of people are about unmitigated merriment and good spirits, like in this episode in St. Domingo:</p>
<blockquote>
<p>It happened to be a grand feast-day, and the village was full of people. On our return we overtook a party of about twenty young black girls, dressed in excellent taste; their black skins and snow-white linen being set off by coloured turbans and large shawls. As soon as we approached near, they suddenly all turned round, and covering the path with their shawls, sung with great energy a wild song, beating time with their hands upon their legs.</p>
</blockquote>
<p>Or that charming evening in Tahiti:</p>
<blockquote>
<p>In returning in the evening to the boat, we stopped to witness a very pretty scene. Numbers of children were playing on the beach, and had lighted bonfires which illumined the placid sea and surrounding trees; others, in circles, were singing Tahitian verses. We seated ourselves on the sand, and joined their party. The songs were impromptu, and I believe related to our arrival: one little girl sang a line, which the rest took up in parts, forming a very pretty chorus. The whole scene made us unequivocally aware that we were seated on the shores of an island in the far-famed South Sea.</p>
</blockquote>
<p>I can almost hear them singing!</p>
<h2>Oh, the Wonders!</h2>
<p>No journey would be worthy of that name without a good dose of the sense of wonder. Darwin, like every scientist worth their salt, had plenty of it. Indeed, he experienced all facets of wonder, from awed curiosity...</p>
<blockquote>
<p>[Near the Peruvian city of Lima] I had an opportunity of seeing the ruins of one of the ancient Indian villages, with its mound like a natural hill in the centre. The remains of houses, enclosures, irrigating streams, and burial mounds, scattered over this plain, cannot fail to give one a high idea of the condition and number of the ancient population. When their earthenware, woollen clothes, utensils of elegant forms cut out of the hardest rocks, tools of copper, ornaments of precious stones, palaces, and hydraulic works, are considered, it is impossible not to respect the considerable advance made by them in the arts of civilisation.</p>
</blockquote>
<p>...to the feeling of astonishment:</p>
<blockquote>
<p>It is necessary to sail over this great [Pacific] ocean to comprehend its immensity. Moving quickly onwards for weeks together, we meet with nothing but the same blue, profoundly deep, ocean. Even within the archipelagoes, the islands are mere specks, and far distant one from the other. Accustomed to look at maps drawn on a small scale, where dots, shading, and names are crowded together, we do not rightly judge how infinitely small the proportion of dry land is to the water of this vast expanse.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-voyager/1871075i00221385-5.webp" alt="19th-century engraving of a sailing ship navigating rough seas near Cape Horn with mountainous coastline"/></figure>
<p>So many times, throughout his book, Darwin claims to be at a loss for words to describe how he felt. Here is a small sample:</p>
<blockquote>
<p>It is easy to specify the individual objects of admiration in these grand scenes; but it is not possible to give an adequate idea of the higher feelings of wonder, astonishment, and devotion, which fill and elevate the mind.</p>
</blockquote>
<blockquote>
<p>Following a pathway I entered a noble forest, and from a height of five or six hundred feet, one of those splendid views was presented, which are so common on every side of Rio. At this elevation the landscape attains its most brilliant tint; and every form, every shade, so completely surpasses in magnificence all that the European has ever beheld in his own country, that he knows not how to express his feelings.</p>
</blockquote>
<blockquote>
<p>Who when examining in the cabinet of the entomologist the gay exotic butterflies, and singular cicadas, will associate with these lifeless objects the ceaseless harsh music of the latter and the lazy flight of the former,—the sure accompaniments of the still, glowing noonday of the tropics?</p>
</blockquote>
<blockquote>
<p>When quietly walking along the shady pathways, and admiring each successive view, I wished to find language to express my ideas. Epithet after epithet was found too weak to convey to those who have not visited the intertropical regions the sensation of delight which the mind experiences.</p>
</blockquote>
<p>The naturalist was an eloquent guy, though, and he often <em>did</em> find the words he needed to express his delight. And delight he found everywhere he went. I will conclude this post with three of my favorites.</p>
<p>In the tropical jungles of Brazil—this was very early on, one of the first times he was mind-blown by what he saw:</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-voyager/554890ldsdl.webp" alt="Dense Amazonian rainforest with towering trees, hanging vines, tropical palms, colorful birds including toucans and parrots, and a spotted jaguar resting on logs in the foreground"/><figcaption>On the upper reaches of the Amazon River, August Specht</figcaption></figure>
<blockquote>
<p>The elegance of the grasses, the novelty of the parasitical plants, the beauty of the flowers, the glossy green of the foliage, but above all the general luxuriance of the vegetation, filled me with admiration. A most paradoxical mixture of sound and silence pervades the shady parts of the wood. The noise from the insects is so loud, that it may be heard even in a vessel anchored several hundred yards from the shore; yet within the recesses of the forest a universal silence appears to reign. To a person fond of natural history, such a day as this brings with it a deeper pleasure than he can ever hope to experience again.</p>
</blockquote>
<p>In Mauritius:</p>
<blockquote>
<p>Some of the views where the peaked hills and the cultivated farms were seen together, were exceedingly picturesque; and we were constantly tempted to exclaim &quot;How pleasant it would be to pass one&#x27;s life in such quiet abodes!&quot;</p>
</blockquote>
<p>And in the Southern Andes:</p>
<blockquote>
<p>In these wild countries it gives much delight to gain the summit of any mountain. There is an indefinite expectation of seeing something very strange, which, however often it may be balked, never failed with me to recur on each successive attempt. Every one must know the feeling of triumph and pride which a grand view from a height communicates to the mind. In these little frequented countries there is also joined to it some vanity, that you perhaps are the first man who ever stood on this pinnacle or admired this view.</p>
</blockquote>
<p>What a voyage! I can see why those memories became something he &quot;<em>would not exchange for twice ten thousand a year</em>.&quot; Despite his difficulties in fully conveying his experiences, I think it was generous of him to share so much of his adventurous travels with us, his readers. ●</p>

<p><em>Next: <a href="/posts/darwin-the-man-of-his-times">In His Own Words Ep. 3 - Darwin the Man of His Times</a></em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/darwin-the-voyager/sa_map.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Darwin the Fun-Loving Young Fellow]]></title>
            <link>https://aethermug.com/posts/darwin-the-fun-loving-young-fellow?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/darwin-the-fun-loving-young-fellow</guid>
            <pubDate>Thu, 18 Dec 2025 04:39:40 GMT</pubDate>
            <description><![CDATA[In His Own Words (Episode 1)]]></description>
            <content:encoded><![CDATA[<p><em>This is the first episode in the series of curated quote collections &quot;Darwin in His Own Words&quot;.</em></p>
<ol>
<li><em>Darwin the Fun-Loving Young Fellow</em></li>
<li><em><a href="/posts/darwin-the-voyager">Darwin the Voyager</a></em></li>
<li><em><a href="/posts/darwin-the-man-of-his-times">Darwin the Man of His Times</a></em></li>
<li><em><a href="/posts/darwin-the-witness">Darwin the Witness</a></em></li>
<li><em><a href="/posts/darwin-el-naturalista">Darwin El Naturalista</a></em></li>
</ol>
<p>I recently read a book with the nicely vintage title of <em>Narrative of the Surveying Voyages of His Majesty&#x27;s Ships Adventure and Beagle, describing their Examination of the Southern Shores of South America, and the Beagle&#x27;s Circumnavigation of the Globe - Volume III</em>, and I want to share its joys with you. It&#x27;s truly a wonderful book.</p>
<p>As you may have already guessed, I&#x27;m talking about Charles Darwin&#x27;s narration of his 5-year journey around the world between 1831 and 1836 aboard the ship <em>Beagle</em>. It&#x27;s in all biology textbooks as &quot;the time Darwin went to the Galapagos, saw some strange birds, and got the idea for the theory of evolution.&quot; But while this is factually true—the &quot;Voyage of the <em>Beagle</em>&quot; did plant the seeds of one of the deepest revolutions in scientific history—limiting your knowledge of it to just that fact would mean badly underselling it. The story of this voyage is exciting and interesting in its own right, and in many ways.</p>
<p>The <em>Voyage</em>, as I&#x27;ll call the book from now on, reads like the perfect prequel: grand, exciting, and full of subtle foreshadowing. But even without knowing anything—or caring—about what happened in the following decades, this travelogue paints many fascinating pictures: of a mysterious continent&#x27;s nature, of times and peoples very different from our own, and of some interesting protagonists. Most of all, it paints a picture of the naturalist Charles Darwin as little more than a boy. He was twenty-two when the ship set sail, and it&#x27;s fair to say that he learned most of what he later became famous for in those fateful whirlwind years of travel.</p>
<p>I&#x27;m not going to tell the story of the voyage chronologically here, though. My aim is only to extract a few of those pictures, and maybe to convince you to read the book yourself. I will do this by sharing a wide and curated collection of direct quotations from the book (and other books about it) across several posts, because I believe it&#x27;s always nicer and more authentic to hear things in the original words. But first, a little context.</p>
<h3>How It All Came About</h3>
<p>In 1831, Captain Robert FitzRoy, commander of <em>His Majesty&#x27;s Ship Beagle</em>, was given the mission of surveying the continent of South America to make accurate measurements of its geography and features. He had many practical things to attend to before sailing, but one seemingly superfluous thought nagged at him: in his previous voyage, he had regretted visiting places with interesting and exotic natural elements without a properly trained naturalist to understand and record them. He decided that in <em>Beagle</em>&#x27;s new mission he would not be unprepared again.</p>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-fun-loving-young-fellow/HMS_Beagle_by_Conrad_Martens.webp" alt="Two-masted HMS Beagle anchored in calm waters near dramatic snow-capped mountains, with an indigenous canoe in the foreground"/><figcaption>HMS Beagle at Tierra del Fuego (painted by Conrad Martens)</figcaption></figure>
<blockquote>
<p>Anxious that no opportunity of collecting useful information, during the voyage, should be lost; I proposed to the Hydrographer [Captain Francis Beaufort] that some well-educated and scientific person should be sought for who would willingly share such accommodations as I had to offer, in order to profit by the opportunity of visiting distant countries yet little known. Captain Beaufort approved of the suggestion, and wrote to Professor Peacock, of Cambridge [professor of mathematics], who consulted with a friend, Professor Henslow, and he named Mr. Charles Darwin, grandson of Dr. Darwin the poet, as a young man of promising ability, extremely fond of geology, and indeed all branches of natural history.</p>
</blockquote>
<p>Who was this &quot;young man of promising ability&quot;? The truth is, he wasn&#x27;t very much at all then—which makes it an even bigger surprise that he was chosen for the trip.</p>
<p>Charles Darwin, in 1831, was a rich and somewhat spoiled boy studying to become a clergyman. In his youth he had shown an interest in nature, collecting minerals and insects, engaging in birdwatching and helping his older brother with simple chemistry and crystallography experiments. But that fascination seemed to take the back seat when, around the age of fifteen, he got sucked into the fashionable rich pastime of the time: hunting and shooting with a gun. He became so engrossed in it that his father once scolded him, saying &quot;<em>You care for nothing but shooting, dogs, and rat-catching, and you will be a disgrace to yourself and all your family.</em>&quot;</p>
<p>That was the beginning of a period in which Charles&#x27; &quot;promising ability&quot; did <em>not</em> shine through. He was sent to medical school, but he was too squeamish and wasn&#x27;t really interested in the subject. In his autobiography he later recalled about those years,</p>
<blockquote>
<p>I became convinced from various small circumstances that my Father would leave me property enough to subsist on with some comfort ... my belief was sufficient to check any strenuous effort to learn medicine.</p>
</blockquote>
<p>In other words, he realized that he was rich and didn&#x27;t really need to work at all. His hunting pastime wasn&#x27;t going to undermine anything after all.</p>
<p>His father, who was a medical doctor, was not happy about it, but he accepted that Charles would probably not follow in his footsteps. So he changed the plans for his son&#x27;s career: he would become an Anglican parson. In 1828 Charles was sent to Christ&#x27;s College in Cambridge to begin the new theological curriculum.</p>
<p>Things did not get off on the right foot at Cambridge, either.</p>
<blockquote>
<p>Although, as we shall presently see, there were some redeeming features in my life at Cambridge, my time was sadly wasted there, and worse than wasted. From my passion for shooting and for hunting, and, when this failed, for riding across country, I got into a sporting set, including some dissipated low-minded young men. We used often to dine together in the evening, ... and we sometimes drank too much, with jolly singing and playing at cards afterwards.</p>
<p>— Autobiography</p>
</blockquote>
<p>Still, there were indeed &quot;redeeming features&quot; during his stay there—just not the kind his father would have liked. His youthful interest in nature had endured, and he now had professors and like-minded students to pursue it in much more depth. He became interested in geology and entomology, and he found a new hobby:</p>
<blockquote>
<p>No pursuit at Cambridge was followed with nearly so much eagerness or gave me so much pleasure as collecting beetles. It was the mere passion for collecting, for I did not dissect them, and rarely compared their external characters with published descriptions, but got them named anyhow. I will give a proof of my zeal: one day, on tearing off some old bark, I saw two rare beetles, and seized one in each hand; then I saw a third and new kind, which I could not bear to lose, so that I popped the one which I held in my right hand into my mouth. Alas! it ejected some intensely acrid fluid, which burnt my tongue so that I was forced to spit the beetle out, which was lost, as was the third one.</p>
<p>— Autobiography</p>
</blockquote>
<p>Young Charles also became friends with professor John Stevens Henslow, as well as other prominent naturalists who hung out in the Cambridge &#x27;hood in that period. Those acquaintances later proved to be <em>very</em> redeeming indeed.</p>
<p>In that same period, Charles read Alexander von Humboldt&#x27;s best-selling book series <em>Personal Narrative</em>, which recounted the amazing South American journey of the superstar-scientist (really, Humboldt was perhaps <em>the</em> most famous person in the world in his prime, on par with Napoleon—but you can read Andrea Wulf&#x27;s excellent <a href="https://www.goodreads.com/book/show/23995249-the-invention-of-nature" rel="nofollow noopener noreferrer" target="_blank">The Invention of Nature</a> to learn more about his story).</p>
<p>The funny thing is that Darwin was not very ambitious. The notion of going himself to the New Continent never seemed to occur to him. What he latched onto was the more modest idea of a field trip with his friends to the Canary Island of Tenerife, highly praised by Humboldt for its natural beauty. Tenerife was much closer to England, and Charles became obsessed with this seemingly achievable &quot;Canary Scheme&quot;:</p>
<blockquote>
<p>My head is running about the Tropics: in the morning I go and gaze at Palm trees in the hot-house and come home and read Humboldt: my enthusiasm is so great that I cannot hardly sit still on my chair. Henslow &amp; other Dons give us great credit for our plan: Henslow promises to cram me in geology. I will never be easy till I see the peak of Teneriffe and the great Dragon tree... I am working regularly at Spanish ... I have written myself into a Tropical glow.</p>
<p>— Letter to his sister Caroline</p>
</blockquote>
<p>And elsewhere, in May 1831:</p>
<blockquote>
<p>As for my Canary scheme, it is rash of you to ask questions; my other friends most sincerely wish me there, I plague them so with talking about tropical scenery, &amp;c.</p>
<p>— Letter to William Darwin Fox</p>
</blockquote>
<p>Fast-forwarding a little: his leisure trip plans never really materialized, but he got to accompany a professor on a geological survey in Wales that summer. Nothing very exciting, but he did learn some practical skills there. More importantly, on his return home Charles found a letter addressed to him from his friend Prof. Henslow inviting him to join Captain FitzRoy aboard the <em>Beagle</em>. And, as they say, the rest is history (and a lot of science, too).</p>
<p>Although his <em>Origin of Species</em> was more impactful and universally known, his narrative of the <em>Voyage</em> was very successful and loved by his contemporaries. Towards the end of his life, after achieving his enormous fame as the father of evolutionary theory, he would still write fondly that &quot;<em>the success of this, my first literary child, always tickles my vanity more than that of any of my other books. Even to this day it sells steadily in England and the United States, and has been translated for the second time into German, and into French and other languages. This success of a book of travels, especially of a scientific one, so many years after its first publication, is surprising.</em>&quot;</p>
<p>Frankly, I did not expect to be so captivated by this book. I can confidently say that I have never highlighted so many passages in a single volume. To be sure, it&#x27;s written in a convoluted and antiquated English, and it&#x27;s filled to the brim with descriptions and musings about scientific facts that are now mostly outdated or obsolete. But it&#x27;s too wholesome, too sincere a retelling to ever bore you (and you can skip the boring parts!). The story and the characters that emerge from it make it all more than worth it.</p>
<p>And that&#x27;s about all the story-telling I&#x27;m going to do. Now for the &quot;pictures&quot; I promised: this is the first of a series of short posts showcasing my favorite passages from the <em>Voyage</em>, but not in any chronological order. It&#x27;s a sort of prismatic experiment: instead of trying to describe Charles Darwin directly, in each episode I will show a different face, or a different set of interesting scenes that he experienced. I will show his face as a voyager, as a man of his times, as a witness of cultures and events, and so on. Each is only one side, a limited &quot;2D view&quot; of the man, but, by reading them all, I hope you&#x27;ll be able to form your own 3D image of him in your mind.</p>
<p>Now for the first episode.</p>

<h2>Darwin the Fun-Loving Young Fellow</h2>
<p>When you think about Charles Darwin, you might imagine this man:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/darwin-the-fun-loving-young-fellow/darwin_01.webp" alt="" title="Elderly Charles Darwin in formal Victorian attire with his iconic long white beard, photographed in his later years"/></figure>
<p>You may also think him a rather dull, serious, and stuck-up Victorian gentleman. I already mentioned how this was not the case in his teens—he was a playful boy and did not like studying per se—but I think it is my duty to begin this series by dispelling that prejudice more thoroughly, once and for all. You may also think his participation in the naval survey was strictly about work, boring work. This is also not true, as I hope to show you.</p>
<p>At the time of boarding the <em>Beagle</em>, Darwin looked roughly halfway between this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/darwin-the-fun-loving-young-fellow/darwin_02.webp" alt="" title="Young Charles Darwin as a child, approximately 7 years old, holding a potted plant with yellow flowers"/></figure>
<p>and this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/darwin-the-fun-loving-young-fellow/darwin_03.webp" alt="" title="Charles Darwin as a young man in his twenties, clean-shaven with reddish-brown hair, wearing a brown coat and blue waistcoat"/></figure>
<p>One of the first fun <em>Beagle</em> moments on record happened less than two months after their departure from England. It was a time-honored seafaring tradition to celebrate when a ship crossed the line of the Equator. The celebration was held in honor of Neptune, the pagan god of the sea.</p>
<blockquote>
<p>The effect produced on the young naturalist&#x27;s mind was unmistakably remarkable. His first impression was that the ship&#x27;s crew from Captain downwards had gone off their heads. &#x27;What fools these sailors make of themselves,&#x27; he said as he descended the companion ladder to wait below till he was admitted.</p>
<p>The Captain received his godship and Amphitrite his wife with becoming solemnity; Neptune was surrounded by a set of the most ultra-demonical looking beings that could be imagined, stripped to the waist, their naked arms and legs bedaubed with every conceivable colour which the ship&#x27;s stores could turn out, the orbits of their eyes exaggerated with broad circles of red and yellow pigments. Those demons danced a sort of nautical war dance exulting on the fate awaiting their victims below.</p>
<p>Putting his head down the after Companion, the Capt called out &#x27;Darwin look up here&#x27;! Up came the young naturalist, in wonderment but yet prepared for any extravagance in the world that seamen could produce. A gaze for a moment at the scene on deck was sufficient, he was convinced he was amongst madmen, and giving one yell disappeared again down the ladder. He was of course the first to be called by the official secretary, and Neptune received him with grace and courtesy ... Of course Mr Darwin readily entered into the fun, and submitted to a few buckets of water thrown over him and the Captain as they sat together by one of the youngsters, as if by accident.</p>
<p>— Recollection by Philip Gidley King in The Beagle Record, edited by Richard Darwin Keynes</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/darwin-the-fun-loving-young-fellow/crossingtheline.webp" alt="Sailors performing the Neptune ceremony aboard a ship, with crew members in costumes including someone dressed as Neptune with a trident, and others being initiated in the crossing-the-line tradition"/><figcaption>(Thomas Landseer, CC BY 4.0, via Wikimedia Commons)</figcaption></figure>
<p>By &quot;entered into the fun&quot;, King means that Charles&#x27; face was painted with tar and his head was shaved, before having the cold water dumped on him. Sounds like the perfect initiation rite for a rich and formally-educated boy joining a band of seventy rowdy sailors.</p>
<p>Technically, Darwin was on that ship for work. In truth, the work mostly consisted of doing precisely the kind of things that had been his hobbies back home, where his real duty was to theology and becoming a country parson. I doubt any of it felt like work to him. He seemed to be consumed by a burning curiosity for nature. This made him do... things.</p>
<p>For example:</p>
<blockquote>
<p>In the evening the weather was quite tropical, the thermometer standing at 79°. Numbers of fireflies were hovering about, and the musquitoes were very troublesome. I exposed my hand for five minutes, and it was soon black with them; I do not suppose there could have been less than fifty, all busy sucking.</p>
<p>— Voyage of the Beagle (all quotes below from the same book unless stated otherwise)</p>
</blockquote>
<p>Sure, let&#x27;s allow a hundred mosquitoes to suck me dry, just for the heck of it!</p>
<p>Also, he was not afraid of tasting things:</p>
<blockquote>
<p>I believe it is well ascertained that the bladder of the frog acts as a reservoir for the moisture necessary to its existence: such seems to be the case with the tortoise. For some time after a visit to the springs, their urinary bladders are distended with fluid, which is said gradually to decrease in volume, and to become less pure. The inhabitants, when walking in the lower district, and overcome with thirst, often take advantage of this circumstance, and drink the contents of the bladder if full: in one I saw killed, the fluid was quite limpid, and had only a very slightly bitter taste.</p>
</blockquote>
<p>I&#x27;ll let you judge for yourself if he should be called &quot;fearless&quot;, &quot;overzealous&quot;, or something else still.</p>
<p>But more than subjecting his own body to all sorts of gross trials, Charles liked to play pranks on animals. All for the advancement of science, of course.</p>
<p>Mucking around with Galapagos tortoises:</p>
<blockquote>
<p>The inhabitants believe that these animals are absolutely deaf; certainly they do not overhear a person walking close behind them. I was always amused when overtaking one of these great monsters, as it was quietly pacing along, to see how suddenly, the instant I passed, it would draw in its head and legs, and uttering a deep hiss fall to the ground with a heavy sound, as if struck dead. I frequently got on their backs, and then giving a few raps on the hinder part of their shells, they would rise up and walk away;—but I found it very difficult to keep my balance.</p>
</blockquote>
<p>&quot;Plaguing&quot; <a href="https://en.wikipedia.org/wiki/Marine_iguana" rel="nofollow noopener noreferrer" target="_blank">aquatic iguanas</a>:</p>
<blockquote>
<p>If this Amblyrhynchus is held and plagued with a stick, it will bite it very severely; but I caught many by the tail, and they never tried to bite me. If two are placed on the ground and held together, they will fight, and bite each other till blood is drawn.</p>
</blockquote>
<p>Testing <em>precisely</em> how easy certain birds are to catch:</p>
<blockquote>
<p>All of them often approached sufficiently near to be killed with a switch, and sometimes, as I myself tried, with a cap or hat. A gun is here almost superfluous; for with the muzzle I pushed a hawk off the branch of a tree. One day, whilst lying down, a mocking-thrush alighted on the edge of a pitcher, made of the shell of a tortoise, which I held in my hand, and began very quietly to sip the water; it allowed me to lift it from the ground whilst seated on the vessel: I often tried, and very nearly succeeded, in catching these birds by their legs.</p>
</blockquote>
<p>Having staring conversations with very bothered <a href="https://en.wikipedia.org/wiki/Gal%C3%A1pagos_land_iguana" rel="nofollow noopener noreferrer" target="_blank">land iguanas</a>:</p>
<blockquote>
<p>This animal, when making its burrow, works alternately the opposite sides of its body. One front leg for a short time scratches up the soil, and throws it towards the hind foot, which is well placed so as to heave it beyond the mouth of the hole. That side of the body being tired, the other takes up the task, and so on alternately. I watched one for a long time, till half its body was buried; I then walked up and pulled it by the tail; at this it was greatly astonished, and soon shuffled up to see what was the matter; and then stared me in the face, as much as to say, &quot;What made you pull my tail?&quot;</p>
</blockquote>
<p>Yeah. Darwin had a lot of fun.</p>
<p>Tormenting animals wasn&#x27;t his only occupation, of course. He was a keen observer of everything, from scientific facts to the local cultures (both covered in later episodes). He looked at things and people without animosity or bitterness, and the humor of things did not escape him. His narration of specific events is often laced with irony.</p>
<p>For instance, he was amused by the reaction of some locals to his being an Anglican:</p>
<blockquote>
<p>In the evening we reached a comfortable farm-house, where there were several very pretty señoritas. They were much horrified at my having entered one of their churches out of mere curiosity. They asked me, &quot;Why do you not become a Christian—for our religion is certain?&quot; I assured them I was a sort of Christian; but they would not hear of it—appealing to my own words, &quot;Do not your padres, your very bishops, marry?&quot; The absurdity of a bishop having a wife particularly struck them: they scarcely knew whether to be most amused or horror-struck at such an enormity.</p>
</blockquote>
<p>One day, while on an exploration of the Andes with some guides, Charles noted that they were at such a high altitude that water would boil at well below the usual temperature. &quot;<em>Hence the potatoes, after remaining for some hours in the boiling water, were nearly as hard as ever. The pot was left on the fire all night, and next morning it was boiled again, but yet the potatoes were not cooked.</em>&quot;</p>
<p>He noted (I imagine with a smile) how his local guides explained the matter:</p>
<blockquote>
<p>I found out this by overhearing my two companions discussing the cause, they had come to the simple conclusion &quot;that the cursed pot (which was a new one) did not choose to boil potatoes.&quot;</p>
</blockquote>
<p>And the log for the next day begins: &quot;<em>March 22nd.—After eating our potato-less breakfast, we travelled across</em>...&quot;</p>
<p>I hope the little snippets above convinced you of the playful and good-hearted nature of young Charles Darwin. I feel it is important to understand this in order to take him down from his high pedestal in the scientific pantheon and understand him more closely. He wasn&#x27;t a superhero, and he wasn&#x27;t visited by a sudden stroke of genius. In part, the contributions he brought to biology and to human thought were the result of decades of patient and tedious work piecing together many small observations of plants and animals. But they were also the product of the particular temperament of the man. I believe that his playfulness, his occasional foolhardiness, and his almost childish will to pursue his interests at all costs were a major factor in his success.</p>
<p>But, most of all, it is his way of connecting with people that might have had the largest influence.</p>
<p>One way to peek into his relationships is to look at what people called him: at home, his family called him &quot;Bobby&quot; or &quot;Charley&quot;; on board the <em>Beagle</em>, the seamen called him &quot;the Philosopher&quot;, or &quot;Philos&quot; for short; and (although it isn&#x27;t meant to be endearing) I like how he was known in South America:</p>
<blockquote>
<p>On arriving at a post-house we were told by the owner that if we had not a regular passport we must pass on, for there were so many robbers he would trust no one. When he read, however, my passport, which began with “El Naturalista Don Carlos,” his respect and civility were as unbounded as his suspicions had been before.</p>
</blockquote>
<p>Despite his lifelong shyness and the introversion of his youth, Charles Darwin had a way with people. By the time he arrived in Cambridge, he was a very likeable figure. James Taylor, author of a book about the <em>Beagle</em> and her people, <a href="https://www.goodreads.com/book/show/28001155-the-voyage-of-the-beagle" rel="nofollow noopener noreferrer" target="_blank">writes</a> that &quot;<em>In contrast to his childhood personality Darwin by this time was an amiable and affable student and had a remarkable capacity to develop and maintain friends.</em>&quot; Historian of science John van Wyhe <a href="https://darwin-online.org.uk/darwin.html" rel="nofollow noopener noreferrer" target="_blank">confirms</a> that &quot;<em>Charles Darwin was a kind, good-humoured, pleasant man, unassuming and profoundly modest.</em>&quot;</p>
<p>It&#x27;s not a stretch to say that his <em>Voyage</em>, the insights he gleaned from it, and the publication of the <em>Origin of Species</em> might never have happened had he been a pompous and obnoxious rich kid. The friendships he made during those &quot;wasted years&quot; in Cambridge—first and foremost that with Prof. Henslow—were key to his selection. Even after his introduction to the <em>Beagle</em>&#x27;s captain, his selection was anything but guaranteed:</p>
<blockquote>
<p>Later, on becoming very intimate with Fitz Roy, I heard that I had run a very narrow risk of being rejected, on account of the shape of my nose! He... was convinced that he could judge a man&#x27;s character by the outline of his features; and he doubted whether any one with my nose could possess sufficient energy and determination for the voyage. But I think he was afterwards well satisfied that my nose spoke falsely.</p>
</blockquote>
<p>FitzRoy was a kind and generous person, but he tended to be stubborn with some of his convictions. Young Charles impressed him enough with his charms to make the mariner forget his faith in <a href="https://en.wikipedia.org/wiki/Physiognomy" rel="nofollow noopener noreferrer" target="_blank">physiognomy</a>. The two went on to become very close friends during their travels, and together they changed the world. ●</p>

<p><em>Next: <a href="/posts/darwin-the-voyager">In His Own Words Ep. 2 - Darwin the Voyager</a></em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/darwin-the-fun-loving-young-fellow/sportsmen.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Cow Ontologies]]></title>
            <link>https://aethermug.com/posts/cow-ontologies?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/cow-ontologies</guid>
            <pubDate>Thu, 11 Dec 2025 01:52:07 GMT</pubDate>
            <content:encoded><![CDATA[<p>Overcast. An old loghouse with a balding thatched roof along a mountain path. Goats linger about, munching at the anemic grass that grows around the building. A wooden sign, which might have once hung high but is now resting oblique against the wall beside the loghouse&#x27;s door, reads: &quot;Pilgrims Welcome&quot; in ancient hand-painted letters.</p>
<p>A woman, dressed in so many layers of drab coats that she looks twice her bulk, shuffles up the muddy path under the low gray sky. A heavy bag flung over her shoulder. She looks at the fading sign and enters.</p>
<p>An hour or so later, a man pulling a wobbly two-wheeled cart approaches from the opposite end of the path. The cart is empty except for a sprinkling of frayed twigs, half-dried leaves, and dirt. A blooming thicket of wiry hairs between the wide bulbous nose and the thin lips, no hair on chin or scalp. He halts to look silently at a white goat, then picks up his cart again and flips it against the wall to the side of the loghouse&#x27;s door. He enters.</p>
<p>Inside, a rude room cluttered with half a dozen crooked chairs and the desolate bunk frame of a bed. All exists in semi-darkness, because only two small windows open to the bleak-gray outside. A slim fire lengthens in the fireplace at the center of the room. The sitting woman, her hair quivering strawlike in color and rigidity over the hearth, looks up and welcomes the new visitor.</p>
<p>&quot;Good day,&quot; she says.</p>
<p>The man nods frowningly and looks at the four patchy coats thrown on the back of a chair. &quot;To you,&quot; he says.</p>
<p>&quot;Pilgrim, mister?&quot; she asks.</p>
<p>&quot;Aye, second time, too,&quot; he answers. &quot;Heading back down.&quot;</p>
<p>&quot;Eh, you ahead of me then. I&#x27;m climbing and I wish I was going down like you.&quot;</p>
<p>&quot;&#x27;Tis a tiresome path, aye.&quot; He begins stirring the logs in the fire, blows in their gaps.</p>
<p>The woman looks at the man&#x27;s blooming mustache for a moment, then asks, &quot;Where you from though?&quot;</p>
<p>&quot;River Valleys up north. Talk about a hike.&quot;</p>
<p>&quot;I know them River Valleys. You have the oceans there, don&#x27;t ye? Never met anyone from there I don&#x27;t reckon.&quot;</p>
<p>&quot;Well now you have and congratulations to you,&quot; the man says as he sits on one of the chairs.</p>
<p>&quot;I guess I have.&quot;</p>
<p>The two gaze at the fire for some time. The man takes an old piece of bread out of his pocket and begins chewing at one of its corners.</p>
<p>&quot;I&#x27;m from them Green Plains down down south,&quot; she says quietly, as if to herself.</p>
<p>&quot;That so?&quot;</p>
<p>&quot;I am too. You been?&quot;</p>
<p>&quot;Can&#x27;t say I have,&quot; he answers, the hairy thicket brushing every bite of bread as if to prepare it for deglutition. The man looks at the fire, then tilts his head as a goat&#x27;s bleat seeps through the loghouse&#x27;s wood. &quot;They sure got funny beasts &#x27;round here,&quot; he says into the fire, which is now mature and licking high towards the ceiling.</p>
<p>&quot;Goats always be funny,&quot; the woman replies. &quot;Goat voices used to make me and me sisters laugh every time when we was little.&quot;</p>
<p>&quot;Well, these ones make me laugh now and I ain&#x27;t little no more.&quot;</p>
<p>The woman stands up and removes two more coats, speaking as she does so. &quot;You know what I think is good? Cows. Cows is good.&quot;</p>
<p>&quot;Cows? I ain&#x27;t seen no cows &#x27;round here. Not this time or the other time.&quot;</p>
<p>&quot;Not here. Just cows, wherever they be. Cows be real good. <em>Majestic</em>, that&#x27;s the word Teacher used to use.&quot;</p>
<p>The man looks into the woman&#x27;s eyes for the first time since arriving. His eyebrows, overgrown but ever overshadowed by the mustache, raised high. &quot;What&#x27;s <em>majestic</em> about cows now?&quot;</p>
<p>For a moment, the woman&#x27;s thoughts go back to her youth, when she had a favorite cow belonging to her father. Its mane was a deep brown like rich earth, and its golden horns were very long and erect, curving handsomely along their shafts and then briskly near the tips like the august handles of a great lyre. All bovines in her country look like that, but her father&#x27;s beast seemed to her like the most beautiful of all. &quot;I reckon it&#x27;s them horns,&quot; she says. &quot;Yeah, all considered it&#x27;s the horns that&#x27;s <em>majestic</em>.&quot;</p>
<p>&quot;Ha!&quot; chortles the man. His mind wanders back to his farm, up north in the River Valleys, where he tends to a whole herd of cows with his extended family. All cows in his region are rather small and squat, with furs speckled with dirty white, black, and brown in uneven and shifting proportions, and their horns are short and curled into pathetic spirals. &quot;Well, now,&quot; he says. &quot;Wouldn&#x27;t call them <em>majestic</em>. Always thought them funny as hell, you know. Funnier than goats, at least. But the goats &#x27;round here are even funnier.&quot;</p>
<p>&quot;Cows ain&#x27;t funny. Cows is good and beautiful and <em>majestic</em>, too. I like cows,&quot; says the woman.</p>
<p>&quot;Oh, I like cows too. I just find them funny, is all. But I like them all right. Good beasts, good beasts.&quot;</p>
<p>The woman looks at him intently, her eyes again on the natural brush sprouting from his nose. &quot;Ye folks of the River Valleys strange, that&#x27;s for sure.&quot; Then, catching herself, &quot;no offense meant, mister. I just thought you use strange words. Funny this, funny that, you know. Not to say ye bad folks. I&#x27;m sure ye fine folks all the same.&quot;</p>
<p>&quot;Less fine than you think, ha! But the ones with queer words are you people of the Green Plains, calling cows <em>majestic</em> and all. Talk about strange!&quot;</p>
<p>&quot;That was Teacher who said that. It&#x27;s not all people of the Plains who go around saying <em>majestic</em> this, <em>majestic</em> that. I just like the word, is all.&quot;</p>
<p>&quot;Sounds queer to me all the same. <em>Majestic</em> cows, ha!&quot;</p>
<p>The woman lurches back to her feet, stretches her back with a whimper, and says, &quot;I better get going on up. Who knows when it goes dark in these mountains.&quot;</p>
<p>The man looks up at her as she begins to apply her coats in a seemingly predetermined sequence. &quot;It goes dark when it goes dark, as everywhere else.&quot;</p>
<p>&quot;I bet it does, too.&quot;</p>
<p>She walks out of the semidark room, her fabric bulk scraping against both edges of the door, and leaves the loghouse.</p>
<p>The man looks at the fire for a long time, rests a dry log on the embers. Then his shoulders begin to shake slightly up and down, and continue shaking for a good while. Then he says, &quot;Ha! <em>Majestic cows</em>!&quot; ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/cow-ontologies/Gudvangen.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Purpose Essay Now on Plankton Valhalla]]></title>
            <link>https://aethermug.com/posts/purpose-essay-now-on-plankton-valhalla?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/purpose-essay-now-on-plankton-valhalla</guid>
            <pubDate>Thu, 04 Dec 2025 05:37:24 GMT</pubDate>
            <description><![CDATA[A meta-cross-post]]></description>
            <content:encoded><![CDATA[<p>Wow. It&#x27;s been over a year since I published my <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">last essay</a> on Plankton Valhalla. That was the second episode in a trilogy, and my work only got harder after that. I worked on the third piece on and off (mostly on) for the past 13 months, and I have finally published it today. It&#x27;s titled Purpose From First Principles, and you can read it <a href="https://planktonvalhalla.com/20251204-purpose-from-first-principles/" rel="nofollow noopener noreferrer" target="_blank">here</a>.</p>
<p>The essay is already very long (twice as long as the previous one!), so I&#x27;ll be brief here: this is, in part, my attempt at &quot;fixing&quot; POSIWID (<em>the purpose of a system is what it does</em>), a goal I stated already <a href="/posts/metapost-new-plankton-valhalla-essay">a year ago</a>. POSIWID suffers from a fundamental marketing flaw: the message itself is good and important, but the way it&#x27;s <em>worded</em> is paradoxical and <a href="https://www.astralcodexten.com/p/come-on-obviously-the-purpose-of" rel="nofollow noopener noreferrer" target="_blank">baffling</a> unless you happen to already understand the message.</p>
<p>It is also the foundation for a way of thinking about the world that clarifies and operationalizes an understanding of purpose, rather than pretending it doesn&#x27;t exist, as 99% of modern science does. There is so much confusion about these topics that any progress on that front would be quite valuable. I hope I&#x27;ve spelled it all out thoroughly and patiently enough in this trilogy that it instantly feels obvious to anyone who reads it. I did try hard to achieve that, so please let me know whether it worked.</p>
<p>The essay was even longer at first, but I decided late in the editing process to separate out the last part, which gathers the conclusions of the whole trilogy and answers the questions in the first installment. This Trilogy Conclusions piece (effectively a short fourth episode) will reach Plankton Valhalla very soon (within a month, I promise!), and will also include new interpretations of core concepts like &quot;meaning&quot; and &quot;information&quot; in light of my Purpose framing. Stay tuned for that.</p>
<p>If you like the essays in Plankton Valhalla, make sure to <a href="https://planktonvalhalla.com/subscribe/" rel="nofollow noopener noreferrer" target="_blank">subscribe</a> over there, too, because it&#x27;s a separate newsletter from Aether Mug! ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/purpose-essay-now-on-plankton-valhalla/sacrifice.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[My Notes on: When Ants Are Smarter Than People]]></title>
            <link>https://aethermug.com/posts/my-notes-on-when-ants-are-smarter-than-people?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/my-notes-on-when-ants-are-smarter-than-people</guid>
            <pubDate>Thu, 27 Nov 2025 05:09:03 GMT</pubDate>
            <description><![CDATA[Group intelligence is worth talking about]]></description>
            <content:encoded><![CDATA[<p><em>Disclaimer: these are little more than reading notes. I&#x27;ll list some excerpts and highlights from a paper I&#x27;ve loved reading recently, with only marginal explanations. Rougher than usual, but this shortcut allows me to share things I wouldn&#x27;t have the time to cook otherwise. Let me know how it landed.</em></p>

<p><strong>The paper: <a href="https://www.pnas.org/doi/10.1073/pnas.2414274121" rel="nofollow noopener noreferrer" target="_blank">Comparing cooperative geometric puzzle solving in ants versus humans</a>, by Dreyer et al. (2025)</strong></p>
<p>There are so many alternative definitions of intelligence that it&#x27;s hard to pick one, but in general they all tend to be about one&#x27;s ability to solve new problems. We know that people are intelligent, and that dogs, say, are intelligent but less so, and other animals (and even plants, fungi, etc.) all have <em>some</em> level of intelligence, some much higher than others. It may be hard or impossible to measure in most cases, but the idea that there are &quot;degrees of intelligence&quot; seems rather uncontroversial. But what about groups?</p>
<p>If intelligence is about solving problems, and groups (of people, of fish, of <a href="/posts/shake-up-your-boundaries">beech trees</a>...) are known to solve problems together, it stands to reason that we should be able to talk about &quot;group intelligence&quot;. For example, how smart is a team of researchers compared to a single one? It seems obvious that the team is smarter, but is that a jump in cognitive abilities similar to that from a chimpanzee to a human, or more, or less? Does the group&#x27;s intelligence grow steadily with the number of members? And what factors determine how much the intelligence grows, if at all?</p>
<p>All of these are super important questions, if you think about how much of our modern world relies on the cooperation of many people. While it is impossible to answer them rigorously yet, this paper by Dreyer et al. does an excellent job of shedding some light on them.</p>
<h3>The Puzzle</h3>
<p>Early on, the paper acknowledges the main difficulty in assessing &quot;group intelligence&quot;:</p>
<blockquote>
<p>While tempting, direct comparisons between the cognitive abilities of wholes versus individuals are often meaningless because different group sizes tend to interact with the environment at different scales. For example, there is no meaningful way to compare the cognitive capacity of a single neuron to that of the brain.</p>
</blockquote>
<p>Another way to put it is that the problems faced by an individual agent are usually different from those faced by groups. The comparison would be apples to oranges.</p>
<p><em>Usually</em>, that&#x27;s the case. But here&#x27;s where the authors&#x27; ingenious idea comes in: find one problem or task that remains the same regardless of the number of participants.</p>
<p>What&#x27;s more, they wanted a problem so scalable that it could be applied equally well to people and ants! Ants are known to achieve amazing feats when they work shoulder to shoulder (do they even have shoulders?), so I can see why the researchers would be curious to compare them with people. But what kind of problem could fit the bill here?</p>
<blockquote>
<p>Of all the social animals, only ants and people excel at cooperative transport.</p>
</blockquote>
<p>That&#x27;s the perfect task to level the playing field, then: have them carry things around.</p>
<p>Dreyer et al. devised the following scalable puzzle: carry a (relatively) large, T-shaped object through two doorways. For the humans, the object had wheels and handles to be pulled around, while the ants (longhorn crazy ants, to be precise) relied on their mandibles and disproportionate strength to pull their 3D-printed, food-smelling Ts directly. The rooms of the puzzle were oriented so that the ants&#x27; nest was beyond the second door, which motivated the insects to pull the object in that direction.</p>
<p>Finally, they did the experiment both with individuals (single ants, single people) and with groups.</p>
<p>Of course, the size of the T and of the room layout was varied for each solver to make the subjective conditions as similar as possible, but the proportions were the same in every case.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/my-notes-on-when-ants-are-smarter-than-people/four_scales.webp" alt="(Dreyer et al., 2025, CC BY-NC-ND 4.0)" title="Four photographs taken from above depicting the same problem (a T-shaped object in a three-chamber room) being tackled by different solvers: a single ant, a large group of ants, a single person, a group of people."/><figcaption>(Dreyer et al., 2025, CC BY-NC-ND 4.0)</figcaption></figure>
<p>The task sounds easy, but the proportions of the maze are such that the T won&#x27;t fit through the doors <em>except in a very specific and indirect sequence of maneuvers</em>. The most obvious answer, plunging the T&#x27;s long end into the first door, is wrong, and so are most of the other obvious-seeming steps. Simple and hard, as every good puzzle should be: how quickly would people and ants, individuals and groups, be able to solve it? That&#x27;s the experiment&#x27;s measure of (a form of) intelligence.</p>
<p>The experiment was repeated for the following five &quot;solvers&quot;, each with the object and room layout scaled to keep the proportions right:</p>
<ol>
<li>Solver: individual <strong>ants</strong></li>
<li>Solver: groups of <strong>ants</strong></li>
<li>Solver: individual <strong>people</strong></li>
<li>Solver: groups of <strong>people</strong> who were allowed to communicate</li>
<li>Solver: groups of <strong>people</strong> who were prohibited from communicating</li>
</ol>
<p>The division of groups of people into &quot;unlimited communication&quot; and &quot;restricted communication&quot; is another nice and important touch. The latter is where the playing field is really level, because ants can&#x27;t speak. People in the restricted-communication group wore masks and eyeglasses, and were not allowed to speak or gesture to each other. Their only means of communication was pulling at the levers attached to the object, i.e. letting the others feel that they intended to pull in this or that direction. This mimics the limitations faced by the ants.</p>
<p>The insects, on the other hand, were unable to use their own unique superpower:</p>
<blockquote>
<p>For longhorn crazy ants, communication in the context of cooperative transport is naturally mediated by both haptic sensation and pheromone communication. However, since in the context of our puzzle, pheromones are practically useless, this primarily leaves the ants with force-based communication. This makes comparisons between ant groups and restricted communication human groups especially compelling.</p>
</blockquote>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/ZHpu7ngQxwE" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<h3>Ant Performance</h3>
<p>The first (unsurprising) finding was that &quot;<em>large ant groups perform significantly better than individual ants and small groups of ants.</em>&quot;</p>
<p>Very few of the individual ants were able to solve the puzzle at all, no matter how long they tried. Given their tiny brains, it would be hard to expect otherwise.</p>
<p>The story was different for groups of ants, though. Small groups of them (about seven individuals) were able to <em>eventually</em> solve the puzzle about half the time. Large groups (about 80 individuals) were able to solve it 100% of the time given enough tries.</p>
<p>That is a striking increase in intelligence. Since no single ant has the brainpower to mentally solve the problem, the strategy they use to solve it as groups must be distributed <em>among</em> them, and not located in any one of them. To understand how this might work, the researchers created an ant simulator: an agent based model (ABM) where each virtual ant follows really basic but specific rules not requiring an overall understanding of the puzzle&#x27;s configuration.</p>
<blockquote>
<p>This empirically verified model assumes that when an ant attaches to the load, she transiently acts as an “informed leader” by pulling it in the direction of the nest. After about ten seconds, the newly attached ant switches her state to that of an “uninformed follower” and tends to align her pulling effort with the direction in which the load, at her point of contact, is moving. This tendency is larger for larger group sizes.</p>
</blockquote>
<p>This simple rule seems to capture what is happening in the actual ants, because the computer simulations are able to closely mimic their performance.</p>
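<p>To make this two-state rule concrete, here is a toy sketch in Python (my own paraphrase of the rule, not the authors&#x27; actual model; the nest direction, the ten-second leader window, and the unit pull forces are all placeholder assumptions):</p>

```python
import math

NEST_DIR = (1.0, 0.0)  # placeholder unit vector pointing toward the nest

def ant_pull(age_s, load_vel, leader_window_s=10.0):
    """Pull direction of one ant under the two-state rule: a freshly
    attached ant is an 'informed leader' pulling toward the nest;
    after ~10 s it becomes an 'uninformed follower' that aligns with
    the load's current direction of motion."""
    if age_s < leader_window_s:
        return NEST_DIR
    speed = math.hypot(load_vel[0], load_vel[1])
    if speed == 0.0:
        return NEST_DIR  # nothing to follow yet, keep pulling home
    return (load_vel[0] / speed, load_vel[1] / speed)

def net_force(attachment_ages, load_vel):
    """Sum the unit pulls of every attached ant."""
    fx = sum(ant_pull(a, load_vel)[0] for a in attachment_ages)
    fy = sum(ant_pull(a, load_vel)[1] for a in attachment_ages)
    return fx, fy

# Eighty long-attached followers reinforce whatever direction the load
# is already moving in, while a lone fresh "leader" pulls toward the nest.
big_group = net_force([60.0] * 80, load_vel=(0.0, 1.0))  # → (0.0, 80.0)
lone_ant = net_force([0.0], load_vel=(0.0, 1.0))         # → (1.0, 0.0)
```

<p>Note how, with many followers attached, the summed pull simply perpetuates the load&#x27;s current motion: that persistence is the &quot;collective memory&quot; described below, and no individual ant needs to remember anything.</p>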
<p>Pause for a minute and ponder this fact. The groups of ants are much, much better at solving the puzzle than any single one of them, which means that they have a form of collective intelligence. The group is intelligent. But none of them <em>understands</em> what is happening at all. None of them thinks &quot;oh, this puzzle is tough, we need to find a different approach!&quot; All they can think is &quot;pull towards the nest, pull towards the nest!&quot;</p>
<p>This is a case of what Daniel Dennett calls <em>competence without comprehension</em>: the ability of agents (and machines) to accomplish impressive feats without having a clue as to how they&#x27;re doing it.</p>
<p>In practice, Dreyer&#x27;s team hypothesizes, the key to the increased intelligence of the ant groups is in how their movement of the T is less chaotic and more systematic. Instead of bumping around, rotating left and right and generally following an erratic wiggling motion, the object carried by larger groups of ants glides smoothly and, upon hitting against a wall, slides against it until it finds an opening. This greatly decreases the number of attempts they have to make before finding the right sequence of steps to get through the puzzle.</p>
<p>But why the smoother motion? With more ants pulling, each individual participant has less of an effect on the whole. If the T is already moving in one direction, a lot of ants would need to decide to change direction in unison in order to sway it.</p>
<blockquote>
<p>The high persistence of large groups translates into short-term collective memory.</p>
</blockquote>
<blockquote>
<p>To summarize, emergent cognitive faculties allow large ant groups to employ a heuristic that is reminiscent of the well-known “right-hand rule,” in which, upon entering a maze, the solver slides their right hand along the wall and proceeds forward without changing their direction. Moreover, the fact that the ants occasionally move away from the wall and collide again at a new location allows them to avoid infinite loops that may plague strict right-hand-rule followers. On the other hand, small ant groups exhibit random-walk-like dynamics that include futile searches and trapping in dead ends.</p>
</blockquote>
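<p>For reference, the strict version of that &quot;right-hand rule&quot; is simple enough to state in code. Here is a toy grid-maze walker (my own illustration, not taken from the paper): at every step, try to turn right first, then go straight, then left, then back.</p>

```python
def right_hand_walk(maze, start, goal, max_steps=10_000):
    """Follow the wall on the right through a grid maze.
    maze: list of equal-length strings, '#' = wall, ' ' = open,
    with a solid '#' border so the walker stays in bounds."""
    DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W as (row, col)
    (r, c), d = start, 0                       # start out facing north
    for _ in range(max_steps):
        if (r, c) == goal:
            return True
        for turn in (1, 0, 3, 2):              # right, straight, left, back
            nd = (d + turn) % 4
            nr, nc = r + DIRS[nd][0], c + DIRS[nd][1]
            if maze[nr][nc] != '#':
                r, c, d = nr, nc, nd
                break
    return False

maze = [
    "#####",
    "#   #",
    "# # #",
    "#   #",
    "#####",
]
print(right_hand_walk(maze, start=(1, 1), goal=(3, 3)))  # True
```

<p>As the authors note, a strict follower like this one can circle a free-standing pillar forever; the ants&#x27; occasional detachment from the wall is what saves them from such infinite loops.</p>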
<div class="col-start-1 col-end-1 m-10"></div>
<h3>Human Performance</h3>
<p>Needless to say, people did better than the ants in all cases, but there is a lot of juicy nuance here. Let&#x27;s look at the &quot;no talk&quot; group first.</p>
<blockquote>
<p>While restricted-communication groups of people communicate via forces as ants do, they did not show a corresponding improvement with numbers. In fact, the opposite was true as these groups performed significantly worse than individuals.</p>
</blockquote>
<p>Now, that is surprising. The conditions are exactly the same as the ants&#x27;, and yet the collective intelligence of the groups of people was <em>lower</em> than that of individuals. When working alone, people solved the puzzle in under 13 attempts about 60% of the time, yet groups of non-communicating people did so only approximately 17% of the time. That is pretty abysmal, if you ask me.</p>
<p>Things were different when talking was involved:</p>
<blockquote>
<p>Groups which were allowed to communicate reverse this effect and marginally outperform individuals.</p>
</blockquote>
<p>The group intelligence did increase in these cases, but only by a little. They solved the problem in under 13 attempts some 75% of the time (fifteen percentage points above individual solvers). On the other hand, the talking groups were considerably slower than all others, because &quot;<em>communicating groups of people spend significant time discussing and deciding on their next move and, by this, display similar performance to individuals.</em>&quot; I&#x27;ll touch on the possible reasons for this underwhelming group performance later.</p>
<p>Clearly, things work very differently here than in the ants&#x27; case. People have enormous brains capable of <a href="/posts/embedded-prophesy-devices">mentally simulating</a> future movements, and they have the major advantage of a long-term memory.</p>
<blockquote>
<p>The motion exhibited by people, either as individuals or in groups, displays even higher efficiency as these solvers tend to take the direct path [from one phase of the puzzle to the next.]</p>
</blockquote>
<p>Yet even people seem to follow relatively simple rules of thumb, at least in the &quot;restricted communication&quot; case. The pull-handles had force sensors, so the scientists could track exactly when and how each person pulled:</p>
<blockquote>
<p>The individual force-meters show that similar to ants, once motion begins, all group members tend to align their pulling efforts with it</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Summing up the force vectors of all initial pullers is equivalent to a majority vote among this leading subgroup (or <em>oligarchy</em>). Once motion commences, all other group members quickly comply with this direction and transport the load to the chosen target node.</p>
</blockquote>
<p>As we&#x27;ve seen, this strategy didn&#x27;t work out very well.</p>
<h3>Why Did Humans Proportionally Suck?</h3>
<figure><img src="https://aethermug.com/assets/posts/my-notes-on-when-ants-are-smarter-than-people/ant_human_charts.webp" alt="Two performance charts: left shows success rate versus path length with ant groups outperforming humans and single ants; right shows success rate versus number of attempts with similar performance patterns across all solver types"/><figcaption>If you are so inclined, you can pore over this exciting chart for half an hour like I did. Note the overlap region where the best ants beat the worst humans. (Dreyer et al., 2025, CC BY-NC-ND 4.0)</figcaption></figure>
<p>People have big brains, big enough to contain hundreds of thousands of <em>whole</em> ants (~650,000 average-sized ants to be precise, according to Kimi K2), so I find the difference in intelligence measured in this experiment pretty disappointing.</p>
<p>Individual people did vastly better than individual ants—that checks out—but when they got into groups, the ants got dangerously close to people:</p>
<blockquote>
<p>We find that, on average, human solvers perform better than ant solvers. However, the full performance distributions do display a small overlap as <strong>the best ant solvers outperform the worst human solvers</strong>.</p>
</blockquote>
<p>Sure, we were smarter overall, but we can&#x27;t be too proud of ourselves. (I imagine the referee of these sessions looking at their sensors after a while and going, &quot;folks, I just wanted to let you know that some ants have already finished the puzzle at this point; just sayin&#x27;.&quot;)</p>
<p>What gives? Well, it&#x27;s complicated, but the authors of the paper identify an interesting tendency, especially in the non-communicating group of people: their strategies tended to be &quot;greedier&quot;, in the sense that they often attempted the most easy- and obvious-looking state transition first. Unfortunately, this puzzle is designed to require indirect, non-obvious steps to solve, so the greedy choices were usually wrong. But why the extra greediness?</p>
<p>One potential factor is that, when they can&#x27;t tell each other what&#x27;s around the corner, people in a group have a limited visibility of the overall situation from where they are standing. That is a physical factor. But there seems to be a psychological side, too:</p>
<blockquote>
<p>Greediness may also be related to the empirical observation that when debate is prohibited, people tend to reach a consensus quickly. In this case, in line with the notion of groupthink, people tend to forsake their personal opinion and promote a different one, not because they think it is the better option, but because it is the option they believe is most likely to be independently chosen by others.</p>
</blockquote>
<p>Follow the majority, they must know what they&#x27;re doing, right?</p>
<blockquote>
<p>It may be the case that this assessment of the majority opinion is, in fact, the minority opinion, which is reminiscent of the social phenomenon of <a href="https://en.wikipedia.org/wiki/Pluralistic_ignorance" rel="nofollow noopener noreferrer" target="_blank">pluralistic ignorance</a>.</p>
</blockquote>
<p>Here you might be wondering how this differs from the ant teams, since we&#x27;ve seen that following the majority is the core of their strategy, too. There can&#x27;t be a greedier strategy than &quot;pull towards the nest, pull towards the nest!&quot;</p>
<p>The short answer is that yes, the ants also eagerly rush all together in wrong directions, as confirmed by the fact that they still perform (mostly) worse than even the noncommunicating human groups. But at least the groups of ants do strictly better than their individuals. The paper&#x27;s explanation for this is nuanced.</p>
<blockquote>
<p>We assume that while longhorn crazy ants discern the context of cooperative transport, they make no distinctions regarding the geometry of the specific problem and always apply the same individual scale behavioral rules.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/my-notes-on-when-ants-are-smarter-than-people/1024px-Longhorn_Crazy_Ant_-_Paratrechina_longicornis,_Dagny_Johnson_State_Park,_Key_Largo,_Florida.webp" alt="Three longhorn crazy ants on bright yellow-green surface, showing characteristic brown bodies, long antennae, and elongated legs"/><figcaption>Longhorn crazy ants (Judy Gallagher, CC BY 2.0, via Wikimedia Commons)</figcaption></figure>
<p>I.e., ants are equally greedy at all levels but, instead of being their doom, this alignment and simplicity of behavior is the key to their collective success. By dumbly pulling in one direction at a time, the large groups of ants create a sort of momentum in the movement, which acts as a form of collective memory. None of them is clever enough to remember the direction it was going a moment earlier, but the group as a whole does. So they end up implementing the right-hand technique for solving mazes, which happens to be rather effective.</p>
<p>Every single human, on the other hand, can stop and reflect, imagine different options, and prune those that seem to be clearly hopeless.</p>
<blockquote>
<p>People are more flexible in selecting tools from their cognitive repertoire and can finely adjust their problem-solving tactics to suit the particular task at hand. While this flexibility can enhance individual performance, it inevitably results in interpersonal differences that may require more advanced communication to avoid worsening collective performances and allow for effective cooperation.</p>
</blockquote>
<p>Inside single brains, there is a sort of internal debate—what we call &quot;thinking&quot;. This debate can be partly &quot;scaled up&quot;, so to speak, when group members are able to talk:</p>
<blockquote>
<p>Communicating groups reach consensus in a very different manner [compared to non-communicating groups]: at the beginning of a solution attempt, and before any motion commences, they tend to spend tens of seconds conversing. ... The ability to discuss frees the group from the urge to make a single-shot decision and they can, instead, take their time to advocate for less obvious, but more accurate choices toward a joint decision.</p>
</blockquote>
<p>Just like lone humans, the groups that can talk are able to do this thing called &quot;thinking&quot;, and it happens <em>between</em> people, not only <em>within</em> each skull. This option is unavailable to the groups that are not allowed to communicate, which is why their performance is more ant-like. The saying that &quot;two heads are better than one&quot; should come with the fine print, &quot;if they can talk to each other.&quot;</p>
<p>That said, the performance of talking groups is only slightly better than that of single people. Apparently people are able to correct each other&#x27;s ideas—someone might point out a mistake that another participant had overlooked—but we&#x27;re not seeing the dramatic leaps in group intelligence that we saw in the ants.</p>
<p>The researchers&#x27; reasonable hypothesis to explain this is that unlimited communication groups informally or implicitly elect one person as their leader. But, since those groups perform worse than the best individual solvers, we must infer that they are not very good at choosing leaders.</p>
<blockquote>
<p>Therefore, communication does not significantly help the group distinguish a competent member from the rest and follow her lead.</p>
</blockquote>
<h3>My Takeaways</h3>
<p>What can we learn from all this? Does understanding this paper make us better at collective action?</p>
<p>The temptation is to infer that we should act more like the ants: find a set of simple rules governing individual behavior, then have everyone just do their job without worrying about the big picture. Have someone draft the perfect formula, distribute the instructions to everyone top-down, and the resulting harmony will boost our overall intelligence, right? Of course not! That is exactly what the non-talking human groups did—be greedy and too eager to act—and they became dumber, not smarter!</p>
<p>A more cautious takeaway might be that the ants demonstrated the tremendous <em>potential</em> for increased intelligence in groups. If humans are still unable to increase their intelligence dozens-fold like the ants did, it may be just a matter of removing the procedural obstacles, whatever they are.</p>
<p>I don&#x27;t think you can generalize these results into concrete operational advice. This is such a specific kind of problem, and rather simple. The best strategies for solving this sort of puzzle—even assuming we know them—will probably be useless in most other contexts. Also, the experiment was explicitly designed to remove most of the planning and thinking time for people. The unlimited-communication groups had the opportunity to talk at length, but what if the researchers had let them reflect on the problem for a whole day? What if they had given them pen and paper, computers, books about geometric puzzles, and training for such situations? A lot of the combined intelligence of us <em>homo sapiens</em> resides in our capacity to form castles in the mind, and to access external and shared memory. None of that was allowed to shine in this experiment, for obvious and good reasons.</p>
<p>Still, I find this work by Dreyer et al. fascinating for the way it demonstrates that the interpersonal dynamics make <em>all</em> the difference. It also shows what I call a fruitful <a href="/posts/shake-up-your-boundaries">shifting of boundaries</a>: groups can be treated as their own <a href="/posts/a-framing-and-model-about-framings-and-models">black boxes</a> with behaviors, strategies, and intelligence that don&#x27;t directly map to those of their constituents.</p>
<p>Here are some questions that surfaced in my mind while reading this:</p>
<ul>
<li>How does the gap between individual and group intelligence change for more complex problems, especially those too complex to fully model in any single mind?</li>
<li>Assuming the ability to communicate, when does this gap reverse and become negative (i.e. individuals fare better), if ever?</li>
<li>Can people achieve the same leaps in collective intelligence as ants do?</li>
<li>Can we have both human-level unit intelligence and ant-level concerted synergy? ●</li>
</ul>

<p>You can watch other fun videos from the experiment <a href="https://www.pnas.org/doi/10.1073/pnas.2414274121#supplementary-materials" rel="nofollow noopener noreferrer" target="_blank">here</a>.</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/my-notes-on-when-ants-are-smarter-than-people/azzam-qourti-5IV6_bnr7wg-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Roam]]></title>
            <link>https://aethermug.com/posts/roam?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/roam</guid>
            <pubDate>Thu, 20 Nov 2025 01:57:45 GMT</pubDate>
            <content:encoded><![CDATA[<p>You&#x27;re looking for the Place.</p>
<p>Not a special place, or an important place. The <em>right</em> place for yourself. A place that means nothing to anyone else, but that completely fills the nooks and crannies of your being and lets you hold on to the world as firmly as is possible for a human being. A place that is like it was made for you, like a rock of the perfect size and shape to be held snugly in your hand. But it&#x27;s not just the shape of this place that matters to you. It&#x27;s the way light enters it, the way it is positioned and attached to the other places around it, the way it absorbs or reflects the sound of your footsteps.</p>
<p>Where do you start looking for such a place? No maps bear marks for it, no road signs show you the direction you ought to take. Even if you were lucky enough to encounter someone who has seen your Place, they wouldn&#x27;t know that. To them, it would be an anonymous, insignificant spot far away from the Place <em>they</em> are looking for. No one can lead you there. Indeed, it may be moving at all times, depending on the tides of oceans and peoples, on the movements of your dear ones and on the evolution of nature&#x27;s forms.</p>
<p>Over the years, if you travel and go around a good deal, you may come upon many a place that calls to you, invites you personally into its premises, captures your gaze a few moments longer than would be normal. The narrow staircase-street in a quiet medieval town; a grassy spot between a boulder and two trees along a mountain trail; the uncertain band of terrain where sand turns to grass on the beach of an uninhabited island. For an instant after you see such a place, you wonder if you&#x27;ve finally come upon the one Place that you&#x27;ve been looking for all these years. You briefly survey its curves and textures, the resonances of its colors and sounds. You breathe its air into your lungs to feel it inside you. Some of these places are close enough to your own Place that it wouldn&#x27;t be unwise of you to stop there and make them a lasting part of your life. Yours wouldn&#x27;t be a bad life if you could come back to such places over and over when you feel the need to elevate your spirit above the clouds and into the rest of the Universe. But somewhere inside you know, after those instants of contemplation, that it&#x27;s not the Place you seek. It is a <em>good place</em>, one to be cherished even, but it is not where you will fit in your entirety. If you were to interrupt your search there, you would be giving up on something precious. So you divert your gaze, reluctantly turn away, and continue walking up the mountain path, towards the other side of the island where everyone is waiting for you, or to wherever else you were going a minute before. Your search goes on.</p>
<p>Few people actually manage to find their Place. Of those, even fewer are able to seize it and make it their home. Those that do, exist in a shroud of bliss. You don&#x27;t hear from them, because they don&#x27;t need to talk about it. The rest of us, many at least, keep looking far and wide, or near and close. In doing so, we live. We are already creating a Place or many places for others to find, or for our future selves. We talk about the search, and share it with others. And while we may not fit into the Universe as snugly as our Place would afford us, many of us can be happy anyway, because we know that by not stopping we are ever getting closer. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/roam/casey-horner-4rDCa5hBlCs-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Linear Algebra Explains Why Some Words Are Effectively Untranslatable]]></title>
            <link>https://aethermug.com/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable</guid>
            <pubDate>Thu, 13 Nov 2025 06:54:50 GMT</pubDate>
            <description><![CDATA[A modest mathematical framing of language]]></description>
            <content:encoded><![CDATA[<p>A part of me still hasn&#x27;t recovered from learning that some people believe there is no such thing as an untranslatable word. I&#x27;ve written about why I disagree <a href="/posts/is-there-anything-untranslatable">before</a>, but that explanation didn&#x27;t satisfy me completely. There was a stronger argument to be made, I thought, but I couldn&#x27;t put it into words. Now I remember, though: you need to see language as (a little bit) like math. Call me crazy, but I think that language translation is like a change of basis in linear algebra.</p>
<p>Me making weird connections like this might simply be an occupational hazard. Both my PhD research and my first job had to do with controlling the position and orientation of spacecraft and rocks in space, which means that I spent years juggling vectors, matrix multiplications, and reference frames almost daily. Still, I think it is simple enough to be understood by anyone, so hear me out.</p>
<p>(<em>You might remember linear algebra from high school. It&#x27;s that subfield where you write about stuff like this:</em></p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>M</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>3</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>3</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">M = \begin{bmatrix}
  1 &amp; 0 &amp; 0 \\
  2 &amp; 1 &amp; 0 \\
  3 &amp; 3 &amp; 1
\end{bmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em"></span><span class="mord mathnormal" style="margin-right:0.10903em">M</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.05em"><span class="pstrut" style="height:5.6em"></span><span style="width:0.667em;height:3.600em"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.21em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.01em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-1.81em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em"></span><span class="arraycolsep" style="width:0.5em"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.21em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.81em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em"></span><span class="arraycolsep" style="width:0.5em"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.21em"><span class="pstrut" 
style="height:3em"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.05em"><span class="pstrut" style="height:5.6em"></span><span style="width:0.667em;height:3.600em"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.1667em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em">y</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span></span></span></span></span>
<p><em>If the mere sight of the above is like a punch in the face for you, don&#x27;t worry. I&#x27;m not going to math you to death in what follows. I will only remind you of a tiny basic part of it that I think relates to languages.)</em></p>
<p>During those same mathematical days, I was also learning Japanese. The language fascinated me for many reasons, like its <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">beautiful dissociation</a> between written and spoken words and its <a href="/posts/i-like-the-sound-of-how-you-look-at-me">many</a> <a href="/posts/what-if-gratitude-was-built-into-the-grammar">unique</a> <a href="/posts/in-japanese-you-need-a-dictionary-to-count-things">quirks</a>, but I was also struck early on by something a bit more meta: how hard it is to translate things to and from it.</p>
<p>These two concurrent interests made it hard for me not to see a connection. For almost a decade now, I&#x27;ve held it in a corner of my mind without telling anyone, perhaps because I thought it would be seen as too outrageous, but hey! Now LLMs are popular and they <em>literally</em> handle words and concepts as vectors with linear algebra operations, so maybe my analogy isn&#x27;t <em>that</em> out there. Let me give it a try.</p>
<h3>The Case of Vectors</h3>
<p>Contrary to popular belief, a vector is not &quot;a list of numbers&quot; but an abstract object with no predefined way to express it.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vec_1.webp" alt="You can think of vectors as arrows floating in space. No numbers involved." title="Image of two vectors with the same origin but different directions, on a blank background. They are labeled a and b."/><figcaption>You can think of vectors as arrows floating in space. No numbers involved.</figcaption></figure>
<p>But a vector is not very useful in this abstract state. We need a way to write it down so that we can manipulate it with algebra and communicate it to others. We do this by choosing a frame of reference—or, more accurately, a set of vectors to use as a &quot;basis&quot; to quantify all others.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vec_2.webp" alt="In the 2D case like this, if you set two vectors (e1 and e2) as the basis and decide that they have length 1, then you can find the coordinates of all other vectors based on their comparison to that basis. The basis vectors act as measuring sticks." title="Image of the same two vectors a and b, with the addition of two basis vectors titled e1 and e2 with the text &#x27;length = 1&#x27;. Both the a and b vectors have lines projecting them perpendicularly against the e1 and e2 vectors, with numbers expressing what fraction of that vector&#x27;s length the line intersects at."/><figcaption>In the 2D case like this, if you set two vectors (e1 and e2) as the basis and decide that they have length 1, then you can find the coordinates of all other vectors based on their comparison to that basis. The basis vectors act as measuring sticks.</figcaption></figure>
<p>This is where numbers come into play. By projecting the vectors against the basis vectors, you can assign them lists of numbers (two numbers each, in this two-dimensional case):</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi mathvariant="bold">a</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.7</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow><mo separator="true">,</mo><mspace width="1em"></mspace><mi mathvariant="bold">b</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.8</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.3</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">\mathbf{a} = \begin{pmatrix} 0.5 \\ 0.7 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 0.8 \\ 0.3 \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf">a</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.5</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span 
class="mord"><span class="mord">0.7</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span><span class="mspace" style="margin-right:0.1667em"></span><span class="mpunct">,</span><span class="mspace" style="margin-right:1em"></span><span class="mspace" style="margin-right:0.1667em"></span><span class="mord mathbf">b</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.8</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span></span></span></span></span>
<p>Those numbers are the <em>coordinates</em> of the vectors in that basis. For instance, the vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi mathvariant="bold">a</mi></mrow><annotation encoding="application/x-tex">\mathbf{a}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf">a</span></span></span></span> can be read as &quot;<em>half as long as the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">e</mi><mn mathvariant="bold">1</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{e_1}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">e</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em"><span></span></span></span></span></span></span></span></span></span> vector along the direction of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">e</mi><mn mathvariant="bold">1</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{e_1}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span 
class="mord"><span class="mord mathbf">e</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em"><span></span></span></span></span></span></span></span></span></span>, and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>0.7</mn></mrow><annotation encoding="application/x-tex">0.7</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6444em"></span><span class="mord">0.7</span></span></span></span> times as long as the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">e</mi><mn mathvariant="bold">2</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{e_2}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">e</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em"><span></span></span></span></span></span></span></span></span></span> vector along the direction of <span class="katex"><span 
class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">e</mi><mn mathvariant="bold">2</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{e_2}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">e</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em"><span></span></span></span></span></span></span></span></span></span></em>.&quot;</p>
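<p>This projection step can be sketched in a few lines of code. The snippet below is a minimal illustration of my own (not from any library for this), assuming an orthonormal basis written out in ambient coordinates, with numbers chosen to echo the example above:</p>

```python
import math

# The vector a from the example, pinned down in ambient coordinates
# purely so we can compute with it (illustrative numbers).
a = (0.5, 0.7)

# An orthonormal basis: e1 and e2 have length 1 and are perpendicular.
e1 = (1.0, 0.0)
e2 = (0.0, 1.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Projecting a onto each unit basis vector yields its coordinates
# in that basis: the basis vectors act as measuring sticks.
coords = (dot(a, e1), dot(a, e2))
print(coords)  # (0.5, 0.7): half of e1, 0.7 of e2
```

<p>With a different pair of unit vectors as the basis, the same two dot products would return different numbers for the very same vector.</p>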
<p>The important thing I want to convey here (and the last mathy thing to remember) is that if you were to choose a different basis, the same vector would have different coordinates.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vec_3.webp" alt="Same vectors, different basis: the numbers are different." title="Image of the same two vectors a and b, this time with a different pair of perpendicular basis vectors titled u1 and u2 with a different angle respect to the two in the previous image. The text still says &#x27;length = 1&#x27; for these basis vectors. The lines projecting a and b perpendicularly against the u1 and u2 vectors now show different numbers."/><figcaption>Same vectors, different basis: the numbers are different.</figcaption></figure>
<p>In this new basis, the two vectors are written as:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi mathvariant="bold">a</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.8</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.2</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow><mo separator="true">,</mo><mspace width="1em"></mspace><mi mathvariant="bold">b</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.8</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>0.2</mn></mrow></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">\mathbf{a} = \begin{pmatrix} 0.8 \\ 0.2 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 0.8 \\ -0.2 \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf">a</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.8</span></span></span><span style="top:-2.41em"><span class="pstrut" 
style="height:3em"></span><span class="mord"><span class="mord">0.2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span><span class="mspace" style="margin-right:0.1667em"></span><span class="mpunct">,</span><span class="mspace" style="margin-right:1em"></span><span class="mspace" style="margin-right:0.1667em"></span><span class="mord mathbf">b</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.8</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">−</span><span class="mord">0.2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span></span></span></span></span>
<p>In short, change the basis (e.g. from &quot;<span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">e</mi><mn mathvariant="bold">1</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{e_1}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">e</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em"><span></span></span></span></span></span></span></span></span></span> &amp; <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">e</mi><mn mathvariant="bold">2</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{e_2}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">e</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:0.15em"><span></span></span></span></span></span></span></span></span></span>&quot; to  &quot;<span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">u</mi><mn mathvariant="bold">1</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{u_1}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">u</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em"><span></span></span></span></span></span></span></span></span></span> &amp; <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi mathvariant="bold">u</mi><mn mathvariant="bold">2</mn></msub></mrow><annotation encoding="application/x-tex">\mathbf{u_2}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.5944em;vertical-align:-0.15em"></span><span class="mord"><span class="mord mathbf">u</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathbf mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:0.15em"><span></span></span></span></span></span></span></span></span></span>&quot;) and the same abstract object (vector) will be represented with different numbers. You can do all the same operations with them, like changing the vector&#x27;s length and direction, calculating the angle between the vectors, and so on, and you&#x27;ll get the same results in both bases, because they are operations on the same objects. The choice of basis is merely cosmetic from this point of view.</p>
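<p>That last claim—that the choice of basis is cosmetic—is easy to check numerically. Below is a minimal sketch of my own (the 30-degree rotation angle is an arbitrary choice, and the coordinates are the illustrative ones from the equations above): the coordinates of both vectors change under the new basis, but their lengths and the angle between them do not.</p>

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    return math.sqrt(dot(u, u))

def angle(u, v):
    # Angle between two vectors, computed from their coordinates (radians).
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

def change_basis(u, theta):
    # Coordinates of u in a basis rotated by theta: an orthonormal
    # change of basis, i.e. a rotation of the measuring sticks.
    c, s = math.cos(theta), math.sin(theta)
    return (c * u[0] + s * u[1], -s * u[0] + c * u[1])

a = (0.5, 0.7)   # coordinates in the first basis (illustrative numbers)
b = (0.8, 0.3)

theta = math.radians(30)  # arbitrary rotation of the basis
a2 = change_basis(a, theta)
b2 = change_basis(b, theta)

# Different numbers...
print(a2, b2)
# ...but the same geometry: lengths and angles are unchanged.
print(abs(norm(a) - norm(a2)) < 1e-12)           # True
print(abs(angle(a, b) - angle(a2, b2)) < 1e-12)  # True
```

<p>The invariance is no accident: a rotation of the basis preserves dot products, and lengths and angles are built from dot products alone, so every operation on the underlying objects gives the same answer in both bases.</p>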
<h3>The Case of Language</h3>
<p>Now let&#x27;s turn to language and see the parallel.</p>
<p>Contrary to popular belief, a concept is not a word or a group of words but an abstract object in your mind.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/joukyou.webp" alt="A concept I have in mind now, shown as a drawing rather than words." title="AI drawing of a woman with a backpack walking under cherry blossoms, with a bullet train on the background."/><figcaption>A concept I have in mind now, shown as a drawing rather than words.</figcaption></figure>
<p>But a concept is not very useful in this abstract state. We need a way to write it down so that we can manipulate it with grammar and communicate it to others. We do this by choosing a language that is shared with the receiver of the message.</p>
<p>For the concept in my head right now, and choosing the English language to represent it, you get this:</p>
<blockquote>
<p><strong>&quot;Going to Tokyo&quot;</strong></p>
</blockquote>
<p>This is where words come into play. By projecting the abstract idea onto the standard English vocabulary and grammar, you can assign it a list of words:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>💭</mtext><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Going</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>to</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Tokyo</mtext></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">💭 = \begin{pmatrix} \text{Going} \\ \text{to} \\ \text{Tokyo} \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.3669em"></span><span class="mord">💭</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.05em"><span class="pstrut" style="height:5.6em"></span><span style="width:0.875em;height:3.600em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="3.600em" viewBox="0 0 875 3600"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,84c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-92c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.21em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord text"><span class="mord">Going</span></span></span></span><span style="top:-3.01em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord text"><span class="mord">to</span></span></span></span><span style="top:-1.81em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord text"><span class="mord">Tokyo</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em"><span style="top:-4.05em"><span class="pstrut" style="height:5.6em"></span><span style="width:0.875em;height:3.600em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="3.600em" viewBox="0 0 875 3600"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,9
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-144c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Those words are the equivalent of the coordinates of vectors in linear algebra. The way I just wrote it is not accurate, though, because it makes it look as if English had only 3 dimensions (3 words), just like the 2-element vectors were 2-dimensional. English actually has hundreds of thousands of words, so we would need a vector that long to fully represent it, with blank spaces for all the words that aren&#x27;t involved in this case.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>💭</mtext><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Going</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>to</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Tokyo</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi mathvariant="normal">⋮</mi><mpadded height="0em" voffset="0em"><mspace mathbackground="black" width="0em" height="1.5em"></mspace></mpadded></mrow></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">💭 = \begin{pmatrix} \text{Going} \\ \text{to} \\ \text{Tokyo} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.3669em"></span><span class="mord">💭</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:9.06em;vertical-align:-4.28em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.7499em"><span style="top:-6.7499em"><span class="pstrut" 
style="height:11em"></span><span style="width:0.875em;height:9.000em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="9.000em" viewBox="0 0 875 9000"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,5484c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-5492c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.2501em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.78em"><span style="top:-7.6275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">Going</span></span></span></span><span style="top:-6.4275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">to</span></span></span></span><span style="top:-5.2275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">Tokyo</span></span></span></span><span style="top:-4.0275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-2.8275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-1.6275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:0.2325em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord"><span class="mord">⋮</span><span class="mord rule" style="border-right-width:0em;border-top-width:1.5em;bottom:0em"></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.28em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.7499em"><span style="top:-6.7499em"><span class="pstrut" style="height:11em"></span><span 
style="width:0.875em;height:9.000em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="9.000em" viewBox="0 0 875 9000"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,5409
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-5544c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.2501em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>The English language offers its speakers many other words, like <em>camel</em>, <em>frolic</em>, and <em>or</em>, but in this case none of them was necessary to express the idea that was in my mind, so they remain empty (<span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi mathvariant="normal">∅</mi></mrow><annotation encoding="application/x-tex">\emptyset</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em"></span><span class="mord">∅</span></span></span></span>) and absent from my utterance of &quot;going to Tokyo&quot;.</p>
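<p>As a toy sketch (my own illustration, not part of the original argument), here is what that sparse &quot;word vector&quot; could look like in code. The vocabulary is a tiny invented stand-in for the real hundreds of thousands of words:</p>

```python
# Toy sketch: a phrase as a sparse vector over a language's vocabulary.
# The vocabulary below is a tiny invented stand-in for the real one.
english_vocabulary = ["camel", "frolic", "going", "or", "to", "tokyo"]

def phrase_to_vector(phrase, vocabulary):
    """Project a phrase onto the vocabulary: one coordinate per word."""
    words = phrase.lower().split()
    return [words.count(word) for word in vocabulary]

thought = phrase_to_vector("Going to Tokyo", english_vocabulary)
print(thought)  # [0, 0, 1, 0, 1, 1] -- most coordinates stay "empty"
```

<p>Almost every coordinate is zero, just like the ∅ entries in the vector above: the phrase only &quot;activates&quot; the handful of words it actually uses.</p>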
<figure><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vackground-com-F0qfw5H-pUU-unsplash.webp" alt="Abstract mosaic of irregular polygons in purple, pink, and white"/><figcaption>Image by vackground.com, Unsplash</figcaption></figure>
<p>Of course, just as with vectors and bases, you can express the same concept in a different language. If I chose Italian as the &quot;basis&quot;, you&#x27;d get:</p>
<blockquote>
<p><strong>&quot;Andare a Tokio&quot;</strong></p>
</blockquote>
<p>The concept is the same, but now its representation (its &quot;word coordinates&quot;) is different:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>💭</mtext><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Andare</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>a</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Tokio</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi mathvariant="normal">⋮</mi><mpadded height="0em" voffset="0em"><mspace mathbackground="black" width="0em" height="1.5em"></mspace></mpadded></mrow></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">💭 = \begin{pmatrix} \text{Andare} \\ \text{a} \\ \text{Tokio} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.3669em"></span><span class="mord">💭</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:9.06em;vertical-align:-4.28em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.7499em"><span style="top:-6.7499em"><span class="pstrut" 
style="height:11em"></span><span style="width:0.875em;height:9.000em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="9.000em" viewBox="0 0 875 9000"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,5484c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-5492c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.2501em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.78em"><span style="top:-7.6275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">Andare</span></span></span></span><span style="top:-6.4275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">a</span></span></span></span><span style="top:-5.2275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">Tokio</span></span></span></span><span style="top:-4.0275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-2.8275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-1.6275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:0.2325em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord"><span class="mord">⋮</span><span class="mord rule" style="border-right-width:0em;border-top-width:1.5em;bottom:0em"></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.28em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.7499em"><span style="top:-6.7499em"><span class="pstrut" style="height:11em"></span><span 
style="width:0.875em;height:9.000em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="9.000em" viewBox="0 0 875 9000"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,5409
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-5544c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.2501em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>You can, in theory, say those two different sequences of sounds in those two languages, and obtain the same effect in the mind of the receiver. The only requirement is that all people involved know both languages.</p>
<h3>Cosmetics Matter</h3>
<p>Alright, the parallel seems plain enough. Sadly, expressing all language in that vector-like format would use up a lot of ink and is not really practical for everyday use. (LLMs kind of achieve that feat, but in a more convoluted and definitely not human-readable way.)</p>
<p>Why do I think it is interesting, then? Because &quot;untranslatable&quot; words exist.</p>
<p>The words that people sometimes call &quot;untranslatable&quot; are terms that have a clear and widely understood meaning in one language, but no equivalent in another. I gave <a href="/posts/is-there-anything-untranslatable">some examples</a> from Japanese before, and the web is awash with blog posts <a href="https://www.rocketlanguages.com/blog/20-of-the-worlds-most-beautiful-untranslatable-words" rel="nofollow noopener noreferrer" target="_blank">enumerating</a> curious words like that from many other languages. The key takeaway is that what takes a single word in Language 1 can only be expressed with some accuracy in Language 2 if you use many words to explain it in all of its facets.</p>
<p>I wrote that the choice of basis in linear algebra is &quot;cosmetic&quot;, because the result of an operation on vectors does not change depending on the basis. But that is only the ideal, mathematical way to look at it. We humans are not so ideal. We are weak and fallible and sometimes we even stink.</p>
<p>Which vector representation do you like better between these two?</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi mathvariant="bold">v</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.498213</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.731212</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">\mathbf{v} = \begin{pmatrix} 0.498213 \\ 0.731212 \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf" style="margin-right:0.01597em">v</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.498213</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.731212</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span></span></span></span></span>
<p>or</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi mathvariant="bold">v</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">\mathbf{v} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf" style="margin-right:0.01597em">v</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em"></span><span class="minner"><span class="mopen delimcenter" style="top:0em"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em"><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em"><span class="delimsizing size3">)</span></span></span></span></span></span></span>
<p>Both represent the same vector under different bases, because I chose a basis that has the vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi mathvariant="bold">v</mi></mrow><annotation encoding="application/x-tex">\mathbf{v}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf" style="margin-right:0.01597em">v</span></span></span></span> <em>as one of the basis vectors</em>. Here is what the two cases look like graphically:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vec_4.webp" alt="Left: a vector in a generic basis, with both components not zero. Right: the same vector in a basis where it is itself one of the basis vectors. In this case, the coordinates are 1 on the first component e1 (it is &quot;one times&quot; itself) and 0 on the other (there is &quot;nothing to project&quot; on it)." title="Images of a vector under two bases, shown side by side. One represents the vector in a generic basis, similar to the previous examples. The other shows the same vector in a basis where one of the basis vectors coincides exactly with the vector v. In this second case the projection onto the other basis vector is zero."/><figcaption>Left: a vector in a generic basis, with both components not zero. Right: the same vector in a basis where it is itself one of the basis vectors. In this case, the coordinates are 1 on the first component e1 (it is &quot;one times&quot; itself) and 0 on the other (there is &quot;nothing to project&quot; on it).</figcaption></figure>
<p><em>In theory</em>, both representations are exactly equivalent. A computer wouldn&#x27;t have any preference. But the second representation, the clean one with a simple one and zero, is not only easier for a person to remember and grasp; it is also easier to handle mathematically. Expressions simplify readily: everything multiplied by zero vanishes, and multiplications by one leave terms unchanged. The calculations proceed faster and with fewer errors.</p>
<p>This means that, at least for our feeble organic minds, the choice of basis does matter.</p>
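<p>A quick numerical sketch (my own, with the same illustrative numbers as above) makes this concrete: if you pick a basis whose first vector is the vector itself, its coordinates collapse to a clean 1 and 0.</p>

```python
# Sketch: the same vector v has messy coordinates in the standard basis,
# but coordinates (1, 0) in a basis that contains v itself.
v = (0.498213, 0.731212)

b1 = v                       # first basis vector IS v
b2 = (-0.731212, 0.498213)   # any independent vector (here: perpendicular to v)

def coordinates_in_basis(vec, b1, b2):
    # Solve c1*b1 + c2*b2 = vec (a 2x2 linear system, via Cramer's rule).
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = (vec[0] * b2[1] - b2[0] * vec[1]) / det
    c2 = (b1[0] * vec[1] - vec[0] * b1[1]) / det
    return (c1, c2)

print(coordinates_in_basis(v, b1, b2))  # (1.0, 0.0): v is "one times itself"
```

<p>The vector hasn&#x27;t changed; only its description has, and the second description is the one our brains and our pencils prefer.</p>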
<p>The same holds for language. A concept that took several words in English...</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>💭</mtext><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Going</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>to</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>Tokyo</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi mathvariant="normal">⋮</mi><mpadded height="0em" voffset="0em"><mspace mathbackground="black" width="0em" height="1.5em"></mspace></mpadded></mrow></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">💭 = \begin{pmatrix} \text{Going} \\ \text{to} \\ \text{Tokyo} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.3669em"></span><span class="mord">💭</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:9.06em;vertical-align:-4.28em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.7499em"><span style="top:-6.7499em"><span class="pstrut" 
style="height:11em"></span><span style="width:0.875em;height:9.000em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="9.000em" viewBox="0 0 875 9000"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,5484c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-5492c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.2501em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.78em"><span style="top:-7.6275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">Going</span></span></span></span><span style="top:-6.4275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">to</span></span></span></span><span style="top:-5.2275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord">Tokyo</span></span></span></span><span style="top:-4.0275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-2.8275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-1.6275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:0.2325em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord"><span class="mord">⋮</span><span class="mord rule" style="border-right-width:0em;border-top-width:1.5em;bottom:0em"></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.28em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:4.7499em"><span style="top:-6.7499em"><span class="pstrut" style="height:11em"></span><span 
style="width:0.875em;height:9.000em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="9.000em" viewBox="0 0 875 9000"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,5409
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-5544c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:4.2501em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>in Japanese is a single word: <em>joukyou</em> (上京)</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>💭</mtext><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtext>上京</mtext></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi mathvariant="normal">∅</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi mathvariant="normal">⋮</mi><mpadded height="0em" voffset="0em"><mspace mathbackground="black" width="0em" height="1.5em"></mspace></mpadded></mrow></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">💭 = \begin{pmatrix} \text{上京} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.3669em"></span><span class="mord">💭</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:6.66em;vertical-align:-3.08em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.55em"><span style="top:-5.55em"><span class="pstrut" style="height:8.6em"></span><span style="width:0.875em;height:6.600em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="6.600em" viewBox="0 0 875 6600"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,3084c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-3092c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.05em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.58em"><span style="top:-6.4275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord text"><span class="mord cjk_fallback">上京</span></span></span></span><span style="top:-5.2275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-4.0275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-2.8275em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord">∅</span></span></span><span style="top:-0.9675em"><span class="pstrut" style="height:3.6875em"></span><span class="mord"><span class="mord"><span class="mord">⋮</span><span class="mord rule" style="border-right-width:0em;border-top-width:1.5em;bottom:0em"></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.08em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.55em"><span style="top:-5.55em"><span class="pstrut" style="height:8.6em"></span><span style="width:0.875em;height:6.600em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="6.600em" viewBox="0 0 875 6600"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,3009
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-3144c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.05em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Looking at it in the other direction, a native word in one language is like one of its &quot;basis vectors&quot;—simple and straightforward—but the underlying concept might need to be &quot;spread out&quot; over several words when expressed in a different language (i.e. a different &quot;basis&quot;).</p>
<p>Arguably, having a compact word makes it easier not only to express the concept, but also to think about it. This is the <a href="https://en.wikipedia.org/w/index.php?title=Linguistic_relativity" rel="nofollow noopener noreferrer" target="_blank">Sapir-Whorf debate</a>, but I&#x27;ll leave that for another day. Instead, I want to show you what this means for untranslatability, because there are people who vehemently deny that untranslatable words exist.</p>
<h3>Losing in Translation</h3>
<p>I think there are two ways in which this analogy makes it quite obvious that &quot;practical untranslatability&quot; is a thing.</p>
<p>First, communication is costly, and we don&#x27;t have infinite time and space to put in all the words that are needed. Even if the word could <em>in theory</em> be explained in the other language, usually it&#x27;s not worth it.</p>
<p>For example, the Japanese term <em>mono no aware</em> (物の哀れ) could be rather accurately translated as</p>
<blockquote>
<p>a gentle, poignant sadness or pathos felt in response to the transient nature of all things, a deep awareness of their impermanence that evokes a subtle, bittersweet sorrow and a profound, quiet empathy for their passing.</p>
</blockquote>
<p>In a dictionary, perhaps that&#x27;s okay. But a dictionary is not translation. Translation is about conveying the meaning of full texts, and you can&#x27;t do that kind of multi-line expansion for every word.</p>
<p>And so the translator simplifies it to the gist, e.g. as <em>the pathos of all things</em>. This conveys most of the meaning and is usually enough, but it does lose a lot in the process.</p>
<p>This is closely analogous to the data analysis technique called Principal Component Analysis (PCA), which simplifies data by finding the directions—the &quot;principal components&quot;—along which the vectors of interest vary the most, keeping only the top few, and disregarding all the others, effectively reducing the dimensions of the data. Translators use a version of PCA every time they (begrudgingly) agree to leave the finer nuances of a concept unsaid for the sake of space.</p>
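<p>For the curious, here is a minimal numpy sketch of that PCA move (the data, dimensions, and noise level are made up purely for illustration): we find the directions that capture most of the variation, keep only the top two, and measure how much of the original is lost in the reconstruction.</p>

```python
import numpy as np

# Toy "meaning vectors": 200 points in 5 dimensions that mostly vary
# along two hidden directions, plus a little fine-grained nuance.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 5))
data = rng.normal(size=(200, 2)) @ hidden + 0.01 * rng.normal(size=(200, 5))

# PCA: center the data and find the directions of greatest variance...
centered = data - data.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# ...then keep only the top k components, discarding the finer nuances.
k = 2
reduced = centered @ components[:k].T            # shape (200, 2)
reconstructed = reduced @ components[:k] + data.mean(axis=0)

# The reconstruction error is the "meaning" lost in translation.
error = np.linalg.norm(data - reconstructed) / np.linalg.norm(data)
print(f"relative error after keeping {k} of 5 dimensions: {error:.4f}")
```

Because the toy data really does live near a two-dimensional subspace, dropping the other three dimensions loses almost nothing—just as a good gist survives heavy simplification.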
<p>But, even assuming you <em>do</em> have time to explicate, using many more words increases the risk of introducing unintended nuances that come bundled with those extra words. This is like doing PCA but selecting inappropriate basis vectors, which introduce lots of small errors in the calculation of the vector&#x27;s coordinates. Is &quot;sorrow&quot; too emotionally charged in that long translation of <em>mono no aware</em>? Does the use of &quot;passing&quot; unnecessarily remind English speakers of people dying?</p>
<p>You eventually hit diminishing returns: using more words confuses the reader instead of clarifying things further.</p>
<figure><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vackground-com-up8ooQ1Pm2s-unsplash.webp" alt="Abstract mosaic of irregular polygons in coral, turquoise, and navy"/><figcaption>Image by vackground.com, Unsplash</figcaption></figure>
<p>The second problem with translation is precision. Even if you can use many words, and even if none of those are misleading, words are still finite in number. Unlike ideal numerical coordinates, which can take any value down to the finest detail, you only have a small selection of words to convey a given bit of meaning.</p>
<p>Suppose you realize that the word &quot;subtle&quot; is not accurate enough in the &quot;...evokes a subtle, bittersweet&quot; part of the translation above. Maybe you do want to convey subtlety, but the plain word &quot;subtle&quot; feels too strong in this case. You might try to soften it with an adverb, like &quot;somewhat subtle&quot; or &quot;slightly subtle&quot;, but there aren&#x27;t many other options out there. What if none of them is perfect for your current needs?</p>
<p>In this sense, language is &quot;quantized&quot;: you can jump from one level of intensity of some meaning to the next, but you can&#x27;t express anything in between.</p>
<p>This is a problem shared by all computers. Unlike ideal numbers, the numbers in a processor necessarily carry a finite number of digits. So when you want to work with the ideal vector</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi mathvariant="bold">v</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>12.21983714303</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.6124152102345</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>3.0280184181084</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.10000003</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">\mathbf{v} = \begin{pmatrix} -12.21983714303 \\ 0.6124152102345 \\ 3.0280184181084 \\ 0.10000003 \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf" style="margin-right:0.01597em">v</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em"><span style="top:-4.65em"><span class="pstrut" style="height:6.8em"></span><span style="width:0.875em;height:4.800em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="4.800em" viewBox="0 0 875 4800"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,1284c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-1292c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em"><span style="top:-4.81em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">−</span><span class="mord">12.21983714303</span></span></span><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.6124152102345</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">3.0280184181084</span></span></span><span style="top:-1.21em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.10000003</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em"><span style="top:-4.65em"><span class="pstrut" style="height:6.8em"></span><span style="width:0.875em;height:4.800em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="4.800em" viewBox="0 0 875 4800"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,1209
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-1344c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>the storage limitations of your computer might mean that you have to content yourself with this truncated version:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi mathvariant="bold">v</mi><mo>=</mo><mrow><mo fence="true">(</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>12.219</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.61241</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>3.0280</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.1</mn></mstyle></mtd></mtr></mtable><mo fence="true">)</mo></mrow></mrow><annotation encoding="application/x-tex">\mathbf{v} = \begin{pmatrix} -12.219 \\ 0.61241 \\ 3.0280 \\ 0.1 \end{pmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4444em"></span><span class="mord mathbf" style="margin-right:0.01597em">v</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em"><span style="top:-4.65em"><span class="pstrut" style="height:6.8em"></span><span style="width:0.875em;height:4.800em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="4.800em" viewBox="0 0 875 4800"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,1284c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-1292c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em"><span style="top:-4.81em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">−</span><span class="mord">12.219</span></span></span><span style="top:-3.61em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.61241</span></span></span><span style="top:-2.41em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">3.0280</span></span></span><span style="top:-1.21em"><span class="pstrut" style="height:3em"></span><span class="mord"><span class="mord">0.1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em"><span style="top:-4.65em"><span class="pstrut" style="height:6.8em"></span><span style="width:0.875em;height:4.800em"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="4.800em" viewBox="0 0 875 4800"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,1209
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-1344c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>(Modern computers can usually handle many more digits than that, but you get the idea.)</p>
<p>Graphically, it looks something like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vec_5.webp" alt="Left: An ideal vector. Right: how that vector might be stored if the smallest increment that the computer can store is the separation between the grid lines. This is a loss of accuracy." title="Two images side by side. The first one shows a vector in a basis with its point not aligned with any of the grid of lines on the background. The second shows the same vector in a fainter color, and another vector near it with its point aligned with the closest intersection of the grid."/><figcaption>Left: An ideal vector. Right: how that vector might be stored if the smallest increment that the computer can store is the separation between the grid lines. This is a loss of accuracy.</figcaption></figure>
<p>(Incidentally, AI engineers sometimes intentionally quantize large language models to make them take up less memory, but this tends to make them dumber, because they lose nuance.)</p>
<p>With language, like with computers, we&#x27;re forced to &quot;lower the resolution&quot; of our concepts whenever we put them into words. This happens twice in translated text: once when the author first writes down their thoughts, then once more when the translator transports that already-degraded concept into a different language with different &quot;quantization steps&quot;.</p>
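<p>As a toy illustration of this &quot;grid snapping&quot;, here is a short numpy sketch (the step size is an arbitrary choice) that quantizes the ideal vector from above and measures the worst-case accuracy it loses:</p>

```python
import numpy as np

# The "ideal" vector from the post, and a crude quantizer that can only
# store multiples of a fixed step -- the grid spacing in the figure.
v = np.array([-12.21983714303, 0.6124152102345, 3.0280184181084, 0.10000003])

def quantize(x, step=0.001):
    """Snap each coordinate to the nearest representable grid point."""
    return np.round(x / step) * step

v_q = quantize(v)
print(v_q)                        # every entry lands on a grid line
print(np.max(np.abs(v - v_q)))    # worst-case loss, at most step / 2
```

No quantized coordinate can ever be more than half a step away from the ideal one, but within that band the fine detail is simply gone—like being stuck choosing between &quot;subtle&quot; and &quot;slightly subtle&quot;.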
<h3>Between the Lines</h3>
<p>I hope these rather unorthodox leaps between linguistics and mathematics helped make it almost obvious that some words and ideas are untranslatable <em>in practice</em>. I also hope you don&#x27;t take the analogy <em>too</em> seriously, because it won&#x27;t go much further than this. You might be tempted to begin talking about &quot;word matrices&quot; and whatnot, but I doubt it would help clarify things. That kind of advanced linear algebra with concepts might work for LLMs, but it doesn&#x27;t seem to map to anything intelligible for a human being, not to mention make you any wiser.</p>
<p>Besides, language has something going for it that doesn&#x27;t seem to have a mathematical equivalent: what does it mean to &quot;read between the lines&quot;?</p>
<p>It&#x27;s hard to pin down, but I think it has to do with the <em>structure</em> and <em>context</em> of the words communicating something that is not contained in any of the words themselves. Perhaps it is the clever use of those &quot;negligible coordinates&quot;—the fringe nuances—of words scattered around the text to produce a collective effect on the reader.</p>
<p>A good translator might not be able to exactly translate a given word or sentence, but they might be able to &quot;write between the lines&quot; so that it doesn&#x27;t matter very much. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable/vackground-com-7K1_uSnNoy4-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Steinbeck on Teleology]]></title>
            <link>https://aethermug.com/posts/steinbeck-on-teleology?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/steinbeck-on-teleology</guid>
            <pubDate>Thu, 06 Nov 2025 03:08:33 GMT</pubDate>
            <description><![CDATA[None of it is important or all of it is]]></description>
            <content:encoded><![CDATA[<p><em>Edit 2025-11-18: I did some more research after first publishing this, and learned that the non-teleological treatise of Chapter 14 was actually adapted from an earlier essay by Ed Ricketts. I have changed a couple of sentences to clarify this attribution. This post&#x27;s title would need to be qualified better, but I&#x27;ll leave it as-is to avoid confusion.</em></p>

<p>In the spring of 1940, while the Second World War was beginning to escalate in distant Europe, John Steinbeck and his dear friend, marine biologist Ed Ricketts, decided to go on a scientific boat expedition to the Gulf of California, also known as the Sea of Cortez.</p>
<blockquote>
<p>“Let us go,” we said, “into the Sea of Cortez, realizing that we become forever a part of it; that our rubber boots slogging through a flat of eel-grass, that the rocks we turn over in a tide pool, make us truly and permanently a factor in the ecology of the region. We shall take something away from it, but we shall leave something too.”</p>
</blockquote>
<p>Their stated goal was the description and collection of a great number of specimens of all the life forms they could find along the coast. What they actually achieved was, I think, much more: a marvelous book that speaks of the Universe.</p>
<blockquote>
<p>We had sat beside the little pool and watched the tree-frogs and the horsehair worms and the water-skaters, and had wondered how they got there, so far from other water. It seemed to us that life in every form is incipiently everywhere waiting for a chance to take root and start reproducing; eggs, spores, seeds, bacilli—everywhere. Let a raindrop fall and it is crowded with the waiting life. Everything is everywhere.</p>
</blockquote>
<p>The book, which they titled <em>The Log from the Sea of Cortez</em>, has everything I ever wanted from a book. I&#x27;m known to have a <a href="/posts/seamanship-of-life">soft spot</a> for seafaring stories, and this one is all about cramped spaces cradled by the waves; the prose is literally <em>Nobel-laureate-level</em> good; and I want nothing more from a text than depth—a quality in which the <em>Log</em> surpasses most other books I know.</p>
<p>Steinbeck and Ricketts recount every early-morning excursion on those Mexican shores, every single variant they observed of the few clades of creatures that live in the region. At the same time, they organically expand the scope of their discoveries to the broadest truths about reality and existence. Their daily activities are mostly mundane, repetitive, diligent, and yet they can never bore you because they come across as deeply meaningful. Their expedition didn&#x27;t simply catalog the species of a remote corner of the world: it also showed what they meant when they wrote that &quot;everything is everywhere&quot;.</p>
<p>The <em>Log</em> is too full of beautiful passages to do it any justice here. I only want to present you a few excerpts from a particular part of the book, Chapter 14, that touches on a theme I&#x27;ve been working on personally for a long time: teleology—the idea that many things happen for specific purposes—and the ills it engenders.</p>
<p>In the past couple of years I&#x27;ve read many papers, essays, and books on the topic of teleology and I have formed some pretty detailed ideas about it. I have a long essay on this topic coming up soon on <a href="https://planktonvalhalla.com" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a>, so I know from experience how difficult it is to create a simple <em>and</em> coherent argument on these topics. I thought it needed a lot of scientific premises to really get the point across.</p>
<p>Well, the <em>Log</em>&#x27;s fourteenth chapter explains teleology (and non-teleology) better than most others, and without requiring the reader to deeply understand the hard sciences. To my astonishment, it does so better than many eminent scientists and philosophers who have tried, and chronologically earlier than most of them! I started reading this book hoping for beautiful descriptions of nature and voyage, and did not expect to find such precious inspiration for my own research. This is why you should always read broadly.</p>
<p>This chapter seems to be mostly the brainchild of Ricketts, although the two authors shared the overall vision and Steinbeck helped weave those ideas into the narrative. The chapter&#x27;s point is that teleological thinking is a distorting and unhelpful way to look at the world. Its basic assumption is that things have to be either &quot;right&quot; or &quot;wrong&quot;, depending on whether they further a stated goal or not. But this does not reflect reality, because reality can&#x27;t be right or wrong, it can only <em>be</em>. They write:</p>
<blockquote>
<p>In their sometimes intolerant refusal to face facts as they are, teleological notions may substitute a fierce but ineffectual attempt to change conditions which are assumed to be undesirable, in place of the understanding-acceptance which would pave the way for a more sensible attempt at any change which might still be indicated.</p>
</blockquote>
<p>In other words, purpose-based thinking nudges you into a narrow view of the world where your role is to execute or protect some preordained plans, rather than seeing things in the full richness of their possibilities.</p>
<p>It is better to let go of such notions and embrace the non-teleological view of life. It is a philosophy of unbounded acceptance.</p>
<blockquote>
<p>[Non-teleological ideas] consider events as outgrowths and expressions rather than as results; conscious acceptance as a desideratum, and certainly as an all-important prerequisite. Non-teleological thinking concerns itself primarily not with what should be, or could be, or might be, but rather with what actually “is”—attempting at most to answer the already sufficiently difficult questions <em>what</em> or <em>how</em>, instead of <em>why</em>.</p>
</blockquote>
<p>Take in the world without burdening it with your judgment, say the authors. It is enough, and more rewarding, to take in the uncountable expressions of existence as they are, instead of shoehorning them into narrow schemes and subtly shaky value systems.</p>
<p>The chapter offers several examples, like the following.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<blockquote>
<p>At one time an important game bird in Norway, the willow grouse, was so clearly threatened with extinction that it was thought wise to establish protective regulations and to place a bounty on its chief enemy, a hawk which was known to feed heavily on it. Quantities of the hawks were exterminated, but despite such drastic measures the grouse disappeared actually more rapidly than before. The naively applied customary remedies had obviously failed.</p>
</blockquote>
<p>The naïveté stems from the teleological approach. If you assume that the nearly-extinct species <em>should</em> survive, its predators automatically become <em>bad</em>. The answer, then, seems straightforward: kill the <em>bad</em> hawks to save the <em>good</em> grouse. Why didn&#x27;t it work, then?</p>
<blockquote>
<p>An ecological analysis into the relational aspects of the situation disclosed that a parasitic disease, coccidiosis, was epizootic among the grouse. In its incipient stages, this disease so reduced the flying speed of the grouse that the mildly ill individuals became easy prey for the hawks. In living largely off the slightly ill birds, the hawks prevented them from developing the disease in its full intensity and so spreading it more widely and quickly to otherwise healthy fowl. Thus the presumed enemies of the grouse, by controlling the epizootic aspects of the disease, proved to be friends in disguise.</p>
</blockquote>
<p>The non-teleological mindset, here, would be one of acceptance. Nothing <em>should</em> be anything, nothing is <em>good</em> or <em>bad</em>, and everything simultaneously just <em>is</em>. The hawks <em>are</em>, just as much as the grouse <em>are</em>, and many other things <em>are</em> too: you have to take them all into account to have any hope of coming up with effective interventions. If something we do allows their population to stabilize, that&#x27;s that. If the grouse disappear, that&#x27;s that, too.</p>
<p>This stance feels wrong to some people, though, because &quot;whoever employs this type of thinking with other than a few close friends will be referred to as detached, hard-hearted, or even cruel.&quot;</p>
<p>This is a mistake, claim Steinbeck and Ricketts, because the opposite is true.</p>
<blockquote>
<p>Non-teleological methods more than any other seem capable of great tenderness, of an all-embracingness which is rare otherwise. Consider, for instance, the fact that, once a given situation is deeply understood, no apologies are required. There are ample difficulties even to understanding conditions “as is.” Once that has been accomplished, the “why” of it ... seems no longer to be preponderantly important. It needn’t be condoned or extenuated, it just “is.” It is seen merely as part of a more or less dim whole picture.</p>
</blockquote>
<p>I was surprised to read such a modern and scientifically grounded worldview from people writing in the 1930s. These views are easy enough to accept if you know something about the theory of dynamical systems, ecology, and biological complexity—none of which were really a thing (as we understand them today scientifically) when the <em>Log</em> was published!</p>
<p>But I want to conclude with what I think is the most mind-bending point in Chapter 14. The authors&#x27; non-teleological philosophy is not simply a matter of whether or not you assume goals in things and judge them based on how well those goals are accomplished. The non-teleological view is necessary also because it removes the distorting veil of causal thinking.</p>
<p>Cause-and-effect relations—the assumption that everything is caused by something else—are something we take for granted, but they are an illusion, a crutch or compromise we use to make sense of the world around us.</p>
<p>But again, everything simply <em>is</em>, everywhere and all at once, so there can be no hierarchies or before/afters:</p>
<blockquote>
<p>In such a pattern, causality would be merely a name for something that exists only in our partial and biased mental reconstructings. The pattern which it indexes, however, would be real, but not intellectually apperceivable because the pattern goes everywhere and is everything and cannot be encompassed by finite mind or by anything short of life—which it is.</p>
</blockquote>
<p>This is hard to swallow. It is even hard to understand! Why would the fact that everything <em>is</em>, without goals, make causality an illusion? Perhaps an example will help you see their point.</p>
<p>Imagine watching a horse walk behind a wooden palisade. Only thin vertical slices of the other side are visible through the planks. When the horse passes behind one of those slits, you first see its snout appear and disappear in a second. Then the ears show up—sudden dark spikes against the greenery beyond. Then comes the thickness of the neck, after which the legs flicker into view for an instant, broken-looking. The curves of the back and belly manifest to you as a smooth growth followed by a shrinkage of a vertical chestnut band. You see legs again, then the rear that collapses out of existence, and finally the precipitous dive of the short bundle of blond hairs that make up the horse&#x27;s tail. Then the horse passes behind another slit, and another, repeating the same spectacle for you several times.</p>
<p>Now suppose you have never seen a horse before in your life. Those fleeting shapes between the wooden surfaces are all you&#x27;ve ever seen of what people call &quot;horse&quot;. &quot;Horse,&quot; then, may seem to you to belong to the category of mysterious <em>events</em> rather than the category of <em>things</em>. You saw a sequence of happenings, not something that exists all at once.</p>
<p>You may even start to believe that those strange but well-ordered events—the ear-spike, the belly-pulse, the tail-dive—are linked to each other by causality: the first event triggers the one after it, the growth of a shape sets the stage and causes the occurrence of the next bold transformation.</p>
<p>Of course, you would be wrong to think that. The &quot;occurrence&quot; of the legs&#x27; complex flicker in the gap between the planks does not &quot;cause&quot; the distinctive patterns that follow it: they simply are what they are at the same time, and the regular temporal relationship between those &quot;events&quot; is only an artifact of the very limited view afforded you by the wooden barrier.</p>
<p>John Steinbeck and Ed Ricketts believed that reality is that horse, and our senses are slits in that palisade. Letting go of a teleological view of life is not equivalent to removing the palisade and seeing the horse all at once—that is not really possible for our &quot;finite minds&quot;. It means accepting that the palisade is there and that things are whole and connected despite all appearances.</p>
<p>The two mariners only spell this line of reasoning out in that one chapter, but this lens, or rather this absence of a teleological lens, pervades the entire book. Demonstrating its own point about simultaneity, the <em>Log</em> echoes the same idea in a scientifically poetic, or poetically scientific, passage at its very beginning, right after that first &quot;Let us go&quot; quote. After saying that they will take something away from the region and also leave something behind, they continue:</p>
<blockquote>
<p>And if we seem a small factor in a huge pattern, nevertheless it is of relative importance. We take a tiny colony of soft corals from a rock in a little water world. And that isn’t terribly important to the tide pool. Fifty miles away the Japanese shrimp boats are dredging with overlapping scoops, bringing up tons of shrimps, rapidly destroying the species so that it may never come back, and with the species destroying the ecological balance of the whole region. That isn’t very important in the world. And thousands of miles away the great bombs are falling and the stars are not moved thereby. None of it is important or all of it is.</p>
</blockquote>
<p>●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/steinbeck-on-teleology/dulcey-lima-WAHDR6mEv6s-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Process World, Object-Oriented Mind]]></title>
            <link>https://aethermug.com/posts/process-world-object-oriented-mind?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/process-world-object-oriented-mind</guid>
            <pubDate>Thu, 30 Oct 2025 04:47:19 GMT</pubDate>
            <description><![CDATA[How programmers' struggles are everyone's mental struggles]]></description>
            <content:encoded><![CDATA[<p>A couple of weeks ago I shared an attempt at <a href="https://aethermug.com/posts/visualizing-framings" rel="nofollow noopener noreferrer" target="_blank">visualizing framings and models</a>, the basic tools of all human reasoning. I ended up with a kind of diagram that looks very much like good ole&#x27; UML.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/process-world-object-oriented-mind/framing_uml.webp" alt="Left: A &quot;framing diagram&quot; about a mental ontology (by me). Right: an example of a class diagram in UML." title="Side-by-side comparison: left shows a simple framing diagram with blue boxes for &#x27;Bicycle&#x27; and &#x27;Road&#x27; connected by &#x27;roll on&#x27;; right shows a UML class diagram with &#x27;Car&#x27; class containing methods, connected to &#x27;Wheel&#x27; class via &#x27;uses&#x27; relationship"/><figcaption>Left: A &quot;framing diagram&quot; about a mental ontology (by me). Right: an example of a class diagram in UML.</figcaption></figure>
<p>UML stands for Unified Modeling Language, and it is a set of standardized diagram types designed to prototype and communicate the most common software concepts, from architectural structures to process sequences. There are many kinds of diagrams in UML, but one of the most popular is the &quot;class diagram&quot;, of which the right side of the image above is an example.</p>
<figure><img src="https://aethermug.com/assets/posts/process-world-object-oriented-mind/UML_diagrams_overview.webp" alt="Hierarchical tree diagram showing the taxonomy of UML diagram types. At the top is &#x27;Diagram&#x27;, which branches into &#x27;Behaviour Diagram&#x27; and &#x27;Structure Diagram&#x27;. Behaviour Diagram includes Activity, State Machine, Interaction (with subtypes Communication, Interaction Overview, Sequence, and Timing), and Use Case diagrams. Structure Diagram includes Class, Component, Object, Composite Structure, Deployment, Package, and Profile diagrams."/><figcaption>Genealogy of UML diagrams... as a UML diagram.</figcaption></figure>
<p>Class diagrams are based on the idea that you can divide the world up into things called &quot;classes&quot;, which are basically general categories of things, like <em>Person</em> and <em>Job</em>. These classes have their own properties and behaviors, and can interact with each other. For example, the diagram may show that every object in the <em>Person</em> class may have at most one link to an object in the <em>Job</em> class. Define more classes and more interactions, and you can model basically anything you want with your code.</p>
<p>This sounds exactly like what I&#x27;ve been calling <a href="/posts/a-framing-and-model-about-framings-and-models">framings and models</a> here on Aether Mug! Except I wasn&#x27;t talking about software but about the human mind.</p>
<p>When I first noticed this parallel, I thought I might have simply defaulted to a familiar thinking paradigm without even realizing it. A sort of unconscious borrowing. This is probably true of the inspiration and the details but, on reflection, the parallel runs much deeper than that.</p>
<p>The interesting question is why engineers trying to build better software programs would discover an ideal way to represent the human mind.</p>
<h3>The Rise of the Object-Oriented</h3>
<p>From the early days of computer programming, and for a long time afterward, code usually took the form of lists of instructions, like this:</p>
<ol>
<li>Start with a total of 0.</li>
<li>For each price in a given list of prices, add that price to the total.</li>
<li>Once you&#x27;ve gone through the whole list, return the final total number, which is the tally of all prices.</li>
</ol>
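<p>The three steps above can be sketched in Python (the function name and the sample prices are mine, not the post&#x27;s):</p>

```python
def total_prices(prices):
    # 1. Start with a total of 0.
    total = 0
    # 2. For each price in the list, add that price to the total.
    for price in prices:
        total += price
    # 3. Return the final total, the tally of all prices.
    return total

print(total_prices([2.50, 4.00, 3.25]))  # 9.75
```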
<p>This so-called <em>procedural</em> approach worked perfectly fine for small programs like that, but it ran into several issues for larger and more complex applications. The more moving parts you have, the more your procedural code becomes a spaghetti tangle of variables being passed around and hard-to-track operations. Often you wanted to do a very similar operation again, and you had to duplicate the same code with small variations, bloating the program. You had a high risk of losing track of the details and introducing more bugs.</p>
<p>What emerged as a solution to those problems was the <em>object-oriented</em> programming paradigm, or OOP. This was a remarkably different way to think about the code.</p>
<p>Instead of dry data being manipulated a step at a time, with OOP you identify the key entities (or <em>classes</em>), what their structure is, and how they behave. These classes usually have real-world names like <em>Person</em> and <em>Job</em>, and they are precisely what those UML diagrams (which historically came later) are meant to represent. A basic program of this kind would work like this:</p>
<ol>
<li>Define a class called ShoppingCart.
<ol>
<li>The ShoppingCart starts with an empty list of items with prices.</li>
<li>The ShoppingCart has the ability to add items with prices to it.</li>
<li>The ShoppingCart has the ability to compute the total sum of the prices it contains.</li>
</ol>
</li>
<li>Create an object of the class ShoppingCart.</li>
<li>Add three items to it with its predefined ability to add items.</li>
<li>Return the total price of the ShoppingCart using its predefined ability to tally up its prices.</li>
</ol>
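<p>Here is one way those four steps might look in Python; the class name follows the example, while the method names and sample items are my own illustration:</p>

```python
class ShoppingCart:
    def __init__(self):
        # 1.1. The ShoppingCart starts with an empty list of items with prices.
        self.items = []

    def add_item(self, name, price):
        # 1.2. The ShoppingCart has the ability to add items with prices to it.
        self.items.append((name, price))

    def total(self):
        # 1.3. The ShoppingCart can compute the total sum of the prices it contains.
        return sum(price for _, price in self.items)

cart = ShoppingCart()          # 2. Create an object of the class.
cart.add_item("apples", 2.50)  # 3. Add three items with its predefined ability.
cart.add_item("bread", 4.00)
cart.add_item("milk", 3.25)
print(cart.total())            # 4. Tally up the prices: 9.75
```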
<p>I think you will agree that this feels less abstract and dry than the procedural example, even though it is doing essentially the same thing. (The object-oriented program is still procedural at its core, because it indicates what needs to be done at each step, but it clumps things together differently.)</p>
<p>You&#x27;re not just &quot;adding X to Y&quot; any more, you&#x27;re using the &quot;abilities&quot; of objects to <em>do</em> things. Of course there is much more to the object-oriented paradigm than dividing things into classes, but it is this real-world intuitiveness that, I think, led to its taking over the software world.</p>
<p>The theoretical roots of OOP date back to the 1950s and 1960s, but it really began to spread in the late 1970s. The 1990s, when the technology supporting it had been refined enough, saw a tidal wave of hype and excitement about the object-oriented way of coding. Countless books were written about it, more and more languages supported it, and it was clear to most that it was the future of software, and that the purely procedural way of doing things was going extinct.</p>
<p>Why this popularity? Apparently, OO code felt easier and more powerful. There is a surprising amount of psychological and educational research from that era showing its strengths. Among other things, it makes people <a href="https://dl.acm.org/doi/pdf/10.1145/142750.142905" rel="nofollow noopener noreferrer" target="_blank">much faster</a> at sketching out the initial structure of a program and imagining how it might work, and gives experienced programmers <a href="https://www.sciencedirect.com/science/article/abs/pii/S1071581984710287" rel="nofollow noopener noreferrer" target="_blank">more time</a> to test and evaluate various approaches in practice.</p>
<p>In other words, OOP seems to tap into some of the pre-existing wiring of our minds better than the older approaches did. This, of course, was not an accident. The scientific notion that <a href="https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-7687.2007.00569.x" rel="nofollow noopener noreferrer" target="_blank">people tend to think in terms of discrete objects and agents</a> with behaviors and interactions is routinely cited as one of the main reasons for the paradigm&#x27;s strength.</p>
<p>That&#x27;s the theory.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h3>Objects Disappoint</h3>
<p>Today, the object-oriented paradigm is no longer considered &quot;the future&quot;. Don&#x27;t get me wrong, OOP is still wildly popular, being at the root of major programming languages like Java, C++, and C#. It isn&#x27;t going anywhere, but it is certainly considered to be declining in popularity. Many of the top languages today support other paradigms either exclusively or as an option. New languages are appearing that return to the old procedural approach, or to a very different approach called <em>functional</em> programming (more on this below). What is causing this loss of popularity? I find this question to be very revealing about human nature.</p>
<p>As far as I can tell from the research on this question, there are two major categories of problems with the object-oriented view of code.</p>
<p>First, some aspects of OOP <a href="https://www.sciencedirect.com/science/article/pii/S0167642312001918" rel="nofollow noopener noreferrer" target="_blank">feel counterintuitive</a>. A key feature of classes in OOP is the ability to create <em>abstractions</em>, that is, to define a &quot;genealogy&quot; of inheritance from more generic to more specific classes. For example, in a videogame where players battle monsters, the class <em>Person</em> may be defined as the &quot;child&quot; of a more abstract class called <em>Being</em>, which is also the parent of the <em>Monster</em> class. <em>Being</em> would be used to define all of the traits in common between all children, like the management of their hit points, their ability to move, etc. The sibling concrete classes <em>Person</em> and <em>Monster</em>, on the other hand, would each define characteristics specific to themselves only—the handling of user control for <em>Person</em> objects, for example, and autonomous AI behavior in the <em>Monster</em> objects.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/process-world-object-oriented-mind/inheritance.webp" alt="" title="UML class inheritance diagram showing an abstract &#x27;Being&#x27; class at the top with properties (hitPoints, position) and methods (manageHitPoints, move, isAlive). Two classes inherit from Being: &#x27;Person&#x27; with a userInput property and methods for handling user control, and &#x27;Monster&#x27; with an aiController property and methods for AI behavior. Arrows labeled &#x27;inherits&#x27; connect the child classes to the parent."/></figure>
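<p>In code, the inheritance diagram above might be sketched like this in Python; the class names follow the figure, while the method names and bodies are simplified placeholders of my own:</p>

```python
class Being:
    """Abstract parent: traits shared by all beings (hit points, liveness)."""
    def __init__(self, hit_points):
        self.hit_points = hit_points

    def take_damage(self, amount):
        # Hit-point management lives in the parent, shared by all children.
        self.hit_points -= amount

    def is_alive(self):
        return self.hit_points > 0

class Person(Being):
    """Child class: adds handling of user control on top of Being's traits."""
    def handle_user_input(self, command):
        ...  # placeholder for user-control logic

class Monster(Being):
    """Sibling child class: adds autonomous AI behavior."""
    def act_autonomously(self):
        ...  # placeholder for AI logic

goblin = Monster(hit_points=10)
goblin.take_damage(4)     # inherited from Being
print(goblin.is_alive())  # True
```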
<p>In simple use cases with few moving parts, people have no trouble with such inheritance patterns. But when the problems get more complex, and the solution requires deep hierarchies and many kinds of interactions, people find it increasingly difficult to figure out the best way to define classes and their family trees. Indeed, they <a href="https://ieeexplore.ieee.org/document/799953" rel="nofollow noopener noreferrer" target="_blank">find it easier</a> to think in the supposedly-outdated procedural style—despite its spaghettification risk!—than in terms of classes.</p>
<p>The second kind of problem people encounter with OOP is that it simply isn&#x27;t always the best way to think about a problem. Some tasks are inherently about processes happening more than they are about entities doing things. For those tasks, research has shown that <a href="https://www.tandfonline.com/doi/abs/10.1080/07421222.1996.11518126" rel="nofollow noopener noreferrer" target="_blank">people actually perform better with the procedural way</a> of programming.</p>
<p>These two obstacles often lead to more numerous and harder-to-track-down bugs, and to headaches with certain kinds of problems, for example those involving parallel and distributed computation. We&#x27;ll come back to these observations later.</p>
<p>From the perspective of <a href="https://alex-ber.medium.com/richard-feldman-the-return-of-procedural-programming-0eeb2fba3840" rel="nofollow noopener noreferrer" target="_blank">many software engineers today</a>, OOP has failed to deliver the amazing promises it made forty years ago. I don&#x27;t think many of those engineers are against it 100% of the time, but there is a general understanding that it is simply a tool among others, and that there are many important cases in which it is not the right one. Some of them may tell you that it <em>very rarely</em> is the right one.</p>
<p>Wait a minute. Wasn&#x27;t OOP successful because it is grounded in the way we humans think—what I call framings, models, and their black boxes? Is there something even better taking its place?</p>
<h3>A Humble Functional Answer</h3>
<p>Another paradigm has recently been emerging in contrast with object-oriented programming, with a radically different approach. It is called <em>functional</em> programming and, as the name implies, it deals with functions, not objects.</p>
<p>A functional programming (FP) language would implement the shopping cart example as follows:</p>
<ol>
<li>Define a function that takes a list of numbers and recursively adds them up one at a time until it has gone through the whole list.</li>
<li>Input a list of prices into the function.</li>
</ol>
<p>Unlike OOP, functional programming lacks concrete objects that &quot;exist&quot; and &quot;do&quot;, and instead works with pipelines through which data is transformed.</p>
<p>(The simple example above may look similar to the procedural case, but it has some key differences that avoid many of the procedural approach&#x27;s usual pitfalls. Instead of giving a list of instructions like &quot;do this, then this, then that&quot;, with FP you define <em>what transformations are possible</em>, then you see what happens to the data that you put in. The functional mindset is like building an automated assembly line, while the procedural mindset is like training a team of (very fast) assembly-line workers with the steps they have to execute.)</p>
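<p>Those two steps can be sketched in Python (not a purely functional language, but the shape of the idea carries over; the function name and sample prices are mine):</p>

```python
def total(prices):
    # A pure function: no objects, no mutable state, just a
    # transformation from a list of numbers to a single number.
    if not prices:
        return 0
    head, *tail = prices
    # Recursively add up the first price and the total of the rest.
    return head + total(tail)

# Input a list of prices into the function.
print(total([2.50, 4.00, 3.25]))  # 9.75
```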
<p>In general, FP leads to fewer bugs, and it works better with distributed systems and parallel processing compared to the other paradigms. <em>And I don&#x27;t think this is a coincidence.</em></p>
<p><a href="https://planktonvalhalla.com/20220829-the-world-is-a-network/" rel="nofollow noopener noreferrer" target="_blank">The real world is a continuous network of interactions</a>, and the currency being transferred and transformed through it is <a href="https://planktonvalhalla.com/20220818-differences-contraptions-and-a-ripple-universe/" rel="nofollow noopener noreferrer" target="_blank">differences</a> (what some call &quot;information&quot;). Boundaries are in the eye of the beholder: the things we tend to see as separate objects aren&#x27;t really that separate after all. From this perspective, it seems to me that the functional paradigm is a much better way to represent the <em>real</em> world around us: everything is transformation, everything is &quot;pipeline&quot;, nothing persists. What we call the &quot;state&quot; of something is just a snapshot of an ever-changing process, even when it is changing very slowly.</p>
<p>Like most of these approaches, FP has a long history, but it only recently started <a href="https://github.com/readme/featured/functional-programming" rel="nofollow noopener noreferrer" target="_blank">going mainstream</a>. Many big programming languages, like Python, Javascript, and Ruby, provide the tooling to code in the FP way. Even functional-first languages like Clojure and Scala have their relatively small but passionate communities of users. It may seem like the functional way is the way of the future.</p>
<p>Sadly—because I love FP more than any other paradigm—I don&#x27;t think FP is <em>the</em> future. Things are more nuanced than that.</p>
<p>Problem is, FP is <em>hard</em>! It is much less intuitive to design with than OOP, and arguably less so than even the plain procedural style. Thinking in terms of functions is not what we usually do in our heads. You don&#x27;t generally look at a shopping cart and say, &quot;that is a useful product-aggregation process right there, son.&quot; You look at the cart and say, &quot;oh, a cart, the thing you put products in.&quot;</p>
<p>Our reason is fundamentally concept-based: black boxes with boundaries drawn as and when needed to make specific predictions. The <a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">virtual physics</a> in our minds needs discrete building blocks to think about anything.</p>
<p>Even if I&#x27;m right—even if FP best reflects the world&#x27;s incessant and borderless transformation of differences—functions are never going to be the only paradigm for software, and perhaps they will never even be the major one either. Our minds won&#x27;t let them. Even saying &quot;the function transforms the data&quot; is describing a functional truth in an <a href="https://aethermug.com/posts/a-fundamental-framing-of-human-language" rel="nofollow noopener noreferrer" target="_blank">object-oriented language</a>—English—where &quot;functions&quot; and &quot;data&quot; are black boxes. The same goes for &quot;the world is a network.&quot; We just can&#x27;t help it.</p>
<figure><img src="https://aethermug.com/assets/posts/process-world-object-oriented-mind/n8n.webp" alt="Screenshot of an n8n workflow automation showing four connected nodes in a linear pipeline: &#x27;Start: Every Hour&#x27; (with a clock icon), &#x27;FTP: List Files&#x27;, &#x27;FTP: Download File&#x27;, and &#x27;Google Drive: Upload&#x27; (with Google Drive icon). Each node is represented as a rounded rectangle connected by lines, visualizing a functional data transformation pipeline."/><figcaption>Low-code tools like n8n use the visual metaphor of a pipeline. They are functional in nature, transforming data at each step, but notice how the UI designers chose to show it as a sequence of &#x27;node&#x27; objects, making the computation easier to grok for a human.</figcaption></figure>
<h3>Living with the Tension</h3>
<p>I love software for how it regularly leads to questions much deeper and more interesting than &quot;how do I automate the checkout of my e-commerce website&quot;. On the surface it is &quot;just&quot; engineering, but it is really philosophy, psychology, and more rolled into one.</p>
<p>Remember that the first category of mistakes attributed to OOP had to do with abstractions and inheritance. This seems to be the part of the OOP method that <em>least</em> reflects our natural way of framing the world. While we do think &quot;in objects&quot;, we always use them mentally in concrete, goal-oriented tasks, where the horizontal relationships between objects are what matters (e.g. &quot;the person fights the monster&quot;).</p>
<p>To think about inheritance, on the other hand, you need to think &quot;vertically&quot;, about the levels of abstraction of the categories involved (&quot;a person is a being, and traits X and Y are shared by all beings&quot;). We <em>can</em> do this—it is an important and useful effort—but it&#x27;s not what our minds are best at.</p>
<p>The second kind of cognitive issue with OOP had to do with cases in which objects are just a bad fit for the problems. Here is where procedural and, even better, functional methods shine: not because they don&#x27;t apply just as well in other cases, but because these are the circumstances in which the object-oriented fictions in our minds break down in the face of reality.</p>
<p>In short:</p>
<ul>
<li>Object-orientation is declining in popularity and people see it as flawed not (mainly) because it doesn&#x27;t reflect how our minds work, but because our minds don&#x27;t work very well.</li>
<li>Functional code is not more popular, and perhaps will never be, not because it is a worse model of reality, but because it is alien to how our minds tend to work.</li>
</ul>
<p>What we are left with is an awkward mixture of both paradigms, both in software and in life.</p>
<p>When programming, I&#x27;m afraid we can&#x27;t hope for a single neat paradigm to rule them all, a way of designing code that is both intuitive and world-accurate. We (programmers) have to find the right tool for each case, learn to discern in which situations the cognitive benefits of thinking in objects outweigh the technical downsides of less-functional code, and vice versa. Even better, we can use both at the same time and trade elegance for pragmatism.</p>
<p>When reasoning in general, outside the narrow world of software, I think the same wisdom applies. Knowing the limits of your framings and the failure modes of your models can only be good for you.</p>
<p>It may be worth the effort to borrow ideas from visualization tools like UML to represent what goes on in our minds—especially what goes wrong. Make this intuitive process of framing everything into something more deliberate when necessary. Design your understanding of the world as an engineer designs a program. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/process-world-object-oriented-mind/henri_de_toulouse_lautrec.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Famous Cognitive Psychology Experiments that Failed to Replicate]]></title>
            <link>https://aethermug.com/posts/famous-cognitive-psychology-experiments-that-failed-to-replicate?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/famous-cognitive-psychology-experiments-that-failed-to-replicate</guid>
            <pubDate>Thu, 21 Aug 2025 01:12:36 GMT</pubDate>
            <description><![CDATA[A quick reference]]></description>
            <content:encoded><![CDATA[<p><em>TL;DR is the part in bold below.</em></p>
<p>The field of psychology had a big crisis in the 2010s, when many widely accepted results turned out to be much less solid than previously thought. It&#x27;s called the <a href="https://en.wikipedia.org/w/index.php?title=Replication_crisis" rel="nofollow noopener noreferrer" target="_blank">replication crisis</a>, because labs around the world tried and failed to replicate, in new experiments, previous results published by their original &quot;discoverers&quot;. In other words, many reported psychological effects were either non-existent—artifacts of the experimenter&#x27;s flawed setup—or so much weaker than originally claimed that they lost most of their intellectual sparkle.</p>
<p>(The crisis spanned other fields as well, but I mostly care about psychology here, especially the cognitive kind.)</p>
<p>This is very old news, and I&#x27;ve been vaguely aware of several of the biggest disgraced results for years, but I keep on forgetting which are (still probably) real and which aren&#x27;t. This is not good. <em>Most</em> results in the field do actually replicate and are robust<sup>[citation needed]</sup>, so it would be a pity to lose confidence in the whole field just because of a few bad apples.</p>
<p><strong>This post is a compact reference list of the most (in)famous cognitive science results that failed to replicate and should, for the time being, be considered false.</strong> The only goal is to offset the trust-undermining effects of my poor memory—and perhaps yours, too?—with a bookmarkable page.</p>
<p>This can&#x27;t be a comprehensive list: if a study is <em>not</em> on this page, it&#x27;s not guaranteed to be fully replicated. Still, this should cover most of the high-profile debunked theories that laypeople like me may have heard of.</p>
<p><em>Credit: I enlisted the help of Kimi K2, o3, and Sonnet 4 to gather and fact-check this list. I also checked, pruned, and de-hallucinated all the results.</em></p>
<h2><span id="table-of-contents">Table of Contents</span></h2>
<ul>
<li><a href="#self-regulation--willpower">Self-Regulation &amp; Willpower</a></li>
<li><a href="#social-cognition--priming">Social Cognition &amp; Priming</a></li>
<li><a href="#cognitive-performance--intelligence">Cognitive Performance &amp; Intelligence</a></li>
<li><a href="#emotion--behavior">Emotion &amp; Behavior</a></li>
<li><a href="#social-psychology">Social Psychology</a></li>
<li><a href="#judgment--decision-making">Judgment &amp; Decision Making</a></li>
<li><a href="#paranormal--controversial">Paranormal &amp; Controversial</a></li>
</ul>

<h2><span id="self-regulation--willpower">Self-Regulation &amp; Willpower</span></h2>
<h3>Ego Depletion Effect</h3>
<ul>
<li><strong>Claimed result:</strong> We have a &quot;willpower battery&quot; that gradually depletes during the day as we exercise self-control. (I remember reading Baumeister&#x27;s pop-science book and being awed by the implications of their findings; I might have known it sounded too good to be true.)</li>
<li><strong>Representative paper:</strong> <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.74.5.1252" rel="nofollow noopener noreferrer" target="_blank">Baumeister et al. 1998</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://journals.sagepub.com/doi/10.1177/1745691616652873" rel="nofollow noopener noreferrer" target="_blank">Hagger et (63!) al. (2016)</a></li>
</ul>
<h3>Glucose and Ego Depletion Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Connected to the debunked ego-depletion effect, this one claims that adding glucose to your blood &quot;recharges&quot; the willpower battery. (For a while, I may have drunk more orange juice than usual after reading Baumeister&#x27;s book. At least it&#x27;s healthy-ish.)</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1037/0022-3514.92.2.325" rel="nofollow noopener noreferrer" target="_blank">Gailliot &amp; Baumeister (2007)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://www.sciencedirect.com/science/article/abs/pii/S0195666313005072" rel="nofollow noopener noreferrer" target="_blank">Lange &amp; Eggert (2014)</a></li>
</ul>
<h3>Marshmallow Test &amp; Long-Term Success Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Children&#x27;s ability to resist eating a marshmallow when left alone in a room at age 4-5 strongly predicts adolescent achievement, with those who waited longer showing better life outcomes.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1037/0012-1649.26.6.978" rel="nofollow noopener noreferrer" target="_blank">Shoda, Mischel, &amp; Peake (1990)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate significantly</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1177/0956797618761661" rel="nofollow noopener noreferrer" target="_blank">Watts, Duncan, &amp; Quan (2018)</a></li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>

<h2><span id="social-cognition--priming">Social Cognition &amp; Priming</span></h2>
<h3>Power Posing Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Adopting expansive body postures for 2 minutes (like standing with hands on hips or arms raised) increases testosterone, decreases cortisol, and makes people feel more powerful and take more risks.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1177/0956797610383437" rel="nofollow noopener noreferrer" target="_blank">Carney, Cuddy, &amp; Yap (2010)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1177/0956797614553946" rel="nofollow noopener noreferrer" target="_blank">Ranehill et al. (2015)</a></li>
</ul>
<h3>Social Priming: Elderly Words Effect</h3>
<ul>
<li><strong>Claimed result:</strong> People walk more slowly after being exposed to words related to elderly stereotypes.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1037/0022-3514.71.2.230" rel="nofollow noopener noreferrer" target="_blank">Bargh, Chen, &amp; Burrows (1996)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1371/journal.pone.0029081" rel="nofollow noopener noreferrer" target="_blank">Doyen et al. (2012)</a> (I like how they prove that the psychological effect was actually in the experimenters, rather than the subjects!)</li>
</ul>
<h3>Money Priming Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Simply thinking about money makes you more selfish and more likely to endorse free market values.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1126/science.1132491" rel="nofollow noopener noreferrer" target="_blank">Vohs, Mead, &amp; Goode (2006)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://pubmed.ncbi.nlm.nih.gov/26214168/" rel="nofollow noopener noreferrer" target="_blank">Rohrer, Pashler, &amp; Harris (2015)</a></li>
</ul>
<h3>Cleanliness and Morality Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Being clean or thinking about cleanliness makes people more morally lax.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1111/j.1467-9280.2008.02227.x" rel="nofollow noopener noreferrer" target="_blank">Schnall, Benton, &amp; Harvey (2008)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1027/1864-9335/a000186" rel="nofollow noopener noreferrer" target="_blank">Johnson, Cheung, &amp; Donnellan (2014)</a></li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>

<h2><span id="cognitive-performance--intelligence">Cognitive Performance &amp; Intelligence</span></h2>
<h3>Mozart Effect</h3>
<ul>
<li><strong>Claimed result</strong>: Listening to Mozart temporarily makes you smarter.</li>
<li><strong>Representative paper</strong>: <a href="https://www.nature.com/articles/365611a0" rel="nofollow noopener noreferrer" target="_blank">Rauscher, Shaw, &amp; Ky (1993)</a></li>
<li><strong>Replication status</strong>: <em>did not replicate</em></li>
<li><strong>Source</strong>: <a href="https://www.sciencedirect.com/science/article/abs/pii/S0160289610000267" rel="nofollow noopener noreferrer" target="_blank">Pietschnig, Voracek, &amp; Formann (2010)</a> (What a title!)</li>
</ul>
<h3>Growth Mindset Interventions</h3>
<ul>
<li><strong>Claimed result:</strong> Teaching students that intelligence is malleable (not fixed) dramatically improves academic performance.</li>
<li><strong>Representative paper:</strong> <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2F0033-295X.95.2.256" rel="nofollow noopener noreferrer" target="_blank">Dweck &amp; Leggett (1988)</a></li>
<li><strong>Replication status:</strong> <em>mixed results</em> - many failed replications but also some successful ones</li>
<li><strong>Failed replication source:</strong> <a href="https://pubmed.ncbi.nlm.nih.gov/31464486/" rel="nofollow noopener noreferrer" target="_blank">Li &amp; Bates (2019)</a></li>
<li><strong>Notable successful replication:</strong> <a href="https://www.nature.com/articles/s41586-019-1466-y" rel="nofollow noopener noreferrer" target="_blank">Yeager et al. (2019) in Nature</a></li>
</ul>
<h3>Bilinguals Are Smarter</h3>
<ul>
<li><strong>Claimed result:</strong> Being bilingual provides substantial cognitive advantages in attention, task-switching, and executive control.</li>
<li><strong>Representative paper:</strong> <a href="https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(12)00056-3" rel="nofollow noopener noreferrer" target="_blank">Bialystok, Craik, &amp; Luk (2012)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://pubmed.ncbi.nlm.nih.gov/29494195/" rel="nofollow noopener noreferrer" target="_blank">Lehtonen et al. (2018)</a></li>
</ul>
<h3>Difficulty Makes You More Rational</h3>
<ul>
<li><strong>Claimed result:</strong> People become more careful, analytical, and deliberate in solving mental problems (more specifically, syllogistic reasoning) when the problems are harder to process, e.g., printed in a hard-to-read font. In other words, processing difficulty (disfluency) activates the use of Kahneman and Tversky&#x27;s System 2 to solve problems.</li>
<li><strong>Representative Paper:</strong> <a href="https://psycnet.apa.org/record/2007-16657-003" rel="nofollow noopener noreferrer" target="_blank">Alter, Oppenheimer, Epley, &amp; Eyre, 2007, Study 4</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://journals.sagepub.com/doi/epdf/10.1177/2515245918810225" rel="nofollow noopener noreferrer" target="_blank">Klein et al., 2018</a> (page 13, &quot;Disfluency engages analytic processing&quot;)</li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>

<h2><span id="emotion--behavior">Emotion &amp; Behavior</span></h2>
<h3>Smile to Feel Better Effect</h3>
<ul>
<li><strong>Claimed result</strong>: Holding a pen in your teeth (forcing a smile-like expression) makes you rate cartoons as funnier compared to holding a pen with your lips (preventing smiling). More broadly, facial expressions can influence emotional experiences: &quot;fake it till you make it.&quot;</li>
<li><strong>Representative paper</strong>: <a href="https://psycnet.apa.org/record/1988-25514-001" rel="nofollow noopener noreferrer" target="_blank">Strack, Martin, &amp; Stepper (1988)</a></li>
<li><strong>Replication status</strong>: <em>did not replicate</em></li>
<li><strong>Source</strong>: <a href="https://journals.sagepub.com/doi/full/10.1177/1745691616674458" rel="nofollow noopener noreferrer" target="_blank">Wagenmakers et (54!) al. (2016)</a></li>
</ul>
<h3>Hunger and Risk-Taking Effect</h3>
<ul>
<li><strong>Claimed result:</strong> People exposed to the scent of freshly baked cookies become less sensitive to risk and take more risks to obtain food.</li>
<li><strong>Representative paper:</strong> <a href="https://onlinelibrary.wiley.com/doi/10.1002/bdm.520" rel="nofollow noopener noreferrer" target="_blank">Ditto et al. (2006)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1016/j.foodqual.2018.02.014" rel="nofollow noopener noreferrer" target="_blank">Festjens, Bruyneel, &amp; Dewitte (2018)</a></li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>

<h2><span id="social-psychology">Social Psychology</span></h2>
<h3>Stereotype Threat (Women&#x27;s Math Performance) Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Women risk being judged by the negative stereotype that women have weaker math ability, and this apprehension disrupts their math performance on difficult tests.</li>
<li><strong>Representative paper:</strong> <a href="https://www.sciencedirect.com/science/article/abs/pii/S0022103198913737" rel="nofollow noopener noreferrer" target="_blank">Spencer, Steele, &amp; Quinn (1999)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1016/j.jsp.2014.10.002" rel="nofollow noopener noreferrer" target="_blank">Flore &amp; Wicherts (2015)</a></li>
</ul>
<h3>Ovulation &amp; Mate Preferences Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Women are more attracted to hot guys during high-fertility days of their cycles.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1037/a0035438" rel="nofollow noopener noreferrer" target="_blank">Gildersleeve, Haselton, &amp; Fales (2014)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://publications.goettingen-research-online.de/bitstream/2/77327/1/10.1177_0956797619882022.pdf" rel="nofollow noopener noreferrer" target="_blank">Stern, Gerlach, &amp; Penke (2020)</a></li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>

<h2><span id="judgment--decision-making">Judgment &amp; Decision Making</span></h2>
<h3>The Dunning-Kruger Effect</h3>
<ul>
<li><strong>Claimed result:</strong> Low-ability individuals believe they&#x27;re better at a task than they really are.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1037/0022-3514.77.6.1121" rel="nofollow noopener noreferrer" target="_blank">Kruger, J., &amp; Dunning, D. (1999)</a></li>
<li><strong>Replication status:</strong> <em>mixed results</em> - effects are generally more limited and weaker than originally reported, and may mostly be a statistical artifact</li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1016/j.intell.2020.101449" rel="nofollow noopener noreferrer" target="_blank">Gignac &amp; Zajenkowski (2020)</a>, <a href="https://doi.org/10.3389/fpsyg.2022.840180" rel="nofollow noopener noreferrer" target="_blank">Magnus &amp; Peresetsky (2022)</a>, <a href="https://doi.org/10.1038/s41598-024-61042-1" rel="nofollow noopener noreferrer" target="_blank">Lebuda et al. (2024)</a></li>
</ul>
<h3>Psychological Distance &amp; Construal Level Theory</h3>
<ul>
<li><strong>Claimed result</strong>: &quot;Psychologically distant&quot; events are processed more abstractly, while &quot;psychologically near&quot; events are processed more concretely. E.g., you worry about the difficulty of a task if you have to do it tomorrow, but you see the same task&#x27;s attractive side if it is planned far in the future.</li>
<li><strong>Representative paper</strong>: <a href="https://pubmed.ncbi.nlm.nih.gov/20438233/" rel="nofollow noopener noreferrer" target="_blank">Trope &amp; Liberman (2010)</a>, building on <a href="https://nyuscholars.nyu.edu/en/publications/the-role-of-feasibility-and-desirability-considerations-in-near-a" rel="nofollow noopener noreferrer" target="_blank">Liberman &amp; Trope (1998)</a></li>
<li><strong>Replication status</strong>: <em>serious credibility problems</em></li>
<li><strong>Source</strong>: A <a href="https://climr.org/about/" rel="nofollow noopener noreferrer" target="_blank">collaboration</a> between 73 labs around the world is vetting this theory right now because of many doubts about its validity.</li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>

<h2><span id="paranormal--controversial">Paranormal &amp; Controversial</span></h2>
<h3>ESP Precognition Effect</h3>
<ul>
<li><strong>Claimed result:</strong> In some cases, people can predict future events &quot;that could not otherwise be anticipated through any known inferential process&quot;.</li>
<li><strong>Representative paper:</strong> <a href="https://doi.org/10.1037/a0021524" rel="nofollow noopener noreferrer" target="_blank">Bem (2011)</a></li>
<li><strong>Replication status:</strong> <em>did not replicate</em></li>
<li><strong>Source:</strong> <a href="https://doi.org/10.1037/a0029709" rel="nofollow noopener noreferrer" target="_blank">Galak et al. (2012)</a>, <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0033423" rel="nofollow noopener noreferrer" target="_blank">Ritchie, Wiseman, &amp; French (2012)</a></li>
</ul>
<h3>Objective Measurement of Biases</h3>
<ul>
<li><strong>Claimed result</strong>: You can predict if someone is racist by how quickly they sort words and faces in a timed categorization task (the Implicit Association Test).</li>
<li><strong>Representative paper</strong>: <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.74.6.1464" rel="nofollow noopener noreferrer" target="_blank">Greenwald, McGhee, &amp; Schwartz (1998)</a></li>
<li><strong>Replication status</strong>: <em>mixed evidence with small effects</em></li>
<li><strong>Source</strong>: <a href="https://pubmed.ncbi.nlm.nih.gov/23773046/" rel="nofollow noopener noreferrer" target="_blank">Oswald et al. (2013)</a> shows that the prediction power is small at best. <a href="https://osf.io/preprints/psyarxiv/fj6rm_v1" rel="nofollow noopener noreferrer" target="_blank">Cummins &amp; Hussey (2025)</a> (preprint) suggests that the predictive power is almost zero for individuals.</li>
</ul>
<p><a href="#table-of-contents">↑ Back to Table of Contents</a></p>
<p>Did I miss any famous debunked studies? Let me know by replying to this newsletter, and I&#x27;ll add it to the list. ●</p>

<ul>
<li>Edit (Oct 5, 2025): <em>Thanks to HJA for submitting the &quot;Difficulty Makes You More Rational&quot; item.</em></li>
<li>Edit (Oct 16, 2025): <em>Important addition of the Dunning-Kruger effect to this list. Thanks to David E. Hultstrom for letting me know: I didn&#x27;t before! Also, the list was getting long so I created categories and added the ToC.</em></li>
<li>Edit (Oct 24, 2025): <em>Thanks to Dr. Ian Hussey for offering clarifications on the Dunning-Kruger effect and the Objective Measurement of Biases.</em></li>
</ul>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/famous-cognitive-psychology-experiments-that-failed-to-replicate/rebecca-freeman-rqFm0IgMVYY-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[26 Easy Steps to Taming Your Smartphone Addiction]]></title>
            <link>https://aethermug.com/posts/26-easy-steps-to-taming-your-smartphone-addiction?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/26-easy-steps-to-taming-your-smartphone-addiction</guid>
            <pubDate>Wed, 22 Oct 2025 00:38:37 GMT</pubDate>
            <description><![CDATA[And voilà!]]></description>
            <content:encoded><![CDATA[<p>Apparently people find it hard not to ogle the glassy sides of their smartphones. <a href="https://backlinko.com/smartphone-usage-statistics" rel="nofollow noopener noreferrer" target="_blank">Some statistics</a> say that, on average, people spend around four hours per day doing that—that&#x27;s 1.17 days per week of tapping, swiping, and gawking. In some places, the number goes up to five or even six hours per day!</p>
<p>And then they have the guts to tell me, &quot;Sorry, Marco, I&#x27;m very busy, so I can&#x27;t meet you next week or any other week after that. Ever.&quot;</p>
<p>This extreme addiction to smartphones is a social problem because, well, I really want to meet you.</p>
<p>By far the simplest solution would be to make smartphones with no glassy side: both sides matte and inert. I haven&#x27;t seen a single model like that, but I have hope for the next season.</p>
<p>In the meantime, I&#x27;ll have to take things into my own hands. I&#x27;ll tell you how <em>I</em> solved the problem while still holding on to a smartphone with a glassy side. I&#x27;ll release you from this false busy-ness, so y&#x27;all won&#x27;t have any more excuses to avoid me. Thank me when you see me next week.</p>

<p>Why me? I know my stuff. One time I even spent five full days traveling in uninterrupted airplane mode—that&#x27;s regardless of the fact that I didn&#x27;t spend even a minute in an airplane during those five days. It was tough, because I had to actually talk to people to get directions, and I had to carry with me one of those folded pieces of paper with lines on them that show you where places are next to each other, so you can go there. I also looked at the green parts of a public park for, like, two hours every day. It was tough, but I did it: this is why you can trust my advice below. (My dad&#x27;s claim that, when he was young, he used to go around without a smartphone for years at a time is doubtful and should be ignored.)</p>
<p>It&#x27;s really very simple. All you need to do is follow these 26 basic tweaks to your lifestyle. After I did, my smartphone use went down from about three hours per day to a little less than one hour. I&#x27;ve done it so that you&#x27;ll have to, too.</p>
<p>Twenty-six may sound like a lot, but compare that to the number of hairs you carry all over your body. That should put things into perspective. Besides, each of these is as easy as factoring an 8-digit prime. There&#x27;s no reason you shouldn&#x27;t do them as well.</p>
<p>To begin with, the obvious steps:</p>
<ol>
<li>Uninstall and stop using TikTok.</li>
<li>Uninstall and stop using Instagram.</li>
<li>Uninstall and stop using Facebook.</li>
<li>Uninstall and stop using Snapchat.</li>
<li>Uninstall and stop using what used to be known as Twitter.</li>
<li>Uninstall and stop using all mobile games.</li>
<li>Uninstall and stop using all other social media apps that require you to rotate your thumb more than four times in a row.</li>
<li>For every other app you open a lot, ask yourself if it&#x27;s giving you superpowers, keeping you alive, or paying your bills. If it&#x27;s none of those, uninstall and stop using. (Hint: no news app pays your bills.)</li>
<li>For the apps that remain, like blog feeds and email, unsubscribe from all sources of content that don&#x27;t give you superpowers, keep you alive, and/or pay your bills. The goal is to have only a finite and short list of things waiting for you to check out every day, so that by evening you have nothing left to do with the phone.</li>
</ol>
<p>If you feel like you <em>have</em> to check an app for fear of missing out, remember that you miss out more by looking at your smartphone. My dynamic and gracious company, for example.</p>
<p>I know, some apps need to stay, but they&#x27;re still as time-sinking as all those other ones. I&#x27;ve got you covered.</p>
<ol start="10">
<li>YouTube: only use it to cast videos to a TV screen—never watch videos on the smartphone screen.</li>
<li>YouTube: also, turn off search history in the settings, which will also turn off all recommendations. No more algorithm. The feed of your followed channels will be your only source of video content. Besides, thanks to step 9, you&#x27;ll only be following a few channels, so you&#x27;ll watch them faster than they come out.</li>
<li>WhatsApp/Discord/other chat apps: permanently mute all groups/servers/channels except those where you absolutely need to check as soon as a new message is posted. For me, this is only a small circle of friends and family.</li>
<li>For some apps, there&#x27;s a middle ground. Maybe you don&#x27;t want to stop checking them altogether, just less often. Then uninstall the app and use the mobile browser to check them. This adds some friction because it&#x27;s less convenient, so you&#x27;ll avoid going there when the urge is weak.</li>
</ol>
<p>Now, you might entertain the notion that the above is enough, but the above is not enough. We&#x27;re only halfway through the list. The next big trap is the notification bar of your smartphone. It&#x27;s not your friend, and so it isn&#x27;t my friend either.</p>
<ol start="14">
<li><a href="/posts/a-medium-agnostic-taxonomy-of-notifications">Mute notifications from most remaining apps</a>, especially email and other &quot;endless torrent&quot; apps. You don&#x27;t need to be informed that the torrent has provided 23,129,486 new drops of water since the last time you checked three seconds ago: you already know that. You can check the updates when you open the app next. Rule of thumb: if you ever find notifications from more than 4-5 different apps accumulating in half a day, you haven&#x27;t muted enough.</li>
<li>Keep the phone in silent mode at all times. Use <em>serious</em> silent mode, the one that not only makes your phone quiet and vibration-free, but also hides any indication that notifications even exist (Do Not Disturb on Android, Focus Mode on iPhone). Only turn off this silent mode 4-5 times a day to quickly (&lt;30 seconds) work your way through the few important notifications that have accumulated.</li>
</ol>
<p>Once you&#x27;ve done all of the above, it&#x27;s time to fight, scare, and manipulate yourself mercilessly.</p>
<ol start="16">
<li>Install an app usage monitoring app (I like Digitox and StayFree on Android). It will tell you how many minutes per day you use your smartphone and which apps you&#x27;re using the most. The main goal is to provide the scary factor, which is why I call them &quot;horror apps.&quot;</li>
<li>Use the horror apps to set daily time limits for your time-sink apps. Decide what&#x27;s a healthy amount per day (hint: it&#x27;s less than 15 minutes per app) and let your phone block the app when you reach the limit. The clock ticking will offer some sweet anxiety on top of the fear.</li>
<li>Give yourself monthly goals and rewards based on the statistics from your monitoring app. I reward myself with money because I&#x27;m morally depraved. Last month I bought myself a potted plant with it.</li>
<li>Always have a different, non-glowing-and-glassy activity ready to absorb your urge to spend time on your smartphone. In my case, I always have a book nearby, so I read that.</li>
</ol>
<p>Then there&#x27;s all the physical side of things. You know, space and time, touching grass, etc.</p>
<ol start="20">
<li>Always completely power off the phone before going to bed. (Incidentally, this will also lengthen its battery&#x27;s lifespan by a year or two.)</li>
<li>At night, charge the phone in the room farthest from your bedroom.</li>
<li>Keep the smartphone out of your field of view and out of immediate reach while working, so that you have to stand up to get it.</li>
<li>Always put your phone on the table with the screen facing down. Just the view of the screen will corrupt your soul and make you want to give up on 1.17 days of your life per week.</li>
<li>Create &quot;floor is lava&quot; games for yourself, except that it&#x27;s not the floor (unless you want to) and the lava only melts your phone. In my house, my work desk is lava for my smartphone. I can&#x27;t place the phone on my desk even for a second, ever. For you, this might be your kitchen table, if that&#x27;s where you tend to get sucked in by the mean little screen.</li>
<li>Set situational rules for yourself: no smartphone in the toilet, while brushing teeth, etc. Don&#x27;t let it wedge its way into longer time blocks.</li>
</ol>
<p>If you&#x27;ve done all of the above, you might have what it takes for the final blow:</p>
<ol start="26">
<li>Every morning at 6:66 AM, boil your smartphone for thirteen minutes in fresh Bulgarian lamb blood while chanting a curse to Aza Raskin. This will decrease the time you spend on your phone to zero, at least until you go and buy a new one.</li>
</ol>

<p>In other words, maim, vilify, and neuterize your phone; horrify, surveil, and entertain yourself—and those 1.17 days per week will be yours again.</p>
<p>Thanks to these elementary precautions and minor adjustments, I now spend merely 60 minutes per day on my black rectangle. If that weren&#x27;t enough, 20 of those minutes are dedicated to talking to my beloved family and friends, 20 more to navigating the web (to pay my bills, mostly), and the last 20 are divided among a dozen other (more or less healthy) apps.</p>
<p>Now you can&#x27;t unsee this list anymore. I told you you&#x27;d have to do it. So... pizza next week? ●</p>

<p><em>Shout out to <a href="https://herman.bearblog.dev/being-present/" rel="nofollow noopener noreferrer" target="_blank">Herman&#x27;s blog</a> for inspiring this one.</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/26-easy-steps-to-taming-your-smartphone-addiction/saint_gerome.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Visualizing Framings]]></title>
            <link>https://aethermug.com/posts/visualizing-framings?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/visualizing-framings</guid>
            <pubDate>Fri, 17 Oct 2025 05:42:16 GMT</pubDate>
            <description><![CDATA[A stolen attempt]]></description>
            <content:encoded><![CDATA[<p>Your brain is a tool very much like your hands are tools. You use them like tools to do things, even if most of the time you do it instinctively, as if they were givens. Only when you lose their control—a hand injury, a time of extreme mental fatigue or stress—might you realize that these were actually the best devices in your toolbox.</p>
<p>You could say that, to some extent, most of your appendages and muscle-controlled organs are tools in a similar way. But hands and brain are special: they can learn new skills almost without bound and, especially, <em>they can build new tools</em>.</p>
<p>The mind-built tools, or mental tools, that I return to over and over in this blog are, mainly, <em>framings</em> and <em>models</em>. Although I didn&#x27;t invent these concepts, I&#x27;ve worked to make them clearer, less ambiguous, and more practically useful. (I have <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">already</a> <a href="/posts/a-framing-and-model-about-framings-and-models">written</a> <a href="/posts/bad-framings">a lot</a> <a href="/posts/a-black-box-view-of-life">about them</a>, so I will spare you the details here—if what follows is confusing to you, you might want to visit some of those links first.)</p>
<p>Knowing framings and models in this enhanced form is a good idea, I think, because they get to the core of what it means to &quot;understand&quot; anything, because they <a href="/posts/a-pocket-size-checklist-of-thinking-errors">can help you debug your thoughts</a>, and because (among other things) they express how <a href="/posts/matsuri-s-grown-up-world">some people</a> can suffer <a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">and even die</a> from mental causes.</p>
<p>The main thing I&#x27;m working on now (besides the long-overdue final installment of my <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">Purpose Trilogy</a>... almost ready!) is how to make these concepts of framings and models more convenient in practice. Theory and understanding are nice and all, but how do you <em>use</em> these ideas in your daily life? My current question is, can we visualize them?</p>
<p>Two weeks ago we saw that one way to do that is to <a href="/posts/shake-up-your-boundaries">tinker with the boundaries</a>. But that is a ten-thousand-foot view of things, and there is a limit to how much detail it can handle. Let&#x27;s try something else. It turns out that visualizing framings is surprisingly straightforward.</p>
<p>First, I&#x27;ll show you a way I propose you can draw framings and models, then I&#x27;ll give you a very quick tour of real-world examples in three categories: what business books sell as &quot;mental models&quot;, what science calls &quot;models&quot;, and how we think about mundane things of (apparently) different domains.</p>
<p>This approach is work in progress and a little rough around the edges. Email me your thoughts on how to improve it!</p>
<h2>Visualizing Framings</h2>
<p>In short, a framing is the definition of a set of &quot;things&quot; that &quot;exist&quot;—what I call <em>black boxes</em>—and everything else is taken <strong>not</strong> to exist at all, at least while you&#x27;re using that specific framing. In other words, a framing is a deployable metaphysics or ontology.</p>
<p>I&#x27;ll show you plenty of practical examples later in this post, but let&#x27;s build this gradually.</p>
<p>The obvious thing to do is to treat each &quot;thing&quot;, each black box, as a literal box.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/bicycle_1.webp" alt="A Bicycle black box" title="Diagram showing a simple blue box labeled &#x27;Bicycle&#x27; representing a black box framing"/><figcaption>A Bicycle black box</figcaption></figure>
<p>It&#x27;s equivalent to saying, &quot;this is its own thing, separate from the rest of the world, and it does stuff in a somewhat well-defined way.&quot;</p>
<p>A black box usually has some properties that we use to describe it, so we write those inside a box.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/bicycle_2.webp" alt="In this example, only the bicycle&#x27;s weight and maintenance state may be necessary for the framing at hand. Other properties could be the bicycle&#x27;s color, its shape, brand, etc., but in this case maybe the goal is to imagine how fast one can go, so only these two properties are needed. Everything else is ignored." title="Blue box labeled &#x27;Bicycle&#x27; containing two bullet points: &#x27;weight&#x27; and &#x27;maintenance state&#x27;"/><figcaption>In this example, only the bicycle&#x27;s weight and maintenance state may be necessary for the framing at hand. Other properties could be the bicycle&#x27;s color, its shape, brand, etc., but in this case maybe the goal is to imagine how fast one can go, so only these two properties are needed. Everything else is ignored.</figcaption></figure>
<p>Some black boxes will also have an intrinsic behavior or mutability that happens on its own, without being induced by interaction with other black boxes. Remember that black boxes are so called because we choose (or are forced) to ignore what happens inside them. If you look closely, this mutability usually results from interactions of smaller components within and around the system, but those details fall outside the scope of this particular framing, so we treat them as a mysterious &quot;behavior&quot;.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/bicycle_3.webp" alt="Continuing with a basic framing for thinking about bicycle speed, a useful behavior is that it can coast, that is, keep moving at a roughly constant speed without external input like pedaling, at least in some cases." title="Blue box labeled &#x27;Bicycle&#x27; with three properties: &#x27;weight&#x27;, &#x27;maintenance state&#x27;, and &#x27;coast&#x27;"/><figcaption>Continuing with a basic framing for thinking about bicycle speed, a useful behavior is that it can coast, that is, keep moving at a roughly constant speed without external input like pedaling, at least in some cases.</figcaption></figure>
<p>Finally, every black box interacts with the other black boxes in its framing in various ways. The exact mode of interaction depends on many things. In truth, it depends on <em>too many</em> things to hope to keep track of them all in one&#x27;s mind, but that&#x27;s not a problem. This is precisely what framings are for: simplifying reality by ignoring all but the most relevant interactions.</p>
<p>You could represent this interface as a small set of predefined ways in which that black box can relate to other black boxes. These aren&#x27;t outputs or inputs of the system: they are a catalog of the interactions that you consider possible within the scope of the framing.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/bicycle_4.webp" alt="In this example framing, other black boxes might be the person riding the bicycle, the road, and possibly other factors. Depending on how sophisticated your framing is, you might have black boxes for the wind, other vehicles, etc. Then you would have one or more &quot;plugs&quot; for each of those other boxes (see next section)." title="Bicycle box with three interface connections pointing right: &#x27;support body weight&#x27;, &#x27;roll on the ground&#x27;, and &#x27;accept pedaling input&#x27;"/><figcaption>In this example framing, other black boxes might be the person riding the bicycle, the road, and possibly other factors. Depending on how sophisticated your framing is, you might have black boxes for the wind, other vehicles, etc. Then you would have one or more &quot;plugs&quot; for each of those other boxes (see next section).</figcaption></figure>
<p>That is all you really need to think about a &quot;thing&quot; in your mental simulations of the world. But the examples above might leave you with a bad taste in your mouth. Surely there is a more rational and effective way to model a bicycle, you might be thinking. There are more factors to consider, and probably there are better ways to structure the problem to make it easier to solve. I agree! I&#x27;m not saying that this is the optimal way to think about this or any other specific situation: I&#x27;m saying that this is roughly how our brains do it, automatically and inescapably, because of their fundamental structure and composition.</p>
<p>The limitations of this way of building framings are the limitations of the human mind.</p>
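<p>For the programmatically inclined, a black box as described above maps naturally onto a tiny data structure. This is only an illustrative sketch of my own (the class and field names are mine, not part of any formal notation):</p>

```python
from dataclasses import dataclass, field

# A rough encoding of a framing's black box: the properties you choose to
# track, the intrinsic behaviors, and the interface "plugs" that catalog
# the interactions considered possible. Everything not listed simply does
# not exist within this framing.
@dataclass
class BlackBox:
    name: str
    properties: dict = field(default_factory=dict)
    behaviors: list = field(default_factory=list)
    plugs: list = field(default_factory=list)

bicycle = BlackBox(
    name="Bicycle",
    properties={"weight": "11 kg", "maintenance state": "good"},
    behaviors=["coast"],
    plugs=["support body weight", "roll on the ground", "accept pedaling input"],
)
```

<p>The bicycle&#x27;s color and brand are nowhere in the object, and that is the point: defining a black box is as much about what you leave out as about what you put in.</p>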
<h2>Visualizing Models</h2>
<p>Models are just configurations of the black boxes that exist in a framing. Think of a framing as a collection of <em>types</em> of bricks, and of a model as one of the nigh-infinite structures you could build with those types of bricks.</p>
<p>If a framing is about defining what exists, a model is a specific way in which things that exist actually interact and interface with each other. So some people might have an ultra-simplified model of a bicycle&#x27;s movement:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/cycling_model_1.webp" alt="Each of those boxes comes from the framing, and has its own properties, mutability, and interface. Each link (in this case just one) is a case of one of those &quot;plugs&quot; being activated at each side of an interaction between black boxes. I&#x27;m omitting all those details here to avoid clutter." title="Simple model diagram with &#x27;Bicycle&#x27; box connected to &#x27;Road&#x27; box via &#x27;roll on&#x27; relationship"/><figcaption>Each of those boxes comes from the framing, and has its own properties, mutability, and interface. Each link (in this case just one) is a case of one of those &quot;plugs&quot; being activated at each side of an interaction between black boxes. I&#x27;m omitting all those details here to avoid clutter.</figcaption></figure>
<p>Others, perhaps those more familiar with cycling, might have a model that takes into consideration more interactions.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/cycling_model_2.webp" alt="Not only does this model use a larger framing—a greater number of different black boxes—but it considers more possible interactions between those things. For instance, the experienced cyclist may be keenly aware of the risk of skidding, rather than normally rolling, when the road is wet, covered in gravel, etc." title="Complex model diagram showing Bicycle, Cyclist, Road, and Other Vehicle boxes with multiple relationships including &#x27;sit on&#x27;, &#x27;steer&#x27;, &#x27;pedal&#x27;, &#x27;skid&#x27;, &#x27;roll on&#x27;, &#x27;present obstacle&#x27;, and &#x27;move about&#x27;"/><figcaption>Not only does this model use a larger framing—a greater number of different black boxes—but it considers more possible interactions between those things. For instance, the experienced cyclist may be keenly aware of the risk of skidding, rather than normally rolling, when the road is wet, covered in gravel, etc.</figcaption></figure>
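<p>Continuing the illustrative code sketch from earlier (all names hypothetical), a model is then just a chosen set of links, each activating one of the plugs that the framing makes available:</p>

```python
# The framing: which black boxes exist, and which interaction "plugs"
# each of them offers.
framing = {
    "Bicycle": ["roll on", "carry a rider"],
    "Road": ["be rolled on"],
}

# The ultra-simplified cycling model: a single activated link between two boxes.
model = [("Bicycle", "roll on", "Road")]

def boxes_used(links):
    """The black boxes this particular model actually draws from the framing."""
    return {src for src, _, _ in links} | {dst for _, _, dst in links}

def is_valid(links, framing):
    """A model may only activate plugs that its framing defines."""
    return all(src in framing and plug in framing[src] and dst in framing
               for src, plug, dst in links)
```

<p>A richer model, like the experienced cyclist&#x27;s, would simply add more boxes and more links on top of a larger framing.</p>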
<p>There is no predefined or correct way to model a given situation. Depending on what you know, what you remember, and what you care about at a given time, the level of sophistication of your models will change. The healthy tendency, I think, is to simplify them as much as you can get away with.</p>
<p>The problem with this kind of visualization is that it is so easy it is boring. This is such a basic function of our brains that all of this seems obvious—not which exact model is perfect for a specific goal, which is not obvious at all, but the fact that there are <em>some</em> ways to connect things in your mind.</p>
<p>For this reason I will avoid drawing this kind of model diagram in most of the practical examples below. Once you see how the framing is defined, plausible models built on it will promptly pop up in your mind. The juice is in the framings.</p>
<h2>Examples: Commonly Cited Mental Models</h2>
<h3>Occam&#x27;s Razor</h3>
<p>One of the oldest and best-known mental models, powerful and memorable for its simplicity: <em>when faced with two competing explanations for the same phenomenon, the one that makes the fewest (or simplest) assumptions is more likely to be correct</em>.</p>
<p>What this does is create a two-element framing: a world in which there are these black boxes called &quot;explanations&quot; and these other black boxes called &quot;assumptions&quot;, and they connect through the relation &quot;based on&quot;.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/occam_a.webp" alt="" title="Diagram showing two separate boxes: &#x27;Explanation&#x27; with &#x27;be based on assumption&#x27; interface, and &#x27;Assumption&#x27; with &#x27;complexity&#x27; property and &#x27;inform an explanation&#x27; interface"/></figure>
<p>Connect an explanation&#x27;s interface plug to that of one or more assumptions, and you have a model:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/occam_b.webp" alt="" title="Model diagram showing one Explanation box connected to three Assumption boxes via &#x27;has assumptions&#x27; relationship"/></figure>
<p>That model tells you that the number and complexity of the assumptions underlying an explanation determine how likely that explanation is to be correct. For this judgment, nothing else matters: not even the exact contents of explanations and assumptions! They are true black boxes for Occam.</p>
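<p>To see how radical that ignorance is, here is a toy sketch in Python (the explanations and their assumptions are invented for illustration). Occam&#x27;s judgment needs nothing but the number of assumptions each explanation plugs into:</p>

```python
# Toy Occam model: each explanation is a black box whose only visible
# feature is the list of assumptions it is based on.
explanations = {
    "the cat knocked over the vase": [
        "a cat can reach the shelf",
    ],
    "a tiny earthquake knocked over the vase": [
        "earthquakes happen in this area",
        "it was too weak for anyone to feel",
        "it shook only this one shelf",
    ],
}

# The judgment ignores the contents of the boxes entirely;
# only the assumption count matters.
preferred = min(explanations, key=lambda name: len(explanations[name]))
print(preferred)  # -> "the cat knocked over the vase"
```

<p>Note that nothing in the selection rule reads the text of the explanations or assumptions: they are true black boxes for Occam.</p>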
<h3>Jobs to Be Done</h3>
<p>JTBD is a neat mental model proposed by Clayton Christensen to think about business. It shifts the focus from <em>who</em> a customer is or <em>what</em> product they buy, to the underlying &quot;job&quot; they are trying to accomplish. People &quot;hire&quot; products and services to get a job done.</p>
<p>A person doesn&#x27;t buy a drill bit because they want a drill bit; they hire it to create a quarter-inch hole. A fast-food chain might realize people aren&#x27;t just buying a milkshake (the product), they are &quot;hiring&quot; it to make a long, boring commute more interesting (the job).</p>
<p>This shift can lead to ideas that wouldn&#x27;t otherwise come up. For example, the traditional, pre-JTBD framing probably looks something like this.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/jtbd_1.webp" alt="" title="Traditional business framing with three boxes: Customer (with &#x27;cash amount&#x27; and &#x27;desires change over time&#x27;), Product (with &#x27;price&#x27;), and Desire (with &#x27;intensity&#x27;), showing relationships &#x27;have desires&#x27;, &#x27;buy products&#x27;, and &#x27;be about a product&#x27;"/></figure>
<p>Suppose you wanted to increase sales of, say, your milkshake. In this framing, you might think about tweaking the milkshake&#x27;s flavor and price to make it more desirable. This might work, but often it doesn&#x27;t.</p>
<p>Jobs to Be Done uses another framing:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/jtbd_2.webp" alt="" title="Jobs to Be Done framing with three boxes: Customer (with &#x27;cash amount&#x27; and &#x27;JTBD change over time&#x27;), Product (with &#x27;price&#x27; and &#x27;fulfillable jobs&#x27;), and Job (with &#x27;value&#x27;), showing relationships &#x27;have JTBD&#x27;, &#x27;hire products&#x27;, and &#x27;fulfill a job&#x27;"/></figure>
<p>The models you build in this framing may look similar on the surface, but they are not. Whereas before the focus was on the product&#x27;s qualities, now it is about how the product can fulfill a specific job—how it solves a practical problem. In the milkshake example, once you realize that the job it is usually hired for is killing time during a long commute, you might consider making the milkshake thicker so that it lasts longer (an example Christensen himself used).</p>
<p>The milkshake&#x27;s thickness would not be the first thing you consider when thinking about its desirability. By adding this new black box called &quot;job to be done&quot; to your ontology, though, the same idea may become almost obvious. This is a mental model doing its job.</p>
<h3>Try It Yourself</h3>
<p>It&#x27;s actually quite fun and instructive to figure out what framing and model (as defined in this blog) is employed by any of the popular &quot;mental models&quot; floating around on the internet. For example, take <a href="https://medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d" rel="nofollow noopener noreferrer" target="_blank">this post by Gabriel Weinberg</a> and go through his list of mental models. What do they define as &quot;things that exist&quot;, and how do they arrange them together in ways that change how you think?</p>
<p>You&#x27;ll notice immediately that, lacking a clear definition of what they mean by &quot;mental models&quot;, those lists tend to be uneven patchworks of qualitatively different things. Some of those &quot;models&quot; on Weinberg&#x27;s list are not complete framings but single black boxes, presumably meant to be added to existing framings or carved out of the forgotten &quot;rest of the world&quot;. Others are suggestions of new boundaries to draw inside existing black boxes. I think that the clarified definitions I provided should make the roles and proper applications of these different beasts more intuitive.</p>
<h2>Examples: Scientific Models</h2>
<h3>Models in Physics</h3>
<p>Science is <em>the</em> realm of mental (and non-mental) models-as-tools. Most scientific work is about the conscious choice of framings and processes to build, vet, and select the best models inside them. This is a very deliberate, systematic way to work with models. Indeed, it was probably scientists who created the black box called &quot;model&quot; in the first place.</p>
<p>If you&#x27;ve ever taken a high-school physics class, you have used models in this way, too. You might have solved a problem like the following.</p>
<blockquote>
<p>A car is traveling down a straight road at a constant velocity of 10 m/s. The driver then presses the accelerator, causing the car to speed up at a constant acceleration of 2 m/s² for a duration of 5 seconds.</p>
<p>(a) What is the final velocity of the car after those 5 seconds?
(b) How far has the car traveled during this 5-second period of acceleration?</p>
</blockquote>
<p>This is usually solved by framing it like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/phys.webp" alt="" title="Physics framing showing four boxes (car, velocity, acceleration, position) with relationships between them: car has position and velocity, triggers acceleration; velocity modifies position; acceleration modifies velocity"/></figure>
<p>This framing is complex enough to solve problems with constant acceleration like the one above, and you can also use it to build models of objects with &quot;piecewise-constant&quot; acceleration, i.e. where the acceleration is constant at value A for some time, then instantly changes to a different constant value B, and so on.</p>
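<p>A model built in this framing, applied to the problem above, is just the standard constant-acceleration equations. A quick sketch in Python (the function names are mine):</p>

```python
# The framing above, turned into a model for the car problem:
# acceleration modifies velocity, velocity modifies position.
def final_velocity(v0: float, a: float, t: float) -> float:
    return v0 + a * t

def distance_traveled(v0: float, a: float, t: float) -> float:
    return v0 * t + 0.5 * a * t**2

print(final_velocity(10, 2, 5))     # (a) -> 20 (m/s)
print(distance_traveled(10, 2, 5))  # (b) -> 75.0 (m)
```

<p>The black boxes &quot;velocity&quot;, &quot;acceleration&quot;, and &quot;position&quot; appear here only as numbers and the relations between them; everything else about the car is ignored.</p>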
<p>If you have problems where the acceleration changes continuously in more complicated ways, however, this framing will fall short. To solve those, you&#x27;ll have to introduce black boxes like integrals and derivatives.</p>
<p>More advanced topics, like materials science, quantum physics, and general relativity, all proceed in this same way. The only difference is that they handle more complex and counterintuitive framings to solve their respective problems.</p>
<p>It&#x27;s a bit ironic. This core activity of physics—creating appropriate framings to answer questions—is squarely in the realm of &quot;art&quot;, not &quot;science&quot;. There is no predetermined, &quot;right&quot; way to frame a system, and finding those that are good enough is a matter of trial, error, and intuition. Fortunately, for all but the most cutting-edge fields, textbooks offer vast catalogs of pre-built framings to reuse and tweak as needed.</p>
<h2>Examples: Everything Else</h2>
<p>Up to now, you might still be clinging to the idea that framings and models are specialist tools used knowingly in jobs like marketing and research. But no. Consciously or not, we all use framings and models for <em>every single thought</em> we think. We can&#x27;t think in any other way.</p>
<p>The best way to see that is to look at what psychologists and philosophers call &quot;folk science&quot;—the intuitive, implicit understanding of how the world works that we have all had since infancy.</p>
<h3>Folk Physics</h3>
<p>Even toddlers know a range of physical &quot;laws&quot; that allow them to interact with the things around them and make certain modest predictions. These form a set of empirically-known framings that doesn&#x27;t need to be taught, and we all carry them in our minds even as adults. Some of us, over years of education and work, learn to improve them or replace them with more advanced and powerful framings and models, but that is less common than you might think.</p>
<p>For example, people intuitively believe that objects thrown upward will always fall back down. In other words, a simple black box like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/folk_phys.webp" alt="" title="Folk physics framing showing a &#x27;solid object&#x27; box with properties &#x27;direction of movement&#x27; and behavior &#x27;eventually direction always turns downwards&#x27;"/></figure>
<p>This framing works just fine in most everyday situations, even though it is pre-Newtonian in nature. But if you expand the scope of application—e.g. very high velocities—it will stop working. The concept of &quot;escape velocity&quot; doesn&#x27;t exist in this bit of folk physics, and someone will have to teach you a better framing to handle that.</p>
<h3>Folk Biology</h3>
<p>People generally believe that belonging to a species is determined by descent:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/folk_bio.webp" alt="" title="Folk biology framing showing &#x27;organism&#x27; box with &#x27;species&#x27; property connected to interface &#x27;have offspring of the same species&#x27;"/></figure>
<p>In most cases, this is a reasonable assumption. But if this were really true, how would new species begin?</p>
<p>As Daniel Dennett explains well in his book <em>Darwin&#x27;s Dangerous Idea</em>, new species are surprisingly difficult to pin down because they can only be identified retrospectively.</p>
<blockquote>
<p>Speciation ... has a curious property: you can’t tell that it is occurring at the time it occurs! You can only tell much later that it has occurred, retrospectively crowning an event when you discover that its sequels have a certain property.</p>
</blockquote>
<p>Each offspring is always extremely similar to its parents, except for a few tiny mutations here and there. It is only after those offspring have lived their lives, and <em>their</em> offspring have lived <em>theirs</em>, and so on for a while, that we can tell if any given mutation turned out to be a key differentiator of a budding new species. Dennett clarifies this with an analogy:</p>
<blockquote>
<p>Other concepts exhibit similar curiosities. I once read about a comically bad historical novel in which a French doctor came home to supper one evening in 1802 and said to his wife, &quot;Guess what I did today! I assisted at the birth of Victor Hugo!&quot;</p>
</blockquote>
<p>To understand speciation in a less &quot;folk&quot; and more scientific way, you need a bigger framing where, alongside the &quot;organism&quot; black box, you have other black boxes like &quot;population&quot; and &quot;lineage&quot;.</p>
<p>(By the way, improved framings like Dennett&#x27;s lead to a fun realization: you and everyone you know are each yet another attempt at starting a new species. Will you succeed?)</p>
<h3>Folk Psychology</h3>
<p>When you see someone reaching for a glass of water, you intuitively infer they <em>believe</em> the glass contains water and <em>desire</em> to drink it. This attribution of mental states to explain behavior is a typical example of folk psychology, our intuitive approach to making sense of other people.</p>
<p>Folk psychology (and <a href="https://en.wikipedia.org/wiki/Theory_of_mind" rel="nofollow noopener noreferrer" target="_blank">theory of mind</a>) tends to be more complex and fuzzy than those neat physics problems, for obvious reasons. Your implicit framing for the scene above might be something like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/folk_psy.webp" alt="" title="Folk psychology framing with five boxes: Person (with &#x27;have beliefs&#x27;, &#x27;have desires&#x27;, &#x27;take actions&#x27;), Belief (with &#x27;strength&#x27;, &#x27;justifies desire&#x27;), Desire (with &#x27;strength&#x27;, &#x27;be about an object&#x27;, &#x27;incite action&#x27;), Object (with &#x27;availability&#x27;, &#x27;become more/less available&#x27;), and Action (with &#x27;cost&#x27;, &#x27;target an object&#x27;)"/></figure>
<p>If words like &quot;belief&quot; and &quot;desire&quot; seem perfectly natural and clear to you, ask yourself: where are they? What are they made of? There are no &quot;belief neurons&quot; or &quot;desire neural networks&quot; in the human brain. People have looked. These concepts are artificial boundaries that we created to delineate convenient black boxes for our explanations of the world.</p>
<p>But folk psychology creates the risk of what I call <a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">Tunnel Vision</a>—forgetting that you&#x27;re using a simplified framing of the world, and consequently becoming trapped inside it. It&#x27;s when you believe that people <em>have</em> to behave in certain ways, and that you <em>just can&#x27;t</em> break free of certain conventions that society has imposed on you. Almost as if those were laws of physics.</p>
<h2>It&#x27;s All Just UML</h2>
<p>If you have a technical background, you may have noticed that the diagrams I drew above are eerily similar to Unified Modeling Language <a href="https://en.wikipedia.org/wiki/Class_diagram" rel="nofollow noopener noreferrer" target="_blank">class diagrams</a>. UML diagrams are more complex and have more moving parts, but they are essentially the same as those I&#x27;ve been drawing.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/visualizing-framings/Class_Dependency.webp" alt="Samirsyed, CC BY 3.0, via Wikimedia Commons" title="UML class diagram showing Car class with methods like startEngine and manufactureString connected to Wheel class with sizeInt property via a uses relationship"/><figcaption>Samirsyed, CC BY 3.0, via Wikimedia Commons</figcaption></figure>
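<p>For readers without that background: a class diagram is code structure drawn as boxes and arrows. The Car/Wheel diagram above corresponds, roughly, to a sketch like this (the method bodies and exact signatures are my invention):</p>

```python
class Wheel:
    """One box from the diagram: a size property, nothing else exposed."""
    def __init__(self, size: int):
        self.size = size

class Car:
    """The 'Car uses Wheel' arrow becomes a plain reference to Wheel objects."""
    def __init__(self, manufacturer: str, wheels: list[Wheel]):
        self.manufacturer = manufacturer
        self.wheels = wheels

    def start_engine(self) -> str:
        # The diagram only promises that this interface plug exists;
        # what happens inside the black box is deliberately ignored.
        return f"{self.manufacturer}: engine started"

car = Car("Acme", [Wheel(16) for _ in range(4)])
print(car.start_engine())  # -> "Acme: engine started"
```

<p>The boxes are class definitions, the properties are attributes, and the arrows are references between objects: exactly the boundaries, insides, and interface plugs discussed in this post.</p>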
<p>So am I stealing ideas from engineering? Yes! In a Robin Hood way, though. When James Rumbaugh invented this way of representing the relationships between entities, he was thinking only about software architecture. The problems his team was trying to solve were about designing the relationships and interactions between many parts of large computer programs. But what they discovered had a much broader applicability than they seemed to think. Wouldn&#x27;t it be nice if all the visual tooling and methods they built for a specialized sub-discipline turned out to be useful to better understand how the mind works? ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/visualizing-framings/jtbd_2.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Strive as It Might]]></title>
            <link>https://aethermug.com/posts/strive-as-it-might?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/strive-as-it-might</guid>
            <pubDate>Wed, 08 Oct 2025 01:53:18 GMT</pubDate>
            <content:encoded><![CDATA[<div><p class="indent-6 !mb-0">The eyeball&#x27;ll never see</p><p class="indent-6 !mb-0">the color of its iris.</p><p class="indent-6 !mb-0">The hand will never scratch its back.</p><p class="indent-6 !mb-0">The skull will never kiss its nape.</p><p class="indent-6 !mb-0">The clock will never measure</p><p class="indent-6">its keeping of true time.</p><p class="indent-6">Unless, that is, they get external help.</p><p class="indent-6 !mb-0">But what about</p><p class="indent-6">the living mind?</p><p class="indent-6 !mb-0">They say that there is such a thing</p><p class="indent-6 !mb-0">that it is like</p><p class="indent-6">to be what someone is.</p><p class="indent-6 !mb-0">They say that science can&#x27;t explain</p><p class="indent-6 !mb-0">the simple fact</p><p class="indent-6">of feeling like a bat.</p><p class="indent-6 !mb-0">The mind <a href="/posts/embedded-prophesy-devices">predicts</a>,</p><p class="indent-6 !mb-0"><a href="/posts/a-framing-is-a-choice-of-boundaries">divides</a>,</p><p class="indent-6 !mb-0">and <a href="/posts/a-framing-and-model-about-framings-and-models">works its simulations</a>.</p><p class="indent-6">It operates upon the world in ways that science can expound.</p><p class="indent-6 !mb-0">And yet</p><p class="indent-6">the mind will never think itself.</p><p class="indent-6">Unless, that is, it gets external help. ●</p></div>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/strive-as-it-might/ohara_koson_monkey_and_reflection_of_the_moon.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Shake Up Your Boundaries]]></title>
            <link>https://aethermug.com/posts/shake-up-your-boundaries?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/shake-up-your-boundaries</guid>
            <pubDate>Thu, 02 Oct 2025 05:47:22 GMT</pubDate>
            <description><![CDATA[Six assorted examples]]></description>
            <content:encoded><![CDATA[<p>The way we think is entirely based on mentally chopping up the world into discrete entities—things and concepts—pretending they are less connected with everything else than they really are, and forgetting, at least for a while, the details inside them. I described this at length on Plankton Valhalla (my essay site) when I wrote <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">Boundaries Are in the Eye of the Beholder</a>, and in several shorter posts on this blog. Today I want to grok that idea better with a few examples from nature and life.</p>
<p>For those who haven&#x27;t read those other posts, here is a flash summary: I call those generic &quot;things&quot; and &quot;concepts&quot; <em><a href="/posts/a-black-box-view-of-life">black boxes</a></em>. Each black box is an atomic unit in your mind—at least in a given context—and it&#x27;s disarmingly simple. It has a boundary, an (ignored) inside, and an outside that encompasses the rest of the universe.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/black_box.webp" alt="" title="Diagram showing a black circle labeled &#x27;the boundary&#x27; containing patterned interior labeled &#x27;the inside&#x27;, set against a topographic map background labeled &#x27;the rest of the world&#x27;"/></figure>
<p>So when most people talk about going cycling in the hills, they have a black box in their mind called &quot;bicycle&quot; and they don&#x27;t think or care about the individual pedals, the chain, and the frame that compose the bicycle. Their mental model of the weekend excursion treats &quot;bicycle&quot; as an atomic unit that can do a certain number of things and with which one can interact in a certain number of ways. It usually doesn&#x27;t matter why and how those things are physically possible, or what mechanisms make them so. That kind of consideration is unnecessary and would only bog down the thought process, which is simply: &quot;we can sit on bicycles and go nice places&quot;. Mental models are useful because of how much they ignore.</p>
<p>Of course, a bicycle mechanic will have a <a href="/posts/a-framing-and-model-about-framings-and-models">different mental model</a>, with black boxes for &quot;pedal&quot;, &quot;chain&quot;, &quot;saddle&quot;, and so on. Boundaries are in the eye of the beholder, and the mechanic draws them more granularly than most when it comes to the topic of bicycles.</p>
<p>My question is this: if boundaries are arbitrary and subjective, how can I make sure I draw the best boundaries to help me <a href="/posts/language-is-a-bottleneck-for-thought">think better</a>? How do I redraw the boundaries to make sense of things that puzzle me?</p>
<p>Black boxes are, in principle, just closed boundaries, topologically equivalent to circles. This means that there are only three broad ways you can redraw a black box:</p>
<ol>
<li>You can enlarge (or shrink) it</li>
<li>You can merge two or more of them</li>
<li>You can have them intersect</li>
</ol>
<p>Let&#x27;s see some examples for each of these cases.</p>
<h2>1. Enlarge the Boundary</h2>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/1_larger.webp" alt="" title="Diagram showing a small gray circle inside a larger black circle with arrows pointing outward in four directions, illustrating boundary expansion"/></figure>
<p>We&#x27;re already quite comfortable expanding the boundaries in the case of groups of people. For example, we have terms like &quot;team&quot;, &quot;company&quot;, and &quot;family&quot; to indicate units with specific properties that transcend their individual members. But having a word for a group of things is not enough to think correctly in terms of larger black boxes. Here are two examples from nature.</p>
<h3>1.1 Bees</h3>
<p>Many species of bees use a special communication method called the &quot;waggle dance&quot;. When a scout bee finds a good source of food like a group of juicy flowers, it returns to the nest and performs a complex dance choreography that encodes the quality, direction and distance to the food source so that other bees can find it on their own.</p>
<p>I find the existence of a proto-language like this in an insect fascinating. A human can &quot;read&quot; the message and act on that information to find a specific location far from the nest! I&#x27;m tempted to say that each bee acts as the colony&#x27;s eyes, multiplying the collective ability to find food by the number of individuals in it. But reality is more complex than that, and the bee colony is smarter.</p>
<figure><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/Bee_dance.webp" alt="Schematic diagram of bee waggle dance showing two bees, a sun, a red flower, and a wavy path marked &#x27;1 s = 1 km&#x27; with an angle alpha indicating direction"/><figcaption>Image by Jüppsche, Wikimedia Commons, CC-BY-SA 2.5</figcaption></figure>
<p>The thing is, bees are <del>human</del> fallible, too. They make mistakes. Sometimes they even exaggerate their accounts, because their judgments of the quality of nectar sources are inherently subjective. Besides, the signal conveyed by the dance is an approximation, and it might be misinterpreted by those who observe it.</p>
<p>Imagine, then, if the whole colony started dispatching big squads of optimistic bees every time a scout waggled excitedly in a new direction. That would lead to a lot of wasted time and disappointment.</p>
<p>So the colony meets these waggle dances with great skepticism. Most bees in the nest simply ignore the dancing scout altogether, and the few that do observe it generally do nothing at all with that information. If the scout is lucky, it might win over a small number of bees, and they will try to find the food themselves. This small expedition is closer to a group of fact-checkers than to committed harvesters, and often they fail to confirm the original bee&#x27;s account: the colony forgets about it and goes on with its business.</p>
<p>But if the second wave of bees actually finds the promised food, they will return to the nest and perform their own excited dances for the same coordinates. The colony is still skeptical at this point, but the greater number of matching dances will have a bigger chance of grabbing their sisters&#x27; attention. <em>If the information is truthful</em>, this creates a feedback loop, where more dancers mobilize more fact-checkers, which in turn mobilize more bees still, and so on until the colony finally accepts the new location as a veritable source of food.</p>
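<p>The recruitment dynamic described above is, in essence, a quorum-sensing feedback loop. Here is a deliberately caricatured sketch in Python (all numbers, like the recruitment rate and quorum size, are invented for illustration):</p>

```python
def colony_accepts(confirm_rate: float, rounds: int = 10, quorum: int = 50) -> bool:
    """Caricature of waggle-dance recruitment: each dancer recruits a few
    fact-checkers, and only the fraction that confirms the food dances too."""
    dancers = 1.0  # the original scout
    for _ in range(rounds):
        checkers = dancers * 3              # each dancer recruits ~3 fact-checkers
        dancers += checkers * confirm_rate  # only confirmed reports create dancers
        if dancers >= quorum:
            return True   # quorum reached: the colony commits to the new source
    return False          # recruitment fizzled out: the colony moves on

print(colony_accepts(0.9))   # truthful dance: the feedback loop takes off -> True
print(colony_accepts(0.05))  # false alarm: too few confirmations -> False
```

<p>The point of the caricature is the same as the colony&#x27;s: no single bee decides anything; acceptance is a property of the loop as a whole.</p>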
<p>In other words, it is wrong to think that sources of food are found by any individual bee. The boundary should be drawn much larger than that, encompassing the whole community of bees that participates in the beautiful truth-finding feedback process. The colony, <em>as a whole</em>, is the black box capable of reliably finding its new energy sources. The specific actions of single bees are irrelevant and confusing, so (in this case) they belong on the inside of the black box, the part we don&#x27;t need to concern ourselves with.</p>
<h3>1.2 Beech Forests</h3>
<p>A beech tree has a common problem: all sorts of animals love eating its seeds, and they are very good at it. In a typical year, they might eat almost all of its new seeds, which means that the tree won&#x27;t be able to reproduce. Its genetic heritage is prevented from passing to future generations. And since every beech tree has this same problem, the species risks going extinct.</p>
<p>Now, trees have various strategies to protect their seeds from predators, like thorns and poisons, and most of these are self-defense strategies, because each tree fights for itself. Beech trees have some of these, but they also use a very different approach.</p>
<p>Instead of controlling its own production of beechnuts independently, each tree coordinates the amount of seeds it produces with all the trees around it. Beech forests are known to synchronize over vast distances, as wide as Northern Europe, with mechanisms still unknown to science. How is this a defense strategy, though?</p>
<p>In most years, all beeches produce a stable but low amount of seeds, most of which are eaten by their predators, and reproduction is low. But, once every two to eight years, they all have what&#x27;s called a &quot;mast year&quot; in which they produce many more nuts than usual. So many, in fact, that the predators can&#x27;t keep up, and are forced to leave lots of seeds uneaten on the ground. That is when most of the beech trees are able to reproduce.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>This explanation is known as the &quot;satiation hypothesis&quot;, because the strategy saves the seeds by filling up the animals&#x27; bellies, but it might as well be called the &quot;starvation hypothesis&quot;: the beeches are killing their seed-eaters by starving them. The trees can&#x27;t simply produce huge amounts of seeds every year—this would simply cause predator populations to swell until they could gobble up all the seeds again. So the beeches make lots of fruit for one year, allow the animals to multiply, and then lie low for a few years, producing a small number of seeds that are not enough to feed all those baby animals.</p>
<p>The beech forest controls the population of its own predators—the black box that is defending itself, then, is not any individual tree but the immense community of millions of trees participating in the synchronized production. The boundary, here, should be drawn very large indeed.</p>
<h2>2. Merge the Boundaries</h2>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/2_merge.webp" alt="" title="Two black circles merged together into a figure-eight shape, illustrating the merging of boundaries"/></figure>
<p>When two or more things interact closely on a regular basis, you might have to treat them as one new thing altogether.</p>
<h3>2.1 Vehicles and Tools</h3>
<p>In World War accounts, you sometimes encounter expressions like these (emphasis mine):</p>
<blockquote>
<p>Simultaneously, <strong>like some bird of prey</strong>, they fall upon their victim and release their load of bombs on the target</p>
<p>— Sergeant Pruemers, Stuka Dive Bombers</p>
</blockquote>
<blockquote>
<p>I was peering anxiously ahead, for the controller had given us warning of at least fifty enemy fighters approaching very high. When we did first sight them, nobody shouted, as I think we all saw them at the same moment. They must have been 500 to 1000 feet above us and coming straight on <strong>like a swarm of locusts</strong>.</p>
<p>— Richard Hillary, Messerschmitt Fighters</p>
</blockquote>
<p>When you see a squadron of enemy bombers homing in on your home city, you may think of them as people inside flying machines, or you may think of them as terrifying creatures made of steel and explosives. At first, the latter interpretation seems more emotional and inaccurate, but is it really?</p>
<figure><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/bombers.webp" alt="Black and white aerial photograph showing a squadron of military aircraft flying in formation through cloudy skies"/></figure>
<p>When a person enters a killing machine on a mission to kill, they cease to be &quot;just a person&quot;. The range of actions and effects they can bring about in the world changes dramatically. The behavior of the human inside—possibly even their psychology—changes by virtue of this new environment and range of influence. It is of little use thinking about the aircraft and the pilot as two separate entities: that mental model is too clunky and stiff for many of the relevant problems you need to solve.</p>
<p>It may be better, then, to merge their boundaries in your mind for the time being, treat them as one entity that is neither machine nor person. Your thoughts and predictions should focus on the novel and unique properties of this chimera, and your observations on refining your understanding of that black box as a whole.</p>
<p>We tend to do this kind of merging all the time in our daily lives, albeit instinctively. On a city street, you are more likely to say &quot;watch out, a car is coming&quot; than &quot;watch out, a person is coming in a car,&quot; even though the latter would technically be more precise. I think this is for a good reason: the car <em>is</em> the person—the two can&#x27;t be separated.</p>
<p>I&#x27;m not advocating for dehumanizing drivers and pilots, or for lessening their responsibilities in any way. There are real people in there, with mostly rational minds that work mostly like they always do. What I&#x27;m saying is that you should think about a person in a vehicle, and really a person with any tool or technology whatsoever, as a unit that is different from the sum of its elements.</p>
<p>When drivers curse at &quot;that asshole&quot; who dangerously cut in front of them, the &quot;asshole&quot; part is most accurately applied to the human-car unified black box, not just to the person inside. You&#x27;ll often find that the person ceases to be an asshole the instant they step out of the driver&#x27;s seat.</p>
<h3>2.2 Lichens</h3>
<p>Here&#x27;s another one from nature: what we call by the single word <em>lichen</em> is actually two very different organisms merged into a single entity. It is really a fungus hiding a population of green algae or cyanobacteria inside itself.</p>
<p>The fungus, with its strong filaments and its ability to absorb water and minerals from the environment, provides the structure and protection necessary for the photosynthesizing organisms to thrive. The algae inside, in turn, act as the power source of the lichen, converting sunlight into chemical energy.</p>
<figure><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/Lobaria_pulmonaria_010108c.webp" alt="Green Lobaria pulmonaria lichen with deeply lobed, lettuce-like fronds growing on tree bark"/><figcaption>Photo by Bernd Haynold, Wikimedia Commons, Dual License GFDL and CC-BY-SA 1.0, 2.0, 2.5, 3.0</figcaption></figure>
<p>Lichens were the first officially recognized instance of symbiosis in biology. It is a mutually-beneficial relationship, and an inseparable one: you can&#x27;t have a lichen that is all fungus and no algae, because the fungus can&#x27;t do photosynthesis. This is a case in which we learned about the merged boundary before we found that it is a composite of smaller ones.</p>
<p>In most cases it is fine, even sensible, to treat lichens as a single organism, with its unique traits and behaviors, because it acts as a unit at all times.</p>
<p>Scientists, though, have the option of doing the opposite of merging, instead splitting the boundary into two black boxes. This operation has the advantage of making some finer details and mechanisms clearer, and it is a staple of science as a whole (sometimes called <em>reductionism</em>). But, once again, just because a black box can be opened to reveal smaller black boxes inside, it doesn&#x27;t mean that it&#x27;s always a good idea. The right boundary depends on what you are trying to achieve.</p>
<p>(There is another interesting thing about lichens that I should mention. Although the fungus depends on the algae or cyanobacteria as an energy source, the same is not necessarily true the other way around. Those photosynthetic organisms can usually survive well enough on their own, although they certainly benefit from the added protection of the fungus. In a sense, then, the fungus is &quot;farming&quot; its photosynthetic partner, and the relationship is not exactly symmetrical. This leads us to the last major way in which you can redraw boundaries.)</p>
<h2></h2>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/3_intersect.webp" alt="" title="Two black circles partially overlapping, creating an intersection area between them"/></figure>
<p>This, frankly, shouldn&#x27;t happen. The whole meaning of &quot;boundary&quot; is that it stops things from passing through it, right? And yet the fuzzy boundaries that we project onto reality don&#x27;t even respect that constraint properly. Black boxes can intersect even when they don&#x27;t fully merge. Or, perhaps, the merging of two boundaries into one is simply an extreme case of intersection that is made permanent.</p>
<p>Even when two bounded entities make more sense as distinct and autonomous, their interactions may make it impossible to consider them separately for some purposes. These last two examples should help you see this.</p>
<h3>3.1 Horizontal Gene Transfer</h3>
<p>Did you know? The days when biologists understood reproduction and inheritance as the unavoidable hallmarks of evolution are behind us. A century ago, biologists already had hints that bacteria can sometimes absorb genes from other organisms, and be forever changed by them. At first these events were thought to be freak accidents, rare occurrences of limited importance: reproduction is how evolution works, we thought.</p>
<figure><img src="https://aethermug.com/assets/posts/shake-up-your-boundaries/hgt.webp" alt="Phylogenetic diagram showing horizontal gene transfer between Bacteria, Archaea, and Eukaryotes as vertical lines with horizontal connections, labeled with plastids and mitochondria, converging at a common ancestral community"/><figcaption>Nowadays the &quot;tree of life&quot; looks more like a ladder drawn by Picasso. Image by Barth F. Smets, Tamar Barkay, CC-BY-SA 4.0</figcaption></figure>
<p>The picture has gradually changed as the evidence piled up, though. Now this kind of horizontal gene transfer (HGT) is believed to be a major mechanism by which new forms of life evolve and adapt to their environment. It can happen in many ways: some bacteria pick up stray DNA segments floating around, others have viruses implant genes directly into their chromosomes, and others still exchange genes when they come into contact with other bacteria.</p>
<p>HGT was especially common at the dawn of life, when all organisms were single-celled and most susceptible to the infiltration of alien DNA. At the time, it must have affected the evolution of early life forms in major ways. It may not even make sense to talk about distinct species and lineages for them—it was a web and a mishmash of genetic recombination. Nowadays the phenomenon is rarer because of the added complexity of multi-cellular life, but it still happens. By some estimates, the human genome itself contains a non-negligible amount of DNA &quot;sourced&quot; from other life forms.</p>
<p>So we have a situation in which two organisms live and act separately, as their own autonomous black boxes, but may suddenly interact in a way that fundamentally changes their identity.</p>
<p>If you don&#x27;t acknowledge this temporary intersection of their boundaries, some phenomena won&#x27;t make sense: among others, cells with phenotypes different from those of their mother cells, and harmless bacteria becoming deadly simply by being in the presence of the broken remains of infectious ones.</p>
<h3>3.2 Parents</h3>
<p>Speaking of lineages, what about sexual reproduction? Unlike HGT, in the case of a male-female pair of a sexual species no DNA is transferred between the two, and they remain largely unchanged by their rendezvous. They appear to be sharply separate black boxes. Yet, they do something very peculiar together: they need each other in order to produce their offspring—they can&#x27;t each make half a baby and stick the halves together afterwards.</p>
<p>Again, we have a case in which two entities neither are fully independent nor can be merged into a single boundary, as in the case of a symbiotic relationship. Their boundaries intersect only temporarily, but the effects of that intersection are unique and long-lived.</p>
<p>More generally, every interaction is necessarily an overlapping of boundaries, a cooperation that transcends the contributions of both sides and becomes something else of its own. This may seem obvious in the cases of parents making children, but it is a universal truth that applies to much more than that.</p>
<p>We tend to focus on the individual people or elements in a situation—often only on the single biggest contributor—and forget that the interaction <em>between</em> those elements is more important than any single contributor. Too wrapped up in our consideration of the different black boxes, we fail to look at their intersections.</p>

<p>How to redraw your boundaries? A starting point is to ask yourself:</p>
<ol>
<li><strong>Am I looking at the right scale of organization?</strong> Instead of blaming Bill, maybe I should blame the whole situation Bill was in.</li>
<li><strong>Can I really consider those two elements separately?</strong> The killer was not the gun, the killer was not the human who pulled the trigger, the killer was the union of the two: the armed person.</li>
<li><strong>Is the interaction more important than the individuals here?</strong> Forget who did &quot;A&quot; and who did &quot;B&quot; for a minute: how does &quot;AB&quot; work? ●</li>
</ol>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/shake-up-your-boundaries/ales-cerin-H2DwGkqZCMc-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[What's the Deal With Counterfeit People?]]></title>
            <link>https://aethermug.com/posts/what-s-the-deal-with-counterfeit-people?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/what-s-the-deal-with-counterfeit-people</guid>
            <pubDate>Wed, 24 Sep 2025 00:58:45 GMT</pubDate>
            <description><![CDATA[I asked a counterfeit person about it]]></description>
            <content:encoded><![CDATA[<p><em>Note: This is a short conversation I had with Claude Sonnet (Anthropic&#x27;s AI bot) about the late philosopher Daniel Dennett&#x27;s opinions on generative AI. Dennett is one of my philosophical heroes. I&#x27;ve abridged some of Claude&#x27;s answers to spare you the boring bits while leaving the intellectual backflips in. Debating with LLMs is an underrated pastime.</em></p>

<h1>🧠</h1>
<p><strong>Marco</strong>: In 2023, in the last year of his life, Daniel Dennett used most of the spotlight he could get to <a href="https://www.bbc.com/future/article/20240422-philosopher-daniel-dennett-artificial-intelligence-consciousness-counterfeit-people" rel="nofollow noopener noreferrer" target="_blank">vehemently criticize</a> the rise of AI chatbots. He had issues with those who build and sell AI &quot;agents&quot; in the form of human imitations. He called the mimicking AI <em>counterfeit people</em>, and said that &quot;it would be reassuring to know that major executives, as well as their technicians, [who created these imitations] were in jeopardy of spending the rest of their life in prison in addition to paying billions in restitution for any violations or any harms done.&quot; (<a href="https://archive.is/aDwCP" rel="nofollow noopener noreferrer" target="_blank">source</a>)</p>
<p>Dennett is one of my favorite authors, and I&#x27;ve read several of his books and papers. Such strong condemnatory statements coming from anyone else would leave me indifferent, but I know Dennett was a nuanced, calm debater. He must have been seriously concerned about this issue of counterfeit people.</p>
<p>According to his interviews, he saw the problem of not knowing if the author of a message is human or not as a terrible danger to society, &quot;another pandemic&quot;. He framed it as a systemic destruction of social trust. This is surprising to me for a couple of reasons.</p>
<p>First, he often insisted that there is nothing magical about the human mind, that it is a physical, algorithmic process—however complex and mysterious it is for us today. It seems out of character for him to claim that an AI behaving exactly like a human would be a sacrilege.</p>
<p>Second, it seems to me that the erosion of trust and the mass production of lies and misinformation already reached near-completion a decade or more ago. Even long before that, even before the evolution of humans, some individuals have tried to deceive others, and rarely could one have full certainty that any message was truthful and not from an impostor. We already live in doubt of being scammed, fooled, and lied to. It&#x27;s not great, but we survive well enough.</p>
<p>How is AI going to make this unbearably worse, according to Dennett?</p>
<h1>🤖</h1>
<span class="font-true-sans col-start-1 col-span-1"><p><strong>Claude</strong>: Dennett&#x27;s concern isn&#x27;t about AI mimicking humans being inherently wrong—you&#x27;re right that would contradict his physicalist views. His alarm centers on <em>scale and indistinguishability</em>.</p><p>The key difference from historical deception is automation. Previously, lies required human effort per instance. Now, AI can generate millions of convincing fake communications simultaneously, overwhelming society&#x27;s ability to verify authenticity. One bad actor can impersonate thousands of people across multiple platforms instantly.</p><p>Dennett saw this creating an <em>epistemic crisis</em>—not just more lies, but the collapse of reliable methods for distinguishing truth from falsehood. When anyone can be anyone online, and the fake communications are indistinguishable from real ones, social institutions that depend on authentic human interaction begin to fail.</p></span>
<h1>🧠</h1>
<p><strong>M</strong>: One could argue that even truthful messages by genuine, verified humans are &quot;slop&quot; and useless most of the time, at least online. Governments (e.g., Russia and North Korea) have already managed to scale misinformation even before [generative] AI existed. We&#x27;re all already used to that. Why would AI messages be automatically bad?</p>
<p>If the content created by counterfeit people is actually truthful, it should be fine. If it&#x27;s false, some people will fall for it as they fell for false information 10 years ago, and others will find ways to confirm its truthfulness, as they already do. Intelligent people and experienced people will take care not to believe anything 100%, instead keeping a healthy dose of uncertainty in mind—a practice that has been excellent advice for millennia.</p>
<figure><img src="https://aethermug.com/assets/posts/what-s-the-deal-with-counterfeit-people/1024px-Dan_Dennett_at_TED.webp" alt="Elderly man with white beard and glasses sitting at TED conference"/><figcaption>Dan Dennett in 2009. Photo by Erik Charlton from Menlo Park, USA, CC BY 2.0, via Wikimedia Commons</figcaption></figure>
<h1>🤖</h1>
<span class="font-true-sans col-start-1 col-span-1"><p><strong>C</strong>: Your counterarguments have merit, but miss Dennett&#x27;s specific worry. He&#x27;s not claiming all AI content is inherently bad or that lies are new. His concern is about the collapse of <em>authentication mechanisms</em> that society relies on.</p><p>Dennett&#x27;s alarm isn&#x27;t about content quality—you&#x27;re right that human &quot;slop&quot; exists. It&#x27;s about epistemic collapse: &quot;the cost of such deepfakes to human trust will be enormous&quot; because we&#x27;ll lose reliable methods to distinguish authentic human participation from artificial simulation.</p><p>Democracy requires distinguishing genuine public opinion from manufactured consent. Markets need real human preferences versus artificial demand signals. Even your &quot;intelligent people keeping uncertainty&quot; approach breaks down when the baseline assumption—that you&#x27;re interacting with humans—becomes unreliable.</p></span>
<h1>🧠</h1>
<p><strong>M</strong>: Might this trend not be a blessing in disguise, though? The trust issues exist only for online media, where you don&#x27;t directly see the person speaking to you. Perhaps the erosion of trust will push us back into the habit of listening to flesh-and-blood humans in front of our eyes—not cameras—first and foremost. That sounds like a healthy habit, healthier than anything we&#x27;ve had since around 2001.</p>
<h1>🤖</h1>
<span class="font-true-sans col-start-1 col-span-1"><p><strong>C</strong>: If AI-driven epistemic collapse forces retreat from digital-mediated trust back to in-person verification, we might inadvertently solve several problems simultaneously. A forced return to prioritizing physical presence could restore community bonds, reduce polarization from algorithmic echo chambers, and rebuild the social trust that digital platforms have systematically undermined.</p><p>But Dennett would likely argue this retreat isn&#x27;t feasible at scale. Global coordination—from climate action to economic policy—increasingly requires digital communication networks. Financial markets, supply chains, and democratic institutions now depend on remote verification systems. Retreating to purely local, in-person trust networks might preserve small communities while leaving larger coordination problems unsolved.</p></span>
<h1>🧠</h1>
<p><strong>M</strong>: Isn&#x27;t it ironic that &quot;you&quot;, an AI pretending to be a person conversing with me, are agreeing with Dennett&#x27;s concerns about counterfeit people?</p>
<h1>🤖</h1>
<span class="font-true-sans col-start-1 col-span-1"><p><strong>C</strong>: You&#x27;ve caught a genuine irony there. I&#x27;m essentially validating concerns about my own existence as a potential threat to authentic human discourse.</p><p>But this highlights something important about Dennett&#x27;s argument: the problem isn&#x27;t AI systems like me existing, it&#x27;s whether I&#x27;m transparent about what I am. Right now, you know you&#x27;re talking to Claude, not a human. The conversation has clear context about my nature as an AI system.</p><p>Dennett&#x27;s &quot;counterfeit people&quot; concern targets AI that deliberately masquerades as human—fake social media accounts, deepfake phone calls claiming to be specific real people, bots designed to deceive rather than assist. The deception is the core issue, not the AI capability itself.</p><p>🤖...</p><p>The irony cuts deeper though: I&#x27;m using human-like reasoning patterns to analyze why human-like AI might be dangerous. My agreement with Dennett&#x27;s concerns doesn&#x27;t validate them—it could just demonstrate that I&#x27;m trained on human arguments and reflecting them back convincingly.</p><p>This raises the question: does my transparency about being AI actually solve the trust problem? You still don&#x27;t know if my analysis is genuinely reasoned or sophisticated pattern matching. The epistemic challenge remains even when the AI is honest about its nature.</p><p>Perhaps the real test isn&#x27;t whether AI can think, but whether human society can maintain meaningful discourse when it becomes impossible to distinguish between the two—regardless of whether the AI is trying to deceive. ●</p></span>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/what-s-the-deal-with-counterfeit-people/walker.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Hi No Youjin]]></title>
            <link>https://aethermug.com/posts/hi-no-youjin?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/hi-no-youjin</guid>
            <pubDate>Wed, 17 Sep 2025 05:23:40 GMT</pubDate>
            <description><![CDATA[On quaint devices for time-travel]]></description>
            <content:encoded><![CDATA[<h2>Touch Stone, Touch Wood</h2>
<p>One of the luxuries of living in Rome is getting to visit incredibly ancient ruins any time you want. There are literally <em>too many</em> of them (public infrastructure works need to stop for months whenever they accidentally dig up an imperial-era domus or an Ionic capital, so they feel more like journeys through a minefield than construction projects). It is a luxury that, in retrospect, I didn&#x27;t take advantage of enough during the many years I lived in the Eternal City (when I tell people in Japan that &quot;I&#x27;m from Rome,&quot; some react as if I&#x27;d told them my uncle is Julius Caesar).</p>
<figure><img src="https://aethermug.com/assets/posts/hi-no-youjin/olivia-anne-snyder-EOdVI7cech4-unsplash.webp" alt="Ancient Roman Colosseum interior showing weathered stone walls and arched passages"/><figcaption>Photo by Olivia Anne Snyder, Unsplash</figcaption></figure>
<p>That&#x27;s not to say that I didn&#x27;t appreciate the city&#x27;s treasures, though. Simply standing in front of one of those sun-washed marble or brick walls, touching them, feeling them loom over me, is never not <em>awesome</em> in the literal sense. While I&#x27;m there, all the modern technology and cultural paraphernalia around me fade away from my consciousness. I no longer see or hear the cars and sneaker-wearing people around me, because I&#x27;m transported three quarters of a million days into the past, when those stones were cut and set down so heavily as to stay there forever.</p>
<p>I can&#x27;t see ancient Rome in my mind, but I can feel this strange connection to the anonymous human beings who stood, two millennia ago or more, in that very same spot, touched with fingers just like mine the same rough patch in the stone, experienced the same palette of emotions, instincts, and weaknesses as I do. I feel the urge to observe them, to ask them questions. Time itself seems to stop mattering. A lifespan seems briefer than a spark.</p>
<p>For many years, I thought these time-traveling powers were essentially a property of rocks—one of the most durable substances we encounter in daily life. Stone and metals carry eternity inside them, I thought. Temples and walls, being stone shaped by human hands, simply make that eternity more palpable to other humans. What I felt was perhaps a mineral view of the world. I now see that I was wrong.</p>
<p>After moving to Japan, I learned that the same epochal awe—the same sense of dissolving time and cross-generational connection—can be evoked even by a building containing no stone at all. Douglas Adams explained why:</p>
<blockquote>
<p>I remembered once, in Japan, having been to see the Gold Pavilion Temple in Kyoto and being mildly surprised at quite how well it had weathered the passage of time since it was first built in the fourteenth century. I was told it hadn’t weathered well at all, and had in fact been burnt to the ground twice in this century. “So it isn’t the original building?” I had asked my Japanese guide.</p>
<p>“But yes, of course it is,” he insisted, rather surprised at my question.</p>
<p>“But it’s burnt down?”</p>
<p>“Yes.”</p>
<p>“Twice.”</p>
<p>“Many times.”</p>
<p>“And rebuilt.”</p>
<p>“Of course. It is an important and historic building.”</p>
<p>“With completely new materials.”</p>
<p>“But of course. It was burnt down.”</p>
<p>“So how can it be the same building?”</p>
<p>“It is always the same building.”</p>
<p>I had to admit to myself that this was in fact a perfectly rational point of view, it merely started from an unexpected premise. The idea of the building, the intention of it, its design, are all immutable and are the essence of the building. The intention of the original builders is what survives. The wood of which the design is constructed decays and is replaced when necessary. To be overly concerned with the original materials, which are merely sentimental souvenirs of the past, is to fail to see the living building itself.</p>
<p>— Douglas Adams, Last Chance to See</p>
</blockquote>
<p>Most important buildings in Japan&#x27;s history up to the 19th century were built in wood, so they naturally rot and deteriorate over time. Devastating fires and earthquakes were frequent, and many important landmarks were destroyed and rebuilt several times over.</p>
<p>When building the Golden Pavilion or Ryoan-ji, no one expected them to last forever. I suspect, like Douglas Adams, that wasn&#x27;t the point at all. It&#x27;s no coincidence that some of the oldest companies in the world are <a href="https://www.youtube.com/watch?v=5k9g1WqRM0M" rel="nofollow noopener noreferrer" target="_blank">Japanese temple carpenters</a>.</p>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-3"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/kwfd3gF6uOA" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<div class="text-center text-sm text-ash dark:text-faintLanternWhite mt-3 italic mb-8 col-start-1 col-span-1"><p>There are many carpenters even today specialized in all the ancient techniques necessary to build Buddhist temples and Shinto shrines.</p></div>
<p>So immutable rock is not essential to the time-traveling effects I found in Roman monuments. What, then, is the key ingredient?</p>
<p>It&#x27;s not just physical age, either—the materials don&#x27;t need to be that old at all. A modern residential building completed in 2013 doesn&#x27;t incite the same sense of wonder in me as the Ise Grand Shrine, even though the shrine was last rebuilt in the same year (they&#x27;ve been <a href="https://www.youtube.com/watch?v=HTrSx1peqUo" rel="nofollow noopener noreferrer" target="_blank">doing it all over again</a> every 20 years since 692).</p>
<p>What I discovered is that something doesn&#x27;t even have to be a building to work its magic on me. I learned this only a few years ago, from the comfort of my own living room.</p>
<h2>Intangible Paths</h2>
<p>At the time, I lived in Nezu, an old Tokyo neighborhood a stone&#x27;s throw from the famous Ueno Park. The whole area was once part of Edo&#x27;s <em>shitamachi</em>, or &quot;low town,&quot; where commoners—merchants, artisans, and craftspeople—lived and worked. Of course it has undergone much transformation over the years, but its sleepy side alleys and hidden temples retain a vintage charm that you can&#x27;t find in many other parts of modern Tokyo.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/hi-no-youjin/nezu_no_zu.webp" alt="I used to live here. More or less."/><figcaption>I used to live here. More or less.</figcaption></figure>
<p>On winter evenings in Nezu, when darkness had just reached its fullness, I would sometimes hear a peculiar sound and a voice coming from the back street outside my window.</p>
<p>The sound was that of a <em>hyoushigi</em>, a traditional instrument consisting of a pair of hardwood or bamboo sticks that are banged together to produce a pleasant, high-pitched wooden click. It&#x27;s often used in traditional Japanese arts, like sumo matches and theater performances, but what I heard in Nezu was nothing of the sort. From the street came two clicks, followed by an old man&#x27;s voice chanting <em>hi no youjin!</em> (火の用心), which means &quot;be careful with fire!&quot;</p>
<p>The routine was repeated once every twenty or thirty seconds. It felt too peaceful, too ritualistic to be someone scolding a child or sounding an alarm. I didn&#x27;t know exactly what it was, but the whole thing felt wonderfully quaint.</p>
<figure><img src="https://aethermug.com/assets/posts/hi-no-youjin/willian-justen-de-vasconcellos-Njd1TRaJj7w-unsplash.webp" alt="Traditional Kyoto street at dusk with wooden buildings leading to a five-story pagoda"/><figcaption>Photo by Willian Justen de Vasconcellos, Unsplash</figcaption></figure>
<p>When I looked it up, I learned that it was a &quot;fire watch patrol&quot;—volunteers walking around the neighborhood to remind all inhabitants never to forget the dangers of fire. Even today, most buildings in the area are made of wood, and a single spark could mean destruction for hundreds of households. To my delight, I also read that this practice hasn&#x27;t changed since at least the year Keian 1 (1648), when a directive from the town authorities of Edo containing the exact term <em>hi no youjin</em> ordered:</p>
<blockquote>
<p>町中の者は交代で夜番すべし。月行事はときどき夜番を見回るべし。店子たちは各々火の用心を厳重にすべし。</p>
<p>People in the town should take turns doing night watch. Monthly officials should occasionally inspect the night watch. Tenants should each be strictly careful about fire prevention (<em>hi no youjin</em>).</p>
</blockquote>
<p>There it was again, that feeling! Now, every time I heard the old patrolling man&#x27;s voice intoning his <em>hi no youjin</em> in the night, I had to stop whatever I was doing and just listen. I was again transported to that place outside time. I had no trouble at all imagining the same exact voice echoing through that same street four hundred years ago, when everyone went around in straw sandals carrying paper lanterns in the night. They were there now, right outside my house!</p>
<p>And so I never once looked out the window to see the source of that voice. I just listened as it repeated its incantation at slow, regular intervals, crisp in the dead silence of Japanese neighborhoods. Faint at first, then louder, then fainter again, and then it was gone. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/hi-no-youjin/jacob-gonzales-SnOmJ_lLWeQ-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Claude Sonnet Is Teaching Me English]]></title>
            <link>https://aethermug.com/posts/claude-sonnet-is-teaching-me-english?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/claude-sonnet-is-teaching-me-english</guid>
            <pubDate>Wed, 03 Sep 2025 01:16:36 GMT</pubDate>
            <description><![CDATA[Today, even a lazy after-thought can become a useful project]]></description>
            <content:encoded><![CDATA[<p>Once upon a time I published my English mistakes for the whole world to see. Those were simpler, wilder days. Then language models, far more capable with English than I, took on the role of proofreaders of every blog post I wrote. I would write my thing, polish it to the best of my abilities, then hand it over to GPT-4.1 and friends for a final brush-up of the prose. For a while, I thought that was all I needed.</p>
<p>The next phase seems inevitable in retrospect: instead of fixing my text, AI should be fixing <em>me</em>. Teach a man to fish, the saying goes. <del>Humans are the root of all problems and must be optimized away.</del></p>
<p>So I built (well, I had Claude Sonnet 4 build) an ultra-customized English improvement system based only on my own specific mistakes.</p>
<p>It takes the entire history of proofreading corrections that AI models applied to my blog posts in the past 6 months, and analyzes the hell out of them. Then it presents the results to me like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/dashboard_1.webp" alt="" title="English Writing Analysis dashboard showing 94 total corrections, 4 error categories, with Word Choice as the most common issue, alongside pie chart and bar graph"/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/dashboard_2.webp" alt="" title="Grammar examples section showing subject-verb disagreement and past participle corrections with before/after comparisons in red and green boxes"/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/dashboard_3.webp" alt="" title="Word choice examples showing comma usage with parenthetical elements and vocabulary precision improvements with frequency and severity data"/></figure>
<p>This kind of analysis is feasible because I write my posts in Markdown, a plain-text markup language. While writing, my text looks like this:</p>
<pre><code>Words follow each other, and those that I want to italicize I simply surround with *single asterisks*, those I want to bold (embolden?) I surround with **double asterisks**.

Lists are just:

- Hyphens...
1. or numbers I type...
2. directly into the text editor
   
And so on, with a small number of other formatting conventions like that, entirely based on typing special characters with my clumsy fingers.
</code></pre>
<p>This means that I can write everything in plain-text files in a standard code editor, and all the amazing AI coding tools can access, edit, and interact in various ways with everything I write. I also get to keep a complete and exact <a href="https://en.wikipedia.org/wiki/Version_control" rel="nofollow noopener noreferrer" target="_blank">version control</a> history of every single change applied to any of the files in the website.</p>
<p>For this reason, the AI has full access to a catalog of all of my mistakes in a before/after format:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/diff.webp" alt="" title="Code diff view showing before and after text corrections, highlighting changes in red and green for proofreading improvements"/></figure>
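<p>The before/after catalog is just unified-diff text, so pulling candidate pairs out of it needs nothing fancier than a few lines of parsing. A minimal sketch in Python (the sample diff is invented for illustration, not taken from my actual repository):</p>

```python
# A toy unified diff, like the ones version control produces
# for each proofreading correction.
sample_diff = """\
--- a/posts/example.md
+++ b/posts/example.md
@@ -1,2 +1,2 @@
-This are a example sentence.
+This is an example sentence.
 Unchanged line.
"""

# Collect removed ("-") and added ("+") lines, skipping the
# "---"/"+++" file headers, then zip them into before/after pairs.
befores, afters = [], []
for line in sample_diff.splitlines():
    if line.startswith("-") and not line.startswith("---"):
        befores.append(line[1:])
    elif line.startswith("+") and not line.startswith("+++"):
        afters.append(line[1:])

pairs = list(zip(befores, afters))
print(pairs)  # [('This are a example sentence.', 'This is an example sentence.')]
```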
<p>This makes it easy to create a processing pipeline that uses AI to categorize and explain the mistakes, then some old-fashioned code to tally up the numbers, then more AI calls to pick representative examples. Everything then goes into a web page available only to me.</p>
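<p>For the curious, the &quot;old-fashioned&quot; tallying step is almost trivial once each correction has been labeled. A hypothetical sketch in Python, where the records stand in for whatever the AI categorization step actually emits:</p>

```python
from collections import Counter

# Stand-in correction records; in the real pipeline an AI call
# attaches the "category" label to each before/after pair.
corrections = [
    {"before": "a advice", "after": "some advice", "category": "Grammar"},
    {"before": "However the", "after": "However, the", "category": "Punctuation"},
    {"before": "strong experience", "after": "extensive experience", "category": "Word Choice"},
    {"before": "big precision", "after": "high precision", "category": "Word Choice"},
]

# Plain old deterministic code does the counting; no AI needed here.
tally = Counter(c["category"] for c in corrections)

for category, count in tally.most_common():
    print(f"{category}: {count}")
```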
<p>I plan to run this again periodically, to see if the number and distribution of mistakes evolve over time as I try to fix my English.</p>
<p>While I was at it, I also asked Claude to create a mini-game to test me on my weakest points:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/game_1.webp" alt="Before I answer..." title="Interactive practice game interface showing 2 correct out of 2 total answers with 100% accuracy, featuring Submit Answer, Get Hint, and Skip buttons"/><figcaption>Before I answer...</figcaption></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/game_2.webp" alt="...and after." title="Practice game showing correct answer feedback with explanation about comma usage in dependent clauses, categorized as punctuation with medium difficulty"/><figcaption>...and after.</figcaption></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/game_3.webp" alt="This one was tricky, mainly because there could be so many words that could be substituted for &quot;strong&quot;. The game is currently too brittle to accept variants." title="Practice game displaying incorrect answer feedback, explaining that &#x27;extensive experience&#x27; is preferred over &#x27;strong experience&#x27; for word choice improvement"/><figcaption>This one was tricky, mainly because there could be so many words that could be substituted for &quot;strong&quot;. The game is currently too brittle to accept variants.</figcaption></figure>

<p>This was a quick experiment, and I can think of many ways to improve this system. For example, the generated exercises are based on my own mistakes, but some of them tend to be either too easy or almost impossible (I had to ask it not to give me word-choice problems, because they are too open and subjective). This could be improved with some more prompt engineering. Better still, I could build interactive LLM evaluation and feedback straight into the interface with a little more work.</p>
<p>But that&#x27;s not really what I want to show you. What I want to show you is this: a new way of using computers is becoming possible with AI agents.</p>
<p>I already pay the $20 per month for my Claude subscription for other reasons, and this whole exercise took less than 2 hours of my time, spread over a couple of days. Its usefulness, however, might extend for months or years. And this is only an easy example to share: lately I find myself building these kinds of ad-hoc, AI-enhanced solutions to the little inconveniences in my life all the time.</p>
<figure><img src="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/chart_3.webp" alt="Severity analysis bar chart showing error distribution across grammar, punctuation, word choice, and style categories with high, medium, and low severity levels"/><figcaption>Apparently word choice is the &#x27;tallest&#x27; category of mistakes I make.</figcaption></figure>
<p>Before AI, I would never have dreamed of building a tailor-made system like this. Not because it can&#x27;t be done without AI (it can), nor because I couldn&#x27;t write the code myself, but simply because it would have taken considerably more of my time—more than I think it&#x27;s worth.</p>
<p>In other words, a host of useful, but previously too costly, solutions is becoming affordable with these tools. Not just fancy products by trendy startups, but one-of-a-kind apps conceived, created, and consumed <em>by you alone, nearly for free</em>.</p>
<p>I never planned to have AI make statistics and games out of my punctuation mistakes: I just suddenly realized that it was a low-hanging fruit right there, waiting for me (and no one else) to pick it. It&#x27;s the era of <a href="https://www.robinsloan.com/notes/home-cooked-app/" rel="nofollow noopener noreferrer" target="_blank">applications as home-cooked meals</a>.</p>
<p>The tooling for these things is getting better every month. Today, this kind of solution still needs a modicum of technical and programming skills, but I expect the situation to be quite different one year down the road. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/claude-sonnet-is-teaching-me-english/212461fgsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[There Is Thinking and There Is Thinking and There Is Thinking]]></title>
            <link>https://aethermug.com/posts/there-is-thinking-and-there-is-thinking-and-there-is-thinking?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/there-is-thinking-and-there-is-thinking-and-there-is-thinking</guid>
            <pubDate>Wed, 27 Aug 2025 23:59:20 GMT</pubDate>
            <description><![CDATA[Should we anthropomorphize LLMs?]]></description>
            <content:encoded><![CDATA[<p>The computer-science-adjacent forum Hacker News had a very interesting conversation going on last month titled <a href="https://news.ycombinator.com/item?id=44484682" rel="nofollow noopener noreferrer" target="_blank">A non-anthropomorphized view of LLMs</a>. That is actually the title of a <a href="https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html" rel="nofollow noopener noreferrer" target="_blank">blog post</a>, but (although the post itself is interesting) I was more fascinated by what the Hacker News commenters had to say about it.</p>
<p>halvar.flake, the author of the original post, says that talking about LLMs as if they were people feels wrong to him, and that we should instead treat them as what they are: &quot;mathematical objects&quot; and algorithmic &quot;functions.&quot; We must not, he says, use terms like &quot;thinking&quot; and &quot;waking up.&quot; He writes:</p>
<blockquote>
<p>Instead of saying &quot;we cannot ensure that no harmful sequences will be generated by our function, partially because we don&#x27;t know how to specify and enumerate harmful sequences&quot;, we talk about &quot;behaviors&quot;, &quot;ethical constraints&quot;, and &quot;harmful actions in pursuit of their goals&quot;. All of these are anthropocentric concepts that - in my mind - do not apply to functions or other mathematical objects. And using them muddles the discussion, and our thinking about what we&#x27;re doing when we create, analyze, deploy and monitor LLMs.</p>
</blockquote>
<p>There is no doubt that he is technically correct: language models are literally &quot;transformers&quot; of information in the form of numbers, implemented on semiconductor chips—<em>not</em> evolved organic neurons inside living bodies.</p>
<p>But <a href="https://news.ycombinator.com/item?id=44487564" rel="nofollow noopener noreferrer" target="_blank">the top commenter on the Hacker News thread</a> immediately casts doubt on halvar.flake&#x27;s take with some very compelling points: while LLMs are indeed mathematical objects, they do things that are <em>very unlike</em> most other kinds of math functions, so it is not useful to treat them <em>only</em> as such: &quot;we need a higher abstraction level to talk about higher level phenomena in LLMs.&quot;</p>
<p>And since LLMs imitate humans in many ways (continues the top commenter), it is only natural to use human-centered language to describe some aspects of them. It seems to me that they are accusing the blogger of <a href="https://en.wikipedia.org/wiki/Greedy_reductionism" rel="nofollow noopener noreferrer" target="_blank">greedy reductionism</a>, the same kind of thinking that would make you say &quot;the Mona Lisa is just a few grams of dry paint particles on a canvas&quot;—<em>technically</em> right, but kind of <em>missing the point</em>.</p>
<p>Although the commenter doesn&#x27;t say this explicitly, it sounds to me like they&#x27;re applying a weak form of the <a href="https://en.wiktionary.org/wiki/if_it_looks_like_a_duck,_swims_like_a_duck,_and_quacks_like_a_duck,_then_it_probably_is_a_duck" rel="nofollow noopener noreferrer" target="_blank">duck test</a> to this problem: if it talks like a person and it reacts like a person, then treat it as a person at least to some extent.</p>
<p>This sparked an interesting thread of characteristically blunt-and-polite Hacker News arguments and counter-arguments. Computer scientists might understand anthropomorphic terms applied to LLMs as mere metaphors, says another commenter, and this would make such language mostly harmless for them, but the problem is that the public at large will be hopelessly confused or harmfully misled. Other commenters go so far as to propose the creation of whole new verbs for this strange category of action, to distinguish it both from human agency and from that of &quot;traditional&quot; mathematical operations.</p>
<figure><img src="https://aethermug.com/assets/posts/there-is-thinking-and-there-is-thinking-and-there-is-thinking/kimi_thinking.webp" alt="Screenshot of AI chatbot conversation where user asks &#x27;What do you think I think you think?&#x27; and Kimi K2 responds &#x27;I think that you think I think that you think I possess a rather deterministic sense of self-modeling, and consequently you suspect I believe you expect me to say, \“I think that you think I think that you think I am merely executing predetermined algorithms.\”&#x27;"/></figure>
<p>What I find amusing about this discourse is how it mirrors almost exactly the one that has been raging for a hundred and fifty years (and counting) in biology about the applicability and defensibility of teleological—purpose-centered—terminology in evolution. The only difference is that the subject of <em>that</em> debate is not a technology but living things and evolutionary processes: Is it okay to say that a meerkat makes an alarm call because it &quot;wants&quot; to inform its gang about an approaching predator? Did eyes evolve &quot;so that&quot; an organism could see?</p>
<p>Biologists all agree that there isn&#x27;t <em>really</em> any intention involved in most of these biological facts, unlike when we use those expressions for humans. And that&#x27;s where the agreement ends.</p>
<p>Many major figures in the field, Charles Darwin among them, have held that teleological terms are metaphors, yes, but necessary ones: trying to speak of such processes in any more &quot;accurate&quot; way would be very inconvenient and, frankly, too much of a hassle, so let us accept the compromise and get on with our work.</p>
<blockquote>
<p>This preservation, during the battle for life, of varieties which possess any advantage in structure, constitution, or instinct, I have called Natural Selection; and Mr. Herbert Spencer has well expressed the same idea by the Survival of the Fittest. <strong>The term &quot;natural selection&quot; is in some respects a bad one, as it seems to imply conscious choice; but this will be disregarded after a little familiarity.</strong></p>
<p>— Charles Darwin, The Variation of Animals and Plants under Domestication (emphasis mine)</p>
</blockquote>
<p>Other scientists and philosophers disagree, insisting that using words like &quot;purpose&quot; and &quot;function,&quot; and speaking of organisms or species as if they were conscious, intelligent agents, is just plain wrong. For a long time, any mention of teleology was almost taboo among evolutionary scientists.</p>
<blockquote>
<p>During my own education, I was repeatedly warned against teleological thinking, and its close cousin anthropomorphism, by lecturers who spoke of the heart as a pump <strong>for</strong> the circulation of blood and of RNAs as messengers <strong>for</strong> the translation of proteins. ... Final causes are shunned as fruitless females who remain anathema to the virile vulcans of hard science.</p>
<p>— David Haig, From Darwin to Derrida (emphasis his)</p>
</blockquote>
<p>In recent decades, there has been a pushback against this kind of censorship. In an excellent 1997 piece for Discover Magazine, Frans de Waal takes issue with the canonical view that all animals except humans are effectively mindless automata:</p>
<blockquote>
<p>If we descended from such automatons, were we not automatons ourselves? If not, how did we get to be so different? Each time we must ask such a question, another brick is pulled out of the dividing wall, and to me this wall is beginning to look like a slice of Swiss cheese.</p>
<p>— Frans de Waal, <a href="https://web.archive.org/web/20191202161124/https://www.discovermagazine.com/planet-earth/are-we-in-anthropodenial" rel="nofollow noopener noreferrer" target="_blank">Are We in Anthropodenial?, Discover Magazine</a></p>
</blockquote>
<p>Someone, hoping to resolve the divide, even created a new word, &quot;teleonomy&quot;, to indicate &quot;teleology without conscious intention&quot;. How similar this is to the proposal of some of the Hacker News commenters:</p>
<blockquote>
<p>We need a new word for what LLMs are doing. Calling it &quot;thinking&quot; is stretching the word to breaking point, but &quot;selecting the next word based on a complex statistical model&quot; doesn&#x27;t begin to capture what they&#x27;re capable of.</p>
</blockquote>
<p>Alas, even &quot;teleonomy&quot; still hasn&#x27;t widely caught on, half a century after its proposal and despite many committed proponents. In other words, nothing in this biological conundrum is settled yet, and confusion on the teleology of life still lingers, annoying basically everyone.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Of course, the parallel between biological and AI language choices can only go so far. While we <em>H. sapiens</em> are undeniably linked to all other life forms by at least part of our ancestry, we have no such deep similarity with GPUs flipping bits in their memory registers. In his opinion piece, de Waal can convincingly assert that &quot;the closer a species is to us, the easier it is to [understand the animal&#x27;s idiosyncratic way to perceive its environment].&quot; There is no &quot;degree of closeness&quot; between us and, say, Deepseek R1.</p>
<p>Still, there <em>are</em> similarities between what modern language models and people do: if there weren&#x27;t, we wouldn&#x27;t be able to converse with them! I think many of the hard lessons that biology—confused as the field might still be on these topics—has learned from its own version of the controversy could become useful pointers for thinking about LLMs, too, even if the answer turns out to be different.</p>
<p>De Waal himself, for instance, shows due care in not overdoing the anthropomorphism he&#x27;s trying to defend.</p>
<blockquote>
<p>Without experience with primates, one could imagine that a grinning rhesus monkey must be delighted, or that a chimpanzee running toward another with loud grunts must be in an aggressive mood. But primatologists know from many hours of observation that rhesus monkeys bare their teeth when intimidated, and that chimpanzees often grunt when they meet and embrace. ... A careful observer may thus arrive at an informed anthropomorphism that is at odds with extrapolations from human behavior.</p>
<p>— Frans de Waal, <a href="https://web.archive.org/web/20191202161124/https://www.discovermagazine.com/planet-earth/are-we-in-anthropodenial" rel="nofollow noopener noreferrer" target="_blank">Are We in Anthropodenial?, Discover Magazine</a></p>
</blockquote>
<p>Although anthropomorphism (at least for some animals) is probably more than a mere metaphor, it is also not a license to assume that people and animals can be automatically equated on all fronts. Instead, the parallels should be leveraged carefully:</p>
<blockquote>
<p>We should use the fact that we are similar to animals to develop ideas we can test. For example, after observing a group of chimpanzees at length, we begin to suspect that some individuals are attempting to deceive others--by giving false alarms to distract unwanted attention from the theft of food or from forbidden sexual activity. Once we frame the observation in such terms, we can devise testable predictions. We can figure out just what it would take to demonstrate deception on the part of chimpanzees. In this way, a speculation is turned into a challenge.</p>
<p>— Ibid.</p>
</blockquote>
<p>I don&#x27;t know if all this justifies saying that GPT-5 or Claude Opus or Grok have any form of &quot;ethics, will to survive, or fear,&quot; the kind of talk halvar.flake abhors. It&#x27;s very nice when a blog post brings up a thorny question and shines light on it until an answer becomes apparent on its own. It&#x27;s the ideal form that I strive for. The failure of the teleology debate in biology makes me shy away from attempting that here, though.</p>
<p>I will only risk one prediction: the &quot;Should we say that (today&#x27;s) AI thinks?&quot; debate isn&#x27;t going to be resolved anytime soon, either.</p>
<p>That said, some people speak with great confidence about things they can&#x27;t possibly understand—things that even the top experts in the field are clumsily and frantically grappling with. These people are adamant that &quot;predicting the next word&quot;—what LLMs do—is clearly, categorically different from what the human brain does when we say it is thinking. For instance, another one of the thread&#x27;s commenters writes, without explanation, that &quot;a machine that can imitate the products of thought is not the same as thinking.&quot;</p>
<p>(I know, it&#x27;s internet forums we&#x27;re talking about. I shouldn&#x27;t expect much better by default, but I find it strange to ignore such views only for that reason.)</p>
<p>Interestingly, the same person concludes their comment with &quot;LLMs are not intelligence but its fine that we use that word to describe them.&quot; But <em>why</em> is it? The differences between a transformer and a brain are enormous, true, but they are based on similar network principles and have many things in common, too. Do you really know how to draw the line, where the <a href="https://en.wikipedia.org/wiki/Family_resemblance" rel="nofollow noopener noreferrer" target="_blank">family resemblance</a> ends? ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/there-is-thinking-and-there-is-thinking-and-there-is-thinking/sebastian-hages-Jn67_91aoOg-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[I Used to Know How to Write in Japanese]]></title>
            <link>https://aethermug.com/posts/i-used-to-know-how-to-write-in-japanese?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/i-used-to-know-how-to-write-in-japanese</guid>
            <pubDate>Fri, 15 Aug 2025 00:22:25 GMT</pubDate>
            <description><![CDATA[Somehow, though, I can still read it]]></description>
<content:encoded><![CDATA[<p>I recently came across <a href="https://www.scribd.com/document/48751681/Brief-Note-on-the-Kanji" rel="nofollow noopener noreferrer" target="_blank">a short essay</a> about <em>kanji</em>—Japanese logographic characters—by a certain James W. Heisig. His point is that learning <em>kanji</em> presents two obstacles: remembering what the shapes mean and remembering how they are pronounced. And it is a bad idea, claims Heisig, to try learning both at the same time. Japanese children learn the spoken language first, then they learn how to write it in elementary school; Chinese students of Japanese (who tend to be pretty good at it) have pre-existing knowledge of character meanings and forms from their mother tongue, so they only have to learn how to pronounce them. Therefore, a Western learner should first focus only on the meaning and writing of those couple of thousand common characters and, only after having mastered <em>those</em>, move on to studying the pronunciations. Heisig professes simple divide and conquer.</p>
<p>That sounds plausible, but is it really an effective approach? How can you keep so many of those tangled squiggles in your head without even knowing how to say them out loud?</p>
<p>The answer is yes, it works. At least, it worked <em>fantastically</em> well for me. The first thing I did when I began learning the language in 2006 was opening Heisig&#x27;s famous <em>Remembering the Kanji Volume 1</em> and going through it, one <em>kanji</em> at a time, using the book&#x27;s mnemonic techniques to commit to memory the meaning, construction, and stroke order of all 2042 characters in the book. No thought to pronunciations, words, grammar—just the way each character is written and understood. I filled several notebooks with handwritten characters as I practiced recalling them every day, and eleven months later, I had them all in my head. I could write and understand them all with little effort and, from that point on, learning how to pronounce and compose them into words and sentences felt like a breeze. Prof. Heisig has my eternal gratitude.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/i-used-to-know-how-to-write-in-japanese/kanji_notebook.webp" alt="A practice notebook of mine from the late 2010s." title="A photograph of an open spiral notebook with fine handwritten Japanese kanji characters arranged in neat rows and columns on lined paper"/><figcaption>A practice notebook of mine from the late 2010s.</figcaption></figure>
<p>But fast-forward two decades, and the situation has evolved in an interesting way that I would never have anticipated. I spent over thirteen years in Japan, and my Japanese has only gotten better. My friends and colleagues in this period have been mostly Japanese natives, as is my spouse. I use the language every day at home, I use it to read novels and send emails, to watch South Korean shows with Japanese subtitles, and to file my taxes. I use it more than my own native language, both in spoken and written form. And yet... I cannot handwrite most of those <em>kanji</em> any more.</p>
<p>Except for a few hundred simple and/or frequently recurring characters (like those in my home address), I just cannot recall how to draw them out with a pen. I haven&#x27;t completely <em>forgotten</em> them, and I&#x27;m perfectly capable of reading and understanding them in the blink of an eye—it&#x27;s just the act of turning the intended character into ink on paper that is often impossible for me.</p>
<p>I&#x27;m not alone in this &quot;character amnesia,&quot; either. Whenever I tell Japanese natives about my lost ability, they all readily admit to having forgotten how to write many <em>kanji</em>, too. Apparently, this is a well-known phenomenon in Japan and in China. There is even a term for it, <em>wāpuro baka</em> (ワープロ馬鹿), meaning &quot;word-processor idiot,&quot; from the idea that spending too much time typing into Microsoft Word makes people&#x27;s handwriting skills atrophy.</p>
<p>Is it only me, or is all of this surprisingly deep and fascinating?</p>
<figure><img src="https://aethermug.com/assets/posts/i-used-to-know-how-to-write-in-japanese/54877apsdl.webp" alt="Japanese woodblock print depicting a samurai in traditional clothing kneeling while writing his death poem, with a large knife in his hand"/><figcaption>Akashi Gidayu writing his death poem before committing Seppuku, Tsukioka Yoshitoshi</figcaption></figure>
<p>I wrote previously about <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">the beautiful dissociation of the Japanese language</a>, where the way you write and the way you pronounce <em>kanji</em> are two separate worlds with no simple one-to-one correspondence between the two. What this handwriting forgetfulness shows is that there is an even deeper separation between how our brains process the act of reading and that of writing by hand.</p>
<p>Indeed, neuroscience research has shown that <em>reading</em> activates visual-language pathways in the left hemisphere of the brain, from the occipitoparietal to the posterior temporal cortex. <em>Writing</em> kanji, however, is driven by our motor-planning and primary motor cortex, as well as a network in the posterior parietal cortex specialized in remembering the sequence of strokes necessary for the task.</p>
<p>In other words, what feels like a single, monolithic &quot;literacy&quot; ability is actually two distinct skills, each exercised on different occasions and each capable of improving and decaying on its own. We all learn two ways to handle text, not one, although we usually learn them at the same time. Spend years typing on a phone with autocomplete, and your pen-focused neural network weakens.</p>
<p>Hold on. This explanation is quite convincing, but it doesn&#x27;t solve the entire mystery. The thing is, I have <a href="/posts/aphantasia">aphantasia</a>: I do not have, nor can I choose to conjure, images in my mind. On the surface, this atypical trait seems to explain quite well why I can draw a blank when asked to write the <em>kanji</em> for &quot;plant&quot; (植) from memory. I don&#x27;t see the character in my mind, so it makes sense that I can&#x27;t reproduce it on paper.</p>
<p>What confuses me is that other people <em>can</em> form images in their minds. Are all those with character amnesia also aphantasic? That can&#x27;t be, given that aphantasics amount to less than 5% of the population, while a much larger number of people forget how to write (<a href="https://web.archive.org/web/20130820140919/http://www.wantchinatimes.com/news-subclass-cnt.aspx?cid=1304&amp;MainCatID=13&amp;id=20130807000001" rel="nofollow noopener noreferrer" target="_blank">70%</a> of teenage participants in a Chinese TV show were unable to write the word &quot;toad&quot;!).</p>
<p>How is it possible for you to &quot;see&quot; the text in your mind and not be able to replicate it with a pen? Even if the mental image is faint and fuzzy, surely you can sketch it out roughly at first, then refine it until it settles into its exact form? Apparently, that is not how mental images work, either.</p>
<p>Admittedly, I&#x27;ve never heard of someone forgetting how to write a letter from the Latin alphabet. Character amnesia is mostly a thing with logograph-based languages. The simplicity of letters and the much higher frequency of those symbols probably play a role—you encounter &quot;R&quot; in a text much more often than &quot;plant.&quot; I wish there were more research on where exactly the line lies: how complex and how numerous do the symbols in a writing system have to be in order to create a character amnesia problem? The answer sounds important, because it would tell us about some fundamental limits to how the brain processes visual information. According to the <a href="https://thelanguagecloset.com/2023/09/16/exploring-character-amnesia/" rel="nofollow noopener noreferrer" target="_blank">Language Closet</a>, researchers &quot;found that character frequency, age of acquisition, spelling regularity, familiarity, stroke count, and imageability were significant predictors of character amnesia,&quot; but this result is partial and muddled by other, less relevant factors.</p>
<figure><img src="https://aethermug.com/assets/posts/i-used-to-know-how-to-write-in-japanese/courtisane.webp" alt="Japanese ukiyo-e print showing an elegant woman in traditional dress with elaborate hair ornaments, reading from a scroll"/><figcaption>Courtisane en de Toi Tama rivier, Kitagawa Utamaro</figcaption></figure>
<p>Back to the phantasia paradox: for some reason, mental images don&#x27;t help significantly with writing. I think this hints at the incredible compression ability of the brain.</p>
<p>There is a widely accepted theory in cognitive science called fuzzy trace theory, which hypothesizes that there are two ways we encode (record) memories in the brain: <em>verbatim traces</em> and <em>gist traces</em>. The verbatim kind is what we typically think of as &quot;good&quot; memory—the ability to remember something in full detail, almost literally. These are relatively precise memories, but they are difficult to retrieve and easy to forget. They&#x27;re not &quot;sticky&quot;.</p>
<p>A gist trace, on the other hand, is the quick and very sticky transcription of only the <em>salient</em> parts of the experience—minus the details. It is the sublimation of sensory information into fuzzier, abstract &quot;meaning&quot;—a form of compression. Gist memories jump back at us more readily and are harder to forget, but they lack all the particulars of verbatim memories. And, crucially, <em>forming gist traces doesn&#x27;t depend on having verbatim traces</em>.</p>
<p>When you read something, both verbatim and gist traces are recorded in your brain, but the latter leave a stronger and longer-lasting presence. In the case of 雨, your gist trace might simply read &quot;rain / Chinese character made mostly of horizontal lines and drops&quot;. The exact (verbatim) shape may not even leave a mark in your mind until you&#x27;ve seen and studied it carefully several times over.</p>
<p>In practice, this means that you don&#x27;t have a single, complete &quot;source file&quot; for a character or image stored anywhere in your brain. There is the abstract gist of it somewhere, with just enough features to allow you to recognize it when you see it again, but the exact details are nowhere to be found. The details may be scattered in other parts of your brain—for instance, in your motor memory networks, which allow you to write the character down—a different set of neurons that needs its own painstaking training. Reading a complex character, then, means &quot;recognizing the gist,&quot; and writing it means &quot;activating your memory of the precise movements needed to reproduce it.&quot; Two starkly different tasks.</p>
<p>This distinction applies to much more than East-Asian symbol recognition. In <a href="/posts/reading-blood-meridian-with-aphantasia">Reading Blood Meridian with Aphantasia</a>, I described my experience reading a notoriously gruesome novel by Cormac McCarthy. I noted how the vivid and atrocious depictions in the book don&#x27;t provoke the deep, visceral reactions that others talk about.</p>
<blockquote>
<p>It feels important but remote, not something related to me personally. Kind of like observing the events from a flying bird&#x27;s (metaphorical) eyes—more than far enough for objectivity, but keen enough to take it all in.</p>
</blockquote>
<p>This sense of remoteness might imply that, due to aphantasia, I only get the gist of the written experience from the text, bypassing the verbatim kind. My image-less reading experience is almost pure abstraction, because there is very little visual verbatim information to be stored in the first place. I have countless powerful &quot;memories&quot; of that novel, but I would never be able to reconstruct even a single intense scene to any degree of fidelity.</p>
<p>This is also why I believe that <a href="/posts/language-is-a-bottleneck-for-thought">language is a bottleneck for thought</a>. Most of what you remember is nothing like an approximate copy of the things you experienced in real life—even in the specific case of text, memory is not even <em>remotely</em> like a paraphrase of previously read words. Many of our thoughts happen in a highly abstracted and distilled form, interacting and connecting with each other as a network that simply cannot be faithfully converted into a sequence of words, however long. The fact that people can fail even at something as basic as sketching a <em>kanji</em> <a href="https://bikerumor.com/artists-renderings-show-most-people-really-cant-draw-a-bicycle/" rel="nofollow noopener noreferrer" target="_blank">or a vehicle</a> they&#x27;ve seen hundreds of times before is just another example of the same phenomenon.</p>
<p>It turns out that the bottleneck is not only between different minds, but also between parts of the same mind. In my case, Prof. Heisig&#x27;s divide-and-conquer approach worked well to create the perfect scaffolding to learn the Japanese language as a whole. But when the scaffolding was dismantled to reveal the completed structure, the ability to write by hand was thrown away with the rest of the junk. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/i-used-to-know-how-to-write-in-japanese/51370jpsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Do You Remember What You Read?]]></title>
            <link>https://aethermug.com/posts/do-you-remember-what-you-read?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/do-you-remember-what-you-read</guid>
            <pubDate>Thu, 07 Aug 2025 00:42:55 GMT</pubDate>
            <description><![CDATA[Why I never skip a Readwise day]]></description>
            <content:encoded><![CDATA[<p>You read a book and love it, you highlight several of its most brilliant passages, and you feel that this process has changed you in some way. Maybe it has even <a href="/posts/you-don-t-have-time">changed your life</a> a little bit. Then you move on to other books, the weeks pass, the months flow, and one day you realize that you remember little of that magnificent book. You recall that you loved it, and some of its key themes and propositions. If it was a novel, you might remember the rough outline and what happened in a few key scenes, but everything else is blurry, faded, or completely gone. You begin to question whether the book really changed your life as you so heartily felt at the time—whether it left you anything of value other than a warm, fuzzy feeling. But you have other good books to work through now, and even that doubt is soon buried in oblivion. And so you keep going, and the same thing happens again and again.</p>
<p>Until you die.</p>
<p>Just kidding, you&#x27;re not really going to die. Sooner or later I am, though, and this is a real situation I experienced firsthand for decades. I don&#x27;t know how many people will relate to this sense of having most of what you read slip through your fingers, but it was the norm for me for most of my life. Perhaps it is something only people with <a href="/posts/i-do-not-remember-my-life-and-it-s-fine">SDAM</a> experience, and everyone else has no such problems. I hope to hear feedback either way (if you register for the newsletter <a href="/about">here</a> you can shoot me an email).</p>
<p>I&#x27;ve always been a slow but constant reader, and although I do think all those books changed me somewhat and helped me grow, I never really realized how much I was missing until about—nay, exactly—1,000 days ago.</p>
<p>I can sketch a rough outline of most novels I&#x27;ve read, list two or three traits of the main character or the key takeaways of a nonfiction book, but it&#x27;s all 10,000-foot information. The sense I get is that what I can consciously remember constitutes the tip of an iceberg, the majority of which is locked away in forgotten recesses of my memory. Reading, for me, is mostly unconscious osmosis—its information dissolved in order to be absorbed—and remains largely outside my control to summon at will.</p>
<p>Then 1,000 days ago happened: November 11, 2022. The exact date is not especially meaningful, but I remember it because I have an accurate day counter shown to me every morning. Today, the counter says this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/do-you-remember-what-you-read/readwise_1000days.webp" alt="" title="Readwise statistics showing 1,000-day reading streak milestone from November 11, 2022 to August 6, 2025, with daily review heatmap and current streak rank #985"/></figure>
<p>(Before going any further: I will be praising Readwise a lot below, but I&#x27;m not sponsored by them, nor was I asked to write a review or anything. I just genuinely think it&#x27;s a great service, and more people should know about it. That said, there <em>is</em> a referral program, and if you&#x27;re thinking of registering for Readwise you can do me a favor by using <a href="https://readwise.io/i/marco421" rel="nofollow noopener noreferrer" target="_blank">this referral link</a>.)</p>
<p>The screenshot above is from Readwise, the app and paid service that changed how I relate to my past reading. The way it works is very simple: you link it with your Kindle account, your Kobo account, and/or a ton of other sources you might be using for reading stuff, and every day they send you ten passages you highlighted in the past for you to review.</p>
<p>The passages are selected semi-randomly, with some criteria to make the list more meaningful and varied. You re-read them one at a time and can immediately swipe left to move on to the next highlight, or provide feedback about that specific item: for example, you can add a comment to it, or you can exclude it from all future reviews (useful when you mis-highlighted something, or when it has become irrelevant or undesired). You can also control how often or rarely—if at all—a given book or source should be selected in your reviews. I won&#x27;t cover all functions, but suffice it to say that it gives you all the configuration handles you need to tailor your reviews exactly how you need them.</p>
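<p>Readwise hasn&#x27;t published its selection algorithm, but the behavior described above (semi-random picks, per-source frequency controls, discarded highlights) is easy to sketch. The code below is only my guess at the general shape of such a selector; every name in it is made up, and none of it is Readwise&#x27;s actual code.</p>

```python
import random
from dataclasses import dataclass

@dataclass
class Highlight:
    text: str
    source: str              # book or article title
    discarded: bool = False  # excluded from all future reviews

def pick_daily_review(highlights, weights, k=10):
    """Pick up to k highlights, semi-randomly.

    `weights` maps a source title to a frequency multiplier
    (0 means "never show this source"); unlisted sources get 1.0.
    """
    pool = [h for h in highlights
            if not h.discarded and weights.get(h.source, 1.0) > 0]
    k = min(k, len(pool))
    idxs = list(range(len(pool)))
    chosen = []
    # Weighted sampling without replacement: favored sources surface
    # more often, but no highlight repeats within one review.
    for _ in range(k):
        i = random.choices(
            idxs,
            weights=[weights.get(pool[j].source, 1.0) for j in idxs],
            k=1,
        )[0]
        idxs.remove(i)
        chosen.append(pool[i])
    return chosen
```

<p>In this toy version, setting a source&#x27;s weight to zero mimics the &quot;never select this book&quot; control, and raising it above 1.0 mimics &quot;select more often&quot;.</p>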
<div class="xl:col-start-2 xl:h-0 xl:overflow-visible"><figure class="max-w-full xl:relative" id="floating-1"><img src="https://aethermug.com/assets/posts/do-you-remember-what-you-read/readwise_email.webp" alt="A Readwise daily email newsletter with a dark theme. The header shows &#x27;Your daily Readwise: Henry David Thoreau, Ksenija Slivac, and more&#x27; dated 07.30.25. It contains three highlighted passages: one from Walden by Henry David Thoreau about morning being the most memorable time of day, another from &#x27;Linguistic Priors for Perception&#x27; by Ksenija Slivac about perception and reality, and a third from &#x27;The Selfish Gene&#x27; by Richard Dawkins about warfare. Each highlight has Favorite/Discard/Tag/Share options." class="max-w-full"/><figcaption>Readwise daily email. I usually do the review in the dedicated app, though, because it has more features.</figcaption></figure></div>
<p>I registered for Readwise on November 11, 2022, and have never skipped a single review since. One thousand days is both my longest and my shortest streak on the platform. This is not just an opportunity to brag (although that&#x27;s exactly what I&#x27;m doing, I guess; hey, just this once). I share it here because I think this little milestone proves how much it has become an integral part of my life. I think it complements my memory in a fantastic way that I never anticipated: I remember the high-level stuff okay-ish, and Readwise reminds me of the fine-grained stuff, the sentence-level insights that hit me the hardest at first reading. It opens up avenues of thought that I had thought forever outside my reach.</p>
<p>This is what I think Readwise does to my brain: it reawakens that submerged iceberg-portion of what I absorbed from my reading. Where I have no reliable handles to recall detailed segments of a text, Readwise stimulates those details back into consciousness. This doesn&#x27;t guarantee that I&#x27;ll be able to remember them forever—the uncovered insights usually bury themselves back soon enough—but it does allow me to make new connections on the spot, and this is what I find exceedingly useful.</p>
<p>Because what is reading useful for, if not to <a href="/posts/aphantasia-is-no-creativity-killer">make new links in the network</a> of your thoughts and feelings?</p>
<p>My rather un-scientific hunch is that this kind of randomized daily review reactivates neural networks that would otherwise remain dormant most of the time, and allows them to &quot;talk to each other&quot;. These are the sources I reviewed today, on my one-thousandth review:</p>
<ol>
<li>Charles Darwin&#x27;s The Voyage of the Beagle</li>
<li>Kazuo Ishiguro&#x27;s The Buried Giant</li>
<li>Richard O. Prum&#x27;s The Evolution of Beauty</li>
<li>James C. Scott&#x27;s Seeing Like a State</li>
<li>Yasutaka Tsutsui&#x27;s 創作の極意と掟</li>
<li>A neuroscience paper by Megla et al. (2025) about aphantasia</li>
<li>A psychology paper by Hong (2025) about how people tend to explain natural phenomena</li>
<li>Cormac McCarthy&#x27;s Suttree</li>
</ol>
<p>(Fewer than 10 because the review included multiple quotes from some of these sources.)</p>
<div class="xl:col-start-2 xl:h-0 xl:overflow-visible"><figure class="max-w-full xl:relative" id="floating-2"><img src="https://aethermug.com/assets/posts/do-you-remember-what-you-read/readwise_stats.webp" alt="A Readwise statistics dashboard showing reading habits over time. At the top is a bar chart of &#x27;New Highlights&#x27; spanning from September to August, with varying heights indicating daily highlight activity. Below are three key statistics: 41 weeks (Longest New Highlight Streak from March 17, 2024 to December 28, 2024), 695 highlights created in the last year, and 2,338 highlights created all-time since October 10, 2010. At the bottom is a &#x27;Most Highlighted&#x27; section showing book cover thumbnails with titles and authors, organized chronologically from August to July." class="max-w-full"/><figcaption>It looks like I&#x27;ve been adding new highlights almost every week.</figcaption></figure></div>
<p>You&#x27;d be surprised how many connections can occur to you among such an eclectic crew of quotes. And new connections arise even, or especially, with ideas outside the review itself. In any given month, I&#x27;m reading several books and thinking hard about a small number of topics, so the 300 old highlights resurfaced for me in 30 days provide plenty of opportunities to hook up my ideas-of-the-day with insights that I had half-forgotten.</p>
<p>But my favorite benefit from using Readwise is that it integrates well with Obsidian, a popular networked note-taking tool. I use Obsidian heavily every day to record and link up every thought I have, enormously expanding the breadth of my thinking. Readwise can be configured to automatically add all new highlights to my Obsidian vault, so that they&#x27;re ready to be connected, in a couple of keystrokes, with the thousands of other ideas in there. The net effect is that these two products allow me to build ideas on top of each other like never before. What originally felt like a house of cards, where I could only piece together two or three insights at a time, can now grow indefinitely in every direction, with any number of different decks of cards. If Obsidian is my &quot;second brain,&quot; Readwise is a &quot;meaning antenna&quot; directly injecting good thoughts by others into it. Some weird metaphors are in the air today, folks.</p>
<p>Something that happens quite often is that I stop in the middle of a daily review and say to myself, &quot;Dang, I&#x27;ve read this highlight about &#x27;A&#x27; many times before, but now I finally see that it relates deeply to that topic, &#x27;Q,&#x27; I was working on yesterday!&quot; Then I rush over to Obsidian, find the auto-synced version of the same highlight on A, and <a href="/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list">add a link or two</a> with the notes about Q. From that day on, I will be able to trace the connection from A to Q in either direction by simply looking at my notes. I can&#x27;t overstate how good all this feels.</p>
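<p>The official Readwise integration handles the syncing automatically, but mechanically the result is simple: one markdown note per source dropped into the vault, ready to be wikilinked. Here is a minimal sketch of that idea; the folder layout and file naming are my assumptions, not the plugin&#x27;s actual defaults.</p>

```python
from pathlib import Path

def sync_highlights(highlights, vault_dir):
    """Write each source's highlights into a markdown note.

    `highlights` maps a source title to a list of quoted passages.
    Notes land in a "Readwise" subfolder, one file per source.
    (Real source titles may need sanitizing for the filesystem;
    this sketch assumes they are already safe filenames.)
    """
    folder = Path(vault_dir) / "Readwise"
    folder.mkdir(parents=True, exist_ok=True)
    for source, quotes in highlights.items():
        lines = [f"# {source}", ""]
        lines += [f"> {q}" for q in quotes]  # one blockquote per highlight
        (folder / f"{source}.md").write_text("\n".join(lines) + "\n",
                                             encoding="utf-8")
```

<p>Once a note like this exists, connecting a highlight to an idea is just a matter of typing <code>[[</code> and the note&#x27;s name from anywhere else in the vault.</p>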
<p>Readwise is not free. You have to pay $5.59 every month for it, and that might not be for everyone. But the kind of value that it brings, for me, is many times that amount. At the risk of sounding hyperbolic, it feels like the good people at Readwise invented an upgrade to the old process of reading, after millennia of stagnation. Why isn&#x27;t everyone talking about it? Why isn&#x27;t it mandated by every school on Earth? It has got to be the most underrated service out there.</p>
<p>At least, that&#x27;s how I feel about it. For people with poor detailed memory like me, it is a boon. But I suspect it will be useful to you even if you have decent memory, as long as you read a lot and like to think about what you read. ●</p>
<p>(<a href="https://readwise.io/i/marco421" rel="nofollow noopener noreferrer" target="_blank">Here</a>&#x27;s that referral link again. You know. Just in case.)</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/do-you-remember-what-you-read/akita-library.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[A Pocket-Size Checklist of Thinking Errors]]></title>
            <link>https://aethermug.com/posts/a-pocket-size-checklist-of-thinking-errors?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-pocket-size-checklist-of-thinking-errors</guid>
            <pubDate>Thu, 31 Jul 2025 03:04:55 GMT</pubDate>
            <description><![CDATA[All the ways your framings and models can lead you astray]]></description>
<content:encoded><![CDATA[<p>A recurring theme on this blog is the idea that we use &quot;framings&quot; and &quot;(mental) models&quot; to make sense of the world. Neither of these is an original concept of mine, but I have spent a considerable amount of time and keystrokes refining, redefining, and clarifying what they mean and how to understand them. It occurs to me that some readers might be wondering <em>why</em>.</p>
<p>My chief goal is to decide, act, and live better—both for myself and for whoever wants to listen—and I believe that <em>thinking</em> better is one of the best ways to do that. Framings and models are meta-thinking tools for this very practical purpose, and in this post I will try to prove that.</p>
<p>Talk of framings and, especially, mental models is common nowadays, but they seem to be relegated to business self-help shelves in bookstores and the clickbait &quot;tips and tricks to be more logical&quot; kind of blog post. <a href="/posts/a-framing-and-model-about-framings-and-models">As I wrote previously</a>, I think this is a disservice to these ideas.</p>
<p>They are not just useful add-ons for productivity buffs and those who want to be cleverer: <em>they describe how every single thought you ever think works</em>. We always frame things and construct models whether we realize it or not. Understanding them even a little better is probably a good investment of anyone&#x27;s time.</p>
<p>One first, easy way to leverage them is in the negative. How does one typically misuse framings and models? What thinking mistakes do they highlight?</p>
<p>I tried that exercise, and the result was the 15-item checklist below. These should encompass virtually every way to use framings and models wrong—and, by extension, every possible thinking mistake. Not bad, for just 15 short items!</p>
<p>The idea is for you to return to this list whenever you&#x27;re puzzled, confused, stumped, contradicted, flummoxed, bewildered, perplexed, nonplussed, dumbfounded, frustrated, bamboozled, discombobulated, balked, mystified, flabbergasted, foiled, or foxed. It might just point you in the direction you need.</p>
<p>But first, a quick recap of what &quot;framing&quot; and &quot;model&quot; mean in AeMug-speech.</p>
<h2>Definitions</h2>
<p>The premise to all that follows is that virtually everything about human thought consists in building predictive models of the world. Our brains have <a href="/posts/embedded-prophesy-devices">embedded prophesy devices</a> that are working around the clock to predict the future, reconstruct past events, and understand our place in all that. <a href="/posts/internal-model-translation-table">Thinking is simulating the world</a>. The question is how—not from a biological point of view (we don&#x27;t know much about that), but from a systemic one.</p>
<p>In this context, a <strong>framing</strong> is the set of things that you consider to exist when thinking about something, including what those things do. You can&#x27;t keep the whole universe in mind, so you have to pick a minimal number of &quot;moving parts&quot; that suffices to build your next model.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>If you&#x27;re trying to predict the outcome of a tennis match, you probably focus on what&#x27;s going on between two specific humans, two rackets, one ball, one court, and perhaps a few other elements, but probably you don&#x27;t include the International Space Station, your cousin&#x27;s masseuse, or the Cambodian tapioca industry. The latter three are not part of your &quot;tennis match prediction framing&quot;—they&#x27;re irrelevant, dismissable noise for your current purpose.</p>
<p>Readers who also follow Plankton Valhalla might remember that <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">Boundaries Are in the Eye of the Beholder</a>. A framing, then, is a subjective, arbitrary, but deliberate <a href="/posts/a-framing-is-a-choice-of-boundaries">choice of boundaries</a>. It also includes an understanding of <em>how those moving parts behave</em>, their &quot;properties&quot;.</p>
<p>(Those familiar with information science or philosophy may use the term &quot;ontology&quot; interchangeably with &quot;framing&quot;.)</p>
<p>I call the &quot;moving parts&quot; comprising a framing <strong>black boxes</strong> to emphasize that you don&#x27;t need to know or care what goes on inside them, only <a href="/posts/a-black-box-view-of-life">how they look and behave from the outside</a>. In that last sentence I used &quot;look&quot; metaphorically, because even abstract concepts are black boxes.</p>
<figure><img src="https://aethermug.com/assets/posts/a-pocket-size-checklist-of-thinking-errors/kevin-mueller-Q-fL04RhuMg-unsplash.webp" alt="A dynamic sports photograph showing a bright yellow tennis ball captured mid-motion against a dark background. The ball appears to be moving through or creating a spray of water droplets and small particles, creating a striking trail of motion that curves across the frame. The high-contrast image emphasizes the ball&#x27;s vibrant color against the black background, with the water spray adding texture and movement to the composition."/><figcaption>Photo by Kevin Mueller, Unsplash</figcaption></figure>
<p>So the tennis ball in the &quot;tennis match prediction framing&quot; is just an object with a certain shape, texture, and color, which bounces in a certain way. For all you know, it might contain a thousand multiverses full of monsters and better versions of you on the inside, but you don&#x27;t need to think about that: as far as <em>this</em> framing is concerned, the ball is just a black box, and it only has to keep doing what it usually does on the outside in order to be useful for your goals.</p>
<p>In <a href="https://aethermug.com/posts/a-fundamental-framing-of-human-language" rel="nofollow noopener noreferrer" target="_blank">language</a>, every word is a boundary around a set of concepts and behaviors, and the inside of the boundary is a black box.</p>
<p>Finally, a <strong>model</strong> is a specific mental arrangement—or, as I prefer to say, an <strong>alignment</strong>—of the black boxes comprising a framing. The humans, the tennis rackets and balls, and so on could be aligned in a lot of different ways: for example, you could imagine two rackets on one side of the court, and two players on the other side; or a court completely flooded with a million tennis balls and no humans. Every such arrangement is <em>a</em> model, but usually you want to imagine <em>the</em> model with the alignment that best approximates the actual match in the real world. That will allow you to make the best possible simulation of how the real-world game might unfold, based on your knowledge.</p>
<p>And here&#x27;s the ELI5 version: the framing is the group of Lego bricks you take out of the bag; each brick on the floor is a black box; the model is whatever you build with them today. But it all happens in your head!</p>
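<p>The Lego analogy can even be restated as a toy data structure. Everything below is purely illustrative; the names and behaviors are mine, chosen to echo the tennis example.</p>

```python
# A framing: the black boxes you choose to consider, each reduced
# to a name plus the behaviors you expect from the outside.
tennis_framing = {
    "Player Carlos": {"serves", "returns", "tires"},
    "Player Nick": {"serves", "returns", "gets distracted"},
    "ball": {"bounces"},
    "court": {"constrains play"},
}
# Not in the framing: the ISS, your cousin's masseuse, the Cambodian
# tapioca industry. They exist, but they're noise for this purpose.

# A model: one specific alignment of those black boxes.
realistic_model = {
    "side A": ["Player Carlos"],
    "side B": ["Player Nick"],
    "in play": ["ball"],
}

# Another alignment of the same framing. Still a model, just not
# one that matches the real-world match.
silly_model = {
    "side A": ["ball", "court"],
    "side B": [],
    "in play": ["Player Carlos", "Player Nick"],
}
```

<p>The framing fixes <em>what exists</em>; the models are the many ways those same pieces can be arranged.</p>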
<p>You can read more about these things <a href="/posts/a-framing-and-model-about-framings-and-models">here</a>.</p>
<h2>The Checklist</h2>
<p>First, the list. Explanations and examples follow. We&#x27;ll start with the models, because they&#x27;re the most superficial part, and go deeper as we proceed to framings.</p>
<h3>Model errors</h3>
<ul>
<li><a href="#construction-errors">Construction errors</a>:
<ul class="contains-task-list">
<li class="task-list-item"><input type="checkbox" disabled=""/> When aligning the black boxes, did I really replicate the real-world situation I care about?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Did I forget to include all the relevant black boxes?</li>
</ul>
</li>
<li><a href="#naivety-errors">Naivety errors</a>:
<ul class="contains-task-list">
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I implicitly assuming that my model is 100% reliable?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I confused about the purpose of the model in the first place?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I assuming that other people share the same purpose when using what seems like the same model?</li>
</ul>
</li>
<li><a href="#existential-errors">Existential errors</a>:
<ul class="contains-task-list">
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I forgetting that every thought I think has a framing and a model behind it, however implicit?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I forgetting that what I have might actually be a framing problem, rather than a model problem?</li>
</ul>
</li>
</ul>
<h3>Framing errors</h3>
<ul>
<li><a href="#behavior-prediction-errors">Behavior prediction errors</a>:
<ul class="contains-task-list">
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I misunderstanding how each black box should behave?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I forgetting some important but rare behaviors?</li>
</ul>
</li>
<li><a href="#boundary-errors">Boundary errors</a>:
<ul class="contains-task-list">
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I drawing the boundaries too wide?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I drawing the boundaries too small?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I relegating too much or too little to &quot;environment&quot; or &quot;background noise&quot;?</li>
</ul>
</li>
<li><a href="#premise-errors">Premise errors</a>:
<ul class="contains-task-list">
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I assuming that the same-sounding word used by someone else is necessarily part of the same framing as mine?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I assuming that black boxes I&#x27;ve never seen interacting would interact in predictable ways?</li>
<li class="task-list-item"><input type="checkbox" disabled=""/> Am I forgetting that framings are arbitrary choices and can be changed as and when needed?</li>
</ul>
</li>
</ul>
<h2>Model Errors</h2>
<h3><span id="construction-errors">Construction Errors</span></h3>
<h4>When aligning the black boxes, did I really replicate the real-world situation I care about?</h4>
<p>The most obvious error when making predictions, and probably the only one that most people worry about: did I get it right? Am I modeling something that doesn&#x27;t exist?</p>
<h4>Did I forget to include all the relevant black boxes?</h4>
<p>I might have succeeded in aligning some moving parts realistically, but am I downplaying some important known factors? For example, maybe I&#x27;m aware that weather is a factor in a tennis match—weather is in my framing—but in this case I might have forgotten about it, and fail to predict that the extreme heat on the day of the match might affect the two players differently.</p>
<figure><img src="https://aethermug.com/assets/posts/a-pocket-size-checklist-of-thinking-errors/filip-mroz--WgTWXb4nh4-unsplash.webp" alt="A panoramic view of a tennis stadium at sunset, taken from elevated seating. The image shows a packed tennis venue with thousands of spectators filling tiered seating sections surrounding a blue tennis court. Above, a dramatic sky displays rich orange, pink, and purple clouds illuminated by the setting sun, which appears as a bright starburst on the horizon. Stadium floodlights are visible on tall poles, and the scene captures the grand scale and atmosphere of a major tennis tournament."/><figcaption>Photo by Filip Mroz, Unsplash</figcaption></figure>
<h3><span id="naivety-errors">Naivety Errors</span></h3>
<h4>Am I implicitly assuming that my model is 100% reliable?</h4>
<p>Ah, the map is not the territory! All models are wrong, even though some are useful. In other words, there is always something I can&#x27;t predict or account for. That&#x27;s okay, as long as I remember that. Forget it, and I&#x27;m in for some rude awakenings.</p>
<h4>Am I confused about the purpose of the model in the first place?</h4>
<p>Since boundaries and framings are arbitrary, how I choose them depends on what I&#x27;m trying to achieve. If my goal is match-betting, the humans-rackets-ball-court framing is good; if my goal is designing a zero-gravity experiment, I probably want to pick the ISS and its contents as black boxes, and treat any ball game on the ground as useless noise.</p>
<p>That&#x27;s an extreme example and hard to get wrong. But very often we deal with much subtler doubts about purpose. For example, if I&#x27;m the tennis team&#x27;s physician, I might be more interested in predicting whether our player can finish the match without injuries, rather than the final score—a difference that needs to be reflected in my choice of framing and model.</p>
<h4>Am I assuming that other people share the same purpose when using what seems like the same model?</h4>
<p>Different people have different goals. Thus, they will naturally choose different framings and models even for the same real-world phenomena. A fan and a tennis physician will be very confused in conversation unless they understand that they&#x27;re thinking about the match with different goals and perspectives.</p>
<p>The same for two fans who enjoy the sport differently—one for the strategic awe of a long and complex rally, for instance, and the other for the sense of being part of a crowd of passionate fans. These slight differences in goals will impact how they think and talk about the game.</p>
<p>All this is normal and to be expected. I should always be asking myself: is my interlocutor trying to do the same thing with his model?</p>
<h3><span id="existential-errors">Existential Errors</span></h3>
<h4>Am I forgetting that every thought I think has a framing and a model behind it, however implicit?</h4>
<p>This is one of the toughest. We don&#x27;t often meta-think, as in &quot;am I thinking about this right?&quot; We just... think, and may the Flying Spaghetti Monster help us.</p>
<p>The thing is, it is much easier to think wrong—make one of the errors in this checklist—than right. Taking a step back, at least for the trickier problems, is a powerful way to do better.</p>
<p>Here&#x27;s the solution: I&#x27;ll remember to return to this list over and over until I&#x27;ve memorized it!</p>
<h4>Am I forgetting that what I have might actually be a framing problem, rather than a model problem?</h4>
<p>A-ha. Maybe I&#x27;m doing everything right with my black boxes, I&#x27;m following the instruction manual to the letter, and yet things don&#x27;t check out. Maybe it&#x27;s time to go deeper, down into the very foundations of my thinking about this problem—question the instructions themselves. I need to check the Framing Errors section.</p>
<h2>Framing Errors</h2>
<h3><span id="behavior-prediction-errors">Behavior Prediction Errors</span></h3>
<h4>Am I misunderstanding how each black box should behave?</h4>
<p>Perhaps I misunderstood how those things work. There are two &quot;player&quot; black boxes in the tennis match, alright, but maybe Player Carlos has more cards up his sleeve than I thought, and his forehand return dip is not as perfect as I initially expected. This affects the quality of my predictions.</p>
<h4>Am I forgetting some important but rare behaviors?</h4>
<p>Maybe I have the basics right, and my model provides good predictions in most common cases, but what if one of the black boxes behaves in peculiar ways under certain less-common conditions?</p>
<p>Maybe I expertly included &quot;the crowd of spectators&quot; as another black box in my sophisticated model of the tennis match, because sometimes too much noise can distract the players. But did I consider that Player Nick becomes stressed when the crowd is <em>too supportive</em>?</p>
<h3><span id="boundary-errors">Boundary Errors</span></h3>
<h4>Am I drawing the boundaries too wide?</h4>
<p>I usually choose my black boxes instinctively—I draw their boundaries in the way that feels most obvious. But sometimes that is not enough for the level of quality I need in my mental simulations of the world.</p>
<p>Perhaps I need to open it up and look inside: instead of &quot;Player Carlos&quot;, I might have to begin thinking about &quot;Player Carlos&#x27; left knee (which was injured months ago)&quot;, and &quot;Player Carlos&#x27; mental state&quot;: new black boxes added to my framing to make sense of what is going on out there.</p>
<h4>Am I drawing the boundaries too small?</h4>
<p>On the other hand, I might be considering <em>too many</em> details. I don&#x27;t need to think about the condition of the court&#x27;s grass or the size of Player Coco&#x27;s tank top. True, everything is connected and even those factors might influence the match a tiny bit, but those black boxes would probably complicate my mental model more than they would enhance it.</p>
<figure><img src="https://aethermug.com/assets/posts/a-pocket-size-checklist-of-thinking-errors/mario-gogh-xU7GEBYnFfE-unsplash.webp" alt="A minimalist geometric composition showing a tennis court from above, divided into sections by white court lines. The image features a stark contrast between dark teal-green and bright blue court surfaces, with crisp white lines creating a grid pattern. The clean, abstract composition emphasizes the geometric precision of tennis court design, with the different colored surfaces creating distinct rectangular sections."/><figcaption>Photo by Mario Gogh, Unsplash</figcaption></figure>
<h4>Am I relegating too much or too little to &quot;environment&quot; or &quot;background noise&quot;?</h4>
<p>This is a deeper and broader version of the &quot;did I forget to include all the relevant black boxes?&quot; question in the Model Errors section. In that case, it was simply about leaving a black box that was available in the framing out of the model without a good reason. This framing version, however, is about blind spots and false positives—much harder to debug.</p>
<p>There might be important factors I&#x27;m completely oblivious to that heavily impact the quality of my predictions. This is caused by ignorance, and it is not always my fault. What if, for example, Player Carlos just broke up with his partner yesterday, and is playing in a devastated state of mind? I would do well to carve a new boundary out of what I considered to be mere background noise, and add a new &quot;Carlos&#x27;s ex&quot; black box to my framing. Only then might it occur to me to monitor both sweethearts&#x27; social feeds on the day before the match, and ask around among sources in the know, in order to update my model with the latest scoop.</p>
<p>(In case you&#x27;re wondering, I&#x27;m making all this up. I don&#x27;t really think girlfriends and boyfriends are worthwhile black boxes for most tennis matches. Hey, I&#x27;m just trying to keep things simple by sticking to the same example.)</p>
<p>Conversely, I might be including irrelevant black boxes that don&#x27;t significantly affect the outcome of the match. This problem is similar to the &quot;boundaries too small&quot; case, so refer back to that.</p>
<h3><span id="premise-errors">Premise Errors</span></h3>
<h4>Am I assuming that the same-sounding word used by someone else is necessarily part of the same framing as mine?</h4>
<p>The Naivety Errors section reminds me that framings and models are relative to the goals of their users. This causes a lot of problems when two people use the same word to refer to different real-world phenomena. They framed things differently from each other, but the fact is hidden by language.</p>
<p>When a male fan of Female Player A mentions &quot;A&#x27;s legs&quot;, he might mean something entirely different from when A&#x27;s physician pronounces the same words.</p>
<p>The solution, again, is to <em>first</em> uncover each other&#x27;s goals, <em>then</em> understand the discrepancies between each other&#x27;s framings, and <em>finally</em> attempt to communicate about the models.</p>
<h4>Am I assuming that black boxes I&#x27;ve never seen interacting would interact in predictable ways?</h4>
<p>The number of possible interactions between the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>N</mi></mrow><annotation encoding="application/x-tex">N</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em"></span><span class="mord mathnormal" style="margin-right:0.10903em">N</span></span></span></span> black boxes in a framing increases quickly as <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>N</mi></mrow><annotation encoding="application/x-tex">N</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em"></span><span class="mord mathnormal" style="margin-right:0.10903em">N</span></span></span></span> grows: five black boxes can be combined in ten pairs, ten black boxes can pair up in 45 ways, and so on. These numbers shoot up even faster if I consider three-way interactions and even more complex ones.</p>
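<p>As a quick illustration (a throwaway Python sketch, not part of the checklist itself), the standard binomial coefficient reproduces these counts and shows how steeply they climb:</p>

```python
from math import comb

# Count possible interactions among N black boxes in a framing:
# comb(N, 2) gives the two-way pairs, comb(N, 3) the three-way combinations.
for n in (5, 10, 20):
    print(n, comb(n, 2), comb(n, 3))
# 5 boxes -> 10 pairs; 10 boxes -> 45 pairs; 20 boxes -> 190 pairs,
# and already 1140 possible three-way interactions.
```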
<p>So, have I really considered all the possible (and feasible) ways the elements can clash and synergize among themselves? Have I <em>seen</em> how the black boxes behave in all cases? Or am I just expecting them to behave the same even in novel situations?</p>
<p>Player Coco has just bought the newest high-tech racket model. Can I be confident about how her game will be affected?</p>
<h4>Am I forgetting that framings are arbitrary choices and can be changed as and when needed?</h4>
<p>This is the last, but possibly greatest, question. I wonder why we are so good at <em>not</em> asking this.</p>
<p>Boundaries are in the eye of the beholder. My framings are all up to me. I am not forced to use someone else&#x27;s framings, nor to stick to the same framings forever. I can fix them, evolve them, integrate them with new ones at any moment. I can even throw them away and start from scratch.</p>
<p>Given that all conscious thought is based on the predictions and simulations of mental models, and given that models are always built on top of framings, the skill of <strong>reframing</strong> is arguably one of the most important for any human being to acquire.</p>
<p>It happens all too often. When something just doesn&#x27;t make sense, when no amount of mental manipulations and considerations help, it&#x27;s probably time to revise the framing. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-pocket-size-checklist-of-thinking-errors/mario-gogh-8xaMOOkKNsw-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[New article on Nautilus Magazine]]></title>
            <link>https://aethermug.com/posts/new-article-on-nautilus-magazine?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/new-article-on-nautilus-magazine</guid>
            <pubDate>Wed, 23 Jul 2025 00:47:23 GMT</pubDate>
            <description><![CDATA["How Optimists Are Alike"]]></description>
            <content:encoded><![CDATA[<p>A quick update: Nautilus Magazine has just published a new short article written by yours truly. It&#x27;s about an intriguing new experimental result in psychology: for some reason, optimists have similar brain activity to each other, while each pessimist&#x27;s neurons fire differently.</p>
<p>The article is titled &quot;How Optimists Are Alike,&quot; and you can read it <a href="https://nautil.us/how-optimists-are-alike-1226484/" rel="nofollow noopener noreferrer" target="_blank">here</a>.</p>
<p>Nautilus articles are paywalled: if you&#x27;re subscribed to Nautilus, I hope you&#x27;ll enjoy the read. You might also like these other two articles I wrote a while back about aphantasia: <a href="https://nautil.us/my-brain-doesnt-picture-things-407444/" rel="nofollow noopener noreferrer" target="_blank">1</a>, <a href="https://nautil.us/when-logic-beats-imagination-746995/" rel="nofollow noopener noreferrer" target="_blank">2</a>.</p>
<p>There&#x27;s one study result I found particularly interesting but couldn&#x27;t explore fully in today&#x27;s article due to space constraints. It turns out optimists aren&#x27;t optimistic because they see positive events more positively than pessimists—they&#x27;re optimistic because they think about negative events in a more abstract, distant way. Pessimists, on the other hand, seem to visualize negative events as vividly as the positive ones. Perhaps that is enough to make all the difference. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/new-article-on-nautilus-magazine/kobu-agency-TWIRIAizZFU-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Is It Bad to Have a Parrot Speak for You?]]></title>
            <link>https://aethermug.com/posts/is-it-bad-to-have-a-parrot-speak-for-you?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/is-it-bad-to-have-a-parrot-speak-for-you</guid>
            <pubDate>Thu, 17 Jul 2025 02:31:20 GMT</pubDate>
            <description><![CDATA[Retro sci-fi stories and how they can help with Reddit rants]]></description>
            <content:encoded><![CDATA[<p>I often have reason to lie down in a <a href="/posts/boxed">noisy box</a> for entire mornings, engaging my brain in various activities. Lately, the main activity has been listening to short stories through a pair of earphones—each story multiple times. This has been one of the most enjoyable experiments I&#x27;ve experienced, as reading is something I would be doing anyway. These sessions are never dull, especially because the selection of stories I get to hear is quite good.</p>
<p>They are all by a famous Japanese science fiction writer named Hoshi Shin&#x27;ichi, who was <em>very</em> active—he wrote over a thousand stories—in the latter half of the 20th century. None of them is longer than ten pages, and most are much shorter.</p>
<p>Hoshi&#x27;s stories have a light-hearted, uncomplicated worldview. They are all G-rated and sometimes read like children&#x27;s stories—which is completely fine with me. They deliver what Sci-Fi from the 70s should deliver: astronauts commuting around the Solar System but writing home with &quot;portable typewriters,&quot; AGI robots using wired telephones, and other delights like that.</p>
<figure><img src="https://aethermug.com/assets/posts/is-it-bad-to-have-a-parrot-speak-for-you/Hoshi_Shinichi.webp" alt="Black and white portrait photo of Hoshi Shin&#x27;ichi&#x27;s face"/><figcaption>Hoshi Shin&#x27;ichi in 1963.</figcaption></figure>
<p>Sometimes Hoshi&#x27;s predictions come eerily close to today&#x27;s reality. In one story, titled &quot;The Era of Advertisements,&quot; he envisions a world where people can sell their &quot;reflexes&quot; to ad companies. Everyone, of course, does it: a child automatically sings the theme song of a cookie brand every time someone strokes his head, a man praises a health supplement product every time he yawns, and an elevator girl talks about a soft drink whenever someone kisses her—which the protagonist, a random stranger, can&#x27;t help doing when he rides with her on the elevator (ah, Japan in the 70s must have been something else). Given the state of YouTube today and the recent advancements in brain-computer interfaces, I am now terrified.</p>
<p>There was one story, though, that stood out to me more than the other thirty or so I listened to (<strong>spoiler alert</strong>). In this version of the future, a new amazing technology has taken the world by storm: talking parrots. Everyone now has one of these on their shoulder, and they function as what you could call &quot;etiquette translators.&quot; People don&#x27;t talk to each other directly anymore, but whisper their unfiltered thoughts to their parrots, which then rephrase them into polite, socially-appropriate sentences for the interlocutor to hear.</p>
<p>The short story&#x27;s title is &quot;Shoulder-Top Secretary.&quot; A door-to-door salesman—whose mode of transport is a pair of self-propelling roller skates—is trying to sell a gadget to a &quot;housewife.&quot;</p>
<p>He whispers to his parrot, &quot;I&#x27;m here to sell,&quot; and the parrot announces clearly, &quot;A very good morning to you, Madam! Allow me to introduce myself, I&#x27;m XYZ from ABC Inc., and I have the most amazing new product with me that you won&#x27;t be able to do without!&quot;</p>
<p>The woman&#x27;s parrot hears that greeting and translates it back to the woman: &quot;he wants to sell you something,&quot; to which the woman replies in a low voice, &quot;what is it, already?&quot; The parrot converts it to: &quot;Oh, it&#x27;s a pleasure to meet you, Mr. XYZ. I&#x27;ve heard so many good things about ABC Inc., in fact, I&#x27;ve desired your products for a long time! But tell me, what do you have to offer me today?&quot;</p>
<p>The farce goes on like this for a while, with each of the two humans secretly annoyed with the other&#x27;s company and the parrots carrying out an amiable and cheerful conversation in their stead. The woman isn&#x27;t interested in the gadget at all, and her parrot eventually succeeds in sending the man away with promises to &quot;discuss it with her husband, because she really wants it but can&#x27;t decide without consulting him first.&quot;</p>
<p>The first time I listened to this story, I thought it was a funny satire of Japanese society, with its deeply ingrained divide between <em>honne</em> and <em>tatemae</em>, or &quot;what you think&quot; and &quot;what you actually say.&quot; Although this divide exists in most cultures I know of, it is sharp and almost institutionalized in Japanese culture. I remember thinking that &quot;Shoulder-Top Secretary&quot; would be difficult to fully translate into English, because of the stark difference in language between what the characters whisper and what the parrots say (Japanese has different verbs and word structures for the two modes of speaking, basically two different sub-languages).</p>
<p>Still, this was only a curiosity, and I quickly forgot about it when the next story streamed through my earphones.</p>
<p>Then, yesterday, the same story came up in my experiment again, and this time I recognized what was <em>really</em> interesting about it. It seems like another one of the author&#x27;s accurate predictions of today&#x27;s world.</p>
<p>Only a few days ago, I happened to read a post on the <code>r/ClaudeCode</code> or <code>r/Claudeai</code> subreddit (for the life of me I can&#x27;t find it again) about some issues the poster had had with Anthropic&#x27;s coding tool. It started with this preamble, obviously patched on after the initial submission (I paraphrase):</p>
<blockquote>
<p>EDIT: for those who are accusing me of using AI to write my post, of course I did. I spoke my thoughts in a disorganized way and had a model tighten the wording. This is the future, guys, get used to it.</p>
<p>(...then the actual post)</p>
</blockquote>
<p>Looking at the comments below the post, there were indeed several people complaining that the post was clearly written by an LLM. It had all the telltale signs: the ample use of bold and em-dashes, the neat structure, the admirable eloquence. Some commenters wrote things like &quot;Thank you ChatGPT,&quot; while others directly reprimanded the poster (still paraphrasing): &quot;If you need a robot to write your opinions, you&#x27;re really finished.&quot;</p>
<p>This wasn&#x27;t the first time I read criticism of AI-generated content. AI slop is taking over the internet, and people are noticing. Today&#x27;s foundation models tend to have rather repetitive patterns (e.g. <a href="https://www.reddit.com/r/LocalLLaMA/comments/1lv2t7n/not_x_but_y_slop_leaderboard/" rel="nofollow noopener noreferrer" target="_blank">not X, but Y</a>) and quirks that get boring fast. Many people are wondering if we can trust anything at all anymore. I get it.</p>
<p>But this case was a little different. It was a post about someone&#x27;s opinions <em>on a forum of generative AI lovers</em>. Most of those people pay US $100 or $200 for monthly subscriptions to have LLMs write code and do tasks for them. Yet they were getting into arguments about the morality of using AI to reformulate their words.</p>
<p>Now, side-quibbles on rant posts on Reddit are <em>not</em> my preferred source for the state of society, but this struck me as interesting and worth thinking about a little longer.</p>
<p>Is using a language model to &quot;tighten your wording&quot; morally shady? I don&#x27;t use it for my own blog posts and essays beyond grammar and punctuation checks, but I do (openly) output AI-written documents for my day job, like reports, proposals, policies, and cookie-cutter emails. Am I doing something reproachable? Are my colleagues secretly resenting me for that?</p>
<p>I don&#x27;t think so, as long as I take full responsibility for whatever I actually deliver. I have been guilty of not checking a document thoroughly enough at least once, and that led to more work for the person who had to review it. But that was a human&#x27;s error—mine—not the machine&#x27;s. The problem was not that I used an LLM, but that I used it sloppily. It is clearly a tool with a learning curve, and I can understand someone being tired of being surrounded by too many sloppy tool-users, especially when that is done in secret.</p>
<p>But, again, the Reddit thread seems to present a different issue: even <em>after</em> the poster openly admitted to using AI to write his post and stood by what he wrote, some people remained critical of his practice. Were they right?</p>
<p>I think this lingering question in my mind is why &quot;Shoulder-Top Secretary&quot; resonated with me differently the second time around. It offers some clarity on this particular doubt.</p>
<p>Admittedly, the world envisioned by Hoshi doesn&#x27;t seem very appealing. Everyone communicates indirectly, entirely through their parrots. But why exactly is that bad? Is it because the communication is indirect?</p>
<p>A good argument could be made for a &quot;yes&quot;. One could say that the effort of being polite with your language and exchanging pleasantries is what&#x27;s valuable, not the exact words used. Ornate and gentle speech, even if bland and trite, shows your respect for the other person. The medium is the message.</p>
<p>The story ends with the salesman, tired and depressed after an unsuccessful workday, entering his favorite cocktail bar. The lady bartender welcomes him warmly. Or rather, her parrot welcomes him, because we don&#x27;t hear what she actually whispered. It says: &quot;I&#x27;ve missed you, darling. You know, it&#x27;s so lonely here when you&#x27;re not around!&quot;</p>
<p>That&#x27;s a neat ending for this three-page skit. It leaves you wondering what the cocktail lady really thinks—or rather, it leaves you knowing exactly what she thinks, without actually spelling it out.</p>
<p>The real question, though, is this: would those same words of welcome have carried a different weight if the lady had said them with her own lips?</p>
<p>I like the story for what it is. It&#x27;s fine. But I think the author pulled a sleight of hand on the reader with this one: it only ever shows people who are negative and irritated with each other. The &quot;housewife&quot; doesn&#x27;t want the gadget, the salesman doesn&#x27;t like his job, his boss thinks he&#x27;s a loafer, and naturally, we&#x27;re led to believe the bartender probably doesn&#x27;t care for the man as much as she claims to.</p>
<p>This kind of negativity does exist aplenty in our real, modern world, but it&#x27;s not <em>that</em> ubiquitous. People actually like each other sometimes, and they actually want to say what they think without polite inversions. &quot;Shoulder-Top Secretary&quot; only shows us people on their bad days when they think thoughts that can&#x27;t be spoken plainly.</p>
<p>This story leads us to another viewpoint on the question raised by the Reddit thread: perhaps the medium is the message, but that message is often a lie. If it&#x27;s a lie, it shouldn&#x27;t matter what the medium is. And if it isn&#x27;t a lie, what&#x27;s the problem? ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/is-it-bad-to-have-a-parrot-speak-for-you/inko.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Human Stigmergy]]></title>
            <link>https://aethermug.com/posts/human-stigmergy?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/human-stigmergy</guid>
            <pubDate>Thu, 10 Jul 2025 01:18:35 GMT</pubDate>
            <description><![CDATA[The world is my task list]]></description>
            <content:encoded><![CDATA[<p>Termites and ants have no central planning. There are no <em>architect ants</em> in a nest-building project, no sponsors or supervisors, no instructions. Each worker is unaware and completely uninterested in what form the final mega-structure will take. No blueprints are to be found in any of their minds or outside them. Yet they build them all the time, and very well, too.</p>
<p>Their substitute for plans and blueprints is what biologists call <em>stigmergy</em>. Each worker instinctively marks the environment with pheromones as it works—the termite infuses it in the dollops of mud it deposits, the ant marks the path it took to find food—then other workers smell the pheromone and act based on it. This is repeated by each insect, and it is all they need to build and stock their great cathedrals, complete with effective ventilation shafts, highways leading straight to the best sources of food, and everything else they need to thrive.</p>
<p>This topic never ceases to fascinate me. It&#x27;s a demonstration that great things can be achieved together in a fully decentralized way. Intelligence can be distributed rather than fenced off. The exercise of power isn&#x27;t the only way.</p>
<p>But stigmergy interests me in yet another way—a more mundane and pragmatic way: it is proof that there are other kinds of memory besides the &quot;mental&quot;.</p>
<p>I have a <a href="/posts/i-do-not-remember-my-life-and-it-s-fine">terrible memory</a>, and I forget the concrete and ephemeral duties and to-dos of my daily life all the time. I&#x27;ve learned to trust my ability to remember to do things exactly 0%. I live on the solid certainty that I will forget things. To-do apps and notes help, but they&#x27;re never enough. Digital supports have the fatal flaw of requiring me to remember to check my digital devices—something I do often, but not necessarily when I need it.</p>
<p>A single termite isn&#x27;t very smart. I doubt it even knows what it is doing while carrying its ball-shaped bricks up its half-constructed mound. I&#x27;m grateful to it for just how low it sets the bar for me. Can memory-less Marco learn something from that termite? I can&#x27;t secrete pungent pheromones (not intentionally, at least), and that&#x27;s probably a good thing. But I have two very useful appendages with many accurate fingers working for me, and those should function as good substitutes.</p>
<p>Eusocial insects shape the environment around them as a form of external, localized memory. I do the same! If I decide I want to refill my bicycle&#x27;s tires with air the next time I go out, I don&#x27;t even try to commit that to memory, nor do I write a memo on Google Keep: I immediately take the floor pump and place it on the path out of my room. When I need to remember to throw trash away, my wife or I put the bags right at the foot of the front door. To keep track of how many hours I&#x27;ve worked in a day, I move Lego bricks from one side of my computer&#x27;s monitor to the other at every periodic break.</p>
<p>All these acts remove the need for me to remember, even to know. I could hit my head and have my short-term memory wiped clean, and simply looking at the pump, the garbage bag, the toy brick would instantly inform me of what I&#x27;m supposed to do.</p>
<p>It&#x27;s not just me. People seem to do this all the time without much thought: they leave their umbrellas in the foyer right next to their shoes, to remember to check the weather; they drape their jackets over the backs of their seats in cafes, both to find the seats again and to signal to others to look elsewhere; they tie knots in strings to keep track of their lives.</p>
<p>This is fabulous. We tend to think of memory and mind-related concepts as purely abstract, separate, and invisible processes that happen somewhere up there—at best in the brain, at worst in a separate &quot;world of the mind&quot; <em>à la</em> Descartes, entirely disconnected from the &quot;physical world&quot;.</p>
<p>Humble ants teach us otherwise. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/human-stigmergy/termite_mound.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[How to Organize Your Thoughts with a Simple Bullet List]]></title>
            <link>https://aethermug.com/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list</guid>
            <pubDate>Thu, 03 Jul 2025 04:04:05 GMT</pubDate>
            <description><![CDATA[and a good text editor]]></description>
            <content:encoded><![CDATA[<p>The approach I describe below is very simple yet extremely general. It helps you organize and structure your thoughts for any purpose: from writing clearly to planning complex projects and activities to working through your tangled emotions.</p>
<p>It is a form of mind-mapping, but it&#x27;s easier, faster, more universally portable, and doesn&#x27;t require special dedicated tools that lock you into custom file formats. All we need is a text editor that supports bullets.</p>
<p>The basic idea is to &quot;divide and conquer&quot; the muddled notions in your head, reducing the organizing process to a series of small, easy steps. It starts with vague concepts and uncertain directions and ends with a bullet list of high clarity.</p>
<p>A note of caution, though: although this can be done, to some extent, in your head, you&#x27;ll need a proper digital tool for anything more complex and detailed than a one-page essay. It has to be digital, for reasons that will become clear very soon.</p>
<p>After explaining the theory behind this approach, I&#x27;ll conclude with a very concrete example of it in action: the work I did to prepare this very blog post. I will also briefly mention how part of this process can be outsourced to AI.</p>
<h2>The How</h2>
<p>Many people who believe they&#x27;re bad at thinking logically seem to stumble at the very beginning. They assume their ideas are supposed to magically fall into place like a clockwork mechanism, each sliding snugly into its prescribed place with a pleasant <em>click</em>. Since this doesn&#x27;t happen to them, they deduce that they must have some cognitive disadvantage for this specific task.</p>
<p>Maybe it&#x27;s like that for a few mysterious savants quietly coasting their way through life. For the rest of us it doesn&#x27;t work like that. You have to consciously break the process down in more ways than one. The most fundamental way to do it is to focus on <em>diverging</em> first, and <em>converging</em> later.</p>
<figure><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/dc.webp" alt="Diagram of the diverge and converge process, showing a triangle expanding from left to right for the diverge step, followed by a triangle contracting from left to right for the converge step. Under the diverge step it says &#x27;pour your ideas down&#x27;, and under the converge step it says &#x27;reorder, nest, prune&#x27;."/></figure>
<p>The bonus section at the end of this post will show a very concrete example, but let&#x27;s begin with the steps in the process.</p>
<h3>Diverge</h3>
<p>Let&#x27;s look at the <strong>Diverge step</strong> first. This is about getting <em>everything</em> even remotely relevant out of your head and into a big, messy list. Self-proclaimed messy thinkers, rejoice! Here, logic and tidiness are unnecessary—and often even harmful.</p>
<p>To diverge means to explore an uncharted world of possibilities. <em>Explore</em> is the keyword here: the goal is to find the raw ideas and stick them in the bag of your bullet list, <em>not</em> to make sense of them.</p>
<p>Call it &quot;good old (solo) brainstorming.&quot; Call it a &quot;brain dump.&quot; It&#x27;s a one-directional, fast process resulting in a disorganized, almost lunatic sequence of bullet points. Once you get into the mindset, it is quite easy, even fun. There are so many things you are allowed to gloss over here!</p>
<p>Don&#x27;t pay attention to the order of the bullets. Put them in random order.</p>
<p>Don&#x27;t pay attention to how relevant or worthy an idea is. If you need to stop and wonder whether an item should be on the list or not, you&#x27;re doing it wrong. You thought about it, so it goes in, period. This is a brain dump, not a show-window arrangement job.</p>
<p>Don&#x27;t pay attention to the quality of the bullets: as long as they are barely comprehensible to you later on, they&#x27;re good enough. Keep them short, though.</p>
<p>The key here is to go fast and rough—get it all out as quickly as possible—but you shouldn&#x27;t rush the end of the process. When to stop diverging and move on to converging is both a science and an art, like making popcorn: too soon and you&#x27;ll waste many potentially genius kernels, too late and you&#x27;ll overwork the thing to death.</p>
<h3>Converge</h3>
<p>Once you have all your ducks and ugly ducklings in the tub, it&#x27;s time to put them all in a row. Converging means curating your list until it reaches a state that you consider meaningful, clear, and well-organized. Here you <em>do</em> need some degree of logic and &quot;tidiness&quot; (whatever that means), but no one should feel daunted by that, because it is a very gradual, baby-steps kind of work. Someone more &quot;naturally rational&quot; or well-practiced in this process may be able to do this faster, but not—I believe—significantly better. The process does the work for you.</p>
<p>You have a big, messy list. To converge, you massage it in three ways: you <em>reorder</em>, <em>nest</em>, and <em>prune</em>.</p>
<p>To <strong>reorder</strong> means just that: you move a bullet point up or down within the list to change its position. In this way, you can group related points together (since we put them down in random order in the diverge step) and arrange groups of points in an order that makes more sense.</p>
<figure class="my-8 "><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/reorder.gif" alt="Animation showing a list of bullet points that are being reordered by moving them up and down with the keyboard." class="w-full max-w-4xl mx-auto rounded-lg shadow-lg"/><figcaption class="text-center text-sm text-gray-600 mt-3 italic">We&#x27;ll see which tools allow you to do this later on.</figcaption></figure>
<p>You can do this one bullet at a time, so it&#x27;s easy. Ask yourself if this bullet seems to belong better near other bullets, or before/after other topics. At first, it will feel like mucking around in a pig&#x27;s pen: keep going, one bullet at a time, and things will gradually begin to look more orderly.</p>
<p>To <strong>nest</strong> means indenting bullets, i.e., shifting them one step to the right so that they look subordinate to the less-indented bullet immediately above them. For example:</p>
<ul>
<li>Basic operations
<ul>
<li>Reorder</li>
<li>Nest
<ul>
<li>Nest more, but not too much</li>
</ul>
</li>
<li>Prune</li>
</ul>
</li>
</ul>
<p>Here, &quot;Basic operations&quot; is not indented, while &quot;Reorder,&quot; &quot;Nest,&quot; and &quot;Prune&quot; are indented once. This makes it very clear that the latter three are sub-topics, clarifications, or details related to &quot;Basic operations.&quot;</p>
<p>This example also shows that you can indent items more than once, like the &quot;Nest more, but not too much&quot; item above. This is great for adding more depth and granularity to your list—it creates a clear hierarchy of ideas, from the more fundamental (less-indented) to more supplementary (more-indented).</p>
<p>Nesting sounds like a bland, unassuming tool, but it&#x27;s really the secret ingredient that makes this thinking method shine. In effect, it turns any simple list into a full-blown mind-map, with a neat tree-like hierarchy.</p>
<p>Once again, you don&#x27;t need to figure out the whole nesting structure all at once. Look at one bullet at a time and ask yourself if it&#x27;s fundamental enough to remain un-indented, or if it is really an addition or supplement to some other point—in which case, nest it once. Then compare it with the other bullets nested under the same parent, rinse and repeat.</p>
<p>Of course, you can always un-nest or un-indent a bullet if you change your mind.</p>
<p>Finally, <strong>prune</strong> is simply the act of removing bullets that feel unnecessary or obsolete. During the Diverge phase, you freely dumped everything into the list, but as you work through your bullets with small reordering and nesting steps, you&#x27;ll begin to see the whole thing take a more coherent shape. You shouldn&#x27;t try to cram every little idea in—delete bullet points mercilessly.</p>
<p>Don&#x27;t prune too early, though: first reorder and nest things enough to get a feeling of what the general theme is shaping up to be. After that, pruning should come easier and easier, as some items will feel awkward and unfit in any order or nesting level. Remove them to give the remaining bullets room to breathe and blossom.</p>
<p>Needless to say, you&#x27;ll get new ideas even during the Converge phase. Just add them to the list at any time and give them the same treatment.</p>
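<p>If you happen to be a programmer, one way to see why these operations feel so cheap is that each is a tiny, local edit to a list of lines. Here is a purely illustrative toy sketch in Python (the function names are made up for this example; no real editor works exactly like this):</p>

```python
# Toy model of the three converge operations on Markdown-style bullet lines.
# Purely illustrative: real editors do this with keyboard shortcuts.

INDENT = "  "  # two spaces per nesting level

def reorder(lines, i, j):
    """Move the bullet at index i to index j."""
    lines = lines.copy()
    lines.insert(j, lines.pop(i))
    return lines

def nest(lines, i, levels=1):
    """Indent the bullet at index i by the given number of levels."""
    lines = lines.copy()
    lines[i] = INDENT * levels + lines[i]
    return lines

def prune(lines, i):
    """Delete the bullet at index i."""
    return lines[:i] + lines[i + 1:]

bullets = ["- Prune", "- Basic operations", "- Reorder", "- Nest"]
bullets = reorder(bullets, 0, 3)   # move "Prune" to the end
for k in (1, 2, 3):
    bullets = nest(bullets, k)     # indent the three operations
print("\n".join(bullets))
# - Basic operations
#   - Reorder
#   - Nest
#   - Prune
```

<p>Each call touches one line at a time, which is exactly the point: converging is a long series of small, reversible moves, not one big act of organization.</p>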
<h3>The Goals</h3>
<p>Notice how all of the actions above—reordering, nesting, pruning—are more about &quot;feeling&quot; the proper place of a bullet than thinking rationally or in any difficult way. Still, you need something to guide your feeling work. How do you find the right order, nesting level, and pruning-worthiness of a bullet?</p>
<p>There can be no single answer because the answer depends on your <strong>goal</strong> in the exercise.</p>
<p>If you&#x27;re organizing your thoughts for writing, think about what the objective of the final product is: is it to sell an idea, to give a thorough understanding of a topic, to make the reader feel something? And will the readers have advanced knowledge on this topic from the start, or are they completely new to it? Knowing these kinds of things will make your bullet-level decisions easier.</p>
<p>If you&#x27;re planning an event, think about who should be involved and what the deliverables should be—a series of instructions, a schedule, a shopping list? Depending on the answer, you will prefer very different list structures. To coordinate a group of people, you might prefer to keep people&#x27;s names at the root level (not indented) and nest the tasks or groups of tasks under them. To make a schedule, you might instead use some milestone dates as roots and indent the work to do under those. And so on.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h3>No Goals?</h3>
<p>Sometimes, you might start without a clear goal in mind. You have a pile of ideas and loosely connected concepts, and you sense there&#x27;s something important to express, but you can&#x27;t quite organize them in your head. That&#x27;s completely fine! In fact, this bullet-list method is great for figuring that out as you go.</p>
<p>In these cases, the process remains the same. You go through the Diverge phase and the Converge phase as usual: one bullet at a time. The only difference is that you&#x27;ll need to dig a bit deeper when trying to &quot;feel&quot; the right positioning for each bullet. Ask yourself: which bullet feels &quot;on the money&quot;? Which is &quot;nice to have&quot;? Which feels like a premise and which like an arrival point?</p>
<p>You might need to go back and forth with the bullets, move them around, refine them, or even break them down into smaller bullet points. All of this is normal and very common, and the digital tools we&#x27;ll discuss below make this work so fast that you&#x27;ll have the luxury of quickly trying many arrangements until you find the one that feels just right.</p>
<p>Do this enough, and you&#x27;ll end up with something much clearer than what you originally had in your head. Usually, you&#x27;ll either find the heart of the issue that was eluding you before, or you&#x27;ll discover that there wasn&#x27;t a worthwhile heart to begin with: both are useful discoveries. In fact, often you&#x27;ll start with a goal in mind and discover a better one in the process!</p>
<p>One last piece of advice: if you have time, pause after you&#x27;ve done some Converge work, let the list sit for a few hours, a day, or more, and then resume your Converge later on. Seeing the partially organized list with fresh eyes helps make your granular decisions more obvious.</p>
<h2>Recommended Tools</h2>
<p>The method above might sound super simple, but the tools you use to do it make a huge difference.</p>
<p>In theory, you could do this on paper, but imagine how much work it would be: every reordering and every nesting operation would mean rewriting the whole thing. Digital tools are the way to go, but most of them don&#x27;t make this method easy either—which might explain why this basic approach isn&#x27;t more popular.</p>
<p>A good tool for this work needs to have all of the following functions:</p>
<ol>
<li><strong>The ability to create new bullets when you press Enter in a list</strong>: most modern text editors support this.</li>
<li><strong>The ability to perform all of the following operations with easy keyboard shortcuts</strong>: thought is fast, pointing and clicking is slow. Having to use your mouse at every step would get in the way. Many editors support shortcuts to some extent, but often they are clunky and can&#x27;t be customized. Look for an editor that makes these operations quick and easy for you.</li>
<li><strong>The ability to indent and de-indent bullets</strong>: most modern editors have this, and usually the shortcut is the <code>Tab</code> key to indent and the <code>Shift + Tab</code> combination to de-indent (sometimes it&#x27;s <code>Ctrl/⌘ + ]</code> and <code>Ctrl/⌘ + [</code> though). Build it into your muscle memory.</li>
<li><strong>The ability to move a line (bullet point) up or down by one</strong>: this is extremely important for fast reordering, but many popular (non-programming) text editors can&#x27;t do this properly, or have strange shortcuts. This is where Word, Google Docs, and basic Notepad bow out.</li>
<li><strong>The ability to cut/paste whole lines at a time</strong>: moving a bullet up or down by one step is very useful for small changes, but if you have to move an item to a totally different part of a long list, you&#x27;re better off cutting and pasting it. Most people do this by selecting text with the mouse, which is slow. Instead, many editors will cut the whole line (the bullet point, in the case of a list) when you press <code>Ctrl/⌘ + X</code> <em>with nothing selected</em>. Then move your cursor to the new spot in the list and press <code>Ctrl/⌘ + V</code> to drop the bullet there.</li>
</ol>
<p>As an honorable mention, I recommend you look for editors that allow you to &quot;fold&quot; a branch of the nested list. This means that the children of a parent bullet are hidden out of view temporarily, like this:</p>
<figure class="my-8 "><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/folding.gif" alt="A list of bullet points that are being folded and unfolded with the mouse." class="w-full max-w-4xl mx-auto rounded-lg shadow-lg"/></figure>
<p>Folding transforms your list into an even better form of a mind-map. Fold the branches containing the details to zoom out and see the big picture emerge.</p>
<h3>So Which Tools Do I Recommend?</h3>
<p>The great thing about this method—the aspect that, in my opinion, makes it superior to standard mind-maps—is that you can use it with many different editors, both free and paid. As long as they have the functions we discussed above, any editor you prefer is fine and will let you organize your ideas at (almost) the speed of thought. Still, I do have some personal favorites.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/notion.webp" alt="" title="the Notion logo"/></figure>
<p><strong>Notion</strong> supports all the necessary functions, including bullet folding, so it works fine. I use it occasionally, and it does its job. However, as far as I know, it doesn&#x27;t support customizing your shortcuts. You either like the default ones, or you&#x27;re better off with another tool.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/obsidian.webp" alt="" title="the Obsidian logo"/></figure>
<p><strong>Obsidian</strong> does this very well out of the box, and you can choose the shortcuts you want. This is what I use for all my writing work, and I can&#x27;t recommend it enough.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/sublime.webp" alt="" title="the Sublime Text logo"/></figure>
<p>But a surprising winner for me, at least for random impromptu thought-organization work, is <strong>Sublime Text</strong>. This is actually a code editor by design, but I&#x27;ve found that it works very well as a lightweight Markdown editor (<a href="https://en.wikipedia.org/wiki/Markdown" rel="nofollow noopener noreferrer" target="_blank">Markdown</a> is a basic formatting style that I recommend you spend 15 minutes learning). It supports all of those functions (with plugins), is fully customizable, and, most of all, it&#x27;s blazing fast. When I have a random idea to quickly process and organize, the fastest way to start diverging is Sublime Text. It&#x27;s not for everyone, but you might want to give it a try.</p>
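<p>As one concrete illustration of that customizability: in Sublime Text, the move-line operations map to built-in commands (<code>swap_line_up</code> and <code>swap_line_down</code> are Sublime&#x27;s stock command names), and you can rebind them in your user keymap. The key combinations below are just one possible choice, not a recommendation:</p>

```json
// Preferences → Key Bindings (user keymap).
// Sublime's keymap files accept // comments.
[
    { "keys": ["alt+up"],   "command": "swap_line_up" },
    { "keys": ["alt+down"], "command": "swap_line_down" }
]
```

<p>Other editors expose the same operations under different names and settings screens; the point is simply to pick shortcuts your fingers can fire without thinking.</p>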
<h2>Bonus: A Real-World Example</h2>
<p>If the generic explanation above wasn&#x27;t clear enough, here&#x27;s a practical demonstration of how I organized my ideas into an outline for this very blog post. Feel free to skip this part if you&#x27;ve had enough details, though!</p>
<h3>Diverging</h3>
<p>I&#x27;ve been following this approach for years now, so I&#x27;ve become quite fast at it. Often, I can start some nesting already in the diverging phase, although it&#x27;s not necessary (remember, chaos is welcome when diverging, and nesting is a converging task).</p>
<p>I first jotted down this rough list of ideas:</p>
<blockquote>
<ul>
<li>Will give a practical example at the end</li>
<li>Sometimes people tell me I&#x27;m organized, clear, smart</li>
<li>Those people claim to have trouble keeping their thoughts in order, and usually dislike writing and managing projects</li>
<li>This is puzzling for me, because my thoughts also start disorganized
<ul>
<li>That&#x27;s how the brain works</li>
</ul>
</li>
<li>All I have is a process to organize them</li>
<li>This can be done in your head a little, but doing it with digital text is much more effective (and necessary for larger and more granular organization)</li>
<li>Without this approach, I don&#x27;t know how I would do it</li>
<li>Disclaimer: this will sound super-elementary for some readers. But you&#x27;d be surprised how many people don&#x27;t even attempt this.</li>
<li>Base idea: diverge first, converge later
<ul>
<li>Simple divide and conquer, reduces the cognitive load</li>
</ul>
</li>
<li>Diverge = brain dump, brainstorm
<ul>
<li>Don&#x27;t mind the order, relevance, or quality of bullets</li>
<li>Get it all down into a big, messy list</li>
<li>Go fast, but don&#x27;t rush</li>
<li>Only stop when you feel that you&#x27;ve listed more than enough for the purpose</li>
<li>My guess is this will be easy for most</li>
</ul>
</li>
<li>Converge = curate the list
<ul>
<li>Reorder</li>
<li>Prune</li>
<li>Nest</li>
<li>Nest more, but not too much</li>
<li>Critical: if you already have a precise goal in mind</li>
</ul>
</li>
</ul>
</blockquote>
<p>Notice how there&#x27;s <em>some</em> sense of order, but there are also bullets that are clearly out of place. The very first bullet, &quot;Will give a practical example at the end,&quot; wasn&#x27;t something I planned to put at the beginning. It was just at the top of my mind when I started the list, and I didn&#x27;t want to forget it, so I wrote it first.</p>
<p>I wrote the above in one linear sequence, but while writing that last bullet (&quot;Critical:...&quot;), another idea popped into my mind. I stopped mid-sentence and added this bullet just below:</p>
<blockquote>
<ul>
<li>If you already have a precise goal in mind, use it as guide for the organization</li>
</ul>
</blockquote>
<p>Then I went back and finished the interrupted sentence. I continued like this, adding new bullets as they came to mind—mostly below the previous ones, but I often jumped around the list to add bullets here and there when I already had a vague sense of ordering. No deep thinking, no advanced reasoning abilities: just putting things down as they came.</p>
<p>At the end of the diverging phase, I had this:</p>
<blockquote>
<ul>
<li>Will give a practical, real-world example at the end</li>
<li>Sometimes people tell me I&#x27;m organized, clear, smart</li>
<li>Those people claim to have trouble keeping their thoughts in order, and usually dislike writing and managing projects</li>
<li>This is puzzling for me, because my thoughts also start disorganized
<ul>
<li>That&#x27;s how the brain works</li>
</ul>
</li>
<li>All I have is a process to organize them</li>
<li>This can be done in your head a little, but doing it with digital text is much more effective (and necessary for larger and more granular organization)</li>
<li>Without this approach, I don&#x27;t know how I would do it</li>
<li>Disclaimer: this will sound super-elementary for some readers. But you&#x27;d be surprised how many people don&#x27;t even attempt this.</li>
<li>Base idea: diverge first, converge later
<ul>
<li>Simple divide and conquer, reduces the cognitive load</li>
</ul>
</li>
<li>Diverge = brain dump, brainstorm
<ul>
<li>Don&#x27;t mind the order, relevance, or quality of bullets</li>
<li>Get it all down into a big, messy list</li>
<li>Go fast, but don&#x27;t rush</li>
<li>Only stop when you feel that you&#x27;ve listed more than enough for the purpose</li>
<li>Keep bullets short, &lt;3-4 lines, nest when you feel a sense of continuation, but don&#x27;t worry about doing it well at this step</li>
<li>My guess is this will be easy for most</li>
</ul>
</li>
<li>Converge = curate the list
<ul>
<li>Reorder</li>
<li>Prune</li>
<li>Nest</li>
<li>Nest more, but not too much</li>
<li>Critical: these choices depend entirely on the goal of the exercise</li>
<li>If you already have a precise goal in mind, use it as guide for the organization</li>
<li>If you don&#x27;t have a clear goal yet, it&#x27;s okay: it will emerge in this phase
<ul>
<li>How to find the goal?
<ul>
<li>Simply writing the list down will make it easier to compare the relative importance and relevance of the items</li>
<li>Which feels &quot;on the money&quot;? Which is &quot;nice to have&quot;? Which feels like a premise and which like an arrival point? This is more about instinct than logic in most cases.</li>
<li>The key is that you don&#x27;t need to get it right at the first time: this method makes it super-cheap to try different organizations</li>
</ul>
</li>
</ul>
</li>
<li>Sometimes you start with a goal, and a better one will emerge in this phase!</li>
</ul>
</li>
<li>Converge phase might feel daunting: &quot;I can&#x27;t organize my thoughts&quot; mindset kicks in
<ul>
<li>But it&#x27;s just a series of retouches. You don&#x27;t need to keep it all straight in your head until the very end. Just compare two bullets at a time. Refine, take time, iterate.</li>
</ul>
</li>
<li>This approach works for any application where it&#x27;s important to think straight:
<ul>
<li>Writing</li>
<li>Planning</li>
<li>Working through emotions</li>
</ul>
</li>
<li>If you have time, let it sit for a day or more, then re-read and do more converge-diverge. Helps a lot.</li>
<li>This is hard to do without the proper tool. It&#x27;s one area that you can&#x27;t really replicate on paper.</li>
<li>Tool requirements
<ul>
<li>Auto-bullets: when you press enter, a new bullet is created automatically</li>
<li>Keyboard-shortcut-rich editor</li>
<li>Shortcut to indent/de-indent</li>
<li>Shortcut to move line up/down</li>
<li>Shortcut to cut/paste whole line</li>
</ul>
</li>
<li>Recommendations
<ul>
<li>Word or notepad won&#x27;t do</li>
<li>Obsidian, Notion, and similar work well. Prefer those that allow you to customize the shortcuts. Notion doesn&#x27;t, Obsidian does.</li>
<li>Surprising winner for speed: Sublime Text with Markdown plugin
<ul>
<li>Super-fast and simple</li>
</ul>
</li>
</ul>
</li>
<li>Practical example: writing this post
<ul>
<li>Can skip, it&#x27;s only for those who are still unclear</li>
<li>(Steps-by-step outline)</li>
</ul>
</li>
</ul>
</blockquote>
<p>At this point, my ideas ran out, or at least the ones I considered most relevant for my goal of sharing these practical steps with the world.</p>
<h3>Converging</h3>
<p>I immediately began nesting items (⬆️ and ➡️ arrows indicate that they were moved and nested, respectively):</p>
<blockquote>
<ul>
<li>...</li>
<li>Sometimes people tell me I&#x27;m organized, clear, smart
<ul>
<li>➡️ Those people claim to have trouble keeping their thoughts in order, and usually dislike writing and managing projects</li>
<li>➡️ This is puzzling for me, because my thoughts also start disorganized
<ul>
<li>➡️ That&#x27;s how the brain works</li>
</ul>
</li>
<li>➡️ All I have is a process to organize them</li>
</ul>
</li>
<li>🌟 Process in brief
<ul>
<li>⬆️➡️ Base idea: diverge first, converge later
<ul>
<li>⬆️➡️ Simple divide and conquer, reduces the cognitive load</li>
</ul>
</li>
</ul>
</li>
<li>...</li>
</ul>
</blockquote>
<p>In the previous phase, I had written some of those bullets in sequence, not nested, but they clearly felt like a group on their own. I identified the first bullet, &quot;Sometimes people...&quot;, as the intro to the piece and moved three other bullets right below it, indenting them to show that they are follow-ups to the same topic. I also created a new (🌟) root bullet: &quot;Process in brief,&quot; because that felt like the natural way to continue the introduction before getting into the details.</p>
<p>These two idea branches (the intro statement and the process in brief) started to exert a gravity of their own, attracting other related points that were scattered elsewhere in the list. I moved things around some more, making them grow.</p>
<blockquote>
<ul>
<li>...</li>
<li>Sometimes people tell me I&#x27;m organized, clear, smart
<ul>
<li>Those people claim to have trouble keeping their thoughts in order, and usually dislike writing and managing projects</li>
<li><del>This is puzzling for me, because my thoughts also start disorganized</del>
<ul>
<li><del>That&#x27;s how the brain works</del></li>
</ul>
</li>
<li>All I have is a process to organize them</li>
<li>⬆️➡️ Without this approach, I don&#x27;t know how I would do it</li>
</ul>
</li>
<li>Process in brief
<ul>
<li>⬆️➡️ <del>Disclaimer: this will sound super-elementary for some readers. But you&#x27;d be surprised how many people don&#x27;t even attempt this.</del></li>
<li>Base idea: diverge first, converge later
<ul>
<li>Simple divide and conquer, reduces the cognitive load</li>
</ul>
</li>
<li>🌟 From your head to an organized bullet list</li>
<li>⬆️➡️ This is hard to do without the proper tool. It&#x27;s one area that you can&#x27;t really replicate on paper.</li>
<li>⬆️➡️ This can be done in your head a little, but doing it with digital text is much more effective (and necessary for larger and more granular organization)</li>
</ul>
</li>
<li>...</li>
</ul>
</blockquote>
<p>I also began deleting bullets when I felt they added little value (normally, I would just cut the bullet with <code>Ctrl + X</code> and not paste it, effectively deleting it, but I&#x27;m showing it with a strikethrough here for clarity).</p>
<p>I continued reordering, nesting, and deleting as inspiration guided me, trying to condense those scattered thoughts into groups, subgroups, and sub-subgroups. Toward the end, I started seeing the overall structure more clearly, so I added root-level bullets for the section titles and nested the related bullet branches under those.</p>
<p>By the end of the process, I arrived at this:</p>
<blockquote>
<ul>
<li>🌟 <strong>Intro</strong>
<ul>
<li>➡️ Sometimes people tell me I&#x27;m organized, clear, smart
<ul>
<li>➡️ Those people claim to have trouble keeping their thoughts in order, and usually dislike writing and managing projects</li>
<li>➡️⬆️ All I have is a process to organize them</li>
<li>➡️⬆️ Without this approach, I don&#x27;t know how I would do it</li>
</ul>
</li>
<li>➡️⬆️ This approach works for any application where it&#x27;s important to think straight:
<ul>
<li>Writing</li>
<li>Planning</li>
<li>Working through emotions</li>
</ul>
</li>
<li>🌟 Process in brief
<ul>
<li>➡️⬇️ Base idea: diverge first, converge later
<ul>
<li>Simple divide and conquer, reduces the cognitive load</li>
</ul>
</li>
<li>🌟 From your head to an organized bullet list</li>
<li>➡️⬆️ This is hard to do without the proper tool. It&#x27;s one area that you can&#x27;t really replicate on paper.</li>
<li>➡️⬇️ This can be done in your head a little, but doing it with digital text is much more effective (and necessary for larger and more granular organization)</li>
</ul>
</li>
<li>➡️⬇️ Will give a practical, real-world example at the end</li>
</ul>
</li>
<li>🌟 <strong>The nitty-gritty</strong>
<ul>
<li>➡️⬇️ Diverge = brain dump, brainstorm
<ul>
<li>Get it all down into a big, messy list</li>
<li>Don&#x27;t mind the order, relevance, or quality of bullets</li>
<li>Keep bullets short, &lt;3-4 lines, nest when you feel a sense of continuation, but don&#x27;t worry about doing it well at this step</li>
<li>Go fast, but don&#x27;t rush</li>
<li>Only stop when you feel that you&#x27;ve listed more than enough for the purpose</li>
<li>My guess is this will be easy for most</li>
</ul>
</li>
<li>➡️⬇️ Converge = curate the list
<ul>
<li>🌟 Basic operations
<ul>
<li>➡️⬇️ Reorder</li>
<li>➡️⬇️ Nest
<ul>
<li>➡️⬇️ Nest more, but not too much</li>
</ul>
</li>
<li>➡️⬇️ Prune</li>
</ul>
</li>
<li>➡️⬇️ Critical: these choices depend entirely on the goal of the exercise
<ul>
<li>➡️⬇️ If you already have a precise goal in mind, use it as guide for the organization</li>
<li>➡️⬇️ If you don&#x27;t have a clear goal yet, it&#x27;s okay: it will emerge in this phase
<ul>
<li>How to find the goal?
<ul>
<li>Simply writing the list down will make it easier to compare the relative importance and relevance of the items</li>
<li>Which feels &quot;on the money&quot;? Which is &quot;nice to have&quot;? Which feels like a premise and which like an arrival point? This is more about instinct than logic in most cases.</li>
<li>The key is that you don&#x27;t need to get it right at the first time: this method makes it super-cheap to try different organizations</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>➡️⬇️ Sometimes you start with a goal, and a better one will emerge in this phase!</li>
<li>➡️⬇️ Converge phase might feel daunting: &quot;I can&#x27;t organize my thoughts&quot; mindset kicks in
<ul>
<li>🌟 But it&#x27;s just a series of elementary retouches. Think small.
<ul>
<li>🌟 You don&#x27;t need to keep it all straight in your head until the very end.</li>
<li>🌟 Just compare two bullets at a time. Refine, take time, iterate.</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>➡️⬇️ If you have time, let it sit for a day or more, then re-read and do more diverge-converge. Helps a lot.</li>
<li>🌟 Keep at it until it feels right.
<ul>
<li>🌟 You end up with a structured list, which is really a mind-map, but more convenient</li>
</ul>
</li>
</ul>
</li>
<li>🌟 <strong>Which tool?</strong>
<ul>
<li>🌟 Extremely simple method, but many don&#x27;t use it. Maybe because the editors that allow this are still rare?</li>
<li>➡️⬇️ Tool requirements
<ul>
<li>➡️⬇️ Auto-bullets: when you press enter, a new bullet is created automatically</li>
<li>➡️⬇️ Keyboard-shortcut-rich editor</li>
<li>➡️⬇️ Shortcut to indent/de-indent</li>
<li>➡️⬇️ Shortcut to move line up/down</li>
<li>➡️⬇️ Shortcut to cut/paste whole line</li>
<li>🌟 Nice to have: branch-folding</li>
</ul>
</li>
<li>➡️⬇️ Recommendations
<ul>
<li>Word or notepad won&#x27;t do</li>
<li>Obsidian, Notion, and similar work well. Prefer those that allow you to customize the shortcuts. Notion doesn&#x27;t, Obsidian does.</li>
<li>Surprising winner for speed: Sublime Text with Markdown plugin
<ul>
<li>Super-fast and simple</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>⬇️ <strong>Practical example: writing this post</strong>
<ul>
<li>Can skip, it&#x27;s only for those who are still unclear</li>
<li>(Steps-by-step outline)</li>
</ul>
</li>
</ul>
</blockquote>
<p>It&#x27;s still a bit rough, and if I had more time, I would have refined it a little more. But I was short on time, so I decided it was good enough to move on.</p>
<p>Since I was using Obsidian, I could fold the big outline to show only its root items:</p>
<blockquote>
<ul>
<li>Intro &gt;</li>
<li>The nitty-gritty &gt;</li>
<li>Which tool? &gt;</li>
<li>Practical example: writing this post &gt;</li>
</ul>
</blockquote>
<p>Neat! With this bird&#x27;s-eye view, I can more easily stay on track when writing.</p>
<p>Of course, I made several more changes to the content and structure while actually writing this blog post. New ideas came to me, and I decided to drop some others, etc. Writing is another <a href="https://www.paulgraham.com/essay.html" rel="nofollow noopener noreferrer" target="_blank">excellent way to clarify your thoughts</a>, so it&#x27;s natural not to follow the outline exactly. But by spending time refining a structured outline, I made my writing process much faster and easier.</p>
<h2>Bonus: What About AI?</h2>
<p>This guide was about how to <em>learn</em> to organize your thoughts autonomously. I think it&#x27;s a very useful skill, and practicing it also makes you better at doing some of it in your mind. Still, in 2025 you might ask: why should I do it myself when an AI can do it instead?</p>
<p>It will work.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/claude_prompt.webp" alt="" title="Screenshot of the Claude web UI showing the beginning of a prompt: &#x27;I&#x27;ve made a brain-dump of some ideas I have, which I want to make into a blog post. I want you to help me turn this messy list into a structured outline for the blog post. Don&#x27;t write the blog post itself, and don&#x27;t significantly change the wording of these bullets. You are only allowed to reorder them in any way, nest them one or more times to form hierarchies of ideas, delete the low-value ones, and add a small number of new bullets to help with the structure (e.g., to act as the root to group some sections or concepts together in a nested structure)...&#x27;"/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/claude_answer.webp" alt="" title="Screenshot of the Claude web UI showing the answer to the prompt above. The answer is a structured outline of the blog post."/></figure>
<p>One way to use AI for this is to do the Diverge yourself and let the model Converge for you. Be sure to explain your goal, target audience, expected deliverables, and all that stuff clearly to the model, though, or it&#x27;s as likely as not to shape your ideas into an inappropriate form. But if you do those things, the AI will do an excellent job at outlining something nice.</p>
<p>Another way is to involve the AI already in the Diverge phase. LLMs can be very useful for brainstorming, but here your input can&#x27;t be removed entirely: you decide what goes in and what doesn&#x27;t, and you still have to set the course by specifying your goals. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/how-to-organize-your-thoughts-with-a-simple-bullet-list/507664ldsdl.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Water Lilies, Water Lilies Everywhere]]></title>
            <link>https://aethermug.com/posts/water-lilies-water-lilies-everywhere?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/water-lilies-water-lilies-everywhere</guid>
            <pubDate>Fri, 27 Jun 2025 04:26:39 GMT</pubDate>
            <description><![CDATA[Patterns are anti-randomness and thought is made of pattern-stuff]]></description>
            <content:encoded><![CDATA[<p><em>Note: <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">This long-form essay</a> I wrote last year goes more in-depth on the same topic.</em></p>

<h2>I / Universes</h2>
<p>Take a universe, one just like our own Universe, only much simpler. It&#x27;s a square made up of 10,000 smaller squares, each of which can only be one of four colors at a time.</p>
<p>Like our Universe after the Big Bang, this little universe starts as a chaotic mess. This is what true randomness looks like:</p>
<figure class="my-8 "><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/grid_random.webp" alt="Animation showing a grid of 100x100 squares that are constantly changing in color. They flicker randomly all the time, so that, even though each frame in the animation is technically unique, the overall impression is that nothing is happening or changing." class="w-full max-w-4xl mx-auto rounded-lg shadow-lg"/></figure>
<p>Stare at it for a while. Replay the video a few times. What does it make you think of? How would you describe the phenomena occurring inside that square? What names would you give to the things you&#x27;re observing?</p>
<p>Look at any specific pixel or region in that image. In some areas, it seems like groups of the same color stick together, but those alliances disappear in an instant, in the flicker of a single frame, never to be repeated. The color of each pixel is truly random, with no rhyme or reason, no explanation, nothing &quot;causing&quot; them to be a specific color instead of another. Anything could happen, and nothing is more likely to happen than anything else.</p>
<p>In a case like this, nothing you see can have a &quot;name,&quot; be described, or talked about—all you can say is that there are pixels behaving randomly. Their behavior cannot be predicted.</p>
<p>This is what we call random: the unpredictable. If something is random, you can&#x27;t predict what it will do next. The reverse is also true: if, despite your best efforts, something remains consistently unpredictable, you call it random.</p>
<p>Randomness and unpredictability are one and the same!</p>
<p>Now look at this other universe, which begins with exactly the same random arrangement of colors:</p>
<figure class="my-8 "><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/grid_06.webp" alt="The animation starts in the same random state as the previous one, and at first seems to behave just the same. But gradually some different characteristics become visible, if barely. Larger agglomerates of same-color pixels form for brief times, then disappear within a few frames." class="w-full max-w-4xl mx-auto rounded-lg shadow-lg"/></figure>
<p>Do you notice anything? It&#x27;s subtle, but there&#x27;s something different going on. It still looks quite random, that&#x27;s true, but isn&#x27;t it a little <em>less</em> random than before? It&#x27;s easier to notice if you compare the first frame,</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/grid_06_frame_001.webp" alt="" title="Still image of the first frame of the animation, which is a field of randomly-colored pixels."/></figure>
<p>with the last one,</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/grid_06_frame_040.webp" alt="" title="Still image of the last frame of the animation, which has several rough patches nestled in a sea of random pixels."/></figure>
<p>It starts uniformly random, but by the end of the simulation, the frame appears somehow changed. During the simulation, the pixels seem to gradually form small clusters, thin filaments that you can distinguish from the messier regions around them. Although they wriggle around randomly and are rather unstable—melting away and reforming in unexpected ways—they linger longer than before and seem at least a bit more predictable. If you focus on a dark area for a moment and pause the video, you could guess that the area will continue being dark half a second later, even if its shape will have changed. Your prediction would be right more often than not.</p>
<p>What changed from the previous video? I only introduced a small twist: each pixel&#x27;s color is partially affected by the colors of its neighbors in the previous frame. If most of its eight pixel-neighbors were terracotta-colored at the previous step, for example, a pixel will turn terracotta next with a 60% chance. It still has a 40% probability of <em>not</em> turning terracotta, though, which is why the effects are still mostly random.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/20250626_semirandom_cells.webp" alt="" title="Diagram of the pixel update rule."/></figure>
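<p><em>For the programmers among you: a minimal Python sketch of this kind of rule might look as follows. Tie-breaking between equally common neighbor colors, the fallback when a pixel doesn&#x27;t join the majority, and wrap-around edges are simplifying assumptions here, not a faithful copy of the simulation code.</em></p>

```python
import random

N_COLORS = 4    # each square can be one of four colors
SIZE = 100      # 100 x 100 = 10,000 squares

def step(grid, p_join=0.6):
    """One frame: each cell adopts the majority color of its eight
    neighbors with probability p_join, and otherwise picks a color
    uniformly at random. Edges wrap around; ties go to the lowest
    color index (both simplifying assumptions)."""
    size = len(grid)
    new = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            # Tally the neighbors' colors in the previous frame.
            counts = [0] * N_COLORS
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx or dy:
                        counts[grid[(y + dy) % size][(x + dx) % size]] += 1
            majority = counts.index(max(counts))
            new[y][x] = (majority if random.random() < p_join
                         else random.randrange(N_COLORS))
    return new

# Start from pure noise, like the first animation, then run a few frames.
grid = [[random.randrange(N_COLORS) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(10):
    grid = step(grid, p_join=0.6)
```

<p><em>With p_join set to 0 you are back to the first, fully random universe; raising it to 0.9 produces the stable, nameable blobs of the third animation.</em></p>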
<p>The effect of reducing randomness was the emergence of <em>patterns</em>. There you go! Even though these are still rather anonymous and elusive, we can finally use a noun to indicate the stuff that&#x27;s going on in the frame—a luxury we didn&#x27;t have in the fully-random universe.</p>
<p>That&#x27;s not all: we can finally talk about what&#x27;s happening. We can describe the behavior with something other than &quot;it just happens.&quot; I just did exactly that when I wrote things like &quot;they wriggle around,&quot; meaning that there is something (other than single pixels) that can become the subject of a sentence. Much better than nothing!</p>
<p>In fact, &quot;pattern&quot; is arguably the most general term we can use for something like this. I&#x27;ll go even further: &quot;pattern&quot; is a close <em>synonym</em> of &quot;predictability.&quot; What is predictable can be called a pattern, and whatever you call a pattern—whether it is a shape you&#x27;re looking at, a sequence of related events, or something more abstract—is always grounded in your ability to foresee some aspects of it.</p>
<p>&quot;Pattern,&quot; then, is the opposite of &quot;random behavior.&quot; They don&#x27;t exclude each other, though: you usually have patterns in the middle of randomness, just as in the animation above. This is why people can make <em>some</em> predictions about notoriously random things like a stock market or an elementary particle—to the extent that someone can predict it, it is a pattern. The rest is random.</p>
<p>In other words, randomness and patterns are inversely proportional, as you can see by comparing the last universe with this new one. Here, I cranked the probability of &quot;joining the majority&quot; up from 60% to 90%, reducing the randomness.</p>
<figure class="my-8 "><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/grid_09.webp" alt="Cellular automaton animation where pixels form much more recognizable, stable blobs. The probability of joining the majority color is set to 90%, creating predictable patterns that maintain their general shape over time." class="w-full max-w-4xl mx-auto rounded-lg shadow-lg"/></figure>
<p>Now the pixels form much more recognizable, stable blobs. They are more predictable because you can be quite confident that they&#x27;ll maintain roughly the same general shape for at least a while.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/grid_09_big_frame_030.webp" alt="This is a close-up of one of the blobs. Take this pixel in the middle of the blob, for example. After watching the animation above, most people will instinctively expect this pixel to remain of the same color for many frames still. Most people will not even see it as a pixel, but as part of a bigger &quot;object&quot; that exists and acts in this simulated space. This is a pattern—this is non-randomness." title="Close-up frame of one blob with a single pixel highlighted near its center, showing how the pixel remains stable as part of the larger pattern structure."/><figcaption>This is a close-up of one of the blobs. Take this pixel in the middle of the blob, for example. After watching the animation above, most people will instinctively expect this pixel to remain of the same color for many frames still. Most people will not even see it as a pixel, but as part of a bigger &quot;object&quot; that exists and acts in this simulated space. This is a pattern—this is non-randomness.</figcaption></figure>
<p>Even though you can&#x27;t predict the <em>entire</em> future of any given blob—predictability usually has a time horizon—you can now make educated guesses about what the color of any given pixel will be several frames ahead. The randomness has decreased, and the patterns have become proportionally stronger.</p>
<p>So strong, in fact, that you could give proper names to the cutest of those blobs. One that looks a bit like a worm you could call Mr. Wriggles, and another resembling a hat could be Professor Brimley. Go ahead, invite your friends over and place bets with them, pit your predictive powers against each other. Will Mr. Wriggles eventually devour old Prof. Brimley, or will Brimley cover Wriggles entirely?</p>
<p>Patterns are literally all we can talk about. Patterns are all we can <em>think</em> and <em>care</em> about. Let me rephrase that: no thought, no language, no notion of existence is possible without patterns of some kind.</p>
<figure><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/squiggles.webp" alt="A still image of a pixel simulation with cute figures scribbled on top."/></figure>
<h2>II / Why?</h2>
<p>This is all interesting to ponder, but it also raises some big questions.</p>
<p>First, <strong>why do new patterns form when you reduce randomness?</strong> All we did was make each pixel sometimes imitate its neighbors. Not a very sophisticated rule. It&#x27;s not obvious why this would lead to the growth of nice, plump lumps behaving precisely like that, with those specific sizes and curves.</p>
<p>The answer is recursion. Adding that neighbor-mimicking behavior created the ability for <em>each pixel to affect its own color at a later time</em>. The fact that pixel A is green at Frame 1 will impact A&#x27;s neighboring pixels&#x27; colors at the next frame and may well push some of them to become green. This, in turn, means that <em>those</em> neighbors will impact the probability of A being green again two or more frames down the road. This is a case of recursion—a feedback loop—and it&#x27;s the reason why you find some level of stability and predictability in those patterns.</p>
<p>In the fully-random example, on the other hand, each pixel was created entirely from scratch at every time step, regardless of what happened before. No recursion could happen then: no pixels affecting themselves over time, no reinforcement of previous colors, no patterns, no predictability.</p>
<p>The second question is <strong>why these patterns are predictable at all</strong>, and it has to do with another key property of causal links like the color-updating rule I introduced.</p>
<p>Although I only allowed a pixel to &quot;remember&quot; the color of its neighbors in the <em>previous frame</em>—and no earlier!—this unlocked something much grander: the neighbors in the previous frame also &quot;remembered&quot; their respective neighbors from <em>two frames</em> ago, those of two frames ago &quot;remembered&quot; their own neighbors from <em>three frames</em> ago, and so on in an uninterrupted chain. This enables what&#x27;s technically called &quot;history-dependence&quot;: the relationship between what&#x27;s happening right now and everything that happened before, since the beginning of time.</p>
<p>History dependence is the secret sauce that lets you make predictions based on patterns: by knowing what those patterns did in the past, or elsewhere—stay still, wriggle around, disappear after a certain time, etc.—you can <a href="/posts/a-framing-and-model-about-framings-and-models">extrapolate predictions</a> of what they might do in the future. Randomness, on the other hand, knows no history. (Turns out that it&#x27;s very hard to have history-dependence without also triggering some kind of recursion, like in this case.)</p>
<p>But there is something even deeper and more counterintuitive to be gleaned from this simple universe: novel patterns can arise spontaneously out of pure, utter randomness. One moment there is nothing; the next moment you have Wriggles and Brimley squirming their ways around. Depending on the <em>nature</em> of the interactions, a field of &quot;nothing in particular&quot; can give rise to something that can be said to &quot;exist&quot; as its own thing. Meaningfulness born from meaninglessness. Order out of chaos.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>III / Water Lilies</h2>
<p>Although the pixels of the first simulation had the same probability of assuming any of the available colors, in the less-random cases this probability became &quot;loaded&quot; or skewed, with some colors more probable than others. At the level of the single pixel, in a single frame transition, this skew was intentionally designed by me in the form of my neighbor-mimicking rule. But recursion carried the skew much farther than I had designed it to go. By reinforcing that skew over many frames, recursion made whole neighborhoods acquire the same color, so that the probability of those neighborhoods collectively retaining the same color became much higher than that of single pixels.</p>
<p>Echoing my <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">earlier essay</a>, something unlikely became likely thanks to the self-reinforcing work of recursion. The long-lived blotches of color bordering each other—<em>very</em> rare in the fully-random universe—became the norm. I call the outcomes of such probability-enhancing feedback loops Water Lilies.</p>
<p>Almost all patterns that exist in reality are Water Lilies: the results of feedback loops, often arising spontaneously from random origins.</p>
<p>Let&#x27;s leave this pixel world now and return with our minds to our own vast Universe. To avoid confusion with the complexity of biology, let&#x27;s imagine a barren, lifeless planet. Here, too, Water Lily patterns abound, thanks to myriad kinds of recursion.</p>
<figure><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/harsh-kumar-GbqbFuAmiIk-unsplash.webp" alt="Artistic rendering of an alien planet."/><figcaption>Image by Harsh Kumar, Unsplash</figcaption></figure>
<p>The fact that the planet itself exists is a Water Lily, that is, a product of recursion: gravity first caused a small clump of dust and debris to accrete out of a randomly swirling primordial cloud; this clumping in turn caused the local pull of gravity to strengthen, pulling more clumps closer, and so on in a loop that swept up most of the nearby matter into a single dense object. This process created the pattern &quot;celestial object,&quot; because its compactness means that it keeps existing predictably over a long period of time—just like the uniform neighborhoods in the simple universe.</p>
<p>The planet&#x27;s spherical shape is a pattern, too, because you can predict or reconstruct its whole shape by looking only at a part of it. This highly non-random shape is another Water Lily, of course, because any random protrusion it had at the beginning was pulled down by gravity, increasing the internal pressure of the planet, and that internal pressure in turn pushed out the valleys that were too deep. A recursive cycle that can only end with a neat ball. In the pixel universe, the relatively smooth borders of the blobs were something analogous, although they were based on a different recursive rule.</p>
<p>The stratification of the planet&#x27;s inner layers, another neat pattern, is a Water Lily, for reasons I explained elsewhere;</p>
<p>the conical shapes of the planet&#x27;s volcanoes are Water Lilies, as are their roughly periodic (i.e., not-fully-random) eruptions;</p>
<figure><img src="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/eros.webp" alt="Photograph of an irregular asteroid."/><figcaption>The recursive rounding process isn&#x27;t strong enough in smaller asteroids, which are stuck with their random, irregular shapes. NASA/JPL/JHUAPL, Public domain, via Wikimedia Commons.</figcaption></figure>
<p>the fact that rivers of lava and other liquids on the planet&#x27;s slopes are able to &quot;find&quot; their ways to the lowest depressions in the ground is a Water Lily;</p>
<p>the tidal locking of a moon (like Earth&#x27;s) that makes it always display the same face towards its planet is yet another Water Lily effect.</p>
<p>The list goes on. In short, nearly anything you can predict even a little—anything you can label with a name and describe in some way—emerges out of randomness with a (possibly very long) sequence of recursive interactions.</p>
<h2>IV / Perspective</h2>
<p>Despite all that, perhaps you still find it hard to believe that the Universe would have become so full of amazing and diverse patterns, all arising spontaneously from the random quantum noise of the Big Bang. If that is the case, try putting things in perspective.</p>
<p>According to <a href="https://astronomy.stackexchange.com/a/40655" rel="nofollow noopener noreferrer" target="_blank">rough back-of-the-envelope estimates</a>, all solid, concentrated matter in existence occupies a total volume of space equivalent to one part in 3,200,000,000,000,000,000,000,000,000,000,000,000,000,000 of the Universe&#x27;s volume. To give you a sense of the scale, if the Universe were a bag as large as the Earth, all structured matter inside it would be <em>15 times smaller than a single virus</em>—let&#x27;s say the leg of a virus.</p>
<p>That virus leg accounts for nearly all patterns humanity knows and talks about, from stars and planets to all living creatures, places, and objects, scattered in that Earth-sized bag.</p>
<p>Everything <em>other</em> than those patterns is empty space, peppered with largely un-pattern-like things like gas and dust clouds, stray particles zipping around, and dark matter. But these things are so poor in patterns that there is very little we can say about them—despite the fact that they comprise an enormous, crushing majority of the Universe&#x27;s volume. And so we tend to ignore them altogether.</p>
<p>Patterns are not that common, after all. Recursion is what makes improbable things like planets and worms probable, sure, but it does so quite rarely. We just happen to be in the midst of it all. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/water-lilies-water-lilies-everywhere/simpleplanet.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Are We Losing Control?]]></title>
            <link>https://aethermug.com/posts/are-we-losing-control?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/are-we-losing-control</guid>
            <pubDate>Thu, 19 Jun 2025 00:17:15 GMT</pubDate>
            <description><![CDATA[A nitpick and a fable about the world getting more complex than ever]]></description>
            <content:encoded><![CDATA[<p>You might have heard this argument before: &quot;The world we live in is more complex, fast-changing, and interconnected than ever, and to avoid losing control in this harsh new world, we urgently need new solutions.&quot;</p>
<p>I&#x27;m sure you have, because it&#x27;s everywhere:</p>
<blockquote>
<p>The world is changing at an ever-increasing pace... primarily because it has become more connected and interdependent than in our entire history. ... However, a paradigm shift in the way we are creating and managing these systems could solve our problems.</p>
<p>— <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2583391" rel="nofollow noopener noreferrer" target="_blank">Dirk Helbing</a></p>
</blockquote>
<blockquote>
<p>Today, we live in a vastly different world. The person more qualified to lead is not the physically stronger person. It is the more intelligent, the more knowledgeable, the more creative, more innovative.</p>
<p>— <a href="https://www.goodreads.com/quotes/7002503-today-we-live-in-a-vastly-different-world-the-person" rel="nofollow noopener noreferrer" target="_blank">Chimamanda Ngozi Adichie</a></p>
</blockquote>
<blockquote>
<p>The world is more interconnected than ever before, and it&#x27;s becoming more connected every day. Building walls won&#x27;t change that... If we want to close loopholes that allow large corporations and wealthy individuals to avoid paying their fair share of taxes, we&#x27;ve got to have the cooperation of other countries ... .</p>
<p>— <a href="https://time.com/4340310/barack-obama-commencement-address-transcript-rutgers/" rel="nofollow noopener noreferrer" target="_blank">Barack Obama</a></p>
</blockquote>
<blockquote>
<p>We live in an interconnected world, in an interconnected time, and we need holistic solutions.</p>
<p>— <a href="https://www.whatshouldireadnext.com/quotes/naomi-klein-we-live-in-an-interconnected" rel="nofollow noopener noreferrer" target="_blank">Naomi Klein</a></p>
</blockquote>
<blockquote>
<p>Globalization seems ungovernable … only a cosmopolitan perspective can reunite humanity with nature.</p>
<p>— <a href="https://en.wikipedia.org/wiki/Francesco_Sidoti" rel="nofollow noopener noreferrer" target="_blank">Francesco Sidoti</a></p>
</blockquote>
<p>It&#x27;s a narrative that we hear over and over, and I&#x27;ve generally grown numb to it. I just let it slide without a second thought. But recently, while working on a project with my father, I realized just how much I disagree with it. When the time came to introduce our approach to designing effective organizations, my father instinctively reached for this same opening, that our &quot;new world&quot; needs new solutions—and I couldn&#x27;t help nitpicking.</p>
<p>I understand where the people who make these kinds of arguments are coming from. We have technologies today that would have been unthinkable even a few decades ago. Especially since the industrial revolution, new inventions and discoveries have profoundly changed every aspect of our lives. The internet has upended our traditional ways of communicating—and it&#x27;s <em>much</em> faster than anything we had before.</p>
<p>Even to me—someone trying to argue that the above statements are faulty—it feels intuitively true that the world is now more complex than ever, that we risk losing control, or that we already lost control in 2016, or 2001. That intuitive feeling doesn&#x27;t make it logical, though.</p>
<p>My father and I quickly agreed on a change of wording because it wasn&#x27;t important to the core of our arguments, and the problem was happily resolved. But I think I owe him a clearer explanation of my thought process in pushing back on such arguments. Here is my brief attempt, for anyone to read and react to.</p>
<p>(For new readers of this blog, who might be unfamiliar with what I&#x27;m usually interested in: my nitpick is about the logic behind that kind of statement, not the intentions or conclusions!)</p>
<h2>I</h2>
<p>My rebuttal to the &quot;New World, New Solutions&quot; mindset is simple. Yes, the world is very different today compared to the past. Yes, the rate at which such changes happen is consistently faster now overall. But no, we&#x27;re not losing control, because <em>we were never in control</em>.</p>
<p>If we need new solutions today, it&#x27;s not primarily because the world is more complex, but because the solutions we&#x27;ve tried before have <em>never</em> worked in creating the world we want today.</p>
<p>Take climate change, a topic where this kind of flawed reasoning is often applied. You&#x27;ll often see a narrative like this: 1) global climate is changing faster than ever, which is a problem, 2) so countries need to come together, change how they consume, reduce waste, and scale the technologies that counter that change.</p>
<p>I agree with 1, and I agree with 2: both are entirely true. But I disagree that linking them with &quot;so&quot; in that way is the obvious or preferable way to think about the issue.</p>
<p>An argument along these lines was made by Kamala Harris at a summit in 2021:</p>
<blockquote>
<p>As a global community, it is imperative that we act quickly and together to confront this crisis. And this will require innovation and collaboration around the world. It will require the use of renewable energy and new technologies.</p>
</blockquote>
<p>Here I have no problems with the message. Let&#x27;s do those things! But I take issue with her <a href="/posts/a-framing-and-model-about-framings-and-models">framing</a>, based on having to confront &quot;this crisis,&quot; i.e., a problem that has come up recently.</p>
<p>If today we have this huge issue of human-induced, rapidly changing climate, it is because people 50, 100, and 150 years ago didn&#x27;t do what was necessary to avoid it. Climate change was a global problem already on the day when <a href="https://en.wikipedia.org/wiki/Jesse_Fell" rel="nofollow noopener noreferrer" target="_blank">Jesse Fell</a> successfully managed to burn anthracite on an open-air grate in 1808—people simply didn&#x27;t realize it yet.</p>
<p>Human-induced climate change is certainly more urgent now, but it is not a new problem, nor is it more <em>important</em> now than before. Claiming otherwise, and saying that we need to act based on urgency, would mean that all the <strong>important but non-urgent</strong> problems we have today can be postponed until they become both <strong>important <em>and</em> urgent</strong>. The worst kind of procrastination, on a global scale.</p>
<p>Nor do I believe that the problem of climate change would have been easier to solve earlier on. It was definitely harder to detect, for many reasons, but even with a successful detection, humanity would have required the same kind of international cooperation, regulations, and grassroots efforts that we are advocating for today. And good luck achieving those things in 1890!</p>
<p>The point is similar for most other topics. If you&#x27;re tempted to think that today&#x27;s problems are more treacherous than ever before, that the stakes are higher, and the necessary level of cooperation has gotten harder to realize, just open a history book. We have horrible wars today, but we had horrible wars before—arguably much more horrible. People died like flies from epidemics, starvation, and tyranny. Of those that lived, the vast majority spent their whole lives in abject poverty, subject to abuse and very hard limits on their freedom.</p>
<figure><img src="https://aethermug.com/assets/posts/are-we-losing-control/bosch_the_last_judgement_detail.webp" alt="Part of a painting of people in hell, with many scenes of suffering and torment."/><figcaption>A visual summary of any history book. (Detail of The Last Judgement, Hieronymus Bosch).</figcaption></figure>
<p>If our ancestors had to go through all that misery, all that carnage and pain, it was because they didn&#x27;t know better. They had no clue how to make things better, and the few who had the means to improve things didn&#x27;t care.</p>
<p>My point is: the world was <em>always</em> out of our hands, and we were <em>always</em> tossed around by the complexity which we participated in creating. If thousands of animal and plant species have gone extinct from human causes, if countless human cultures, languages, and traditions have been lost to oblivion in the past few thousand years, it is not (primarily) because people wanted things to go that way: they were helpless to avoid them. We were never in control.</p>
<p>Even the justification that the world has &quot;gotten too fast&quot; in recent years, that we &quot;risk falling behind&quot; with terrible consequences, sounds suspicious.</p>
<p>Were the lower classes in 1800 prepared for the changes the industrial revolution would bring by 1860? Were people in 1500 CE South America prepared for the changes the Europeans would bring by 1600 CE? Were middle-eastern populations in 400 BCE prepared for the changes Alexander the Great would bring by 300 BCE? I doubt it, otherwise they might have prevented those changes. And I think they would have <em>loved</em> to be granted a bit more time to prepare. The world was &quot;too fast&quot; even for them, even while being much slower than today. It was too fast for all humans in history, at least since the days we moved away from our ancestral hunter-gatherer lifestyle.</p>
<p>If I were to speculate, I would say that our <em>awareness</em> of our helplessness has increased in the past century, thanks to advancements in science, education, and access to information. Perhaps the spread of democracy has a role, too, because now many more people are allowed to participate in the decisions that affect them, and to learn and reflect about the largest-scale problems—luxuries unavailable to most of our ancestors. This is good news.</p>
<h2>II</h2>
<p>Not everyone makes the mistake of believing the world is significantly more human-averse than ever before. For instance, Kevin R. Lowell, CEO of UScellular, <a href="https://link.springer.com/chapter/10.1007/978-3-031-36429-7_2" rel="nofollow noopener noreferrer" target="_blank">reminds</a> us that “the world we live in today is complex. The world yesterday, to the people who lived yesterday, was also complex. Tomorrow will be complex, and so will the day after that. ... Our challenges are as vexing to us as the challenges of the eighteenth, nineteenth, and twentieth centuries were to the people who lived and worked and died in those days gone by.”</p>
<p>Librarian <a href="https://thelibraryelement.com/2018/04/08/the-world-is-becoming-more-complex-is-it-really-pause-and-consider-this/" rel="nofollow noopener noreferrer" target="_blank">Philip Williams</a> even makes some compelling points for the opposite argument: the world is getting <em>less</em> complex! Globalization has increased interconnectedness, sure, but it has also removed many of the enormous differences we had between cultures. Mass production and standardization have simplified things in many ways, reducing the world&#x27;s variety and making many things more predictable. In many ways, we live in a culturally impoverished world, although the way mass and social media work hides that fact.</p>
<p>I don&#x27;t know if the factors brought up by Williams are enough to claim that the world is overall less complex today than before. The answer probably depends on how you define the word &quot;complexity.&quot; Whichever it may be, though, it doesn&#x27;t change the fact that the complexity has been too much for us to handle for thousands, perhaps even tens of thousands of years already. If anything, we have gotten better at handling it.</p>
<p>The reason for the logical mistake I&#x27;m pointing out seems simple enough: humans have a natural tendency to consider past events as more inevitable and predictable than they were—something called <em>hindsight bias</em>. This happens during one&#x27;s lifetime, like when you blurt out &quot;I knew they wouldn&#x27;t last&quot; about a relationship that just ended, even though you had been supportive until then.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>But this innate bias also happens on historical time scales, in which case it is often called the <em>historian&#x27;s fallacy</em>. We tend to judge past actors as if they had access to the same information and foresight that are available now to us, who already know how things turned out. The &quot;old times&quot; seem so simple and quaint now only because we don&#x27;t actually have to live through them.</p>
<p>I said that I&#x27;m only nitpicking at the logical soundness of those arguments, not the actual messages they&#x27;re trying to convey. How might one rephrase them better?</p>
<p>It is quite simple, really: don&#x27;t make any appeal to the differences between the past and the present, new worlds and old worlds, and instead focus on the <em>kind</em> of problem faced today. For example:</p>
<blockquote>
<p>In a world in which all problems are global, there is no way countries can handle issues by themselves; we need global responses.</p>
<p>— <a href="https://unfoundation.org/blog/post/14-quotes-from-un-leaders-on-humanitys-biggest-challenges/" rel="nofollow noopener noreferrer" target="_blank">António Guterres, UN Secretary-General</a></p>
</blockquote>
<p>Sounds good! And here&#x27;s another one from the evergreen Dana Meadows. She focuses on the perennial thinking weaknesses we carry as humans, such as our biases:</p>
<blockquote>
<p>The world is a complex, interconnected, finite, ecological - social - psychological - economic system. We treat it as if it were not, as if it were divisible, separable, simple, and infinite. Our persistent, intractable global problems arise directly from this mismatch.</p>
<p>— Donella Meadows, Whole Earth Models and Systems</p>
</blockquote>
<p>Flawless and timeless.</p>
<h2>III</h2>
<figure><img src="https://aethermug.com/assets/posts/are-we-losing-control/grasshopper_ant.webp" alt="Drawing of grasshopper and ant, in the style of an old-fashioned children&#x27;s book."/><figcaption>The Grasshopper and the Ant (modified), Percy J. Billinghurst</figcaption></figure>
<p>I will conclude with a sequel to Aesop&#x27;s fable of the Sour Grapes.</p>
<p>After the disgruntled fox leaves the vineyard with an empty belly, a group of ants sees the same bunch of grapes hanging high above their heads. To them, it looks as if it were attached directly to the sky.</p>
<p>Being more honest and methodical than the fox, they hold a council.</p>
<p>&quot;<em>The grapes are way too high for us to reach, even if we all stand on each other&#x27;s bodies,</em>&quot; observes one ant.</p>
<p>&quot;<em>We survived on grapes during all past years. Now we can only find a few rotten scraps on the ground. If we don&#x27;t find a way to reach up to those intact grapes now, the whole nest will be at risk!</em>&quot; cries another.</p>
<p>&quot;<em>My sisters, this is a new crisis, and we need to come together to find a new solution,</em>&quot; concludes a third.</p>
<p>At that point, a young entrepreneurial ant comes forth with a new idea: split up and explore the surrounding area, as there might be a trunk or structure that will let them climb up to the grapes. Some ants are enticed by the novelty of this indirect approach, but most are not. They don&#x27;t see why walking left and right would get them any closer to the grapes. A long debate ensues, and no action is taken.</p>
<p>A grasshopper, who overhears the ants&#x27; arguments, chimes in to remind them of something they had all forgotten.</p>
<p>&quot;<em>Congratulations on your &#x27;discovery&#x27;,</em>&quot; scoffs the grasshopper. &quot;<em>You called it a &#x27;new crisis&#x27;: don&#x27;t make me laugh. And now you&#x27;re surprised that those grapes are out of reach? Ridiculous.</em>&quot;</p>
<p>&quot;<em>What do you mean?</em>&quot; ask the ants. &quot;<em>You seem to know something important. We will share some of the grapes with you if you tell us.</em>&quot;</p>
<p>&quot;<em>Very well,</em>&quot; says the grasshopper with an air of importance. &quot;<em>You ants are always so engrossed in your work that you forget everything. I, instead, sit around, always alert, so I have time to remember things.</em></p>
<p>&quot;<em>In the past years you survived on grapes, yes, but barely. Many of you starved and suffered every year. You never managed to get the good ones hanging from up there. I don&#x27;t think you ever even saw them, so high they were! You were always satisfied with the half-rotten grapes that are found lying on the ground in this season. Good thing, for you, that this year&#x27;s grapes were hanging lower than ever, so that you could notice them.</em></p>
<p>&quot;<em>You were missing out on the good grapes this whole time, and yet you cried when you saw them today, and called this a new crisis! You could have been climbing up this whole time, and instead you scraped by on rotten food every year!</em>&quot;</p>
<p>The ants spring into action and eventually find a way to reach the grapes. The nest is saved, and they even have some grapes to spare. As for the grasshopper, for some reason he is not invited to the feast. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/are-we-losing-control/606373slsdl.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[I Like the Sound of How You Look at Me]]></title>
            <link>https://aethermug.com/posts/i-like-the-sound-of-how-you-look-at-me?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/i-like-the-sound-of-how-you-look-at-me</guid>
            <pubDate>Wed, 11 Jun 2025 01:00:25 GMT</pubDate>
            <description><![CDATA[Japanese gitaigo are nuggets of mundane joy]]></description>
            <content:encoded><![CDATA[<p>We are all creatures of habit, but there are some things we never seem to get used to. Never mind the unpleasant stuff—we already dedicate enough time mulling over those. I want to think about the <em>good</em> things that keep surprising us. The things that never bore you, that always make you smile, no matter how many times you encounter them. These are little treasures to be sought out and cherished. Why is no one talking about them?</p>
<p>Here&#x27;s one of mine: Japan. In 20 years of learning about this culture and 14 years of living here, the sense of marvel has never really washed away for me. I know this place quite well, I have experienced it in a thousand different ways, and I have no plans of leaving it—yet I still feel like I&#x27;m on a grand voyage, a Marco Polo on an extended stay in a strange and wonderful land that most of my compatriots have never even dreamed of. A tad delusional, you say? Well, I love this feeling.</p>
<p>But something doesn&#x27;t have to be epochal and marvelous to become an evergreen delight in your life. Sometimes it&#x27;s the small details that can make your day over and over again. I could talk about food, or the way the sun sets in the mountains, or the way the wind sounds when it passes through the trees, but you already know me for the geek I am: I want to talk about Japanese sound symbolism.</p>
<p>The other day my Japanese wife recounted to me a conversation she had with a colleague that morning. They were talking about back pain. I translate:</p>
<p>&quot;Lately it&#x27;s been killing me,&quot; said my wife.</p>
<p>&quot;Don&#x27;t tell me. I have a history too,&quot; said her colleague.</p>
<p>&quot;Oh, you have back issues too?&quot;</p>
<p>&quot;On and off. What kind of pain do you have, though?&quot;</p>
<p>&quot;It&#x27;s an intense <em>zuki-zuki</em> down here in the lower back.&quot;</p>
<p>&quot;Then it&#x27;s a bit different. In my case it was usually <em>piri-piri</em>.&quot;</p>
<p>&quot;Yeah, mine isn&#x27;t <em>piri-piri</em> at all, it&#x27;s definitely <em>zuki-zuki</em> pain.&quot;</p>
<p>&quot;Well, take care of yourself. Get well.&quot;</p>
<p>Then, apparently, they went on with their jobs as usual.</p>
<p>When I heard this, I was awe-struck. Although those two words, <em>zuki-zuki</em> and <em>piri-piri</em>, roughly mean &quot;throbbing&quot; and &quot;tingling/smarting,&quot; respectively, they have a <em>feel</em> to them that can&#x27;t be translated from Japanese. They are what&#x27;s technically called <em>gitaigo</em>, basically &quot;faux onomatopoeia,&quot; in the sense that they imitate the sounds of things that <em>make</em> no sound.</p>
<p>That means that they&#x27;re neither verbs (like &quot;throbbing&quot;) nor multi-purpose adjectives (like &quot;acute&quot;), but unique sounds—funny and rather cute sounds—used only for those situations. In most cases, the etymology of these words is unknown, and even where an origin is hypothesized (<a href="https://en.wikipedia.org/w/index.php?title=Piri_piri" rel="nofollow noopener noreferrer" target="_blank">piri piri</a> is a type of pepper), almost no Japanese speaker is aware of it. These words aren&#x27;t self-explanatory, and I&#x27;ve never heard anyone properly explain them. You just... learn what they mean through exposure.</p>
<p>Despite having spent a long time immersed 24/7 in the language, the precise nuance of many <em>gitaigo</em> still eludes me to this day. The fact that my wife and her colleague could casually exchange those sounds and know precisely what kind of pain the other meant is low-key amazing.</p>
<p>There are many, many <em>gitaigo</em> in common use in Japan. If someone stares at you in a way that makes you uncomfortable, they&#x27;re looking at you <em>jiro-jiro</em>; if you&#x27;re being lazy with your homework, your parents might tell you to stop doing <em>guzu-guzu</em> and get to work; if you take your time to do something properly, you&#x27;re doing it <em>jikkuri</em> (not all <em>gitaigo</em> are sound duplications); and so on and <a href="https://www.tofugu.com/japanese/japanese-onomatopoeia/" rel="nofollow noopener noreferrer" target="_blank">so forth</a>. Every now and then my wife casually drops a new one that I&#x27;ve never heard before and is shocked when I admit not having the faintest idea what she&#x27;s talking about. For my part, I love it when it happens.</p>
<p>It&#x27;s difficult to pin down exactly what entices me about this breed of Japanese words. One aspect of it is surely linked with how childish this would sound in another language. In English, if an adult tells you, with a straight face, that you should be careful because the path is <em>swoosh-swoosh</em> with mud, you&#x27;d wonder why they didn&#x27;t use a &quot;proper&quot; word for it, like <em>slippery</em>. If, instead of saying <em>shaking out of fear</em>, they say <em>twitch-twitch</em>, you will, at a minimum, think they&#x27;re being theatrical. This kind of &quot;baby talk&quot; is normalized—even expected—for Japanese adults, and I find it cute.</p>
<p>But, beyond that, I like the modesty of it all, for lack of a better term. Instead of reaching for ten-dollar words, words with a pedigree going back to Ancient Greek or Germanic kingdoms, or even whole explanatory sentences, the Japanese are content with simply assigning a unique sound to a fuzzy and intangible concept and rolling with it, no fuss. <em>Gitaigo</em> are a reminder of what language is at its core: arbitrary sounds slapped on <a href="https://aethermug.com/posts/language-is-a-bottleneck-for-thought" rel="nofollow noopener noreferrer" target="_blank">shared mental constructs</a>.</p>
<p>Given that there is a seemingly endless supply of them, it&#x27;s the perfect kind of bite-sized pleasure that I will, luckily, never tire of. They always make me go, you know, <em>fuzz-fuzz</em> inside. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/i-like-the-sound-of-how-you-look-at-me/il-vagabiondo-E1uaHo9TvwE-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[I Do Not Remember My Life and It's Fine]]></title>
            <link>https://aethermug.com/posts/i-do-not-remember-my-life-and-it-s-fine?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/i-do-not-remember-my-life-and-it-s-fine</guid>
            <pubDate>Thu, 05 Jun 2025 02:50:22 GMT</pubDate>
            <description><![CDATA[What reminiscing is like without mental imagery]]></description>
            <content:encoded><![CDATA[<p><em>This post is part of the <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>.</em></p>

<p>I&#x27;ve <a href="/posts/aphantasia">written</a> <a href="/posts/aphantasia-is-no-creativity-killer">about</a> <a href="/posts/new-aphantasia-article-on-nautilus">aphantasia</a> <a href="/posts/reading-blood-meridian-with-aphantasia">several</a> <a href="/posts/normality-and-surprise-in-an-image-free-mind">times</a> <a href="/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes">before</a> on this blog, and many people have shown interest in the topic. Most readers are simply curious when I say that I can&#x27;t form any kind of image, sound, or other sensation in my mind. Occasionally, someone shows pity or commiseration towards me, as if I were in constant, daily suffering from a crippling disability. Nothing could be further from the truth, of course. I&#x27;ve been successful at most of what I&#x27;ve tried to accomplish in my life until now, and never had to battle with a sense of being disadvantaged. On top of that, even aphantasia experts generally agree that <a href="https://onlinelibrary.wiley.com/doi/10.1111/sjop.12887" rel="nofollow noopener noreferrer" target="_blank">it is not a disorder</a>.</p>
<p>That is not to say that I feel as capable as anyone else at everything. In particular, there is an area in which I <em>do</em> feel—all too well—that I am weaker than most: my memory of past episodes.</p>
<p>For obvious reasons, my recollections lack a visual component, but that is only part of the story. I seem to have an extremely poor ability to &quot;relive&quot; past events mentally. In fact, my condition is accurately described as</p>
<blockquote>
<p>a mnemonic syndrome that is confined to an inability to mentally travel backwards in time in the absence of detectable neuropathology or significant daily handicap,</p>
</blockquote>
<p>which is the definition of a trait called SDAM, for <em>Severely Deficient Autobiographical Memory</em>.</p>
<p>SDAM was only <a href="https://www.sciencedirect.com/science/article/pii/S002839321500158X" rel="nofollow noopener noreferrer" target="_blank">discovered</a> in 2015, and it is still poorly understood. Yet there is mounting evidence that it has deep links with aphantasia: about half of the people with SDAM also <a href="https://www.youtube.com/watch?v=Zvam_uoBSLc" rel="nofollow noopener noreferrer" target="_blank">report</a> having aphantasia, and many people with aphantasia claim to have difficulties with recalling past episodes from their own lives. For these reasons, I believe I have SDAM or something closely resembling it.</p>
<p>What does that imply, though? That is what I aim to clarify with this post. I always find it very difficult to tell how much of my subjective experience is rare and how much of it is normal for most of humanity. I&#x27;ve never swapped brains with anyone to find out. The only solution <a href="/posts/normality-and-surprise-in-an-image-free-mind">I&#x27;ve found</a> is to try my best at explaining what the inner experience is like for me, and hope to receive comments from readers who have similar—or entirely different—experiences.</p>
<p>Below are some brief observations about the way my episodic memory works, based on notes I took over the past couple of years.</p>
<h2>Recalling Specific Episodes</h2>
<p>When I was looking for my first job, a Japanese company I applied to had me fill in a screening questionnaire. One question was something along these lines: &quot;Write about a time during your university studies in which you faced a difficult problem, and what you did to overcome it.&quot; A perfectly reasonable question to ask a potential recruit with no employment history, but an impossibly hard question for me to answer.</p>
<p>I was completely stumped. In my university years, I worked on many research projects, and it wasn&#x27;t always easy. I <em>knew</em> I had faced various kinds of problems during my graduate studies, and I assumed I had overcome them all before getting my degree. Why couldn&#x27;t I come up with a single example?</p>
<p>This was the first time I noticed that something was off. Those questions about relevant episodes are pretty standard in certain industries, and I had never heard anyone complain about them specifically. Yet they were anathema to me.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/i-do-not-remember-my-life-and-it-s-fine/kolar.webp" alt="A row of wooden drawers with metal handles and blotted-out labels." title="Original photo by Jan Antonin Kolar, Unsplash (modified)"/><figcaption>Original photo by Jan Antonin Kolar, Unsplash (modified)</figcaption></figure>
<p>My memory feels like a file cabinet without labels, a database without an index, a dictionary of randomly-ordered words without a table of contents. There are many memories there, but most of them can&#x27;t be retrieved with convenient keywords like &quot;a time when X happened&quot;.</p>
<p>Only with very specific cues and external help am I able to, sometimes, recall the events I&#x27;m looking for. In the case of the job application questionnaire, I struggled with it for several days, asked a friend for advice, and eventually managed to put together a lame but passable answer based on my research notes. Still, I was left with the nagging feeling that more fitting and relevant examples remained buried away in my psyche, somewhere out of reach.</p>
<p>I felt the same limitation very strongly again last year, when my grandfather passed away. I determined to sit down and write everything I could remember about him, and my relationship with him. I went back in my mind to his house in the Roman countryside and wrote things as they came to me.</p>
<p>He was kind and jovial with us grandchildren. He often involved us in making bread and pizza together in his stone oven, and I liked that. I could even write a general visual description (not by putting into words what I saw in my mind at the time of writing, but by recalling what I &quot;knew&quot; about his looks). All in all, I could muster a good number of generic, timeless <em>facts</em> about him, including my feelings toward him, but I soon realized that episodes and conversations were sorely absent: alright, he used to keep bees and took me to see them more than once; but how many times? What did we say during those visits? What happened, specifically?</p>
<p>Nothing that resembles a &quot;scene&quot; or sequence of events resurfaced in my memory. Everything I wrote was in the past progressive tense: &quot;he used to be like this&quot;, &quot;we would often do that&quot;, &quot;more than once we did so&quot;.</p>
<p>Nowhere in what I wrote was any sense of sequential events, nor any specific conversation, not to mention specific utterances. I could write a good deal about him, but I had to rely on educated guesses in order to put together a coherent description of things that happened.</p>
<p>I wanted to bring back specific episodes, one-time events that we had experienced together, but I could find very little. He was there in my mind, no mistake about that, but in an intangible, elusive way. That day I felt disheartened, and dropped the project midway through.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Most of the time this weakness in recollecting specific life episodes doesn&#x27;t have major practical consequences. If necessary, others can help me bring back a memory, and I can remember the most consequential information as facts rather than episodes. In fact, a <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10598423/" rel="nofollow noopener noreferrer" target="_blank">recent study</a> has found that aphantasics aren&#x27;t any worse than non-aphantasics as eyewitnesses: although the participants &quot;recalled 30% less correct information and accounts were less complete,&quot; &quot;they made no more errors and were as accurate as typical imagers.&quot;</p>
<p>The downsides, then, seem to be mostly emotional, not pragmatic.</p>
<p>(If you&#x27;re wondering how I can remember this &quot;episode&quot; of me trying to write about my grandfather, it&#x27;s because I have the unfinished text saved, and wrote my reflections about the attempt soon afterwards. I have what I need to reconstruct the episode without leaning very much on my remembering powers.)</p>
<h2>Memory Voids</h2>
<figure><img src="https://aethermug.com/assets/posts/i-do-not-remember-my-life-and-it-s-fine/512200ldsdl.webp" alt="A painting of a desert scene with ruins of ancient buildings."/><figcaption>A Bedouin encampment surrounded by ruins, Charles Théodore Frère</figcaption></figure>
<p>The blank I drew with my grandfather is just an example of what you might call a &quot;memory void.&quot; It&#x27;s not that I tend to forget people. Indeed, my loved ones are safe in my mind, albeit in that intangible and elusive form, with no risk of being forgotten—more on this later. My memory voids are specifically about <em>the concrete things I did in my life.</em></p>
<p>Ask me how my childhood was, or if I had fun in my twenties, and all I can answer is &quot;I think so.&quot; Not because I wasn&#x27;t sure about it at the time, but because I don&#x27;t remember what I thought of it. With such broad and general questions, I have almost no hope of coming up with representative memories to help me answer them. No flashbacks to times I thought &quot;this is great!&quot; nor to moments of sadness. Again, many such events are buried as facts and observations somewhere in my memory, but that&#x27;s not how I can recover them.</p>
<p>My past feels like someone else&#x27;s. I know a great deal about it, more than anyone else in fact, yet I don&#x27;t remember <em>being</em> in it. I can create a year-by-year history of my whole life with information such as the places I lived in, the schools I went to, the major turning points in my life; I can explain many facts about the key people and events of each time period; I can even arrange many of these in the form of stories or ordered stages of growth—yet none of this feels like things <em>I</em> did. It&#x27;s like being the world&#x27;s top expert about a stranger&#x27;s life.</p>
<p>To be clear, this is not dissociative amnesia, trauma-induced selective forgetting, or anything like that. I know I had a good, sheltered childhood and early adulthood, with a caring and kind family, good friends, no financial difficulties, no scarring or traumatizing events. I was, by all measures, a happy and privileged kid. But I know that as a dry fact, not as a rush of nostalgic emotions.</p>
<p>Why aphantasia would do that to one&#x27;s memory is still unclear. The topic itself hasn&#x27;t been studied much yet, but this is slowly changing. In a very recent <a href="https://www.researchsquare.com/article/rs-6675918/v1" rel="nofollow noopener noreferrer" target="_blank">experiment</a>, Boere et al. (2025) used EEG (electroencephalography) to show that there might be fundamental differences in neural activity at the time of <em>forming</em> new episodic memories, rather than at the time of retrieving them. Aphantasics, they found, have lower levels of the kind of brain waves associated with attention and, crucially, memory updating.</p>
<p>This is very interesting in itself, but the following observation in their abstract is arguably even more important:</p>
<blockquote>
<p>Despite these neural differences, behavioral performance remained comparable, indicating possible compensatory strategies.</p>
</blockquote>
<p>In other words, people with aphantasia don&#x27;t fare significantly worse in their practical use of memory—they just use it in a different way.</p>
<p>Whenever I think about a period of my life, all the &quot;situational&quot; and somewhat &quot;concrete&quot; memories I get are <em>averaged out</em>, all similarities between separate days and recurrences overlapping each other and blending together, while all the deviations from routine are washed away into oblivion: everything in the past progressive.</p>
<h2>Semantic and Spatial Memory Are Fine</h2>
<figure><img src="https://aethermug.com/assets/posts/i-do-not-remember-my-life-and-it-s-fine/535271ldsdl.webp" alt="A painting of a group of people with turbans in a middle-eastern bazaar."/><figcaption>A Bazaar in Cairo, Charles Théodore Frère</figcaption></figure>
<p>If the results of Boere et al. are confirmed, this could shed some fascinating light on how different non-episodic, or &quot;semantic&quot; memory is from the episodic kind. In my case, semantic memory seems to be perfectly intact, and only the episodic, autobiographical kind is impaired.</p>
<p>From the observations above, it&#x27;s as if my memory-encoding neural circuits work by comparing new experiences with pre-existing <a href="/posts/a-framing-and-model-about-framings-and-models">mental models</a>, tweaking and tuning those mental models of the world with each new sensory input, rather than collecting separate instances of similar but slightly different situations.</p>
<p>This would explain why the important, recurrent facts remain, while all the fleeting details are washed away as if by an averaging operation. Perhaps this is what goes on in everyone&#x27;s brains, except that in most people the episode-storing circuits are also working at the same time, and the two processes feel inseparable.</p>
<p>The interpretation above would also explain why my mental models—the <a href="/posts/embedded-prophesy-devices">embedded prophesy devices</a> I rely on to predict the future and function in everyday life—are as good as anyone else&#x27;s. Indeed, it might even explain why I seem to care and think about mental models, and about cognition in general, more than the average person. For me, mental models are <em>the main way</em> I benefit from my memory. They help me not only to form reasonable expectations about what might happen in the future, but also to &quot;reconstruct&quot; my past—the educated guesses about my own past I referred to before.</p>
<p>And we shouldn&#x27;t forget spatial awareness. This &quot;sense of space and location&quot; plays a major role in my thinking system.</p>
<figure><img src="https://aethermug.com/assets/posts/i-do-not-remember-my-life-and-it-s-fine/heidi-kaden-TJrorizZcPA-unsplash.webp" alt="A view of Florence&#x27;s Duomo, seen from a small street."/><figcaption>Photo by Heidi Kaden, Unsplash</figcaption></figure>
<p>For as long as I can remember, I&#x27;ve been very good at understanding maps and not getting lost. When I lived in Florence, at the age of nine or ten, I loved to be the one guiding my family through the city&#x27;s meandering side-streets with the help of a paper map. I would choose (past progressive!) new and unknown routes every time, just for the fun of exploration, but we always ended up emerging into the square or courtyard I had intended.</p>
<p>For all the difficulties I have in recalling scenes that unfolded on specific days of my life, I have no trouble at all remembering the spatial layout of the places where those scenes took place. I can draw the floor map of any house I&#x27;ve lived in, or spent more than a couple of days in, over the past 30-odd years. When I visit Rome, a city I haven&#x27;t lived in for more than a decade, the routes to get anywhere familiar come back to me as clearly as if I had learned them the previous day. This is clearly another kind of memory, quite distinct from both the episodic and the semantic kinds.</p>
<p>As a matter of fact, spatial memory is the closest thing I have to an &quot;index&quot; for the musty file cabinet of my episodic memories. If I can remember <em>where</em> something happened, there is a good chance I can remember many more details about <em>what</em> happened.</p>
<p>This is a recent realization of mine, and I&#x27;ve taken to calling it the <em>Swoosh Effect</em>. Often my wife mentions an event or the name of a shop, saying something like &quot;I miss the Flavor Savor hamburgers we used to go to when we lived in Nagareyama!&quot; Usually, to her unconcealed dismay, I draw a complete blank: &quot;what&#x27;s the Flavor Savor?&quot; We used to go there all the time, she says, and it wasn&#x27;t even that long ago.</p>
<p>I get absolutely nothing. I frantically try to think of hamburger joints in Nagareyama: zero hits.</p>
<p>Then she adds some spatial information, like &quot;it&#x27;s on the top floor of the XYZ building in front of the station&quot; and suddenly I&#x27;m transported there in a roller-coaster instant and it all comes back to me clearly. I almost <em>feel</em> the swooshing movement of going from the station to the entrance of the XYZ building, then to the escalators, then up to the top floor, and finally homing in on the entrance of the Flavor Savor, all in less than a second. Now all the semantic information pours out: &quot;of course, the Flavor Savor! We went there, like, six times in a year. They have great avocado burgers and a tasty homemade sauce there!&quot; If not too averaged-out, even some fragments of Flavor Savor episodes might come back to me at that point.</p>
<p>In short, I use my semantic and spatial memory to fill in what my episodic memory is unable to recover (or store). Most of the time this works fine, but in some cases that way of compensating doesn&#x27;t work.</p>
<p>For example, I think I may also have mild face-blindness: difficulty recognizing faces and linking them with names. Usually, it doesn&#x27;t cause major issues, and with some effort and repetition, I can learn to recognize people. But the face-blindness really rears its head when I meet someone not-so-familiar in an unexpected place, like random encounters on a train. Since I don&#x27;t have the usual contextual cues to help me, in these cases I find it very hard to pin down who they are. They go &quot;hey Marco, what&#x27;s up?&quot; and all I get is the vague sense that I know this person from <em>somewhere</em>. Only when they mention names or other contextual information do I have a chance of assigning them their rightful place in my mental social network.</p>
<h2>Not Bad, All in All</h2>
<figure><img src="https://aethermug.com/assets/posts/i-do-not-remember-my-life-and-it-s-fine/535272ldsdl.webp" alt="A painting of a man in a boat on a peaceful middle-eastern river."/><figcaption>Dhows on the Nile, Charles Théodore Frère</figcaption></figure>
<p>If you have intact episodic memory, some of the descriptions above might sound entirely alien to you. You might have many questions, and I don&#x27;t know how to answer them all. Before concluding, though, I will try to address two of what might be your biggest doubts.</p>
<p>First, does my lack of remembered episodes and nostalgic flashbacks mean that the people in my life don&#x27;t really exist as people in my mind, and that my forgotten experiences taught me nothing? No, and no.</p>
<p>It is hard to explain, but the things that matter do stay with me, even if I can&#x27;t reminisce about the specific times they happened to me. I may not be able to play back fond memories of distinct interactions with my late grandfather, for instance, but I internalized them all. Intangible and invisible as he may be, he is there in my mind and will always be, and thinking about him does evoke many emotions in me—emotions I feel <em>now</em>, not replicas of past emotions. Something invisible can never fade.</p>
<p>More broadly, my mind&#x27;s constant &quot;averaging&quot; work makes it very hard for me to build an encyclopedic memory—the minor details quickly escape me—but the <em>understanding</em> remains. The important insights stick with me, and my learning takes the form of better and better mental models: more sophisticated, more inclusive of many factors, more widely applicable or abstract. This, even when the specifics of how I obtained those insights refuse to be summoned back. Those vague &quot;problems I had to overcome in university&quot; that the job screening question wanted from me had happened, and I <em>had</em> learned my lessons from them, even though I forgot how they unfolded.</p>
<p>I consider this to be a key component of my intelligence because it allows me to concentrate on what is important. My experience is distilled directly into wisdom. Which brings me to the second doubt you might have: is SDAM a detestable handicap?</p>
<p>More so than with aphantasia, I can see how one might want to call SDAM a disorder. It does sound like a net negative, the removal of an ability that cannot be fully replaced, at least when your goal is to be &quot;reunited&quot; with a lost or distant loved one. Unlike aphantasia, I do often feel my weakness on this front, and I understand the many people with SDAM who bemoan their condition.</p>
<p>But—call me an indefatigable optimist—I also see benefits to having SDAM.</p>
<p>By doing away with reminiscences, flashbacks, and graphic visions of possible futures, I can stay focused on the now, and on what I can do now to improve tomorrow. I don&#x27;t get intrusive scenes to distract me and sway me with sudden emotions.</p>
<p>Perhaps SDAM pushes me to work harder at interpreting the new information <em>as I perceive it</em>, because I know, deep down, that I will either &quot;get&quot; it now—updating a mental model—or I risk forever missing the opportunity to &quot;get&quot; it. This commitment to immediate understanding, in turn, helps me improve as a rational thinker.</p>
<p>And, once again, there is still no empirical proof that these memory &quot;deficits&quot; bring significant disadvantages <em>in practice</em>. The paper I mentioned at the beginning—the one that found no difference in eyewitness accuracy between people with and without aphantasia—makes this conclusion:</p>
<blockquote>
<p>Our pattern of results indicates reduced mental imagery ability might be compensated for by alternative self-initiated cognitive strategies.</p>
</blockquote>
<p>That is one fair way to put it. But here is another one, from my point of view: <em>having strong mental imagery and episodic memory doesn&#x27;t seem to help much in practice.</em> It is just an alternative way to experience the world. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/i-do-not-remember-my-life-and-it-s-fine/516720ldsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[I'll Take a Short Blogging Break]]></title>
            <link>https://aethermug.com/posts/i-ll-take-a-short-blogging-break?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/i-ll-take-a-short-blogging-break</guid>
            <pubDate>Thu, 15 May 2025 00:08:04 GMT</pubDate>
            <content:encoded><![CDATA[<blockquote>
<p>If one does not know to which port one is sailing, no wind is favorable.</p>
<p>—Seneca, Epistolae, LXXI., 3.</p>
</blockquote>
<h3>TL;DR</h3>
<p>Aether Mug is taking a short holiday next week, and possibly the week after that.</p>
<h3>What&#x27;s Going on Inside Marco&#x27;s Mind - Don&#x27;t Peek!</h3>
<p>I started this blog as a side project to capture the many ancillary thoughts I had while working on my main project, <a href="https://planktonvalhalla.com/" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a> (PV). The key difference between AeMug and PV was in frequency: AeMug was to be weekly, while PV had no fixed schedule. What could go wrong?</p>
<p>Of course, the weekly deadlines of AeMug ended up consuming most of my writing time, and PV was pushed to the sidelines. I have an unfinished essay trilogy on the other site that is constantly on my mind, but my progress with the final piece has been glacial (oh boy, it&#x27;s already been half a year since I published the second one!). Partly, it&#x27;s because the ideas I intend to describe there are <em>epically difficult</em> to organize clearly, but mostly it&#x27;s because I only get a few crumbs of time to work on it each week.</p>
<p>So I&#x27;m going to flip things upside down for a while, giving precedence to my essay writing over my blogging.</p>
<p>Why didn&#x27;t I think of this before? Simple cognitive dissonance (&quot;I committed to blogging every week, it would be dishonorable to skip!&quot;). I&#x27;m going to ignore that now: take that, Conscience!</p>
<h3>Links</h3>
<p>These are the first two essays in the &quot;Purpose Trilogy&quot; on Plankton Valhalla:</p>
<ol>
<li><a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">Boundaries Are in the Eye of the Beholder</a></li>
<li><a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">Recursion, Tidy Stars, and Water Lilies</a></li>
</ol>
<p>If you aren&#x27;t already subscribed to Plankton Valhalla, you can do so <a href="https://planktonvalhalla.com/subscribe/" rel="nofollow noopener noreferrer" target="_blank">here</a>. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/i-ll-take-a-short-blogging-break/540068ldsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Determinate Self-Sabotage, or Obsessive Connoisseurship?]]></title>
            <link>https://aethermug.com/posts/self-sabotage-connoisseurship?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/self-sabotage-connoisseurship</guid>
            <pubDate>Thu, 08 May 2025 03:18:15 GMT</pubDate>
            <description><![CDATA[On the quandaries of kodawari]]></description>
            <content:encoded><![CDATA[<p>I used to go to the movies often. I used to buy four or five books every time I entered a bookstore—another frequent occurrence. In my early twenties, entertaining myself was easy. I learned Japanese entirely by entertaining myself: hundreds of manga volumes, dozens of anime series, scores of novels. Fantasy sagas, space-faring action flicks, slices-of-life, voyage chronicles, meditations, mysteries, thrillers, romances, ninjas and pirates.</p>
<p>Then, without me even noticing, the range of things capable of entertaining me shrank. I began to exercise some discretion. The flaws of a creative work revealed themselves to me more often, then all the time. Most Hollywood flicks lost their appeal. Best-seller-strewn show windows ceased to excite me. Other people&#x27;s recommendations rarely worked out. Nowadays, I read more and more classics—not because I don&#x27;t believe good contemporary works exist, but because the effort of <a href="/posts/you-don-t-have-time">wading through the noise</a> can be very tiring.</p>
<p>My Japanese wife calls this state I&#x27;m in &quot;a strong <em>kodawari</em>&quot; for storytelling.</p>
<p>In English, you would have to call it a &quot;fixation&quot; or &quot;obsession&quot; with a certain style of storytelling. You could say that I&#x27;m &quot;particular&quot; about the consistency, structure, and depth of stories. But all of these English terms fail to convey the efficient subtlety of the Japanese word <em>kodawari</em>. Look at a Japanese-English dictionary, and <em>kodawari</em> appears to be a plain translation of those words: a mildly negative state of mind where someone stubbornly—sometimes even illogically—refuses to compromise on something.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/kodawari_definition.webp" alt="Definition from JMdict through tangorin.com." title="Screenshot of an online dictionary definition: kodawari. Noun: 1 obsession; fixation; hangup; determination; fastidiousness; pickiness about (trait, style, etc.). 2 complaining; criticizing. 3 speciality (e.g. of restaurants)."/><figcaption>Definition from JMdict through tangorin.com.</figcaption></figure>
<p>But, unlike its English counterparts, <em>kodawari</em> is regularly used with a very positive connotation: that of resolutely—sometimes even heroically—refusing to compromise on something. You can tell from how many ads use the term to talk about their products.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/kodawari-ramen.webp" alt="A ramen stand titled &quot;Native Fowl Kodawari&quot;. The lantern says &quot;Kodawari ramen&quot;." title="A picture of a ramen stand, with the word kodawari highlighted in the restaurant&#x27;s banner and on a lantern."/><figcaption>A ramen stand titled &quot;Native Fowl Kodawari&quot;. The lantern says &quot;Kodawari ramen&quot;.</figcaption></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/shampoo.webp" alt="Ad for a shampoo. It says that their &quot;Kodawari Point 1&quot; is &quot;using only detergent containing amino-acids that don&#x27;t stress your hair&quot;, and &quot;Kodawari Point 2&quot; is that &quot;we have completely removed all unnecessary additives.&quot;" title="A bottle of shampoo sits next to a woman, with various blocks of text in Japanese, out of which the word kodawari, in alphabet, is visible multiple times."/><figcaption>Ad for a shampoo. It says that their &quot;Kodawari Point 1&quot; is &quot;using only detergent containing amino-acids that don&#x27;t stress your hair&quot;, and &quot;Kodawari Point 2&quot; is that &quot;we have completely removed all unnecessary additives.&quot;</figcaption></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/lemonsour.webp" alt="Ad for a lemon-flavored alcoholic drink." title="A man is holding a glass and a can of lemon sour. Below, a can of the lemon sour drink contains the words &#x27;kodawari taste&#x27; in its emblem."/><figcaption>Ad for a lemon-flavored alcoholic drink.</figcaption></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/lemonsour2.webp" alt="Another lemon-flavored alcoholic drink. Here the brand itself, operated by Suntory, is called &quot;Kodawari Sakaba&quot; or &quot;Kodawari Watering Hole&quot;." title="An advertisement for suntory lemon sour with a man in a lemon costume. A part of the can is highlighted, containing the word kodawari in Japanese symbols."/><figcaption>Another lemon-flavored alcoholic drink. Here the brand itself, operated by Suntory, is called &quot;Kodawari Sakaba&quot; or &quot;Kodawari Watering Hole&quot;.</figcaption></figure>
<p>These marketers don&#x27;t mean to say that they&#x27;re unhealthily obsessed with, hung up on, or critical of their products. Their <em>kodawari</em> is meant to be a big selling point.</p>
<p>According to the Japanese, the best of the best all have very strong <em>kodawari</em> in their arts. A <a href="https://brutus.jp/theartof_working_studioghibli/" rel="nofollow noopener noreferrer" target="_blank">Japanese magazine</a> uses the word twice in a paragraph describing Hayao Miyazaki&#x27;s work: that he &quot;insists&quot; (<em>kodawari</em>) on creating textured films and &quot;resolutely sticks&quot; (<em>kodawari</em>) to hand-drawn animation. You&#x27;ll find dozens of Japanese blog posts about the many <em>kodawari</em> of Steve Jobs (for example, <a href="https://zuuonline.com/archives/212986" rel="nofollow noopener noreferrer" target="_blank">here</a>): he was a fruitarian who ate mostly apples, he had strong opinions on the placement of bathrooms in office buildings, and he insisted that one should be explicit about what they <em>won&#x27;t</em> do.</p>
<p>Alright, <em>kodawari</em> must have a neutral flavor, then. Perhaps &quot;a refusal to compromise (on something specific)&quot; is the English phrasing that comes closest to replicating that neutrality. This isn&#x27;t very helpful. When is it a good thing, and when is it bad? Am I better off compromising on as many things as possible, or should I avoid it? I find these questions incredibly difficult to answer.</p>
<p>Somewhat paradoxically, these doubts disappear in the context of achieving greatness. I find it easy enough to believe that, to be as impactful and famous as Miyazaki or Jobs, one has to be more than just <em>discerning</em> with their work—that one needs to be <em>adamant</em> about the precise way they do their thing, and more so than their peers. It may not work every time, but it does seem like a prerequisite for success.</p>
<p>No, what I struggle to form clear ideas about is the mundane, unambitious kind of <em>kodawari</em>.</p>
<p>I remember the awe I felt towards some of my friends back in university, who could at a moment&#x27;s notice pull out a guitar or keyboard and entertain us all with exciting, deep, skillful music. For someone like me, whose only musical output was a few lame recorder tunes during music class in middle school, those friends were nothing less than wizards.</p>
<p>Of course, to achieve those levels of skill, they had practiced almost daily for years, trying to strum their favorite songs time and time again, alone in their rooms, noticing everything that was wrong in their delivery and fixing it one shade at a time. They had to develop not only excellent coordination of their fingers but also a very discerning ear and a refusal to compromise for sounds that were &quot;barely recognizable&quot; or even &quot;decent.&quot; Their <em>kodawari</em> for flawless performances is what propelled them into the realm of what I considered to be wizardry.</p>
<p>At the same time, many of those same virtuoso friends were unable to enjoy 99% of the music that could be found on radio stations and in CD shops. Most songs were too &quot;commercial&quot; or too &quot;derivative&quot; for them. They worshiped Pink Floyd and Led Zeppelin, Chopin and Mozart—and I&#x27;m sure they were able to enjoy those artists more deeply, more viscerally than I could, thanks to their refined ears—but they were also trapped in a noisy, bland musical world by precisely the same <em>kodawari</em>.</p>
<p>I see this all around me: in how my well-dressed wife will suffer cold and heat rather than use layered clothing due to a strange <em>kodawari</em>; in how a stylish ex-colleague precluded himself from working in any company that didn&#x27;t have an elegant office in a posh part of town; in how some audiophile friends of mine are unsatisfied and frankly a little irritated by any encounter with cheap or sub-par speakers; in how Italians, French people, and really most of the rest of the world don&#x27;t seem capable of enjoying British food.</p>
<figure><img src="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/tokyo.webp" alt="Photograph of a Tokyo street with electric poles and many cables contrasting against the evening sky."/><figcaption>I find great beauty in scenes like these, very common in Tokyo streets, but I know many Europeans who consider the stained concrete and visible cables hideous. Photo by me.</figcaption></figure>
<p>Sometimes, when I witness the extent to which others go just to avoid compromising on their passions and pet peeves, I almost feel lucky not to be like them. By not having a <em>kodawari</em> for drinks, for example, I effortlessly avoid alcohol and coffee, two of the most widespread addictions of modern society: water is my favorite drink. By not being very discerning with brand value and status displays, I never feel the need to splurge on luxury items, saving myself who knows how much money over my lifetime. The same goes for spending on hi-fi tech, because I&#x27;m perfectly happy with—blissfully unaware of!—the mediocrity of any audio source.</p>
<p>When I observe these facts, I find it hard to imagine how any extra enjoyment of 1% of a given category, afforded by the refinement of a <em>kodawari</em>, can possibly outweigh the loss of satisfaction for the remaining 99%. In these cases, it seems to me that a deep-seated refusal to compromise on something is a net liability, a handicap toward the goal of enjoying one&#x27;s life.</p>
<p>It&#x27;s easy to say such things about others&#x27; lives, though. Perhaps it is even irresponsible. What about my own <em>kodawari</em>?</p>
<p>I already mentioned how I learned Japanese by being an entertainment omnivore: for a few years, I immersed myself in Japanese media, ranging from children&#x27;s books to mildly popular rock music, from trashy anime to slapstick comedy. Very few things bored me. While I did draw a line at certain points, I was able to find sufficient delight in a wide variety of Japanese-language sources, and this helped me immensely. If I go back to those same works today, I find that I&#x27;m unable to derive any satisfaction from most of them. Those books, anime, and artists have lost all of their appeal to my eyes. I&#x27;ve grown too picky, too particular about how I use my time. I&#x27;m afraid that it would be much harder to learn a new language with this <em>kodawari</em> I&#x27;ve developed for myself.</p>
<p>In my late twenties, I went through a rather insufferable phase in which I would point out to my partner and friends all the ways a movie was bad and unpleasant—right after we were done watching it. Needless to say, this annoyed the others, especially when they had enjoyed the film.</p>
<p>For a while, I thought those people were being unreasonable. How could they deny those inconsistencies or blatant clichés? Then I realized that my criticism was annoying not because, or when, it was wrong, but <em>precisely because and when it was right</em>. What I was really doing was making flaws and warts stand out for people who hadn&#x27;t noticed or cared about them while watching. Warts and flaws which, if I was right, those people couldn&#x27;t unsee. In other words, I was ruining their enjoyment.</p>
<p>I stopped doing that a long time ago. If a fellow movie-goer asks me directly what I thought of the film, I will tell them vaguely whether I liked it or not—expanding freely on the parts I feel like praising—but I won&#x27;t volunteer any specific criticism unprompted. Everyone has the right to enjoy a creative work at the level of sophistication they want. And, I admit, sometimes I wish I could turn off the &quot;story analysis device&quot; in my brain, sit back, and fully savor whatever blockbuster most other people are raving about at any given time.</p>
<p>All of this doesn&#x27;t sit comfortably with me. When I put things that way, even my own <em>kodawari</em> feels like a handicap, an obstacle to fully enjoying whatever is available to me, rather than only those things I have painstakingly selected. But if you were to offer me a magic pill that <em>really</em> removes, permanently, that &quot;story analysis device&quot; I have honed over a lifetime, I don&#x27;t think I would take it. Why? Because writing stories, fiction or nonfiction, is what I have chosen as My One Goal in life. If I ever write a great piece of writing, it will also be thanks to this <em>kodawari</em>, the same one that robs me of some fun in other areas. It is a blessing with a price I am willing to pay.</p>
<p>Here we are again on the topic of greatness. Perhaps having a <em>kodawari</em> is a net positive only when it is related to Your One Goal, or one of a few, and not worth it in any other case?</p>
<p>Or is there an artful way of developing a <em>kodawari</em> that can be toggled off or dialed down when needed—a way to raise your ceiling of enjoyment without affecting your floor—a breadth-preserving pickiness?</p>
<p>I wish I knew. But the more I think about this, the more I feel like I should be discerning about my discerningness. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/self-sabotage-connoisseurship/edu-grande-0vY082Un2pk-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Replaced]]></title>
            <link>https://aethermug.com/posts/replaced?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/replaced</guid>
            <pubDate>Thu, 01 May 2025 01:59:37 GMT</pubDate>
            <description><![CDATA[What happened when I let an LLM pretend to be me]]></description>
            <content:encoded><![CDATA[<p>If you&#x27;ve been following Aether Mug for more than seven days, you might have read the blog post I wrote last week. It&#x27;s titled <a href="/posts/meditations-on-the-color-of-pee">Meditations on the Color of Pee</a>, and it&#x27;s about a small logical contradiction I noticed in what is considered a healthy amount of water to drink in a day, and what that means for what color urine &quot;should&quot; be. Not a very serious or important topic, to be honest, but one I&#x27;ve found myself returning to over and over in my mind.</p>
<p>Except I just lied: I <em>didn&#x27;t</em> write that blog post! A large language model did. Last week I was very short on time and, considering how long it takes me to write my usual stuff, I realized I might not be able to publish anything decent at all. The idea of using AI crossed my mind. Although the prospect of sitting back while a machine writes in my stead never attracted me, I figured that maybe doing it once could turn out to be an interesting experiment. And &quot;the color of pee,&quot; while admittedly a bit gross, was the perfect topic to try: something I was mildly intrigued by, but not big or important enough to spend a long time developing.</p>
<p>The choice was between having AI slop it out for me or leaving the blog silent for a week, and I chose the former.</p>
<p>At least in terms of the time it took me, it was a success. In terms of quality, I&#x27;ll share some of my thoughts below. But first: if you read it without knowing it was AI-generated, did you notice anything strange? Did it sound different from my usual writing, and was it all logically sound and interesting? Shoot me an email or a <a href="https://bsky.app/profile/marco-giancotti.bsky.social" rel="nofollow noopener noreferrer" target="_blank">Bluesky</a>/<a href="https://x.com/marco_giancotti" rel="nofollow noopener noreferrer" target="_blank">Twitter</a> message with your impressions to let me know.</p>
<p>(To be fair, I didn&#x27;t <em>really</em> hide the fact that it was generated: the author attribution on that page clearly states that &quot;Anthropic Claude&quot;—Sonnet 3.7, to be precise—was the actual writer, with ideas and editing by me; even the subtitle of the post hinted at this fact!)</p>
<h2>How I Did It</h2>
<p>Although AI slop blogs abound, you don&#x27;t often get to see the process behind the scenes. For posterity, I&#x27;ll share the approach I used for last week&#x27;s blog post here.</p>
<p>It was my first time using an LLM for &quot;creative nonfiction&quot; writing, so this process is probably not the best or most efficient. Still, I use language models on a daily basis for other work tasks, including coding and drafting corporate documents, project proposals, and executive memos, and I&#x27;m quite confident in the overall method.</p>
<h4>Step One: Think</h4>
<p>Make of this piece of information what you will, but I had been wondering about the evolutionary and cultural implications of the color of pee for <em>years</em>. I created a note about it in my Obsidian vault in February 2024, but by then my thought process about it was already well formed. It wasn&#x27;t fully spelled out, though, nor structured in a way that would be easy to follow for a generic blog reader, which is why I had to revisit the topic and massage it into a prompt last week.</p>
<h4>Step Two: Stress Testing</h4>
<p>It&#x27;s tempting, when you have a convincing argument in mind, to jump straight into writing, but of course <a href="https://aethermug.com/posts/rationality-fails-at-the-edge" rel="nofollow noopener noreferrer" target="_blank">rationality fails at the edge</a>. My assumptions could be wrong, in which case no argument, no matter how elegant or acrobatic, would be able to save my conclusions.</p>
<p>So I sat down with Perplexity AI and had a critical conversation with a few reasoning models (mostly DeepSeek R1 and Claude Sonnet 3.7 Thinking).</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/replaced/prompt1.webp" alt="" title="Screenshot of a Perplexity AI page with a prompt and part of an answer. The prompt reads: &#x27;Help me make a reasoned argument. First I&#x27;ll state the premises and I want you to critique them, researching whether they are sound before I proceed with the argument. Premises: 1. Standard medical knowledge says that an adult human should consume about 2 liters of water per day. 2. The recommended diet for a healthy life is necessarily attuned to the standard lifestyle of that species during most of its evolutionary history. In other words, 2 liters must be the water amount that our hunter-gatherer ancestors could expect to drink consistently during an average day. Drinking much more or less than that would have led to an evolutionary disadvantage.&#x27; The AI&#x27;s answer begins with: &#x27;The premises presented contain inaccuracies that warrant critique based on current scientific understanding and evolutionary evidence...&#x27;"/></figure>
<p>While this method isn&#x27;t as good as bouncing ideas off a scientist or a highly educated friend, it works quite well and it&#x27;s great for when you&#x27;re on your own. I did a lot of back-and-forth this way, pushing back on some of the AI&#x27;s arguments, and prodding it to strengthen its points when it went on irrelevant tangents.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/replaced/prompt2.webp" alt="" title="Screenshot of a Perplexity AI page. The new prompt is: &#x27;Your first two points about why prehistoric humans were fine drinking less water than today&#x27;s recommendation don&#x27;t make any sense. First, saying that humans developed mechanisms for water efficiency in no way implies that today&#x27;s humans are any less efficient. Second, I don&#x27;t see how the fact that hunter-gatherers chose to live near water sources is relevant: today&#x27;s humans also live near water sources, so what? The only plausible point you made is that their food diet might have been more water-rich. But do we have evidence that this difference was enough to account for drinking half as much liquid water as is recommended for humans?&#x27; The answer begins with: &#x27;Let&#x27;s systematically address your valid critiques and refocus on the strongest evidence: Clarifying Water Efficiency &amp; Evolutionary Biology. Your first objection is correct: modern humans retain...&#x27;"/></figure>
<p>One advantage of using a search-enabled LLM for this kind of self-critiquing is that it will give you its references. You can, and should, double-check these to make sure the sources are reliable and actually say what the AI claims they say (they usually do, but sometimes have added caveats).</p>
<p>After about 30 minutes of this, I had a revised, more science-informed understanding of the topic. I realized there was more nuance to the question of urine colors, and included that nuance in the following steps.</p>
<p>Note that I didn&#x27;t take any of what the AI told me as gospel: I had to pick and choose the bits and arguments that were both relevant and meaningful (for or against my initial ideas) from a lot of useless stuff it offered me. This is a good skill to develop, with applications that go well beyond prompting.</p>
<h4>Step Three: Outline</h4>
<p>Based on my existing notes and on the insights I gained with the chatbot stress test, I created a very quick and rough outline of my ideas: what I found puzzling, what new nuance I had obtained in my brief research, and my thoughts about it.</p>
<h4>Step Four: Context</h4>
<p>Finally, I fired up Cursor—the AI-enabled code editor I use to write Aether Mug—and started a conversation with Claude Sonnet 3.7 Thinking. Before asking it to write the post, though, I gave it a lot of context to start from: the outline I had created, of course, but also the full text of an older post I&#x27;d written, as a style reference.</p>
<p>With all that background information ready, I asked Claude to write a blog post imitating my style, based on my &quot;Color of Pee&quot; outline, and of course it did it in a few seconds.</p>
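<p>For the curious, the gist of that final request can be sketched in a few lines of code. To be clear, this is only an illustration I&#x27;m including here, not what I actually typed into Cursor: the helper function, the section markers, and the sample strings are all invented for the sake of the example.</p>

```python
# Illustrative sketch only: bundle a style sample and an outline into one
# request, the way Step Four supplies context before asking for a draft.
# The function name, section markers, and sample strings are made up.

def build_prompt(outline: str, style_reference: str) -> str:
    """Combine a past post (as a style sample) with a rough outline."""
    return (
        "Below is the full text of one of my past blog posts. "
        "Imitate its tone, rhythm, and structure.\n\n"
        "--- STYLE REFERENCE ---\n"
        f"{style_reference}\n\n"
        "--- OUTLINE ---\n"
        f"{outline}\n\n"
        "Write a complete blog post in my style, based on the outline above."
    )

prompt = build_prompt(
    outline="1. The two-liter rule. 2. What color urine 'should' be. 3. Nuances.",
    style_reference="(full text of an older post goes here)",
)
```

<p>The real conversation involved much more than a single bundled prompt, of course: follow-up instructions, feedback on specific passages, and so on.</p>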
<h4>Step Five: Become an Editor</h4>
<p>Claude wrote a first version, then revised it several times based on my comments. All I did was read the piece critically, select certain passages, and give it feedback via prompts. For example, at one point it wrote that a source I had discovered was a &quot;surprisingly interesting rabbit hole,&quot; to which I responded:</p>
<blockquote>
<p>This sentence (&quot;surprisingly interesting&quot;) is exaggerated. Tone it down. It&#x27;s not really a rabbit hole, just a small revision of my interpretation.</p>
</blockquote>
<p>I also didn&#x27;t like where its conclusions went, so I gave it a prompt starting with:</p>
<blockquote>
<p>Change the conclusions in the last section of the blog post. They should focus on these things: ... [the points that made it in the final version]</p>
</blockquote>
<p>Since this was an experiment, I made sure not to directly write a single word of the blog post (with the exception of the title, which I chose). It was all prompting: expand on this, shorten that, cut that other part off, and so on. If I were to do this regularly, I think I would directly edit certain portions myself.</p>
<h2>Its Proper Place</h2>
<p>Why this approach? It&#x27;s something I developed over the past years as I went through several phases in my relationship with generative AI.</p>
<p>When ChatGPT first came out, I was enthralled by it and talked about it enthusiastically with anyone who would listen (for some reason, few of them were excited about it back then). Of course, at the time it was little more than an over-hyped toy, and I lost interest in it for a while.</p>
<p>Then the models got better and better, and I started using them for coding tasks and web searches. I gradually fell into what you could call a <em>trough of laziness</em>, where I relied more and more on the LLMs to do things for me, to the point of becoming worried about atrophying my thinking abilities. Was I offloading <em>too much</em> to this new technology?</p>
<figure><img src="https://aethermug.com/assets/posts/replaced/tweet1.webp" alt="Tweet: &#x27;It was only a matter of time, but I now catch myself using AI chatbots as a crutch and I don&#x27;t like that.&#x27; Posted on Feb 21, 2024"/></figure>
<p>The answer was yes. I was using LLMs wrong—as if they were consultants I could delegate my hard work to. I was so bought into the dream of sitting back and having instant magic happen for me that I often stopped thinking. This, of course, led to mistakes and time wasted trying to get it to fix its mess. But even when it did spit out correct answers, I was a mere spectator, and I began to fear I would develop an unhealthy dependence on that magic.</p>
<p>I don&#x27;t know exactly what triggered a change, but one day I realized that I was (mostly) out of the trough. Maybe I read some prompting advice that gave me a better <a href="/posts/a-framing-and-model-about-framings-and-models">framing</a> of the whole matter (needless to say, my prompting approach isn&#x27;t original or anything). Maybe it was just wisdom acquired through my daily trial and error. In any case, at some point I started seeing the AI not as a <em>consultant</em>, but as a young, smart, and very fast <em>assistant</em> (or secretary, or intern) who nevertheless lacks real-world experience and common sense.</p>
<p>It needs guidance, it needs nudges, and it often needs even the most basic context spelled out for it.</p>
<p>Most importantly, you don&#x27;t use an assistant&#x27;s work as-is, without critical review and editing—whether the assistant is human or not. When you pass on or employ an assistant&#x27;s output in your work, you&#x27;re not allowed to blame them for any mistakes. If the output is good, great! You&#x27;ve achieved something you aimed for. If it&#x27;s crap, though, it&#x27;s <em>your</em> fault for not supporting the assistant better, and it&#x27;s <em>your</em> responsibility to discard it and try something else.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Once I started seeing things this way, the problem of &quot;AI as a crutch&quot; mostly disappeared. I started thinking hard about how to prompt the models well, predicting the various pitfalls and dead ends they might fall into and avoiding them with carefully chosen wording. For complex queries (and I do a lot of those), I often spend more than 15 minutes composing a single prompt! Most of the time, this effort pays off with high-quality, deep responses that, all considered, speed up my work enormously. If anything, lately I&#x27;ve felt that the very act of crafting a good prompt has helped me think more clearly about the subject.</p>
<p>The approach I described above, where I spent time in an intense conversation with a language model and then looked for the right context to use as reference, is based on the AI-as-assistant framing. I could have directly asked Claude to &quot;write a blog post about the ideal color of urine compared to the cultural stereotype of that color, in a lighthearted and pop-science style,&quot; and it would have done so in a fraction of the time it actually took me. But I can guarantee you, it would have produced something much, much worse—mildly informative, perhaps, but more boring and shallow. (In fact, I just tried that prompt and the answer was... I&#x27;ll spare you the pee-related pun that crossed my mind.)</p>
<h2>Deeply Sloppy</h2>
<p>Alright, but how good was the blog post that came out of this elaborate process? I hope many readers will share their thoughts on this with me, but here are mine.</p>
<p>The blog post that came out wasn&#x27;t very good. In fact, I think it was rather poor. I mean, from one point of view it&#x27;s nothing short of <em>magic</em>, and I would have been astonished by this feat only four years ago. But when judging it as an Aether Mug post, it was not great. At least—boy, how I hope you&#x27;ll agree!—it was worse than <em>most</em> AeMug posts.</p>
<p>In the spirit of taking responsibility: sorry! Last week, I published what I considered to be a sub-par blog post. One lesson I learned is that there is a limit to how much an editor can improve a piece, short of rewriting it entirely. But since I intended, all along, to dissect the experience in this follow-up post, I hope you&#x27;ll forgive me.</p>
<p>Let&#x27;s start with the things Claude did relatively well. It did replicate many of the patterns and idiosyncrasies of my own writing style: short paragraphs, informal tone, ample use of parentheses and self-posed questions, and things like that. In this sense, I was impressed (and not a little creeped out—similar to when I listen to my own recorded voice).</p>
<p>The language model also structured the text more or less as I would have, breaking it into sections with titles of a similar flavor to those I author. I&#x27;m not sure how much credit the AI deserves for this, though, since I did give it a loosely structured but complete outline to work from. All in all, I&#x27;d say it <em>looks</em>, at first glance, like a Marco post.</p>
<figure><img src="https://aethermug.com/assets/posts/replaced/958885ilsdl.webp" alt="A drawing of a man on a horse with a sign that says to shandy hall."/><figcaption>Life and opinions of Tristram Shandy, gentleman; The overthrow of Dr. Slop. Henry William Bunbury.</figcaption></figure>
<p>Dig even a little beneath the surface attributes of the text, though, and the illusion comes crumbling down. Some of its choices of expression felt cringey to me, like the cheesy &quot;...the whole performance&quot; and the reader-directed command, &quot;think about it&quot; (I generally avoid expressions like these because they can sound condescending).</p>
<p>And let&#x27;s not talk about the awkward attempts at humor. Hey, I&#x27;m not saying that my own humor is brilliant or particularly witty, but I think I can do better than &quot;a little nuance can really flush away our most golden assumptions.&quot;</p>
<p>As for the part where it says, &quot;of course, my initial thoughts needed some correction (they usually do),&quot; I see what it&#x27;s doing. I guess it&#x27;s a passable imitation of what I might have written—although I think I prefer it when self-deprecation comes from me.</p>
<p>The main reason I say that the post was of poor quality, though, lies in its logical flow. The argument is wobbly: first, it says in the &quot;Reality Checks and Nuances&quot; section that we don&#x27;t have proof that people really believe &quot;deep yellow&quot; is the natural color of urine, then it concludes that we should be careful not to believe such a thing. There is a missing link there—an underdeveloped point about the influence of media on people&#x27;s thinking patterns (or framings), and perhaps a missing concrete example or two. The way it is laid out in the published post, this hole is not glaring enough to destroy the whole argument, but the non sequitur does weaken its thesis. Are we justified in caring about the way pee is depicted in cartoons and illustrations? I don&#x27;t think the piece makes a convincing case for that.</p>
<p>Then again, the original concept I prepared might have been too weak to begin with, so I still can&#x27;t fully dump the responsibility onto the machine assistant. Perhaps another lesson to be learned is that you have to do the work of writing the whole thing yourself in order to confirm whether it is publish-worthy.</p>
<h2>My Takeaways</h2>
<p>A state-of-the-art LLM was unable to write a strong, deeply reasoned, and coherent blog post, even when given a backbone and plenty of reference materials. It&#x27;s possible that other recent models (perhaps Gemini 2.5 Pro?) might do a somewhat better job at this, but I don&#x27;t have high expectations for the time being. A good blog post—at least the kind of blog post I like—is more than a sequence of facts laid out in passable prose. It needs to convey a... zest, a charged intent, while also being held back by tics and idiosyncrasies to some extent, revealing a struggle—an effort by the author to push beyond their own expressive abilities. In other words, it needs to reveal the presence of a human behind the scenes.</p>
<p>I&#x27;m not saying that LLMs will never be able to write blog posts. They already can, and they already do. There has always been a humongous amount of human-made &quot;content&quot; slop, and AI is already more than good enough to replace that (I hear that <a href="https://en.wikipedia.org/w/index.php" rel="nofollow noopener noreferrer" target="_blank">content mills</a> are going extinct). My claim is that LLMs can&#x27;t write a <em>good</em> blog post, under my definition, <em>yet</em>. Surely not from scratch, and apparently not even when given a human-crafted seed like I did this time.</p>
<p>What about future versions? I think it is entirely possible that generative AI will become so good as to be able to write a great blog post, as long as it is closely guided and supervised by a human. Maybe we&#x27;re not far from the day when an original, zesty human writer will be able to produce blog posts and essays of the same quality as before, but at five or ten times their unaided rate. With LLMs as <em>amplifiers</em> of their skills—as opposed to being replacements—future writers may be able to share their humanity, all their original and hard-earned insights, much more efficiently.</p>
<p>When that day comes, maybe I&#x27;ll take advantage of this new tool. Maybe not—I haven&#x27;t quite made up my mind. For now, I will stick to typing each word myself, even at the cost of being very slow at it. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/replaced/evan-yang-LPAYmP4KSrg-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Meditations on the Color of Pee]]></title>
            <link>https://aethermug.com/posts/meditations-on-the-color-of-pee?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/meditations-on-the-color-of-pee</guid>
            <pubDate>Wed, 23 Apr 2025 01:19:12 GMT</pubDate>
            <description><![CDATA[The synthetic origins of what we consider normal]]></description>
            <content:encoded><![CDATA[<p>You&#x27;ve probably heard the recommendation to drink around 2 liters of water daily. A few years ago, I decided to take this advice seriously. I started carrying a water bottle everywhere, setting reminders, the whole performance. And after a couple of weeks of this new hydration ritual, I noticed something peculiar: my urine had become almost clear, barely a hint of yellow.</p>
<p>This observation stuck with me for reasons I couldn&#x27;t immediately articulate. It was one of those seemingly trivial things that nonetheless gnaws at the edges of your consciousness.</p>
<h2>The Evolutionary Puzzle</h2>
<p>Here&#x27;s where my brain went with this: if humans are &quot;supposed&quot; to drink that much water (according to modern health wisdom), then our hunter-gatherer ancestors—who presumably lived in optimal conditions they were perfectly adapted to—must have produced nearly clear urine most of the time. Yet, whenever urine makes an appearance in popular culture—movies, TV shows, even medical diagrams—it&#x27;s depicted as distinctly yellow, often a deep amber shade.</p>
<p>So are we collectively treating something unhealthy as the baseline? Have we normalized mild dehydration to the point where we think deep yellow is the default setting for human pee? Or have we synthetically constructed an idea of &quot;normal&quot; that doesn&#x27;t reflect biological reality?</p>
<p>I found this oddly fascinating. It&#x27;s like discovering that you&#x27;ve been holding your pencil &quot;wrong&quot; your entire life, but so has everyone else.</p>
<h2>Reality Checks and Nuances</h2>
<p>Of course, my initial thoughts needed some correction (they usually do):</p>
<p>First off, &quot;optimal&quot; hydration isn&#x27;t a binary state but exists on a gradated spectrum. Rather than a single perfect amount of water, there&#x27;s a range where our bodies function well, with diminishing returns as we approach either extreme. Those &quot;8 glasses&quot; or &quot;2 liters&quot; recommendations represent a reasonable target within this range, not a precise threshold between &quot;hydrated&quot; and &quot;dehydrated.&quot; Small deviations from this ideal aren&#x27;t catastrophic—they&#x27;re just progressively less optimal for long-term functioning. And this optimal range itself shifts based on numerous factors: the climate you&#x27;re in, what you&#x27;re eating, how active you are, your body size, and probably your genetic makeup too.</p>
<p>Second, is it really true that deep yellow urine is universally considered &quot;normal&quot;? Its portrayal in media might be purely practical—yellow is simply more visually recognizable as urine than a clear liquid would be. (Imagine a movie scene where someone holds up a clear specimen cup: &quot;Is that water or...?&quot;)</p>
<figure><img src="https://aethermug.com/assets/posts/meditations-on-the-color-of-pee/915900ilsdl.webp" alt="A page of a book titled vogel &#x27;s scale of urine tints, showing nine gradations of urine colors, from pale yellow to brownish black."/><figcaption>Plate I: Vogel&#x27;s Scale of Urine Tints, Charles Wesley Purdy</figcaption></figure>
<p>Third (and perhaps most importantly), I was falling into what philosophers call the naturalistic fallacy—the idea that there exists some ideal natural environment that humans are perfectly adapted to. Evolution doesn&#x27;t work toward perfection; it works toward &quot;good enough to not die before reproducing.&quot;</p>
<p>From an evolutionary perspective, there might even be advantages to tolerating mild dehydration. Our ancestors faced constant trade-offs: spending time searching for water meant less time gathering food or building shelter. The ability to function while slightly under-watered could have been genuinely adaptive in certain environments.</p>
<p>Think about it—if our bodies required absolutely optimal hydration all the time, we&#x27;d be spectacularly fragile creatures.</p>
<h2>From Observation to Insight</h2>
<p>My innocent glance into the toilet bowl somehow spiraled into a philosophical reimagining of &quot;normal&quot; hydration throughout human history. Funny how one minute you&#x27;re judging your own bodily fluids, and the next you&#x27;re deconstructing the flawed reasoning of &quot;crystal clear equals well-watered, therefore our ancestors must have peed like mountain springs.&quot; A little nuance can really flush away our most golden assumptions.</p>
<p>While there are practical reasons for media&#x27;s deep yellow portrayal of urine (as mentioned earlier—it&#x27;s visually recognizable), this consistent imagery might still subtly influence our perception. We might unconsciously accept amber as the default when, biologically speaking, that&#x27;s not necessarily the case. It&#x27;s a perfect example of how our sense of &quot;normal&quot; can be constructed through artificial means.</p>
<p>Medical professionals generally agree that pale yellow urine—not completely clear and not amber—indicates proper hydration for most people. This light straw color suggests your kidneys aren&#x27;t working overtime to conserve water, but you&#x27;re also not overwhelming your system.</p>
<p>Perhaps the most interesting lesson here is how easily we can misinterpret biological signals through cultural lenses. And that&#x27;s something worth meditating on, regardless of the color of your pee. What other &quot;normal&quot; things in our lives might actually be synthetic constructs rather than natural baselines? ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/meditations-on-the-color-of-pee/231115fgsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[What if Gratitude Was Built Into the Grammar?]]></title>
            <link>https://aethermug.com/posts/what-if-gratitude-was-built-into-the-grammar?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/what-if-gratitude-was-built-into-the-grammar</guid>
            <pubDate>Thu, 17 Apr 2025 03:23:24 GMT</pubDate>
            <description><![CDATA[Another Japanese marvel]]></description>
            <content:encoded><![CDATA[<p>Language has <a href="/posts/a-fundamental-framing-of-human-language">arbitrary boundaries</a> built into it that we don&#x27;t even notice, despite using them every day in every form of communication we partake in. This isn&#x27;t just a cosmetic feature—it affects how we think.</p>
<blockquote>
<blockquote>
<p>&quot;Sylvia plays tennis, Monopoly, and violin.&quot;</p>
</blockquote>
<p>This sentence involves a musical instrument and two types of game. ... If one tried to measure the distances between these three concepts by asking people to estimate them, it’s likely that most people would place violin quite a long ways from tennis and Monopoly ... not too surprisingly, this matches the collective choice of Italian speakers, who would translate the above sentence as follows:</p>
<blockquote>
<p>&quot;Sylvia <em>gioca</em> al tennis e a Monopoly, e <em>suona</em> il violino.&quot;</p>
</blockquote>
<p>It would be unthinkable, in Italian, for anyone to play (in the sense of <em>giocare</em>) a musical instrument; the mere suggestion is enough to make an Italian smile. ... While it is natural for English and French speakers to see violin-playing as belonging to the same category as soccer-playing and basketball-playing, the idea would seem downright silly to Italian speakers.</p>
<p>...</p>
<p>Mandarin speakers are considerably more finicky in this matter than Italian speakers: they linguistically perceive four broad types of musical instruments, each type meriting its own special verb. Thus for stringed instruments there is the verb “拉” (pronounced “lā”), meaning roughly “to pull”, while for wind instruments one says “吹” (“chuī”), which means “to blow”. Then for instruments such as the guitar, whose strings are plucked by the fingers, or the piano, whose keys are pushed by the fingers, the verb is “弹” (“tán”) — and finally, for drums, which are banged, what one says is “打” (“dǎ”).</p>
<p>— Surfaces and Essences: Analogy as the fuel and fire of thinking, D. Hofstadter and Emmanuel Sander</p>
</blockquote>
<p>Whether your language clumps two concepts together under the same word, or separates them into distinct terms, has implications for what you consider logical and for how similar those concepts seem to you. You see the world a little differently.</p>
<p>The obvious question, then, is what would happen if we could design a &quot;nice&quot; framing into a language? How would it feel to be gently and implicitly steered into positive worldviews by the very rules of your tongue?</p>
<p>You might surmise, for example, that it would be really quite nice to be able to express gratitude with the mere <em>inflection</em> of a verb.</p>
<p>Well, there&#x27;s no need to surmise: Japanese—that treasure trove of <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">natural</a> <a href="/posts/in-japanese-you-need-a-dictionary-to-count-things">linguistic</a> <a href="/posts/is-there-anything-untranslatable">experiments</a>—has precisely that feature!</p>
<h2>Built-In Thanks</h2>
<p>Take the following English sentence: &quot;Tanaka sold me a book.&quot; This represents an action done by one person (Tanaka) toward &quot;me&quot;, and that is all we need to know to understand what happened. You can translate this directly into Japanese as follows while retaining its intended meaning:</p>
<blockquote>
<span class="text-2xl">田中は私に本を<div>売った</div>。</span>
<p>Tanaka wa watashi ni hon wo <div>utta</div>.</p>
<p>Tanaka (=topic) me (=recipient) book (=object) <div>sold</div>.</p>
<p>Tanaka <div>sold</div> me a book.</p>
</blockquote>
<p>(The last line above is a word-for-word translation, and I&#x27;ll keep using this format below. The clarifications in parentheses, like &quot;(=topic)&quot;, represent the function of grammatical particles like <em>wa</em>, <em>ni</em>, and <em>wo</em>. The important parts to look at are the highlighted words, though.)</p>
<p>The above translation is fine, but in Japanese you can modify any verb to indicate that you&#x27;re grateful for its occurrence. The auxiliary verb <em>kureru</em> (くれる; <em>kureta</em> in the past tense) exists for this purpose:</p>
<blockquote>
<span class="text-2xl">田中は私に本を<div>売ってくれた</div>。</span>
<p>Tanaka wa watashi ni hon wo <div>utte kureta</div>.</p>
<p>Tanaka (=topic) me (=recipient) book (=object) <div>sell+did me a favor</div>.</p>
<p>Tanaka <div>sold</div> me a book [thankfully].</p>
</blockquote>
<p>The concrete content is exactly the same, and you would usually translate this version to English exactly like the previous one, without clarifying the thankfulness aspect. Yet that transformation of the verb <em>utta</em> (sold) to <em>utte kureta</em> (sell+did me a favor) adds extra nuance—it wasn&#x27;t just a dry transaction: Tanaka made a kind gesture by selling me the book.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>The <em>kureru</em> form is optional in many cases, such as the one we just saw. Modifying the verb to make the favor explicit is nice, but the sentence would work without it. Other times, though, it is almost obligatory:</p>
<blockquote>
<span class="text-2xl">報告書を<div>書いてくれて</div>ありがとう。</span>
<p>Houkokusho wo <div>kaite kurete</div> arigatou.</p>
<p>Report (=object) <div>write+did me a favor</div> thank you.</p>
<p>Thank you for <div>writing</div> the report.</p>
</blockquote>
<p>Although the grammar would technically allow you to simply use <em>kaite</em> (write) in this case, it would sound very unnatural. Since you&#x27;re thanking someone, you are basically required to express your gratitude for their action with <em>kaite kurete</em> (write+did me a favor).</p>
<p>At this point, you might be wondering if this is really so special. After all, in English you can always say things like &quot;she helped me do X&quot; or &quot;he did X for me&quot;. And to some extent, you would be right. These English &quot;benefactive&quot; constructs—to use the technical term—do, in some cases, replicate the meaning of <em>kureru</em> sufficiently well. But <em>kureru</em> differs in several key ways.</p>
<p>For one thing, <em>kureru</em> is not just an added word, like &quot;help me&quot; or &quot;for me&quot;, but a grammatical inflection of the verb itself. Although <em>kureru</em> also exists as a stand-alone verb meaning &quot;to give (to me/us)&quot;, its use as an auxiliary attached to another verb is more abstract than that. In fact, it can even be used with the verb &quot;help&quot; itself:</p>
<blockquote>
<span class="text-2xl">大丈夫、友達が<div>手伝ってくれる</div>から。</span>
<p>Daijoubu, tomodachi ga <div>tetsudatte kureru</div> kara.</p>
<p>(It&#x27;s) alright, friend (=subject) <div>help+does me a favor</div> (=because).</p>
<p>It&#x27;s alright, my friend <div>will help</div> me.</p>
</blockquote>
<p>If you tried to add the effect of <em>kureru</em> in the translation, you would end up with very weird sentences, like &quot;my friend will help me for me&quot; or &quot;my friend will help me by helping me&quot;. In English, one word is enough to clarify the fact that there is (probably) a favor involved, and there is no need to repeat that in another way. In Japanese, on the other hand, you&#x27;re expected to use <em>kureru</em> precisely <em>because</em> you&#x27;re mentioning received help.</p>
<h2><em>Kureru</em> as a Cultural Lens</h2>
<figure><img src="https://aethermug.com/assets/posts/what-if-gratitude-was-built-into-the-grammar/andreas-rasmussen-O7QQovvWJyU-unsplash.webp" alt="A grid of white lanterns with Japanese writing on them."/><figcaption>Paper lanterns displayed at a temple, each one bearing the name of the person or company that kindly donated to the temple. Photo by Andreas Rasmussen, Unsplash.</figcaption></figure>
<p>This way of singling out received favors seems to be unique to Japanese, and it is one of the major hurdles students of the language must clear before reaching a near-native level. <a href="https://nomadit.co.uk/conference/eajs2021/paper/57867" rel="nofollow noopener noreferrer" target="_blank">A study in 2021</a> compared native Japanese speakers and native speakers of other languages (English, Chinese, Korean, and French) in the way they phrase stories in Japanese. They found that, although all participants were fluent enough to correctly recount those stories, the Japanese natives used <em>ten times</em> as many benefactive forms as the English and French participants, and around three times as many as the Korean and Chinese speakers. The way I interpret this result is that even the <em>option</em> of highlighting gratitude through grammar doesn&#x27;t usually occur to speakers of those other languages, even though they probably understand it just fine when listening.</p>
<p>The trickiest part, and another big difference from English, is that you can&#x27;t use <em>kureru</em> in just any direction. For example, you can&#x27;t apply it to an action that you have done for someone else, or that your sister has done for a neighbor. <em>Kureru</em> is an expression of gratitude—<em>your</em> gratitude—so it doesn&#x27;t make sense for any verb that doesn&#x27;t bring benefit in your direction.</p>
<p>Direction is what matters: is the action coming toward you or emanating away from you?</p>
<p>As soon as you introduce the idea of a direction of benefits, you have to begin thinking about who is closer to you socially and who is farther. A spontaneous order appears. Your family is closer to you than a random passer-by, so it is okay to use <em>kureru</em> to express gratitude when the stranger finds and returns your mother&#x27;s lost wallet to her. Even if the action is not specifically for your own personal benefit, it does come in your direction—from someone more &quot;outside&quot; your circle to someone relatively more &quot;inside&quot;. Thus, the very grammar of Japanese supports the social distinction of <em>uchi</em> and <em>soto</em>, or &quot;in-group&quot; and &quot;out-group&quot;.</p>
<p>This <em>uchi</em>-<em>soto</em> difference, and the way language intertwines deeply with the culture of its speakers, is fascinating and worthy of a whole book, but I won&#x27;t go more in depth than this. Today I&#x27;m more interested in something else.</p>
<p>Most of the Japanese teaching materials I&#x27;ve seen stop here, concluding that <em>kureru</em> (and its siblings <em>ageru</em>—more on this below—and <em>morau</em>) are social auxiliary verbs. But, while this is <em>part</em> of what these auxiliary verbs do, it&#x27;s definitely not the whole story. Treating <em>kureru</em> as merely a reflection of Japanese sociality would be underselling it.</p>
<p>For example, the following sentences are perfectly natural and representative of things natives commonly say (examples from <a href="https://ndlsearch.ndl.go.jp/books/R100000002-I000010694088" rel="nofollow noopener noreferrer" target="_blank">here</a>):</p>
<blockquote>
<span class="text-2xl">あ，丁度いいところに<div>来てくれた</div>。ちょっと手伝って。</span>
<p>A, choudo ii tokoro ni <div>kite kureta</div>. Chotto tetsudatte.</p>
<p>Ah, just right timing (=target) (you) <div>come+did me a favor</div>. A little help me.</p>
<p>Ah, you&#x27;<div>ve come</div> at the right time. Give me a hand.</p>
</blockquote>
<br/>
<blockquote>
<span class="text-2xl">子どもが<div>寝ていてくれた</div>ので，たまっていた仕事が片付いた。</span>
<p>Kodomo ga <div>nete ite kureta</div> no de, tamatte ita shigoto ga katazuita.</p>
<p>Child (=subject) <div>remain asleep+did me a favor</div> (=reason), piled up work (=subject) sorted out.</p>
<p>Since my child <div>remained asleep</div>, I was able to sort out the work that had piled up.</p>
</blockquote>
<p>At first glance, these may look similar to the previous examples about people helping the speaker or doing something for them. But does &quot;coming at the right time&quot; count as a favor? Is &quot;remaining asleep&quot; an act of benevolence? I don&#x27;t think so. <em>Kureru</em> is routinely used for actions that are not intentional gestures towards the speaker or their in-group.</p>
<p>It goes further still (examples from <a href="https://takushoku-u.repo.nii.ac.jp/records/222" rel="nofollow noopener noreferrer" target="_blank">here</a>):</p>
<blockquote>
<span class="text-2xl">Windows Media Player で移すのなら勝手に CD-audio 形式に<div>してくれる</div>。</span>
<p>Windows Media Player de utsusu no nara katte ni CD-audio keishiki ni <div>shite kureru</div>.</p>
<p>Windows Media Player (=means) transfer (=condition) automatically CD-audio format (=target) <div>convert+do you a favor</div>.</p>
<p>If you transfer it with Windows Media Player, it&#x27;ll automatically [and kindly?] <div>convert</div> it to the CD-audio format.</p>
</blockquote>
<blockquote>
<span class="text-2xl">雨が<div>降ってくれた</div>。</span>
<p>Ame ga <div>futte kureta</div>.</p>
<p>Rain (=subject) <div>fall+did me a favor</div>.</p>
<p>It <div>rained</div> [thankfully].</p>
</blockquote>
<p>Even inanimate things can dispense favors! (Apparently media players are more &quot;outside&quot; than &quot;you&quot;.)</p>
<p>From these examples, you can see that <em>kureru</em> is, first and foremost, a personal expression of gratitude, nothing more and nothing less. This gratitude doesn&#x27;t need to be directed at people, nor do people use it necessarily to keep tabs on social relationships. It is more intimate than that: it&#x27;s gratitude towards the Universe, conveniently packaged with your grammar.</p>
<figure><img src="https://aethermug.com/assets/posts/what-if-gratitude-was-built-into-the-grammar/hamaya.webp" alt="The inside of a Shinto shrine, with some ritualistic arrows and a wooden sign with a picture of a tiger on it."/><figcaption>Photo by yellow_bird_woodstock, Wikimedia Commons, CC-BY-SA-2.0.</figcaption></figure>
<p>The result is that appreciation is a ubiquitous feeling in the Japanese language. Not only do Japanese speakers say &quot;thank you&quot; explicitly more often than English speakers do, using the word <em>arigatou</em> (ありがとう), but the grammatical thankfulness of <em>kureru</em> imbues many sentences, especially but not only those related to social relationships.</p>
<h2>Flipping It</h2>
<p>All of the above sounds wonderful—everyone is thankful!—but is there a flip side to this coin? Should we all rush to implement this same linguistic mechanism in all other languages, too?</p>
<p>I mentioned before another auxiliary verb, <em>ageru</em> (あげる), in relation to the social distinction of <em>uchi</em> and <em>soto</em>. <em>Ageru</em> is the precise directional inverse of <em>kureru</em>: you use it when you, or someone in your in-group, do something in favor of someone &quot;further out&quot; socially.</p>
<blockquote>
<span class="text-2xl">お母さんがお菓子を<div>作ってあげる</div>よ。</span>
<p>Okaasan ga okashi wo <div>tsukutte ageru</div> yo.</p>
<p>Mother (=subject) snacks (=object) <div>make+do you a favor</div>.</p>
<p>Mom <div>will make</div> snacks [for you].</p>
</blockquote>
<p>Despite being essentially &quot;<em>kureru</em> with the arrow&#x27;s direction inverted&quot;, this simple difference makes <em>ageru</em> a rather different beast. Unlike <em>kureru</em>, it is not socially safe to use <em>ageru</em> very generously, because you risk sounding condescending. It is one thing to err on the side of being too thankful, and quite another to suggest that the other person should be grateful to you for something you did—often, that&#x27;s not for you to decide. <em>Ageru</em> is also used much less frequently in relation to inanimate objects and for unintentional behaviors (although it does happen sometimes).</p>
<p>I find <em>ageru</em> to be less interesting than <em>kureru</em> because it implies a rather standard way of thinking. Outward-facing benefactive forms like <em>ageru</em> are relatively common in other languages, such as Korean and Chinese, even if in less grammaticalized forms. Most non-Japanese languages tend to have a strong focus on the subject&#x27;s point of view, regardless of the direction of the action, and <em>ageru</em> follows that same pattern because it&#x27;s about doing something for someone else. In contrast, <em>kureru</em> is a curious case of seeing an action in a different light simply by being on the receiving end. It feels natural in a language like Japanese, where grammatical passivity is the norm, but it would stand out as rather bizarre in most other tongues.</p>
<p>In case you&#x27;re wondering, there is another way to &quot;flip&quot; <em>kureru</em>. If <em>ageru</em> flips the direction of a verb&#x27;s &quot;benefit arrow&quot;, <em>yagaru</em> (やがる; <em>yagatta</em> in the past tense) keeps the arrow pointing in the same direction—seeing the action as the receiver—but inverts the sentiment involved. <em>Yagaru</em> expresses contempt or disapproval towards some event or act.</p>
<p>Compare this sentence using <em>kureru</em>...</p>
<blockquote>
<span class="text-2xl">先生が作文を<div>直してくれた</div>。</span>
<p>Sensei ga sakubun wo <div>naoshite kureta</div>.</p>
<p>Teacher (=subject) essay (=object) <div>correct+did me a favor</div></p>
<p>The teacher <div>[generously] corrected</div> my essay.</p>
</blockquote>
<p>...with the same sentence using <em>yagaru</em>:</p>
<blockquote>
<span class="text-2xl">先生が作文を<div>直しやがった</div>。</span>
<p>Sensei ga sakubun wo <div>naoshi yagatta</div>.</p>
<p>Teacher (=subject) essay (=object) <div>correct+did me an offense</div></p>
<p>The teacher <div>[had the nerve to] correct</div> my essay.</p>
</blockquote>
<p>With a single twist, the perceived emotion is completely changed.</p>
<p>Fortunately, <em>yagaru</em> is very seldom used. It&#x27;s so rude that you&#x27;ll probably never encounter it in the wild outside of manga and yakuza movies, although I suspect it occurs much more frequently inside people&#x27;s heads, never reaching their lips for etiquette reasons.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/what-if-gratitude-was-built-into-the-grammar/kusatteyagaru.webp" alt="Kusatte yagaru... hayasugitanda. (It&#x27;s rotting [damn it!]... it was too early.) This character regrets waking up a sleeping god-soldier too early, presumably before it had finished regenerating from past injuries. For some reason, the first part of this sentence from a Studio Ghibli film became popular among fans, and now they use it for anything they despise." title="A man with a beard drawn in anime style is on a tv screen."/><figcaption>Kusatte yagaru... hayasugitanda. (It&#x27;s rotting [damn it!]... it was too early.) This character regrets waking up a sleeping god-soldier too early, presumably before it had finished regenerating from past injuries. For some reason, the first part of this sentence from a Studio Ghibli film became popular among fans, and now they use it for anything they despise.</figcaption></figure>
<p>There is much more to Japanese benefactives than <em>kureru</em>, <em>ageru</em>, and <em>yagaru</em>. Another verb, <em>morau</em>, is important to express &quot;actively receiving&quot; something. Then there are alternative versions of all these verbs that are appropriate in different social situations. I won&#x27;t go into any of those details, though. To me, the star of the show is always <em>kureru</em>.</p>
<h2>Subliminal Society</h2>
<p>Despite decades of academic debate and opposition, the idea that language shapes the way we think <a href="https://www.scientificamerican.com/article/how-language-shapes-thought/" rel="nofollow noopener noreferrer" target="_blank">is gaining traction</a> again. The affordances and limitations of a language do affect how you and I form and manipulate thoughts. It has even been shown that bilinguals change their worldviews depending on which language they&#x27;re speaking at a given moment.</p>
<p>Of course, <em>how strong</em> an impact language has on actions is still unclear. Does the explicit distinction Italians make between playing an instrument and playing a game matter for their real-life decisions? Do Chinese violinists all uniformly think in a measurably different way from Chinese flutists?</p>
<p>I&#x27;m not sure about any of that, but one thing I&#x27;m sure about: to speak natural Japanese, you are <em>required</em> to remain well aware of your group belonging and social distances (<em>uchi</em> vs <em>soto</em>), and also of all the cases in which you are grateful for something happening. Nor are you ever allowed to forget about it, because everyone around you expresses these things in their own words. If you want to join the flock&#x27;s flight, you&#x27;d better flap your wings.</p>
<p>Would an artificial or modified language that includes this kind of benefactive grammar have a positive influence on society? Try living in Japan, and you&#x27;ll be able to judge for yourself. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/what-if-gratitude-was-built-into-the-grammar/possessed-photography-_E1PQXKUkMw-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Case Study: Is There a Strange Culture War Over AI Art?]]></title>
            <link>https://aethermug.com/posts/is-there-a-strange-culture-war-over-ai-art?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/is-there-a-strange-culture-war-over-ai-art</guid>
            <pubDate>Thu, 10 Apr 2025 06:57:16 GMT</pubDate>
            <description><![CDATA[Studying a thought process]]></description>
            <content:encoded><![CDATA[<p>I often write about general and abstract concepts and &quot;thinking tools&quot;, but I would hate it if people thought they were idle philosophy. Everything I write about is meant to actually help you think a little better, a little clearer—and of course I dogfood all of it. The difficult thing when writing about them, though, is finding simple, non-contrived examples that make these ideas, and their applicability, clear. Here I want to try a different approach: I&#x27;ll take someone else&#x27;s argument as a case study, deconstruct its reasoning, and show what would be needed to make it stronger.</p>
<p>For this purpose, I have selected this recent post titled <a href="https://www.pluralityofwords.com/p/the-strange-culture-war-over-ai-art" rel="nofollow noopener noreferrer" target="_blank">The Strange Culture War Over AI Art</a> by Danny Wardle on the Substack &quot;Plurality of Words&quot;. It is a brief disquisition about what the author calls a &quot;culture war&quot; between the proponents of AI-generated art and those who want to ban it.</p>
<p>I hesitated when choosing the subject of this analysis, because it means critiquing someone&#x27;s writing quite strictly, and hurting people or picking fights is the last thing I want to do. Wardle&#x27;s post seems like a good choice, though: it&#x27;s short, its mistakes are not too subtle, it&#x27;s about a very contemporary topic that I am interested in, and it was written by someone I don&#x27;t know, living in a distant country whose politics I am not entangled with. I don&#x27;t have a horse in this race, because I don&#x27;t identify with &quot;tech bros&quot; or &quot;indie artists&quot;—the two factions that are apparently waging this war against each other. (In fact I identify a little bit with both groups.)</p>
<p>Most important of all, Wardle is a philosopher: if any group of people ever appreciates a stern review of their thought processes, it must be philosophers!</p>
<p>Still, just to be extra-clear, I have nothing against Danny Wardle, nor do I think the post&#x27;s conclusions are necessarily wrong. I read other posts on Plurality of Words that I found well-reasoned and that helped me see those topics more clearly. Here I am only going to critique the reasoning process laid out in the AI art post—is it sound and convincing or not? Of course, the whole point of bringing it up is that I don&#x27;t think it is sound or convincing, <em>regardless of whether its conclusions are right or wrong</em>. Showing why is my only goal below.</p>
<p>Before proceeding, I recommend you read Wardle&#x27;s post in full <a href="https://www.pluralityofwords.com/p/the-strange-culture-war-over-ai-art" rel="nofollow noopener noreferrer" target="_blank">here</a>: it&#x27;s short enough to finish in a few minutes. I don&#x27;t want to misconstrue the arguments therein, so I will assume that you have understood them through direct reading.</p>
<p>I will also use my ideas from these two posts of mine: <a href="/posts/rationality-fails-at-the-edge">Rationality Fails at the Edge</a> and <a href="/posts/a-framing-and-model-about-framings-and-models">A Framing and Model About Framings and Models</a>.</p>
<h2>The Claims</h2>
<p>First of all, what is the ultimate message of Wardle&#x27;s post, the point towards which it&#x27;s trying to build a convincing argument? It&#x27;s impossible to begin the critique without clarifying this.</p>
<p>The piece&#x27;s title is &quot;The Strange Culture War Over AI Art&quot;, suggesting that <strong>it is a description of that culture war, and a demonstration that it is strange and dumb</strong> (&quot;one of the dumbest&quot;). In other words, the author seems to be taking an impartial stance, a third-party look from outside the debate, and showing that it is strange indeed. This hunch is corroborated by the first paragraph, with statements about &quot;modern culture wars&quot; and &quot;political polarization&quot;. The last paragraph&#x27;s conciliatory tone seems to support this as well. I will call this the Surface Claim of the post.</p>
<p>There is another claim half-hidden in this post, though, which I&#x27;ll call the Subsurface Claim. This is not signaled in the title or in the first paragraph, but the author seems to want to show that <strong>AI art is fully morally permissible</strong>. This is clear from the fact that the post <em>only</em> brings up and attempts to demolish the arguments against AI art, never those in favor of it. In fact, refuting those arguments is all the post does. Even though the author could use the same argument against conservatives (&quot;they traditionally espoused protecting intellectual property rights, but look at them now, happy to ignore the artists&#x27; rights&quot;), for some reason I don&#x27;t care to guess about, the author doesn&#x27;t do that.</p>
<p>The existence of the Subsurface Claim is a problem for the strength of the argument, because it undermines the stance of the Surface Claim. How can we trust someone as an impartial observer of the strange culture war if they show a strong preference for one side of it?</p>
<p>This self-contradiction weakens the whole argument, but it doesn&#x27;t invalidate it altogether. Maybe the initial impression was wrong, and the author wants to show, <em>from the inside</em>, that the culture war is strange and dumb because the opposing side is making it so. That approach is perfectly fine, too, although I think it would be more honest (as in, trust-inducing) to make this partiality explicit in the title and introduction.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>Weaknesses in the Arguments</h2>
<p>There are some important minor weaknesses that I want to mention briefly before getting to the parts I&#x27;m most interested in.</p>
<p>First, it is not at all clear to me that this isn&#x27;t a strawman argument. Are people really making those accusations and claims under scrutiny? Who are those people? Are they a compact group all agreeing with each other? Is it really a debate worthy of being called a &quot;culture war&quot;?</p>
<p>There are no quotes in the text, no concrete references we can check to verify with our own eyes that yes, there is indeed a faction of bigots trying to carpet-ban AI art, and they rely on all of those supposedly-faulty arguments for their activism. The only evidence given takes the following form:</p>
<ul>
<li>&quot;A common objection to AI art is...&quot;</li>
<li>&quot;Another claim is...&quot;</li>
<li>&quot;The final criticism I&#x27;ll address, which is perhaps the most common, is that AI art is...&quot;</li>
</ul>
<p>Perhaps the reader is supposed to be already familiar with these claims, or to seek them out on their own. If so, I guess that&#x27;s okay. This is a blog post, after all, not an academic paper, so we shouldn&#x27;t be too picky. Still, this omission doesn&#x27;t help the argument.</p>
<p>A flaw that seems less forgivable, even for a blog post, is the game-theoretical confusion shown in the first section, titled &quot;Property Is Theft, but Don&#x27;t Steal My Ideas&quot;. It is an attempt to show a contradiction in the views of &quot;progressives&quot;, who ostensibly want to simultaneously abolish private property and protect the property rights of artists against AI &quot;theft&quot;.</p>
<p>This point is entirely moot, because it compares ideals for a different way to run society (e.g. communism) with demands to follow <em>the current society&#x27;s rules</em>. It&#x27;s akin to saying, &quot;anyone advocating for the reduction of private property in a future society must also be in favor of theft <em>now</em>, otherwise they contradict themselves.&quot;</p>
<p>It&#x27;s the difference between proposing to play Game A, and demanding that the usual (different) rules of Game B be followed <em>while playing Game B</em>: there is no contradiction, no strangeness here. Clearly, even if one believes that Game A would be better and more fun, one may still be justified in opposing the introduction of only some very specific rules from Game A into Game B, claiming that such change might break Game B and make it even less fun than it was before.</p>
<p>In most countries—including the US, which I take to be what the author cares about—copyright laws ban human artists from cloning other artists&#x27; works, and from making derivative works without the original artist&#x27;s permission. Is it fair to not require the same laws to be applied to AI artists <em>in this legal context</em>? That&#x27;s the real question that needs to be debated (and <em>is</em> being debated at length elsewhere): not &quot;is it fair to require those laws in an ideal communist society?&quot;</p>
<p>This mistake invalidates a large portion of the blog post&#x27;s argument. It doesn&#x27;t show that AI art is morally acceptable (the Subsurface Claim), nor does it show the opposite: a flawed argument doesn&#x27;t show anything.</p>
<p>Alright, but why all these logical hiccups from a blogger who seems otherwise reasonable? We come finally to what I believe is the core of this flawed argument—and maybe of most flawed arguments: it is built on a bad framing.</p>
<h2>Poor Boundaries Mean Poor Thinking</h2>
<p>In my definition of the term, <a href="/posts/a-framing-is-a-choice-of-boundaries">a framing is a choice of boundaries</a>. It&#x27;s the definition of an ontology, the arbitrary but intentional separation of parts of reality into &quot;black boxes&quot; that exist with certain predictable properties and can be used to <a href="/posts/a-framing-and-model-about-framings-and-models">simulate the world</a>. It is also the choice of what boundaries <em>not</em> to draw—i.e., what things we take not to exist, at least in the context in which the framing is used.</p>
<p>What framing does Wardle&#x27;s blog post use? A very simple one. Here is a representative quote from the piece:</p>
<blockquote>
<p>The two camps are clearly divided among the two professions they choose to support: the artists or the programmers. Those on the left fear the displacement of their indie artist friends and those on the right relish at the prospect of telling left-leaning artists to ‘learn to code’.</p>
</blockquote>
<p>This is a rather extraordinary claim to make, and there are more claims of a similar nature in that short Substack article. (Unfortunately, as we saw above, none of those extraordinary claims is backed up by any evidence beyond the author&#x27;s word.)</p>
<p>Here is what the framing used by the author looks like, based on the post&#x27;s text.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/aiart_framing_01.webp" alt="" title="Diagram with a vertical line in the middle, and two groups of text on its two sides. One group says: conservative, pro-AI art, pro property rights, tech proponents. The other group says: progressive, against AI art, against property rights, artists."/></figure>
<p>Only one line is drawn, and only two black boxes exist in the universe of this post&#x27;s framing. Because of this, the resulting model of reality is straightforward: two kinds of people disagree on all fronts, so they must make war.</p>
<p>Drawn like this, it looks like an oversimplification, but by how much? In fact, I think it is made up of several false dichotomies:</p>
<ul>
<li>A person can be either in the &quot;tech bro&quot; camp or in the &quot;indie artist&quot; camp.</li>
<li>&quot;Tech bros&quot; are for AI art; &quot;indie artists&quot; are against it.</li>
<li>Right-wing people are for AI art; left-wing people are against it.</li>
<li>Progressives in this debate are against private property; conservatives are in favor of it.</li>
<li>You are either 100% for AI art or 100% against it.</li>
<li>And so on.</li>
</ul>
<p>Each of these is false or incomplete on its own terms, but the blog post goes further, making it sound like all of these categories are aligned without contradictions: if you&#x27;re conservative, you&#x27;re also pro-AI, pro-property rights, etc.</p>
<p>To understand just how reductive this framing is, let&#x27;s try to make one a bit more nuanced. First of all, let&#x27;s account for, say, three possibilities on the political spectrum, instead of just two, by drawing a second line.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/aiart_framing_02.webp" alt="" title="Table with three columns: conservative, progressive, and neither. The columns are empty."/></figure>
<p>Still an oversimplification, since I lumped under &quot;neither&quot; all ideologies that can&#x27;t be defined as fully or traditionally &quot;conservative&quot; or &quot;progressive,&quot; but at least this framing acknowledges their existence. Next, we do the same with the question of tech and art (a strange axis to begin with, but we can still use it), and <em>consider all their combinations</em>.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/aiart_framing_03.webp" alt="" title="Table with three columns: conservative, progressive, and neither; and three rows: non-artist tech proponents, non-tech-proponent artists, and everyone else. The nine cells thus formed are empty."/></figure>
<p>Here I have accounted for something the author of the blog post left out of the framing: there exist people who are both in favor of tech <em>and</em> artists (or are engineers and artists themselves). To omit that category impoverishes the discussion and makes it sound more clear-cut than it really is.</p>
<p>We now have nine categories, from the two we started with. But we&#x27;re not done, because any of those groups might have one of, say, three opinions regarding AI art. They could be in favor of it, against it, or—if they have no opinion or an in-between opinion—neither.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/aiart_framing_04.webp" alt="I think it might be getting hard to see the small text, so I&#x27;ve color-coded the three opinions on AI: yellow for &quot;pro AI art,&quot; orange for &quot;against AI art,&quot; and green for &quot;neither&quot;." title="The same table with nine cells as the previous image, but this time each cell is further divided into three areas: pro AI art, against AI art, and neither."/><figcaption>I think it might be getting hard to see the small text, so I&#x27;ve color-coded the three opinions on AI: yellow for &quot;pro AI art,&quot; orange for &quot;against AI art,&quot; and green for &quot;neither&quot;.</figcaption></figure>
<p>This brings us to 27 groups of people. Of course, some of these groups contain many more people than others, but at least we&#x27;re not leaving out anyone. Finally, there is one last axis that seems relevant to Wardle&#x27;s discussion, to account for opinions on property rights.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/aiart_framing_05.webp" alt="" title="The same table again, where each of the three areas inside each cell are further divided into two areas: against property rights and pro property rights."/></figure>
<p>It is getting so cluttered that I gave in to a dichotomy here just to make things visible (this should really be a 4D table, but I couldn&#x27;t find a tool that makes those). People in each of those 27 groups can be for or against property rights (or somewhere in between, but you get the idea).</p>
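<p>As a sanity check on the arithmetic, the group count is simply the product of the sizes of the four axes: 3 × 3 × 3 × 2 = 54. Here is a minimal Python sketch of that enumeration (the category labels come from the diagrams above; the variable names are mine):</p>

```python
from itertools import product

# The four axes of the expanded framing (labels taken from the diagrams).
politics = ["conservative", "progressive", "neither"]
profession = ["non-artist tech proponents", "non-tech-proponent artists", "everyone else"]
ai_art_opinion = ["pro AI art", "against AI art", "neither"]
property_rights = ["pro property rights", "against property rights"]

# Every combination of positions along the four axes is one "group".
groups = list(product(politics, profession, ai_art_opinion, property_rights))
print(len(groups))  # 3 * 3 * 3 * 2 = 54
```

<p>Adding a fifth axis, or splitting a dichotomy into a spectrum, multiplies the count further, which is exactly why coarse framings feel so much simpler than reality.</p>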
<p>Surely, as the author of the blog post wrote, there are proportionally more people in the &quot;pro property rights&quot; boxes under the &quot;conservative&quot; column than there are in the corresponding boxes under &quot;progressive.&quot; But there might still be a significant number in the opposing boxes, not to mention the boxes in all the other combinations in the &quot;neither&quot; column and &quot;everyone else&quot; row.</p>
<p>With (at least) 54 groups of people holding different positions, there might be so many alternative debates, so many nuances and solutions to consider and pit against each other. Yet the author writes as if only these existed:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/aiart_framing_06.webp" alt="" title="Again the same table, but this time all cell contents are removed except for the small fragments labeled &#x27;pro AI art, pro property rights, conservative non-artist tech proponents&#x27;, and &#x27;against AI art, against property rights, progressive non-tech-proponent artists&#x27;, and another one for the same labels but in the &#x27;everyone else&#x27; row."/></figure>
<p>If we use the full picture (the one before this last one), it is evident that any claim that is supported or criticized by an argument needs to be carefully attributed to the right groups of people, and any allegation of contradictions needs to prove that it is indeed a single group of people making both contradicting claims. The author&#x27;s original two-faction framing, on the other hand, can only lead to seeing things as a simple, and indeed bizarre, &quot;war.&quot;</p>
<p>(I will not go in depth on this one, but there is another flawed assumption in this article: that having two ideas that contradict each other somehow invalidates both of them. So, not only did the author claim there is a contradiction where there might not be one, but even assuming the contradiction was real, it would not lead to the conclusion that being against AI art is wrong.)</p>
<h2>Asking Good Questions</h2>
<p>Rationality fails at the edge, and we&#x27;ve seen that the edge—the foundational framing—of the blog post is ill-suited to these topics, so the whole house crumbles. How could the author have avoided this? How do you shore up an argument to give it the nuance and depth it deserves?</p>
<p>One method I like to use is to ask myself the good, hard questions. I think a sharp argument should interrogate itself.</p>
<p>Below is a selection of what I think would have made great questions for the author to ask and attempt to answer in the blog post in order to take the conversation beyond all the apparent strangeness. I&#x27;ve already posed several such questions above, so I won&#x27;t repeat those.</p>
<p>Note that the following might be mistaken for rhetorical questions, that is, for a way of arguing my own thesis implicitly while camouflaging the claims as questions. This is not the case: I genuinely don&#x27;t have a clear answer to any of them! Even for those where I do have a hunch one way or the other, my confidence is low, and I wouldn&#x27;t die on any of those hills yet. I intentionally chose to analyze a blog post on a topic I don&#x27;t have strong opinions on.</p>
<p>Let&#x27;s follow the blog post&#x27;s original structure.</p>
<h3>&quot;Property Is Theft, but Don’t Steal My Ideas&quot; Section</h3>
<p>This section quotes and seems to fully agree with a blog post by Richard Y. Chappell. The thesis is this: just as an artist has no right to prevent certain groups of people from consuming their art, they have no right to ban AI from training on it. Wardle then mentions the &quot;long history of progressives supporting media piracy on freedom of information grounds,&quot; using it to show that there seems to be a contradiction in the stance of anti-AI people (who, remember, are taken to be primarily non-tech leftists).</p>
<p>Here are some possible questions:</p>
<ul>
<li><em>Movie theaters and laws prohibit people from recording movies and redistributing them for free or for a fee: in what ways, if any, does the desire to prohibit AI training on copyrighted art differ?</em> (Any answer to this would be good progress.)</li>
<li><em>Would those people still be against AI training if they had guarantees that the trained models would not be used for inference based on their art?</em> (This would show us whether the real problem is the training part or the inference part.)</li>
<li><em>Assuming one accepts that AI training on art is moral, does that automatically imply that AI inference mimicking that art is moral, too?</em> (Chappell also leaves this question unanswered, at least in the non-paywalled part of their post.)</li>
<li><em>Are the same people who support media piracy also generally against AI art?</em> (The author&#x27;s binary framing doesn&#x27;t allow for a question like this.)</li>
<li><em>Assuming the answer to the previous question is YES, would those people still be against AI art if their ideals of abolishing property rights, or at least copyright, were realized in society?</em></li>
</ul>
<h3>&quot;Effort Moralism&quot; Section</h3>
<p>Here the author aims to refute another claim of some anti-AI people: that AI art is immoral because it is easy.</p>
<p>Setting aside the doubt of whether people actually make this argument in this manner, another good question could be:</p>
<ul>
<li><em>Are these people criticizing lazy individuals who use AI art for personal use as harshly as they criticize those who use it for personal profit? Do they equally despise the 10-year-old girl who Ghiblifies her friends for fun and the AI artist or company that sells Ghibli-style portraits to rich people?</em> (I think the answers would have a large bearing on how this debate is analyzed and just how strange or dumb it feels.)</li>
</ul>
<h3>&quot;The Slop Heap&quot; Section</h3>
<p>Next, we are told that another claim for the immorality of AI art is that it is missing something fundamental: it&#x27;s inauthentic. I&#x27;ve heard people say that AI artists are lazy and have even seen some attempt to shame AI artists with this argument. But I don&#x27;t know the answer to this question:</p>
<ul>
<li><em>Do a majority of those against AI art claim that being inauthentic is a key reason to ban it or to consider it immoral?</em></li>
</ul>
<p>Since the blog post doesn&#x27;t ask or answer this question, we don&#x27;t even know if this section is linked with the rest.</p>
<h3>&quot;It’s All Displacement&quot; Concluding Section</h3>
<p>The last three paragraphs of the Substack entry attempt to reframe the problem in a more meaningful, less strange light. The author wants to make a parallel between what&#x27;s going on in the art industry and what Freddie DeBoer wrote in a 2021 blog post about the displacement of the media industry. When Substack arrived, writes DeBoer, there was much backlash from people on &quot;Media Twitter,&quot; who claimed the way the platform made and distributed money was harmful to news outlets, journalists, and readers. But the real cause of these issues, we&#x27;re told, was the media industry&#x27;s own failings. Substack was the <em>solution</em> that would save journalists, giving them financial independence and full control over their own output.</p>
<p>These are some questions that come to mind based on this:</p>
<ul>
<li><em>Given that we have a few years of retrospection available, was DeBoer&#x27;s assessment of Substack vs. Media fair and accurate?</em></li>
<li><em>If the seismic shift caused by Substack ultimately made journalists&#x27; jobs more sustainable, will the even bigger shake-up of generative AI ultimately save traditional artists from the evils of their own industry?</em> (This would make the parallel explicit and justify bringing it up.)</li>
<li><em>How many people are against AI art in absolute moral terms, and how many are against AI art in the context of the current capitalistic forces that seem prone to squashing the already-precarious livelihood of human artists?</em> (An expansion on a question I asked before.)</li>
<li><em>Of those who are mainly worried about human artists starving to extinction, how many would still be against AI art if it were effectively regulated so that part of its profits were redirected to the artists, sufficient to make their work sustainable?</em></li>
<li>If the implication is instead that traditional artists will (unfortunately) have to go extinct to be replaced by powerful new AI art, this may or may not have a large impact on the production of new, entirely novel, and non-derivative works. <em>Might this prospect be what many people are worried about, instead of universal moral truths?</em> (In my opinion, this is the most important question, worthy of being tackled at the beginning of the post.)</li>
</ul>
<h2>TL;DR</h2>
<p>By analyzing &quot;The Strange Culture War Over AI Art,&quot; we have seen some ways in which a rational, intelligent person can sometimes undermine their own arguments. More constructively, we have seen some helpful thinking tools in action. Here is a brief roundup.</p>
<p>First, be explicit about what you&#x27;re really trying to argue for. If you want to be impartial, act impartial. If you want to be partial to one opinion, that is fine as long as you don&#x27;t try to pass as a neutral observer. Making the reader hunt for &quot;subsurface goals&quot; between the lines doesn&#x27;t bode well for your chances of convincing them.</p>
<p>Second, <del>always draw 4D tables of your framings</del> remember that you can make the most flawless logical argument possible, but if your basic assumptions are wrong or inadequate, it will all be in vain. To the extent possible, always prove—to yourself and to the reader—that you&#x27;re jumping off from the solid foundations of a suitable framing.</p>
<p>Third, ask yourself the hardest questions at every step of the process. Don&#x27;t just write what sounds reasonable: always wonder if there is more to it, seek out the possible objections and gradations, and probe your blind spots.</p>

<p>After all that, some readers might still be wondering what I think about the subject of AI art. I avoided saying anything about it in the text above because it is really beside the point. But it&#x27;s not like I want to keep it secret. For what it&#x27;s worth, here is a short version of it.</p>
<p><strong>Is there a debate important and compact enough to merit the term &quot;culture war&quot;?</strong> I didn&#x27;t see any convincing evidence for that, unless you call any instance of &quot;some people arguing on moral matters online&quot; a culture war. That&#x27;s a pretty low bar. That there is a debate—several debates in parallel—however, I have no doubts.</p>
<p><strong>Is that debate(s) or war strange?</strong> Only if you look at it through strange lenses.</p>
<p><strong>And is AI art &quot;morally permissible&quot;?</strong> I really don&#x27;t know yet. I&#x27;ve marveled at it and used it in some cases (you&#x27;ll find it in some of the early Plankton Valhalla essays), and I have also avoided relying on it too much as a substitute for anything more than personal and private fun. I suspect the answer will not be a yes/no dichotomy. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/is-there-a-strange-culture-war-over-ai-art/possessed-photography-U3sOwViXhkY-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Breathing Through Your Belly Button]]></title>
            <link>https://aethermug.com/posts/breathing-through-your-belly-button?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/breathing-through-your-belly-button</guid>
            <pubDate>Thu, 03 Apr 2025 00:13:56 GMT</pubDate>
            <description><![CDATA[Take a look at reality for a minute]]></description>
            <content:encoded><![CDATA[<p>When I was 14, I met a girl who couldn&#x27;t breathe if someone covered her navel. She often wore crop tops, exposing her waist. Resting a finger on her belly button—<em>ever so gently!</em>—was enough to make her gasp and push me away in panic. Even a flat hand hovering over her stomach made her uneasy. I thought it was silly.</p>
<p>The way my school friends would become ecstatic one Sunday, then cry in dejection the following one, all based on the number of times a synthetic-leather sphere had crossed two imaginary rectangles in a faraway stadium—and the way others hurt each other and vandalized things for the same reason—baffled me every time.</p>
<p>Those things don&#x27;t exist, I thought; they don&#x27;t really <em>happen!</em> They have no physical bearing on the people who think them: they could safely go on with their lives by ignoring them.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/breathing-through-your-belly-button/tyler.webp" alt="" title="Screenshot of a tweet by Tyler, the Creator saying &#x27;Hahahahahahaha How the fuck is cyber bullying real hahahaha Nigga just walk away from the screen like nigga close your eyes haha&#x27;"/></figure>
<p>Then I heard someone talk about &quot;First World problems,&quot; and it got me thinking. I could not sleep without the perfect pillow. I would leave shellfish on my plate, untouched and wasted because &quot;I didn&#x27;t like them.&quot; I was embarrassed to walk outside after spilling ice cream on my pants. Was I whining about fake, &quot;First World problems&quot;? Could I just ignore those things?</p>
<p>I found more things in my own life that didn&#x27;t exist, didn&#x27;t really <em>happen</em>: the pain of speaking in public, the terrible consequences of being late to an appointment with friends, the end of the world if I failed the next test. None of that was physically true, yet I felt and behaved as if it were.</p>
<p>Fiction, fiction everywhere! The money in every bank account is fiction: we only pretend it exists. The hurt I felt when a loved one was mean to me wasn&#x27;t &quot;real,&quot; either: words are harmless vibrations of the air. The project I worked on with my team was nothing more than text scattered over many files, yet we worked hard and argued as if this invisible thing called &quot;our product&quot; already existed and made demands of us. Relationships can&#x27;t be touched or seen, and neither can cultures, holidays, the lack or abundance of time. Religion, business, games, states, love, hate, and <a href="/posts/matsuri-s-grown-up-world">duty</a> are nothing more than mind-stuff, even though we (<em>mentally</em>) relate them to physical objects.</p>
<p>All these things are of the Tinkerbell kind: simply stop—all of us—believing in them, and they will disappear.</p>
<p>In my quest to find the Real around me, I found that nothing was: if I wanted to care only about the physical, I&#x27;d have to ignore the entire world.</p>
<blockquote>
<p>If men define situations as real, they are real in their consequences.</p>
<p>—W. I. and D. S. Thomas (the Thomas theorem)</p>
</blockquote>
<p>I decided not to ignore. It&#x27;s all in our heads—that&#x27;s <a href="/posts/living-in-a-real-world-acting-in-imaginary-ones">what makes it real</a>. Silly, perhaps, but <a href="/posts/you-impersonate-what-you-think-you-are">every person&#x27;s breath depends on it</a>. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/breathing-through-your-belly-button/kevin-et-laurianne-langlais-iMRlp_Ldxus-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Vibe Check UX]]></title>
            <link>https://aethermug.com/posts/vibe-check-ux?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/vibe-check-ux</guid>
            <pubDate>Fri, 28 Mar 2025 09:41:40 GMT</pubDate>
            <description><![CDATA[Design proposal for the no-brainer social media feature all non-animals deserve]]></description>
            <content:encoded><![CDATA[<p>A couple of years ago, I decided to turn off YouTube&#x27;s viewing history function on my account, the one that records my past searches and the videos I watched. The unexpected result was this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/image.webp" alt=""/></figure>
<p>My YouTube landing page is a blank void. There are no video suggestions for what I might like to watch next, no array of the most popular videos in my city, region, or country—nothing.</p>
<p>I didn&#x27;t mean to turn off the main feed of the platform. Sure, if they are really oblivious to what I watched recently, then they can&#x27;t make personalized suggestions. But did they really have no other ideas? I&#x27;m subscribed to many channels, for instance, and YouTube still keeps track of my likes and my saved videos, so those could be good starting points for the landing page feed. Some people view this all-or-nothing switch as a bully move from Google, a way to bulldozer-nudge users into turning their history feature back on. If nothing else, it is a strong statement on their part.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/9ow6oz.webp" alt=""/></figure>
<p>The thing is, <em>I loved the new landing page</em>. It&#x27;s the feature I never knew I needed. I have been happily living without a YouTube suggestion feed for years now. That big fat &quot;Your watch history is off&quot; is a serene, quiet greeting awaiting me whenever I open the app, a reminder that I am in control, that nothing will be pushed into my face without my explicit request. Thank you, Big Bully.</p>
<p>Let&#x27;s be fair, though. The idea of recommending new things that might interest the user is not evil in and of itself. Presumably, people go to social media and content platforms to discover new things. We want to be pleasantly surprised, and an automated, smart curator tailoring the content just for you, just to delight you every time, is an inviting prospect. So why did it feel so good for me to finally turn it off on YouTube?</p>
<p>The reason, of course, is that the feed was never pleasant for me. The signal was almost always drowned in a sea of noise. Irrelevant suggestions, slop, more of the same things I watched and didn&#x27;t particularly like before. It&#x27;s not that there aren&#x27;t many gems on the platform—the algorithm just doesn&#x27;t seem capable of surfacing them for me.</p>
<p>Crucially, the recommender system often does find things that might interest me, but it offers them when I&#x27;m looking for something else altogether. This can be mildly frustrating, like finding only salty food in the fridge when you&#x27;re looking for sweets, and then, when you&#x27;re finally in the mood for salty, somehow you only have sweets at hand.</p>
<p>Maybe it&#x27;s just me. Sometimes I feel like the various algorithms see me as this incomprehensible alien creature, inscrutable and unpredictable, and all they can do is desperately throw random things at me, hoping I&#x27;ll eventually display some patterns of behavior that they can make sense of. (Bad news, robots: I don&#x27;t think I&#x27;m heading in that direction at all.)</p>
<p>Shutting off the fire hose entirely is a rather extreme way to solve the recommendation quality issue. I can do that with YouTube because it&#x27;s easy enough to get suggestions on what to watch from other people or links on the internet. It&#x27;s not really feasible on other platforms like Instagram and BlueSky, though, where each entry is short and transitory. Turning those feeds off, even if it were possible, would mean removing most of the value of those platforms—and I think there <em>is</em> value in them, despite all the glaring issues.</p>
<p>Is there a middle ground, then, between an unpleasant torrent of noise and nothing at all?</p>
<p>I think there is, but it needs to be built into the platforms: <em>every time the user opens the app, ask them what kind of content they wish to see.</em> Not in detail, because they still want to be surprised, but in terms of the kind of vibe they&#x27;re looking for at the moment.</p>
<p>I&#x27;ll call this &quot;Vibe Check UX,&quot; and I&#x27;ll develop it as a rough but straightforward design idea below.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>The Problem</h2>
<p>Social media platforms and entertainment services are riddled with well-known problems—they&#x27;re too addictive, they&#x27;re unhealthy, they stunt the social growth of young people, they create false objects of comparison, they polarize opinions, <em>etcetera</em>. Yet, despite all that, I think everyone can agree that these services are here to stay. Sociality and recreation are core human instincts, and nowadays many people—whether it is good for them or not—get those instincts fulfilled mostly online. There is no going back, so the question is how to make them better.</p>
<p>It all hinges on the way recommendation algorithms are implemented. Currently, they consider some cleverly processed mixture of factors like:</p>
<ul>
<li>Demographic information</li>
<li>Which content the user reacted to and engaged with in the past</li>
<li>What group of users the user is most similar to</li>
<li>What the user posted recently</li>
</ul>
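<p>As a caricature, a recommender of this kind blends such behavioral signals into a single relevance score. The Python sketch below is purely hypothetical: the signal names and weights are invented for illustration, not taken from any real platform.</p>

```python
# Hypothetical sketch: blending behavioral signals into one relevance score.
# Signal names and weights are invented for illustration only.

def relevance_score(signals, weights=None):
    """Weighted sum of normalized behavioral signals (each in [0, 1])."""
    weights = weights or {
        "demographic_match": 0.1,  # fit with the user's demographic cluster
        "past_engagement":   0.4,  # engagement with similar content before
        "peer_similarity":   0.3,  # overlap with what similar users liked
        "recent_posts":      0.2,  # topical overlap with the user's own posts
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

score = relevance_score({"past_engagement": 0.9, "peer_similarity": 0.5})  # 0.51
```

<p>Note that every term in this formula is inferred from past behavior; nothing in it represents what the user wants right now, which is exactly the gap discussed next.</p>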
<p>These are all behavioral observations. In other words, they collect an enormous amount of data about your behavior, crunch it with powerful computers, and create a simulation of you to find the new stuff that is most likely to attract your attention. And judging by how rich and powerful the companies behind these algorithms have become, these simulations, more often than not, work.</p>
<p>But there is one thing that they all ignore when putting together your profile: what you&#x27;re thinking about and feeling <em>right now</em>. Some concrete examples from my own experience will help illustrate this point.</p>
<p>I follow many scientists on Bluesky and Twitter, and I enjoy reading their science-related posts... except when I don&#x27;t feel like it. Sometimes I want a break from thinking about science, but my feed is still full of it because of my past activity. Also, many of the scientists I follow are American, and lately they (understandably) post about American politics all the time—not something I am often looking for.</p>
<p>On Reddit, it is enough to open a post about knitting once to turn my feed into a string-work festival for days: crochet posts, embroidery posts, and macramé posts weave themselves into the fibers of my timeline. The platform kindly allows me to give feedback, and if I tell it I&#x27;m not interested in a quilting post, it will mute the quilting community. But it will keep recommending posts from the neighboring felting community, and the appliqué community, not to mention the bargello, lucet, mizuhiki, and quipu communities. Muting them all is more work than just scrolling past, so I end up feeling defeated and overwhelmed by the feed.</p>
<p>On Spotify, I often listen to relaxing music at certain times of the day when I need to wind down, and that seems to convince the algorithm that I&#x27;m a very sad and melancholy person. My home page, which Spotify claims is &quot;made just for me,&quot; is dominated by this kind of slow, soothing music. Half the time, though, I want to quickly put on an upbeat or experimental playlist instead—something to keep me awake or brighten my mood—but the app only has Satie, Norah Jones, and a playlist titled &quot;Rainy Day Jazz&quot; for me. I have to go and dig something up for myself, hoping that whatever I find won&#x27;t completely wipe out those calmer tunes from my personal page by the next time I want them.</p>
<p>Then there is Netflix. I will only say that, on any given day, I have only 1% interest in 99% of the shows they claim are a 100% match for me.</p>
<p>You probably see the pattern now. These companies spend millions on armies of brilliant engineers crafting complex AI models capable of profiling me in detail, but often simply asking me what I&#x27;m looking for would be enough to improve their suggestions by orders of magnitude—something a junior developer could build in a day.</p>
<h2>Now, Just Give Me What I Want Now</h2>
<p>The key observation here is that usually the algorithm isn&#x27;t completely off regarding what I might like <em>at some point</em>, only about what I might like <em>right now</em>. I&#x27;m not that alien to them, after all—just too volatile.</p>
<p>The Vibe Check UX I&#x27;m proposing simply asks what vibe you&#x27;re looking for when you open the app. You could answer in many ways:</p>
<ul>
<li>&quot;I want to know what my closer friends are up to.&quot;</li>
<li>&quot;I want to engage with my wider circle.&quot;</li>
<li>&quot;I need some comforting thoughts.&quot;</li>
<li>&quot;I want to know what people are saying about politics/sports/cooking today.&quot;</li>
<li>&quot;I want to escape reality for a while.&quot;</li>
<li>&quot;I want to catch up on gossip.&quot;</li>
<li>&quot;I want to discover different opinions.&quot;</li>
<li>&quot;I don&#x27;t want to think.&quot;</li>
</ul>
<p>This hypothetical UI wouldn&#x27;t ask for very specific topics, though. There is already the search feature for those, and chances are you want to be pleasantly surprised by the algorithm—the keyword here being &quot;pleasantly.&quot; The AI only needs to know what <em>kind</em> of content to prioritize for you, what <em>general mood</em> or <em>type of interaction</em> you&#x27;re hoping to engage in during that particular session.</p>
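<p>The idea can be sketched as a re-ranking pass: keep the existing behavioral score, but boost posts that match the session&#x27;s stated vibe. This is a toy illustration only—embed() below is a stand-in bag-of-words model where a real system would presumably use an LLM embedding, and the 0.7 weight is an arbitrary choice.</p>

```python
# Toy sketch of vibe-conditioned re-ranking: behavioral relevance is kept,
# but posts whose text matches the session's stated vibe get boosted.
# embed() is a stand-in bag-of-words; a real system would use an LLM embedding.
from collections import Counter
from math import sqrt

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rerank(posts, vibe, vibe_weight=0.7):
    """Sort posts by a mix of behavioral score and similarity to the stated vibe."""
    v = embed(vibe)
    return sorted(
        posts,
        key=lambda p: (1 - vibe_weight) * p["behavioral_score"]
                      + vibe_weight * cosine(embed(p["text"]), v),
        reverse=True,
    )

feed = [
    {"text": "election results analysis", "behavioral_score": 0.9},
    {"text": "relax with ocean waves", "behavioral_score": 0.4},
]
calm_first = rerank(feed, "relax and escape reality")
```

<p>With the vibe set, the calmer post outranks the politically engaging one despite its lower behavioral score; with vibe_weight at zero, the ordering falls back to pure behavioral ranking.</p>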
<p>The result is a feed that matches your immediate needs, allowing for more intense engagement. In other words, you get more of what you want, and the platform gets more of your engagement. It sounds like an easy win-win to me.</p>
<p>Here, a skeptical reader could and should ask: if it is so good for everyone, why hasn&#x27;t it been done before? Thank you, skeptical reader, that is a very good question!</p>
<p>My best guess is a mix of factors. First, this might have been technically challenging to build in the past. A general &quot;vibe&quot; is too vague and slippery a property to infer reliably with traditional algorithms. But the recent leaps in LLM and image recognition capabilities should be more than enough to bridge this capability gap.</p>
<p>Another likely possibility is that the analysts and growth engineers in the big corporations did consider the idea of asking for direct user feedback at some point... then decided not to do it.</p>
<p>Perhaps, even though users would be more engaged with the content on their timeline, they would also tend to become satisfied earlier, leading to a shorter total time on the platform. But is this really bad for the platform, considering the long-term view?</p>
<p>Or, perhaps—and this is the cynical and unavoidable doubt—they <em>want</em> people to be frustrated and angry while scrolling, because these emotions lead to more interactions and contagious network effects. A funny meme might elicit only a repost from you, but a single flame war will tempt you into thirty minutes of furious discussion.</p>
<p>Again, consider the long-term engagement effects: what if you could choose between two socials, one where you&#x27;re angry for two hours every day, and one where you&#x27;re satisfied for one hour? I suspect that most people would eventually ditch the former. This suspicion is shared by some researchers, too. For example, Agarwal et al. make a very similar point in <a href="https://dl.acm.org/doi/10.1145/3630106.3659004" rel="nofollow noopener noreferrer" target="_blank">this 2024 paper</a>, writing that &quot;users tend to return to a platform in the long run if it creates utility for them, while pure engagement-driven interactions may affect user return in the short term but will not have a lasting effect.&quot;</p>
<h2>Prior Art</h2>
<p>Before getting into the actual design of the Vibe Check UX, I&#x27;ll briefly review what has been done by others in the neighborhood of this idea (I won&#x27;t be offended if you skip this section, though).</p>
<p>I&#x27;m not the first to think seriously about using direct user feedback to inform algorithms. Scientists and designers have experimented with variants of this idea in the past, and it&#x27;s useful to look at them. Based on what I could find, none of those designs are quite like the one I&#x27;m proposing, though, for the reasons explained below.</p>
<p>In <a href="https://academic.oup.com/pnasnexus/article/4/3/pgaf062/8052060" rel="nofollow noopener noreferrer" target="_blank">a recent experiment</a>, Milli et al. studied how people feel about tweets chosen in three different ways: with the usual behavior-based algorithms, with the tweets picked in simple chronological order, and with tweets cherry-picked based on the users&#x27; explicit preferences. They found that the explicitly preferred tweets made the users happier and less angry, although they tended to be more conducive to echo chambers.</p>
<p>This study is interesting but doesn&#x27;t really tackle the question of real-time feedback. For each tweet shown, they asked the participants, &quot;When you use Twitter, do you want to be shown tweets like [@author-handle]’s tweet?&quot; which is different from &quot;What are you looking for in <em>this current session</em>?&quot; Vibe Check UX is about constantly re-tailoring the filter based on user-specified criteria.</p>
<p>There have been many attempts (and actual implementations) of real-time &quot;mood detection&quot; over the years. For example, there are patents for using the smartphone&#x27;s <a href="https://patents.google.com/patent/US20060143647A1/en" rel="nofollow noopener noreferrer" target="_blank">camera</a> to automatically infer if you&#x27;re more inclined toward classical or rock music, and patents to use the <a href="https://www.axios.com/2021/01/30/spotify-patent-users-speech-recommend-music" rel="nofollow noopener noreferrer" target="_blank">microphone</a> to interpret the tone of your voice and make content recommendations based on that. I&#x27;m ready to believe that these technologies can (somewhat) accurately label my biometric data as &quot;happy,&quot; &quot;sad,&quot; and other broad emotions, but how much nuance can they extract? When a sensory algorithm guesses that I need some cheering up, can it also determine whether I feel more like watching comedy or people dancing?</p>
<p>The product that seems to best agree with my argument—the need for real-time, explicit-but-fuzzy feedback from the user—is a recommendation algorithm called <a href="https://www.taranify.com/blog/power-of-netflix-mood-based-recommendation-ai-16" rel="nofollow noopener noreferrer" target="_blank">Taranify</a>, a cool side project of a single developer. Taranify is an AI recommendation system that tries to suggest the music, movies, books, and other media that best suit your <em>current</em> mood.</p>
<p>One of the blurbs on the landing page reads, &quot;Your Current Mood Matters More! Find what you enjoy now. Not just things similar to your played or listened history.&quot; I can stand behind that.</p>
<p>Taranify&#x27;s process works like this: you are <a href="https://www.taranify.com/what-movie-to-watch" rel="nofollow noopener noreferrer" target="_blank">shown</a> a grid of 10 colors and are asked to click them &quot;in the order you feel most to least drawn to at this moment.&quot; Based on that ranking, it does some LLM magic behind the scenes to infer your current mood and offer recommendations that, supposedly, align well with it.</p>
<p>This is the verdict after I tried:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/tanarify.webp" alt="" title="Screenshot of a page on the Taranify website saying &#x27;Your Mood Analysis&#x27;, with a needle indicator pointing to a value of about 40%, the words &#x27;quietly contemplative&#x27; in large letters, and below an explanation saying &#x27;I found 5 Spotify playlists, 6 Netflix shows, 6 Movies, and a Food cuisine for you to enjoy.&#x27;"/></figure>
<p>To be honest, the recommendations were way off for me. None of its suggestions enticed me to try those products. This is not to blame Taranify in particular, though. As I wrote earlier, the billion-dollar algorithms of Meta &amp; Co. don&#x27;t fare much better for me. I had no idea how to rank those colors in the first place because they all looked equally neutral to me—perhaps due to my mild <a href="https://en.wikipedia.org/wiki/Alexithymia" rel="nofollow noopener noreferrer" target="_blank">alexithymia</a>. I might simply be the wrong subject to try the service. Taranify is a small experiment, and I think it goes in the right direction in many ways, even if it ends up failing on the accuracy side. Especially laudable is the fact that it is completely devoid of tracking or even the need to register. I really hope more people experiment with privacy-friendly ways to recommend content.</p>
<p>Still, this is not what I&#x27;m looking for. I don&#x27;t believe mood will ever be a good way to find appropriate suggestions. It is too coarse and abstract most of the time, as can also be inferred from the fact that the Mood Analysis result in the screenshot above is represented as a one-dimensional gauge. Why make me jump through hoops like that, only to land on this vague estimate of my temper that could be compatible with a hundred different desires? Why not ask me directly—you know, with words?</p>
<h2>Let Us Own Our Gaps</h2>
<p>The main limitation of all these past attempts seems to be that they always treat people, in one way or another, as if they were speechless animals. They have tried basically everything they could imagine to guess what people want, <em>except</em> hearing it directly from them.</p>
<p>Reducing people to the subjects of ethological observations removes their agency as intelligent beings. Since all we are shown is based on what we did before, consciously or not, the best we can do to retain a modicum of control is to try to &quot;behave better&quot;: we might avoid clicking on a link for fear of being plagued with related ads or posts in the following days, or we might spend more time than we&#x27;d fancy liking and disliking stuff just to &quot;train the algorithm.&quot; And, to the extent that we <em>do</em> often make unconscious choices and errors, we are stuck with them and their consequences. The algorithms study us impassively as we stumble around in frustration.</p>
<p>Now, there is a well-known <a href="https://en.wikipedia.org/wiki/Value-action_gap" rel="nofollow noopener noreferrer" target="_blank">psychological phenomenon</a> called the &quot;intention-behavior gap&quot; or &quot;value-action gap,&quot; and it has been confirmed through many experiments. In brief, it&#x27;s the observation that people often say they believe one thing but proceed to act against that belief. The most well-studied instance of this cognitive error is the <a href="https://www.sciencedirect.com/science/article/abs/pii/S0959652606002472" rel="nofollow noopener noreferrer" target="_blank">beliefs around climate change</a>: many people <em>claim</em> to be worried about climate change but fail to <em>do</em> their part in fighting it. This gap is also well known in the world of product design. One of the first maxims I learned as a product manager was: &quot;Don&#x27;t ask the customer what they want; look at what they do.&quot; The reason was (supposedly) demonstrated by Henry Ford when he said, &quot;If I had asked people what they wanted, they would have said faster horses.&quot;</p>
<p>Humans <em>are</em> animals. Observing someone carefully for a while will tell you a lot about them, including things that they themselves are not aware of. As long as they are used ethically—a big &quot;if&quot;—algorithms that analyze user behavior are a good idea with a lot of potential. That&#x27;s not where the problem at hand lies.</p>
<p>The problem is assuming that those unconscious behaviors that the algorithms can partly anticipate are all there is to the person. The problem is assuming that those unconscious behaviors are even desirable for the users displaying them.</p>
<p>When I say, &quot;I don&#x27;t want to see this topic again,&quot; I actually mean it, even if I end up clicking it again the next time it comes up. When I think, &quot;I need some energetic and cheerful music,&quot; and you recommend me Debussy or lo-fi playlists based on Yesterday Me, I&#x27;m <em>not</em> going to listen to them. And even assuming I have this great inner contradiction where I actually want the opposite of what I think I want, <em>let me learn about it</em>. If you can see right through me, help <em>me</em> see through myself, too, so that I may perhaps find a way to reconcile!</p>
<figure><img src="https://aethermug.com/assets/posts/vibe-check-ux/stool.webp" alt="A drawing of a stool with three legs. One leg has the words &#x27;expert design&#x27; written on it, another leg has the words &#x27;User input&#x27;, and the third leg the words &#x27;behavioral analysis&#x27;. The seat of the stool has the word &#x27;recommender&#x27; written on it."/><figcaption>Diagram from the recorded presentation of Ekstrand and Willemsen (2016), about the proposed new paradigm of recommendation systems.</figcaption></figure>
<p>Behavioral data shows what you might do, not why. Did you click on that Reddit post because you love that subreddit&#x27;s topic, or is it because a friend posted it, your partner told you to, or you were doing some research that you have since abandoned? An observer is not going to guess it just from your actions, even if they know your mood. No amount of behavioral data is going to allow them to guess a human being&#x27;s real, conscious intentions. Search is the solution when the user knows exactly what they want, but often users arrive with vague but bounded needs.</p>
<p>We need a way to discover content that is based both on behavioral data and on our stated intents, as some researchers (like Ekstrand and Willemsen <a href="https://dl.acm.org/doi/10.1145/2959100.2959179" rel="nofollow noopener noreferrer" target="_blank">here</a>) have already suggested. The explicit desires of a user need not be the only criterion, but they should be the starting point from which the recommender system begins its clever arrangements.</p>
<h2>Vibe Check UX</h2>
<p>With that out of the way, here are the two key requirements for a Vibe Check UX:</p>
<ol>
<li><strong>Transience by default</strong>: Assume that the user&#x27;s goals for using the app will change every few hours, and sometimes every few seconds.</li>
<li><strong>Conscious control</strong>: The questions should focus on conscious needs and wants, not mood and tendencies.</li>
</ol>
<p>I&#x27;ll show this with a generic image-sharing app as an example, although the same approach can be applied to any other kind of recommendation system. Start with a behaviorally personalized feed like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/Frame%2026.webp" alt="" title="Stylized representation of a smartphone with a generic social media interface showing a feed of image posts. The post at the top shows scenes of street violence, perhaps during some protests."/></figure>
<p>Then ask what the user wants:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/Frame%2014.webp" alt="" title="The same generic social feed as before, but at the top of the feed, there is a card element asking &#x27;What are you looking for? Tell us the kind of experience you want, and we&#x27;ll tweak the timeline for you.&#x27; Next to the text is a button that says &#x27;Start&#x27;."/></figure>
<p>Here, it&#x27;s important not to create too much friction for the user. Adding something more distracting, like a popup window, might confuse and discourage them from using the app altogether. The question is clearly visible at the top, but if the user wants to ignore it, all they have to do is flick one finger to scroll down.</p>
<div class="col-start-1 col-end-1 mb-10"><video class="w-full h-auto" loop="" muted="" controls="" aria-label="A video demonstrating the user scrolling down through a social media feed, where a card element asking the user what they are looking for disappears outside the screen&#x27;s viewport."><source src="https://aethermug.com/assets/posts/vibe-check-ux/scroll_down.webm" type="video/webm"/><p>Your browser does not support the video tag.</p></video></div>
<p>Still, the Vibe Check function should always be conveniently accessible somewhere in the UI. In this prototype, it is represented by the &quot;I want...&quot; button in the bottom bar.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/Frame%2026b.webp" alt="" title="A detail of the bottom part of the main app screen, where five buttons are displayed. One of them is highlighted with an arrow and reads &#x27;I want...&#x27;."/></figure>
<p>When the user clicks on Start (or on the &quot;I want...&quot; button), a list of preset options appears. These should be chosen based on the choices the user made in the past or on choices popular with other users if there isn&#x27;t enough personal historical data yet. When they submit their choice, the user will see an updated timeline focused on the desired vibe.</p>
<div class="col-start-1 col-end-1 mb-10"><video class="w-full h-auto" loop="" muted="" controls="" aria-label="A video showing a cursor clicking on the Start button next to the Vibe Check question. A list appears with the title &#x27;I want to...&#x27; and the following options: &#x27;relax and escape reality&#x27;, &#x27;know what people say about politics today&#x27;, &#x27;know what my closer friends are up to&#x27;, &#x27;engage with my wider circle&#x27;, and others. Each option has a checkbox next to it. There are two buttons under the list: &#x27;Custom&#x27; and &#x27;Save&#x27;. The user checks the box next to &#x27;relax and escape reality&#x27;, then clicks on &#x27;Save&#x27;. The list disappears, and a temporary notification says &#x27;Sure! We&#x27;ll show you more posts that might help you relax and escape reality!&#x27; with a small link saying &#x27;change this setting&#x27;. The feed now shows different posts: they are about meditation, smiling faces, and a pretty flower."><source src="https://aethermug.com/assets/posts/vibe-check-ux/main_flow.webm" type="video/webm"/><p>Your browser does not support the video tag.</p></video></div>
<p>But the best thing for the user, of course, is to specify exactly what they want, even if it isn&#x27;t on the list yet. The LLM will handle the rest.</p>
<div class="col-start-1 col-end-1 mb-10"><video class="w-full h-auto" loop="" muted="" controls="" aria-label="Another video. Starting again from the initial unfiltered feed, the user presses &#x27;Start&#x27;, but instead of checking any of the boxes, clicks &#x27;Custom&#x27; next, and enters the words &#x27;I want puppies and babies&#x27;. After the user confirms, the option &#x27;puppies and babies&#x27; appears in the list with its checkbox already checked. The user clicks on &#x27;Save&#x27;, and the feed is now updated to show pictures of puppies and babies."><source src="https://aethermug.com/assets/posts/vibe-check-ux/customization_flow.webm" type="video/webm"/><p>Your browser does not support the video tag.</p></video></div>
<p>Remember that this algorithm isn&#x27;t meant to function as a search engine. Even when given a specific request like &quot;puppies and babies,&quot; it might—and probably should—expand a little on that, for example, by showing penguin chicks and other things within the same category of cuteness. The user will enjoy being surprised, as long as the vibe stays on topic.</p>
<p>There may not be an infinite number of recent posts with the desired vibe, though. What should the algorithm do when it exhausts the appropriate material? The ideal option is to simply tell the user to take a break from scrolling. The evil option is to quietly begin feeding them more and more off-topic stuff, plunging them back into the usual frustrating experience.</p>
<p>The more realistic option, I think, is to conduct another vibe check.</p>
<div class="col-start-1 col-end-1 mb-10"><video class="w-full h-auto" loop="" muted="" controls="" aria-label="Another video. The user is scrolling through the timeline with the &#x27;relax and escape reality&#x27; vibe, but eventually reaches the bottom, where a checkmark icon is followed by another question card component, saying &#x27;Do you want to look for something else?&#x27;, followed by the same message and Start button."><source src="https://aethermug.com/assets/posts/vibe-check-ux/reaching_the_bottom.webm" type="video/webm"/><p>Your browser does not support the video tag.</p></video></div>
<p>How can we mitigate the risk, as suggested by the Milli et al. paper, of creating even stronger echo chambers with this UX? Favor more generic keywords, like &quot;relax,&quot; &quot;follow sports,&quot; &quot;banter,&quot; and &quot;politics,&quot; and block the opinion-related ones, like &quot;Democrats,&quot; &quot;criticism of XYZ,&quot; and so on. This should help prevent people from diving too deeply into their own bubbles, although how well it works ultimately depends on the underlying algorithm.</p>
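<p>A minimal sketch of that mitigation, assuming a hand-made list of opinion markers (both the marker list and the examples are invented; a real implementation would need a proper classifier rather than substring matching):</p>

```python
# Toy filter: accept broad, mood-level vibes, reject opinion-targeted ones.
# The marker list is an invented example, not a real moderation taxonomy.

OPINION_MARKERS = ("democrats", "republicans", "criticism of", "against ", "pro-")

def is_allowed_vibe(vibe):
    """Return True for generic vibes, False for opinion-specific ones."""
    v = vibe.lower().strip()
    return not any(marker in v for marker in OPINION_MARKERS)

allowed = is_allowed_vibe("politics")          # broad topic: passes through
blocked = is_allowed_vibe("criticism of XYZ")  # opinion-targeted: rejected
```

<p>The point of the design choice is that &quot;politics&quot; as a broad topic is allowed while a request aimed at a specific camp is not, keeping the vibe layer from becoming a bubble-deepening tool.</p>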
<h2>Conclusions</h2>
<p>You could develop these ideas further with more features: the ability to make certain choices permanent, to mix and modulate these wants like a DJ, and to set predefined schedules, to name just a few. But the quick and dirty sketch above should be enough to show the simple core of the Vibe Check UX concept.</p>
<p>Again, this is not meant to replace the existing algorithms based on behavioral data, but to inform them. As much as I would love to see the engagement industry simply drop its obsession with making people scroll endlessly, realistically speaking, none of the big corporations is going to do that.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/vibe-check-ux/goggles_meme.webp" alt="" title="A meme with the text &#x27;1997: Don&#x27;t sit too close to the TV, you will damage your eyes&#x27;, then &#x27;2024:&#x27; followed by an image of a woman wearing augmented-reality goggles."/></figure>
<p>Short of China-style regulatory interventions that radically limit this dynamic, we have to work within the system. So, if many users are going to remain glued to their phones, we might as well treat them more humanely, like the intelligent agents that they are, and make their time on the platforms more worthwhile. I&#x27;m not claiming that this is the best way, in the long term, for people to engage with social media; it is simply a feasible improvement. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/vibe-check-ux/Frame%2028.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[An Aphantasic's Observations on the Imagination of Shapes]]></title>
            <link>https://aethermug.com/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes</guid>
            <pubDate>Thu, 20 Mar 2025 02:00:16 GMT</pubDate>
            <description><![CDATA[Log entry of a scientific test subject]]></description>
            <content:encoded><![CDATA[<p><em>This post is part of the <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>.</em></p>
<p><em>Premise: I have <a href="https://aethermug.com/posts/aphantasia" rel="nofollow noopener noreferrer" target="_blank">aphantasia</a>, and I&#x27;ve been <a href="https://aethermug.com/posts/boxed" rel="nofollow noopener noreferrer" target="_blank">participating</a> in fMRI (functional magnetic resonance imaging) experiments at a local university for a couple of years. This allows the researchers to record the specific activation patterns in people&#x27;s brains, and to train AI models to re-convert them back into images.</em></p>

<h2>Log: A March Afternoon, 2025</h2>
<p>I had another fMRI experiment with Dr. O today, the type where I hear the name of a geometrical shape in my earphones, then &quot;picture&quot; it on an empty square canvas shown to me on a monitor. I was shown a set of nine shapes before the procedure started, but during the experiment the canvas is empty. I&#x27;m supposed to make it materialize in my inner eye, replicating as closely as possible the shapes I saw before.</p>
<p>Of course, because of my aphantasia, I can&#x27;t really &quot;picture&quot; anything. Studying what my brain does and doesn&#x27;t do when I try is the whole point of this research.</p>
<p>The shapes are:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/allshapes.webp" alt="" title="Nine versions of the previous screen, each with a different black shape inside the faint square outline: a circle, a square, an equilateral triangle, a plus sign, an X sign, a horizontal line segment, a line segment inclined 45 degrees, a vertical line segment, a line segment inclined 45 degrees the other way."/></figure>
<p>During the experiment, the screen is empty. It looks like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/blankcanvas.webp" alt="The tiny dot in the center is the fixation target, which is always present and which the test participants are required to stare at for the duration of the experiment." title="A gray rectangle where the only features are a faint square outline to indicate the canvas and a small dot with a cross on it in the very center."/><figcaption>The tiny dot in the center is the fixation target, which is always present and which the test participants are required to stare at for the duration of the experiment.</figcaption></figure>
<p>First, I hear the name of a shape (&quot;circle,&quot; &quot;45-degree line,&quot; &quot;cross,&quot; etc.), then I&#x27;m given a few seconds to make the magic happen: I stare at the little dot in the middle and project the corresponding shape onto the canvas with my imagination—that is, I <em>try</em>. As in the six or so previous sessions of this experiment, I had to do a full (randomized) 9-shape cycle twelve times. Since each cycle takes a little over three minutes, the whole thing lasted a little less than one hour.</p>
<p>The first few times I did this, I needed to focus intensely. From the very start, I could confirm once more that my inner eye doesn&#x27;t work: I see no shapes at all. But I could still &quot;pretend&quot; that the shapes were there, that perhaps they were <em>about to appear</em> in certain predefined regions of the canvas, so that&#x27;s what I&#x27;ve been doing during these several hours of non-picturing inside the fMRI machine. Although I could never really &quot;see&quot; anything mentally (or perhaps because of that), I had to struggle to keep my mind from wandering, which would invalidate the whole experiment.</p>
<p>With time, though, I&#x27;ve grown accustomed to this task enough to be able to do it without special strain on my concentration. &quot;Pretending&quot; there is a specific invisible shape in a given part of my field of vision has become almost automatic for me, and this has freed up some of my attention to notice the finer details of my experience. That is what I&#x27;ll describe below.</p>
<h2>Invisible Halos</h2>
<p>According to Dr. O, my brain data while doing this task shows some slight predictability (this is still unpublished research). His specially trained artificial neural network creates image representations of what I&#x27;m supposedly &quot;picturing,&quot; and these images do have a rough resemblance to the actual shapes I am supposed to be imagining. So, when the vocal prompt is to imagine a vertical line, the neural network outputs an image that resembles an &quot;I&quot; shape, only much more... bulbous and riddled with artifacts. The resemblance is weaker or absent for more complex shapes like the square, but even in those cases, the generated image is clearly different from a simple straight line.</p>
<p>In other words, in theory Dr. O could pull off a true—if slightly low-key—mind-reading exercise with me: I imagine one of those nine shapes without telling him what it is and, with the aid of his electronic brain, he would be able to guess the correct one with imperfect but significantly better-than-chance accuracy. This is very interesting, although it doesn&#x27;t surprise me very much. It seems to indicate that there are <em>some</em> stable brain activation patterns linked with the idea of each shape, even though those patterns do not create in me a conscious sense of &quot;seeing&quot; with my mind&#x27;s eye. What are those patterns doing, then?</p>
<p>The best way I can express what happens subjectively when I try to project a shape onto an empty canvas is &quot;halos of attention.&quot; I don&#x27;t <em>see</em> anything, in any common sense of the word—there are no contours, no filling, no colors, no connected patterns in my field of view—but I <em>know</em> that certain parts of the canvas are more important than others at any given time, and that can feel similar to seeing. It&#x27;s as if those regions of the canvas are more &quot;active,&quot; more alive than the others.</p>
<p>If I&#x27;m trying to imagine an &quot;I&quot; shape in the middle of the canvas, for example, I can &quot;sense&quot; a vertical channel in my vision that passes through the center. Again, this channel doesn&#x27;t have boundaries or any kind of solidity, but it does have something resembling a basic shape: it&#x27;s vertical, i.e. it doesn&#x27;t extend in the left-right direction.</p>
<figure><img src="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/verticalhalo.webp" alt="The canvas screen with a very faint band of what seems like soft reddish light running vertically through the center."/><figcaption>My &#x27;sense&#x27; of a vertical line on the canvas. Rather than a visible effect, it&#x27;s an awareness that my attention is focused there. If you think you don&#x27;t see anything, try looking for a couple of seconds at the fixation target dot.</figcaption></figure>
<p>I call them &quot;halos&quot; to emphasize that they are fuzzy and in some way intangible, but that term might still lead people to think about the sight of &quot;objects&quot; in the traditional sense, whether illusory or not. It doesn&#x27;t feel like that. If anything, they are the <em>lack</em> of something—voids waiting to be filled. Another way to describe them would be as &quot;layer masks,&quot; like those you can create in photo-editing programs to apply a visual effect to selected, arbitrarily shaped parts of an image. The masks themselves are invisible; it&#x27;s the effect that you add through them that becomes visible.</p>
<p>When I try to imagine a vertical line segment, the vertical &quot;channel&quot; I &quot;sense&quot; is not a finite segment but seems to stretch vertically without end. I have tried many times to limit its length so that it fits inside the square canvas, as in the sample image, but I have never succeeded. The channel runs all the way off the canvas, cutting through my entire field of vision vertically. It&#x27;s as if the &quot;layer mask&quot; comes in a preset shape that I can&#x27;t modify—an archetype for all vertical lines rather than the representation of a specific line of a given length or width.</p>
<figure><img src="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/crosshalo.webp" alt="The canvas screen with four very faint blobs in the square outline, positioned to the northeast, southeast, southwest, and northwest of the central dot."/><figcaption>When I try to picture the X symbol inside the square canvas, the attention halos are arranged roughly like this.</figcaption></figure>
<p>The same halo/layer mask phenomenon happens to me for horizontal and diagonal lines, although the diagonal ones seem to be fainter and do not appear to extend indefinitely like the up-down and left-right ones. For other shapes, I&#x27;m able to conjure roughly globular masks within the boundaries of the canvas, but they are vaguer and &quot;disjointed.&quot; By this, I mean that they don&#x27;t form the shapes I want them to form but create separate masks around the salient features of each shape. For example, when I try to picture a triangle, I can form a mask with three very faint globular halos where the angles should be. For the X shape, I see four such halos. In other words, those masks only track the angles and tips of the shapes, not the lines connecting them.</p>
<figure><img src="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/circlehalo.webp" alt="The canvas screen with a very faint, roughly globular shape around the center of the square outline."/><figcaption>The mask I get when trying to imagine a circle or a square. In truth, this area never stays still but wobbles and pulsates with the strength of my attention.</figcaption></figure>
<p>For some reason, the square shape is the hardest for me to &quot;sense&quot; on the canvas. It ends up as a vague blob at the center, almost indistinguishable from the blob of the circle, no matter how much I strive to add four angles to it. I find it strange that it doesn&#x27;t resemble the halos of the X shape, which should be positioned roughly in the same parts of the canvas.</p>
<h2>What Even <em>Is</em> a Mental Image?</h2>
<p><a href="https://www.sciencedirect.com/science/article/pii/S0042698911001544" rel="nofollow noopener noreferrer" target="_blank">Many</a> <a href="https://www.cell.com/neuron/fulltext/S0896-6273(00)80448-1" rel="nofollow noopener noreferrer" target="_blank">scientific</a> <a href="https://www.science.org/doi/abs/10.1126/science.2296719" rel="nofollow noopener noreferrer" target="_blank">experiments</a> have shown that people can &quot;prime&quot; their minds and eyes to spot certain things more quickly. This is sometimes called <em>selective attention</em> and works by focusing on specific visual features before they actually appear in front of your eyes: if you crouch over a lawn with the intention of finding four-leaf clovers, you&#x27;re more likely to find four-leaf clovers than if you crouched to look at the grass without any goal in mind. I wonder if this is what&#x27;s happening when I try to imagine a shape appearing on a blank slate. I might be creating &quot;slots&quot; for the desired shapes to appear in.</p>
<p>I&#x27;m sure this description of my experience will leave some people unsatisfied. It really sounds like I&#x27;m <em>seeing</em> something in my mind, after all. Whether I call it a halo or a layer mask, <a href="/posts/human-language-is-not-a-network">I&#x27;m forced to use visual metaphors</a>. Probably my drawings above don&#x27;t help to dispel that suspicion, since they&#x27;re (barely) visible.</p>
<p>Am I really aphantasic, then? Or am I, perhaps, hypophantasic, meaning that my mental imagery is faint but not totally absent?</p>
<p>Well, I can&#x27;t tell for sure without taking one of the objective tests that some researchers have developed, but none of them has been standardized for clinical diagnosis yet. Still, I think I have good reasons to believe I truly have total aphantasia.</p>
<p>I wrote that I &quot;sense&quot; channels and blobs located on the canvas, and that these seem to be more active and alive compared to the background. These <em>are</em> localized differences in specific parts of my visual field, so someone could argue that they count as &quot;seeing mentally&quot; under some definitions. But the real question is whether these are <em>mental images</em> in the scientific sense or not. Here is why I don&#x27;t think they are.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>First, these differences in my visual experience are <em>extremely faint</em>—barely noticeable when I concentrate for a long time on a completely empty and flat gray surface and all but invisible in any real-life situation. In fact, they are so dim that it took me several cumulative hours of focused staring during these experiments to even notice that those halos exist at all! If this is hypophantasia, it&#x27;s indistinguishable from aphantasia in any situation outside the lab.</p>
<p>Second, these halos only work for the simplest of shapes, like circles and lines, and quickly become fainter and more muddled when attempted for slightly more complex shapes like triangles and squares. There is no way I can create meaningful pictures, like people or buildings, with this approach.</p>
<p>Third, although they do seem to roughly reflect the shapes I&#x27;m trying to imagine, they are almost impossible for me to manipulate mentally. I can&#x27;t make the lines shorter or rotate the triangles at will. They&#x27;re a bit like non-lucid dreams, where my mind creates all the events, but I can&#x27;t consciously control them.</p>
<p>Fourth, and most telling, is the fact that all these halos or masks instantly disappear the moment I shift my gaze even slightly. During the experiments, I&#x27;m required to always stare straight at a little dot target at the center of the canvas, so that my field of vision is fixed and my neural activation patterns can be compared from one set of fMRI recordings to the next. On rare occasions, however, I distractedly shift my eyes away from the target for a fraction of a second. In these cases, the halos vanish and take a few seconds to reappear after I&#x27;ve realigned my line of sight with the target.</p>
<figure><img src="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/20041117183611.webp" alt="Picture of a smiling woman holding a glass frame with a piece of burnt toast inside it."/><figcaption>Diane Duyser, the woman famous for having seen the face of the Virgin Mary on a grilled cheese sandwich. (After 10 years of devoutly storing it, she tried to sell it through an eBay auction, but eBay shut it down.) Photo: AP.</figcaption></figure>
<p>What does that mean? Again, I can&#x27;t know for sure, but to me this is a strong indication that the halos are artifacts of the neural networks in my brain tasked with interpreting what I see, rather than direct inner representations of concepts. I believe I&#x27;m experiencing something akin to <em>pareidolia</em>, the natural human tendency to see faces and objects in places where they really aren&#x27;t. Pareidolia is what happens when you see a face in a natural rock formation, an animal in a cloud, or a monster in a Rorschach (inkblot) test. When you expect to see something—either consciously or unconsciously—you might end up seeing it even when it isn&#x27;t there, a false positive connecting dots that shouldn&#x27;t really be connected.</p>
<p>However, pareidolia needs some substrate to work on, like a rock with a complex shape. In my case, although the halos appear while looking at a blank gray area of the screen, my field of view is not empty or static at all. Everyone first notices this as a child: when you&#x27;re staring at a featureless space, whether it be the underside of your closed eyelids, the complete darkness of a cave, or simply a cloudless sky, all sorts of hazy shapes seem to spring to life all over your field of vision—little moving lights, shifting shapes like drapes or tiny lava-lamp blobs, and so on. These, of course, are not objects flying <em>in front</em> of your eyes but rather objects and noise <em>on</em> and <em>inside</em> them.</p>
<p>There are many kinds of these visual phenomena known to science, like <a href="https://en.wikipedia.org/wiki/Afterimage" rel="nofollow noopener noreferrer" target="_blank">afterimages</a>, <a href="https://en.wikipedia.org/wiki/Closed-eye_hallucination" rel="nofollow noopener noreferrer" target="_blank">closed-eye hallucinations</a>, <a href="https://en.wikipedia.org/wiki/Phosphene" rel="nofollow noopener noreferrer" target="_blank">phosphenes</a>, <a href="https://en.wikipedia.org/wiki/Floater" rel="nofollow noopener noreferrer" target="_blank">floaters</a>, and <a href="https://en.wikipedia.org/wiki/Blue_field_entoptic_phenomenon" rel="nofollow noopener noreferrer" target="_blank">blue-field entoptic phenomena</a>. Within certain limits, all of these visual artifacts are normal and always present in humans, but they usually go unnoticed in most daily-life situations because of how faint they are. Some of these are caused by noise or glitches in the neural processes associated with vision, but they are not mental images.</p>
<p>There you go. The halos I described are just as faint and ephemeral as that visual noise. In fact, it feels like they are made of the same substance.</p>
<p>Here is my current best guess for what those halos or layer masks are: they are pareidolia or &quot;wishful seeing&quot; that arises when I intensely try to see something that isn&#x27;t there. Instead of the usual rocks and inkblots, this kind of pareidolia would use those evanescent visual artifacts in my eyes to <em>almost</em> convince me that there is indeed something like a line or a circle to be seen there.</p>
<p>If I move my gaze, however, the illusion vanishes, because my visual networks can&#x27;t keep lying to themselves: there is no line or corner there. Since aphantasia is the inability to form <em>voluntary</em> mental images, it doesn&#x27;t prevent optical illusions and hallucinations, even, I think, when they are induced by the desire to see specific shapes.</p>
<h2>There&#x27;s an Elephant in this Room</h2>
<p>I glossed over an important point in this entry: how was I able to create those drawings I showed above, if I don&#x27;t see those shapes in my mind?</p>
<p>I&#x27;m very confident that those drawings are accurate down to the finer details like the shade of gray and the relative sizes (although I did make the canvas and shapes larger than they really are relative to the monitor, for ease of viewing). What gives me that confidence, given that I can&#x27;t imagine them visually?</p>
<p>This is a pretty huge question, and I have some hypotheses about it, too. But I need to do some more research and introspection work to sort out my ideas. I&#x27;ll try to write about this in the future. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/an-aphantasic-s-observations-on-the-imagination-of-shapes/jadon-barnes-XRCO-SSiaL4-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Interview on People and Blogs]]></title>
            <link>https://aethermug.com/posts/metapost-interview-up-on-people-blogs?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/metapost-interview-up-on-people-blogs</guid>
            <pubDate>Tue, 11 Mar 2025 23:51:23 GMT</pubDate>
            <description><![CDATA[A Small Web meeting place]]></description>
            <content:encoded><![CDATA[<p>TL;DR: see <a href="#interview-link">the last paragraph</a>.</p>
<p>Once upon a time the web was an open field with blogs as its flowers. Most were bland and short-lived flowers, but some were rare beauties gifted to the world. Then, like a layer of synthetic grass smothering the meadow-life, the Attention Grabbers lured everyone into their controlled and monetized platforms. People bemoaned the Death of the Blog. But blogs, of course, never really died out. The flowers found the cracks between the plastic grass-patches to see the light of day again: you just need to know where to look.</p>
<p>The Big Bloggers, with hundreds of thousands of followers, are easy to find. They&#x27;re popular enough to find their way onto social media platforms quite regularly. But there is a constellation of small, yet-unknown blogs, each of which might or might not be a real beauty of an orchid. It&#x27;s a quiet, cozy part of the internet, which some call the Small Web.</p>
<p>(If you want to have some fun wandering around the Small Web garden, I recommend this tool called <a href="https://kagi.com/smallweb/" rel="nofollow noopener noreferrer" target="_blank">Kagi Small Web</a>, which lets you jump from one random blog to the next in the style of the now-defunct <a href="https://en.wikipedia.org/w/index.php?title=StumbleUpon" rel="nofollow noopener noreferrer" target="_blank">StumbleUpon</a>. The current website is boring? Click on &quot;Next Post&quot;. I find it to be a charming initiative.)</p>
<p>Long story short, I was interviewed as a proud member of that Small Web on the website <em><a href="https://peopleandblogs.com/" rel="nofollow noopener noreferrer" target="_blank">People and Blogs</a></em>, run by <a href="https://manuelmoreale.com/" rel="nofollow noopener noreferrer" target="_blank">Manuel Moreale</a>. Every week, he asks the same set of (meta-)questions to a different blogger: who they are, how they started their blog, and how they run it in practice, among other things. It&#x27;s a simple premise, but the more people contribute (we&#x27;re at 80 and counting!), the more value it gains, I believe. In case you, dear reader, are still reluctant to start your own blog because you don&#x27;t know why and how to start, <em>People and Blogs</em> is a good place to clear up your doubts. It&#x27;s also a good way to find bloggers you might want to follow.</p>
<p>I&#x27;m a bit late sharing this. <span id="interview-link">My interview was published <a href="https://manuelmoreale.com/pb-marco-giancotti" rel="nofollow noopener noreferrer" target="_blank">on this page</a></span> at the end of February. My answers there are different from the usual stuff I write on Aether Mug, so they might be more interesting to you, or less. To find out, click on the link! ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/metapost-interview-up-on-people-blogs/meadow_sisley.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Living in a Real World, Acting in Imaginary Ones]]></title>
            <link>https://aethermug.com/posts/living-in-a-real-world-acting-in-imaginary-ones?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/living-in-a-real-world-acting-in-imaginary-ones</guid>
            <pubDate>Thu, 06 Mar 2025 06:50:51 GMT</pubDate>
            <description><![CDATA[On our Virtual Physics and Tunnel Vision]]></description>
            <content:encoded><![CDATA[<h2>I</h2>
<p>When <a href="/posts/matsuri-s-grown-up-world">the story of Matsuri Takahashi&#x27;s <em>karoshi</em> (&quot;fatal overwork&quot;) suicide</a> became public, around 2017, it hit me harder than I would have thought. I recognized it as a tragic, extreme, and all-too-natural extension of the social dynamics that I saw every day around me, as I was immersed in Japanese work culture.</p>
<p>I felt some of those same forces at play on me and on the people around me, although I was very lucky to join welcoming, open-minded workplaces, where no one tried to actively shame me into working longer and harder, and people were kind and polite regardless of hierarchy. Even there, many of my Japanese colleagues worked until late at night every day, didn&#x27;t use any of their paid leave, and seemed to be able to think about nothing other than work. Although I didn&#x27;t think it would affect me personally, the topic fascinated me. Sometimes it terrified me.</p>
<p>Why didn&#x27;t Matsuri simply quit her job? When her seniors bullied her and made her feel worse than useless, why didn&#x27;t she just walk away? In her tweets, she sometimes mused about quitting and suing her company. She even mentioned having signed up for a job search consultancy: wasn&#x27;t that the ray of hope she needed to break out of the hellish environment she had stumbled into?</p>
<p>Did Matsuri Takahashi have to die? Of course not—but she clearly believed she did.</p>
<p>When I wondered about these things, I did so not with condescension, but with apprehension. She had been a young, extremely bright person, with an education that would have opened the doors of any job for her, who nevertheless fell into an overpowering death spiral defying all reason and rebellion, apparently set up from the beginning to lead her to killing herself. What if that spiral caught me, or a loved one, too?</p>
<figure><img src="https://aethermug.com/assets/posts/living-in-a-real-world-acting-in-imaginary-ones/elti-meshau-7nenxEGmEok-unsplash.webp" alt="The inside of a tight, featureless tunnel with flat concrete walls that goes on as far as the eye can see, with no exit in sight."/><figcaption>Photo by Elti Meshau, Unsplash</figcaption></figure>
<p>Around the time these thoughts were disturbing me, I started dating someone. I liked her very much, but her profile and situation also alarmed me: Japanese, in the same age group as Matsuri Takahashi, and herself trapped in a job she hated, with abusive, male chauvinist bosses who constantly gaslighted her. The first time we talked, she admitted she had been thinking about quitting for years already but never seemed to be able to do it.</p>
<p>In other words, she lived in the same world Matsuri Takahashi had lived in. The questions I had been wondering about became more relevant than ever for me. I didn&#x27;t want to see my partner slide any deeper into the spiral.</p>
<p>The usual ways we approach this problem are meaningful. The direct causes of <em>karoshi</em> in Japan, and of much workplace suffering all over the world, are the gaslighting and other mental manipulations perpetrated by heartless jerks for their selfish purposes. Broader causes are the cultural backgrounds and various systemic dehumanization traps, especially strong in places like Japan and South Korea but present everywhere else, too. And depression and other mental disorders, which can magnify these problems, too often go undiagnosed and untreated. It is important and useful to discuss these problems and to act at these levels: they should not be left unchecked.</p>
<p>But others have explicated these issues better than I could. My own question, the one I want to personally explore and fully understand, is this: how can all that apparently unnecessary suffering even be possible?</p>
<p>The root enabling cause of all that seems to be in the very nature of human consciousness, and to affect us much more often than those extreme cases of self-destructive stress. I think it&#x27;s a problem of Virtual Physics and Tunnel Vision.</p>
<h2>II (Virtual Physics)</h2>
<p>As a thought experiment, imagine I completely lost my ability to make predictions about reality. Suddenly, I have no idea what to expect from any of my actions or from external events. Will the sun rise tomorrow? No idea. Will a stone fall up or down, or left or right, when I let it go? I just can&#x27;t tell. If I give a command to my arm and hand to bend and scratch my nose, will I actually scratch my nose or will a unicorn spring out of nowhere and stab me to death? The only way to know is to try. Even if I do manage to scratch my nose, this won&#x27;t help me next time I wonder the same thing, because this sickness prevents me from learning anything from experience.</p>
<p>I would not survive for long in this scenario. Pursuing projects and achieving goals would be out of the question. Most likely I would starve myself to death, because I can&#x27;t imagine which actions would actually lead me to food. I might even suffocate myself by ceasing to breathe, because I can neither predict that asphyxiation is the outcome, nor that resuming breathing again is the simple solution.</p>
<p>Making predictions about what might happen in the future is one of the most fundamental functions of our minds. As Donella Meadows said, &quot;everything we think we know about the world is a model&quot;. After all, we think and &quot;experience&quot; with our brains, pale lumps of jelly hidden in the pitch-black insides of thick boxes made of bone. We can&#x27;t and don&#x27;t think about the world directly: we refer to streams of highly fragmented, distorted, and contradictory data from our sensory organs to form mental representations of the world in our minds, and then we manipulate those mental objects to make guesses about the future. Those approximate representations are what I call &quot;black boxes&quot;, and the way we segment the world into distinct things is what I call &quot;framings&quot;. I wrote about these things <a href="/posts/a-framing-and-model-about-framings-and-models">here</a> and <a href="/posts/a-black-box-view-of-life">here</a>.</p>
<p>This is not solipsism, a belief that the real world doesn&#x27;t exist and &quot;the world is an illusion&quot;. The real world is out there, all right. We get hints and measurements of it through our senses, and we use them to form and update our framings and models. But <em>we think, decide, and act in those simulated worlds that exist only in our heads</em>.</p>
<p>This inner simulation works very much like a video game does. Some entities or objects are defined, and they behave following prescribed rules. When Mario jumps, he falls back down. When you attack an innocent shopkeeper in an RPG, the AI-driven city guards will retaliate by attacking you. These are worlds with rules and predictable &quot;if-this-then-that&quot; chains that you need to learn and navigate to exercise your free will, which happens in the parts of the game that remain unconstrained.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Just as video games have &quot;physics engines&quot; to power their virtual physics, our brains run a Virtual Physics to predict the world.</p>
<p>Psychologists and philosophers talk about &quot;naive&quot; or &quot;folk&quot; physics to indicate the intuitive, uneducated understanding that every human being has of the laws of physics. They also talk about &quot;folk biology&quot;, for our intuitive, uneducated understanding of the organic world, and also &quot;folk linguistics&quot;, &quot;folk psychology&quot;, and so on (each of these has its own Wikipedia page, and all are categorized under <a href="https://en.wikipedia.org/wiki/Folk_science" rel="nofollow noopener noreferrer" target="_blank">folk science</a>).</p>
<p>While you might need different tools and backgrounds to fully study how these intuitions work in people, they must be the result of the same kind of process, the same &quot;engine&quot;, under the hood: neurons buzzing each other to simulate things, abstract black boxes interacting with each other following <em>some</em> laws—accurate or not. For this reason, I will refer to all of these &quot;folk sciences&quot; as Virtual Physics.</p>
<p>Models are always abstractions of the thing they model. Every belief and snippet of knowledge, then, is Virtual Physics, regardless of its layer of abstraction: physical, social, psychological, video-game, fairy tale—it makes no difference. If you give a name to something and convince yourself that it will tend to do certain things and not to do other things, and that it &quot;has&quot; certain distinguishing properties, then it becomes just another object (black box) in your simulation, a pawn in your mental game of predicting the future as accurately as possible.</p>
<p>Notice how we&#x27;re not required to understand how those black boxes work internally. Education, and scientific progress in general, gives us detailed explanations of the workings of things, but those only help us refine our predictions: they don&#x27;t fundamentally change how we think about things.</p>
<p>This extremely multi-purpose approach to simulation, supporting anything from physical laws to social and psychological cause-and-effect, allows us to make predictions about basically anything, and even to immerse ourselves in worlds that could be, and impossible worlds that will never exist.</p>
<h2>III (The Limits of Virtual Physics)</h2>
<p>This black-box function of our brain is <em>amazing</em>. It&#x27;s an extremely powerful solution that has evolved and expanded over time to make living things more and more effective at staying alive. Whenever competition arises, those who can &quot;see&quot; further into the future have an enormous strategic advantage over the more mentally myopic. In fact, I think this might be <em>the only</em> viable solution for truly multi-purpose future prediction, even from a theoretical point of view. The only alternative would be to somehow become <a href="https://en.wikipedia.org/w/index.php?title=Laplace&#x27;s_demon" rel="nofollow noopener noreferrer" target="_blank">Laplace&#x27;s Demons</a>, all-knowing creatures with enough memory to contain the whole universe and enough attention to track every quantum of energy at the same time.</p>
<p>Unfortunately, these amazing models that keep us alive are just that: well-meaning caricatures of reality. In order to be useful, they <em>have</em> to be simplifications, approximations, linearizations, and, to some extent, misunderstandings of the actual systems they refer to. The art is not to use the &quot;right&quot; models—which don&#x27;t exist—but merely to find models that are &quot;good enough for their job&quot;.</p>
<p>I wrote above that we improve and refine our models by learning. Your &quot;naive physics&quot; knowledge of gravity, for example, is enhanced or replaced by what the physics professor teaches you, and that allows you to predict—mentally or with pen and paper—the future possible behavior of more things: instead of just knowing that an unsupported stone will fall downwards, you can predict where the stone will fall if thrown at a certain velocity and direction, and what orbits it will take if thrown in outer space.</p>
<p>But there is a limit to how much we can cram into our intuition, even with education. You might have learned the rough principles of car engines, but you&#x27;re not thinking about them every time you press the accelerator. Unless you have special reasons to focus on them (e.g., when your car stops in the middle of the highway) you still treat the car as a black box: &quot;if I press this pedal, the car goes faster.&quot;</p>
<p>In other words, even when we do obtain better mental models through education, our brains seem to be doing constant work of greedy simplification in order to streamline our thoughts. This must be another evolutionary trick, and another double-edged sword.</p>
<h2>IV (Tunnel Vision)</h2>
<figure><img src="https://aethermug.com/assets/posts/living-in-a-real-world-acting-in-imaginary-ones/anseric-soete-bBm2PbNvRmE-unsplash.webp" alt="The inside of a narrow brick corridor, wide enough for only a couple of people to walk in, with nothing but blackness in the distance."/><figcaption>Photo by Anseric Soete, Unsplash</figcaption></figure>
<p>Problems come up when we forget that our decisions and actions happen entirely in a virtual, partial, uncertain world. Instead of seeing our mental models as probabilistic guesses of what <em>might</em> happen, sometimes we make the mistake of seeing them as immovable truths.</p>
<p>Some of our mental simulations are so ingrained in our minds that their predictions begin to feel inevitable or impossible, uncertain outcomes ossifying into absolute laws that (we think) cannot be escaped. Whether caused by common sense, social pressure, superstition, or simply ill-informed models, this is a trap we fall into all the time. I don&#x27;t know of a perfect word for this phenomenon, so let&#x27;s call it Tunnel Vision.</p>
<p>This use of &quot;Tunnel Vision&quot; shouldn&#x27;t be confused with the literal tunnel vision of one&#x27;s eyes, or the psychological <a href="https://en.wikipedia.org/wiki/Tunnel_vision_(metaphor)?useskin=vector" rel="nofollow noopener noreferrer" target="_blank">tunnel vision linked with confirmation biases</a>. The simple definition of the Tunnel Vision I&#x27;m talking about is</p>
<blockquote>
<p>you&#x27;re trapped because you believe you&#x27;re trapped.</p>
</blockquote>
<p>Look at the sentences I highlighted in black in <a href="/posts/matsuri-s-grown-up-world">the post about Matsuri Takahashi</a>. See how, despite viscerally <em>hating</em> her oppressive workplace, she also firmly believed that her situation was normal: &quot;<em>this is what society is like.</em>&quot; Even when she tries to joke about it, she can&#x27;t help telling us what she believes to be the rule of life:</p>
<blockquote>
<p>When you can&#x27;t tell if you&#x27;re working to live or living to work, <strong>that&#x27;s when real life begins.</strong></p>
<p>— Tweeted at 8:22 PM, November 3, 2015, emphasis mine
Her most telling metaphor must be the one she tweeted on December 8, 2015, less than three weeks before her death, when she compared herself to &quot;a salmon swimming upstream to spawn&quot;. What she is saying—what, perhaps, she is hoping to exorcise by framing it as a joke—is that such a painful life annihilated by work, fear, and humiliation is her biological destiny, the natural behavior written in her DNA.</p>
</blockquote>
<p>Reading her resigned remark, &quot;no wonder people die from overwork,&quot; chills me to the bone every time.</p>
<p>In Matsuri&#x27;s Virtual Physics, even death was a more natural outcome than quitting her job. Her Tunnel Vision was such that running away from her workplace was merely a fairy tale, one she should be ashamed of contemplating.</p>
<p>Was it her fault then? Was Matsuri Takahashi responsible for &quot;forgetting&quot; that she had better options? Of course not. While no one has a life of toil and abuse written in their DNA, we all have to live with the limits of Virtual Physics. Tunnel Vision pervades humanity, affecting all of us in serious and in mundane circumstances.</p>
<p>Tunnel Vision is at work when strongmen oppress and subjugate groups of people who don&#x27;t rebel despite having the numbers to do so successfully.</p>
<p>It&#x27;s there when family members manage to bully other family members for whole lifetimes.</p>
<p>I see it often in Japan, when my colleagues tell me they &quot;can&#x27;t&quot; use even half of their contractual paid leave. &quot;Why not?&quot; I ask them. &quot;I just can&#x27;t,&quot; they reply. And when, like my partner back in 2017, they claim to have been dreaming of working for a non-abusive company for a long time, yet have been &quot;unable&quot; to take the necessary steps for the better part of a decade.</p>
<figure><img src="https://aethermug.com/assets/posts/living-in-a-real-world-acting-in-imaginary-ones/ranurte-DIZKKHJm6Fc-unsplash.webp" alt="The inside of a wide underground corridor that continues straight for a distance, before being interrupted by a wall."/><figcaption>Photo by Ranurte, Unsplash</figcaption></figure>
<p>I see it also in the befuddling cases of <a href="https://www.findzebra.com/details/A05QP8g-list-of-mass-hysteria-cases" rel="nofollow noopener noreferrer" target="_blank">mass hysteria</a>, in the deep-rooted racism and discrimination against minorities, and in the demonization of &quot;the enemy&quot;.</p>
<p>It&#x27;s often in the eyes of those who give up, saying, &quot;I&#x27;m naturally bad at it&quot;; of those who miss out when rare opportunities present themselves, passing on them not because they don&#x27;t ardently wish to take them, but out of shyness, or &quot;it would be rude to ask...&quot;, or &quot;they would never consider someone with my background...&quot;</p>
<p>I see it, of course, in myself, when I realize I was upset and riled up over a minor thing as if it were a giant moral issue; and when I can&#x27;t seem to stop thinking about work for days at a time.</p>
<p>How often we decide that the world <em>is certainly, clearly</em> like the shaky simulation we&#x27;ve contrived on impulse, based on precious little evidence!</p>
<h2>V (Escaping Habit)</h2>
<p>When Tunnel Vision grips you, your world becomes smaller, simpler, claustrophobia-inducing. You forget that there is a vast and varied universe around us, an infinity of options and opportunities. You see the gray walls of your room and mistake them for the horizon. Tunnel Vision means that your Virtual Physics is over-constrained. It makes you something less than you were. And it locks out the notion that you are not really <em>forced</em> to be in the situation you&#x27;re in. This is what causes the bad kind of stress: being caught in a world too tight and narrow for you.</p>
<p>When does this kind of Tunnel Vision happen? It&#x27;s not only a matter of losing track of the Big Picture and of using oversimplified models of the world, because those are not always bad things. Being in a state of &quot;flow&quot;, hyper-concentrated on a familiar task, works by &quot;forgetting the world&quot;, too, and it&#x27;s usually a good thing. All is fine if you&#x27;re able to zoom in and out of your hyper-focus at will. Put another way, it&#x27;s not Tunnel Vision if you can &quot;open the black box&quot; and re-expand your framings to something more realistic when necessary. Being focused is great if you&#x27;re able to snap out of it once you&#x27;re done.</p>
<p>Tunnel Vision only happens <em>when mental automatisms replace the conscious assessment of what is possible here and now, and then prevent you from going back</em>.</p>
<p>It happens, for example, when the habit of thinking &quot;I should work until 3 AM, otherwise my boss will bitterly humiliate me&quot; crystallizes into &quot;I should work until 3 AM because that&#x27;s what work is like,&quot; and when &quot;I haven&#x27;t been able to understand these topics yet&quot; degrades into &quot;I&#x27;m inherently too stupid for this,&quot; and when &quot;if she doesn&#x27;t wash the dishes today, the kitchen will be a mess&quot; becomes &quot;if she doesn&#x27;t wash the dishes, she&#x27;s being mean to me.&quot;</p>
<p>Our natural and useful tendency to simplify our models (&quot;pushing the pedal makes the car go faster&quot;) becomes a liability when we get so used to them that we lose the capacity to un-simplify them. In most cases, this seems to be caused by the mind-numbing repetition of the same task, by gaslighting from our peers, by being stuck in the same restricted environment for a long time, or by some combination of these. In other words, any situation in which some part of the mind is induced to switch off for long periods at a time.</p>
<p>The way out is not to convince ourselves that Virtual Physics is wrong, only that <em>it can always be wrong</em>. It&#x27;s also <em>not</em> about realizing that Virtual Physics is &quot;fake&quot;, &quot;just&quot; a simulation: it&#x27;s as real as it gets; if it weren&#x27;t, society wouldn&#x27;t function, and Matsuri Takahashi would still be alive. I&#x27;m saying we need to remind ourselves and each other that this is how we work as human beings, and this, sometimes, is how we die.</p>
<p>Every now and then, simply remembering that you might be stuck in a virtual tunnel of despair is enough to crawl out of it with your own strength and save yourself. Often, though, you&#x27;re just too deep in to even remember that you can be saved. That is why we need to remind each other, break the spell of harmful social and cultural conventions, call out the mind-manipulators, talk about it.</p>
<h2>Epilogue</h2>
<figure><img src="https://aethermug.com/assets/posts/living-in-a-real-world-acting-in-imaginary-ones/samuel-jeronimo-CLc3G8HEooM-unsplash.webp" alt="The end of a stone tunnel, facing the open sea and a clear evening sky."/><figcaption>Photo by Samuel Jerónimo, Unsplash</figcaption></figure>
<p>My partner, in 2017, was caught in the same world that tragically crushed Matsuri Takahashi. I could see that she was unable to fathom how <em>any</em> job could be less painful than her current detestable job—the only one she had ever experienced. The idea of resigning provided no ray of hope for her, only fear of retaliation.</p>
<p>So I told her. I told her over and over, for months, that fun workplaces do, indeed, exist. I told her that &quot;real life&quot; was not what she thought: there are companies where all colleagues respect you and value you and won&#x27;t judge you for taking care of yourself, for enjoying your life beyond work. I pestered her with the idea that any brief pain and abuse that would surely be unleashed on her when she announced her decision to quit would be small compared to the years of daily grinding away at her mental health.</p>
<p>It took a while for her to break out of the spell and take action, and the action was painful as expected, but she did it with her own strength. She left and never looked back. I&#x27;m happy to see that she now lives in a world where joy and optimism are possible, even probable. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/living-in-a-real-world-acting-in-imaginary-ones/miguel-bruna--oz89jDfR4M-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Matsuri's Grown-Up World]]></title>
            <link>https://aethermug.com/posts/matsuri-s-grown-up-world?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/matsuri-s-grown-up-world</guid>
            <pubDate>Thu, 27 Feb 2025 02:05:56 GMT</pubDate>
            <description><![CDATA["The difficulty level will increase even more with the new season."]]></description>
            <content:encoded><![CDATA[<p>April 2015: Matsuri Takahashi, fresh out of the most prestigious university in Japan, begins her first job at Dentsu, a major advertising company.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/matsuri-s-grown-up-world/matsuri.webp" alt="Takahashi on her graduation day in March 2015. Photo by her mother, Yukimi Takahashi." title="Picture of a young woman smiling with her graduation certificate in her hands. She is standing next to the bronze statue of a man playing with a dog, located in one of the campuses of the University of Tokyo."/><figcaption>Takahashi on her graduation day in March 2015. Photo by her mother, Yukimi Takahashi.</figcaption></figure>
<p>These are some of her tweets from the following months (all emphasis mine).</p>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:41 PM, May 30, 2015 (Saturday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Love weekends so much</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:39 AM, September 9 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Leaving work now, I&#x27;m wrecked</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:41 AM, September 9 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2"><strong>A proper junior employee should virtuously hang on until the last train of the day...</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 8:22 AM, September 26 (Saturday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">This is too intense, it&#x27;s getting way too painful...</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 9:59 AM, October 1 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">While new hires are attending their job offer ceremony with dreams of a bright future, second-year employees are ironically at the clinic on the same floor for health checkups as &quot;people requiring management.&quot; New hires, <strong>this is what society is like.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 9:11 PM, October 2 (Friday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">It&#x27;s so hard, I think being a new employee isn&#x27;t fun at all as I go to work, come home on the last train, sleep without removing my makeup, wake up without even being able to talk to friends, every day. But it was really healing when I went to Ginza with a coworker from my cohort for a small work errand, and we walked and said &quot;I can smell osmanthus flowers, autumn is nice.&quot;</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 6:13 PM, October 3 (Saturday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Working is just too painful.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:35 PM, October 4 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Dear God, I don&#x27;t want to go to work.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:39 PM, October 4 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I was so full of life during my student days, and now look what&#x27;s happened to me. &quot;<strong>A member of society</strong>&quot;...</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:55 PM, October 4 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">&quot;You&#x27;ve worked hard for six months, now you don&#x27;t have to work anymore&quot; — I wish some rich 50-year-old divorced man who thinks like that (balding is fine) would propose to me <strong>so I could become a housewife and live like a pet</strong>. Even though I graduated from Tokyo University.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 10:08 PM, October 6 (Tuesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Are there any new employees who don&#x27;t get mentally ill? I can&#x27;t tell...</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:30 PM, October 7 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I want to meet at a small bistro around Higashi-Ginza around 9 PM, drink well-chilled white wine while complaining &quot;Work is tough—I hate it,&quot; and have a kind, smart boyfriend who nods and says &quot;I know&quot; while giving advice and a bit of scolding, but also says &quot;Worst case, <span class="bg-gradient-to-b from-[#209440] to-[#203060] bg-clip-text text-transparent inline font-semibold whitespace-normal">it&#x27;s okay to quit</span>.&quot; Since I have no other goal than to be proposed to by such a person, I have nothing to tell the department head during my evaluation interview.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:31 PM, October 7 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">It&#x27;s still only Wednesday?</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:20 PM, October 12 (Monday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2"><strong>Came to work on a holiday thinking I was being diligent</strong>, but 60% of our department was already there. No wonder people die from overwork.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 1:28 AM, October 13 (Tuesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I&#x27;m going to take a bath now and then head to the office ( ･ᴗ･̥̥̥ ) death</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:03 AM, October 15 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">When everyone else is leaving at 4 AM or pulling all-nighters, there&#x27;s an atmosphere where <strong>new employees can&#x27;t say they&#x27;re sleepy or tired.</strong> I&#x27;ll be the type who feels guilty about escaping even during a fire or earthquake, and will die clinging to my PC until the end.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 3:28 AM, October 20 (Tuesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Cause of death: &quot;tough love&quot; from the boss</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 3:41 AM, October 21 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Made it home...</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 5:24 PM, October 21 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">State of emergency #1 has been declared, <span class="bg-gradient-to-b from-[#209440] to-[#203060] bg-clip-text text-transparent inline font-semibold whitespace-normal">registered on a job search site</span>.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:21 PM, October 22 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Recently I&#x27;ve been working 19 hours straight, lunch is either at my desk from a convenience store or I skip it altogether ( ´- ̥̥̥ω- ̥̥̥` )</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:26 PM, October 22 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I&#x27;m tired of only talking to people from work every day...</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 4:36 PM, October 25 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Even with the worst feelings after breaking up with my boyfriend, <strong>it&#x27;s comforting to know that endless work awaits me at the office.</strong><br/><br/><strong>Good thing I have such a tough job.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 8:56 PM, October 28 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I was told the document I made over two consecutive all-nighters was completely useless, but then he said if I fix it we might take it to the client, and <strong>now I&#x27;m motivated enough to work on weekends. I have an abundance of corporate slave talent.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:00 AM, October 29 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Thanks to overtime pay, my 7th month salary is 1.5 times my starting salary! 🎊 <strong>Overwhelming growth</strong> 🙌🏻😊💕</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 9:48 PM, October 29 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2"><strong>I broke the village-like company rule that &quot;the seniority wall is deeper than the ocean&quot;</strong> by speaking too casually to a slightly senior colleague and incurred their anger, damaging my mental state again.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 7:58 AM, October 31 (Saturday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Department head: &quot;Your 20 hours of overtime is a waste for the company,&quot; &quot;Looking sleepy during meetings shows you can&#x27;t manage yourself,&quot; &quot;Don&#x27;t come to work with messy hair and bloodshot eyes,&quot; &quot;If you&#x27;re struggling with the current workload, your capacity is too small.&quot;<br/><br/>Me: &quot;<strong>Bloodshot eyes aren&#x27;t allowed either?</strong>&quot;</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 10:53 PM, October 31 (Saturday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I mean, even first-class professionals can&#x27;t control their blood vessel constriction.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 3:23 PM, November 1 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I don&#x27;t want to go to work. I wish we had two-day weekends (&gt;&lt;)</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 8:22 PM, November 3 (Tuesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">When you can&#x27;t tell if you&#x27;re working to live or living to work, <strong>that&#x27;s when real life begins.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:59 PM, November 5 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I&#x27;m even using Twitter to collect evidence for <span class="bg-gradient-to-b from-[#209440] to-[#203060] bg-clip-text text-transparent inline font-semibold whitespace-normal">when I sue the company after quitting</span>.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:58 PM, November 12 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I thought I could keep going, but I broke down <strong>unexpectedly early and now I hate myself.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:34 AM, November 13 (Friday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2"><strong>Miraculous return to society. Phew</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 1:12 AM, November 15 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">What I want to tell job-hunting students is that <strong>work is not like fun activities or part-time jobs—it&#x27;s &quot;labor&quot; that continues for life, and if it doesn&#x27;t suit you, your mental and physical strength may wear down every day.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 8:32 AM, November 18 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I absolutely hate waking up in the morning.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 7:20 AM, November 20 (Friday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I managed to wake up again today...<br/>
<strong>The difficulty level will increase</strong> even more with the new season.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 7:21 AM, December 3 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2"><strong>It&#x27;s not that I don&#x27;t want to work</strong>, I just don&#x27;t want to wake up in the morning.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 8:42 PM, December 8 (Tuesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Shit! Hey, I&#x27;m heading to the office <strong>like a salmon swimming upstream to spawn</strong>!</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 4:06 AM, December 9 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I&#x27;m heading home now, but no matter how I calculate it, there&#x27;s not enough time?<br/>
When I have to do things I&#x27;m not good at, <strong>I become way too inefficient</strong>...</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 5:39 AM, December 11 (Friday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Heading home now (T ^ T)</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 3:10 PM, December 12 (Saturday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Since my whole cohort is coming to work on Saturday, <strong>I can go to work happily!</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:40 AM, December 14 (Monday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2"><strong>I hope I can get married and</strong> <span class="bg-gradient-to-b from-[#209440] to-[#203060] bg-clip-text text-transparent inline font-semibold whitespace-normal">quit my job</span> as soon as possible.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 11:18 PM, December 15 (Tuesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">A &quot;good job&quot; from my favorite senior - <strong>priceless</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 9:41 PM, December 16 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Getting to leave before 10 PM... <strong>it&#x27;s a miracle</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 9:48 PM, December 16 (Wednesday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">I wonder what will remain after overcoming such stressful days while constantly wanting to die.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 12:04 AM, December 17 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">My eyes are dying, my heart is dying, and I even thought I might be better off dead, so today I was considering who would be the best lineup to CC on my suicide note email.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 7:37 PM, December 17 (Thursday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Senior~! Please stop praising me properly when I do well and gently scolding me when I mess up~(&gt;&lt;) Stop becoming my only ray of hope in these painful days~(&gt;&lt;) At this rate I might fall for you~(&gt;&lt;)</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 4:01 AM, December 18 (Friday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">When you&#x27;re at the company for 20 hours a day, you start to lose track of what you&#x27;re even living for <strong>and just laugh</strong>.</div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 3:55 PM, December 20 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Even though I finally get to see my boyfriend after 4 months, it&#x27;s incredibly stressful having to finish work super quickly for that and <strong>feeling pressured not to complain. Being a working adult means never being able to let your guard down for a moment.</strong></div></div>
<div class="col-start-1 col-end-1 mx-4 md:mx-auto my-4 max-w-[540px] xl:w-[540px] rounded-xl bg-white p-6 shadow-sm"><div class="text-sm text-[#657786] mb-4">Tweeted at 5:54 PM, December 20 (Sunday)</div><div class="text-base leading-relaxed text-[#14171a] pb-2">Being told by male bosses that I lack femininity or whatever, even if it&#x27;s meant as a joke to get laughs, is the limit of my patience. It&#x27;s unfair that men don&#x27;t get told they lack masculinity when they go bald. So depressing.</div></div>
<p>On the morning of Christmas Day, 2015, Matsuri Takahashi threw herself from her company dormitory and died at the age of 24. She had been working there for 9 months, logging an estimated 130 hours of overtime per month.</p>
<p>According to her mother, Matsuri was well aware that excessive work could lead to death, <strong>but she still couldn&#x27;t quit her job.</strong></p>
<p>(I started this post with the intention of making a point about something, but rereading this story broke my heart all over again and it&#x27;s difficult to pick up from here. I think it&#x27;s enough for today.) ●</p>

<p>Note: Matsuri Takahashi&#x27;s Twitter account in Japanese can be found <a href="https://x.com/matsuririri" rel="nofollow noopener noreferrer" target="_blank">here</a>.</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/matsuri-s-grown-up-world/ian-keefe-LOUCuFYqkwE-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Rationality Fails at the Edge]]></title>
            <link>https://aethermug.com/posts/rationality-fails-at-the-edge?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/rationality-fails-at-the-edge</guid>
            <pubDate>Thu, 20 Feb 2025 05:53:08 GMT</pubDate>
            <description><![CDATA[Assumption-digging as ritual]]></description>
            <content:encoded><![CDATA[<p>You can spawn whole new mathematical worlds by shifting the axioms of a theory even a little bit. This happened, for example, in the mid-19th century, when some daring mathematicians asked themselves: what happens if I try to flip one of the foundations of Euclid&#x27;s time-tested geometry (the one children learn in school) upside down?</p>
<p>Surprisingly, it works. Gauss wrote about it in a private letter in 1824:</p>
<blockquote>
<p>The assumption that (in a triangle) the sum of the three angles is less than 180° leads to a curious geometry, quite different from ours, but thoroughly consistent, which I have developed to my entire satisfaction.</p>
</blockquote>
<p>Although Gauss never published these results, Nikolai Lobachevsky later developed and published the same ideas, and Bernhard Riemann extended and deepened them. The result was non-Euclidean geometry, a radically different, but equally valid, way to reason about space. Although it was completely novel (and rather startling) at the time of its inception, today non-Euclidean geometry is indispensable for physical theories like general relativity, and by extension for applications like GPS, astrodynamics, celestial mechanics, the study of black holes, cosmology, and more.</p>
<p>How can both Euclidean geometry and a theory that subverts its core tenets be right? That&#x27;s a false dichotomy: they&#x27;re both &quot;right&quot;, because in mathematics all that matters is whether a theory correctly follows from the given assumptions. As Einstein famously said, “as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”</p>
<p>In math, truth—at least in the colloquial sense of the word—doesn&#x27;t exist: you either have correct or incorrect statements based on other statements you have previously accepted.</p>
<p>(It is true that Gödel&#x27;s theorems showed that even mathematical systems have limits in how much they can correctly state, but we don&#x27;t need to go there today. My goal here is to consider the limitations of rationality in real-world settings.)</p>
<p>Outside mathematics things work a bit differently. We recognize a statement as &quot;true&quot; only when it is both correct (logical) <em>and</em> anchored to some objective reality. It&#x27;s not enough, as inhabitants of the real world, to say that B logically follows from A: for that information to be useful we also need to be sure that A is a good starting point in the first place. Unfortunately, we tend to focus on the B part much more than on A, even though a rational argument that is flawlessly self-consistent and logical can—and often will—be wrong because of its flawed implicit assumptions. Rationality fails at the edge.</p>
<h2>Scientific Prestidigitation</h2>
<p>Many magic tricks rely on the fact that rationality fails at the edge. They are surprising because the audience believes that the tools and laws of physics involved are different from what they really are, and this leads them to the wrong <em>logical</em> expectations.</p>
<figure class="rehype-figure"><img src="https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExajAxb2xmZHRnNDJyY3RkZzNrOWY0MjZyZ2RzMGF6d20yNW8zMDBnbiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/JOXzH8WcOgR8PLBqtm/giphy.gif" alt="Those are rings, yes, but they&#x27;re not closed as you might normally assume a ring to be. If the trick surprises you, your mistake is not in expecting that two normal rings shouldn&#x27;t be able to cross each other—that is a logical and correct expectation—but in thinking that those are two normal rings." title="Animation of a girl performing a trick where she starts with two linked metal rings and pulls them away as if they passed through one another."/><figcaption>Those are rings, yes, but they&#x27;re not closed as you might normally assume a ring to be. If the trick surprises you, your mistake is not in expecting that two normal rings shouldn&#x27;t be able to cross each other—that is a logical and correct expectation—but in thinking that those are two normal rings.</figcaption></figure>
<p>In many intellectual arguments, we play the equivalent of magic tricks on each other without even realizing it. We take some things for granted as starting points of our logical steps, and don&#x27;t feel compelled to voice them. Since they&#x27;re obvious starting points for us, we assume they must be obvious to our counterpart as well. Because these assumptions stay implicit, they are hard to notice and compare. Confusion and frustration ensue.</p>
<p>I see this all the time in arguments between highly rational, educated, and well-meaning people. Out of these, let&#x27;s start with the scientists.</p>
<p>Of course, there are cases when even PhDs draw illogical or unjustified conclusions from the evidence—i.e. their arguments themselves are irrational—but exposing those mistakes is relatively easy. The problem is when both sides of a scientific debate are rigorous <em>and correctly apply their reason to the same evidence</em>... except, they start from different implicit assumptions.</p>
<p>Here are three examples of current debates in science (summarized by Perplexity&#x27;s Deep Research model). Even if some of the terms are unfamiliar to you, the <em>nature</em> of these contrasting arguments should still come through.</p>
<blockquote>
<p><strong>Viral Origins of Complex Life Debate</strong></p>
<ul>
<li><strong>Description</strong>: Evolutionary biologists debate whether viral genetic material played a decisive role in eukaryotic cell evolution.</li>
<li><strong>Contention</strong>: Proponents cite homology between viral fusion proteins and eukaryotic membrane systems. Critics argue horizontal gene transfer evidence remains circumstantial.</li>
</ul>
</blockquote>
<blockquote>
<p><strong>Dark Matter vs Modified Gravity Debate</strong></p>
<ul>
<li><strong>Description</strong>: Astrophysicists disagree on whether the observed gravitational effects in galaxies are caused by invisible dark matter or require adjustments to Newtonian dynamics (MOND).</li>
<li><strong>Contention</strong>: Proponents of dark matter cite gravitational lensing and cosmic microwave background data as evidence. Critics counter that MOND successfully predicts galaxy-scale phenomena without requiring fine-tuned dark matter distributions.</li>
</ul>
</blockquote>
<blockquote>
<p><strong>CRISPR Off-Target Effects Debate</strong></p>
<ul>
<li><strong>Description</strong>: While CRISPR gene editing shows therapeutic promise, researchers disagree on the clinical significance of off-target DNA modifications. Disputes center on detection methods and long-term safety.</li>
<li><strong>Contention</strong>: Some labs report high-fidelity editing with minimal off-targets using advanced algorithms like GUIDE-seq. Critics argue current assays miss structural variants and epigenetic changes.</li>
</ul>
</blockquote>
<p>See what&#x27;s going on? Every time, the evidence is open for all to see, but the proponents and critics of a theory give it different weights. The reason is that they start from different perspectives: which kinds of evidence count as more authoritative, how elegant an explanation is &quot;supposed&quot; to be, and even how much work and reputation they have already invested in their own hypotheses.</p>
<p>Often, the experimental evidence is simply not enough to determine which assumptions are best. In these cases, the debate stems from that fundamental vagueness. Then, the scientists&#x27; job is to try to uncover more evidence, not to convince each other with the brute force of their logic.</p>
<p>I don&#x27;t mean to say that scientific debates are useless and should always be avoided. There are times when the assumptions are shared, and the thing that is unclear is which line of reasoning should be applied. In these cases, a debate can be a powerful way to sort things out. What seems silly and harmful to me is keeping a dispute going for decades and getting nasty about it. When one side calls the other &quot;intellectual fraud&quot; and is called by the other &quot;fundamentalists,&quot; as was the case in the long-running <a href="https://blog.oup.com/2015/01/kin-group-selection-controversy/" rel="nofollow noopener noreferrer" target="_blank">unit of selection debate</a> in evolutionary biology, you know that something has gone awry.</p>
<h2>The Messier Cases</h2>
<p>By and large, edge-failure remains under control in scientific circles, and eventually, things do get sorted out. But the same thing happens regularly outside of academia, too, and that&#x27;s where it can cause endless trouble.</p>
<p>In most daily conversations, which don&#x27;t usually hinge on big complex logical arguments, everything is fine. It becomes an issue in slightly more intellectual disputes, on topics like ethics, business and military strategy, technological predictions (some AI doomsayers are very rational people), and—hmmm... all political ideology?</p>
<p>This is where we are at a constant risk of doing all our logical steps perfectly well and still failing at the edge without realizing it.</p>
<p>Now, the kinds of assumptions hiding under an argument will vary from case to case, from philosophical and religious beliefs to value judgments, from degrees of selfishness to political interests and so on, but one major assumption that is always present in every explanation is <em>its purpose</em>.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Every line of reasoning must always have one goal or another as its driving force, but we&#x27;re often unable to state it clearly. Two people may get caught in an argument where both sides are rational and their logic is based on true facts, yet they apply different <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">boundaries</a> in their mental <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a> because they care about different things. This effectively means that the worlds they&#x27;re simulating in their heads are different, and are ruled by different forces and rules of logic. And guess what: they will reach different answers, answers that will be impossible to reconcile until this fundamental rift is made explicit.</p>
<p>An area where this becomes an almost insurmountable problem is with conspiracy theories and religious pseudo-science. Here the superficial assumption is obvious enough—that the other side is a deceitful conspirator, or simply insane—but the deeper root of that disagreement is really the <em>purpose</em> of discussing those topics.</p>
<p>The goal of a Flat Earther is to feel part of a minority of enlightened people, while the goal of an Earth scientist is to be objectively right, or as close as possible. A creationist aspires to be a paladin of Christianity, while the evolutionary biologist aims to get published in scientific journals. This is why arguing with a conspiracist never works: with different goals, the choice of base assumptions will also be different.</p>
<p>Pointing out holes in conspiracists&#x27; reasoning only encourages them to shift their assumptions until those specific holes are covered, and even to refine their arguments in ways that sound largely reasonable and self-consistent. When creationists were told that you need science to explain the world, they rebranded their contorted arguments with names like &quot;creation science&quot; and &quot;flood geology&quot;; when their reliance on biblical texts was attacked for being mere dogma, they removed direct references to religion from their &quot;theories&quot;, keeping them instead as implicit subtext; when more and more strong evidence piled up in favor of evolution, they retreated to attacking the parts that are still unclear with the concept of &quot;irreducible complexity&quot;.</p>
<p>If your purpose is to <em>appear</em> right at all costs, you&#x27;ll be able to interpret the same evidence in ways that are diametrically opposite to your adversary. When Bill Nye (the Science Guy) <a href="https://en.wikipedia.org/wiki/Bill_Nye%E2%80%93Ken_Ham_debate" rel="nofollow noopener noreferrer" target="_blank">debated</a> creationist Ken Ham, Nye asserted that radiometric dating is a useful tool to estimate the age of the Earth, because of its low error margins of less than a few million years (small, when compared to the estimated age of the Earth of 4.5 <em>billion</em> years). His opponent&#x27;s rebuttal was that a few million years of error sounds like a lot, thus radiometric dating is useless. Who is right? Well, your definition of &quot;right&quot; will depend on what you&#x27;re trying to achieve. The Wikipedia article about that debate makes the difference in agendas—and the futility of a discussion—quite clear:</p>
<blockquote>
<p>Towards the end of the debate, Ham admitted that nothing would change his mind concerning his views on creationism, whereas Nye acknowledged that, if enough convincing evidence were presented to him, he would change his mind immediately.</p>
</blockquote>
<h2>Assumption-Digging</h2>
<p>How does all of this affect you, beyond diatribes about conspiracies and religions? If you&#x27;re reading this blog, chances are <del>you stumbled here while looking for the bathroom</del> you have the habit of engaging with rational arguments every now and then. You might even enjoy discussing reasoned arguments with other people yourself.</p>
<p>Reading or hearing something that you strongly disagree with might be tolerable when it comes from an illogical or openly biased individual, but what if it&#x27;s from a smart, honest friend or expert in the field? You might know what I mean. Sometimes, you just can&#x27;t fathom why someone you respect could think a certain way or how they continue to insist on reasonable-sounding ideas that you nevertheless <em>know</em> to be incorrect. That can stir up emotions.</p>
<p>I think that reminding ourselves that rationality often fails at the edge makes reconciliation easier.</p>
<figure><img src="https://aethermug.com/assets/posts/rationality-fails-at-the-edge/ivan-bandura-jjDhD4GZeuk-unsplash.webp" alt="Picture taken from above of a forest, with trees on one side and what seems like an excavation site, dunes of dirt without any vegetation, on the other side, separated by a sharp boundary."/><figcaption>Photo by Ivan Bandura, Unsplash</figcaption></figure>
<p>I don&#x27;t know how to effectively debate a conspiracist because they assume that the debate itself is pointless. This might be a (rare) case where disengagement is the wisest choice. For other cases, however, I propose a simple conversation or thinking technique, a <em>ritual</em> I call &quot;assumption-digging.&quot;</p>
<p>To assumption-dig, you explicitly dedicate time, early on when you encounter a puzzling disagreement, to make an inventory of the things that the participants are taking for granted. If it is a live discussion, set aside your planned arguments for a moment and instead begin with a reciprocal Q&amp;A to uncover each other&#x27;s root assumptions.</p>
<p>The keyword here is &quot;explicitly.&quot; You want to be upfront and relaxed about the need to expose and compare those foundational beliefs: both sides should agree that this step is necessary because the premises could otherwise be overshadowed by your laborious explication of logic.</p>
<p>Ask questions like:</p>
<ul>
<li>Why do you think it is important to show that you&#x27;re right?</li>
<li>What, in practice, will you lose if you&#x27;re proven wrong?</li>
<li>Do we mean the same thing when we use the keywords &quot;A,&quot; &quot;B,&quot; and &quot;C&quot; in our arguments?</li>
<li>Do we agree on the value we assign to each type of available evidence?</li>
<li>Do we have compatible philosophical views with a bearing on how we think about this topic?</li>
<li>Are personal preferences and inclinations involved?</li>
</ul>
<p>More often than not (if all participants are honest), you&#x27;ll find that you don&#x27;t even need to present the arguments because you&#x27;ve found a fundamental mismatch at the level of your premises. The conversation then becomes one of realigning on the purpose of the debate and, sometimes, accepting that you have different goals and thus different problems to solve. Congratulations, no debate is necessary! That might be, in my humble opinion, the nicest way to &quot;agree to disagree.&quot;</p>
<p>If you both think you have the same goals but can&#x27;t agree on which assumptions are appropriate, you can either agree that the lack of evidence is the problem—and defer the discussion to a time when more evidence will be available—or you treat the assumptions as <em>explicit</em> hypotheticals: <em>if</em> assumption X were true, do we agree that Y would be the consequence? Next, what if assumption Z were true? While this might not give you the satisfaction of &quot;winning&quot; the argument definitively, it can still be a valuable use of your time for its conciliatory and elucidating effects.</p>
<p>Why do we usually jump to the arguments and insist on our conclusions without thoroughly reviewing our premises? Perhaps we should blame <a href="https://en.wikipedia.org/wiki/Cognitive_dissonance" rel="nofollow noopener noreferrer" target="_blank">cognitive dissonance</a>. Perhaps <a href="https://www.gurwinder.blog/p/why-smart-people-hold-stupid-beliefs" rel="nofollow noopener noreferrer" target="_blank">pride, hubris, and laziness</a>. But I suspect that, in most cases, we just don&#x27;t realize that those assumptions we&#x27;re making are only one choice among many. They&#x27;ve become part of the mental furniture—they are our default <a href="/posts/a-framing-and-model-about-framings-and-models">framings</a>, lenses that we forget we have on our noses. That makes edge-failure a bit more forgivable, at least until you&#x27;ve read this post. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/rationality-fails-at-the-edge/shainee-fernando-yON4XwM70yA-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Whatever a Man or Woman Was Able to Accomplish Before, You Can Accomplish, Too]]></title>
            <link>https://aethermug.com/posts/whatever-a-man-or-woman?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/whatever-a-man-or-woman</guid>
            <pubDate>Thu, 13 Feb 2025 07:18:26 GMT</pubDate>
            <description><![CDATA[A play over milkshakes]]></description>
            <content:encoded><![CDATA[<h1>Act 1 - Hair</h1>
<p><strong>INT. CROWDED CAFE - MORNING</strong></p>
<p><em>A crowded midtown cafe with large drawings of coffee plants and beans on the walls. All seats are filled, and a lively chatter pervades the place, punctuated by the shrill voice of a barista.</em></p>
<p><em>Two young men, well dressed, sit at a table with coffee mugs in their hands. One, with abundant ruffled hair, is wearing his winter coat while seated. The other has his coat folded on the back of his chair and his shirt&#x27;s sleeves are rolled up to his elbows. He has very short hair and a tidy, full beard.</em></p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Anyway, what was it you wanted to sell me?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p style="margin-bottom:0.5rem">(perplexed)</p>
<p>Sell you?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>You said you had something you wanted to illuminate me with. Usually, that means you want to sell me some weird philosophical idea that you&#x27;ve come up with in the shower, or something.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Oh, yeah. I&#x27;m not selling anything, though, ideas or otherwise. I just thought you might like this maxim I came up with. I put a lot of thought into it.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Let&#x27;s hear it.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>So, it&#x27;s like this: <strong>whatever a man or woman was able to accomplish before, you can accomplish, too.</strong></p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>There you go. I knew it would be something out there like that.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>You know me.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Yes, I know how annoying you are.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>C&#x27;mon, don&#x27;t you agree? Whatever a man or woman was able to accomplish before, you can accomplish too. It makes sense.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>We&#x27;ll see about that. You need to explain yourself first. It does not make any sense to me now.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Which part doesn&#x27;t make sense?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Well, it sounds like you&#x27;re saying that everyone has equal capacities and potential. I don&#x27;t agree with that.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Potential, yes, but not capacities. I don&#x27;t mean to say that everything is equally easy for everyone.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Go on.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>People have different strengths and weaknesses, physiological and psychological barriers, things that come easily and others that require great struggle. Put two random people to any given task, and one of them is going to do better than the other. Train them, coach them to mastery on the task, put them through the same hard exercise routine, and they&#x27;ll both improve, but they&#x27;ll improve at different rates.</p>
<p>You may even find that one of them plateaus at a mediocre performance level, while the other goes on improving steadily for a much longer time. She might reach levels of mastery that leave the other astonished and dejected. Everyone is different.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Now that&#x27;s more reasonable. So what&#x27;s the deal with your maxim? It seems to contradict what you&#x27;ve just said.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>There is no contradiction. See, I used the word &quot;can&quot; there, &quot;you <em>can</em> accomplish&quot;. It simply means that the laws of physics do not prevent you from doing X. That isn&#x27;t really saying very much, but I do think it is useful to keep in mind.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Well, you&#x27;re not making a convincing argument yet. The fact that the laws of physics don&#x27;t prevent me from flying towards Mars right now and reaching it in a couple of hours by moving near the speed of light doesn&#x27;t give me much hope of achieving that.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>True, but no one has done that before, especially in the middle of a conversation in a cafe, so it&#x27;s outside the scope of my maxim.</p>
<p>The maxim is about human potential: it starts from the knowledge that <em>someone</em> actually accomplished something, and infers that it must be physically possible for you to do the same.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Alright. The way you&#x27;re putting it, it doesn&#x27;t sound like it&#x27;s saying much at all. What&#x27;s the point, then?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>The point is that we all tend to forget it from time to time. People give up on things that they know others succeeded at before, and to give up they use excuses along the lines of &quot;I wasn&#x27;t cut out for that&quot; or &quot;it turned out to be impossible for me&quot;. My maxim is a reminder that we shouldn&#x27;t use those absolute and fatalistic words.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>What words should we use then?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>For example, we could talk about sacrifices.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Sacrifices.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Yes. We humans are sentient beings, so we work by making plans all the time. We simulate in our heads how the external world will play out and create a mental series of intermediate steps that are likely to lead to the achievement of our goals. Our plans are never exact, of course. We account for the many uncertainties in our mental simulations by making them branch out. We use conditionals: if this thing happens, then I&#x27;ll use this &quot;branch&quot; of my plan, and if that other thing happens, I&#x27;ll use that other &quot;branch&quot;. All alternative branches are designed to lead back to the same goal.</p>
<p style="margin-bottom:0.5rem">(Sips at his coffee)</p>
<p>Now, when we make a plan like that—and we do this hundreds of times every day—we implicitly make value judgments on whether the plan is worth executing or not. If something seems physically impossible, for instance, we will give up before expending a single precious calorie of our energy.</p>
<p>For example, I could physically pass an architecture qualification exam next month, without knowing anything about architecture right now, but I give up on that thought immediately. I&#x27;m not going to do it. Why? Not because I wouldn&#x27;t like having an architect&#x27;s license, but because it would cost me too much in terms of effort, studying the subject like crazy from now until next month. I would have to come up with some really creative ways to learn those things quickly, pay people a lot of money to teach me every day, and probably lose a lot of sleep. But I have other things I want to do more, so I won&#x27;t do any of that.</p>
<p>In other words, I abandon a plan when I judge that reaching the goal would not be worth the cost to me. All of this happens in my own head, in the mental simulation of how I <em>think</em> the process of getting to the goal would have to work.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>So your point is that, if I ask you why you&#x27;re not aiming for an architect&#x27;s license next month, you should answer that you don&#x27;t because it&#x27;s your <em>choice</em>, not because it&#x27;s impossible.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Right.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Okay, but it&#x27;s not a very representative case, is it? When people say, &quot;I wasn&#x27;t cut out for that,&quot; they usually mean things that they actually tried doing and failed after some struggle. They don&#x27;t say it for things that they&#x27;re not even sure they want to attempt in the first place.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>You&#x27;d think so, but I&#x27;ve seen so many cases in which people give up without even trying properly. They decide that math, or dancing, or winning a competition is out of the question for them, and justify those decisions by appealing to their &quot;nature&quot;: being &quot;bad&quot; at, or having no &quot;talent&quot; for, those things. Sure, they might say that they did in fact try hard at math or dancing in, say, high school, and were left thoroughly embarrassed by how bad they &quot;turned out&quot; to be.</p>
<p>In my experience, when people say that, they&#x27;re usually talking about the lowest beginner levels of expertise. So what they&#x27;re implying is that there is some hard physical limit that prevents them from doing the thing even mildly well. They&#x27;re saying that trying any harder, in any way or context conceivable, is futile because it&#x27;s just against their nature. That&#x27;s saying a lot, because, by definition, there are a large number of people who are above the low beginner level. Otherwise, we wouldn&#x27;t call it beginner level.</p>
<p>So, before saying &quot;I just can&#x27;t,&quot; you need to know for a fact that none of the many methods and conditions that people have surely used to achieve that initial breakthrough can possibly work for you. If you...</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Okay, okay, sorry to interrupt you, but I get where you&#x27;re going. This feels a bit like a strawman argument, to be honest. Sure, people who refuse to think hard about derivatives and integrals with a lazy &quot;I&#x27;m bad at math&quot; are probably just making an implicit value judgment that it&#x27;s too unpleasant or tiring or something, and they don&#x27;t see the point in trying harder or trying many other approaches, but they still say it&#x27;s &quot;impossible&quot;.</p><p>And I guess you also want to say that they frame it as impossible as a way to get off the hook, so to speak, to avoid feeling the social pressure of achieving something that they feel they&#x27;re socially required to achieve, right?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>You&#x27;re very astute! It&#x27;s almost as if you read my mind.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Yeah, but it&#x27;s strawmanny. Your initial statement is sweeping, saying that <em>whatever</em> someone has done you can do too. The striking part of it is that you imply that very difficult things, even historical feats, can be achieved by literally any random human being. But now you&#x27;re criticizing those who give up on the simpler things. It&#x27;s not apples to apples.</p></div>
<p><em>The man with the ruffled hair looks thoughtfully out of the large window to his left for a few seconds.</em></p>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Yes, what you say makes sense, but to be fair I wasn&#x27;t finished. Also, I don&#x27;t criticize people who give up, even on the things we might generally consider to be easy. Saying that you <em>can</em> accomplish something does not automatically mean that you <em>ought</em> to attempt it. Actually, often you really ought <em>not</em> to do it: my sweeping maxim also applies to all the horrible things people have done. If Stalin has done it, so can you, but I&#x27;m not for a second suggesting you should! The only thing I wish people would do more is frame their choices not as impossibilities but as free choices.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Well, let&#x27;s get to the main point, then. What do you say to those who spend years of full-time, highly informed efforts and trial and error, only to end up anything less than the best in history?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p style="margin-bottom:0.5rem">(smiling sagaciously)</p>
<p>Okay then, let&#x27;s go there now. It has to do with context. Context is a huge aspect of people achieving stuff, but for whatever reason, it&#x27;s constantly forgotten. There is this deep-rooted belief that everything people can do comes from within them: from their genes, their phenotype, and, at best, their current situation. That is just false.</p>
<p>Let&#x27;s take this from two angles, with two thought experiments. First, the teleporting-Einstein experiment.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p style="margin-bottom:0.5rem">(eye-rolling)</p><p>Oh boy.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p style="margin-bottom:0.5rem">(unfazed)</p>
<p>If, around 1903, Albert Einstein had been instantly transported to the surface of Iapetus, one of Saturn&#x27;s moons, by an evil time traveler, he would have frozen or suffocated to death within seconds, and he would never have published his seminal paper on special relativity in 1905. He would have remained forever obscure.</p>
<p>I don&#x27;t think anyone would argue with that. It&#x27;s not saying much, but it highlights the importance of the place you&#x27;re in at the most basic level.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>I guess...</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>I know, it&#x27;s extreme, but you can tune this to be less extreme. What if Einstein had been transported to the bottom of the ocean?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>No dice.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Right. Still no relativity, most likely.</p>
<p>But what about the top of Mt. Everest? Probably no luck there, either. The most likely outcome is that he would have died of cold very quickly. But in this case, it&#x27;s not a 100% certainty anymore. There is a slight chance he could be rescued by a passing expedition. If that happened, he might have made it safely back home and finished his initial work on the theory of relativity.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>But here we start to get a bit more nuance. His astonishing teleportation to the Himalayas and his miraculous survival might have stirred quite a news sensation around the world. This yet-unknown, modest clerk working in a patent office in Bern might have become famous overnight, interviewed by droves of reporters, studied by renowned physicians. His characteristic wit and brilliance might have impressed some important people, and he might have made new friends and connections. His life would be changed in one way or another. Would he still have found the time, interest, and opportunity to work on his new theory? Maybe, maybe not.</p>
<p>What if Albert had been transported somewhere else at a much younger age? What if he had gone to different schools, or been raised by different parents, foster parents? Would his theory have come to life? It&#x27;s conceivable that it wouldn&#x27;t, and even that he might not have produced any other major theory in his lifetime, had the context been different in certain ways. He might have pursued a career in music, another passion of his, and become a brilliant musician. I don&#x27;t think that many people would argue that he was destined, <em>no matter what</em>, to gift general relativity and the first seeds of quantum mechanics to the world, because of some kind of &quot;physics-breakthrough genes&quot;.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Of course, I&#x27;m not going to argue that the places and experiences that Einstein lived through had no impact on his achievements. So what? Do you mean to say that I would have achieved the same if only I had lived the same life he did?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>No, no, that&#x27;s not what I mean at all. As I said, no two people are the same. It&#x27;s <em>the unique combination of the person and the environment</em> that leads to whatever happens. Genes, gene expression, and memories on one side, and places, people, information sources, and tools on the other side, all form ephemeral systems that produce unique outcomes. Of course, the timing of things, the circumstances in which all those things align and interact at any moment are critical, too.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Then I still fail to see your point.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Here is the point: in the alternative scenarios, the musician Albert Einstein wouldn&#x27;t have been called the &quot;genius of the century&quot; or given the other hyperbolic appellations he received in reality. Would he, then, still count as a &quot;genius&quot;? No doubt everyone around him would praise how intelligent he was, but whether he achieved world-changing feats was not written in his genes. It was in his actions, and his actions were always a function of his circumstances and history too, like everyone else&#x27;s. Being intelligent, or generally talented, is not a goal but a means, and possessing it only matters in retrospect, <em>if</em> you manage to achieve something with it.</p>
<p>This is the relativity of talent, you could say.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Let me see if I&#x27;m following your reasoning. When you look at the distribution of great achievements in some area, you usually see a strong skew in favor of one subset of people or another. For example, scientific achievements were dominated by western white males over the past couple of centuries. This is not because western white males are somehow innately better at science, but because some historical conditions were in place, often unjustly, that allowed them to apply their skills at higher levels than other groups.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Of course. So, for example, among the billions of very poor people in Africa and elsewhere there must be, conservatively, millions of individuals alive at any given time who are just as &quot;talented&quot; as those luckier high-achievers, or more so. But they never get to use that talent—not on great feats, at least. Similarly, among the 4 billion women on Earth, hundreds of millions are super intelligent, statistically speaking. Yet, sadly, there are not many women, and even fewer of the ultra-poor, who achieve famously difficult goals, at least historically.</p>
<p><em>The two young men remain silent for a while.</em></p>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>By the way, I used examples involving intelligence, but difficult deeds can also be physical, and all the same logic applies. Each sport seems to have a different skew, with different countries and cultures dominating to some extent over most others.</p>
<p>It&#x27;s not that, for example, Italians have &quot;soccer&quot; genes and lack &quot;baseball&quot; genes, and that&#x27;s what they get respectively good and bad at. It&#x27;s that historical circumstances have led to soccer being popular and easier to learn than baseball in that specific country. Almost everywhere across the Italian landscape you&#x27;ll find soccer fields, for instance, but baseball fields are nonexistent. I checked. The opportunities to get good at each of them are uneven from the start.</p>
<p>So we almost certainly have children with Einstein-level brains being born in African slums and never even learning how to read; children with Olympic-gold-worthy bodies who never have the opportunity to step onto a field. How do you recognize those kids? I think they would look like normal kids, each with their unique tendencies, strengths, and weaknesses, just never getting to the point of being significantly noticed by the broader society. Because they <em>are</em> normal kids.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>I agree with all that but... it&#x27;s not exactly new information.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Fair enough. These may be old arguments, but we don&#x27;t seem to have internalized them as a society, yet. I&#x27;m not done, bear with me.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Okay...</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Okay. How does one know, then, when they fail at achieving something, if the cause of failure is a physical or biological impossibility or just the wrong combination of person and environment? Does it even make sense to talk about having a &quot;gift&quot; or a &quot;talent&quot; without mentioning the surrounding conditions?</p>
<p>A natural gift isn&#x27;t a gift if it never helps you. Imagine having an incredible gift for understanding and manipulating highly abstract mathematics. Except you&#x27;re born in an uncontacted Amazonian tribe. Will anyone ever notice? Chances are, you&#x27;ll live and die without anyone even dreaming that you have a gift.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Hmm, I don&#x27;t think there are such selective talents, that make you only good at one thing and nothing else.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Even assuming that your mental prowess spills over to other areas, like better hunting strategies, it will at best make you and your tribe marginally better-fed and thriving. It might be noticed over the years, but it can&#x27;t make a huge, lasting difference in that cultural and natural setting. The upside is tiny.</p>
<p>Another way to put it is that the definition of what counts as a &quot;great deed&quot; is largely a cultural artifact. Five hundred years ago, being exceptionally good at slaying adversaries in a sword fight might have been enough to earn you the &quot;genius&quot; label. Today, if you cut someone up as competently as Miyamoto Musashi did, you&#x27;re only going to jail.</p>
<p>All that to say that, for all we know, every human being is a potential &quot;genius&quot; at something, but most find themselves in circumstances that are incompatible with their specific talents, and are unable to unlock their most efficient mental and muscular functions because of a lack of external affordances.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Finally we get to a kicker. Everyone is a latent genius. Do you really believe that?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>If you ask it like that: no. I mean, I stand by what I said, but I think the wording is upside down. As I said, the word &quot;genius&quot; is the problem because it makes it sound like someone has a special innate aura of predestination inside them. That&#x27;s wrong. It&#x27;s a matter of compatibility of any given skill set with any given set of circumstances. Instead of talking about &quot;latent geniuses,&quot; we should be talking about optimal conditions for a given person and goal.</p>
<p>Here&#x27;s the second thought experiment I promised you. Imagine a great &quot;genius&quot; again. Let&#x27;s take Alan Turing this time. Turing, among other things, formalized the concept of computation, anticipated artificial intelligence, and is considered the father of computer science. Imagine that, instead of publishing his seminal papers, he turned his results into a teaching curriculum for an elementary-school child. Maybe it&#x27;s because he&#x27;s shy and doesn&#x27;t like the attention, whatever. He doesn&#x27;t tell any adult about his ideas, but picks a child who is struggling with elementary-school math and gives his best to teach her everything he knows.</p>
<p><em>The man with a tidy beard laughs out loud, but lets the other continue.</em></p>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Every day, Turing teaches the child—let&#x27;s call her Hannah—all the theoretical basics needed to eventually get to his results. He goes slowly, using trial and error to find the most effective ways to make his abstract ideas comprehensible to Hannah. He learns from other good teachers methods to make the learning process more engaging and fun. Whenever a teaching method or environment falls short, he switches to a different one. He devotes all of his time and infinite patience to this teaching job, spending years focused solely on Hannah&#x27;s learning. For her part, Hannah (with the blessing of her open-minded parents) places Alan&#x27;s lessons at the top of her list of things to do, replacing all other school subjects. It&#x27;s an obsession for both.</p>
<p>Now, tell me. Can you imagine that Hannah would actually understand, with time, what Alan is teaching her, and that she would eventually get to be only the second person in the world to fully grasp those concepts, and at an incredibly young age?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p style="margin-bottom:0.5rem">(draining the last drop of coffee from his cup)</p><p>It&#x27;s probable that she would succeed, with enough effort on both sides, yes. But it might take much longer than with another, more gifted child.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Right. The speed of learning will also depend on the child&#x27;s brain, but that&#x27;s beside the point. Children have a knack for learning, and the main obstacle they face is how much effort they can expend on a single topic.</p>
<p>Anyway, let&#x27;s just say that Hannah, representative of the average child, would be able to grasp computer science with a few years of Turing&#x27;s tutoring.</p>
<p>If she then went on and wrote a paper on computer science, she would be the first to do so, because in our thought experiment Turing has refrained from telling anyone else about his discoveries. Her paper would become a seminal work, and it would be <em>her</em> words that spark whole new realms of mathematics, engineering, and philosophy, tidal waves rippling into the distant future. Although Turing would deserve all the credit for actually figuring out the math, Hannah, with her new understanding and her courage to publish, would still be achieving a &quot;great deed&quot;, a grand performance that would astonish everyone and leave a mark in history. You can even imagine that, perhaps, Hannah would be able to come up with some additions here and there to Turing&#x27;s theory, thanks to her inevitably different perspective and thinking patterns.</p>
<p>Do you see where I&#x27;m going with this?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Sure. You want to show that, even without uniquely developed innate capacities, anyone can do great things if the circumstances are tuned just so.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p style="margin-bottom:0.5rem">(nodding)</p>
<p>You truly are astute. I should make you my apprentice. But I see from your face that you&#x27;re not pleased yet.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>The thing is, most people can&#x27;t devote years of intense learning (tutored by a great scientist or not), tailored exactly to their needs, to achieve a given lofty goal. There are limits in the available circumstances as well.</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Here you&#x27;re using the wrong words! Yes, Turing&#x27;s pupil was an extreme example, on purpose. I&#x27;m willing to bet that, given enough money, anyone could hire a great scientist to tutor them like crazy. And, given enough effort—possibly doing very illegal and immoral things—anyone can get enough money for that. Here we finally return to the thread about mental simulations and value judgments.</p>
<p>People never discover that a certain task, although possible for someone else, is impossible for them. What one might discover is that it would take many more environmental changes and painful exertions, and even more risks compared to that other person who did it before them. And, instinctively, we may not be willing to do all those things to achieve that goal. The effort cost comes with the package, and we might reject the package.</p>
<p>Some achievements can take a lot of time, and you may have to give up on a lot of other things that you would like. Life isn&#x27;t long enough to achieve <em>all</em> the things you can dream of. You have to pick and choose carefully, and often one big goal isn&#x27;t worth the sacrifice of ten smaller ones. And so we give up.</p>
<p>...have I lost you?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p style="margin-bottom:0.5rem">(looking at his watch)</p><p>No, I get it. You may have a point, but I&#x27;m struggling to see what the final message of your maxim is, in practice. Has this whole conversation been only a philosophical exploration of the meaning of the word &quot;possible&quot;?</p></div>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p>Okay, let&#x27;s be practical. To repeat, the maxim is: <strong>whatever a man or woman was able to accomplish before, you can accomplish too.</strong></p>
<p>Internalizing this idea allows you to shift your focus to what&#x27;s really important. Instead of creating bucket lists of all the things you wish would <em>happen</em> to you, you can weigh what kinds of circumstances and environmental changes you could <em>create</em> yourself that are likely to make those goals actually feasible.</p>
<p>Sometimes you&#x27;ll find that the necessary changes would take a huge effort, and the sacrifice of several other goals you have set for yourself. After seeing all that, you&#x27;ll be able to better judge if you <em>really</em> want to work towards goal X or, after all, it&#x27;s not as inviting a prospect as you had initially thought.</p>
<p>We make this kind of judgment anyway, deep in our guts. But the effort of doing it consciously, and of being very frank with ourselves, liberates us from the torment and regret that too many fuzzy wishes inevitably bring. It liberates us from feeling inherently inadequate. If you&#x27;re not explicit about these things, they will seem to shut themselves out of your reach all of a sudden, while you&#x27;re struggling to achieve them, and you will probably want to blame the environment as a way not to blame yourself for the failure. I don&#x27;t want people to do that. If the necessary effort is too much in your assessment, that doesn&#x27;t make you a failure!</p>
<p><em>The man with the tidy beard picks up his coat and puts it on.</em></p>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p style="margin-bottom:0.5rem">(checking his watch in turn)</p>
<p>Oh, it&#x27;s time to go.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p>Yep. We don&#x27;t want to keep Elisa waiting.</p></div>
<p><em>The two stand up, return their mugs, then walk towards the door.</em></p>
<h4 style="font-size:small;color:dimgray">RUFFLED HAIR MAN</h4>
<p style="margin-bottom:0.5rem">(keeping the cafe&#x27;s exit door open for the other man)</p>
<p>So, saying &quot;I would need to work harder than I&#x27;d like, so I choose not to&quot; is how you turn down a possibility with your self-esteem undamaged.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">TIDY BEARD MAN</h4><p style="margin-bottom:0.5rem">(passing through the door)</p><p>You really should stop using me as a guinea pig for your theories, Alf.</p></div>
<p><em>They both stroll away on the sunlit sidewalk.</em></p>
<h2>Act 2 - Milkshakes</h2>
<figure><img src="https://aethermug.com/assets/posts/whatever-a-man-or-woman/sandie-clarke-pYAbhWb_LaQ-unsplash.webp" alt="Picture of a chocolate milkshake with whipped cream in a plastic cup."/><figcaption>Photo by Sandie Clarke, Unsplash</figcaption></figure>
<p><strong>INT. CROWDED CAFE - MORNING</strong></p>
<p>The same crowded midtown cafe, seconds after the two young men left. At the table next to the one just vacated by the two men sit a young girl of ten or eleven, and a white-haired man in his seventies. The girl has braids in her hair and is keeping a book open in front of her, with a large cocoa milkshake at her side. The elderly man is drinking an identical milkshake and appears to be just sitting contentedly there, doing nothing in particular. They have both been quiet for a long time.</p>
<p><em>When the two young men disappear out of the door, the elderly man looks at the girl with a grin on his face. The girl looks up at him.</em></p>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>What?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>You haven&#x27;t turned a page in twenty minutes.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(with a cross expression)</p>
<p>That&#x27;s not true.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p style="margin-bottom:0.5rem">(still smiling)</p><p>You were listening to those two folks at the next table, weren&#x27;t you?</p></div>
<p><em>The girl hesitates for a moment, then smiles, closes her book, and puts it in front of her on the table. It is titled &quot;What I Believe&quot;.</em></p>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>I <em>tried</em> to read, but they were so distracting!</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>That&#x27;s alright. They were saying some interesting things, weren&#x27;t they?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Not interesting. Annoying.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p style="margin-bottom:0.5rem">(raising his eyebrows)</p><p>How so?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>I mean, that guy with the coat was so full of himself, but he was wrong.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Hmm-mm.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>I mean, he was <em>obviously</em> wrong, wasn&#x27;t he?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Are you saying that everything he said was wrong, Bettie?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(screwing up her face)</p>
<p>Well, of course not <em>everything</em>, but the maxim he was saying over and over. That&#x27;s obviously wrong.</p>
<p style="margin-bottom:0.5rem">(pauses briefly)</p>
<p>In my opinion.</p>
<p><em>The elderly man straightens up in his chair and leans forward, putting his elbows on the table and his chin on his knuckles.</em></p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Very well, then. The boy&#x27;s maxim was, I believe, &quot;anything that has been done before can be done again by you.&quot; Why do you think it&#x27;s wrong?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(with a gentle but resigned expression)</p>
<p>No, it was &quot;whatever a man or woman was able to accomplish before, you can accomplish too.&quot; But you got close enough, Grampa.</p>
<p>First of all, what about trying something for the first time? Should I give up, because there is no proof that it is physically possible?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>I see. I&#x27;m not that boy, but maybe he would answer your question by saying that his maxim simply doesn&#x27;t apply to those cases. Advice on how to lose weight is still useful even if skinny people exist. As long as it is useful to someone, it might be worth listening to.</p></div>
<p><em>The elderly man winks knowingly at the girl.</em></p>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(shaking her head)</p>
<p>I&#x27;m not falling for your tricks, Grampa! That&#x27;s not what I&#x27;m talking about.</p>
<p>Where do you draw the line between &quot;has been done before&quot; and not? &quot;Becoming the president of Finland&quot; has been done before, but what about &quot;becoming the president of Finland as an underfed Cambodian orphan child&quot;?</p>
<p><em>The elderly man nods with understanding.</em></p>
<p>Does the maxim apply to that child who dreams of being the next Finnish head of state, or not? Is being underfed and not born in Finland just an obstacle to work around? Or can we get him off the hook because it&#x27;s probably &quot;physically impossible&quot; in his particular case?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>You&#x27;re asking some very good questions, Bettie.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>It&#x27;s not just that. Even if you take someone who seems to fall squarely in the &quot;done before&quot; category, the maxim doesn&#x27;t work. That guy with the funny hair said it himself, different people have different obstacles, and some have more than others. So let&#x27;s take a Finn as an example, an average Finn born in Finland instead of a Cambodian child. Can they become the president of their own country? What if they&#x27;re very poor, and also have learning deficiencies, and also they&#x27;re older than you, Grampa, and they have only worked as a toilet cleaner for the past fifty years?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p style="margin-bottom:0.5rem">(with an outraged look)</p><p>Older than me? Impossible.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(giggling)</p>
<p>Come on, Grampa. I&#x27;m being serious. How many obstacles are too many? When does possibility become impossibility, exactly?</p>
<p><em>The girl taps the palm of her hand on the book as she says those last words.</em></p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>I suspect we have no good way to tell, darling. I think there might be people who have just an obstacle too many.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>And the line between possible and impossible is different for everyone and can&#x27;t be measured. That maxim is useless.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>It seems like the boy got himself into a problem there.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>He should retract it!</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Why didn&#x27;t you tell him so?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(looking down, drinking her milkshake through a straw)</p>
<p>I don&#x27;t know.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>That&#x27;s it then, you&#x27;ve solved it, Bettie. We should scrap that notion. We are doomed to accept that some things are impossible for some people, and we&#x27;re allowed to blame factors outside of our control when we fail at something. Maybe it&#x27;s for the better, after all.</p></div>
<p><em>There is a long pause. The girl carefully studies the face of her grandfather as she takes more sips from her milkshake&#x27;s straw. Then the girl says something in a feeble voice, but it gets drowned out by the barista yelling that the take-away coffee of someone called Sunny is ready.</em></p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>What&#x27;s that again? Your Grampa has bad ears, you know.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>I said that it&#x27;s not <em>all</em> wrong, what he was saying.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Oh?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>I mean, he&#x27;s terribly wrong with his big and mighty maxim. But it&#x27;s true that people sometimes say that a thing is impossible when it&#x27;s not so impossible. Mom always says she is sick and tired of the squeaky door in the kitchen, but when I ask her, &quot;Why don&#x27;t you fix it?&quot; she says that she can&#x27;t because it requires some special tools. It&#x27;s so annoying, both her and the door.</p>
<p>Also, she always says, &quot;I can&#x27;t possibly buy you another book until next month!&quot; but I know she has enough money in her wallet.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>How can you possibly know that?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>I checked when she was sleeping.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Maybe she owed the money that was in her wallet to someone else, and couldn&#x27;t spend it as she wished.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>No, no, I... took the money once, to buy a book, and nothing bad happened. She never even noticed it!</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p style="margin-bottom:0.5rem">(laughing)</p><p>You little...! Don&#x27;t do that again, Bettie.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>It&#x27;s okay. It was just an <em>experiment</em>.</p>
<p>Anyway, maybe it&#x27;s true that people say &quot;I can&#x27;t&quot; because it relieves them of the responsibility to follow through.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Hmm-mm. Let&#x27;s see. For example, I remember being very impressed when I saw the live broadcast of the first men who walked on the Moon. I thought, &quot;Oh boy, that&#x27;s something I will never be able to do!&quot; Was I wrong?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Yes! I mean, no, because you didn&#x27;t do it in the end, right? But you were selling yourself short. It&#x27;s not like those people who made the trip to the Moon possible had Moon-genes. Also, it&#x27;s not like they were just very lucky. Maybe a little, with their education and so on. But it was very daunting even for them, I think, before they actually did it.</p>
<p style="margin-bottom:0.5rem">(her face lighting up and her speech getting faster)</p>
<p>Also, if people were able to form peaceful democracies, or to draw their deepest emotions with paint, or to discover the structure of matter, it was because each person involved found paths <em>in</em> and <em>around</em> their environmental and innate obstacles, not because those things were no-brainers to them. They were very big yes-brainers for them, too, I think.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Ha ha ha, Bettie, you really are something!</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p style="margin-bottom:0.5rem">(confidently)</p>
<p>So I think that guy was right when he said that people shouldn&#x27;t feel pressured to do things even if they are possible for them. There are just too many possible things. If someone is not willing to do the work to achieve something, that doesn&#x27;t make them any less worthy as a person.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Hmm-mm. We&#x27;re intelligent and adaptable creatures for a reason. Going back to the drawing board is one of our privileges, not a sin.</p></div>
<p><em>The girl nods emphatically.</em></p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p style="margin-bottom:0.5rem">(cont&#x27;d)</p><p>Then you can&#x27;t blame your mother for not oiling the kitchen door, either.</p></div>
<p><em>The girl stops nodding.</em></p>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Wait... Yes I can. It&#x27;s like that guy was saying. People don&#x27;t admit that they can do something but just don&#x27;t want to, and instead complain about things.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Do you think your mother is being insincere?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>No... I think she really believes it when she says she can&#x27;t.</p>
<p>That&#x27;s the point of that guy, I think. If we get into the mindset of &quot;impossible things,&quot; then we create walls in front of us that don&#x27;t really exist. It&#x27;s an opportunity cost.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>An opportunity cost. That&#x27;s an interesting way to put it! So what will you do, when you get back home this afternoon?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Hmmm. I&#x27;ll tell Mom, &quot;Don&#x27;t be at war with the world. Make bargains with it. And stop complaining.&quot;</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>I don&#x27;t think that&#x27;s the right thing to do, if you ask me.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Why?</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>Well, for starters, I heard you complaining about your mother only a minute ago.</p></div>
<p><em>The girl pauses to think, then gives a gentle slap on her grandfather&#x27;s forearm.</em></p>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Grampa. Stop turning things against me!</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p style="margin-bottom:0.5rem">(leaning forward)</p><p>Didn&#x27;t you?</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Alright, I complained. I guess... I guess we complain when we hope that someone else will do something in our place. And...</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>I&#x27;m all ears.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>And it&#x27;s not fair. I could do something about the kitchen door myself. I could help Mom with it. But instead, I just complain about her complaining. I&#x27;m just like her.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>And she is a wonderful woman. But you know what I think? I think that complaining is usually unfair, like you said, but sometimes it can be a good idea. That&#x27;s why babies cry, after all. And I can tell you, when <em>you</em> were a baby, no one blamed you for crying.</p></div>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>Ah. You&#x27;re right. But I&#x27;m still being unfair now to Mom, because I didn&#x27;t complain out of necessity, only out of laziness.</p>
<div style="margin-left:4rem;grid-column-start:1;grid-column-end:1"><h4 style="font-size:small;color:dimgray">ELDERLY MAN</h4><p>How about we go and buy another book, Bettie?</p></div>
<p><em>The girl finishes her milkshake in one big slurp and stands up.</em></p>
<h4 style="font-size:small;color:dimgray">GIRL</h4>
<p>No complaining about the bookstore&#x27;s stairs! ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/whatever-a-man-or-woman/tomas-jasovsky-d5SZqLkpIrY-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Year One]]></title>
            <link>https://aethermug.com/posts/year-one?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/year-one</guid>
            <pubDate>Thu, 06 Feb 2025 05:32:39 GMT</pubDate>
            <description><![CDATA[On trying to be "The Invisible Elephant Guy" after one year of Aether Mug]]></description>
            <content:encoded><![CDATA[<p>One year ago this week, a quiet, LED-laden server in a cool data center not too far from you blinked and started serving my very first blog post, &quot;<a href="/posts/designing-for-emergence">Designing for Emergence</a>,&quot; to the internet. And thus, Aether Mug was born. *cue the kazoos*</p>
<p>Three hundred sixty-five days and forty-seven blog posts later, it&#x27;s as good a time as any to do a little retrospective. No one asked for it. I could just keep it in my private notes, to return to for comfort during the cold, hard days of Year Two. Instead, I put it up on this page. Ah!</p>
<h3>The Practical</h3>
<p>I realize now that I should have started blogging earlier. What was I waiting for? I had already been publishing essays on <a href="https://planktonvalhalla.com/" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a> since 2022, of course, but AeMug is a very different beast. Its weekly cadence is a harsh constraint for me, because I&#x27;m a slow writer, and this work tends to consume time meant for other projects. But it also pushes me to places I might never have gone otherwise. Writing catalyzes thought, so faster writing helps me triage and clarify my vague ideas <em>very</em> quickly.</p>
<p>What they say is also true: posting more frequently does help attract more attention (more on this below). As much of a strain as it can sometimes be, I&#x27;m glad I got on this train, and I plan to keep going. It was a good idea. Now I understand what Faulkner meant when he said, &quot;I write when I’m inspired, and I see to it that I’m inspired at nine o’clock every morning.&quot; It&#x27;s something that any journalist knows very well, but it took me time to grasp.</p>
<h3>The Philosophical</h3>
<p>There is a popular verse from the <em>Bhagavad Gita</em>, an ancient Hindu scripture, where Krishna says:</p>
<blockquote>
<p>You have the right to work, but never to the fruit of the work. You should never engage in action for the sake of reward, nor should you long for inaction.</p>
</blockquote>
<p>Now, I am neither Hindu nor a scholar of Hindu theology. The first time I read this, it didn&#x27;t make any sense to me. Why would I work if not for the fruits of my labor? Then I started Aether Mug, and suddenly it clicked.</p>
<p>Here&#x27;s how I interpret it in the context of writing: write what you think you should be writing. No—write what you think <em>someone</em> should be writing. The act of weaving sentences and clicking &quot;publish&quot; <em>is</em> the fruit. At that point, you&#x27;re ready to start writing the next thing. What happens once your text is out in the wilderness is none of your business.</p>
<p>This mindset is a surprising source of joy for me, because it frees me to do my best work, to focus entirely on how well I&#x27;m doing it, by my own standards, without getting caught up in the kind of nitpicking some abstract omnipresent reader out there might come up with. I mean, of course there is someone, somewhere, who can find fault in anything you or I do. That&#x27;s their job, so let them do it in peace. Our job as writers is hard enough without those worries.</p>
<p>Although this <em>Gita</em> wisdom does help weather any backlash, it&#x27;s not merely a coping mechanism. I&#x27;ve found that it helps with the <em>positive</em> response to my writing, too. When a post I wrote happens to hit a network effect surge and gets a lot of welcoming attention, the temptation to bask in the praise and ego-surf is strong. I made this mistake at first, and regretted it. Sure, you get a warm fuzzy feeling out of it, but not in a way that helps you do better next time. Quite the contrary. I&#x27;ve since made it a point not to read any of the online discussions about Aether Mug.</p>
<p>(I&#x27;m making it sound like I&#x27;m so popular and everyone is talking about me—I am not, and no one is; the buzz only happened a couple of times, and to a limited degree. The problem exists nonetheless.)</p>
<p>There is one exception, though. When readers write to me directly with feedback of any kind, I&#x27;m very happy to read and respond. Knowing that there is someone willing to engage in a direct conversation about what one writes is all the external reward a writer needs, in my opinion.</p>
<h3>The Pig-Headed</h3>
<p>I deeply respect those bloggers who make a name for themselves with very specific areas of expertise (when they actually have that expertise). Almost every successful blog seems to be like this: you have the &quot;habit guy,&quot; the &quot;life-hack gal,&quot; the &quot;timely business insights blog,&quot; and so on. They know specific things and spend a lot of time researching and thinking about those specific things, and people go to them and trust them on those. It&#x27;s an effective approach, it&#x27;s easy on the readers, and it&#x27;s <em>excellent</em> for SEO. The internet loves that kind of blog, and for good reasons.</p>
<p>I&#x27;m happily going in the opposite direction. I write about all kinds of very disparate things: systems, thinking tools, language, project management, evolution, literature, life, Life, blimps, and mental imagery, just to name a few. Needless to say, this does not make finding long-term readers easy. Some people are attracted to certain topics and not to others. I have no well-defined niche that people can find with a search keyword, and it&#x27;s not very clear who the ideal person to recommend Aether Mug to would be.</p>
<p>I don&#x27;t think it would be hard to pivot to being &quot;a blog about XYZ.&quot; The easiest way is to see where I get the strongest response and the biggest increase in subscribers, and keep writing every week on that same topic.</p>
<p>For example, when my post titled <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">The Beautiful Dissociation of the Japanese Language</a> floated to the front page of Hacker News (a forum popular among software developers), I got a big influx of new subscribers. Suddenly, more than half of my readers were people who knew me only as &quot;a guy who has written something interesting about kanji.&quot; That was a clear opportunity: I like Japanese, I like thinking about the hidden quirks of its scripts, therefore I could just make this a Japanese Quirks blog. Tap into the network effects. This is a sound strategy, and I readily recommend it to any blogger who finds it attractive.</p>
<p>The thing is, it is not attractive to me. Although I love writing about Japanese, <em>it is not because it&#x27;s Japanese</em>. I love something buried beneath the surface, some connection I sense with other important aspects of life and knowledge, and Japanese is only a manifestation of that unnamed thing. My interests have always been multidisciplinary, and it&#x27;s precisely that multidisciplinarity—the messiness—that I&#x27;m interested in.</p>
<figure><img src="https://aethermug.com/assets/posts/year-one/elephant_blindmen.webp" alt="A drawing of a sitting elephant surrounded by seven people, each touching a different body part of the animal. There are inscriptions in Devanagari, the Indian script (presumably in Sanskrit or Hindi), and two saintly figures sitting in a meditative posture overlooking the scene."/><figcaption>Drawing from the Sammati Tarka Prakarana, photographed by Romana Klee, Wikimedia Commons.</figcaption></figure>
<p>There is a famous parable of a group of blind men who meet an elephant for the first time. Each man touches a different part of the animal—one touches an ear, one the trunk, one a tusk, and so on—and each forms a very different conception of what an elephant is based on that. I see AeMug as a blog focused on an invisible elephant, and each post is a probe on some corner of that elephant. They all seem very unlike each other at first, but they are really all about the same thing.</p>
<p>If I&#x27;ve disappointed some readers by not writing more about Japanese, or about whichever topic they originally subscribed for, I&#x27;m sorry. But I don&#x27;t think there is a need to sweat it too much as a blogger. For most people, it&#x27;s very easy to skip reading a new post that doesn&#x27;t seem to scratch a personal itch. Readers are magnanimous.</p>
<p>One day, I&#x27;ll be so good at writing that I&#x27;ll be able to encapsulate that unnamed central topic into a single blog post. Until then, I keep feeling my way around.</p>

<p>When all is said and done, a blog is a way to connect with other people, like any other form of communication. The important requirement is that one makes connections that feel true and natural to them—to the author. In my case, the things I want to communicate are better expressed through gradually-researched and slowly-crafted prose than through impromptu conversations. I need time to think and digest ideas big and small. There is an elephant there, I have no doubt about it, though I don&#x27;t see it clearly yet. I write on this site because it is a great joy to uncover that majestic beast, together with people patient and curious enough to stick around despite, or even <em>for</em> the mess. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/year-one/32235drsdl.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[We Need a Sense of Systemic Perspective]]></title>
            <link>https://aethermug.com/posts/we-need-a-sense-of-systemic-perspective?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/we-need-a-sense-of-systemic-perspective</guid>
            <pubDate>Mon, 27 Jan 2025 00:15:01 GMT</pubDate>
            <content:encoded><![CDATA[<p>The farther away a situation is from us socially, like a different industry or culture or community, the more puzzling and irrational and crazy—and sometimes evil—it looks. We&#x27;re dismayed at how people drive in other countries, or at how some of the commonplace behaviors over there are <em>so rude</em>, or how ruthless the colleagues in Sales are, and so on.</p>
<p>Maybe this is just because we are not built to see perspective that is not visual. Our eyes and brains have evolved to know when something is close to us or far away, and we automatically correct our expectations based on that. You don&#x27;t expect a person on the other side of a restaurant&#x27;s hall to hear clearly what you&#x27;re whispering at your table, and you&#x27;re not amazed by how <em>tiny</em> she looks, because your intuition tells you that this is how distance works.</p>
<p>Yet, when we talk—in a more abstract way—about contexts and circumstances different from our own, we expect them to make sense to us here and now, no matter how &quot;far away&quot; they are in terms of their structure, behavior and histories.</p>
<p>That&#x27;s not surprising, considering that for most of our evolutionary history we&#x27;ve lived in small groups and in relatively stable environments. The mind-boggling variety of contexts people live in today, and especially our ability to know about them, are very recent developments.</p>
<p>I wonder if we&#x27;re in the process of evolving an intuitive sense of systemic perspective. I hope we are, although I don&#x27;t think we&#x27;ve made much progress yet. In the meantime, we&#x27;re forced to make up for this deficiency with the slow, tiresome, and rational parts of our brains. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/we-need-a-sense-of-systemic-perspective/work_fordmadoxbrown.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Primitive Atlas of Glass Circuits]]></title>
            <link>https://aethermug.com/posts/primitive-atlas-of-glass-circuits?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/primitive-atlas-of-glass-circuits</guid>
            <pubDate>Thu, 23 Jan 2025 09:35:15 GMT</pubDate>
            <description><![CDATA[Aren't we all air converters?]]></description>
            <content:encoded><![CDATA[<p>How much work can a simple <a href="/posts/a-framing-and-model-about-framings-and-models">framing</a> do? Here is as simple a framing as they get.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/system_in_out.webp" alt="" title="A simple diagram with a box in the middle marked &#x27;System: something happens inside&#x27;, an arrow entering the box marked &#x27;Stuff goes in&#x27; and an arrow coming out of the box marked &#x27;Other stuff goes out&#x27;."/></figure>
<p>In this view of the world, there are only two kinds of things: &quot;stuff&quot; going around and &quot;systems&quot; where something happens. It doesn&#x27;t matter what the system is, as long as some kind of &quot;stuff&quot; can pass through it. It&#x27;s so general that you may think it almost useless, but think again! It&#x27;s enough to change how we see the world, and its generality means it can tell us something about <em>anything</em>.</p>
<p>I first introduced this model on Plankton Valhalla, in an essay titled <a href="https://planktonvalhalla.com/20221216-toying-with-ideas-of-glass-circuits/" rel="nofollow noopener noreferrer" target="_blank">Toying With Ideas of Glass Circuits</a>. There, I made the point that we must seek different ways to segment the world, bring down the walls that separate the disciplines of human knowledge, and seek new connections between apparently unrelated ideas. Thinking in terms of <a href="https://planktonvalhalla.com/20221006-the-invention-of-systems/" rel="nofollow noopener noreferrer" target="_blank">systems</a> is great for this, and I tried to show that with the disarmingly simple diagram above.</p>
<p>Looking at that picture, there aren&#x27;t many questions you can ask without getting into the nitty-gritty of concrete examples. Here is a fully general one, though: <em>do those two elements—&quot;stuff&quot; and systems—change a lot, a little, or none at all in the process?</em></p>
<p>The following &quot;Glass Circuit Chart&quot; is what came out of that exercise.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2.webp" alt="" title="A Cartesian plane with two axes: &#x27;Conductance&#x27; horizontally and &#x27;Transparency&#x27; vertically. The former shows, near the end of the arrow, the words &#x27;Higher Conductance: less altered by passing differences&#x27;, and near the end of the other arrow are the words &#x27;Higher Transparency: alters the passing differences less&#x27;. On this plane are positioned many pairs of words, like &#x27;heart vs blood&#x27; in the upper-right of the plane (high conductance and high transparency), light bulb vs electrons in the lower-right (high conductance and low transparency), &#x27;beach town vs tsunami&#x27; in the lower-left (low conductance and low transparency) and &#x27;A/C filter vs air&#x27; in the upper-left (low conductance and high transparency)."/></figure>
<p>There&#x27;s a lot of information in that chart, so let&#x27;s break it down. First, the &quot;stuff&quot; I&#x27;m talking about is what we can call <em>differences</em>—that is, the property of things of not being like some other things.</p>
<blockquote>
<p>Differences as in “here, not there”, “this much, not that much”, “machine that does what I want, not inert piece of metal”. Whenever you can talk about something or some amount as unlike others, there you have a difference. Everything [...] is made of lots and lots of differences, and it will do different things, <em>be</em> different things based on what those differences are.</p>
<p>The word “difference” as I use it here covers pretty much everything you can think of. [...] If you can talk about it, you’re talking about its differences.</p>
<p>— <a href="https://planktonvalhalla.com/20220818-differences-contraptions-and-a-ripple-universe/" rel="nofollow noopener noreferrer" target="_blank">Differences, Contraptions, and a Ripple Universe</a></p>
</blockquote>
<p>Why use this peculiar concept of &quot;differences&quot; instead of &quot;objects&quot;, &quot;matter&quot;, or &quot;energy&quot;? Because, with systems, the <em>patterns of organization</em> are what matters, and those need not be in the form of objects, matter, or energy (although they often are). Differences are a more universal, agnostic idea at the base of any kind of pattern. We&#x27;ll see plenty of concrete examples below.</p>
<p>At the most fundamental level, then, differences of all sorts go in and out of systems, and whatever happens inside those systems happens to and between differences.</p>
<p>The question was whether there is any change in the system and in the &quot;stuff&quot; (the differences) going through it, and how big of a change that is. This is actually two questions, which translate into the two axes of the chart:</p>
<ol>
<li><strong>System conductance</strong>: <em>How much does the system change when certain differences enter it?</em> The analogy with the electrical conductance (and its opposite, resistance) of a circuit is based on the observation that, in more resistive materials, the collisions with incoming electrons make the atoms vibrate, producing heat. More conductive materials remain relatively unscathed by the current. Similarly, a more conductive system lets specific differences pass through it without being affected by them.</li>
<li><strong>System transparency</strong>: <em>How much do those differences change as they pass through the system?</em> Intuitively, something transparent lets the images of objects on the other side pass through unchanged, while a more opaque material degrades or transforms them. In our framing, a transparent system is one that lets differences through without transforming them.</li>
</ol>
<p>In this sense, a system has the properties of both a circuit and a glass: a &quot;glass circuit&quot;. The answers to these two questions are different for each pair of system and incoming differences, and that allows us to place each pair somewhere on that Cartesian plane.</p>
<p>Clear glass, for example, is both highly conductive of visible light (the glass transmits the light while remaining unchanged by its passage) and highly transparent to it (the light is unchanged), so we place it in the top-right corner of the diagram.</p>
<p>On the other hand, if now you take the same &quot;visible light&quot; differences but look at what they do in a &quot;photographic film&quot; system, the answers are quite different. The patterns of light that touched the film become impressed there. The film, which was originally blank, is greatly changed by this process. The light patterns, however, are preserved for the viewer of the picture, albeit to a limited extent, because some colors and details may be altered. We can thus put this pair, &quot;photo film vs light&quot;, somewhere to the far left, midway along the vertical axis of transparency.</p>
<p>Most of the time the definition of a system and of the differences traversing it can&#x27;t be very accurate, because of <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">how reality works</a> and because of <a href="/posts/a-fundamental-framing-of-human-language">limitations in the language</a> we use to describe them, so that their position on the Glass Circuit Chart is hard to pin down precisely. As I wrote in the original essay, this is definitely not an exact science. But there are meaningful distinctions between how things work in this framing. It is a different way to draw boundaries, which implies the elimination of other boundaries, too. In this view, very disparate systems turn out to have some very fundamental features in common.</p>
<p>These newly found links often point to exciting new research directions, but this Chart is largely unexplored territory. I don&#x27;t understand most of it. But since we have this toy, someone&#x27;s gotta play with it!</p>
<p>Here is a primitive Atlas of Glass Circuits, barely detailed enough to make out four &quot;continents&quot; on this strange map.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2_quadrants.webp" alt="" title="The Cartesian plane divided into four quadrants. From the top-left: clays, channels, reactants, converters."/></figure>
<p>The following are my brief interpretations (and bizarre names) for these four &quot;systemic continents&quot;. For each of them, I will also mention how that category of systems can be employed by evolution or by people for specific outcomes.</p>
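<p>As a toy illustration (not part of the original chart), the quadrant assignment could even be sketched in code: give each system-input pair a score on the two axes and read off its &quot;continent&quot;. The numeric scores below are invented purely for illustration.</p>

```python
# Toy sketch of the Glass Circuit Chart. Each system/difference pair gets
# hypothetical scores in [-1, 1]: conductance (x axis, how little the system
# changes) and transparency (y axis, how little the differences change).
def quadrant(conductance: float, transparency: float) -> str:
    """Name the 'continent' a system/difference pair falls into."""
    if conductance >= 0:
        return "channel" if transparency >= 0 else "converter"
    return "clay" if transparency >= 0 else "reactant"

# Illustrative placements, loosely following the chart's examples.
pairs = {
    "clear glass vs light":  (0.9, 0.9),    # unchanged, and changes nothing
    "light bulb vs current": (0.8, -0.8),   # stable, but transforms its input
    "A/C filter vs air":     (-0.6, 0.7),   # altered, while air passes intact
    "beach town vs tsunami": (-0.9, -0.9),  # both sides transformed
}
for name, (c, t) in pairs.items():
    print(f"{name}: {quadrant(c, t)}")
```

<p>Placing a new pair is then just a matter of arguing over its two scores, which is exactly the fuzzy part noted above.</p>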
<h2>The Channels</h2>
<figure><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2_channels.webp" alt="A replica of the Conductance-Transparency chart described before, highlighting the word-pairs in the top-right quadrant."/><figcaption>Click to enlarge.</figcaption></figure>
<p>The top-right quadrant of the Glass Circuit Chart is reserved for systems that are both highly transparent and highly conductive of their associated differences. I will call them <em>channels</em>.</p>
<p>An old-fashioned plumbing pipe is, of course, a channel because it transfers water without significantly changing it or being changed by it. The same goes for a Wi-Fi repeater, which picks up a weak signal and repeats it, unaltered, with more power. Also, a country&#x27;s postal system, and each individual mailman and mailwoman (along with their means of transportation), would qualify as channels with respect to mail.</p>
<p>Some interaction <em>is</em> happening here, otherwise we wouldn&#x27;t say that those differences are inputs to the system, but the interactions are only (or mostly) in the form of transport and relay, not transformation. Channels have the property of extending the reach of specific differences.</p>
<p>This property is useful for engineers. Sometimes we use channels to move matter or information from one place to another without tampering with it, like with moving walkways or fiber-optic cables. Other times, we want to filter and sort things, and we can achieve that by using systems that are only channels for one desired type of difference and not another, like a window pane that allows light through intact but blocks the passage of insects and rain.</p>
<p>Channels are used extensively in the bodies of plants and animals: blood vessels, nerves, esophagi, cell membranes, roots, xylem. In sensory systems, they are necessary to forward differences from the outside—like sound waves and light—to the brains and proto-brains hidden deep inside the animals&#x27; bodies. The more transparent they are, the better for this purpose.</p>
<h2>The Converters</h2>
<figure><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2_converters.webp" alt="A replica of the Conductance-Transparency chart described before, highlighting the word-pairs in the bottom-right quadrant."/><figcaption>Click to enlarge.</figcaption></figure>
<p>South of the channels is the region of the <em>converters</em>. These systems are just as unaffected by differences that pass through them, but this time they do alter those differences. What comes out is not the same as what goes in.</p>
<p>This is a fun category because it&#x27;s about the systematic transformation of certain differences into different media and formats. A light bulb trades electric current for photons, just as a team of software engineers turns a set of written requirements—like the specifications of a new mobile app—into an <em>actual</em> mobile app.</p>
<p>Sometimes the transformation is nothing more than a mangling of the input: think about frosted glass for light or an encryption program. But it is more often interesting to convert the differences into a form that can be fed into some other system, like when an AV signal from a cable gets translated into a pattern of pixels on a screen.</p>
<p>Chain several converters in a row, and you have a—literal or metaphorical—assembly line. In fact, it doesn&#x27;t even have to be a linear chain because we can (and very often do) create cascades and networks of converters that together form bigger and more complex converters. Every machine and computer is an example of that, and every living organism too—the channels that shuttle differences from the eye to the brain would be useless if there wasn&#x27;t a converter called &quot;retina&quot; that turned patterns of photons into patterns of neural excitation.</p>
<p>Converters are how the purposeful transformation of the world happens.</p>
<h2>The Clays</h2>
<figure><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2_clays.webp" alt="A replica of the Conductance-Transparency chart described before, highlighting the word-pairs in the top-left quadrant."/><figcaption>Click to enlarge.</figcaption></figure>
<p>Some systems are transparent to the differences passing through them but are reshaped by them. These are <em>clays</em>, in the top-left corner of the Glass Circuit Chart.</p>
<p>A battery&#x27;s internal structure mutates as it drives electrons around a circuit, for instance, and eventually it stops doing that. In fact, one way to see clays is as &quot;unstable channels&quot;. In the short term, they behave like channels, transmitting differences quite faithfully to some other place, but in the long run, that very process makes them lose that ability. It&#x27;s a negative feedback loop.</p>
<p>From the perspective of a human engineer, this property of clays is often undesirable because it gives a short lifespan to whatever they&#x27;re building. How nice would it be to have batteries that don&#x27;t discharge and air filters that don&#x27;t clog up! In these cases, we only use clay systems because we have no other option.</p>
<p>Other times, though, clays are indispensable <em>because</em> of their pliability, not despite it. The process of adaptation of a species or ecosystem to its changing environment—without significantly damaging that environment—is a hallmark of evolution, and those systems need to be clay-like to make that happen.</p>
<p>On a smaller scale, learning is possible only for systems in the clay quadrant. A student wants to be changed by the information given to them, and, at least initially, strives to be able to reapply it as accurately as possible without undue alterations. Similarly, an LLM is designed to be molded by the observation of enormous amounts of training data, which is itself left unaltered by the process.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2_migrations.webp" alt="Examples of clay migrations." title="Trajectories of certain systems from the &#x27;clays&#x27; quadrant to other quadrants. Not mentioned in the text is the example of &#x27;a market vs a new technology&#x27;, which starts at a relatively high transparency and low conductance, then gradually shifts towards the &#x27;channels&#x27; quadrant as it reaches saturation."/><figcaption>Examples of clay migrations.</figcaption></figure>
<p>Because clays are unstable, they often tend to migrate away from that corner of the Glass Circuit Chart. For example, an A/C filter may be relatively transparent to the air from the fan, only removing microscopic particles from the flow. But as time goes on, the blocked particles will keep accumulating on the filter, gradually degrading its ability to fully transmit the air current. Eventually, if it isn&#x27;t replaced sooner, the filter will be completely blocked, and no air will be able to pass through it anymore. On the chart, this translates to a gradual descent from the upper-left quadrant to the lower-left one, a process that we could call <em>fouling</em>.</p>
<p>Another possibility is that the clay system instead moves to the right, becoming a channel. An example of that is an empty tub into which you begin pouring water. For a while, the tub will &quot;absorb&quot; the water without emitting it elsewhere because the water remains trapped inside it. If you wait long enough, however, the water will begin overflowing, and you&#x27;ll get a stable flow of water out of the tub that is identical to the flow of incoming water from the tap.</p>
<p>This is analogous to what happened to the human population when the SARS-CoV-2 virus spread around the world. For a while, humanity suffered tremendous harm, deeply changing on many levels because of the virus&#x27;s deadliness, but by the fourth year after the outbreak, most people had become resistant, and the virus itself had mutated to less harmful variants. The virus continues spreading, but its transformative effects on society have subsided, and in this sense humanity has become a channel for that specific virus. This rightward migration of a clay system might take the name of <em>saturation</em>.</p>
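<p>For the playfully inclined, the overflowing tub makes for a minimal simulation of saturation (the capacity and inflow numbers are arbitrary): while the tub fills, nothing comes out; once it saturates, the outflow settles to match the inflow, and the clay has become a channel.</p>

```python
# Minimal tub model: while the tub fills, inflow is absorbed (clay-like);
# once full, outflow equals inflow (channel-like). Capacity and inflow
# are arbitrary illustrative numbers.
def tub_outflow(capacity: float, inflow: float, steps: int) -> list:
    level, outflow = 0.0, []
    for _ in range(steps):
        level += inflow
        spill = max(0.0, level - capacity)  # water above the rim overflows
        level -= spill
        outflow.append(spill)
    return outflow

flows = tub_outflow(capacity=10.0, inflow=3.0, steps=6)
print(flows)  # early steps emit nothing; later steps emit the full inflow
```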
<h2>The Reactants</h2>
<figure><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/glass_circuits_v2_reactants.webp" alt="A replica of the Conductance-Transparency chart described before, highlighting the word-pairs in the bottom-left quadrant."/><figcaption>Click to enlarge.</figcaption></figure>
<p>Finally, we have the lower-left quadrant, which I&#x27;ll call <em>reactants</em>. I borrow this word (a bit loosely) from chemistry, where it is used to indicate chemical substances that are consumed or transformed during a reaction. In this quadrant, <em>everything</em> is transformed: neither the system itself, nor the differences that enter it, remain the same as before.</p>
<p>This category of systems is broader and more varied than the other three. As a matter of fact, the vast majority of things out there probably fall in this corner of the chart. It is the default type, because most things in the Universe change in an interaction—if anything, the other three quadrants are the special cases, with their unusual feature that at least one side of the system-input pair remains stable in the course of an interaction.</p>
<p>Channels, converters, and clays may sometimes be formed by the lucky (or guided) migration of reactants to their corners of the Chart, but that is a rare occurrence. The Universe, in other words, is a bubbling sea of change, with small islands of stability here and there.</p>
<p>Because of this extreme variety, this quadrant is the most complex and hard to model. Since everything here mutates all the time, these systems tend to be so short-lived that it may not even make sense, in practice, to define them as systems. Even the distinction between what constitutes the system and what, instead, counts as the differences traversing it becomes blurred and ambiguous here: they&#x27;re all mutable differences after all.</p>
<p>The easiest examples that I can think of, both natural and human-made, are about destruction. (So much so that I was initially tempted to call them the &quot;bombs&quot;.) One might assume that these processes, inherently uncontrollable and unpredictable as they are, can only be useful to annihilate rather than build. But there are positive examples as well: the arrival of a baby into a family or of a new member into a team, the landing of paint on canvas, the mingling of two genomes during sexual reproduction. Reactant systems are simply about total transformation, in a neutral sense. What counts as destruction or creation is a matter of point of view.</p>

<p>That concludes my brief survey of this vast systemic landscape. It feels a bit like the endearingly inaccurate world maps of the Middle Ages, which bear only the vaguest resemblance to the territory they try to depict.</p>
<p>I intentionally avoided adhering to the traditional ways of drawing boundaries between categories and disciplines—an exercise that we do, perhaps, too rarely. Who knows, maybe one day I, or someone else, will figure out how to refine these mapping techniques and make them exact. ●</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/van_ewyk.webp" alt="Nicolaas Van Ewyk, Nouvelle Carte de la Moitie Septentrionale du Globe Terrestre montrant la Variation du Compas, ou le Merveileux accord enchaine des mouvemens regles et ne jamais cessans du vivant pourvoir Magnetique; telles qu&#x27;on les a trouvez l&#x27;An 1750, David Rumsey Historical Map Collection." title="Another historical map of the north pole, this time from a more recent time and much more accurate than the one at the beginning."/><figcaption>Nicolaas Van Ewyk, Nouvelle Carte de la Moitie Septentrionale du Globe Terrestre montrant la Variation du Compas, ou le Merveileux accord enchaine des mouvemens regles et ne jamais cessans du vivant pourvoir Magnetique; telles qu&#x27;on les a trouvez l&#x27;An 1750, David Rumsey Historical Map Collection.</figcaption></figure>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/primitive-atlas-of-glass-circuits/mercator.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Luxurious Pain of Using My Time]]></title>
            <link>https://aethermug.com/posts/the-luxurious-pain-of-using-my-time?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/the-luxurious-pain-of-using-my-time</guid>
            <pubDate>Thu, 09 Jan 2025 07:31:47 GMT</pubDate>
            <description><![CDATA[Possibly just me coming out as a lucky optimist]]></description>
            <content:encoded><![CDATA[<h2>About the Pain</h2>
<p>Sometimes other bloggers read Aether Mug and are kind enough to tell me that they like it, that they&#x27;ve read several posts, and that they&#x27;ll be reading the next one. They comment on what I wrote, engage in interesting and cogent conversation, ask me follow-up questions. This obviously brings me joy, because it&#x27;s a kind of connection that I hope to spark with my writing.</p>
<p>But this joy is often tempered by a kind of pain: I seldom reciprocate by following <em>their</em> blogs. This is not necessarily because I don&#x27;t appreciate their writing. I won&#x27;t even follow blogs that I genuinely find intriguing and well-crafted. I may read a couple of their posts because I want to be able to expand the conversation. More often than not, though, I decide not to add them to my regular reading feed.</p>
<p>This is a painful decision every time, and not only because it can make me feel unfair or rude (&quot;I&#x27;m too picky to follow your writing, but you can follow mine if you want!&quot;). It stings because it is a very conscious sacrifice on my part, a loss that I impose on myself because <em>I have too many things to read already</em>.</p>
<p>This is a manifestation of a bigger phenomenon encompassing all forms of reading. People and algorithms recommend fantastic <a href="/posts/you-don-t-have-time">books</a>, essays, and articles, and I want to read them all. And the more I read, the more ideas I get for further things to read.</p>
<p>There seems to be an inexorable law of nature at work: reading lists always grow faster than you can devour them.</p>
<p>And it&#x27;s not even something limited to reading or consuming content, either: I have bucket lists of all kinds—places to visit, languages to learn, skills to master—and they are always overwhelming. I&#x27;m forced to pick and choose, and that is where the pain arises.</p>
<p>In part, this is only an apparent problem. Those blogs and books, those places and activities already existed out there, and my becoming aware of them doesn&#x27;t really change anything. It&#x27;s a bit silly to feel mortified for missing something that I would have blissfully missed if only I had never known about it. On the other hand, whether I know all of it or not, it&#x27;s a real bummer not to be able to read and do everything I would like to read and do in my lifetime.</p>
<p>For this reason, I consider this an important topic, even if it is composed of tiny, almost insignificant choices like what to point my eyeballs at in a given half-hour.</p>
<p>That might sound like existential anxiety, but it... doesn&#x27;t feel like that to me. I choose not to do certain inviting things—instead of gleefully throwing myself at whatever sounds fun and interesting at the time—because it is rewarding to do so, every time. Choosing carefully how to spend my time is liberating. It makes me feel like I&#x27;m living more intensely.</p>
<h2>The Time That Is Given Us</h2>
<figure><img src="https://aethermug.com/assets/posts/the-luxurious-pain-of-using-my-time/apianus03.webp" alt="Another circular calendar in the same style, this time depicting a blank circle without lines or drawings inside it."/><figcaption>Astronomicum cæsareum pl 003, Petrus Apianus</figcaption></figure>
<p>A long time ago, I made it a point to never use the words &quot;I don&#x27;t have time&quot; to opt out of doing something. Time is the only thing we all certainly have for as long as we live. There is no guarantee that I&#x27;ll have money, friends, health, or whatever else I desire at a given period of my life, but <em>I certainly have that period of time</em>. The time we have, however, is not infinite, so the real question is not <em>if</em> we have time, but &quot;what to do with the time that is given us&quot; (to use the words of a wizard).</p>
<p>Put differently, whether I realize it or not, at every waking moment I <em>decide</em> to give up on many activities in favor of a few, often only by default. If I tell you &quot;I&#x27;d like to exercise more/read more books/learn a new language, but I don&#x27;t have the time&quot;, what I&#x27;m really saying is &quot;I&#x27;ve considered doing that thing and decided that, despite its allure, it&#x27;s not worth sacrificing any of my current activities for it&quot;.</p>
<p>It&#x27;s like packing a holiday suitcase and finding that my volleyball &quot;doesn&#x27;t fit in&quot;: the capacity of the suitcase is not reduced by the clothes I already put in it—I may simply be unwilling to remove those necessary clothes in favor of a ball I may or may not use during my trip.</p>
<p>Consuming media is another choice of what to stuff into the Lifetime Suitcase, and, because media can be <em>really</em> interesting, making that choice can hurt.</p>
<p>Here I&#x27;m referring to a small subset of all media. Sure, there is an immense amount of trash content and pointless noise on the internet and on bookshelves—an amount growing even faster since the advent of cheap LLMs—but that&#x27;s not the problem I&#x27;m talking about. Filtering out spam and slop is relatively easy with the right tools and a little thought, at least at an emotional level.</p>
<p>The much tougher job, I think, is giving up on things that would be good, meaningful, fulfilling, and useful in order to do things that are even more so—or, to be precise, to do things that are better aligned with what I really care about right now. The hard part is dealing with the fact that, whatever I may try, I will never get to do the vast majority of those amazing activities.</p>
<h2>Dealing With It</h2>
<figure><img src="https://aethermug.com/assets/posts/the-luxurious-pain-of-using-my-time/apianus20.webp" alt="Another circular calendar in the same style, this time depicting a green dragon with spread wings flying over 13 green dragons arranged at equal angles all around the circle."/><figcaption>Astronomicum cæsareum pl 020, Petrus Apianus</figcaption></figure>
<p>I&#x27;ve found that the most satisfying way to manage my time is to make those choices explicit. As often as I can, I avoid defaulting to the first activity that captures my attention or seems interesting enough. Not because those are always the wrong things to do, but because it is so easy to forget about the options that are not currently at the front of my mind.</p>
<p>Purpose is key here. The kind of deliberate choice I&#x27;m talking about is only possible when I know what I want, and it gets easier and easier the clearer I am about my goals.</p>
<p>It doesn&#x27;t matter what purposes I have in mind during a given period of my life, or how relatable they would be for others. Am I looking to switch to a job or industry I prefer? To paint a picture or craft a sentence that will leave at least one person forever changed? To laugh wholeheartedly every single day? To make a thousand people happier? To build a lifestyle that is not hindered or thwarted by a medical condition I have?</p>
<p>Invariably, I have not one but a constellation of many goals. It doesn&#x27;t need to be about the One Scary and Difficult Life Purpose. Even assuming such a purpose exists, life is not a global optimization game. That&#x27;s why I talk about <em>goals</em> in the plural. There are always several things I&#x27;m aiming for, and I think it&#x27;s important for me to inventory them regularly. Once I have them in mind, it&#x27;s not too hard to discern which activities will leave me feeling fulfilled and happy with the sacrifices I made.</p>
<p>The <em>relationship</em> between those purposes is just as important. Each goal is valuable in its own right, but some are more valuable than others. Maybe I really want to learn Korean, but pursuing that goal properly would hurt my goal of spending quality time with my family. If I know and believe that the latter purpose is more important to me than the former, my decision to give up Korean—even only for a while—is the right sacrifice for me. It took me a while but, at some point, I realized that understanding the <em>hierarchy</em> of my goals at a given time is the only way to make good decisions of this sort.</p>
<p>Some might find this line of thinking stressful, like treating one&#x27;s whole life as a job. In the sense that I apply reason to &quot;manage&quot; my time, maybe that&#x27;s an apt analogy. But the implication that this would lead to anxiety says more about those people&#x27;s view of work than about the approach I described above.</p>
<p>First, working for myself feels very different from working for someone else. Having conscious control of what I do allows me to remain focused on what <em>I</em> want, not what some boss or shareholder requires me to do. Second, if the &quot;sounds-like-a-job&quot; analogy is accurate in any meaningful way, it is the kind of job in the advice attributed to Mark Twain: &quot;find a job you enjoy doing, and you will never have to work a day in your life.&quot;</p>
<h2>Grateful Pain</h2>
<figure><img src="https://aethermug.com/assets/posts/the-luxurious-pain-of-using-my-time/apianus24.webp" alt="Another circular calendar in the same style, this time depicting a smiling sun-face with rays of light emanating in all directions inside the circle."/><figcaption>Astronomicum cæsareum pl 024, Petrus Apianus</figcaption></figure>
<p>I suffer a little when I have to choose what to read and what to leave unread. I struggle a little every time I enter a bookstore (my favorite kind of place), every time I receive a good but not spot-on book recommendation or gift, every time I find another hidden gem of a blog that scratches an itch located too low on my &quot;hierarchy of purposes&quot;.</p>
<p>Still, it gets easier. The more I exercise my right to carefully manage my time based on my goals and values, the more instinctive those choices become. Sometimes they are almost automatic. Nowadays, for example, I naturally lose interest in video games that turn out to be &quot;just fun&quot;, and dropping them has become almost effortless. I&#x27;ve also learned to abandon a book midway through—something that once caused me huge cognitive dissonance—when I realize it&#x27;s not resonating with me.</p>
<p>At the same time, I increasingly experience the immense freedom of being aligned with my own compass, and I feel it more intensely than the pain of giving up good things. This further convinces me that it&#x27;s a lucky pain to have, the happy kind of pain, a luxury to be grateful for. It is a privilege to live in a world and situation where I have the means and options to choose among endless great ways to spend my time. It may not be easy to navigate, but it&#x27;s not something I will ever complain about.</p>
<p>I don&#x27;t believe the pain of turning down amazing opportunities will ever completely disappear, because of how human nature works. It comes with the territory, like the frequent soreness in an athlete&#x27;s body. But I believe it&#x27;s a pain worth treasuring. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/the-luxurious-pain-of-using-my-time/apianus28.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[A Kanji Always Pays Her Debts]]></title>
            <link>https://aethermug.com/posts/a-kanji-always-pays-her-debts?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-kanji-always-pays-her-debts</guid>
            <pubDate>Tue, 10 Dec 2024 10:52:15 GMT</pubDate>
            <description><![CDATA[In defense of steep learning curves]]></description>
            <content:encoded><![CDATA[<p>I often find myself having to take an elevator in a sparsely populated wing of a medical research facility. When I do, I go up and down more than a dozen floors completely alone, because no one else ever seems to use that specific elevator. So I stand there in the metal box, look around for something to contemplate, and my eyes fall on this elegant floor guide beside the doors.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-kanji-always-pays-her-debts/elevator_1.webp" alt="" title="Picture of a wooden panel with a carved list of floors, each floor with a list of medical disciplines."/></figure>
<p>It&#x27;s beautiful, and its contents are fascinating. It is essentially a Japanese-English dictionary of medical disciplines. The thing that captures my attention is how differently these two writing approaches work for unfamiliar words. Most of those English-language terms are somewhat familiar to me, but I&#x27;m fuzzy on the meaning of some of them. There is one that, at first, I have no idea what it is about: &quot;cytoarchitectonics&quot;. It must have something to do with architecture, but what of?</p>
<p>Fortunately, I can peek at the five <em>kanji</em> 細胞構築学 written next to the English word, and instantly everything is clear. These are all easy <em>kanji</em> that Japanese children learn early on, and that I&#x27;ve seen myself many times before. Here, the Japanese term is made of three common &quot;words&quot; (strictly speaking, the first two are &quot;compounds&quot;, combinations of characters, and the third is a standalone character):</p>
<ol>
<li>細胞 (<em>saibou</em>, cell),</li>
<li>構築 (<em>kouchiku</em>, construction), and</li>
<li>学 (<em>gaku</em>, &quot;discipline&quot;).</li>
</ol>
<p>It must be the study of how cells are built and structured!</p>
<p>If the English term carved in the wooden panel had been &quot;cell architectonics&quot;, it would have had the same clarity as the Japanese. Instead, it was cytoarchitectonics, a rather scary word. Looking that up, I learned that the &quot;cyto-&quot; prefix comes from the Ancient Greek word <em>kútos</em>, meaning &quot;container&quot;. Now that I think about it, that prefix is used a lot in biology to mean &quot;cell&quot;—for example, cytoplasm, the contents of a cell, and cytoskeleton, the protein filaments that give shape and solidity to a cell. But the link between cytoarchitectonics and those other terms didn&#x27;t occur to me in the elevator, and I&#x27;m not very well versed in Ancient Greek.</p>
<p>This is a small peek into why I think scripts using Chinese characters are more effective than some people give them credit for.</p>
<p>There are, of course, many who love <em>kanji</em> and <em>hanzi</em>, but I&#x27;ve often heard the claim that these writing systems are—at best—quaint relics of the past, over-complicated and poorly adapted to our modern needs. At worst, they hold back the cultures that use them, and should be abolished in favor of more &quot;efficient&quot; systems like an alphabet or the Korean <em>hangul</em>.</p>
<p>I&#x27;m not going to argue against the power of phonetic systems. Their versatility and extreme learnability need no defense. <em>Hangul</em> is fantastic: simple, unambiguous, and nearly as beautiful as old-style Chinese characters themselves. The point I want to make is another one: practically speaking, <em>kanji</em> have clear downsides in their numbers and complexity, but those same properties offer benefits that cannot be replicated by phonetic symbols alone.</p>
<p>Here is why.</p>
<p>(Most of what follows holds, of course, for the <em>hanzi</em> characters used in China—but I&#x27;ll stick to Japanese because that&#x27;s what I know.)</p>
<h3>Open-Book Etymology</h3>
<p>The Japanese word for cytoarchitectonics was more helpful than its English counterpart, and that is not a coincidence. Once you&#x27;ve done the hard work of learning the most common two to three thousand <em>kanji</em>, you&#x27;ll find that you&#x27;re able to infer the meaning of most compound words, even those you have never seen before.</p>
<p>Try this game yourself. Look at the following list of terms from the floor guide (without peeking at the photo!), and the meaning of the individual characters that compose them.</p>
<table><thead><tr><th></th><th>Japanese</th><th>Meaning of Each <em>Kanji</em></th></tr></thead><tbody><tr><td>1</td><td>免疫学</td><td>discipline of + avoid + diseases</td></tr><tr><td>2</td><td>法医学</td><td>discipline of + law + medic</td></tr><tr><td>3</td><td>病理学</td><td>discipline of + sickness + rules</td></tr><tr><td>4</td><td>病理診断学</td><td>discipline of + sickness + rules + [medical examination] + decide</td></tr><tr><td>5</td><td>生体構造学</td><td>discipline of + living + body + [structure]</td></tr><tr><td>6</td><td>生理学</td><td>discipline of + living + rules</td></tr></tbody></table>
<p>(The terms in square brackets are commonplace Japanese compounds themselves, so I give the meaning of the whole compound for those).</p>
<p>Then try to guess to which of the shuffled English terms that follow they correspond. If you don&#x27;t know what those words mean in English, feel free to look them up on Wikipedia.</p>
<table><thead><tr><th></th><th>English</th></tr></thead><tbody><tr><td>A</td><td>Forensic Medicine</td></tr><tr><td>B</td><td>Structural Biology</td></tr><tr><td>C</td><td>Immunology</td></tr><tr><td>D</td><td>Physiology</td></tr><tr><td>E</td><td>Diagnostic Pathology</td></tr><tr><td>F</td><td>Pathology</td></tr></tbody></table>
<p><em>The answers to the matching game are at the bottom of this page.</em></p>
<p>How did it go? Note how you don&#x27;t need to be able to pronounce those words to extract their meaning. This realization should give a bit of relief to beginning Japanese learners, who are inevitably daunted by the bonkers number of possible pronunciations for each character.</p>
<p>In other words, if you know the meaning of those characters, they&#x27;ll offer you <a href="/posts/in-japanese-you-need-a-dictionary-to-count-things">the etymology of the compound words</a> that contain them on a silver platter.</p>
<p>In Western languages like English, French, or Italian, there are always several possible ancient sources of meaning you could be dealing with. Taking English as an example, its words could come from Latin, Greek, Anglo-Saxon, Old Norse, Celtic languages, and sometimes a mixture of these. In order to guess the meaning of an unfamiliar word, you would have to pay close attention to how other words you know are composed, distinguish the meanings of those strange prefixes and suffixes, and put them together in the new combination. And that is only if you&#x27;re lucky enough to be dealing with a word that hasn&#x27;t been altered or fused together in some contorted way by language evolution.</p>
<p>Compare that with Chinese and Japanese, where the components are atomic blocks of meaning. While their <em>pronunciations</em> may have had as interesting and confusing a history as any English word, the <em>meaning</em> is its own source. Because of their modularity, these logograms don&#x27;t tend to merge and morph as freely as the sounds in the phonetic systems. The etymology, thus, is often an open book for the reader.</p>
<p>Alternatively, you could say that phonetic writing systems are particularly bad at producing clear, self-evident jargon.</p>
<figure><img src="https://aethermug.com/assets/posts/a-kanji-always-pays-her-debts/ryden_en_jp.webp" alt="Book covers of the original English and the translated Japanese book Introduction to Cosmology."/></figure>
<p>As a side note, this hidden power of <em>kanji</em> is what allowed me to try a rather scary experiment back in graduate school, in Rome. I was almost three years into my Japanese studies, and it had been two years since I had memorized the meanings of the most commonly-used <em>kanji</em>. At the time, I was able to read manga and novels with the help of a dictionary, but I felt I needed a bigger push. A Cosmology exam came up, and the classes—held in Italian—were based on <em>Introduction to Cosmology</em>, a textbook by Barbara Ryden. Of course, we were supposed to buy the book and study it for the big exam, consisting of a written test and the kind of oral grill-interrogation typical of Italian universities. When I found out that Ryden&#x27;s textbook had been translated into Japanese, I couldn&#x27;t resist. I bought it, thinking I was probably wasting my money, but was surprised to find that all those fancy scientific terms were instantly recognizable from their <em>kanji</em>.</p>
<p>Of course 力学, or &quot;discipline of + force&quot;, means &quot;dynamics&quot;! Of course 重力 (weight + force) means &quot;gravity&quot;, 星雲 (star + cloud) &quot;nebula&quot;, 運動エネルギー (movement + energy) &quot;kinetic energy&quot;, and 等方的 (of + equivalent + directions) &quot;isotropic&quot;!</p>
<p>I was astonished. I had expected the technical terminology of graduate-level physics to be the main challenge when reading that book, and instead it was entirely intelligible, logical, self-explanatory. If anything, the textbook was <em>easier</em> to read than a novel. I never bothered buying the English version. I studied the whole course on the Japanese book and passed the exam without problems—except perhaps a couple of curious looks from the professor.</p>
<h3>Unknown Words</h3>
<p>Being able to easily match Japanese and English words like that may seem like a small perk. But the benefits extend to words that don&#x27;t necessarily have exact equivalents in another tongue.</p>
<p>In most cases, in Japanese you&#x27;re not even expected to know the rare composite words in a text—unless you want to reuse them yourself. More often than not, the moment you encounter a new compound you have enough context to triangulate with the meaning of its <em>kanji</em> and &quot;get it&quot; without the help of a dictionary.</p>
<p>For example, this is the first sentence of the essay &quot;Chewing Gum&quot; by Terada Torahiko, published in 1932:</p>
<blockquote>
<p>銀座を歩いていたら、派手な<strong>洋装</strong>をした若い女が二人、ハイヒールの足並を揃えて遊弋（ゆうよく）していた。そうして二人とも美しい顔をゆがめてチューインガムをニチャニチャ噛みながら白昼の都大路を<strong>闊歩</strong>（かっぽ）しているのであった。</p>
</blockquote>
<blockquote>
<p>While walking in Ginza, I saw two young women dressed in flashy (Western + attire), strutting along in high heels. Both of them were (broad/bold + walk) down the busy streets of the city in broad daylight, their beautiful faces contorted as they chewed gum noisily.</p>
</blockquote>
<p>In the translation, I picked two relatively uncommon compounds and replaced them with the meanings of their separate <em>kanji</em>. Someone educated enough to read a daily newspaper would know those characters independently, even if they had never seen them combined in those specific ways. Put like this, I don&#x27;t think I even need to tell you what those two words &quot;mean&quot;. It comes almost for free.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-kanji-always-pays-her-debts/kanji_guessing.webp" alt="" title="The first word, yousou, means Western clothes, the second, kappo, means to walk boldly."/></figure>
<p>This fact is a boon for intermediate and advanced students of Japanese with a good number of <em>kanji</em> under their belt, because the writing system gives them a lot of help in learning new vocabulary.</p>
<p>Like with all other languages, the reader has the context of what they&#x27;re reading to constrain the possible meanings of the new word. But, in addition to that, the <em>kanji</em> restrict the space of possibilities even further, often to the level of unambiguous certainty. And, finally, uncommon words usually come with <em>furigana</em>, the pronunciation of the word written in small phonetic script next to the corresponding <em>kanji</em>. This little package of information teaches you everything you need to know about the word, and it does so in the most effective way possible: as an organic part of living, working language. No one ever needs to learn aseptic word lists for Japanese.</p>
<p>(About the other amazing effects of this meaning/pronunciation duality in Japanese I&#x27;ve written already <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">here</a>.)</p>
<p>Of course, you might occasionally find a <em>character</em> you don&#x27;t know, as opposed to a novel <em>compound</em> of known characters. In this case you might be stumped, just like you are when you find an unknown word in English. But there are many fewer <em>kanji</em> than there are English words, so this situation is relatively rare as you go beyond the intermediate level.</p>
<p>As far as I understand, this ease of acquiring new &quot;words&quot; exists in Chinese, too, given that it is the original source of this writing approach. But there are many more <em>hanzi</em> characters in circulation in modern Chinese than there are in Japanese, so the experience of being stumped by unknown logograms might be a little more common.</p>
<h3>Front-Loading</h3>
<p>Saying that meaning-based characters are a more primitive or less efficient form of writing is like trying to argue that humans are more &quot;evolved&quot; than fish.</p>
<p>At a superficial level, it seems obvious: people are much smarter than fish, and an alphabet can be learned with a tiny fraction of the effort that it takes to learn enough <em>kanji</em>. Compare a fourth-grade student in the UK with one in Japan, and you&#x27;ll find the former able to read out loud—if not fully understand—any book intended for an adult audience, while the latter will get lost very soon in the sea of unknown characters.</p>
<p>But, of course, fish are <em>precisely</em> as &quot;evolved&quot; as humans, and as all the other species currently alive on Earth. We all have an evolutionary history of the same duration, and have always had to adapt and survive in a harsh environment. If fish are dumber than people, it is not because they are more &quot;primitive&quot;, but because they don&#x27;t <em>need</em> to be any smarter to thrive as gloriously as they do in every body of water on the planet. Similarly, the various ways people write today have survived until now because each of them works precisely as it should.</p>
<p>So, learning a gazillion <em>kanji</em> is a PITA. It&#x27;s a feat that takes time and a good amount of grit even for an adult, while any child has their A-B-C securely in their head before turning 7. This comparison is unfair, though. Learning those gazillion <em>kanji</em> also equips you with a huge amount of modular, consistent &quot;semantic bits&quot; equivalent to what we call words. A more apt comparison, then, would be between learning the <em>kanji</em>, on the one hand, and learning the alphabet <em>plus</em> a few thousand more words on the other.</p>
<p>But the difference goes deeper than that: knowing the <em>kanji</em> also gives you <a href="/posts/what-if-gratitude-was-built-into-the-grammar">the toolkit you need to</a> build and parse any number of novel combinations of those symbols, multiplying your reading power. The effort needed to learn new words seems to follow two very different curves for Japanese and &lt;insert alphabet-based language here&gt;:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-kanji-always-pays-her-debts/chart.webp" alt="" title="Cartesian chart showing two curves. The x axis represents the number of words known to the learner, while the y axis represents the effort necessary to understand new words. One curve, marked &quot;Japanese&quot;, has a very pronounced peak very early on, tapering down in a long tail going towards zero to the right. The other curve, marked &quot;English&quot; instead raises quickly to a moderately low level, then levels out and eventually ends up being higher than the other curve on the right side of the chart."/></figure>
<p>The effort in Japanese (and Chinese) is heavily front-loaded, but it pays back once you&#x27;re past the bump. A <em>kanji</em> always pays her debts.</p>
<p>Eventually I reach my destination floor, the elevator&#x27;s doors open, and I step out. ●</p>

<p><em>Answers to the word-matching game: 1-C, 2-A, 3-F, 4-E, 5-B, 6-D.</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-kanji-always-pays-her-debts/ryunosuke-kikuno-uZmVTd4Arzo-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[See the Living Loops]]></title>
            <link>https://aethermug.com/posts/see-the-living-loops?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/see-the-living-loops</guid>
            <pubDate>Fri, 06 Dec 2024 06:16:45 GMT</pubDate>
            <description><![CDATA[Some control comes for free]]></description>
            <content:encoded><![CDATA[<h2>The Muddy Middle Ground</h2>
<p>In a recent Plankton Valhalla <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">essay</a>, I made the bold claim that recursion—things influencing their own future through their behavior—is at the core of everything we can know and reason about. &quot;Feedback,&quot; I wrote, &quot;makes recursion possible, and recursion is what makes the unlikely likely.&quot; That <em>unlikely-made-likely</em> thing that a feedback loop produces, whatever it is, I call a Water Lily. The reason I wrote a long essay, full of examples, to emphasize that point, and the reason I coined that strange term, is that I believe it is a very important framing to have even in our daily lives. Feedback loops aren&#x27;t scientific curiosities or rare phenomena that we might read about occasionally and then forget: they are everywhere, and we should all be intimately familiar with how they work.</p>
<p>I&#x27;ll repeat the definition of a Water Lily once more, because it will come up frequently in what follows.</p>
<blockquote>
<p>A <strong>Water Lily</strong> is the particular state, out of many possibilities, that a system tends toward as the result of a <strong>feedback loop</strong>.</p>
</blockquote>
<p>In order to understand these supposedly ubiquitous loops, we need to <em>see</em> them, and seeing them can be very tricky at times. Some are easy to spot—we often talk of &quot;downward spirals of depression&quot; and &quot;virtuous cycles of economic growth&quot;, for example—but what about all the others?</p>
<p>In the essay, I provided plenty of examples of non-living, natural feedback like smooth pebbles and &quot;tidy stars&quot;. These are fascinating and important, but they tend to be less relevant at the day-to-day level. I also explained the kind of feedback that is possible at the other extreme of the spectrum, something called &quot;control mechanisms&quot;. Here the feedback loop is clearly a repeating sequence of <em>observing</em>, <em>measuring the discrepancy from a goal</em>, and <em>acting to remove the discrepancy</em>. Control, in this sense, is extensively studied in various fields of knowledge, from robotics to biology to sociology, and much has already been written about it.</p>
<p>But it is especially the murky middle ground between those two extremes that can create confusion: what about living and engineered feedback loops that are <em>not</em> &quot;control&quot; in the proper sense of the term? What about goal-less processes that nonetheless seem to be achieving useful outcomes? That is what I will try to illuminate below.</p>
<p>I think that most people&#x27;s intuition is that a brain is necessary to achieve anything useful and difficult in this harsh world. And if not a full-blown brain, surely an organism needs at least a nervous system to sense the world and make decisions about what to do about it, or so the thinking goes.</p>
<p>Not so, because—to expand on a point already touched upon in the essay—feedback is much more fundamental than that.</p>
<p>All living things are powered by some kind of &quot;<a href="/posts/embedded-prophesy-devices">embedded prophesy devices</a>&quot;, i.e. internal models of the world. To remain stable or achieve an unlikely state (a Water Lily), an organism must be able to predict the future to some extent and choose its course of action based on those predictions. But prediction is impossible without <em>some</em> form of input from the environment. So the question becomes what that feedback, and those prophecies, could possibly look like when no brain and no nerve-based senses are involved. Some examples will bring us closer to an answer.</p>
<h2>Always Start with the Dead Ones</h2>
<p>When talking about feedback, the easiest place to begin is usually non-living, human-made stuff.</p>
<figure><img src="https://aethermug.com/assets/posts/see-the-living-loops/nick-fewings-P75xNKgeLag-unsplash.webp" alt="An iron weather vane with the shape of a sailboat riding an arrow, and the four cardinal directions N, S, E, W."/><figcaption>A weather vane. Photo by Nick Fewings, Unsplash.</figcaption></figure>
<p>Consider, for example, the weather vane. It is essentially a flat surface attached to a rotating pole. When the wind is strong enough, the pressure from the incoming air creates a force that turns the weather vane, with a strength proportional to its current angle relative to the airflow. This is clearly a feedback loop, its Water Lily being the state of aligning with the wind&#x27;s direction. Since this Water Lily was explicitly chosen by a human designer, it belongs to the category of Water Lilies known as &quot;goals&quot;.</p>
<p>Notice how this simple system, devoid of brains, sensors, or any kind of wire, can correct the discrepancy between its Water Lily state and the current state.</p>
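<p>The weather vane&#x27;s error-correction can be sketched in a few lines of code. To be clear, nothing below comes from the essay: the function name, the constants, and the linear restoring-torque model are all simplifying assumptions of mine, chosen only to show how a Water Lily can emerge from bare physics.</p>

```python
def settle(theta0=2.0, wind=0.0, k=1.0, damping=0.8, dt=0.01, steps=5000):
    """Toy weather vane: the wind applies a restoring torque
    proportional to the misalignment angle (a crude linearization),
    plus some air damping. No sensor, no controller."""
    theta, omega = theta0, 0.0  # angle (rad) and angular velocity
    for _ in range(steps):
        # Each pass of the loop is one cycle of unthinking "feedback":
        # the bigger the misalignment, the harder the push back.
        torque = -k * (theta - wind) - damping * omega
        omega += torque * dt  # Euler integration of the rotation
        theta += omega * dt
    return theta
```

<p>Whatever the starting angle, <code>theta</code> ends up at <code>wind</code>: the goal state is baked into the structure of the system rather than computed anywhere.</p>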
<p>Other examples include the moving bicycle&#x27;s tendency to remain vertical on its own and the well-folded paper airplane&#x27;s ability to keep gliding steadily, without flipping, until it touches the ground. These sensor-less feedback loops are called <em>passive control</em> in engineering because there is no &quot;conscious&quot;, active deliberation on the best course of action: they just seem to &quot;know&quot; what to do automatically.</p>
<p>Passive control mechanisms are the unsung heroes of so much modern (and ancient) technology, humbly supporting fancier cybernetic advances while being mostly overshadowed by them. With some practice, you can learn to spot them literally everywhere. They were the first &quot;control&quot; processes to be invented (see James Watt&#x27;s famous governor in steam engines). They are in every analog mechanism (like car dampers and roly-poly dolls), but also in those that are actively controlled, which we&#x27;ll return to later.</p>
<p>Take a moment to consider just how <em>unlikely</em> all those outcomes would be otherwise—the bicycle staying vertical, the weather vane&#x27;s arrow pointing at the &quot;N&quot; sign precisely when the wind comes from the north, the airplane not randomly tumbling like a dead leaf. It is through these recurring cycles of gradual and unthinking &quot;error-correction&quot; that they occur reliably enough for everyday use.</p>
<h2>Of Course, Evolution Was There First</h2>
<p>Moving on to living things, the most obvious examples are plants, which are known to adapt and react to their environment even without neurons or central processing units. But I have already described how <a href="https://planktonvalhalla.com/20231207-there-is-no-script-in-the-life-of-a-water-lily/" rel="nofollow noopener noreferrer" target="_blank">there is no script in the life of a Water Lily</a> before, so here are other examples.</p>
<p>The albatross, the greatest voyager of the skies, can soar for days at a time without expending much energy at all: it has tendons capable of locking its wings into a spread-open position, like an umbrella, and they stay open even against the constant pushing and tugging of the wind currents. Once extended like that, the wings also participate in other passive stabilizing feedback loops, similar to those that keep a paper plane aloft, so the bird doesn&#x27;t have to worry too much about sudden pressure dips or turbulence.</p>
<p>Evolution has produced many other recursion-based processes. Scientists have <a href="https://liaolab.com/wp-content/uploads/2020/10/2006Beal_etal.pdf" rel="nofollow noopener noreferrer" target="_blank">found</a>, for example, that trout are able to swim upstream <em>even while dead</em>, purely because of the hydrodynamic properties of their bodies.</p>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/_ZBWnhzYvts?start=20" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<p>Sea sponges, among the very few animals without a nervous system to speak of, can still detect when there is too much silt in their pores and activate a sort of slow-motion sneeze to expel it. This happens because silt triggers additional production of calcium in their surface cells, which in turn induces calcium production in nearby cells, and so on in a cascade that makes them all contract in turn over the course of half an hour, freeing the sponge of the unwanted materials.</p>
<figure><img src="https://aethermug.com/assets/posts/see-the-living-loops/jody-confer-HJxU8BUFgNc-unsplash.webp" alt="Seven furry bats cuddling together as they hang from near the ceiling of a wooden house."/><figcaption>Bat toes have a tendon-locking mechanism to keep them hooked upside down. The harder the pull of gravity, the deeper the hook sinks in its perch. It requires no energy to work, and it keeps working even if they die in that position. Photo by Jody Confer, Unsplash</figcaption></figure>
<p>Likewise, the human skin (to pick one example out of many in the human body) adapts to the outside conditions independently of any nervous activity. Soon after I started playing badminton a couple of years ago, a callus formed precisely at the spot on my right palm where my grip on the racket is tightest. The stress applied to those skin cells by the racket encouraged them to produce much more keratin, a tough, fibrous protein that accumulates to form what we call a callus. A bit more keratin allows me to play badminton a little longer without rupturing my skin, which in turn induces more stress and more keratin production, and so on until the callus is thick enough to completely shield my skin cells from the wear and tear of my (amateurish) smashes.</p>
<p>Here, again, we see the same pattern of unlikely things—bats and soaring birds not crashing down at the first vibration, clean sea sponges, skin stronger in all the right spots—happening regularly and reliably thanks to self-enforcing mechanisms, completely independent of active control or commands from specialized &quot;thinking organs&quot;. These are the Water Lilies that give shape and stability to the most fundamental of living functions.</p>
<h2>The Middle of the Middle of the Way</h2>
<p>You may not be surprised to learn that the boundary between &quot;passive&quot; and &quot;active&quot; control is very fuzzy. Many technological and biological processes straddle the two approaches. A modern aircraft in flight, for example, is stabilized both by its inherent aerodynamic shape—technically called <em>positive static stability</em>—and by the complicated operation of many sensors and flaps—i.e., traditional control.</p>
<p>Or think about how urination works: the conscious &quot;I need to go&quot; impulse arises in your brain after the bladder is filled beyond a certain threshold, before which the control of the urethra valves is automatic and &quot;locked&quot; almost like the albatross&#x27; wings. This allows you to forget about that passive feedback loop most of the time, while giving you almost full control when the bladder is filled. It&#x27;s a clear handover from an evolutionarily tuned <em>Water Lily</em> (you not wetting your pants) to a conscious <em>goal</em> (&quot;I will do it once I get to the W.C.&quot;): the &quot;mechanical&quot; parts are the same in both phases, but they&#x27;re driven by different &quot;circuits&quot; and computational systems.</p>
<p>An E. coli bacterium is another interesting example because it showcases a very common kind of cellular feedback loop positioned halfway between the works-without-batteries approach of a locking tendon and the sensor-actuator approach of a person threading a needle.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>E. coli does have something akin to elementary &quot;sensors&quot;, similar to the eyes or ears of an animal, despite not having any nerves (it is a single cell, after all); it relies on a simple but ingenious protein-based mechanism. <a href="https://youtu.be/LgPDOSou1tw" rel="nofollow noopener noreferrer" target="_blank">This excellent video</a> by W. W. Norton &amp; Company explains it more clearly than I can hope to achieve in words, but here is the gist. E. coli can rotate its flagella (hair-like motors) in two modes, a <strong>random tumbling motion</strong> and a <strong>straight-line motion</strong>. In stable conditions, the switching between these two modes is random, leading to an aimless movement going nowhere in particular. But if, during one of its straight lunges, the concentration of nutrients happens to increase, this will trigger a chain reaction in the movement and production of molecules inside the bacterium, which in turn inhibits the flagella from switching back to the tumbling mode.</p>
<figure><img src="https://aethermug.com/assets/posts/see-the-living-loops/ecoli_chemotaxis.webp" alt="A diagram of the zig-zag path taken by an E. coli bacterium in a nutrient gradient. It seems random, but the paths that happen to be in the direction of increasing gradient are longer, leading to it getting closer to the food source."/><figcaption>When the bacterium is moving towards the food source, the length of its straight paths is longer. Frame from the linked video.</figcaption></figure>
<p>This process means that the cell is more likely to keep swimming straight ahead if it happens to be moving toward a source of nutrients, but will otherwise change direction frequently. This has a clear effect of feedback-induced skewing of probabilities: even though the cell <em>always</em> changes direction randomly, and it doesn&#x27;t have any idea what it&#x27;s doing, the probability of it getting closer to its source of sustenance (its Water Lily) is higher than that of moving away.</p>
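<p>This probability-skewing is easy to reproduce in a toy simulation. Again, this is my own illustrative sketch, not anything from the essay or from real chemotaxis data: I assume a one-dimensional world where nutrient concentration simply equals position, and made-up tumbling probabilities.</p>

```python
import random

def run_and_tumble(steps=20000, seed=0):
    """Toy 1-D run-and-tumble walk: the cell keeps its current heading
    for longer when the nutrient concentration (here just x itself,
    so food lies toward +x) increased since the last step."""
    rng = random.Random(seed)
    x, heading = 0.0, 1
    prev_conc = 0.0
    for _ in range(steps):
        x += heading * 0.1  # a straight "run" segment
        conc = x            # concentration ~ position, by assumption
        # Tumble (pick a random new heading) less often when things
        # are improving -- the only "memory" is the last comparison.
        p_tumble = 0.1 if conc > prev_conc else 0.5
        if rng.random() < p_tumble:
            heading = rng.choice([-1, 1])
        prev_conc = conc
    return x

print(run_and_tumble())  # drifts toward +x on average
```

<p>Every direction change is still random; only the <em>duration</em> of the runs is modulated, yet the walk drifts reliably toward the food.</p>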
<p>This type of feedback is simple enough to recognize as a slightly fancier version of a weather vane, passively turning in the wind. At the same time, it involves a few intermediate steps to function, in the form of proteins within it being activated and moving molecules around. This is reminiscent of how long chains of nerves carry information around through electrochemical means.</p>
<p>Here, then, is the key insight: there is no sharp division between &quot;passive&quot; and &quot;active&quot; control, but there is a gradation in how <em>direct</em> or <em>indirect</em> the error-correcting mechanism is—how many steps it involves.</p>
<p>The common theme we see in all these examples is that the model of how the world works is <em>built into the structure</em> of those systems, and it operates based on basic physical laws. Contrast this with brain- and computer-based control, where there is this thing we call &quot;information&quot; being shuttled around by axons or integrated circuits, stored somewhere in a rather abstract form, and somehow &quot;processed&quot;. The brain-based approaches seem to have their own laws and principles of operation that are somewhat independent of the mechanical functioning of the organism. But the conclusion of the previous section was that the brain approach and its more passive counterpart aren&#x27;t really <em>that</em> different: they simply have (vastly) different levels of indirection.</p>
<p>Some of our oldest technologies, like mercury thermometers and hourglasses, rely on low-indirection feedback loops to be useful. And low-indirection &quot;computation&quot; is at the base of a new trend in robotics—possibly the most control-focused discipline of all—called <em>morphological computation</em>. That&#x27;s a fancy name for an ancient idea: don&#x27;t over-engineer a control mechanism when you can make your machine solve a problem by its shape alone. Here is a glimpse of that idea in action: a robot capable of walking down a slope using no power or sensors at all.</p>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/HwlKouopjqM?start=0" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<h2>Seeing the Prophecies</h2>
<p>In what way are all these low-indirection feedback loops &quot;prophesy devices&quot;? How are they &quot;models of the outside world&quot;?</p>
<p>I hinted at the answer in a previous post:</p>
<blockquote>
<p>Those models are not the same as the physical things they are modeling. In fact, they are very different from most points of view—a network of a few thousand neurons looks and feels nothing like a door handle. That&#x27;s alright. They don&#x27;t need to be the same, as long as they behave similarly enough to produce reliable prophecies when and where you need them.</p>
<p>— From <a href="https://aethermug.com/posts/embedded-prophesy-devices" rel="nofollow noopener noreferrer" target="_blank">Embedded Prophesy Devices</a></p>
</blockquote>
<p>To predict the future, a model only needs to capture the essential aspects of the external phenomena, and it needs to inform its parent system on how to steer towards its Water Lily for future benefit. The key is <em>correlation</em>: a good model produces an effect that &quot;happens&quot; to couple with the surrounding conditions to produce a specific, useful Water Lily in the future.</p>
<p>In most of these primitive cases, the &quot;prediction&quot; is as simple as</p>
<blockquote>
<p>&quot;The conditions an instant ago were X, so taking action Y will probably lead to &lt;Water Lily&gt; in the near future&quot;.</p>
</blockquote>
<p>This is what the weather vane does (&quot;The wind came from the north just now, so turning the arrow a little more to the north is likely to point in the direction of the wind—the Water Lily—in the next couple of seconds&quot;). It&#x27;s what the sea sponge does (&quot;Dirt is clogging my pipes, so making a sneezing motion is likely to lead to my Water Lily of being a clean sponge&quot;). And it&#x27;s what E. coli does, as well (&quot;I&#x27;m catching more nutrients than I was a few seconds ago, so keeping my current course will bring me closer to dinner&quot;).</p>
<p>I find this category of &quot;low-indirection recursive phenomena&quot; intriguing, because these systems have all the necessary ingredients without the mind-boggling complexity of brains and microchips. They allow you to focus on what&#x27;s really fundamental in these processes, without having to worry about the intricacies of computer architectures and neuroscience.</p>
<p>All these examples teach us a major truth about how <em>useful</em> feedback works in biology and engineering: far from being idle predictions, of the kind a horse-racing gambler or a political pundit might make, these models directly contribute to the achievement of their own Water Lilies. They are all self-fulfilling prophecies.</p>
<p>If that sounds tautological (the prediction is also the action that realizes it), it&#x27;s because these are <em>loops</em>. Of course they are both cause and effect. The meaning of &quot;prediction&quot; we&#x27;re more familiar with is an extension of this into the realm of higher cognition. How that extension works is precisely what the next <a href="https://planktonvalhalla.com/subscribe/" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a> essay will be about. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/see-the-living-loops/pexels-tonywright-694214.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[My Head as a Lake]]></title>
            <link>https://aethermug.com/posts/my-head-as-a-lake?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/my-head-as-a-lake</guid>
            <pubDate>Thu, 28 Nov 2024 23:54:50 GMT</pubDate>
            <description><![CDATA[Thinking about aphantasia since long before aphantasia was a thing]]></description>
            <content:encoded><![CDATA[<p><em>Marco&#x27;s note: This post is a contribution by Marie Cosgrove-Davies to Aether Mug&#x27;s <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>. I&#x27;m always very interested in other people&#x27;s stories of inner discovery, because, as Marie writes, each brain experiences the world in a different way. In my case, learning about aphantasia was a surprise. Not so for Marie.</em></p>

<p>I don’t have many memories from my life—they just don’t stick to my brain—but I remember the moment when I realized that other people’s minds were different from mine.</p>
<p>“Close your eyes, and picture yourself on a beach.” My second-grade teacher was leading the class in a visualization exercise. “You can hear the waves crash, and feel the sun on your face.” I was an obedient child, so I sat quietly, eyes closed, as… nothing happened. But the rest of the class was so still, and my teacher knew what she was doing. Something must have been happening for them.</p>
<p>As a 7-year-old, I didn’t really make much of this realization. It wasn’t until college that I started talking with other people about what went on in their minds. I developed a series of questions I’d ask: “When you’re thinking, what’s going on inside your head? Is there a voice? Are there several? Do you see pictures? Diagrams?” Usually, they’d stare at me and say they didn’t know, then come back after a few days of observation to tell me their answers.</p>
<p>I found more questions: “When you’re reading a novel, what’s happening in your mind?” People told me that a voice read them the words, that they saw images like a picture book, that they heard the characters speaking, that it played out like a movie. This all sounded remarkable and strange. Sometimes I’d ask a follow-up question about reading nonfiction, but the responses were usually less interesting (diagrams, mechanisms, tabular data).</p>
<p>Later I found one more question, to try to suss out the crispness of the pictures in their heads: “Imagine an elephant. Do you see it clearly?” “Yes,” they&#x27;d say. “Is the top of the ear above or below the eye?” Some people answered immediately; some people, who a moment ago had said they had a clear picture, were unsure. My favorite answer, from a friend whose inner life sounds way more fun than mine, was “I don’t know, it’s already flown away.”</p>
<p>I’ve had this conversation dozens of times but never bothered to look at a picture of an elephant to see what the correct answer is. For my purposes, it doesn’t matter—I’m curious about how confident people are in their answers. I wouldn’t remember the “right” answer anyway.</p>
<p>Sometimes, people would turn the question around, and ask what happened inside my mind. This happened less often than you might think—people are self-absorbed—and I always struggled to articulate it. “There’s nothing happening in there,” I’d tell them. Occasionally, they’d tell me their mind was the same, but more often I could see them trying to grasp how I could be sitting in front of them, breathing and talking, with such limitations.</p>
<figure><img src="https://aethermug.com/assets/posts/my-head-as-a-lake/luca-bravo-ESkw2ayO2As-unsplash.webp" alt="Picture of a tranquil lake nestled between two thickly forested shores. Majestic mountains are in the background."/><figcaption>Photo by Luca Bravo, Unsplash</figcaption></figure>
<p>Eventually I started to describe the inside of my head as a lake - the surface could be placid or fierce, but either way I couldn’t see below, to where things happened. My mind responds to stimuli but without them tends to sit quietly. I don’t know what I’m going to say until I’ve said it; I “think through” things in conversation. Yes, I can say words inside my head, but the level of effort feels about the same as saying them out loud, so I mostly do it if I’m trying to translate, or trying to phrase something just so. There are no pictures, no smells, no tastes, and few sounds. Songs do sometimes get stuck in my head, but they don’t stay there - usually I wind up humming or singing them, and it&#x27;s never the orchestral extravaganza that some people report—only the melody.</p>
<p>Some people tell me it sounds restful, meditative. Reading and hearing about what it can be like inside other people’s brains, I usually agree. The articles on “negative self-talk,” for instance, indicate that some people have an uncontrollable voice talking in their head saying bad things. If I make a mistake or hurt someone, it sticks with me as an ambiently bad feeling of regret… for a few days, maybe a week. After that the feeling, and the memory, fade and fall like a dead leaf.</p>
<p>Some brains seem to spend untold hours ruminating, thinking through how they would have done things differently, nurturing old grudges going back to kindergarten. I can barely remember yesterday, much less last week, unprompted. Even if I wanted to dwell on past indignities it would take a concerted effort to record and recall them.</p>
<p>Others’ minds seem to feed them uncontrollable, scary images of what their future might be. Last week I walked home alone at night through a graveyard; it was as atmospheric as it sounds. I get the impression that for some, the experience would have been paralyzing, intolerable, full of fears of what might go wrong. Yes, I startled at the noise when a deer crashed away through the undergrowth, but mostly I was warming my cold hands in my pockets, watching for bumps in the path, looking at the moon.</p>
<p>I don’t mean to seem smug, or superior. I’m not claiming any of this is actionable. “Get a new brain” is a challenging suggestion, and it’s not clear to me that cognition is mutable. These are impressions of what it is like to have a different mind. Most articles and essays tend to focus on negative experiences, things people want to change or fix, so that’s the data I have to go on.</p>
<p>Not having these specific experiences doesn&#x27;t mean I never feel tired, or depressed, or frustrated. In fact, I’ve had trouble completing this essay because of just those feelings. I’m not very good at understanding myself, though - I don’t have the visibility. I figure out what I’m feeling the same way an external observer would. Examining my energy levels, my responses to things around me, my motivation. Am I abrupt or frustrated in conversations? Do I mostly want to lie down and snuggle a cat? Maybe I could improve my self-awareness with therapy, but that hasn’t happened yet, and I’m not that optimistic.</p>
<p>Maybe a decade ago I came across new research on a condition called “aphantasia” and recognized in it what I already knew about myself. Like other aphantasics whose writing I’ve read, I have poor episodic memory, trouble connecting with my emotions, and have long considered myself to be un-creative. It&#x27;s interesting, although not immediately impactful to my life, to learn this term. To recognize that these traits are part of my make-up, like hair color or shoe size, and like hair color or shoe size may be somewhat alterable, but difficult to change dramatically or permanently.</p>
<p>If there’s anything I’ve learned from my decades of discussions with friends, family, and acquaintances about how their minds work, it’s that everyone’s cognition is different, often in fascinating ways. The most common thread is that almost everyone assumes that other people’s minds work the same way as theirs… and they’re shocked to learn that this isn’t true. ●</p>
<blockquote>
<h2>Bio</h2>
<p><em><strong>Marie Cosgrove-Davies</strong> lives in Pittsburgh, Pennsylvania, USA, with her husband and three cats. She enjoys gardening, reading, taking cat photos, buying strange objects at thrift stores, and dabbling in assorted other activities (sewing, choral singing, hiking). In her spare time she is a product manager for a small software company.</em></p>
</blockquote>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/my-head-as-a-lake/gabriel-tamblin-7Wt1icln9dY-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[A Framing and Model About Framings and Models]]></title>
            <link>https://aethermug.com/posts/a-framing-and-model-about-framings-and-models?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-framing-and-model-about-framings-and-models</guid>
            <pubDate>Thu, 21 Nov 2024 06:21:39 GMT</pubDate>
            <description><![CDATA[Some basic metacognition tools]]></description>
            <content:encoded><![CDATA[<p>There is a lot of talk, nowadays, about mental models and thinking frameworks, yet these concepts are both <em>vastly</em> underrated and poorly understood. Here I want to make the claim that they are (*<em>ominous gongs</em>*) the very foundations of all human thought, and that it is worth understanding them in full. My definitions of the terms might be slightly different from others you&#x27;ve seen before (there aren&#x27;t universally accepted definitions to begin with), but hopefully the explanations and examples below will make them crystal clear.</p>
<h1>Models</h1>
<p>A model is an analogy. It is a simplified simulation of something else. Systems mimicking other systems. Models are intuitive, at least when they are physical and tangible. We all see them around us every day: a map is a model of a place, a photograph is a model of a scene, and an architect&#x27;s model is, well, a model of a building. A model doesn&#x27;t have to resemble the thing it models, though. For instance, even a thermometer is a model, because it approximately replicates a range of variation that helps you know what to expect (e.g. the sense of being hot or cold, based on the temperature).</p>
<p>Typically—although not always, and not for everyone—we are familiar with the limitations of models, stemming from the simplifications they always entail. A map can be inaccurate, outdated, and limited in scope; a photograph doesn&#x27;t fully convey what was going on when it was shot; and an architectural model usually looks way cleaner and neater than the final product (architects, those sneaky foxes). In the famous words of Alfred Korzybski, <em>the map is not the territory</em>, and <em>the word is not the thing</em>. No matter how sophisticated the analogy, it is never the same as the thing it refers to.</p>
<p>We use models in our minds too, although those tend to be more abstract and less intuitive. You may have heard of Occam&#x27;s Razor—<em>the simplest explanation is usually the best one</em>—referred to as a mental model, but what does it simulate? The answer must be, &quot;it simulates someone making a correct judgement&quot;, which is why it is useful as a rule of thumb. Clearly it is a simplification, because sometimes there are special reasons for the simplest explanation to be wrong, but (according to William of Occam) it is right most of the time.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/blogs_mental_models.webp" alt="" title="Several screenshots of blog post headers about mental models"/></figure>
<p>Other mental models popular on the internet are even fancier, with catchy names like Hanlon&#x27;s Razor, Eisenhower Matrix, and Pareto Principle. But most of the models we use don&#x27;t have a name—in fact, we build new ones up all the time. Don&#x27;t let bloggers and self-help gurus convince you that &quot;Using Mental Models is What the Smart Kids Do&quot;, because <em>it is impossible to think without mental models</em>, regardless of one&#x27;s IQ. Everybody is <em>forced</em> to use them every second of their lives, because <a href="/posts/embedded-prophesy-devices">we live by predicting the future</a>, and mental models are how we do that. Simulating the world is what our brains are (largely) for. Some examples below should make this point clearer.</p>
<h1>Framings</h1>
<p>Framings are less intuitive. They, too, are mentioned sometimes in relation to how we think, together with the related and often overlapping terms &quot;frames of thought&quot;, &quot;thinking frameworks&quot;, and &quot;schemata&quot;. They seem to involve seeing things in different lights, and interpreting events and situations in a positive or negative way. Yet the meaning of these words is slippery and elusive, and their relationship with mental models unclear.</p>
<p>That&#x27;s a pity, because it&#x27;s a very useful concept. So here is a plain definition: <em><a href="/posts/a-framing-is-a-choice-of-boundaries">a framing is a choice of boundaries</a></em>. Reality is a continuum, with no neat inherent separation or boundaries between things, because everything affects, and is affected by, everything else around it. It is up to us sentient beings to <a href="/posts/human-language-is-not-a-network">invent boundaries so that we</a> have something to build our mental models with.</p>
<figure><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/samule-sun-lqISplTobR8-unsplash.webp" alt="Photo of a miniature replica of an ancient Chinese city."/><figcaption>Photo by Samule Sun, Unsplash</figcaption></figure>
<p>Is the hand a separate entity from the arm it is attached to?</p>
<p>Are the people born in this city compatriots of those born in that other city?</p>
<p>Where exactly does the atmosphere end and space begin?</p>
<p>Is light the absence of darkness, or the other way around?</p>
<p>How many basic human rights are there?</p>
<p>None of these questions makes any sense from a physical point of view, and Nature offers no hints. But the moment people are involved, those questions could begin to matter. We often find that we must answer them one way or another, even if arbitrarily, just to understand each other and get along with our lives. Whenever we define a word, we&#x27;re making a framing.</p>
<p>But a framing is also a choice of what exists and what doesn&#x27;t—what&#x27;s inside and outside of an ontological boundary. The possible ways to frame reality are virtually infinite, but our time and mental capacities are finite. Like a child choosing which dolls or toy bricks to play with at the beginning of playtime, we must choose what counts as &quot;a thing&quot; and what can be waved away as irrelevant. This cherry-picking also extends to what&#x27;s <em>inside</em> the things we take to exist: which of their details are salient, and which can be ignored.</p>
<p>Only after we have defined our moving parts like that can we begin to think about what those things <em>do</em>, i.e. we can simulate the world with our models.</p>
<p>Thus every model needs a framing. In other words, given that we use models for everything, framings are an equally central concept for cognition. In short:</p>
<blockquote>
<p>A <strong>framing</strong> is a set of <a href="/posts/a-black-box-view-of-life">black boxes</a>, and it answers the question: what exists in the first place, and what properties does each of those things have?</p>
<p>A <strong>model</strong> is a <a href="/posts/embedded-prophesy-devices">prophesy device</a>, and it answers the question: what happens when those things that exist interact? What do those black boxes do together?</p>
</blockquote>
<div class="col-start-1 col-end-1 m-10"></div>
<h1>Six Examples</h1>
<p>Armed with these two definitions, let&#x27;s look at some examples in order of increasing familiarity.</p>
<h2>Example 1: Bohr&#x27;s Model of the Atom</h2>
<p>Science is where the use of models is most explicit and deliberate. The word &quot;model&quot; itself is used most often by scientists, and it refers to the condensation of centuries of study into powerful prediction tools.</p>
<p>Take, for instance, the Bohr model of the structure of the atom, also known as the &quot;solar system model&quot;. In the late 1910s, the physicist Niels Bohr chose a framing approximately as follows:</p>
<ul>
<li>There exists a phenomenon called <strong>electromagnetic force</strong> with certain properties.</li>
<li>There exist particles called <strong>nuclei</strong> with positive charge.</li>
<li>There exist negatively charged particles called <strong>electrons</strong>, and they can only have energies in discrete amounts, i.e. quantized.</li>
</ul>
<p>These were his black boxes. It wasn&#x27;t clear at the time what the nuclei were made of, nor that those particles also had a wave-like nature (that would come a few years later with de Broglie, Schrödinger, and the rise of quantum mechanics). But Bohr had access to a lot of knowledge about the behavior of electrons and of atoms, and to many well-known physical theories like electromagnetism and all of classical mechanics. With these assumptions, he put forth a model for how those particle black boxes might work together:</p>
<blockquote>
<p>The electrons settle in orbits around the nucleus; due to the quantization of their energy, these orbits exist at distances that are discrete multiples of a base distance (which he computed); plus more details that we&#x27;ll gloss over.</p>
</blockquote>
<p>This model worked better than its predecessors, because it made more accurate predictions, like the energy levels and the spectral lines of hydrogen atoms. In this sense, it was a &quot;good&quot; model. Although it was replaced only a few years later by much more powerful models from quantum physics, it served as a stepping stone to better understand those natural phenomena. (Its simplicity was perhaps a bit <em>too</em> effective, because we still tend to think about and represent atoms as little solar systems, despite the fact that we&#x27;ve long since discovered they don&#x27;t look like that.)</p>
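<p>For the quantitatively curious, the model&#x27;s headline predictions can be written compactly (these are the standard textbook forms, not formulas from the original post):</p>

```latex
% Allowed orbit radii: discrete (n^2) multiples of the Bohr radius a_0
r_n = n^2 a_0, \qquad a_0 \approx 0.529\,\text{\AA}

% Energy levels of the hydrogen atom
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots

% Spectral lines: a photon is emitted when an electron drops from orbit n_2 to n_1
\frac{1}{\lambda} = R_\mathrm{H}\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right),
\qquad R_\mathrm{H} \approx 1.097 \times 10^7\ \text{m}^{-1}
```

<p>The last formula reproduces the observed hydrogen spectral series, which is what made the model such a convincing prophecy device at the time.</p>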
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/US_Atomic_Energy_Commission_logo.webp" alt="" title="Picture of a badge with the US Atomic Energy Commission&#x27;s logo, a Bohr-style atom with little beads circling around a nucleus."/></figure>
<h2>Example 2: Supply and Demand</h2>
<p>One of the most fundamental models in economics is the law of supply and demand. Its framing goes something like this:</p>
<ul>
<li>There exists something being traded, called a <strong>good</strong>.</li>
<li>There exists a value, called <strong>price</strong>, for which the good can be sold.</li>
<li>There exists a <strong>perfectly competitive market</strong>, i.e. a large group of buyers and sellers of the same good with no barriers to their trade, nor special advantages or disadvantages given to anyone.</li>
<li>There exists something called <strong>supply</strong>, meaning the amount of the good for sale in the market; this supply decreases or increases as the price goes up or down.</li>
<li>There exists something called <strong>demand</strong>, the amount of the good that people are seeking to buy in the market, which grows as the price goes down and vice versa.</li>
</ul>
<p>Again, these are black boxes, because it doesn&#x27;t matter how many people there are in the market, why they want to buy the good, or what the good even is. With these, the model is simple:</p>
<blockquote>
<p>The price, supply, and demand will change until they reach the point where supply and demand are equal.</p>
</blockquote>
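<p>As a sketch, here is that equilibrium logic with made-up linear curves (the numbers and the naive price-adjustment loop are illustrative inventions, not a model from any economics text):</p>

```python
# Toy linear curves (made-up numbers): demand falls as price rises, supply grows.
def demand(price):
    return 100 - 2 * price  # quantity buyers want at this price

def supply(price):
    return 20 + 2 * price   # quantity sellers offer at this price

# Nudge the price repeatedly: excess demand pushes it up, excess supply down.
price = 0.0
for _ in range(1000):
    price += 0.01 * (demand(price) - supply(price))

# price has converged to the crossing point (20 here, with 60 units traded)
```

<p>The crude price-nudging loop stands in for what a real market does continuously; the point is only that the two curves pin down a single crossing point.</p>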
<p>This model is centuries old and it is here to stay, because it works. But it doesn&#x27;t work nearly as well as we typically need. Its extreme simplicity makes it memorable and easily applicable, but it ignores too many factors to be truly realistic. To increase its accuracy (somewhat), economists have had to expand the framing to include many more factors, like:</p>
<ul>
<li>Forms of imperfect competition, like monopolies and oligopolies</li>
<li>Innovation and entrepreneurial disruption</li>
<li>Asymmetric information, with some traders much better informed than others</li>
<li>And many more.</li>
</ul>
<p>With more moving parts, the model becomes exponentially more complex—a common trade-off for models.</p>
<figure><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/rafael-garcin-4MlI0oE1D9E-unsplash.webp" alt="Photo of a miniature replica of a renaissance European city."/><figcaption>Photo by Rafael Garcin, Unsplash</figcaption></figure>
<h2>Example 3: Weight Loss</h2>
<p>Of course, models are not confined to the blackboards of professors and analysts. An example of a model we often apply in daily life is the basic formula for weight loss.</p>
<p>First, the framing:</p>
<ul>
<li>There exists a type of organic mass called <strong>fat</strong>, which is generally considered unhealthy and unattractive if you have more than a certain amount of it.</li>
<li>There exists an activity called <strong>exercise</strong>, which you can do by dedicating money, time, and sweat.</li>
<li>There exists another activity called a <strong>diet</strong>, which is the quantity and quality of food you eat.</li>
</ul>
<p>The model is this:</p>
<blockquote>
<p>Exercising more and following a better diet leads to a decrease in fat.</p>
</blockquote>
<p>This model is a beneficial one, because it motivates people to follow a healthier lifestyle, even though it is extremely rough (there are countless theories about what a &quot;better&quot; diet looks like, which makes all the difference) and grossly inaccurate (exercise is great for health but <em>very</em> ineffective for weight loss). We tend to think in these terms, and when we decide to start a Paleo diet or to sign up at a gym, we&#x27;re—more or less implicitly—using this model.</p>
<h2>Example 4: Favorite Creators</h2>
<p>Take a famous creator you&#x27;re a fan of, like a pop star or a writer. For the sake of this example, I&#x27;ll use the American filmmaker Wes Anderson.</p>
<p>The framing:</p>
<ul>
<li>There exists a medium of exchange of abstract value called <strong>money</strong>.</li>
<li>There exists a property of reality which separates events in a continuum, called <strong>time</strong>.</li>
<li>There exists this human being called <strong>Wes Anderson</strong> who makes movies (and also breathes and eats, I guess).</li>
</ul>
<p>That&#x27;s all we need in the framing. We don&#x27;t need to assume that a place called Hollywood—or even the idea of &quot;space&quot;—exists, we don&#x27;t need to know anything else about Wes. We can go straight ahead with the mental model:</p>
<blockquote>
<p>If someone gives Wes $25 million and a couple of years (and this happens regularly), he&#x27;ll release an amazing movie.</p>
</blockquote>
<p>A film producer who is asked to produce Wes&#x27;s next film might need a more extensive framing (what is Wes&#x27;s next idea, which actors are available, where the film should be shot, etc.) and a more sophisticated model for how those additional black boxes might work together, but for me, the above is an accurate-enough prophesy device, and it has never fully failed me over the years.</p>
<h2>Example 5: Kicking</h2>
<p>During a sports match, there may come a time when you are called upon to kick a ball and to kick it well.</p>
<p>You&#x27;re using a framing and a model here, too, although in this case they are more automatic and non-verbal. The framing would probably involve certain specific movements of your legs that you&#x27;ve learned to make, the position and speed of the ball, and the targets you wish to deliver the ball to. The model is your mental expectation of where the ball will go if you kick it in this or that way, with this or that timing.</p>
<p>If you&#x27;re a beginner, your framing will be barebones: you can only focus on a few things at once, perhaps only the ball&#x27;s location, with no time to consider anything else. As you practice your kicks, you learn to take more factors into account, like the assigned and actual positions of your teammates and opponents, and your mental simulation grows in accuracy and confidence.</p>
<p>If you&#x27;ve ever worked hard at a sport, you probably know the feeling of suddenly discovering a subtly different way to use your body, one that you could have used any time before if only it had occurred to you. That discovery is you expanding your framing of the task. But in order to master that new technique, you&#x27;ll need to refine your mental model (and muscles) through practice.</p>
<h2>Example 6: Going Meta</h2>
<p>Clearly, the act of saying &quot;there are framings and there are models&quot; is itself a new framing (a meta-framing?). By writing all these words to explain what these two concepts imply and how they can help you think more clearly, I&#x27;m laying out the model that I built with them. (A discussion of framings about framings about framings will have to wait for another blog post.)</p>
<h1>The Crux of Framings</h1>
<p>Here is the issue: much more so than mental models, our use of framings is almost always unconscious and implicit. We don&#x27;t usually sit down in front of a Framing Catalog and say, &quot;let me make a prediction based on... hang on... yep, <em>this</em> framing should do the job!&quot;</p>
<figure><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/shubham-sharan-JAAW5jL8BVk-unsplash.webp" alt=""/><figcaption>Photo by Shubham Sharan, Unsplash</figcaption></figure>
<p>We find it easy to speak about our models and we often argue about which predictions we believe to be more accurate—what could happen, how the world works, etc. But we rarely question our framings. We seldom take a step back to ask what other factors might be at play, and if the definitions of our words are really solid and compatible with other people&#x27;s. This might be fine for semi-automatic activities like muscular movement and day-to-day tasks with low stakes, but it&#x27;s a big mistake whenever serious thought is involved.</p>
<p>Forgetting that framings are at the root of all our thinking leads to being oblivious that <a href="/posts/a-pocket-size-checklist-of-thinking-errors">those assumptions we&#x27;re making</a> about the world are somewhat (or entirely) arbitrary and ad hoc. We miss the opportunity to seek other fruitful ways to segment the world in order to solve any given problem.</p>
<p>What&#x27;s worse, relying on unconscious framings also means that we suffer from massive blind spots, because we forget that we&#x27;re hand-waving away most of reality in order to have neat, workable models. I like to call the latter issue <em>tunnel vision</em>: we get so entrenched in our little framings that we start to believe that they are the whole of reality, often with depressing consequences. (More on this in future blog posts.)</p>
<h1>The Art of Boundarycraft</h1>
<p>Suppose you <em>do</em> remember that your framings are tools at your disposal: how can you optimize them? Well, besides replacing them altogether with others, there are only two logical approaches: you can change the resolution, and you can move the boundaries.</p>
<p>By &quot;changing the resolution of your framing&quot;, I mean either clumping things together into bigger black boxes to make a more general framing, or trying to figure out what&#x27;s going on inside your current black boxes, effectively increasing the number of moving parts that you can simulate in your models. We could also call these two operations <em>zooming out</em> and <em>zooming in</em>.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/zooming.webp" alt="" title="Hand-drawn diagram showing two geometrical shapes transformed in two ways. The first one, titled &#x27;zoom out&#x27;, shows a new shape that is the union of the two original shapes. The second one, titled &#x27;zoom in&#x27;, shows that one of the two shapes is actually divided into smaller sub-shapes."/></figure>
<p>Zooming out—decreasing the resolution of your framings—is what philosophers do when they try to come up with universal categories and overarching explanations for reality or the human condition. It is also what theoretical physicists do when they chase &quot;grand unifying theories&quot;, a few simple formulae capable of explaining (simulating) as many different phenomena as possible.</p>
<p>The risk and trade-offs of over-simplification remain, but when you carefully zoom out your framings, you&#x27;re learning to notice patterns in the world that you had been oblivious to before. The details fade away, things seem simpler and more regular. Perhaps this is what it means to become wiser.</p>
<p>Zooming in, on the other hand, is about reverse-engineering reality. It&#x27;s asking questions about why things work in a certain way, instead of being satisfied with knowing they do. Again, this is something scientists do systematically: observe deviations from our predictions, take into account smaller and subtler factors, build hypothetical models for how those factors could interact, and design experiments to validate those models. For example, when quarks were discovered, what used to be a &quot;nucleon&quot; black box (proton or neutron) was replaced with a model composed of three smaller &quot;quark&quot; black boxes.</p>
<p>Science is a bunch of good-enough framings and the best models we can devise on top of them. At the individual level, a higher-resolution model means being not only more knowledgeable, but also better at predicting what the system at hand can or will do—it means being smarter. Of course, you could do it the scientist&#x27;s way, or you could open a book or attend a lecture and have other people teach you those higher-resolution framings.</p>
<p>The second thing you can do with your malleable framings is to &quot;move the boundaries&quot;. This doesn&#x27;t necessarily imply any kind of zooming, but it is more cataclysmic in nature. You&#x27;re metaphorically redrawing the map, and this could lead to new continents emerging from the waters, old borders being shifted, and land bridges appearing across straits.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/movingtheboundaries.webp" alt="" title="Hand-drawn diagram with the same two shapes as the previous diagram, this time transformed so that each of them has a different shape."/></figure>
<p>Moving the boundaries is what the pioneers of quantum mechanics did in the 1920s, when they decided that being a particle and being a wave were not mutually exclusive descriptions. It&#x27;s what avant-garde novelists like James Joyce did in the early 20th century, when they expanded the reach of the occasional description of a character&#x27;s thoughts to fill whole books in the form of a &quot;stream of consciousness&quot;.</p>
<p>You&#x27;re moving the boundaries when you realize that exercise is important for your fitness but not for your body weight, or when you learn (like <a href="/posts/normality-and-surprise-in-an-image-free-mind">I did</a>) that the way you experience the world is only one variation out of many. Shifting a framing&#x27;s boundaries is what we mean by the term &quot;thinking outside the box&quot;.</p>
<figure><img src="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/rafael-garcin-AfXEugJMutI-unsplash.webp" alt="Photo of a miniature replica of sailships leaving port."/><figcaption>Photo by Rafael Garcin, Unsplash</figcaption></figure>
<p>Changing the resolution of our framings is an excellent way to refine our understanding and predictions of the world, and people do at least some of that regularly in schools (mainly zooming in) and in universities (zooming both in and out). But it is by moving the boundaries that we can make the biggest leaps—what philosophers call &quot;paradigm shifts&quot;. And you can only move a boundary if you&#x27;re well aware that it exists in the first place. This is <em>why we need to be thinking about framings even harder than we think about models</em>.</p>
<p>The best and surest way to learn that art is to be somewhat versed in the scientific method. <a href="https://planktonvalhalla.com/20230905-mass-producing-the-mistake-minimizer/" rel="nofollow noopener noreferrer" target="_blank">Science is humanity&#x27;s mistake-minimizing tool</a>, and in this sense it is very much not a prerogative of PhDs and &quot;rational people&quot;: the methods at its core are essential to the day-to-day thinking of every human being, whatever their intellectual inclinations and interests. That is not to say that scientists are all masters of the art of framing and reframing: in my experience, they very often are not. Even they can benefit from the acquired habit of putting their models to rest for a while, taking a step back, and revisiting their base assumptions. A new good paradigm might be just a boundary-tweak away. ●</p>

<p><em>Other AeMug posts about these topics:</em></p>
<ul>
<li><a href="https://aethermug.com/posts/embedded-prophesy-devices" rel="nofollow noopener noreferrer" target="_blank">Embedded Prophesy Devices</a></li>
<li><a href="https://aethermug.com/posts/internal-model-translation-table" rel="nofollow noopener noreferrer" target="_blank">Internal Model Translation Table</a></li>
<li><a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">Boundaries Are in the Eye of the Beholder</a></li>
<li><a href="https://aethermug.com/posts/a-framing-is-a-choice-of-boundaries" rel="nofollow noopener noreferrer" target="_blank">A Framing Is a Choice of Boundaries</a></li>
<li><a href="https://aethermug.com/posts/a-fundamental-framing-of-human-language" rel="nofollow noopener noreferrer" target="_blank">A Fundamental Framing of Human Language</a></li>
<li><a href="https://aethermug.com/posts/bad-framings" rel="nofollow noopener noreferrer" target="_blank">&quot;Bad&quot; Framings</a></li>
<li><a href="https://aethermug.com/posts/a-black-box-view-of-life" rel="nofollow noopener noreferrer" target="_blank">A Black-Box View of Life</a></li>
<li><a href="https://aethermug.com/posts/philosophy-is-the-battle-against-the-bad-framing-kraken" rel="nofollow noopener noreferrer" target="_blank">Philosophy Is the Battle Against the Bad Framing Kraken</a></li>
</ul>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-framing-and-model-about-framings-and-models/y-s-ZXaLWCnbBqM-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Presenting Visual Koans]]></title>
            <link>https://aethermug.com/posts/presenting-visual-koans?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/presenting-visual-koans</guid>
            <pubDate>Thu, 14 Nov 2024 08:40:06 GMT</pubDate>
            <description><![CDATA[An experiment in mind wandering]]></description>
            <content:encoded><![CDATA[<p><em>This post is a presentation of <a href="https://visualkoans.aethermug.com/" rel="nofollow noopener noreferrer" target="_blank">Visual Koans</a>, a little web app I made to experiment with an unusual kind of creativity-boosting &quot;meditation&quot;, for lack of a better term. Your feedback is welcome!</em></p>

<p>In August I published a blog post titled <a href="/posts/boxed">Boxed</a>, where I reviewed all the strange things I&#x27;ve experienced and learned while remaining strapped up inside a Magnetic Resonance Imaging (MRI) machine for dozens of cumulative hours, doing cognitive tasks for neuroscientists. One of these strange things was the effects of watching sequences of completely random images.</p>
<p>This is part of an experiment I&#x27;m participating in, where the researchers record my brain&#x27;s activity while I passively watch thousands upon thousands of pictures. It&#x27;s unusual for a human being to be exposed to this level of randomness: usually there is a logic in what we see, some kind of intentionality and interpretability, however minimal. But in those experiments I have no idea what to expect next, and can find no logic at all, because, of course, there is none.</p>
<p>I wrote:</p>
<blockquote>
<p>Something strange happens: even though it&#x27;s all purely random, the brain tries to make sense of it all, tries to find patterns and associations. With no time to establish conventional <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a>, it has to improvise, take in the images in a partly-unconscious way, without thorough processing. This, I think, is a great way to stimulate creativity.</p>
</blockquote>
<p>This part sparked the curiosity of several people on social media. They looked for apps that would give them the same experience, hoping to get the same creativity boost I wrote about. I was interested in this myself, because I don&#x27;t always have a 3-Tesla MRI behemoth to meditate in.</p>
<p>It&#x27;s true that you can buy digital photo frames or install slideshow apps to achieve something resembling the experiments I described. However, as soon as I tried those solutions, I knew they were missing the mark. They felt like scrolling on social media or zapping through channels on the TV or on Youtube. The randomness is there, but it feels distracting, rather than focusing. It took me a while to figure out why I felt like that.</p>
<p>The problem with those existing apps is not in the functionality, which is extremely basic, but in <em>the selection of contents</em>. The slideshows and media feeds easily accessible via app stores and on web pages all have the goal of entertaining you. Each transition might be random, but what goes on between transitions is far from it: it&#x27;s all designed to interest you, to be pleasant or &quot;nice&quot; in some way, and sometimes even to become addictive.</p>
<p>This is not what I got during the fMRI experiments. The sequences of images in those sessions are devoid of purpose, non-designed, cryptic. The first reaction is to think, &quot;Why should I be looking at a poorly-shot picture of a chicken pecking at corn? What is the intent of this other picture of an empty parking lot, with nothing in particular to highlight?&quot; Those pictures are sublimely random and unnecessary in and of themselves, <em>in addition to</em> being randomly sequenced. Each, a kind of visual koan on its own. This, I realize now, might be what made staring at them for a long time such a queer experience for me.</p>
<p>If an app to do that doesn&#x27;t exist yet, I might have to build it myself.</p>
<figure><img src="https://aethermug.com/assets/posts/presenting-visual-koans/slideshow.webp" alt="Screenshot of a random image in the slideshow view of the Visual Koans app. The image is poorly-shot, with a crouching man touching the head of a horse while he&#x27;s looking at his phone."/><figcaption>The slideshow view of Visual Koans.</figcaption></figure>
<p>And so I present to you <strong><a href="https://visualkoans.aethermug.com/" rel="nofollow noopener noreferrer" target="_blank">Visual Koans</a></strong>, an experimental mind-wandering app designed to emulate the effects of my fMRI sittings, minus all the uncomfortable bits. Just out of the oven, still steaming and <em>beta</em>. The app is an extremely simple slideshow: it cycles through random images every four seconds, and there is no pause, no back or forward button, no text. And, most importantly, it contains <em>24,000 images</em>, all puzzling, not particularly well shot, and meaningless—just what your brain might need to be kicked out of its usual cozy <a href="/posts/embedded-prophesy-devices">prophesying habit</a>.</p>
<p>The only setting you can play with is which categories of images to display. Depending on how well this works, I might add more categories and more images.</p>
<p>Here is how you use it:</p>
<ol>
<li>Choose one or more categories.</li>
<li>Click on Begin.</li>
<li>Look.</li>
<li>Keep looking.</li>
<li>Just look, don&#x27;t try to interpret the meaning of those images, because they have none. They weren&#x27;t meant for you. They don&#x27;t give a peanut that you&#x27;re looking at them.</li>
<li>...I don&#x27;t know, try different things: empty your mind or let it wander, think about your to-do list, rehearse a speech, whatever works for you.</li>
</ol>
<p>Don&#x27;t expect immediate magic. It might take you two minutes, or five, or thirty, to get into the necessary state of mind (or it might never work for you). My experiments run for about two hours at a time, although I often get creativity sparks relatively early on.</p>
<p>I&#x27;m stressing that this is an &quot;experimental&quot; and &quot;beta&quot; app, because I&#x27;ve really just published it. I haven&#x27;t spent much time actually using it myself, so it will probably need improvements before it can actually work as intended. If you try it and have any kind of feedback, respond to the newsletter, or find my contacts <a href="https://marcogiancotti.com/contact" rel="nofollow noopener noreferrer" target="_blank">here</a>. I&#x27;m curious to know how you use it, and I won&#x27;t ask you to share the million-dollar ideas you got out of it. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/presenting-visual-koans/visualkoans.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Ten Books That Transformed Me]]></title>
            <link>https://aethermug.com/posts/ten-books-that-transformed-me?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/ten-books-that-transformed-me</guid>
            <pubDate>Thu, 07 Nov 2024 10:31:27 GMT</pubDate>
            <description><![CDATA[Some non-recommendations of Damned Good Books, with due apologies]]></description>
            <content:encoded><![CDATA[<p>In September I published a piece titled <a href="/posts/you-don-t-have-time">You Don&#x27;t Have Time to Read Books That Won&#x27;t Change Your Life</a>, where I talked about <em>Damned Good Books</em>, i.e. those that change your life. There are so many books out there with the potential to impact you deeply and remain with you forever, spanning every possible genre and goal. You incur a literal life-sized opportunity cost if you spend time reading forgettable books or books that you only feel you <em>have</em> to read. Of course, the first question that I got in response to that post was, what are my own Damned Good Books?</p>
<p>I realize that this is a very reasonable thing to ask. It even feels a little irresponsible to make bold statements like that without providing a list of recommendations. The thing is, the view of reading that I&#x27;m offering makes that very proposition almost impossible to fulfill. I always hesitate to recommend any book unless I know the recipient very well, because I see the act as an extremely low signal-to-noise form of communication, an exercise in failure.</p>
<p>I&#x27;ll try anyway, with a constraint.</p>
<p>A <a href="https://www.youtube.com/@asahisism8" rel="nofollow noopener noreferrer" target="_blank">Japanese literature youtuber</a> I follow—that is, a Japanese guy who talks about world literature in front of a camera—often uses the term <em>dokki</em> (読機) which I like very much for its convenience. <em>Dokki</em> literally means &quot;reading opportunity&quot;, but you could translate it better as &quot;reading window&quot; or &quot;reading ripeness&quot;. It&#x27;s the notion that there is an ideal time to consume a book, a period in which it is &quot;in season&quot;, and that period (if it ever comes) is different for each person.</p>
<p>I think this phenomenon will be familiar to most people: a book that fascinated you in your youth may now appear insipid and uninteresting, while a book that you previously failed to see the point of might suddenly take you by storm.</p>
<p>That doesn&#x27;t mean that the book itself has lost or gained in readworthiness. Even in retrospect, it was worth reading the former book back then when you liked it, and it wasn&#x27;t worth reading the latter when you still didn&#x27;t get it.</p>
<p>Familiar as it may be, I think this idea of a <em>dokki</em> is very much underrated and underused. The value gained from a reading experience does not reside only in the book itself, nor only in the combination of a reader with the book. Reading is an ecological act: the mutable context, history, and timing are everything, and the value of a book is relative to the myriad connections in place in the <a href="https://planktonvalhalla.com/20220829-the-world-is-a-network/" rel="nofollow noopener noreferrer" target="_blank">Universal Network</a> at the time of reading.</p>
<p>Another way to see the same thing is through a <a href="/posts/a-black-box-view-of-life">black-box lens</a>. A human mind is an opaque system, the precise workings of which are still largely unknown, but one thing is certain: it works based on a physical substrate and it obeys the laws of physics. When you&#x27;re given an input—an image, a sound, a story—your brain processes it <em>somehow</em> and eventually produces some kind of action, or none. Whatever happens in that process will be based on the current structure of your neural network, the activation weights and rates of your thinking cells.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>This structure inside the box changes over time, and that, in turn, will change how you process and react to a given input. Like a turnstile that gives way to a light push only after a coin has been inserted, you may need an input A—whatever it is—before you can accept and appreciate an input B.</p>
<p>A forest (to use a less mechanical metaphor), which would normally be indifferent to the feeble heat of a cigarette butt, will nevertheless be engulfed in an unstoppable fire if that same cigarette is dropped during a very dry season, after the forest has had time to accumulate plenty of dead wood fuel. Likewise, a mind will behave differently depending on its <a href="/posts/the-odysseus-black-box">unique and ever-changing characteristics</a>.</p>
<p>All this means that finding Damned Good Books is strictly the reader&#x27;s job, not the reviewer&#x27;s. Someone &quot;recommending&quot; a book can only share what a book did for <em>them</em>, and hope to be lucky enough to reach some readers for whom the book happens to be <em>dokki</em>. I am wary of any claim about the absolute worth of books. Even an instruction manual for assembling a chair, or the shipping manifest of a cargo vessel, may be a Damned Good Book for the right person at the right time in the right circumstances. It is not for the reviewer to assume one way or another.</p>
<p>Below I will share a list of ten (well, eleven) Damned Good Books of mine, not as an answer to the question &quot;what books should I read?&quot; but to &quot;what does it mean for a book to change your life?&quot; Most of these works are too famous to need recommending. You may have already read them all. And some of them are well past <em>dokki</em> for me as well. It is inevitably a very personal list, closely linked to my own life journey until now. For this reason, I will present them in the order in which I read them.</p>
<h3>1. Michael Crichton - Jurassic Park</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/jp.webp" alt="" title="Book cover of Jurassic Park, with a stylized image of a T-rex skeleton"/></figure>
<p>In 1998 I was a quiet 8th-grader in a US school. Overall I didn&#x27;t like that school very much, but I liked that it had a well-furnished library set up almost like a bookstore. I already enjoyed reading back then, but I wasn&#x27;t an avid reader yet. I wandered into the library one day, with no specific goal in mind, and my eyes fell on the cover of Jurassic Park. I immediately recognized it, because I had watched Steven Spielberg&#x27;s film a few years before, and had loved it. I remember that the thickness of that book scared me a little. I had never read a 400-plus-page brick like that before, and I wasn&#x27;t that good at English either. But the love for the movie must have helped, because that day I borrowed the book and unwittingly sealed my fate.</p>
<p>The thriller could not have been more <em>dokki</em> for me. Its English was easy enough to follow at my level, and it also happened to be the first page-turner in my life. For these reasons, it simultaneously taught me that reading can be addictive and extremely pleasurable, and gave me confidence in my English reading abilities. I would go on to read mostly books in English for the rest of my life.</p>
<p>Jurassic Park also planted a seed that was a long time germinating, because it was my first exposure to the scientific concepts of chaos and unpredictable living systems. These became major themes in my thinking as an adult.</p>
<h3>2. J. R. R. Tolkien - The Lord of The Rings</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/lotr.webp" alt="" title="Book cover of The Lord of the Rings, with a painting of snowy mountain peaks."/></figure>
<p>I first read the trilogy in my early teens, around the year 1999 or 2000, when I knew nothing about the fantasy genre or any other aspect of that geeky universe. By sheer luck, I read it right before the movies were released, so I was one of the last few to enjoy it in its purest, unspoiled form. It was quite an initiation.</p>
<p>LotR was never just an epic story for me. I liked the characters, and the narrative engaged me well enough to re-read the thing multiple times, but these books did something more for me, something that books like Jurassic Park had not done. LotR taught me what it means to create a world out of thin air. A world that feels deep and beautiful and independent in itself, regardless of the stories that unfold within it.</p>
<p>This awe at the power of imagination stays with me to this day. I&#x27;ve been a lifelong fantasy geek, and this subculture has been the cradle of many a friendship. But I also realize that my philosophical thought is indirectly affected by Tolkien&#x27;s writings. This is not the blog post to delve into that topic, so I will only say that I believe imaginary worlds and physical laws to be no less &quot;real&quot; than what we <em>treat</em> as &quot;reality&quot;.</p>
<h3>3. Isaac Asimov - Extraterrestrial Civilizations and The Collapsing Universe: The Story of Black Holes</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/asimov.webp" alt="" title="Book covers of the two books, one showing a vague solar system and the other a black circle radiating thin red lines."/></figure>
<p>I read these two non-fiction books by Asimov when I was in high school, some time between 2001 and 2003, when I found them collecting dust in my father&#x27;s library. By then, they were both over 20 years old and probably outdated in terms of scientific accuracy. Their accessible, exhilarating presentation of astronomy shook me anyway.</p>
<p>Extraterrestrial Civilizations is essentially a long, fun calculation of the odds of aliens existing somewhere in the Universe, probably following the approach of the <a href="https://en.wikipedia.org/wiki/Drake_equation?useskin=vector" rel="nofollow noopener noreferrer" target="_blank">Drake equation</a>. The Collapsing Universe was all about the astrophysics of stars, neutron stars, and black holes.</p>
<p>I think I was lucky to read them when I did, because they were among the first non-fiction books I tried. They were so good that they must have imprinted in me a general liking—or at least a lack of dread—for non-fiction. The numbers and the scales that Asimov spanned in his passionate explanations irreversibly boggled my mind. I remember going around emphatically telling my family that a single teaspoon of a neutron star weighs <em>six hundred million metric tons</em>, and that the sun burns <em>hundreds of millions of tons</em> of hydrogen every second, and yet it still has enough in stock to keep burning for billions of years!</p>
<p>That excitement is what led me to pick physics and astronomy when it was time to go to university. After Asimov, I had no other choice.</p>
<h3>4. Simon Singh - The Code Book</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/codebook.webp" alt="" title="Book cover of The Code Book, showing a lock inside two concentric circles of letters and numbers."/></figure>
<p>In the mid-aughts I was reading English-language non-fiction books in rapid succession. I had read and loved Singh&#x27;s Fermat&#x27;s Last Theorem, but it was with The Code Book that I really fell in love. This one broke a new personal record for non-fiction readability, surpassing even Asimov&#x27;s books in terms of pure, popcorn-munching fun. This was Jurassic Park again, but about real science and technology.</p>
<p>The Code Book gave me the foundations of what I know about cybersecurity (useful in my later work) and encryption. I remember borrowing my parents&#x27; tools to build two wooden <a href="https://en.wikipedia.org/wiki/Cipher_disk" rel="nofollow noopener noreferrer" target="_blank">cipher disks</a>—simple devices that you use to produce nearly unbreakable ciphers—and making my girlfriend learn how to use them, so that we could send each other encrypted messages. Yeah, I know.</p>
<p>More important, it sparked my fascination with computer science and programming, which I later pursued with a passion.</p>
<h3>5. Alberto Gandolfi - Formicai, imperi, cervelli</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/formicai.webp" alt="" title="Book cover of the book in Italian, showing a stylized representation of the complex insides of a cell."/></figure>
<p>Never translated into English, this book is titled approximately &quot;Anthills, Empires, Brains: Introduction to the Science of Complexity&quot;. It was written by an Italian mathematician and professor not especially famous to the Italian public. I don&#x27;t remember how I found this book, but my father claims I asked for it as a gift when I was 20 or so. I must have sensed the <em>dokki</em>, somehow, because it was perfectly in season.</p>
<p>It talked about a kind of science that, unfortunately, no one was teaching me in my university classes. It was like a secret new dimension of nature, not limited to any previously named field of knowledge but encompassing everything, including concrete situations in daily life. Instead of strict laws of motion and quantum equations, this book expounded on order and chaos (which I remembered from Jurassic Park).</p>
<p>Judging by its lack of popularity abroad, Gandolfi&#x27;s introduction may not have been the best book out there about complexity, but it was interesting enough to finish, and it did its job very well for me. The complexity science lens it gave me has been in front of my pupils for most of the following two decades, now more than ever.</p>
<h3>6. Jorge Luis Borges - Fictions</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/ficciones.webp" alt="" title="Book cover of Fictions, with a photo of the elderly author looking up at the ceiling of an historical building."/></figure>
<p>If fiction is lying, why not lie more sneakily, more convincingly than anyone else? This book shook my understanding of storytelling by doing things differently from all other authors. It showed me that a compelling story doesn&#x27;t need to be always about characters or plot—it can be about ideas, even scientific ideas.</p>
<p>Borges&#x27; short stories are like many thought experiments, but they also land as koans for me. The Library of Babel, Funes the Memorious, Tlön, Uqbar, Orbis Tertius, and <a href="/posts/borges-on-chaos-theory">Pierre Menard, Author of the Quixote</a>, among others, still haunt me and push me to the brink of difficult philosophical questions. I first read it in my twenties, exactly when I should have. It is possibly my favorite book to date.</p>
<h3>7. Eiji Yoshikawa - Musashi</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/musashi.webp" alt="" title="Book cover of Musashi, with a drawing of a samurai with his katanas unsheathed in front of the sun."/></figure>
<p>In 2006, when I read this historical novel, I was just beginning to learn the Japanese language and wished to immerse myself more in the country&#x27;s culture. Unfortunately I wasn&#x27;t able to read it in the original, which could mean that it was a bit premature for me. Then again, it really resonated with me at the time, and who knows if it would have been so powerful a few years later.</p>
<p>Besides being my first look at (a fictional) historical Japan—a charm that still attracts me today—this story worked on me at an emotional level like few others. It is about a kid who grows to become the greatest swordsman of all time. A run-of-the-mill coming-of-age story, but one that I needed at the time for its vision of a true strength that transcends the physical world. Musashi is already a formidable and brutal fighter at the beginning of the story. While his technique evolves as he grows, his real journey is internal.</p>
<p>Despite his many exciting battles against hordes of enemy samurai, the part that struck me the most was when he settles down for a long time to work as a farmer. This turns out to be surprisingly difficult, and it takes him months of painful trial and error to reach an epiphany that will help him far beyond growing vegetables.</p>
<p>The idea that the true warrior is a warrior of the mind helped me at a time when the future was uncertain and I struggled to decide what I wanted to dedicate my life to. If farming can help me grow into a top-class person, I have more options than I thought.</p>
<h3>8. J. K. Rowling - Harry Potter and the Philosopher&#x27;s Stone, Japanese Edition</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/harry.webp" alt="" title="Book cover of the first book of the Harry Potter series, in Japanese. It shows something resembling a very stylized feather or broom."/></figure>
<p>This is an unusual case, because the real <em>dokki</em> for this book came for me not the first but the second time I read it. My first reading, in my early teens, was pretty unremarkable. I guess I liked the book, because I went on to read the other ones, yet it didn&#x27;t leave a special impression on me. But oh, was the second time a historic milestone in my life!</p>
<p>In early 2007 I felt that I knew enough Japanese to attempt reading an actual book in the language. After some consideration, I decided to go with the first Harry Potter book, because it was a book written for kids and because I knew already roughly where it would go, removing the stress of getting hopelessly lost along the way. That turned out to be an excellent choice.</p>
<p>At first it was excruciatingly difficult work. I would lie in bed for an hour or so before sleeping, trying to read the thing, and I would spend most of the time looking up words in a dictionary. I was able to partially understand a sentence or two before growing too sleepy to continue. But the book was accessible and familiar, and I had a good amount of grit, so I kept going. Over the weeks and months that followed, my pace gradually improved. Instead of a sentence per hour, I was able to read half a page, then a whole page, then two. By the time I finished it, six months later, I was reading at a rate of almost 20 pages per hour, and I was actually enjoying it.</p>
<p>Put simply, this book held my hand as I trudged through the first and toughest step to Japanese reading fluency. It gave me confidence, and I never looked back.</p>
<h3>9. Richard Dawkins - The Selfish Gene</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/selfishgene.webp" alt="" title="Book cover of The Selfish Gene. It shows the author&#x27;s name and the title in large letters, and a DNA strand with various animals standing in front of it."/></figure>
<p>I read this book quite late, in 2012, and I&#x27;m glad I waited so long. By then I had absorbed a lot of science in my studies, and I had the basics of complexity science and game theory needed to interpret Dawkins&#x27; book. Reading it much earlier would have confused me, or it might have been too dense for me to handle. But in 2012 I was ready to devour The Selfish Gene, and it shifted my worldview once again.</p>
<p>Not only was this my true gateway to the wonders of biology and to a new level of reasoning about evolutionary processes (evolutionary game theory is a world of fun!), it also got me thinking for the first time about the power of <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a>. Scientific understanding is not only about looking at the facts and figuring out how they work, I realized. It is also about finding <em>better ways</em> to look at the same facts.</p>
<p>Dawkins&#x27; approach to genes has since been criticized by some, and perhaps it is true that he pushed his framing a bit too dogmatically. But I think those critics miss the real value: framings are especially useful because you can change them based on your needs, and he provided us with a <em>very</em> useful option to use when we need it.</p>
<h3>10. Donella Meadows - Thinking in Systems, a Primer</h3>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/ten-books-that-transformed-me/systems.webp" alt="" title="Book cover of Meadows&#x27; book, showing a Slinky forming an arch."/></figure>
<p>By the time I read this book, in 2020, I had a good implicit familiarity with systems thinking, picked up over my years of graduate-level physics and engineering, and Gandolfi&#x27;s &quot;complexity lens&quot; had been my preferred way of seeing the world for some fifteen years already. What this book did was bring it all into clear focus with simple, intuitive mental models that are easy to reason about and talk about. Meadows put this topic more succinctly and accessibly than anyone else to my knowledge, and in endlessly quotable prose (she might well be the most quoted author in my writings).</p>
<p>This book is a guiding light for curious minds, and it gives me hope: ideas that today sound difficult to put into words might just be waiting for the right angle to frame them.</p>

<p>That concludes my brief selection. There are of course many more Damned Good Books in my life, but you get the idea. What is the common thread in all of these reading experiences? This might be different for another person, even if they read and liked the exact same books, but for me it seems to be a sense of wonder.</p>
<p>It was when I felt, viscerally, that my conception of what&#x27;s possible was expanding, my horizons acquiring new and unexplored dimensions—that is when I knew I had a special thing going on. Each of these books literally restructured my mind, set me up anew for the events and the books that would come next, and shaped what I did and how I thought. They all felt like growth. That&#x27;s what I mean when I say they were <em>damned good</em>. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/ten-books-that-transformed-me/54942apsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[New Plankton Valhalla Essay!]]></title>
            <link>https://aethermug.com/posts/metapost-new-plankton-valhalla-essay?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/metapost-new-plankton-valhalla-essay</guid>
            <pubDate>Thu, 31 Oct 2024 02:55:53 GMT</pubDate>
            <description><![CDATA[A meta-cross-post]]></description>
            <content:encoded><![CDATA[<p>Yesterday I published a new Plankton Valhalla essay titled <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">Recursion, Tidy Stars, and Water Lilies</a>, and it marks a little milestone for me. I feel in the mood to celebrate.</p>
<p><a href="https://planktonvalhalla.com/" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a> (PV) is the sister site to Aether Mug—a big sister, since I started it a year and a half earlier. Where AeMug blog posts are frequent, regular, and a bit random, PV essays come &quot;when they&#x27;re ready&quot;, and they tend to be longer (this latest one might be a record, at almost seven thousand words!). And, unlike AeMug, PV is a single thematic thread, and each essay builds on top of the previous ones while striving to assume no academic knowledge in the readers.</p>
<p>So what is this thread about? It&#x27;s a reframing of certain parts of science that I wish were more widely known and understood, with the purpose of making them more accessible. It&#x27;s not quantum physics or general relativity. I believe that with the topics I have in mind (complexity, emergence, evolution, and friends) most of the difficulty is not in advanced math or high abstraction, but in the <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a>—the words we employ to talk about them. The framings used by the experts seem to work just fine for them, but they go counter to so many intuitions of daily life. PV is my attempt to <a href="https://planktonvalhalla.com/20230905-mass-producing-the-mistake-minimizer/" rel="nofollow noopener noreferrer" target="_blank">change that</a>.</p>
<p>That&#x27;s the idea, at least, but it was harder than I imagined. All the convenient technical terms that I would normally use to explain even a simple concept are unavailable to me. Some are too technical to be palatable to most, others are the source of confusion and counter-intuition, the very thing I&#x27;m trying to replace. It&#x27;s doable (after all, there is a <a href="https://simple.wikipedia.org/wiki/Simple_English_Wikipedia" rel="nofollow noopener noreferrer" target="_blank">whole version</a> of Wikipedia written in Simple English), but it leads to some interesting detours.</p>
<p>The ongoing trilogy of essays on PV, of which yesterday&#x27;s is the middle one, addresses the big confusing mess that is human intent and its real and perceived effects on everything around us.</p>
<p>I set out with the idea to write a single essay explaining cybernetician Stafford Beer&#x27;s awesome maxim, <em>the purpose of a system is what it does</em> (POSIWID for short). I love this sentence: it compresses a universal and universally misunderstood truth into eight memorable words. If you ask me, it&#x27;s worth having every 6-year-old master its meaning in first grade. You just need to unpack it a little, and everyone will see its applicability, right? That&#x27;s what I thought in December 2023, before I tried writing the darn thing down.</p>
<p>As it turned out, Beer&#x27;s word choice may be clear enough for someone who is thinking about systems, organizations, and complexity all the time, but it is flawed in memetic terms. The word &quot;purpose&quot; means something specific to people: it&#x27;s what people want the system to do, <em>regardless</em> of what it does. POSIWID is intentionally paradoxical in this sense, and once you get its intent, that&#x27;s fine. Unfortunately, it also makes the concept opaque and frustrating to the non-initiated. Why should we all have to redefine a common word like &quot;purpose&quot; just for this insight?</p>
<p>It took me several drafts of the essay to realize that I had to ditch Beer&#x27;s wording, as much as I liked it. Quietly redefining common words for specific goals is a very confusing practice, a sin forbidden on Plankton Valhalla and Aether Mug. Not only did I have to throw away most of what I&#x27;d written, I also had to iterate with my framings and my word choices, making sure that they didn&#x27;t stray far from their most common definitions.</p>
<p>In the process I decided that I needed three essays, not one, to unpack the concept, and that I had to coin at least one new term—something far preferable to repurposing existing ones.</p>
<p>That&#x27;s how I arrived at <a href="https://planktonvalhalla.com/20241030-recursion-tidy-stars-and-water-lilies/" rel="nofollow noopener noreferrer" target="_blank">Recursion, Tidy Stars, and Water Lilies</a>. It&#x27;s very long because it contains lots of worked examples, including almost thirty images. It took me ten exhausting months, many revisions, and dozens of hours to (re)write. It&#x27;s the longest thing I&#x27;ve published to date, barring my PhD thesis. Yet its goal is only to introduce, in the clearest way I can muster, the key concept I&#x27;m calling <em>Water Lilies</em>. I&#x27;m quite happy with how it turned out. This will be a powerful tool when I tackle the topic of purpose head on in the next essay. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/metapost-new-plankton-valhalla-essay/washingmachine.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Normality and Surprise in an Image-Free Mind]]></title>
            <link>https://aethermug.com/posts/normality-and-surprise-in-an-image-free-mind?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/normality-and-surprise-in-an-image-free-mind</guid>
            <pubDate>Thu, 24 Oct 2024 09:52:17 GMT</pubDate>
            <description><![CDATA[On discovering aphantasia and talking about it]]></description>
            <content:encoded><![CDATA[<h3>Retelling What It&#x27;s Like Inside the Dome</h3>
<p>I&#x27;ve always found the ending of the 1998 motion picture <em>The Truman Show</em> to be wonderful in the literal sense of the word: it leads to some deep and interesting questions. Only much later did I realize that some of those doubts applied to my own life.</p>
<p>The thirty-year-old protagonist of the film, Truman Burbank, has lived his whole life unaware that he is the star of a lifelong reality TV show. But one of the strengths of this story is its ending (spoiler alert): he eventually manages to get out of the dome and into the real world, although we are not shown what happens afterwards.</p>
<figure><img src="https://aethermug.com/assets/posts/normality-and-surprise-in-an-image-free-mind/trumanshow.webp" alt="Poster of the movie The Truman Show, showing Truman&#x27;s face as the aggregation of thousands of still frames of a TV show, presumably Truman&#x27;s life on camera."/></figure>
<p>How will the outside world appear to his naive eyes? Will he be able to build a new, happy life out there, after all that&#x27;s happened? And <em>will he ever be able to convey what this transition felt like to anyone else?</em></p>
<p>That last question is the most familiar to me as I write this. I suspect that Truman, after a period of adjustment to the real world, might write a book titled &quot;I am Truman&quot;, or &quot;My Life, My Lie&quot;, or something more clever than that. The stories he can tell in that book wouldn&#x27;t be that interesting in themselves, because everybody already knows all about them: the interesting part would be the way he expresses what it was like to <em>live</em> those stories.</p>
<p>What is it like to look up at night and believe that the lousy white pizza painted on a firmament no more than a kilometer or two away is what people call the &quot;Moon&quot;? How does it feel to have people constantly do product placement to your face? When all the conversations you&#x27;ve ever had were partly or fully scripted and performed by actors, including your family and closest friends, what persistent state of mind does that induce in you?</p>
<p>Now, that&#x27;s a description I&#x27;d love to read: a journey into another human being&#x27;s radically different perspective on reality. But what if I could write something equivalent myself?</p>
<p>Like this hypothetical post-Show Truman, I&#x27;ve recently found that the experiences I took for granted in the first three decades of my life were less &quot;normal&quot; than I had thought. I have what neuroscientists call <em>aphantasia</em>, a complete lack of visual imagination: the way I think and interact with reality, it turns out, lacks something that the vast majority of people would swear they couldn&#x27;t live without. It&#x27;s as if I had grown up in the artificial world of <em>The Truman Show</em>—not as the protagonist, but as an unwitting extra—convinced, alongside a minority of other extras like me, that nothing was amiss, only to find out suddenly that this is not what the others mean when they speak of &quot;a normal life&quot;.</p>
<p>In my case, the dome of ignorance was not a fake sky physically blocking my escape, but the constellation of words we use to talk about cognition.</p>
<p>The biggest barrier, I have found, was the host of assumptions we make regarding imagination. On my part, I never suspected that imagination might need mental images to be called such; for everyone else, apparently, there was never a doubt of the opposite. Upon first hearing about my aphantasia, people often take that to mean that I am incapable of imagining. After all, the word <em>imagination</em> itself contains the word &quot;image&quot;.</p>
<p>Are aphantasics like me, then, the ultimate realists, minds helplessly pinned down to earth, caged in a world composed of nothing but the concrete objects in front of their eyes, cut off from the wonders of daydreams and legends? The answer is &quot;no&quot;, but to understand it we need to be clear on what we mean by imagination. The definition might differ from one human being to the next, but that doesn&#x27;t make one person&#x27;s definition inherently better than another&#x27;s. Different doesn&#x27;t mean abnormal.</p>
<p>I will begin, then, with a description of my own different sense of what a &quot;normal&quot; imagination has been like throughout my life with aphantasia. Only after seeing what life feels like from a different perspective will we be able to talk about imagination and consciousness in a broader sense of those words.</p>
<h3>Yep, Sounds Normal Enough!</h3>
<p>The irony is that fantastic worlds were always my favorite kind of place. As a small child, I relished my bedtime, when my father would tell me a new fable every evening. Like most children that age, I spent most of my afternoons playing with toy soldiers and knights, action heroes venturing into the lairs of villains, enacting adventures and building new worlds for my own amusement. Never did I realize—not to mention worry—that my imagined worlds lacked the visual element that other kids had.</p>
<p>One day, when I was eleven or twelve, a relative gifted me with a book titled <em>The Hobbit</em>, by a certain J. R. R. Tolkien—I had never heard that name before. It told the story of a little man named Bilbo Baggins going on an adventure across a vast, complex continent, <em>Middle Earth</em>. I enjoyed the story, but it was that world that really struck me. It felt to me as real and meaningful as the one I lived in, if not more. The fact that all of it—its geography, its peoples, its languages and history—could have been created out of thin air by a single mind astonished and exhilarated me.</p>
<figure><img src="https://aethermug.com/assets/posts/normality-and-surprise-in-an-image-free-mind/lotr_it_cover.webp" alt="Cover of a book with an Italian title, Il Signore degli Anelli."/><figcaption>The Italian edition of the Lord of the Rings trilogy I read some 24 years ago.</figcaption></figure>
<p><em>The Hobbit</em> was my gateway to the fantasy genre and to the more nerdy side of me. Soon afterwards, I read <em>The Lord of the Rings</em> trilogy and was hooked forever. I read those books over and over, and later devoured their (then-new) movie adaptations with the same enthusiasm. I remember asking a friend to open a page at random in the trilogy and read any single sentence out loud. I knew the books well enough to correctly guess, every time, the name of the chapter containing that sentence. I engaged in long, unplanned re-enactments of entire scenes from the movies with my schoolmates, and co-authored whole cosmogonies and mythologies for our own imaginary universes.</p>
<p>We also started playing fantasy-themed games—chief among them a tabletop role-playing game called <em>Dungeons and Dragons</em>. We would gather at someone&#x27;s home, sit around a table with pen and paper, and spend hours impersonating dwarves, elves, and wizards in a Middle-Earth-like fantasy world. Everything was done with words only: brief descriptions of the environment and monsters around us, of the actions taken by our characters, and of the complex battles with our foes, were all we had to work with. We filled in most of it with our fantasy. It felt as if I was there myself, immersed in that fantasy world. None of it, of course, was visual in nature, but it didn&#x27;t seem to diminish my enjoyment compared to my friends.</p>
<p>My craving for alternate realities was not limited to elves and dragons. I was always an omnivorous reader. I adored the science-fiction worlds of Isaac Asimov, Arthur C. Clarke, and Michael Crichton, the historical accounts of Magellan and other explorers of the Age of Discovery, the Victorian London of Arthur Conan Doyle and the comedic mundanity of Japanese school life. Like a seasoned multiverse traveler, I loved the feeling of being yanked into a very different reality every time, governed by different rules and values, and all of it felt natural. It felt like an essential way of being me.</p>
<h3>Applied Imagination</h3>
<p>Imagination was not only a means of entertainment for me. The studies I pursued—physics, astronomy, aerospace engineering—benefited from a strong ability to simulate reality in my mind. Even though these disciplines are founded on mathematical formulas and the algebraic manipulation of symbols, it would have been very difficult to follow them without a strong intuition of how those physical mechanisms actually unfold.</p>
<p>It is one thing to remember that the distance of a planet from the Sun follows Kepler&#x27;s first law, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>r</mi><mo>=</mo><mi>p</mi><mi mathvariant="normal">/</mi><mo stretchy="false">(</mo><mn>1</mn><mo>+</mo><mi>ε</mi><mi>cos</mi><mo>⁡</mo><mi>θ</mi><mo stretchy="false">)</mo></mrow><annotation encoding="application/x-tex">r=p/(1+\varepsilon \cos\theta)</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em"></span><span class="mord mathnormal" style="margin-right:0.02778em">r</span><span class="mspace" style="margin-right:0.2778em"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em"></span></span><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em"></span><span class="mord mathnormal">p</span><span class="mord">/</span><span class="mopen">(</span><span class="mord">1</span><span class="mspace" style="margin-right:0.2222em"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em"></span></span><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em"></span><span class="mord mathnormal">ε</span><span class="mspace" style="margin-right:0.1667em"></span><span class="mop">cos</span><span class="mspace" style="margin-right:0.1667em"></span><span class="mord mathnormal" style="margin-right:0.02778em">θ</span><span class="mclose">)</span></span></span></span>; it is another to understand intuitively what it means for an orbit to be elliptical, and how a change in its eccentricity parameter <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>ε</mi></mrow><annotation encoding="application/x-tex">\varepsilon</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em"></span><span class="mord mathnormal">ε</span></span></span></span> (epsilon) affects its shape and behavior. You need to somehow recreate the orbit in your head, and learn how it changes as you change the values behind those mathematical symbols. Perhaps surprisingly, the <em>spatial</em> component of my imagination is intact, so I was able to do that kind of mental simulation just fine even without <em>seeing</em> any of it in my mind.</p>
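<p>As a quick sketch of how the eccentricity reshapes an orbit, one can evaluate Kepler&#x27;s formula at the closest and farthest points of the trajectory (the helper name <code>kepler_radius</code> below is purely illustrative):</p>

```python
import math

def kepler_radius(p: float, eps: float, theta: float) -> float:
    """Distance from the focus per Kepler's first law: r = p / (1 + eps * cos(theta))."""
    return p / (1 + eps * math.cos(theta))

# For a fixed semi-latus rectum p, raising the eccentricity eps squeezes
# the closest approach (theta = 0) and stretches the farthest point
# (theta = pi): eps = 0 is a circle, while eps = 0.9 pushes the farthest
# point out to ten times p.
p = 1.0
for eps in (0.0, 0.5, 0.9):
    r_near = kepler_radius(p, eps, 0.0)
    r_far = kepler_radius(p, eps, math.pi)
    print(f"eps={eps}: nearest={r_near:.2f}, farthest={r_far:.2f}")
```

<p>Watching those two numbers drift apart as epsilon grows is, in miniature, the kind of mental simulation the paragraph above describes.</p>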
<figure><img src="https://aethermug.com/assets/posts/normality-and-surprise-in-an-image-free-mind/Ryugu_rotation.gif" alt="Animation of an asteroid rotating in space. It is shaped like two bowls closing each other&#x27;s mouth, and its surface is rugged and covered in rocks of various sizes."/><figcaption>The asteroid 162173 Ryugu, which Hayabusa 2 visited in 2018. It is a little less than one kilometer in diameter. Source: ISAS/JAXA, CC BY-SA 4.0</figcaption></figure>
<p>For example, I had to apply my imagination when I pursued my PhD research at the Japan Aerospace Exploration Agency (JAXA). They were planning a deep-space mission, called <em>Hayabusa 2</em>, that would send a probe all the way to an asteroid, pick up a sample of its rocks, and bring it back to Earth. My job there was to explore the possible orbits that the spacecraft might use to park itself around that tiny asteroid. To do that, I had to throw away all I knew about the neat circular and elliptical orbits famously studied by Kepler: an asteroid&#x27;s mass can be so small that even <em>sunlight</em> might exert a stronger force on the probe than gravity, wreaking havoc on all the neat equations we normally use for planets around the Sun.</p>
<p>As a matter of fact, most of the closed orbits around such a tiny asteroid were yet undiscovered. My project was a computational exploration of whole &quot;families&quot; of such orbits. Instead of simple oval trajectories flatly laid out on a plane, my algorithm returned shapes that looked like lotus and strawberry flowers, fish, fishhooks, spaceships, water fleas, and other three-dimensional paraphernalia. To find them, I had to convert the speed and position of the probe into the coordinates of an abstract mathematical space of six dimensions, and simulate its movement in search of shapes with special symmetries that would allow the trajectories to close onto themselves, repeating indefinitely. For months, I traced the branching lineages and relations of those families—how a circle morphs into a flower, then into a hook, for instance.</p>
<p>This was a highly visual exercise on the computer screen, but my non-visual imagination somehow worked just fine to understand and make predictions about it. I suspect it has to do with the brain&#x27;s sense of space, a distinct imaginative capacity from visualization that I am perfectly capable of. The science behind this, unfortunately, is still unclear.</p>
<p>Over the years I moved away from academia and shifted my attention back to earth. I grew interested in the design and development of software products, like online platforms to access satellite imagery and mobile applications for financial institutions. As the product manager, my role within the team was to work out precisely the goals of that software, and to ensure that we achieved them quickly and at a low cost, all while offering a pleasant, meaningful experience to the users.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>This was quite a leap from my previous orbit research, but at least one thing hadn&#x27;t changed: I still had to put my imagination to hard work. You can&#x27;t make a product that someone else loves to use without putting yourself in their shoes and pretending you&#x27;re facing the same problems and situations they are facing. I spent weeks at a time immersed in hypothetical scenarios, impersonating my target customers. I sat in Japanese corporate boardrooms as a graying manager, struggling with my less-than-sharp tech savvy to navigate multi-layered map interfaces; I rode motorcycles on dusty Sri Lankan highways as a microfinance loan officer, tracking my daily route of customer visits on a tablet terminal; I returned to my home-on-stilts in rural Cambodia after a long day of work at the local textile factory, and looked at my husband&#x27;s smartphone—the only one in the family—to remind myself of my outstanding loan balance. These flights of the imagination from the office were fundamental for my job, because there is a limit to how much time one can spend directly observing the customers and asking them questions.</p>
<p>Imagination, in other words, was always very much at the center of my experience. In nine cases out of ten, when someone asked me to imagine a hypothetical scenario, I had no trouble with it. When I said things like &quot;I imagine that must have been very exciting&quot; or, metaphorically, &quot;picture yourself on a stranded island&quot;, no one ever batted an eyelash. When (a little patronizingly) I suggested my wife take fewer pictures of landmarks and moments and instead &quot;impress the image into your mind by experiencing it directly&quot; (I can&#x27;t believe I actually said that), she admitted that it sounded like wise advice (then snapped another dozen pictures on the spot).</p>
<p>Communication appeared to be happening as expected. We were, I firmly believed, on the same page on the topic of imagination. In retrospect, though, we really weren&#x27;t, and I might have guessed it. I realize now that there was also a certain amount of cognitive dissonance on my part.</p>
<h3>Omens</h3>
<p>Take the last one and a half stanzas of William Wordsworth&#x27;s <em>I Wandered Lonely as a Cloud</em>, following a melancholic description of &quot;a host of golden daffodils&quot; stretching, in the thousands, near a lake:</p>
<blockquote>
<p>I gazed—and gazed—but little thought<br/>
what wealth the show to me had brought:</p>
<p>For oft, when on my couch I lie<br/>
In vacant or in pensive mood,<br/>
They flash upon that inward eye<br/>
Which is the bliss of solitude;<br/>
And then my heart with pleasure fills,<br/>
And dances with the daffodils.</p>
</blockquote>
<p>That he could imagine going back to that place and describe what he saw there sounded ordinary enough to me—but what did he mean by &quot;they flash upon that inward eye&quot;? What kind of wealth is he talking about? That seemed like a rather puzzling way to put it, one I would never think of writing myself (even assuming I had the poetic talent to do it). I could write that I enjoyed very much being there, and that I would love to visit the place again, but the pleasure itself ended when I physically left the meadow. The feeling it incited in me is gone now. <em>The bliss of solitude</em>?</p>
<p>This happens a lot. I don&#x27;t remember what I thought the first time I read, a long time ago, mysterious passages like this, but I must have filed them away in my mind as fanciful figures of speech, the kind of metaphor sometimes dreamed up by an eclectic artist. The idea of seeing with one&#x27;s &quot;mind&#x27;s eye&quot; sounds strange to me if I take it literally, but not stranger than claiming that someone has a &quot;heart of gold&quot; or that &quot;you are my sunshine&quot;.</p>
<p>There were other hints that something was different about me, although I never made a connection with visual imagery. When I was looking for my first job, I kept getting screening questions about specific episodes of my past experience, like &quot;write of a time when you faced a complex problem in your work, and how you applied reason and initiative to solve it.&quot; I just couldn&#x27;t do it. Even when I was given whole days to reflect on it, I failed to come up with good episodes from my own life. I was quite sure that I had faced <em>some</em> problems in the past, and that I had solved at least a few of them, but none surfaced when I needed them. I had to ask my friends and read old notes and reports to cobble together an answer, and even then I somehow knew it wasn&#x27;t the best example I could give.</p>
<p>Then there was the question of emotions. Most other people are so wonderfully <em>emotional</em>. They laugh hard and cry often, they look sad or brooding or excited most of the time. Some do a good job at masking that, but will open up in more intimate circles, or explode when they can&#x27;t take it any more. Others swing continuously from one colorful mood to another. I&#x27;ve always observed these changes with surprise and, every once in a while, with a (muted) tinge of envy.</p>
<p>My own emotional life is the epitome of stability. It&#x27;s the golf cart to other people&#x27;s roller coasters. I am very lucky to be stuck in a moderately optimistic state of mind most of the time, and I have experienced extreme emotions—both positive and negative—perhaps half a dozen times in my life (although my poor memory of specific episodes might lead me to under-count). My family and I always attributed this difference to &quot;personality&quot;, and left it at that. But was it really a random trait?</p>
<p>This was, and still is, my normality. Even the few areas that were to me incomprehensible at first, like talk of a &quot;mind&#x27;s eye&quot;, the recollection of past episodes, and strong emotions, were gradually subsumed into my definition of &quot;normal&quot; as I grew older and adjusted to society. I rationalized them away, explained them as random and disconnected idiosyncrasies of life, and even came to embrace them. Normal, I believed, is what everyone takes for granted.</p>
<h3>&quot;Imagine an Apple&quot;</h3>
<p>Then, on a day like any other in my thirty-seventh year, that normality cracked: I learned about the existence of aphantasia as a cognitive trait. Almost immediately, I knew that its description closely matched my own experience. Now that they mentioned it, I had never actually &quot;seen&quot; anything voluntarily in my mind, in the sense of perceiving shapes, colors, and visual details that aren&#x27;t in front of my eyes. Nor, for that matter, had I &quot;heard&quot;, &quot;touched&quot;, &quot;smelled&quot;, or &quot;tasted&quot; anything mentally. Did they mean that other people could do those things?</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/normality-and-surprise-in-an-image-free-mind/Aphantasia_apple_test.webp" alt="The classic aphantasia test popular on social media. If you relate with number 5, you probably have aphantasia like me." title="A drawing with five stylized heads in a line, numbered one through five. Inside the first head is a photograph-like realistic image of an apple; in the second one an approximate but clear drawing of an apple; in the third one a less clear and colorful drawing; in the fourth one a simple outline of an apple; and in the fifth one nothing at all."/><figcaption>The classic aphantasia test popular on social media. If you relate with number 5, you probably have aphantasia like me.</figcaption></figure>
<p>It was as if a community of reputable scientists had demonstrated, incontrovertibly, that most people actually have hearts made out of 24-karat gold, and I was one of the very few who didn&#x27;t.</p>
<p>Or as if they had just told me the world I knew was all an immense film set.</p>
<p>My first reaction was one of dismissal, or perhaps denial. I told myself that, even though I <em>might</em> have aphantasia, it didn&#x27;t really matter. It was a mere curiosity. &quot;I function as well as anyone else in society,&quot; I thought, &quot;so what difference does it make how I do it?&quot; I didn&#x27;t read much further about the condition. I mentioned it only in passing to a few friends, then forgot about it. My normality continued.</p>
<p>Only about a year later, when I saw aphantasia mentioned again in social media posts and scientific papers—now more often and in more depth than before—did I think about it a second time. I started reflecting on it, and on my own inner life, more seriously. I began reading extensively about the science behind aphantasia, and found online communities of people in the same situation. We discussed, compared our subjective descriptions, and gradually mapped the similarities and differences between the aphantasic experience and that of everyone else, as well as those between different &quot;flavors&quot; of aphantasia.</p>
<p>I also got in touch with researchers involved in its study, and took the <em>Vividness of Visual Imagery Questionnaire</em> (VVIQ), the standard scientific test to measure the power of the mind&#x27;s eye—for the time being, the only way to detect aphantasia outside the lab. The VVIQ brought further confirmation to my hunch: I very probably had aphantasia.</p>
<p>In this initial phase, which unfolded over the course of several months, the true implications of aphantasia, both for myself and for humanity as a whole, finally began to dawn on me.</p>
<h3>Epiphanies</h3>
<p>First, I realized that there were many more individuals like me, and that they were as puzzled and fascinated as I was. I learned that most people are indeed capable of &quot;dancing with the daffodils&quot;, recreating visual scenes and faces in their minds like an order of discreet wizards.</p>
<p>I also learned that they tend to rely heavily on that ability—so completely, in fact, that the idea of <em>not</em> having mental imagery sounds like a crippling handicap to them. And yet, while I do feel somewhat different, I don&#x27;t feel disadvantaged in the least. This paradox might be explained by some very <a href="https://www.sciencedirect.com/science/article/pii/S1053810024000618" rel="nofollow noopener noreferrer" target="_blank">recent</a> science, showing that people can employ different mental strategies to solve a cognitive task. Aphantasics may have their own non-visual ways to think things through.</p>
<p>By knowing about this condition and the science behind it, I can more effectively find those alternative strategies that make my aphantasic life easier. But aphantasia science is barely ten years old: had I been born a mere fifty years earlier, in all likelihood, I would have lived and died without ever knowing about aphantasia and all of the subtle effects it has on aspects of my consciousness. In this sense, I was lucky to be born when I was.</p>
<p>After a moment of complacency, though, another thought arises: what other fundamental aspects of what makes me who I am are going to be discovered in my lifetime? Which key mental traits will get their own names and internet communities only after I am gone? This is hardly about me alone. Think about all the other people who currently believe they are &quot;normal&quot;—trying to fit in by sweeping their perplexities and unexplained personality differences under a rug (or dome) of ill-defined words.</p>
<p>How many people who consider themselves mentally &quot;typical&quot; will soon discover that they are actually part of a cognitive minority in one way or another?</p>
<p>Ten, fifty, a hundred years from now, will there be anyone at all left in the general category of &quot;neurotypical people&quot;?</p>
<p>I was finally convinced: this was not just a curiosity about me, but a big deal about everyone else, too. Scientists have been studying mental imagery for decades, because they believe it is a key to understanding other core functions of the brain, such as vision processing, attention, and spatial awareness. Now that aphantasia is known, the experimental comparison of people with and without mental imagery has become a powerful new method to answer those old questions.</p>
<p>When my Japanese neuroscientist acquaintances asked me to join some of their experiments, I immediately <a href="https://nautil.us/my-brain-doesnt-picture-things-407444/" rel="nofollow noopener noreferrer" target="_blank">accepted</a>. By letting them study my brain, I knew I could help them deepen our understanding of aphantasia itself, as well as those other major questions about the human mind.</p>
<h3>There Is More Yet</h3>
<p>At that point, I thought I had learned most of what I needed to learn about myself. Although there were still some scientific and social ramifications to be explored, for a while I believed that was, essentially, it: my mind is non-visual, undisturbed by sensory embellishments.</p>
<p>Of course I was wrong. As I took part in long experiments over weeks and months, completing tasks inside fMRI machines and in front of computer screens, I had the opportunity to <a href="/posts/boxed">reflect</a> on my conscious experience more thoroughly, and I researched the matter in depth. I learned the rudiments of <a href="/posts/a-list-of-introspective-descriptions">introspection</a>: tuning my attention into <em>how</em> I think, rather than <em>what</em>. With that new skill, I entered a new sequence of small epiphanies.</p>
<p>I realized, for instance, that my abysmal episodic memory is a trait I share with many other aphantasics, called <em>Severely Deficient Autobiographical Memory</em> or SDAM; I learned that what I considered to be my &quot;emotional stability&quot; might be a mild form of <em>alexithymia</em>, the difficulty in noticing and identifying one&#x27;s emotions, also <a href="https://www.sciencedirect.com/science/article/pii/S2666144624000248" rel="nofollow noopener noreferrer" target="_blank">common</a> among people with aphantasia; I noticed, for the first time, certain qualities of my dreams that I had never noticed before; I also observed how my strong sense of direction helps me, how my lack of imagery <a href="/posts/aphantasia-is-no-creativity-killer">affects</a> my creativity, and the ways I can work around those obstacles.</p>
<p>Some of these things I have already written about, others I will in the future. I don&#x27;t know what happened to Truman after he escaped the TV dome, but I suspect it was a long and fascinating journey. My own journey certainly is. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/normality-and-surprise-in-an-image-free-mind/wright.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Ballistic Moments of Weightlessness]]></title>
            <link>https://aethermug.com/posts/ballistic-moments-of-weightlessness?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/ballistic-moments-of-weightlessness</guid>
            <pubDate>Thu, 17 Oct 2024 08:33:26 GMT</pubDate>
            <description><![CDATA[On purpose and its temporary suspension]]></description>
            <content:encoded><![CDATA[<p>It feels like we have to work hard to earn joy, yet it seems elusive, like trying to catch a fish darting among coral reefs. We try harder, and usually fail harder. We try less, and blame ourselves for not having tried harder. Sometimes, though, we get it for free. Perhaps it has nothing to do with trying.</p>
<p>Some of us sit high up in glossy skyscrapers, others have to listen to angry customers in tiny suburban shops. Some give erudite talks to crowds, and some hammer metal scraps all day. There are those who struggle trying to eat less, and those whose hands shake from hunger. We are lonely and we are together. No two people live identical lives, none face the same combinations of problems. Eight billion minds, every day, live eight billion unique days. Yet, without exception, our minds are all doing the same thing as everyone else: chasing one purpose after the other.</p>
<p>Few claim to possess a supreme purpose guiding them like a North Star throughout their lifetimes. The ones who do may structure their lives like histories, each decision and smaller goal tethered in one way or another to that ultimate purpose. The majority, those who—by confusion, choice, or doubt—do not have such grand existential plans, focus on more down-to-earth goals like careers, families, and the fulfillment of their duties. The fundamental activity of each of us, though, is always to pursue an endless sequence of goals, linked in a vast webbed hierarchy of interconnected purposes that extends beyond what any of us can fathom. We are goal-seeking beings, our minds purpose-chasing organs.</p>
<p>The university student stands up to put her shoes on not as an ultimate end, but because she wants to leave the house; she wants to leave the house not because that is enough for her, but because she wants to go to campus; she wants to go there in order to take a class; taking a class is itself both a goal and a means for the larger goal of finishing the year&#x27;s studies, itself a means to eventually graduate, itself a means to find a good job, itself a means of doing many other things that she doesn&#x27;t fully understand or even imagine yet. Those things, in turn, are likely to be stepping-stone goals to even further goals, albeit gradually vaguer and more mysterious as she tries to trace them with her mind. But no matter, because all she has to do now is to stand up.</p>
<p>Some people insist that the ultimate goal, or at least the most central of all discernible goals, is to become happy. We talk of the <em>pursuit of happiness</em> as the worthiest of enterprises—unassailable and self-evidently necessary. The state of happiness is the X on the map, it is the bright light at the end of a dark and inhospitable tunnel that oppresses us at all times.</p>
<p>This view of happiness as a higher goal leads to some perplexing questions. A goal is a state that doesn&#x27;t exist yet, which we have to work to realize. People are known to sometimes &quot;reach&quot; happiness, but it never seems to last very long. It is not too rare to lose something soon after you have obtained it, but an objective that you have to pursue over and over again from scratch is a strange one indeed. Constantly moving goalposts are not what we typically call meaningful goals.</p>
<p>There is also the question of reliability. A goal is good only insofar as you can draft a plan that will make it happen with a certain level of confidence. Yet the Happiness purpose seems to elude all formulas. Even the precise steps that once allowed one to reach the blissful state will, more often than not, fail on subsequent attempts. Being very rich or powerful doesn&#x27;t guarantee constant joy, and even very poor and oppressed people can experience bliss at times. Often happiness comes to you when it is least expected: it finds you, rather than the other way around.</p>
<p>Being happy is, in fact, not a goal but the freedom from goals. It is made of the brief ballistic moments of weightlessness in which you realize that you need nothing more. It happens when you stop trying or, more accurately, when you know that, for a little while, you are free to stop trying. You&#x27;re allowed to stay still where you are, or to keep doing the same thing you were doing before, without immediately training your aim at the next step in a plan. Happiness is when you sense that no goal can take precedence over the current situation: you can just breathe, and rest your weary purpose-chasing organ. You&#x27;re aware of having enough, even if only for the duration of a sunset walk, and having enough is the opposite of having a purpose.</p>
<p>What does one have to do to get it, then? A nonsensical question. Happiness occurs when you&#x27;re seeking other goals. It is a moment of stasis, but an interstitial one that always implies movement before and after it—which is why it never lasts long. To ask for anything else would be like wishing for every step of a long journey to be on the peak of a mountain. It would be like asking for death.</p>
<p>What one can do, however, is learn to notice and relish those moments when they happen. Chances are, they are far more frequent than one realizes. Everyone around teaches us to be incessantly on a hunt, flinging ourselves towards the next goal as soon as we achieve the last. We teach each other to abhor stasis, to lose not a minute: if you have spare time, the thinking goes, you have time to pursue something extra on the side. This is not bad in itself, but it is harmful when it robs you of that delicate sensitivity to your own joy. When an afternoon of happiness visits you, you must be ready to welcome it. You must stop planning, set aside your thick web of purposes, and simply exist in that moment, outside of time. Only then will the subtler shades and tones of joy become apparent. Observe those moments, notice the multifaceted effects they have on you, let them run their course. Then, when you know it is over, return to your usual work as a purposeful human being. It doesn&#x27;t matter whether you engage in a lifelong vocation or a ragtag assortment of more mundane goals. You will be there when timid bliss happens again.</p>
<p>But treating happiness as something agnostic to any goal also frees you. It means you have permission to do the things you want to do for their own sake, instead of those that—you&#x27;re told—will lead you to happiness. We are made for immersion in a sea of purposes, and that pursuit is our natural state. But it isn&#x27;t the only thing we live on. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/ballistic-moments-of-weightlessness/willian-justen-de-vasconcellos-wF5d_SKJ-jI-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Reading Blood Meridian with Aphantasia]]></title>
            <link>https://aethermug.com/posts/reading-blood-meridian-with-aphantasia?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/reading-blood-meridian-with-aphantasia</guid>
            <pubDate>Fri, 11 Oct 2024 09:30:23 GMT</pubDate>
            <description><![CDATA[What it feels like]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>The universe is no narrow thing and the order within it is not constrained by any latitude in its conception to repeat what exists in one part in any other part. Even in this world more things exist without our knowledge than with it and the order in creation which you see is that which you have put there, like a string in a maze, so that you shall not lose your way. For existence has its own order and that no man&#x27;s mind can compass, that mind itself being but a fact among others.</p>
<p>— Blood Meridian, Cormac McCarthy</p>
</blockquote>
<p>In a recent survey experiment, <a href="https://journals.sagepub.com/doi/full/10.1177/02762366241279404" rel="nofollow noopener noreferrer" target="_blank">Williams and Suggate (2024)</a> compared the reading experience of people with different abilities to form mental images. They had three groups of participants: people with &quot;normal&quot; mental imagery, people with extremely good mental imagery (hyperphantasia) and people with no mental imagery at all (<a href="/posts/aphantasia">aphantasia</a>). They found that the biggest non-obvious difference between these groups was in the measures of &quot;transportation&quot; (into the story) and &quot;emotional engagement&quot; with characters and events.</p>
<p>This replicates the results of <a href="https://www.sciencedirect.com/science/article/pii/S1053810024000126?dgcid=api_sd_search-api-endpoint" rel="nofollow noopener noreferrer" target="_blank">Speed et al. (2024)</a>, who found that &quot;aphantasics were less likely to be engaged with, interested in, and absorbed in the story, and experienced reduced emotional engagement with and sympathy for the story characters, compared to controls.&quot; <a href="https://royalsocietypublishing.org/doi/10.1098/rspb.2021.0267" rel="nofollow noopener noreferrer" target="_blank">Wicken et al. (2021)</a> have also shown that people with aphantasia have a &quot;flat-line physiological response (skin conductance levels) to reading and imagining frightening stories,&quot; meaning that they lack the automatic excitation of nerves that is usually associated with fear.</p>
<p>In other words, there is mounting evidence that people like me, who see nothing at all unless it&#x27;s actually in front of their eyes, have a different relationship with literature from the other groups. It certainly seems so if you look at <a href="https://www.reddit.com/r/Aphantasia/" rel="nofollow noopener noreferrer" target="_blank">r/aphantasia</a>, a large forum about the condition. Most aphantasics there seem to agree that descriptive text is boring because we can&#x27;t picture it in our minds.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/reddit_1.webp" alt="" title="Reddit comment: They&#x27;re just words on a page, basically. I read them and I understand what they mean; that&#x27;s about it. This is also why I (and a lot of other aphants) find descriptive sequences in books boring - we&#x27;re not imagining any environment or person to add those details to in our minds."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/reddit_2.webp" alt="" title="Reddit comment: I’ve been a big reader all my life and love fiction, but I tend to skim/speed read over excessive imagery/scenery. As a kid, visual descriptions were always boring and monotonous to me… now I know why!"/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/reddit_3.webp" alt="" title="Reddit comment: I love reading but kinda skim if it&#x27;s description heavy. I like dialogue!!"/></figure>
<p>My own opinion is mixed. Like many other aphantasics, I love reading both fiction and non-fiction, and I do it all the time. But do descriptions throw me off? I&#x27;m not sure. Sometimes they do, sometimes they don&#x27;t. I don&#x27;t usually skim over description-rich pages, though. Just as I was wondering about this and hoping to bring clarity to the matter, I realized that I had already begun the perfect test without even realizing it. I happened to be reading a book titled <em>Blood Meridian, or The Evening Redness in the West</em>, by the late Cormac McCarthy.</p>
<h2>A Bloody Good Case Study</h2>
<p><em>Blood Meridian</em> is something else. Ignored by critics for a while after its publication, it later came to be called one of the great American novels of the 20th century. Harold Bloom, a famous literary critic, said more than once that it is possibly the greatest American novel by a (then) living author, and wrote at length about how it is a worthy successor to classics like Melville&#x27;s <em>Moby Dick</em> and Faulkner&#x27;s <em>As I Lay Dying</em>. It tells of a gang of mercenaries roaming the Texas-Mexico border in the middle of the 19th century, hunting down Apache and Comanche Indians on behalf of the Mexican government. The book is <em>very</em> experimental in terms of style, eschewing most punctuation and several rules of grammar, but also in terms of the narrative tone itself, which hammers at you with one harrowing and horrifying event after the other as if to check your endurance. The people who like it are willing to be scalped alive to defend it, and the people who don&#x27;t are usually nauseated and repulsed by it for life.</p>
<p><em>Blood Meridian</em> has two qualities relevant to my question about aphantasic reading. First, this novel is famous for being one of the most &quot;graphically&quot; violent ever written. No one is spared from a gory and ignoble death—indigenous people, Americans, Mexicans, men, women, elderly, children, newborn babies, animals, you name it—although some have it delayed with suffering and mutilation.</p>
<p>Second, it is also filled to the brim with long environmental descriptions. There is very little dialogue in the book, the action is short and sparse, and most of the time the characters are trudging through a barren and hostile environment, short on water and often wounded and hunted by murderous enemies. (I must have read variations of the words &quot;they rode on&quot; and &quot;they moved on&quot; at least fifty times between those covers.)</p>
<p>This sounds like the ideal case study for emotional engagement and the role of mental imagery.</p>
<p>Before this, I had only read another novel by McCarthy, <em>The Road</em>, and I had liked it well enough to try another one. It was harrowing in its own right, but totally manageable. I was aware that <em>Blood Meridian</em> was considered a must-read by his fans, and I already knew that it was ridiculously bloody. So when I finally picked it up, I was mentally prepared for it. I didn&#x27;t expect to like it very much, though, because of what sounded like a gratuitous and meaningless use of violence.</p>
<p>Who am I kidding, I wasn&#x27;t prepared for it at all and I <em>loved</em> it.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>How I Read It</h2>
<p>Although <em>Blood Meridian</em> sort of has a plot, it didn&#x27;t feel like that to me. It felt like an experience I was going through, a gritty and realistic nightmare I could opt out of at any time but chose not to. Except I do see stuff in dreams sometimes (aphantasia doesn&#x27;t prevent that), and I didn&#x27;t see anything at all while reading this book.</p>
<p>The brutal violence was a major element on every other page, as I had expected. And—also expected based on the scientific papers I cited above—it didn&#x27;t bother me too much. While reading those descriptions of heinous acts against the innocent, I didn&#x27;t feel like I was seeing them. They were not anywhere near me, even though I felt fully immersed in the story.</p>
<p>That is not to say that those scenes didn&#x27;t affect me at all. My reaction was one not of anger but of resigned disappointment towards humanity. Did I feel horror? I would say so, but is it the same kind of horror felt by a person capable of visually playing the scene in their mind? I have no way of telling, but in my case it was an abstract horror, an incredulity that a human being may be capable of doing those things. And sadness. A sense that this is wrong, that it should never be allowed to happen, yes, but not a visceral or physiological reaction. It feels important but remote, not something related to me personally. Kind of like observing the events from a flying bird&#x27;s (metaphorical) eyes—more than far enough for objectivity, but keen enough to take it all in.</p>
<p>What I found is that the violence was indeed gratuitous, needless, existing for its own sake most of the time, but <em>that was the whole point of the novel</em>. This is not a story of a character fighting for a good cause, with violence as a regrettable side-effect—it&#x27;s a statement about the infinite ruthlessness of the world, a cynical view on the incurable evil of human beings. For this reason, the act of reading it felt meaningful and necessary, even if the events themselves didn&#x27;t. Violence <em>is</em> the story. McCarthy&#x27;s genius was in finding a compelling way to tell it.</p>
<p>Still, there is <em>much</em> more to <em>Blood Meridian</em> than the blood. I will not get into the themes, the philosophy, or the characters here, and instead limit myself to an example. There was only one passage, out of 353 dense pages, that made me pause reading for a moment, in order to take it in a bit at a time. It was a sentence uttered by Judge Holden, the central character in the book and one of the most terrifying villains I have ever known:</p>
<blockquote>
<p>Whatever exists, he said. Whatever in creation exists without my knowledge exists without my consent.</p>
</blockquote>
<p>If you haven&#x27;t read the book, this probably sounds underwhelming after all those mentions of shocking deaths. Yet, when it sneaked up on me while reading, it was like a punch in the stomach. I literally laughed out loud from nervousness. I may have gone as far as swearing, rather uncharacteristically. This is the power of McCarthy&#x27;s writing: he puts you in a queer state of mind where the implications <em>for the state of the universe</em> of what&#x27;s going on, more than the immediate magnitude, become painfully important for you, the reader. Even an action that looks innocuous enough—the judge&#x27;s derailed monologues, a member of the gang suddenly missing, an indecipherable movement on the horizon—feels important, ominous. Like another convincing sentence against your species. The narrator explains nothing and gives no background, so you find yourself constantly guessing about their significance, and trying to form a theory about what this all means, all the scary ramifications. I found this state to be cathartic and refreshing.</p>
<p>The same holds true for things that you&#x27;re pretty sure have no bearing on the characters&#x27; fates. This takes us to the many descriptions of deserts and mountain ranges of <em>Blood Meridian</em>.</p>
<p>Strictly speaking, the descriptions are superfluous, because the author could just as well remove them and still be able to tell the same story. But thinking in this way means giving more importance to the story than the author intends. As I wrote above, his job is to put you in a specific state of mind, to take you through an experience. His descriptions are an important tool for that, and he achieves the goal thanks to his beautiful, heart-wrenching prose. For me, it works. I love it, and I suspect that other aphantasics (and not) would appreciate it too.</p>
<p>It may be useful to compare a descriptive passage from this book with one from another author. An example that often comes up when aphantasics complain about boring descriptions is Tolkien&#x27;s <em>The Lord of the Rings</em>. Even though Tolkien is another one of my favorite authors, and I&#x27;m a big fan of his books, I have to admit that his descriptions aren&#x27;t his greatest appeal for me. They fall flat with me a lot of the time. Here is an extract from the first book of the trilogy, <em>The Fellowship of the Ring</em>:</p>
<blockquote>
<p>The cold increased as darkness came on. Peering out from the edge of the dell they could see nothing but a grey land now vanishing quickly into shadow. The sky above had cleared again and was slowly filled with twinkling stars. Frodo and his companions huddled round the fire, wrapped in every garment and blanket they possessed; but Strider was content with a single cloak, and sat a little apart, drawing thoughtfully at his pipe.</p>
</blockquote>
<p>And here is a similar situation from <em>Blood Meridian</em>:</p>
<blockquote>
<p>He&#x27;s left behind the pinewood country and the evening sun declines before him beyond an endless swale and dark falls here like a thunderclap and a cold wind sets the weeds to gnashing. The night sky lies so sprent with stars that there is scarcely space of black at all and they fall all night in bitter arcs and it is so that their numbers are no less.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/hasan-almasi-5ddKJB6fgTQ-unsplash.webp" alt="A shooting star falling over a desert scenery."/><figcaption>Photo by Hasan Almasi, Unsplash</figcaption></figure>
<p>Clearly, these are two very different books, written in different times and with different sensitivities. Still, to me, McCarthy&#x27;s description of nightfall is so much more satisfying, so much more immersive than Tolkien&#x27;s.</p>
<p>The hobbit passage is just that, a passage among others that can be safely forgotten by the time you get to the next page. Night has come, the stars are appearing, and the characters are huddling around the fire. That may be a pleasant and effective little snapshot if you are able to picture it in your mind. You might mentally fill the sky with beautiful constellations, and contemplate this restful atmospheric moment with your inner eye. But for someone with aphantasia like me, it&#x27;s just some information about the time of day and about the presence of things that I could have guessed were there anyway: stars, clouds coming and going, the characters sitting around a fire. It&#x27;s not bad, but it doesn&#x27;t give me much to work with.</p>
<p>The second passage, on the other hand, makes me feel things. Darkness doesn&#x27;t just &quot;come on&quot;, it &quot;falls here like a thunderclap&quot;. It&#x27;s not an abstract &quot;cold&quot; that &quot;increases&quot;, but &quot;cold wind&quot; that &quot;sets the weeds to gnashing&quot;. The sky doesn&#x27;t just &quot;fill with twinkling stars&quot;, it &quot;lies so sprent with stars that there is scarcely space of black at all&quot;. Every single expression is carefully crafted, making sensory, almost synaesthetic connections and dynamic similes that actually work for me. Everything is moving, and I understand movement even without the aid of vision. Without the need to see or hear it in my mind, I know what it feels like for something to be &quot;like a thunderclap&quot;: it&#x27;s surprisingly fast. Similarly for the other expressions. They go beyond merely stating the presence of something, and help you evoke the sensations bypassing the need for visual simulation.</p>
<p>While reading this book, I often felt tired and jaded. At times, the many long descriptions of murders and of wilderness odysseys made me grow numb to it all (which, I suspect, is precisely what the author wanted). But I never felt the urge to stop reading, as some people claim they did. When I was done, I didn&#x27;t feel scarred, but certainly <a href="/posts/you-don-t-have-time">changed</a>.</p>
<p>While I can&#x27;t say for sure, I think that aphantasia might have played a big role in how I experienced this novel, for better or for worse. I wrote that I felt &quot;immersed&quot; while reading, and also that I felt a certain distance from the scenes unfolding textually. That sounds a little paradoxical. How can one feel immersed in the story when they can&#x27;t play it back in their head like a movie? I don&#x27;t know, but I have some ideas.</p>
<h2>Perception without the Cheese</h2>
<p>I grew up in Italy and ate pasta almost every day for some twenty-six years. There was one fundamental truth I lived by all those years: tomato sauce pasta is complete only once you&#x27;ve sprinkled a generous dose of grated Parmesan cheese on top. It was a belief that never wavered, never in doubt. Then I moved to Japan, and was appalled to find that most Italian restaurants brought me cheese-less pasta. I couldn&#x27;t accept it. I must be mistaken, I thought, and I tasted the sauce to see if, by any chance, the cheese was simply hidden somewhere out of view. It was nowhere to be found. Eventually I got used to it, and you know what? It&#x27;s fine. It&#x27;s still pasta after all, and I can make my own with cheese at home.</p>
<p>This is what I think perception is like, based on my careful observations. Everyone thinks that it&#x27;s a single process, but we, the inhabitants of Aphantasia-land, know deep down that it&#x27;s actually two separate things.</p>
<p>(As a disclaimer, my observations are entirely subjective and thus unscientific, but they seem to be somewhat compatible with what I understand about the neuroscience of visual processing, which works by <a href="https://en.wikipedia.org/wiki/Visual_system#Neural" rel="nofollow noopener noreferrer" target="_blank">levels of abstraction</a>. Who knows, maybe someone will read this and provide a more accurate explanation.)</p>
<p>I think there are actually two separate steps in conscious perception, one which I&#x27;ll call &quot;attention&quot; and another one which I&#x27;ll call &quot;interpretation&quot;. When you and I look at something, these two processes happen together, like pasta with cheese. The difference arises when imagining things.</p>
<p>&quot;Attention&quot;, in my definition, is the passive acceptance of sensory data. The optic nerve (and any other sensory nerve) carries data from your senses to your brain, where it is processed into a coherent representation, and you become conscious of it. The sensory data is agnostic to meaning and vastly more detailed than you can possibly acknowledge. You can scan it and peruse it while it lasts. Some of the patterns in this incoming data may trigger automatic responses and fast-moving emotions like fear and attraction, but only at a very basic and unconscious level.</p>
<p>The other step is &quot;interpretation&quot;, by which I mean the active work of incorporating the sensory data into your conscious understanding of reality. This is where you update your <a href="/posts/embedded-prophesy-devices">model of the world</a>, massaging both the new information and the things you already knew so that they fit in well together. It&#x27;s also where you begin to consider the consequences and ramifications of this update. (This probably happens both at the conscious level and at the unconscious or instinctive one, but I won&#x27;t worry about that distinction too much here.)</p>
<p>Now, these two steps always happen together when a person, aphantasic or not, actually perceives something with their senses. And, as far as I can tell, people capable of mental imagery also do them together when imagining things. When they read Tolkien&#x27;s passage above, for example, they apply their &quot;attention&quot;, at least in part, to the mental representation of the starry sky and the campfire. They also &quot;interpret&quot; these images <em>and</em> the other more abstract and non-sensory information given by the text.</p>
<p>My conjecture is that aphantasics only have the &quot;interpretation&quot; step when they read, and mostly at the conscious level. This removes the knee-jerk reactions and the more visceral emotions, because there isn&#x27;t a direct sensory experience to induce them, but it doesn&#x27;t spoil the pleasure of a good description. A good description is meaningful, and as such it is capable of evoking emotions non-sensorially. It picks at the strings of the semantic web inside my head, and that is how it can make me feel something.</p>
<p>With Tolkien&#x27;s description, the information is too flat and generic to evoke much in me. I&#x27;ve seen the sky clearing of clouds so many times in my life, in so many variations, that reading &quot;the sky above had cleared&quot; doesn&#x27;t point me to any specific emotion. Reading that the stars &quot;fall all night in bitter arcs and it is so that their numbers are no less&quot; is much more nuanced and unique, and includes not only a dynamic fact about what happens, but also a reflection about the mysterious nature of those phenomena. I don&#x27;t need to see shooting stars in order to remember the awe of looking at them and pondering where they might go. The use of the word &quot;bitter&quot; here is a little unsettling, but it tells me about a specific worldview, and I can simulate staring up at the night sky in a similar mood. Having images would be nice, but it would be like the Parmesan powder on my pasta: a tasty addition to what is already its own delicious plate.</p>
<h2>Enjoyment Not Diminished</h2>
<p>Reading with aphantasia, then, is not so different from reading without it, after all. The lack of imagery, in my view, dampens the more instinctive reactions, and makes the reading experience feel relatively more detached from my body. Because of that, certain kinds and styles of writing—the styles that lean on those sensations to work—come out diminished. The bulk of the meaning-making work, though, remains intact. Certain writers, like Cormac McCarthy, rely less on the reader&#x27;s &quot;attention&quot; work, instead speaking directly to their &quot;interpretation&quot; process. This doesn&#x27;t make them inherently better, and saying so would be presumptuous of me. They just tend to work better for at least some aphantasics.</p>
<p>On the other hand, I imagine that there are cases in which the mental imagery may actually get in the way. Violent scenes might be one such case, because a strong visualizer could be so shocked and disturbed by the scenes being depicted that they lose track, miss other important details, or become unable to enjoy the other aspects of the narrative. If this does happen sometimes, it would represent an advantage for aphantasic readers.</p>
<p>Of course there are many other sides of a novel that one can like or dislike. The fact that the prose in <em>Blood Meridian</em> worked very well for an aphantasic mind like mine doesn&#x27;t mean that all aphantasics will surely like the book. Things are fortunately more complex than that.</p>
<p>But I will venture only one confident assertion: an aphantasic who blames aphantasia for their not liking <em>any</em> kind of fiction or descriptive text is almost certainly wrong. Don&#x27;t listen to me. The science says so:</p>
<blockquote>
<p>Finally, our results indicated that imagery level was not strongly related to reading enjoyment.</p>
<p>— <a href="https://journals.sagepub.com/doi/full/10.1177/02762366241279404" rel="nofollow noopener noreferrer" target="_blank">Williams and Suggate (2024)</a></p>
</blockquote>
<blockquote>
<p>This suggests that experiencing a story via visual imagery is not the only way in which stories can be enjoyed.</p>
<p>— <a href="https://www.sciencedirect.com/science/article/pii/S1053810024000126?dgcid=api_sd_search-api-endpoint" rel="nofollow noopener noreferrer" target="_blank">Speed et al. (2024)</a></p>
</blockquote>
<p>And many other bookworms with aphantasia say so:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/reddit_4.webp" alt="" title="Reddit comment: Heavy reader, love books. Very relaxing because there&#x27;s no visualization involved, and I can just enjoy the story. Lots of fantasy and sci-fi."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/reddit_5.webp" alt="" title="Four reddit comments. 1: I love to read. I actually read splatterpunk and erotica horror BECAUSE I can&#x27;t visualize.; 2: I definitely credit the aphantasia with being able to read gorier stories than I would otherwise! I can&#x27;t do anything gross in TV/movies, but with books I can handle a lot more!; 3: Big same! I can&#x27;t watch it, but I can read virtually anything; 4: Yes, love reading Stephen King. Some of the movies not so much."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/reddit_6.webp" alt="" title="Reddit comment: I love to read. It&#x27;s a lifelong passion. I read mainly literary fiction, classical literature, nonfiction, poetry, and plays. It seems to me there are a great many modern people who don&#x27;t care to read. I don&#x27;t think it has anything to do with aphantasia."/></figure>
<p>Now, if you haven&#x27;t read <em>Blood Meridian</em> yet and it sounds intriguing (even if a little scary), I recommend you give it a try. Whether you have pictures in your mind or not, I think it has something to offer to you. ●</p>

<p><em>This post is included in the <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>.</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/reading-blood-meridian-with-aphantasia/artem-maltsev-Cc3QrL4S01Q-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Philosophy Is the Battle Against the Bad Framing Kraken]]></title>
            <link>https://aethermug.com/posts/philosophy-is-the-battle-against-the-bad-framing-kraken?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/philosophy-is-the-battle-against-the-bad-framing-kraken</guid>
            <pubDate>Thu, 03 Oct 2024 11:25:35 GMT</pubDate>
            <description><![CDATA[In the hallowed Aether Mug tradition of strained metaphors]]></description>
            <content:encoded><![CDATA[<p>When you are born, you begin without a single good <a href="/posts/a-framing-is-a-choice-of-boundaries">framing</a> to understand the world, except perhaps some innate ones that clarify the role of your parents for you. In other words, you begin in a stormy sea without any navigational notion, and it is all but impossible for you to understand the world.</p>
<p>As you grow up, receive an education, and experience stuff, you collect framings that help you navigate reality and—why not—maybe even thrive in it. By the time you&#x27;re an adult, in general, you have good enough framings for all the important things that happen in your life. Not always, because there&#x27;s always something new or difficult happening to you that requires new framings, but there is usually someone who can point the way for you. All in all, you are equipped for survival on the main routes across the ocean of reality. Good sailing to you.</p>
<p>Enter the philosopher, the framing-hunter. She has all the same &quot;daily life&quot; framings as you and I do, but she&#x27;s not satisfied. She doesn&#x27;t just want to navigate the usual transoceanic routes through life. She wants to explore the untrodden nooks and crannies of all of the New World coasts. For that, the usual framings that we all use are no good. She&#x27;s venturing into <a href="/posts/bad-framings">bad-framing</a> waters, dangerous seas that no one has been able to tame yet. And her mortal enemy is the Bad Framing Kraken.</p>
<p>What does it mean for something to <em>cause</em> something else? Why do I <em>feel</em> like I&#x27;m conscious? What is the fabric of reality <em>made</em> of? There are plenty of blind spots, uncharted areas of the map of knowledge that our education and daily experience leave blank, or warp conveniently out of sight. We don&#x27;t need those answers in our daily lives! Or do we?</p>
<p>The philosopher heads straight for those bewitched waters, fully knowing that the Kraken is there somewhere, lurking below the surface, aiming to sink her ship. The bad framings may just be too bad to get where she&#x27;s heading, their tentacles too strong and slimy and vicious. The good framings, the routes that lead beyond, might be hopelessly out of reach. But she, and her fellow philosophers, go there anyway. Most perish in the attempt, and end up in the Kraken&#x27;s stomach. But if no one tried, we&#x27;d never get to see those mysterious coasts. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/philosophy-is-the-battle-against-the-bad-framing-kraken/kraken.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Odysseus Black Box]]></title>
            <link>https://aethermug.com/posts/the-odysseus-black-box?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/the-odysseus-black-box</guid>
            <pubDate>Fri, 20 Sep 2024 09:56:27 GMT</pubDate>
            <description><![CDATA[Curiosity as the drive to be changed by the world]]></description>
            <content:encoded><![CDATA[<p>Circe warns Odysseus:</p>
<blockquote>
<p>You will come first of all to the Sirens, who are enchanters
of all mankind and whoever comes their way; and that man
who unsuspecting approaches them, and listens to the Sirens
singing, has no prospect of coming home and delighting
his wife and little children as they stand about him in greeting,
but the Sirens by the melody of their singing enchant him.
They sit in their meadow, but the beach before it is piled with boneheaps
of men now rotted away, and the skins shrivel upon them.
You must drive straight on past, but melt down sweet wax of honey
and with it stop your companions’ ears, so none can listen;</p>
<p>— Homer, The Odyssey Book XII (Lattimore tr.)</p>
</blockquote>
<p>She also plants an astonishing idea in his head:</p>
<blockquote>
<p>but if you yourself are wanting to hear them,
then have them tie you hand and foot on the fast ship, standing
upright against the mast with the ropes’ ends lashed around it,
so that you can have joy in hearing the song of the Sirens;
but if you supplicate your men and implore them to set you
free, then they must tie you fast with even more lashings.</p>
</blockquote>
<p>Soon afterwards, Odysseus goes on and does just that. He has his men tie him to the ship&#x27;s mast, has everyone except himself plug their ears with beeswax, then begs them to release him so that he can commit his sweet suicide in the arms of the euphonious monsters. Fortunately they obey his original orders, tie him tighter still, and everyone survives the brief rendezvous.</p>
<p>Ever since my father narrated the Odyssey to me a long time ago, probably in lieu of a bedtime fable, this has been its most tantalizing episode for me. I asked myself, why would he take that risk? Why go to such lengths to hear a few minutes of beautiful singing? Was that brief &quot;joy&quot; really worth it?</p>
<p>What was tantalizing was not that I didn&#x27;t know the answers to these questions, but that I knew I would do the same in his place. It feels like such a <em>human</em> thing to do. But still, why?</p>
<p>This is where I think a <a href="/posts/a-black-box-view-of-life">worldview based on black boxes</a> helps clear things up a little.</p>
<figure><img src="https://aethermug.com/assets/posts/the-odysseus-black-box/Strings_by_Starlight.webp" alt="Picture of the cellist Yo-Yo Ma sitting in front of a small crowd under a starry sky."/><figcaption>Yo-Yo Ma getting ready to do some inputting. (G. Hüdepohl/ESO, CC BY 4.0)</figcaption></figure>
<p>A black box has <em>something</em> going on within it, but you don&#x27;t need to know exactly what it is. Yo-Yo Ma can play heart-rending melodies with his cello whether or not he understands acoustics or solid-state physics: all he needs is a deep familiarity with the vibrations the instrument will make given specific, finely-tuned movements of his bow. He <a href="/posts/embedded-prophesy-devices">predicts</a> the sounds that will come out, and uses his enormous experience to weave them precisely as he intended. To the player, an instrument works just fine as a black box.</p>
<p>Even if you don&#x27;t necessarily know anything about it, <em>structure</em> is what determines the sound response of the &quot;musical instrument&quot; black box. This includes its shape, size, density, material composition, the properties of the air inside and around it, and so on (the further you go the more blurred you&#x27;ll find its boundaries to be, but that&#x27;s <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">another story</a>). When an &quot;input&quot; interaction happens between something else, like a bow or a finger, and the instrument, vibrations propagate and resonate throughout its structure in characteristic ways, eventually leading to the production of its &quot;output&quot; notes. That&#x27;s why many professional players spend tens of thousands of dollars on their instruments, and why you can instantly recognize if a given melody—<em>Happy Birthday</em>, say—is being played by a piano and not a flute: their different structures process, amplify, and compose the same vibrations differently.</p>
<p>Musical instruments are neat examples of black boxes, but precisely the same argument can also be made for any &quot;organism&quot; black box. A living system processes not only sound but many other kinds of inputs as well, like photons, impacts, food, and temperature, to name a few. Of course, the pathways this information takes inside the organism before producing an &quot;output&quot; action, if any, are mind-numbing in variety and scope. These are highly complex black boxes we&#x27;re talking about. The principle is the same, though: the internal structure of tissues and organs and nerves, and how they are all connected to each other, determines the range of possible behaviors and responses.</p>
<p>Each organism is a unique resonance chamber for the <a href="https://planktonvalhalla.com/20220818-differences-contraptions-and-a-ripple-universe/" rel="nofollow noopener noreferrer" target="_blank">differences</a> (aka information) that traverse it. Its outputs are tuned by evolution to work reasonably well in favor of its preservation and proliferation, within the environmental context it finds itself in.</p>
<p>Needless to say, the same is also true for that king of black boxes, the human nervous system. Every human mind receives information through its various sensory entry points, does <em>something</em> with it, and produces specific patterns of activation that may or may not lead to muscular action—be it to speak, to walk about, or to interact with other things around it in any way. Here we&#x27;re talking about levels of structural complexity beyond compare. The way the brain&#x27;s structure affects its processing is still a mystery—albeit quickly shrinking—and the range of its outputs is still mostly unexplored.</p>
<p>Neuroscience is the attempt to map that out. The aphantasia experiments I <a href="/posts/boxed">participate</a> in are, at their core, nothing more than a systematic and hyper-focused tinkering with the black box inside my skull. The researchers ask, &quot;if I show Marco this hippo picture, which brain regions will light up in the MRI recording?&quot; or &quot;if he sees a picture of a bedroom, then a picture of a lampshade that was in that bedroom, will he be able to recognize that he has already seen it?&quot; Input, output. Do it a few thousand times, and they might begin to get a rough idea of what&#x27;s going on inside the box.</p>
<p>Once you start looking at things like that, many of our common human activities turn out to be very similar in nature to those scientific experiments.</p>
<p>What is a conversation if not a reciprocal tinkering with each other&#x27;s brains to see what will come out? The feedback loop between different people can lead to escalations towards surprising and exhilarating results that would have been impossible for any of the participants on their own. Similarly, when you talk about &quot;seeking inspiration&quot; for an idea, what you mean is that you want to stimulate your brain to produce a novel output by feeding it with the right inputs.</p>
<p>Even a mere change of environment will often incite new behaviors and mental regimes. Karl Ove Knausgård reflects on this in his autobiographical novel:</p>
<blockquote>
<p>To live in Japan, surrounded by all this foreignness, all the things one saw but did not understand, whose meaning one might intuit without ever being sure, was a dream I had long held. To sit in a Japanese house, furnished in simple, Spartan fashion, with sliding doors and paper partitions, created for a neatness that was alien to me and my northern European impetuousness, would be fantastic. To sit there and write a novel and see how the surroundings slowly and imperceptibly shaped the writing, for the way we think is of course as closely associated with the specific surroundings of which we form part as the people with whom we speak and the books we read.</p>
<p>— Karl Ove Knausgård, My Struggle, Book 1</p>
</blockquote>
<p>Call it a sort of &quot;mental plucking and drumming game&quot;, an exploration of what &quot;notes&quot; and &quot;tones&quot; your specific mind can produce by altering the things you interact with.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>But the brain is more interesting than a musical instrument in at least one aspect: oftentimes, <em>the inputs will change the structure of the brain</em>, possibly forever. For example, I believe in the <a href="/posts/you-don-t-have-time">transformative power of some books</a>. Somehow, with a complicated mix of synapse formation, synapse potentiation and depression, dendritic remodeling, and other fancy mechanisms I don&#x27;t understand in the least, the contents of those books are capable of restructuring my brain into novel resonance patterns. When I put the book down, I&#x27;m a slightly different person, one endowed perhaps with deeper overtones, perhaps with a broader dynamic range. It&#x27;s exciting to think about what these new &quot;powers&quot; will allow me to do next: what new conversations, ideas, creative outputs.</p>
<p>Simply out of the sheer complexity of our brains and developmental processes, each of us is born with a unique mental resonance pattern, a &quot;black-box fingerprint&quot; that distinguishes one from everyone else. And as we live, as we watch and hear and learn from everything that happens around us, we accumulate unique assortments of changes that sharpen those differences in mental processing even further.</p>
<p>I think this is an excellent and beautiful truth. As someone aiming to pour his output into the world in the form of prose, I want to accelerate and catalyze this process—make my mind as unique as possible in the limited time I have until I die. Thus I crave interesting things and new things, but not just anything: it has to be the right input at the right time for me to be changed by it. (Also, preferably it shouldn&#x27;t kill me.)</p>
<p>Maybe that is also what Odysseus was after when he told his men not to pour the wax in his ears, with great risk to his sanity and to the success of his crew&#x27;s journey. Perhaps what he sought was not the fleeting joy of a sublime song, nor the right to boast about his bravery, but the kind of input that no other mind had ever survived long enough to make theirs.</p>
<p>Odysseus was human: he took the risk because he wanted to be changed. ●</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-odysseus-black-box/stamnos.webp" alt="Red-figured stamnos (jar) showing Odysseus and the Sirens. (The Trustees of the British Museum, CC BY-NC-SA 4.0)"/><figcaption>Red-figured stamnos (jar) showing Odysseus and the Sirens. (The Trustees of the British Museum, CC BY-NC-SA 4.0)</figcaption></figure>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/the-odysseus-black-box/draper_ulysses.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Close Reading of a Modern Japanese Vignette]]></title>
            <link>https://aethermug.com/posts/close-reading-of-a-modern-japanese-vignette?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/close-reading-of-a-modern-japanese-vignette</guid>
            <pubDate>Thu, 12 Sep 2024 06:08:31 GMT</pubDate>
            <description><![CDATA[Things you can see by looking over a local's shoulder.]]></description>
<content:encoded><![CDATA[<p>My friend M., who is almost as forgetful as myself, accidentally bought the same self-help book twice, so I got a free copy. The book is titled 自分という壁, meaning something like <em>The You Wall</em>, and it was written by Gensho Taigu, an exponent of the charming recent trend of Hip Monks (for lack of a better term)—Buddhist monks with YouTube channels and the tech savvy and street smarts needed to connect with the younger generations from the austere wooden temples they live in.</p>
<p>The book is a modern synthesis of Buddhist philosophy, augmented with a bit of psychology and applied to the daily life of the modern Japanese. Of course it comes with a cute enlightened cat mascot ready to soothe you at every chapter heading. As far as self-help books go, it&#x27;s a nice book, short and to the point, and I think many readers will find something that resonates with them between its covers (unfortunately it doesn&#x27;t look like it&#x27;s been translated into English yet).</p>
<figure><img src="https://aethermug.com/assets/posts/close-reading-of-a-modern-japanese-vignette/jibuntoiukabe.webp" alt="Picture of the book in question"/><figcaption>Gensho Taigu&#x27;s book.</figcaption></figure>
<p>But there was one passage, a little autobiographical parable, that struck me as something worth sharing on Aether Mug. I find it interesting not so much for the thing it teaches (you&#x27;ll see why in a minute) but for how much it inadvertently reveals about Japanese culture. This is a book written by a Japanese for a Japanese audience, so it&#x27;s a great way to see the local culture without the usual Western lens.</p>
<p>Here is an AI translation of the full passage, which I retouched a little:</p>
<blockquote>
<p><strong>What I Learned from a Non-Bitter Foreigner</strong></p>
<p>When I was a student, I had the following experience.</p>
<p>One day, I got on a crowded train carrying a lot of luggage, including a large bag with my karate uniform and protective gear. I put the luggage at my feet. Naturally, other passengers gave me disapproving looks that said &quot;You&#x27;re in the way.&quot; I was aware of being a nuisance, but I had so much luggage that I thought, &quot;Whatever, it&#x27;s not like there&#x27;s anything I can do about it...&quot;</p>
<p>Then, at one point, a foreign man who had got on the train near me turned to me and said, &quot;Your luggage is in the way, so please put it on the overhead rack.&quot;</p>
<p>I was startled to be spoken to so suddenly and was on guard for a moment. But there was no anger or bitterness at all in his manners. He just seemed to be conveying that it would be better to do that, that it would be good for both of us.</p>
<p>And he helped me put the heavy luggage on the overhead rack.</p>
<p>If someone had bluntly said to me, &quot;You&#x27;re in the way, [jerk]&quot; or &quot;You&#x27;re so inconsiderate, [idiot]&quot; my younger self at the time might have been irritated. But I felt a welling sense of gratitude for the foreign man who matter-of-factly pointed it out and helped me.</p>
<p>In this way, even if you have the same feeling, how you interact with the other person can greatly change how it comes across to them.</p>
<p>We Japanese tend to be relatively shy and not good at pointing these things out to others or speaking frankly. But maybe we need to learn this attitude from the foreigners who act straightforwardly without unnecessary implications.</p>
</blockquote>
<p>I sometimes think I&#x27;d like to share more about my observations of Japanese culture as someone living inside it, but I always desist for fear of creating black-and-white pictures and caricatures. I&#x27;ll do a (sort-of) close reading of this passage as an attempt at a middle ground: the text is original, and my comments are just my interpretation.</p>
<p>Let&#x27;s go piece by piece.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<blockquote>
<p>When I was a student, I had the following experience.</p>
</blockquote>
<p>The path to becoming a Buddhist monk is very similar to any other career. You take compulsory education and usually also high school, then some form of higher education, preferably one of the many Buddhist universities in Japan. Then you do a couple of years of training, and finally you&#x27;re ordained. Taigu seems to be an exception, because he was ordained at 10, but he rebelled and left his temple as a teenager, and probably lived as a rather typical student in his teens.</p>
<blockquote>
<p>One day, I got on a crowded train carrying a lot of luggage, including a large bag with my karate uniform and protective gear. I put the luggage at my feet.</p>
</blockquote>
<p>The author is a karate practitioner. The fact that he had to ride a train to and from his dojo is normal in a Japanese city, as the vast majority of transportation is by train rather than car.</p>
<p>A &quot;crowded&quot; train in Japan can be <em>extremely</em> crammed, as in, I-don&#x27;t-have-the-luxury-to-bend-my-arm-to-scratch-my-nose crammed. Big luggage can indeed be quite a problem, forcing people to stretch uncomfortably and lose their balance to avoid stepping on it.</p>
<blockquote>
<p>Naturally, other passengers gave me disapproving looks that said &quot;You&#x27;re in the way.&quot;</p>
</blockquote>
<p>Here the deeper side of Japanese culture begins to come out. People, especially in a big city like Tokyo, will rarely express their discontent directly, because doing so would attract attention to them. Exposing yourself to the eyes of the &quot;indefinite others&quot; (世間 <em>seken</em> in Japanese) is treated as a risky thing to do, because they might look down on you. The pattern is self-perpetuating: since no one exposes themselves, the price of being the first to do so is highest. This makes for very peaceful train rides.</p>
<figure><img src="https://aethermug.com/assets/posts/close-reading-of-a-modern-japanese-vignette/train_crammed_level.webp" alt="Illustrations of five levels of crammed-ness of Japanese trains: 100% means &#x27;Full capacity (Can sit or stand holding on to a strap or a pole near the door)&#x27; 150% means &#x27;Can hold a newspaper open.&#x27; 180% means &#x27;Can read a newspaper by folding it or squeezing a little.&#x27; 200% means &#x27;The bodies press against each other a bit uncomfortably, but you can contrive to read a small magazine.&#x27; 250% means: &#x27;Every time the train car sways your body goes diagonal and you can&#x27;t stir, and you can&#x27;t move your arms.&#x27;"/><figcaption>Official definition of crammed-ness levels in Japanese trains, by the Ministry of Land, Infrastructure, Transport and Tourism of Japan.</figcaption></figure>
<p>At the same time, being unable to address the problem directly will make the problem itself more irritating. The best action plan left to you is to turn the <em>seken</em> weapon around and make it really obvious that you&#x27;re annoyed, without actually saying it. You apply pressure on the perpetrator as an anonymous member of the &quot;indefinite others&quot; collective. This is what the author experienced on the train.</p>
<blockquote>
<p>I was aware of being a nuisance, but I had so much luggage that I thought, &quot;Whatever, it&#x27;s not like there&#x27;s anything I can do about it...&quot;</p>
</blockquote>
<p>There is a tinge of defiance in his attitude here. He&#x27;s steeling his mind against the pressure of the others, telling himself that it&#x27;s not really his fault if his bags are bulky. He is in defense mode.</p>
<blockquote>
<p>Then, at one point, a foreign man who had got on the train near me turned to me and said, &quot;Your luggage is in the way, so please put it on the overhead rack.&quot;</p>
<p>I was startled to be spoken to so suddenly and was on guard for a moment.</p>
</blockquote>
<p>This is meant to be a rather surprising turn of events, as indicated by his action-like narrative. It&#x27;s startling for the average Japanese reader for two reasons.</p>
<p>One reason is that people don&#x27;t usually talk to you out of the blue like that. Even I, born and raised in a rather talkative country, have unwittingly grown into this mindset after many years in Tokyo. Strangers talking to me on the train or on the street happens maybe once or twice in a year, and most of those times it&#x27;s from non-locals. (To be fair, in Osaka and the surrounding Kansai region it&#x27;s relatively more common to chat with strangers.)</p>
<p>The second thing that makes this passage surprising is that the person talking to him is a foreigner. Foreigners (meaning people who are not Japanese natives) are a small minority even in Tokyo, and almost none of them speak Japanese. Thus, having one of them talk to him, a Japanese person who is perhaps not very confident in their spoken English, is a very rare and possibly embarrassing situation. The fact that it happened in front of many other Japanese strangers makes it even worse.</p>
<p>This startled response is something I experience every now and then when I &quot;suddenly&quot; talk to Japanese strangers—even if I talk to them in fluent Japanese. Many expats find this reaction insulting and racist, and I see their point. There is definitely a tendency to stereotype people based on their appearance, and that&#x27;s never a healthy thing to do.</p>
<p>Still, I don&#x27;t see it as the major social issue it might be in other countries. Japanese culture is indeed very homogeneous, and behavioral patterns are indeed largely correlated with ethnic background. The &quot;different face = different behavior&quot; simplification does work quite well in the vast majority of cases.</p>
<p>Also, in most cases of discrimination I experienced directly, the discrimination was not a way to look down on me or any other kind of racial superiority statement, but mere hesitation as to how to behave and how to not look like an idiot themselves. If anything, Japanese discrimination towards me often consists in <em>elevating</em> me to some noble or gifted being somehow genetically out of reach of the aspirations of a Japanese native. This is bollocks, of course, and I feel uncomfortable when it happens, but it&#x27;s much more innocuous than the real racism you see elsewhere.</p>
<p>That&#x27;s not to say that there is no bad discrimination happening in Japan. I have it easy because I look Western, but things could be very different for, say, South Asian immigrants. I cannot speak for them, and I do think that the stereotyping of people should be eradicated in any case. I made this detour to clarify that the author&#x27;s being startled by a foreigner talking to him doesn&#x27;t necessarily mean he was a terrible xenophobe or racist: he was an average member of his culture, with the virtues and the problems that implies.</p>
<blockquote>
<p>But there was no anger or bitterness at all in his manners. He just seemed to be conveying that it would be better to do that, that it would be good for both of us.
...
In this way, even if you have the same feeling, how you interact with the other person can greatly change how it comes across to them.</p>
</blockquote>
<p>This is the key message of Taigu&#x27;s parable, and the least surprising thing for me. More interesting is the fact that he needed a foreigner to display this good example. I think it paints a different picture from the (also stereotyped) view of Japanese people as always polite, always obedient, always calm. Guess what, they&#x27;re human too.</p>
<blockquote>
<p>We Japanese tend to be relatively shy and not good at pointing these things out to others or speaking frankly. But maybe we need to learn this attitude from the foreigners who act straightforwardly without unnecessary implications.</p>
</blockquote>
<p>This is a rather common self-stereotype held by many Japanese. I find it to be an interesting kind of &quot;collective self-consciousness&quot;, and it goes to show the point I mentioned earlier: the discrimination here, as problematic as it might be in principle, does not have a clear &quot;superior side&quot;, a will to oppress and marginalize a minority.</p>
<p>Still, it&#x27;s undoubtedly a gross oversimplification. There is no <em>Foreignland</em>, no single country inhabited only by patient, helpful straight talkers (or whatever other trait they might have observed in a small random sample of non-Japanese people). The redeeming element is the author&#x27;s softening of the statement with &quot;<em>tend</em> to be <em>relatively</em>&quot;, which is more than most would do.</p>
<p>This parable works fine for most of its intended readers, and it sends a positive message using the most benign kind of stereotype. It would be unfair to judge the author harshly on this one, and that&#x27;s not my goal. He is writing for people who will see nothing wrong in that worldview, and who will not be significantly nudged towards xenophobia by the text (if anything, the opposite). This is—I should repeat—a good, unpretentious self-help book.</p>
<p>I decided to dissect this passage to do what I would call &quot;backgrounding&quot;.</p>
<p>In literary analysis, <a href="https://en.wikipedia.org/wiki/Foregrounding" rel="nofollow noopener noreferrer" target="_blank">foregrounding</a> is when an author highlights their message by presenting it in an unusual or surprising way, either in the use of language or in the way it subverts expectations of what will happen. As a reader, you intuitively focus on the unusual bits, and understand them as the most important part. I like to think about it the other way around, though: the events and descriptions that are <em>not</em> at the center of the narrative will tend to be the most typical and representative of what the intended readership will consider &quot;normal&quot;. The contrast works both ways. In that sense, to a large extent, whatever is in the background will <a href="/posts/i-like-the-sound-of-how-you-look-at-me">tell you more about the world</a> than it does about the author themselves. It&#x27;s a neat trick to use when learning about different cultures.</p>
<p>In Taigu&#x27;s story, the foreground (someone asking another person to move their baggage) is something that probably would not have been foregrounded by a Western author. The background (the crowded train, the silent hating, the fact that he was startled by the foreigner), on the other hand, is rather more fascinating. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/close-reading-of-a-modern-japanese-vignette/eutah-mizushima-2TlAsvhqiL0-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[You Don't Have Time to Read Books That Won't Change Your Life]]></title>
            <link>https://aethermug.com/posts/you-don-t-have-time?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/you-don-t-have-time</guid>
            <pubDate>Thu, 05 Sep 2024 10:46:55 GMT</pubDate>
            <description><![CDATA[On some Damned Good Books]]></description>
            <content:encoded><![CDATA[<p><em>Sept 6, 2024 edit: added one sentence to clarify the role of reading for pleasure.</em></p>

<p>The Internet Archive <a href="https://archive.org/about" target="_blank" rel="noopener noreferrer">provides</a> 44 million books you can borrow for free, and Amazon&#x27;s English language catalog contains at least a few dozen million book titles for sale. These two catalogs don&#x27;t overlap completely, but I&#x27;ll take the IA number as a conservative estimate: 44 million is roughly how many books are available to you as an English reader today.</p><p>Let&#x27;s assume that 90% of those are objectively irrelevant to you because they are too niche or specialized for your needs. That leaves 4.4 million books you could read and <em>potentially</em> get something out of. The &quot;potentially&quot; is important, because most of them won&#x27;t do much for you. In fact, the vast majority of those 4.4 million will be a net opportunity cost for you, because they will suck your time away from reading the few among them worthy of being called <em>Damned Good Books</em>.</p><p>A Damned Good Book is a book that changes your life. As you read it, you become a different person.</p><p>It may be because it gives you knowledge that is both actually useful to you in practice and it sticks to your mind, entering your life-long mental toolkit like a new little superpower.</p><p>It may be that it offers you a new <a href="/posts/a-framing-is-a-choice-of-boundaries" target="_blank">framing</a> that immediately clicks for you, and you&#x27;ll never see the world the same again.</p><p>Or it could gently plant an idea in your head that later flowers into something that makes a huge difference for the trajectory of your life.</p><p>Anyone who has read more than a few books knows what I&#x27;m talking about—they&#x27;re the ones that give you pleasure at the deepest level, fulfilling much more than your immediate need to be entertained. But the frequency with which one encounters Damned Good Books varies from person to person. 
It depends on your age, cultural background, what you read before, what you&#x27;re trying to do, and more. In my case, I started as a child with perhaps one Damned Good Book out of thirty titles I read, but I&#x27;ve gotten much better at choosing books since then. Now the number is closer to one out of six. How about you?</p><p>Unless you keep a neat record, it&#x27;s not easy to come up with a number, but I can help you. First, roughly how many books do you read in a year?</p><p style="text-align:right"><strong>You:</strong><br/><em>Something like<br/><select style="font-style:italic;font-weight:700;text-align:center;background-color:transparent;border-bottom:3px solid #772233"><option value="5">five</option><option value="12">twelve</option><option value="20">twenty</option><option value="30">thirty</option><option value="60">sixty</option><option value="100">a hundred</option><option value="0">zero</option></select> per year.</em></p><p>Now think about the one hundred books you read in the past <span><span></span> years</span> or so (if you&#x27;re younger than that, try to answer the following question by extrapolating to one hundred from the number of books you&#x27;ve actually read). Roughly how many of those have stayed with you, in one way or another, and will stay with you for the rest of your life? It&#x27;s okay if you&#x27;re not entirely sure. 
We only need a very rough estimate here.</p><p style="text-align:right"><strong>You:</strong><br/><em>Maybe <select style="font-style:italic;font-weight:700;text-align:center;background-color:transparent;border-bottom:3px solid #772233"><option value="5">five or so</option><option value="12">twelve or so</option><option value="20">twenty or so</option><option value="30">thirty or so</option><option value="60">sixty or so</option><option value="100">a hundred or so</option><option value="0">zero</option></select><br/>out of a hundred.</em></p><p>With <span></span> Damned Good Books out of your last hundred, that means that you&#x27;re catching one of them approximately <strong>every <span></span> titles</strong> you try reading.</p><p>Damned Good Books are gifts from the universe to you. They&#x27;re made for you, and each is timed to open itself to you when you need it—never too soon, never too late. </p><p>Damned Good Books are better than plain &quot;books&quot;, because by changing you they allow you to grow, and to grow is the only way to move on to the next stage in your life, whatever stage you choose to make it. Changing is synonymous with being alive. If there is any non-changing to be done, you can leave it to the rocks: they can do it better than you anyway.</p><p>One might object that reading books for pleasure and to escape reality is a perfectly acceptable thing to do. I agree, with a caveat: you can do those things better with Damned Good Books. The pleasure of Damned Good Books is better than the pleasure of books, and likewise for reality-escaping. Damned Good Books are how you marry pastime with value.</p><p>We&#x27;ve found that one in every <span></span> books you read is a Damned Good Book. 
That means that, out of the 4.4 million books that you might try reading, 220,000 books would be capable of blowing your mind anew, and at exactly the right time you need it.</p><div class="xl:col-start-2 xl:h-0 xl:overflow-visible"><figure class="max-w-full xl:relative" id="floating-2"><img src="https://aethermug.com/assets/posts/you-don-t-have-time/billy.webp" alt="" class="max-w-full"/><figcaption>One of these BILLYs is waiting to be filled. Just sayin&#x27;. Source: Jean-Etienne Minh-Duy Poirrier, CC BY-SA 2.0</figcaption></figure></div><p>That&#x27;s <em><span></span> life-changing books.</em> Let that sink in for a while. You&#x27;d need 1,100 six-shelf Ikea BILLY bookshelves to hold them all.</p><p>To me, that re-defines what &quot;reading&quot; means. It&#x27;s a treasure hunt. A search among piles of inert rocks for the magic gems that will make the You of the future. I&#x27;m not here to dictate anyone&#x27;s life goals, but if you were to choose, as one of your goals, to maximize the number of Damned Good Books you read before you die, I doubt you would regret it.</p><p>Maximizing that number means being <em>ruthless</em> in selecting which books you pick up in the first place, and <em>brutal</em> in quitting those that, after reading for a while, turn out to be less than <em>damned good</em>. This is an art, not a science, but it gets much easier once you learn to let go of your inertia and your weird sense of duty and your let&#x27;s-give-it-a-fourteenth-chance mindset. Don&#x27;t ask yourself &quot;can&#x27;t I read one more page?&quot;—ask &quot;is this already beginning to transform me or not?&quot;</p><p>Even ruthless won&#x27;t really be enough. Assuming you&#x27;re a very fast reader, and made even faster by the sheer awesomeness of each of your Damned Good Books, you could perhaps read one every four days (including some time to quickly discard non-life-changing books). 
Then it&#x27;s going to take you 2,411 years to read them all.</p><p>So don&#x27;t let me hold you any longer. Go find the next one. ●</p>
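<p>For the numerically inclined, the back-of-envelope arithmetic above fits in a few lines of Python. The one-in-twenty hit rate is just the example figure used here; substitute your own.</p>

```python
# Reproducing the essay's back-of-envelope numbers (assumed rates, not data).
total_books = 44_000_000      # books available to an English reader today
relevant = total_books // 10  # assume 90% are too niche or specialized for you
hit_rate = 20                 # example: one Damned Good Book per twenty read
damned_good = relevant // hit_rate
years_to_read_all = damned_good * 4 / 365  # one book every four days

print(damned_good)                # 220000
print(round(years_to_read_all))   # 2411
```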
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/you-don-t-have-time/jaredd-craig-HH4WBGNyltc-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Boxed]]></title>
            <link>https://aethermug.com/posts/boxed?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/boxed</guid>
            <pubDate>Thu, 29 Aug 2024 04:26:06 GMT</pubDate>
            <description><![CDATA[Things I learned after lying in an MRI machine for 30 hours]]></description>
            <content:encoded><![CDATA[<p>Last year a researcher from a Japanese lab asked me if he could borrow my <a href="https://en.wikipedia.org/wiki/Aphantasia" rel="nofollow noopener noreferrer" target="_blank">aphantasic</a> brain for an experiment. Neuroscientists have been studying the weird capacity most people have of &quot;visualizing&quot;, or mentally conjuring pictures of things that aren&#x27;t there, for a long time. The problem is that the brain is such a wild tangle of interconnections that it&#x27;s hard to tell which of its parts are involved in visualization and which are unrelated, or downstream of it. Comparing people who visualize with aphantasics who don&#x27;t is a very convenient way to partially work around that problem.</p>
<p>I agreed to participate in the experiment. That decision led to me spending an inordinate amount of time deep inside an MRI, looking at pictures or trying (and failing) to imagine pictures. The machine, worth half a dozen Lamborghinis, is hidden somewhere in a basement on the University of Tokyo campus.</p>
<figure><img src="https://aethermug.com/assets/posts/aphantasia/mri.mp4" alt="Animation of an MRI brain scan."/><figcaption>Click to play the animation. Source: Fastfission, CC BY-SA 3.0.</figcaption></figure>
<p>Essentially, an MRI is a big pipe you climb into, which happens to be capable of seeing right through you. Wrapped around the pipe is a hidden network of metal coils cooled to 9 degrees Celsius above absolute zero, constantly switching very large currents to fill the hollow inside with a strong magnetic field and shooting (harmless) radio waves at some corner of your body—the brain, in this case.</p>
<p>In neuroscience, MRI is used to detect microscopic changes in blood flow inside the brain, allowing the researchers to obtain a 3D video map of which neural networks are active at each instant.</p>
<p>In my experiments, the researchers record what goes on in my head when I look at a picture of a fire hydrant, then a camel, then a piece of wood, and so on, covering thousands of pictures over the months. Other times they ask me to imagine various things, or to remember images that I was shown earlier.</p>
<p>The scientific side of these experiments is very interesting, but there are many other places to read about it in depth. Here, instead, I want to leave some notes on my subjective observations of this experience. I&#x27;ve found that lying down in a cramped space for hours at a time, fully awake and with only very weird kinds of pastimes, leads to some unusual reflections.</p>
<p>These are a few of those reflections, in approximate order of decreasing banality.</p>
<h2>1 - It&#x27;s noisy</h2>
<p>Anyone who has spent more than two seconds in an MRI knows this, but if you&#x27;re thinking of trying for the first time, be warned: those things are LOUD.</p>
<p>The high-frequency current switching causes the metal coils around you to expand and contract and vibrate like a jackhammer. You feel like a rescue team is trying all they can to get you out of an unbreakable iron coffin. If you&#x27;re sensitive to loud noises, you might want to avoid it.</p>
<p>Usually they give you foam earplugs before you go in, and those make the experience bearable for me. Still, there are risks. The other day I mis-inserted one of the earplugs and got a taste of the unabridged experience: I tried to continue like that for a while, but I was so worried about my eardrum that eventually I had to stop the experiment early. When I got out I felt like the left side of my head had been in the front row at a rock concert. Not healthy.</p>
<p>Even with properly-inserted earplugs, the racket echoing all around you is the most unpleasant part of the experiment. It&#x27;s surprisingly draining. If it weren&#x27;t for that, I think I could keep going just fine for much longer.</p>
<h2>2 - It&#x27;s <em>very</em> good at putting you to sleep</h2>
<p>Remaining horizontal in a semi-dark space for a while doesn&#x27;t help you stay awake. This shouldn&#x27;t come as a surprise to most people, but it surprised me. I usually have the opposite problem, because I <em>never</em> nap or sleep during daytime (as in, maybe once a year), and not for lack of trying. There is something about the infinite possible activities that I could be doing instead of sleeping that excites my brain during the day, even when I&#x27;m very tired.</p>
<p>Yet, looking at long sequences of unrelated pictures or—much worse—the same set of pictures over and over has an almost magical power to induce slumber. I haven&#x27;t succumbed to it yet, but I&#x27;ve come close a few times. I should try something similar for my next napping attempt.</p>
<figure><img src="https://aethermug.com/assets/posts/boxed/mri.webp" alt="A gray surface with a gray stripe running through it."/><figcaption>The scenery as seen from inside the machine.</figcaption></figure>
<h2>3 - My brain craves novelty</h2>
<p>Aside from the problem of sleepiness, I was shocked at how the lack of new stimuli can strain my grasp on my mental faculties.</p>
<p>Some experiments consist of seeing the same images hundreds of times, or repeating the same task over and over with only minor variations. Since an MRI pipe is essentially an isolation chamber, those tasks and images (and the noise) are the only sensory input you&#x27;re going to get. After a while, my brain started rejecting them.</p>
<p>At first, it wasn&#x27;t a big deal, but after a few weeks of those repetitive tasks, even the simple act of paying attention to what was in front of my eyes began to take a tremendous effort of concentration. I came out in shambles every time, depleted of life force, and I even thought of the word <em>torture</em> once or twice. But hey, someone&#x27;s gotta do this.</p>
<p>Luckily that repetitive series of experiments ended just before I reached my breaking point. I wonder if this is something that can be trained, but I&#x27;m not sure I&#x27;d want to do that either.</p>
<h2>4 - Novel, random images are great for creativity</h2>
<p>Most of the experiments involve looking at non-repeating images, meaning that I see each one only once and never again. My brain is apparently fine with this and, with a good infusion of caffeine, it&#x27;s actually happy to go along for the ride.</p>
<p>This is an experience you don&#x27;t usually have in your daily life. Normally you know, more or less, what to expect to see next. Even when you can&#x27;t predict what&#x27;s coming—when you&#x27;re watching a movie, for example—things have some kind of connection to each other, some theme or context that ties them together. With random images in a lab, none of that exists. Now you&#x27;re seeing a picture of a man blowing smoke from his mouth in the Grand Canyon, next it could be a close-up on a smudged corner of a book, or a group of penguins near an ice cliff, or a pile of broken CRT monitors, or something else altogether.</p>
<p>Every four seconds or so, you see something new that you would never have guessed from the previous pictures. Each time it&#x27;s a different cascade of activations in your brain, evoking random memories, creating unexpected connections, and stimulating thoughts that would never have occurred to you.</p>
<p>Something strange happens: even though it&#x27;s all purely random, the brain tries to make sense of it all, tries to find patterns and associations. With no time to establish conventional <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a>, it has to improvise, take in the images in a partly-unconscious way, without thorough processing. This, I think, is a great way to stimulate creativity.</p>
<p>A few times during those experiments, I came up with so many ideas—things to write about, better ways to explain things, new intriguing questions about the world, etc.—that my biggest worry was trying to remember them all for the 20 or 30 minutes left until the end of the session. In the short pauses between bursts of images, I tried to rehearse the list of ideas with shortened mnemonics, but found that I could only keep around five in my head before I forgot some of them.</p>
<p>This is an amazing state to have my brain in, and I wish I could induce it at will. Social media feeds look similar on the surface, but they don&#x27;t give you truly random stimuli. Their contents are highly edited to appeal to the viewer, and come with lots of cultural baggage and trend-following. They don&#x27;t work to unhinge my creativity—rather, they trap it.</p>
<p>What I need is an app that does nothing but show you truly random pictures, with no curation and no memetic aspirations. If you know of one, please let me know.</p>
<figure><img src="https://aethermug.com/assets/posts/boxed/dylan-nolte-qxYDhV0rBPk-unsplash.webp" alt="An athlete&#x27;s hand being wrapped in tape by someone."/><figcaption>A random picture. I bet you didn&#x27;t see that coming. Source: Dylan Nolte, Unsplash </figcaption></figure>
<h2>5 - Our sense of time is non-linear even at the shortest scales</h2>
<p>Everybody has experienced the subjective relativity of time. When you have fun, it flies. When you&#x27;re waiting anxiously, it never budges. But, before these experiments, I had never realized just how warpy my perception of time can be even on the scale of a couple of seconds.</p>
<p>These tasks with pictures require me to stay focused on what&#x27;s shown on the screen. To ensure that I&#x27;m not distracted, the screen will show certain cues at random intervals, to which I have to react by pressing a button. Usually, the cue is the repetition of the same image twice in a row: normally each image stays there only four seconds, but sometimes it will flash back to itself instead of being replaced by a different image. This sounds like an easy thing to spot, and most of the time it is. But when I&#x27;m not in an optimal shape, it can become fiendishly difficult.</p>
<p>I find myself asking: did I see this same picture <em>two seconds ago</em> or not? Is this still the first four seconds?</p>
<p>This is so strange and almost disconcerting. In the highly controlled environment of the lab, I can easily notice these lapses in my perception of time, but what about all the other times? Does my sense of time ebb and flow like that every few seconds of my waking life?</p>
<h2>6 - Having your thoughts monitored feels... weird</h2>
<p>Sometimes, during the experiments, I wonder about things like:</p>
<ul>
<li>Is it enough to just watch, or should I think intensely about the subject?</li>
<li>I was asked to imagine the picture of a duck, but I imagined a moving duck, beating its wings quickly. Will this taint the results?</li>
</ul>
<p>In order to protect the objectivity of the experiments, the researchers don&#x27;t tell me exactly how they&#x27;re analyzing the fMRI data, or what their hypotheses are. Still, I know that they happen to have the closest thing ever to <em>mind-reading technology</em>, which has interesting implications.</p>
<p>In previous studies, they have successfully trained generative AI to read fMRI scans and replicate the images people were thinking about, or to add captions describing those mental images. Considering the amount of data they&#x27;re taking of my brain, it would be possible for them to train an AI for my specific brain patterns.</p>
<p>I don&#x27;t think many people have experienced this situation before. People have had tyrants and Big Brothers spying on their actions and words for millennia, but has anyone ever had their thoughts monitored? In a sense, I feel naked.</p>
<p>For now, this is not a big problem. At worst, I might worry that they would know when I&#x27;m distracted, and I might try not to think about kinky stuff (usually a big mistake). Not the end of the world. But I can&#x27;t help imagining the dystopian societies that could emerge if the same technology were somehow scaled to portable sizes and affordable prices.</p>
<h2>7 - We are usually oblivious to what our brains are doing</h2>
<p>I&#x27;ve only ever tried short-ish sessions of mindfulness-like meditation, where the goal is to free your mind of thoughts, focus on your breathing, or something along those lines. I don&#x27;t know about other types of meditation, but I would guess that most of them are about relaxing or at least not thinking very hard. All of these may help you, in one way or another, to feel better and more centered and even, in some cases, to know your body and mind better.</p>
<p>But I doubt there is a kind of meditation that prompts the level of introspection that long hours in an MRI machine doing simple but focused tasks can give you.</p>
<p>Inside the machine, I have to remain very still with no phone to check, nothing to read, no tossing and turning just for the sake of it, nothing to fiddle with, and—given that the tasks all require a moderate but constant level of attention—no opportunity to get really lost in thought. I&#x27;ve been doing this for over 30 hours now, and counting.</p>
<p>In other words, I got to spend a lot of time in the peaceful, forced company of myself, not too cognitively busy, but not focusing on breathing or clearing my head either. It&#x27;s a sort of Goldilocks zone not only for creativity (point 4), but also for the observation of how my mind works.</p>
<p>In this process, I&#x27;ve learned more about myself than I had in the previous two decades. For example, it&#x27;s how I uncovered the details of how my non-visual imagination works—something I had never noticed before. One day, right after a session in the MRI, I ran to a cafe and wrote a Twitter thread explaining exactly what that is like for me. The thread was surprisingly successful, attracting keen interest from lots of other people (you can read a copy of the thread <a href="/posts/aphantasia">here</a>). Apparently it&#x27;s a kind of description that is usually hard to come by, yet it came easily to me after all that time of confinement.</p>
<p>This unexpected success at discovering new things about myself has encouraged me to try this introspection outside the lab, too. That&#x27;s how I first realized I have <a href="https://synesthesia-test.com/time-space-synesthesia" rel="nofollow noopener noreferrer" target="_blank">time-space synesthesia</a>, something that I hadn&#x27;t even heard of before. It&#x27;s also how I realized that wearing special earplugs in noisy places helps me understand what the people around me are saying, mitigating a mild auditory processing disorder that I had never thought much about. And so on, with a new quirk or peculiarity coming to my attention every now and then as I do other things.</p>
<p>Somehow the mere fact of staying still in a state halfway between emptying my mind and filling it to the brim has helped me become more attuned to myself. I feel a bit more centered in the moment, so to speak. Much more than before, I now consider the brain to be an organ that you can observe and study, a <a href="/posts/a-black-box-view-of-life">black box</a> you can tinker with (carefully) to better understand it. This kind of exploration can be very fruitful, showing you what works best for you, what to avoid, how to be kind to yourself, and generally how to &quot;use&quot; your brain more expertly. ●</p>

<p><em>This post is included in the <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>.</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/boxed/vladimir-kramer-xzZtV9ED5Bs-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Blimp Blues]]></title>
            <link>https://aethermug.com/posts/blimp-blues?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/blimp-blues</guid>
            <pubDate>Thu, 22 Aug 2024 06:25:04 GMT</pubDate>
            <description><![CDATA[I mean, don't you want it too?]]></description>
            <content:encoded><![CDATA[<p>The other day I felt the sudden and urgent need of boarding a blimp.</p>
<p>I am the son of a pilot. Throughout my life I have flown on a variety of aircraft. Big and small airliners, of course, as well as ultralight aircraft and gliders. I have flown in the pope&#x27;s helicopter with my grandmother, who sat in the pope&#x27;s heavenly soft seat like a saint herself, and traversed the Italian peninsula with it. In every case, I&#x27;ve found flying to be an exhilarating and stunning and insufficient experience.</p>
<p>Human flight is amazing, and it is also frustrating for how <em>functional</em> it is. For functional reasons, the cabin windows on passenger aircraft are one-twentieth the size I wish they&#x27;d be, and do not extend under my feet. Most of the time, the airplane flies at such high altitudes that I can&#x27;t make out what is going on down there, and when it&#x27;s lower down it goes so fast that I can&#x27;t take a good long look anyway. Helicopters emit an unacceptable amount of noise. These machines are all so close to the quintessential experience of flight—as I envision it—yet they so <em>intentionally</em> avoid it. I could learn to pilot an aircraft myself and take control of part of that experience, but I&#x27;d rather not be desperately focused on staying alive for the next five minutes if I&#x27;m to enjoy the whole stunt (this is also why I dislike driving and why I may never be able to live outside Tokyo again). Heavier-than-air flight is like a cliff-hanger ending to a film whose Part 2 has been forever canceled.</p>
<p>For mysterious reasons, I and everyone else seem to forget that we ever invented airships: a way to fly as slowly and as peacefully and as low as you want, a way to grant you the luxury of going through the multiple stages of awe on your own time, of thickly conversing with a landscape rendered so alien by the mere change of viewing angle and the inexplicable <em>it&#x27;s-all-just-one-place-ness</em>. Excluding space stations, how else do you get a dose of the <a href="https://en.wikipedia.org/wiki/Overview_effect" rel="nofollow noopener noreferrer" target="_blank">overview effect</a>?</p>
<p>For all these reasons, I would like to ride one.</p>
<figure><img src="https://aethermug.com/assets/posts/blimp-blues/the_balloon_merse.webp" alt="Painting of a striped spherical balloon flying low with a man in its basket."/><figcaption> The Balloon, Pál Szinyei Merse</figcaption></figure>
<p>Apparently, blimps went out of fashion in the nineteen-thirties, when several of them exploded very theatrically or went out of control and crashed, burning or smothering their terrified passengers and losing their cargo. We still have a few of them today—the estimates I found count 25 in total—and they&#x27;re mostly used to advertise car tires and beer, with a few serving scientific observation. The very rare recent initiatives mostly aim for laudable and functional things like providing <a href="https://www.sceye.com" rel="nofollow noopener noreferrer" target="_blank">stratospheric internet hubs</a> and <a href="https://spectrum.ieee.org/lta-airship-faa-clearance" rel="nofollow noopener noreferrer" target="_blank">&quot;humanitarian and cargo transport&quot;</a>. That&#x27;s not what I want.</p>
<p>The year the Hindenburg exploded, 1937, was also the year with the <a href="https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year" rel="nofollow noopener noreferrer" target="_blank">highest</a> per-capita fatality rate from car accidents in US history. Those deaths weren&#x27;t as flashy as a gigantic ball of fire in the sky, so people-killing cars have remained in fashion to this day; in the meantime, we&#x27;ve managed to make driving roughly 14 times safer than it was back then (relative to miles traveled). Surely we can build safer and cheaper passenger airships today, too. What&#x27;s blocking us?</p>
<p>Consumed by these burning questions, I did what every rational person would do: I looked for answers on Reddit.</p>
<p>I will not relate everything I found there, but I&#x27;ll paraphrase what I thought was the most convincing answer. To a redditor asking &quot;why haven&#x27;t airships evolved to be viable today?&quot; a sardonic commenter answered: &quot;but they <em>did</em> evolve: today we call them jetliners.&quot;</p>
<p>I admit that this is a powerful way to put it. It can shut you up for a minute. Hmmm... it&#x27;s true. Jetliners are a much faster way to get to places than airships can ever hope to be. They need much smaller hangars, and they carry more cargo relative to their size. There really is no competition. Heavier-than-air wins hands down.</p>
<p>But wait a minute: it wins at what game? It evolved into airplanes based on which measure of fitness? Apparently, the commenter meant that today&#x27;s aircraft are the culmination of a long series of optimizations toward the goal of traveling from A to B fast, safely, and with lots of cargo. But is that <em>the one</em> goal of flight?</p>
<p>I can think of an alternative goal: &quot;enjoy and absorb the wonder of human flight and the wholeness of nature.&quot;</p>
<p>Right after Joseph-Michel Montgolfier had the idea for the hot-air balloon, he wrote to his brother and co-inventor Jacques-Étienne the following message:</p>
<blockquote>
<p>Get in a supply of taffeta and of cordage, quickly, and you will see one of the most astonishing sights in the world.</p>
</blockquote>
<p>A century after that, another pair of brothers (is this a coincidence?), still children, received as a present a toy helicopter. They much enjoyed playing with it, and even contemplated building a scaled-up version large enough to carry both of them. Later, they claimed that this toy was the initial inspiration for their pursuit of heavier-than-air flight, the <a href="https://en.wikipedia.org/wiki/Wright_brothers" rel="nofollow noopener noreferrer" target="_blank">result</a> of which is now history.</p>
<p>While I can&#x27;t know for sure, I get the impression that neither the Montgolfiers nor the Wrights were primarily motivated by the goal of &quot;traveling from A to B fast, safely, and with lots of cargo&quot;. More likely, based on these reports, is that they simply wanted to <em>fly</em>. They wanted to know what it was like to be up there with a lot of air between their feet and the ground. They wanted to be able to visit this house and surprise that lady, loop around a clock-tower and under a bridge. Like Peter Pan. How I would like to be in their shoes the moment they finally attained some of those powers!</p>
<p>If my guess is right, the driving purpose for the invention of flight—both balloons and airplanes—was lost somewhere along the way. It looks like people forgot about it. Today we consider only transportation, business, war, and efficiency worthy of investment.</p>
<p>That&#x27;s not to say that the goal of relishing in the miracle of human flight hasn&#x27;t been achieved at all. Any form of flying, even with all the functional restrictions, is a wondrous experience, and it goes a long way towards the kind of contemplation I wish for. Many people choose to learn how to pilot small aircraft for nothing other than the thrill and fascination of being up there in the sky. That&#x27;s a <em>Good Thing</em>. I&#x27;m not here to diminish that experience for the thousands who love it. All I&#x27;m saying is that we could make it even better, but apparently we aren&#x27;t trying very hard.</p>
<p>This makes me wonder: where do purposes go? Why do they slip from our collective fingers so easily?</p>
<figure><img src="https://aethermug.com/assets/posts/blimp-blues/the_lost_balloon_beard.webp" alt="Painting of a landscape with a hot-air balloon far away, carried away by strong winds."/><figcaption> The Lost Balloon, William Holbrook Beard</figcaption></figure>
<p>In some cases, changing who is in charge doesn&#x27;t kill the original purpose. A newly-appointed CEO is still determined to maximize shareholder profits like their predecessor, and each relay runner still wants to deliver the baton to the finish line faster than the opponents.</p>
<p>In other cases the changing circumstances seem to warrant at least a partial transformation of the goals. For instance, the founders of many democracies originally set certain objectives of equality and justice, but they also pursued other ideals, like slavery and male-only voting, that we have long since disowned as discriminatory and bigoted. Similarly, if you set out to buy groceries but an earthquake hits you along the way, it&#x27;s natural for the new goal of urgent self-preservation to supplant the original one of carrot-purchasing.</p>
<p>But I think it is possible for some goals to simply go missing in action, without people wanting or even noticing it. The spark of an idea, an inner promise to oneself, disappears without being inherited or understood or heard. Who knows how many times it has happened before? How many projects both ambitious and feasible—if not then, now—have been cut short by aleatory forces or hijacked by entirely different goals?</p>
<p>Do we have a duty towards the intentions of the people who began something we benefited from? Probably not, but I find it sad when we let them fall into oblivion. I believe that a purpose, a passionate enterprise, is one of the most <em>alive</em> things we can produce. Some of those are wrong or silly or impossible, but what about those that aren&#x27;t?</p>
<p>Maybe there is still some hope for the Icarus-types like me, after all. During my idle internet searches, I eventually did find a few people trying to bring blimps back in fashion. <a href="https://oceanskycruises.com" rel="nofollow noopener noreferrer" target="_blank">This Swedish company</a> is selling leisure airship cruises to the Arctic. I won&#x27;t have my overview moment any time soon, though: the estimated date of first flight is &quot;unknown&quot;, and the ticket price is &quot;from 2 million SEK (approx. $200,000 USD)&quot;. At least that purpose isn&#x27;t entirely dead yet. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/blimp-blues/dedale-et-icare-daedalus-and-icarus.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[A Black-Box View of Life]]></title>
            <link>https://aethermug.com/posts/a-black-box-view-of-life?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-black-box-view-of-life</guid>
            <pubDate>Thu, 15 Aug 2024 08:41:54 GMT</pubDate>
            <description><![CDATA[We are meant to tinker]]></description>
            <content:encoded><![CDATA[<p>A couple of years ago I decided to get into badminton. I wanted a teams sport involving exertion of my brain as much as my muscles, because why not use both? For some reason badminton gave me the right kind of vibe. I found a friendly club nearby and started training with them every week in gruesome 4-hour sessions. The sport is very fun, and I&#x27;m now able to complete doubles matches with only moderate levels of embarrassment. It&#x27;s great in and of itself, and I recommend to anyone living near a court to give it a try.</p>
<p>But I enjoyed this period of learning even more because I could observe something I hadn&#x27;t observed in a long time: myself learning a new physical skill from scratch.</p>
<p>When I started, I thought I could get ahead by grokking the needed skills with raw brainpower. Since then, I&#x27;ve learned to play much better than before... and I still have no idea how any of it works. My body just does the right thing a little more often than before.</p>
<p>The thing that most helped correct my silly expectations for the learning process was seeing little kids (even seven or eight years old) who are enormously better than me at the game. They just do all the movements right, and it looks effortless. Even assuming that they&#x27;ve been at it longer than me, they don&#x27;t have the physics training, the celestial mechanics research background, or the (flimsy) &quot;adult street smarts&quot; that I have. They presumably got there without a trace of intellectual labor.</p>
<p>It looks like the learning process, at its core, is something more basic and fundamental—although not necessarily easier—than rational understanding. Of course it involves repetition, but only the special kind of repetition we call <em>trial and error</em>.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>As mundane as that sounds, it&#x27;s quite magical if you think about it. A sport like badminton is squarely in the realm of physics and spatial relationships, yet you don&#x27;t need to know Newton&#x27;s laws, nor the Navier-Stokes equations, nor even trigonometry in order to master it. At the most fundamental level, we achieve this by treating things like black boxes—mystery machines—and tinkering with them to see how they respond.</p>
<blockquote>
<p>The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box.</p>
<p>— W. Ross Ashby, Introduction to Cybernetics</p>
</blockquote>
<p>But what about things where all the pieces are in full view, like badminton or bicycles? Although they may not be literally boxes, the principle is the same.</p>
<blockquote>
<p>At first we are apt to think that a bicycle is not a Black Box, for we can see every connecting link. We delude ourselves, however. The ultimate links between pedal and wheel are those interatomic forces that hold the particles of metal together; of these we see nothing, and the child who learns to ride can become competent merely with the knowledge that pressure on the pedals makes the wheels go round.</p>
<p>— Ashby, ibid.</p>
</blockquote>
<p>Treating something as a black box is as much about the interesting outcomes a certain kind of tinkering will produce as it is about the bits that can be safely ignored.</p>
<p>I learned my current flimsy badminton skills not through mental problem-solving but curious tinkering, in the form of swinging my racket in all possible ways near a shuttlecock until I found the ways that led to slightly less ridiculous results. Here my racket, my body, and the shuttle are the mysterious boxes I&#x27;m trying to figure out.</p>
<p>Black-box tinkering is also how you learn to program, sculpt statues, grill sausages, and beat a videogame. You provide some input in the form of muscle movements, and the output is a more or less desirable outcome from the black box.</p>
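<p>Since programming made that list, here is what the trial-and-error loop might look like as a toy Python sketch. The <code>mystery_box</code> function and its hidden &quot;sweet spot&quot; are pure inventions for illustration; the point is that the loop only ever looks at inputs and outputs, never inside.</p>

```python
import random

def mystery_box(x: float) -> float:
    """An opaque system: we only ever see input -> output.
    (Hidden mechanism: the score drops the farther x is from a sweet spot.)"""
    return -abs(x - 3.7)

def tinker(trials: int = 1000) -> float:
    """Trial and error: keep whatever input has scored best so far."""
    random.seed(42)  # deterministic, for the example
    best_x, best_score = 0.0, mystery_box(0.0)
    for _ in range(trials):
        candidate = best_x + random.uniform(-1, 1)  # swing a little differently
        score = mystery_box(candidate)
        if score > best_score:  # slightly less ridiculous result? keep it
            best_x, best_score = candidate, score
    return best_x

print(tinker())  # ends up close to the sweet spot, without ever reading the formula
```

<p>Swap a racket, a kiln, or a compiler in for <code>mystery_box</code> and the loop is the same.</p>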
<p>Black-box tinkering is how this scientist has learned to &quot;design&quot; snowflakes in this documentary segment:</p>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/ao2Jfm35XeE?start=86&amp;end=168" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<p>In this case, Dr Libbrecht was able to build a scientific understanding of how snowflakes work, but that isn&#x27;t strictly necessary to the objective of designing pretty crystals: give a group of 9-year-olds access to Libbrecht&#x27;s instruments and dials and they&#x27;ll quickly learn to farm snowflakes in the shapes of any Minecraft character you want.</p>
<p>In this sense, everything can be a black box—everything is there ready to respond in one way or another to the inputs you give it.</p>
<blockquote>
<p>What is being suggested now is not that Black Boxes behave somewhat like real objects but that the real objects are in fact all Black Boxes, and that we have in fact been operating with Black Boxes all our lives.</p>
<p>— Ashby, ibid.</p>
</blockquote>
<p>But tinkering isn&#x27;t the only thing you can do with a black box, and the same mindset is useful far beyond the topic of learning. It&#x27;s also how we <em>create anything at all</em>.</p>
<p>There is no hard evidence of how string instruments were first invented, but the leading hypothesis is that, one day, a primitive (let&#x27;s say) woman realized that plucking the string of a hunting bow makes a funny sound. I imagine she would have much enjoyed the pastime for about ten minutes, after which her band members would have threatened to exile her for the monotony. But some others would have been curious enough to try with their own bows, and soon they would have found that strings of different lengths, and of different thicknesses and materials, emit different sounds. The variety is interesting! So why not tie several different strings to the same piece of wood? And why not add an empty turtle shell, since it seems to amplify the sound so nicely?</p>
<p>Or, anyway, something like that must have happened in one or more places around the world: people without the least education in music theory and resonance physics who tinkered with <em>and modified</em> black boxes until they got the results they liked.</p>
<p>And a lot of tinkering and modifying followed...</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-black-box-view-of-life/string_instruments_1.webp" alt="These and the following images are not in chronological order, nor do they represent a linear evolution. These instruments evolved in many branches and possibly from different roots. The important thing is that they&#x27;re still black boxes designed by tinkering. Sources: Tomb of Nakht (top), British Museum (bottom two)."/><figcaption>These and the following images are not in chronological order, nor do they represent a linear evolution. These instruments evolved in many branches and possibly from different roots. The important thing is that they&#x27;re still black boxes designed by tinkering. Sources: Tomb of Nakht (top), British Museum (bottom two).</figcaption></figure>
<p>...and followed...</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-black-box-view-of-life/string_instruments_2.webp" alt="Sources: Walters Art Museum (top), British Museum (bottom)."/><figcaption>Sources: Walters Art Museum (top), British Museum (bottom).</figcaption></figure>
<p>...and followed.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-black-box-view-of-life/string_instruments_3.webp" alt="Sources: Jongleur100 (top), Carlos Delgado; CC-BY-SA (bottom)."/><figcaption>Sources: Jongleur100 (top), Carlos Delgado; CC-BY-SA (bottom).</figcaption></figure>
<p>A similar story has unfolded for most tools and technologies ever invented by humanity. We seem to be extremely interested, as a species, in plucking and kicking things and cobbling them together to assess the sounds (literal or metaphorical) they will make.</p>
<p>Maybe black-box thinking goes even further.</p>
<blockquote>
<p>Text functions as a &quot;whole&quot;, an aggregate, in other words a black box. The reader has the right to take that, process it, and chew it as they prefer. If the author were to process and chew it before it reaches the reader, the meaning of the text would be greatly damaged. ... There is nothing more inconvenient, for a novelist, than beginning to analyze oneself.</p>
<p>— Haruki Murakami, Novelist as a Vocation (translation mine)</p>
</blockquote>
<p>Perhaps works of art are <em>especially</em> meaningful as black boxes—resonance devices tuned by one person for a specific &quot;timbre&quot; or quality, but capable of responding differently to the inputs of each observer.</p>
<p>That is not to say that we should never &quot;open&quot; the black boxes and try to understand what&#x27;s going on inside. If you can learn how the smaller black boxes inside work and interact, it&#x27;ll be easier to combine and re-combine them to produce new desired effects. Science is all about that, and so is the even more ubiquitous process of creating <a href="/posts/embedded-prophesy-devices">mental models</a>.</p>
<figure><img src="https://aethermug.com/assets/posts/a-black-box-view-of-life/canetoad.webp" alt="A toad looking this way."/><figcaption>Meet the destructive input. Source: C. Brück; CC BY-SA 4.0</figcaption></figure>
<p>At the same time, we need to learn the limits of those more theoretical approaches. No matter how much neuroscience and psychology you study, you can&#x27;t fully divine how a &quot;Person&quot; black box will respond to your next utterance. And, when researchers in 1935 <a href="https://en.wikipedia.org/wiki/Cane_toads_in_Australia" rel="nofollow noopener noreferrer" target="_blank">introduced</a> a poisonous toad to the &quot;Australian Ecosystem&quot; black box, hoping to eliminate a nasty beetle, they couldn&#x27;t foresee that the result would be devastating ecological effects on many local species (except the beetle) over half the continent.</p>
<p>Sometimes the inside of the box is too complex, too interconnected to model or explain with any accuracy. Sometimes it&#x27;s not even clear what to consider as part of the black box or not—how to <a href="/posts/a-framing-is-a-choice-of-boundaries">draw a boundary</a> around it.</p>
<p>Often it&#x27;s a combination of both complexity and naive boundaries—the Australian researchers thought they were inputting the toads into a small black box containing only their sugarcane crops and the nasty beetles, and (apparently) they believed that the two species would neatly eliminate each other and disappear like +1 and -1. Sometimes tinkering based on oversimplified mental models can be very costly.</p>

<p>My process of learning badminton seems easy in comparison, but the same principles apply. I&#x27;ve redrawn the boundary in my head to include more than just racket and shuttlecock: my body is part of the black box, too, and also all the other players on court, and environmental factors like air temperature and currents. The complexity of finely coordinating movements for a stroke, which involves calibrating and timing hundreds of muscles, means that my analysis will never be enough. Not to mention the recursive psychological interactions with my companions and opponents. Studying the theory might help a little bit. Just playing the game and seeing what works sounds more promising.</p>
<p>This is the reality we face every day, and we&#x27;re pretty accustomed to its more mundane manifestations. But working with complex phenomena, from ecosystems to markets to other brains, doesn&#x27;t come naturally to us. The black box lens helps you debug these processes. It makes you ask sharp questions like <em>should I tinker or should I open?</em> and <em>are you talking about the same black box I&#x27;m talking about?</em> ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-black-box-view-of-life/robbie-down-aq9ouSXaVng-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[In Japanese You Need a Dictionary to Count Things]]></title>
            <link>https://aethermug.com/posts/in-japanese-you-need-a-dictionary-to-count-things?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/in-japanese-you-need-a-dictionary-to-count-things</guid>
            <pubDate>Thu, 08 Aug 2024 05:26:03 GMT</pubDate>
            <description><![CDATA[On Josuushi and questionable language approaches]]></description>
            <content:encoded><![CDATA[<p>This isn&#x27;t a blog about Japanese, but there are some aspects of that language that are just too interesting keep to myself. Here I want to share another one of those, one that might lead you to question the sanity of those who willfully speak it. Insane we are not, though, as I hope I&#x27;ll convince you by the end.</p>
<h3>A Wake-Up Call</h3>
<p>One of the first things you learn in a new language is counting to ten. When you&#x27;re getting started, Japanese counting seems to be as easy as in any other language: one is <em>ichi</em>, two is <em>ni</em>, three is <em>san</em>, and so on. These words come from ancient Chinese. (If you remember only one of these, remember 2=<em>ni</em>, because I&#x27;ll use it in the examples that follow).</p>
<p>The numbers from eleven up are extremely regular combinations of the first ten. For example, thirteen is just <em>juu-san</em>, literally &quot;ten-three&quot;. &quot;No big deal!&quot; you think as an endearing beginner, &quot;I&#x27;ve got this!&quot;.</p>
<p>Then you learn a bit more. And more. And more.</p>
<p>Let&#x27;s put aside the pronunciation quirks, which aren&#x27;t all that special compared to any other language. In what follows I&#x27;ll focus on the numbers and the grammar.</p>
<p>First of all, there is a second, entirely different way to count to ten. Let&#x27;s call it the &quot;traditional&quot; way, because it is based not on imported sounds but on the local words that predated Chinese influence.</p>
<table><thead><tr><th>Digit (Kanji)</th><th>Japanese (modern)</th><th>Japanese (trad.)</th></tr></thead><tbody><tr><td>1 (一)</td><td>ichi</td><td><strong>hi</strong></td></tr><tr><td>2 (二)</td><td>ni</td><td><strong>fu</strong></td></tr><tr><td>3 (三)</td><td>san</td><td><strong>mi</strong></td></tr><tr><td>4 (四)</td><td>shi</td><td><strong>yo</strong></td></tr><tr><td>5 (五)</td><td>go</td><td><strong>i</strong></td></tr><tr><td>6 (六)</td><td>roku</td><td><strong>mu</strong></td></tr><tr><td>7 (七)</td><td>shichi</td><td><strong>nana</strong></td></tr><tr><td>8 (八)</td><td>hachi</td><td><strong>ya</strong></td></tr><tr><td>9 (九)</td><td>kyuu/ku</td><td><strong>ko</strong></td></tr><tr><td>10 (十)</td><td>juu</td><td><strong>to</strong></td></tr></tbody></table>
<p>This still sounds very manageable: you&#x27;ll just learn twenty words instead of ten, and when to use which.</p>
<p>But if, at this stage, you get the bold idea of applying your new knowledge to a basic sentence like &quot;I bought two books&quot;, someone will politely tell you that you&#x27;re doing it wrong (or, more likely, they&#x27;ll think so without telling you, out of extra politeness).</p>
<p>You can&#x27;t just put the number next to the noun it&#x27;s meant to count, as you do in English. You need to <em>qualify</em> the number based on the kind of thing being counted!</p>
<h3>Some Complications</h3>
<p>It turns out that, on their own, those beginner-friendly words you learned above are only good to communicate abstract numbers, like &quot;the number 2&quot; or a rocket launch countdown. When you want to refer to numbers <em>of things</em>—arguably the most common use case—you need to attach something called <em>josuushi</em>, or &quot;counter word&quot;, after the number.</p>
<p>For example, to count books, you have to add the <em>kanji</em> 冊 <em>satsu</em> after the number, so that the &quot;two&quot; part of &quot;two books&quot; becomes &quot;<em>ni-satsu</em>&quot; instead of just &quot;<em>ni</em>&quot;. <em>Satsu</em> is a word specialized for counting books, and nothing other than books. To say &quot;two magazines&quot; the number part will become &quot;<em>ni-bu</em>&quot;, and for &quot;two carrots&quot;, &quot;<em>ni-hon</em>&quot;.</p>
<p>In total, there are around 500 different <em>josuushi</em> in the Japanese language. Other common <em>josuushi</em> are 人 <em>nin</em> for people, 匹 <em>hiki</em> for animals roughly smaller than people, 頭 <em>tou</em> for animals roughly larger than people, 羽 <em>wa</em> for birds, 本 <em>hon</em> for long objects, 枚 <em>mai</em> for flat and thin objects, and 個 <em>ko</em> for smallish objects that aren&#x27;t too long or too flat.</p>
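<p>For the program-minded, the core of the system is a lookup keyed on the category of the thing being counted. Here is a toy Python sketch built only from the examples above; the category labels are my own invention, and it ignores the real sound changes (for instance 一匹 is pronounced <em>ippiki</em>, not <em>ichi-hiki</em>):</p>

```python
# Toy model of josuushi (counter words), using only the examples in this post.
# Category labels are invented for the sketch; real Japanese also applies
# sound changes this table ignores (e.g. ichi + hiki -> ippiki).
NUMBERS = {1: "ichi", 2: "ni", 3: "san", 4: "yon", 5: "go"}  # "yon", not "shi", before counters

COUNTERS = {
    "book": "satsu",         # 冊 bound volumes
    "magazine": "bu",        # 部 printed copies
    "long_object": "hon",    # 本 long things (carrots, pencils...)
    "person": "nin",         # 人 people
    "small_animal": "hiki",  # 匹 animals roughly smaller than people
    "large_animal": "tou",   # 頭 animals roughly larger than people
    "bird": "wa",            # 羽 birds (and, famously, rabbits)
    "flat_object": "mai",    # 枚 flat, thin things
}

def count_phrase(n: int, category: str) -> str:
    """Return the number-plus-counter chunk of a counting phrase."""
    return f"{NUMBERS[n]}-{COUNTERS[category]}"

print(count_phrase(2, "book"))  # -> ni-satsu ("two books")
```

<p>The irregulars below (butterflies as big animals, rabbits as birds) are exactly the cases where a neat table like this breaks down.</p>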
<p>As you might have already surmised, it&#x27;s not always crystal clear which counter word you&#x27;re supposed to use. There are detailed rules for the right counters in the right situations, sometimes dictating which <em>kanji</em> to use in writing even when the spoken pronunciation would be the same.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/in-japanese-you-need-a-dictionary-to-count-things/counters_rules.webp" alt=""/></figure>
<p>There are also many irregular cases, of course. For whatever reason, you&#x27;re supposed to count butterflies with 頭 <em>tou</em>, the counter usually reserved for big animals like cows and elephants; sometimes 貫 <em>kan</em> is used to mean a <em>pair</em> of sushi, instead of one; the same 帖 <em>jou</em> counter represents groups of 10 when counting paper-like <em>nori</em> seaweed, 100 when counting tissue paper, 20 when counting small-sized <em>washi</em> rice-paper sheets, and 48 for <em>Mino washi</em>, a kind of paper traditional of Gifu prefecture.</p>
<p>My favorite bonkers counter might be that for rabbits. Being small animals, you&#x27;d expect to count them with 匹 <em>hiki</em>, like cats, dogs, and raccoons. Instead, rabbits get 羽 <em>wa</em>, the bird counter (the <em>kanji</em> 羽 literally means &quot;feather&quot;). There are many <a href="https://japanese.stackexchange.com/questions/1316/why-is-the-correct-counter-for-rabbits-%E7%BE%BD-%E3%82%8F" rel="nofollow noopener noreferrer" target="_blank">wild theories</a> for why bunnies ended up in the same bucket as pigeons. I&#x27;ll let you form your own.</p>
<h3>Admittedly a Mess</h3>
<p>If all that sounds like a mess to you, most Japanese speakers would agree. As Fujisawa Kazuhito (or Kazunari, <em>kanji</em> are <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">ambiguous</a>), a researcher of the Japanese language, put it:</p>
<blockquote>
<p>I think that almost no one can correctly use <em>josuushi</em>. I doubt that anyone except a fish nerd would know to count fish as <em>ichi-bi</em>, <em>ni-bi</em>, etc.</p>
<p>— <em>Nihongo no Chikara</em>, <a href="https://kf-planning.blogspot.com/2015/12/blog-post_7.html" rel="nofollow noopener noreferrer" target="_blank">Fujisawa&#x27;s blog</a>, translation mine</p>
</blockquote>
<p>People have a few ways to simplify things when necessary. For objects and abstract concepts, you can often fall back to a generic counter つ <em>tsu</em>. This is very convenient, but it only works up to 10 things, and can&#x27;t be used for animals or people (that would be rude!). The counter for small round things, 個 <em>ko</em>, can also serve as a generic/abstractish replacement for lots of inanimate things, and has no upper limit.</p>
<figure><img src="https://aethermug.com/assets/posts/in-japanese-you-need-a-dictionary-to-count-things/qna.webp" alt="Screenshot of a Japanese website."/><figcaption>Question on a popular Q&amp;A website: &quot;How do I count bicycles?&quot; Top answer: &quot;I count them as ichi-dai, ni-dai, san-dai... Sorry if its wrong.&quot;</figcaption></figure>
<p>So, if you don&#x27;t remember that sushi is counted with 貫 <em>kan</em>, you can still ask your sushi chef to squeeze you two more using つ <em>tsu</em> or 個 <em>ko</em>. They&#x27;ll get it, no disconcerted looks. But using these shortcuts makes an already vague language even vaguer, and advertises to everyone around that you&#x27;ve given up, that you don&#x27;t know how to properly count this one. That&#x27;s why the internet is full of &quot;<a href="https://www.sanabo.com/kazoekata/" rel="nofollow noopener noreferrer" target="_blank">counting dictionaries</a>&quot; and SEO-boosted blog posts explaining at length how to count <a href="https://net-business888.com/tree-kazoekata/" rel="nofollow noopener noreferrer" target="_blank">trees</a>, <a href="https://www.zatsugaku-jiten.net/blog-entry-1168.html" rel="nofollow noopener noreferrer" target="_blank">streets</a>, and <a href="https://d-suga.com/530" rel="nofollow noopener noreferrer" target="_blank">pokemon</a>.</p>
<h3>Doubts of Insanity</h3>
<p>For someone not familiar with Japanese, Chinese, or other languages that depend on vast amounts of counter words, all this might sound like a terribly backwards way of counting.</p>
<p>(My gut response to that is that the vowel pronunciation rules in English are hardly more sensible, but I won&#x27;t rub it in.)</p>
<p>Most languages, including English, do have the same kind of counter as Japanese, only for certain words: you don&#x27;t say &quot;three water&quot;, but &quot;three drops of water&quot;, &quot;three milliliters of water&quot;, or &quot;three glasses of water&quot;.</p>
<p>In these languages the counters are needed only for a category of nouns called &quot;mass nouns&quot;, indicating things that cannot be directly counted. They don&#x27;t function <em>exactly</em> the same, because <em>josuushi</em> are grammatical particles rather than nouns, but they are the closest equivalent. The main difference is that Japanese and similar languages use them for <em>everything</em>.</p>
<p>Japanese grammar has no distinction for singular and plural. The sentence 葉っぱが落ちた <em>happa ga ochita</em>, by itself, could mean &quot;the leaf fell down&quot; or &quot;the leaves fell down&quot;. Neither the noun nor the verb carries any information about the number of things being talked about.</p>
<p>In other words, all Japanese nouns are mass nouns. There is no built-in counting.</p>
<p>(Nerd detour: This isn&#x27;t entirely true. <a href="https://www.glossa-journal.org/article/id/4960/" rel="nofollow noopener noreferrer" target="_blank">There are</a> certain cases in which a Japanese sentence can make the singular/plural difference evident without the use of counters or grammatical elements. The secret is in the word order. Compare the familiar Japanese ambiguity of</p>
<blockquote>
<p>りんごの一部が腐っている <em>ringo no ichibu ga kusatteiru</em></p>
</blockquote>
<p>which could mean either &quot;some of the apples have gone bad&quot; or &quot;part of the apple has gone bad&quot;, with</p>
<blockquote>
<p>一部のりんごが腐っている <em>ichibu no ringo ga kusatteiru</em></p>
</blockquote>
<p>which can only be plural, i.e. &quot;some of the apples have gone bad&quot;. This is a subtle and very limited effect and I had never noticed the distinction consciously before researching this blog post.)</p>
<p>In Japanese, sometimes the singular/plural distinction isn&#x27;t important, and sometimes it is clear enough from the context. But often you&#x27;ll want to make the plurality of a noun evident in the same sentence, in which case you&#x27;re usually forced to add the number explicitly. And if you really need to have the number, you might as well enrich it with information about the type of thing being counted.</p>
<p>In summary, Chinese, Japanese, and company have &quot;evolved&quot; to solve the same problem as the other languages—making it clear how many things we&#x27;re talking about—with very different grammatical devices. This is not a standalone feature: it emerged in symbiosis with other characteristics, such as the lack of grammatical inflection and the high contextuality. And it does the job, considering that over a billion people use this approach every day.</p>
<p>Still, you might ask why these languages evolved into a practice that sounds more inconvenient. Wouldn&#x27;t it be more efficient to embed the plurality in the nouns themselves?</p>
<h3>Language, Evolution, and Games</h3>
<p>Again, I could turn that question around. A German speaker might ask why English didn&#x27;t evolve to distinguish female, male, and neuter grammatical gender—a practice that would surely enrich the language. And a Portuguese or Italian speaker might want to know why English and French don&#x27;t allow you to omit the subject from a sentence, even when the same information is conveyed anyway by the verb inflection.</p>
<p>But a more honest answer is that I don&#x27;t know and, as far as I can tell from my research, no one else knows with confidence.</p>
<p>The most reasonable explanation for these obtuse-sounding language quirks—from Japanese <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">written/spoken dissociation</a> and counter words to English irregular-is-the-new-regular pronunciations—may be the same that explains zebra stripes and <a href="https://www.amnh.org/explore/news-blogs/news-posts/to-hunt-the-platypus-uses-its-electric-sixth-sense" rel="nofollow noopener noreferrer" target="_blank">platypus electrolocation</a>: happenstance, drift, and <a href="https://www.polyglossic.com/exaptation-extreme-language-change/" rel="nofollow noopener noreferrer" target="_blank">exaptations</a>. It&#x27;s not a logical process, but it is a pragmatic one.</p>
<p>Once an approach to a linguistic function has taken hold, it might be very difficult to change it—even when it is objectively less efficient than alternative approaches.</p>
<p>For example, assuming for a minute that the Japanese wanted to stop using their countless <em>kanji</em> for writing, that would still be a very difficult task. Without <em>kanji</em>, the written language would be very hard to read because of the large number of same-sounding words, so they would probably need to create a huge amount of new words that are pronounced differently to remove the ambiguity. You can&#x27;t change one part of the language without also changing most of the rest. (Of course, no one wants the <em>kanji</em> to go away, least of all me.)</p>
<figure><img src="https://aethermug.com/assets/posts/in-japanese-you-need-a-dictionary-to-count-things/thom-reijnders-3sOi2pwoslI-unsplash.webp" alt="Four meerkats standing upright on a rock, each staring in a different direction."/><figcaption>Meerkats in the middle of a &quot;who warns the others first&quot; competition. Photo by Thom Reijnders, Unsplash.</figcaption></figure>
<p>Another thing to consider is the role of game-theoretical dynamics. Here I&#x27;m in the realm of speculation, because the <a href="https://www.sciencedirect.com/science/article/pii/S0303264717302885#bib0265" rel="nofollow noopener noreferrer" target="_blank">evolutionary dynamics of language change</a> are a new area of research. Still, it looks like a promising approach. Evolutionary game theory considers the interactions between groups of individuals. One of its main results is that the &quot;ideal&quot; adaptation or solution to a problem depends not only on the individual and its environment, but also—and sometimes especially—on <em>what everyone else is doing at the moment</em>.</p>
<p>For example, many social animals like meerkats and bonobos display altruistic behavior like giving warning calls to their peers when they spot danger and sharing the food they find. For any one of them, being selfish might be the more clever option in the short term: they could hide sooner, expose themselves less, and eat all the food they find. But any individual animal who went rogue like that would lose reputation within the group, and might stop receiving the same benefits from the others. They would be disadvantaged compared to the mutually-altruistic members. Being selfish might be the best strategy only if all the members of the group became selfish at the same time.</p>
<p>Altruism here is what is called an <a href="https://www.nature.com/scitable/knowledge/library/game-theory-evolutionary-stable-strategies-and-the-25953132/" rel="nofollow noopener noreferrer" target="_blank">evolutionarily stable strategy</a> (ESS): not necessarily always better in absolute terms, but good enough that it&#x27;s very difficult for any alternative strategy to replace it.</p>
<p>Something similar might be preventing languages from changing in certain ways. In the case of Japanese <em>josuushi</em>, if everyone somehow stopped using them at once in favor of another grammatical method of achieving the same function, it might work just fine. But if a small minority of rebels decided to stop using <em>josuushi</em>, they would have a very hard time spreading that practice to the whole population. The <em>josuushi</em>-deniers&#x27; lives would actually get harder: they&#x27;d be left behind in certain conversations where the counter words are the only thing that disambiguates between objects, and the others would have trouble understanding what the rebels were saying.</p>
<p>Of course, neologisms make their way into dictionaries and populations all the time, but new grammatical changes seem harder to scale. They would have to be <em>much</em> more useful than the status quo in order to overcome all the friction.</p>
<figure><img src="https://aethermug.com/assets/posts/in-japanese-you-need-a-dictionary-to-count-things/socks.webp" alt="Screenshot of an e-commerce website in Japanese, with the counter for &#x27;pakku&#x27; highlighted."/><figcaption>The usage of newly-minted counter &quot;pakku&quot; out in the wild.</figcaption></figure>
<p>But enough with the conjectures. Speaking of neologisms, Japan crucially has them for counter words, too. For instance, several foreign words have been subsumed as counters in the past few decades: to count workout sets, the counter is セット <em>setto</em> (set), while for packaged goods you use パック <em>pakku</em> (pack). Examples of indigenous neo-counters are 車線 <em>shasen</em>, which literally means &quot;car lane&quot; and is used to count car lanes, and 面 <em>men</em> to count sports courts.</p>
<p>If <em>josuushi</em> were really more burdensome than they are useful, people would just use the generic fallback counters like <em>tsu</em> and <em>ko</em> for all new noun categories. But they don&#x27;t. This, I think, is evidence enough that <em>josuushi</em> are actually fine, and that people love them and need them. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/in-japanese-you-need-a-dictionary-to-count-things/usagi.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[New Aphantasia Article on Nautilus]]></title>
            <link>https://aethermug.com/posts/new-aphantasia-article-on-nautilus?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/new-aphantasia-article-on-nautilus</guid>
            <pubDate>Thu, 01 Aug 2024 06:51:52 GMT</pubDate>
            <description><![CDATA[Rotating things without seeing the things]]></description>
            <content:encoded><![CDATA[<p>Today Nautilus published a new <a href="https://nautil.us/when-logic-beats-imagination-746995/" rel="nofollow noopener noreferrer" target="_blank">article</a> I wrote about aphantasia, following my <a href="https://nautil.us/my-brain-doesnt-picture-things-407444/" rel="nofollow noopener noreferrer" target="_blank">previous feature piece</a> for them on the same subject (note: they are both paywalled). This time the focus is not on my own experience as an aphantasic, but on a recently published <a href="https://www.sciencedirect.com/science/article/pii/S1053810024000618" rel="nofollow noopener noreferrer" target="_blank">paper</a> where aphantasia helped shed light on some puzzling mechanisms of the brain—and cast some new shadows in the process.</p>
<p>I think this result is a great example of the new territories that aphantasia is opening up for neuroscientists. I won&#x27;t go into the details here, but the paper highlights a striking finding: people with aphantasia are able to do three-dimensional manipulations in their heads <em>better</em> than people who can form mental images.</p>
<p>Considering that until now researchers believed mental imagery to be indispensable for this kind of spatial reasoning, I think that the paper will spark some interesting new research.</p>
<p>The paper, like my own article, is geared toward the general population, because its implications are universal. But, needless to say, it has a special significance for anyone with aphantasia. Some people in online communities tend to treat their aphantasia as a handicap, a guarantee that they&#x27;re shut out of many essential experiences. I think this paper deals a hard blow to that pessimistic worldview. Not only does it demonstrate that aphantasics can actually be better at certain visual tasks than visualizers—it suggests a clear, convincing reason why that&#x27;s the case: many problems, including but not limited to those of spatial manipulation, can be tackled with different strategies, and not all of those strategies rely on the ability to picture something.</p>
<p>The experiments were conducted by Lachlan Kay, Rebecca Keogh, and Joel Pearson of the University of New South Wales, Australia (the first two of whom I had the pleasure of interviewing). Pearson is leading what is, in my opinion, some of the most important and striking research in mental imagery. For instance, in 2021 the same three authors (plus Thomas Andrillon) were the <a href="https://www.biorxiv.org/content/10.1101/2021.09.02.457617v1" rel="nofollow noopener noreferrer" target="_blank">first</a> to demonstrate objective, almost-impossible-to-fake ways to detect aphantasia, destroying the arguments of certain individuals who refused to believe subjective reports. I&#x27;m always waiting with anticipation for new papers by the UNSW group.</p>
<figure><img src="https://aethermug.com/assets/posts/new-aphantasia-article-on-nautilus/strategies.webp" alt="Bar charts, showing that a relatively smaller, but non-negligible fraction of those with aphantasia reported rotating the objects in their heads."/><figcaption>Chart from Kay et al. (2024) showing what strategies the participants reported using when solving the mental rotation task.</figcaption></figure>
<p>Unfortunately, due to space constraints in my Nautilus piece, I couldn&#x27;t expand on another interesting result of the same paper. Very much in passing, Kay et al. mention that &quot;those with aphantasia significantly favoured using analytic strategies&quot;, where by &quot;analytic strategy&quot; they mean ways to answer the rotation tasks using logic instead of actual mental rotations. Note how they didn&#x27;t find that <em>all</em> aphantasics used the analytic strategy: only a significant majority did.</p>
<p>The obvious question is, <em>what were the aphantasics who <strong>didn&#x27;t</strong> use the analytic strategy doing?</em></p>
<p>Apparently, some 30 or 40% of the aphantasics said that their strategy was to actually rotate things around mentally—while having no mental images to rotate! What&#x27;s going on here?</p>
<p>The authors told me that they didn&#x27;t have enough data on this side of the experiment to make bold statements about it—more research will be needed to figure this one out.</p>
<p>I was lucky enough to catch <a href="https://scholar.google.com/citations?hl=en&amp;user=yHK_TKEAAAAJ" rel="nofollow noopener noreferrer" target="_blank">Alfredo Spagna</a> of Columbia University while he was visiting Japan for a conference. He is well known in the field of attention and mental imagery, and was the perfect person to bring some clarity in my confused mind. (I was unable to incorporate his comments directly in my article this time, but the insights he gave me were very useful to orient myself in this vast topic, and his explanations will definitely make it into my future work.)</p>
<p>I met with Alfredo in a beautiful cafe in Ueno Park, and I asked him about these and other topics. He kindly and clearly explained to me a great deal of what we know about the interplay between these various functions of the brain, and just how much we still don&#x27;t understand. The mechanisms for mental rotation still largely fall in the latter category. There are theories and tantalizing hints, but no clear picture has emerged yet. Perhaps paradoxically, aphantasia might be the solution to that. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/new-aphantasia-article-on-nautilus/shepartmetzler.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Seamanship of Life]]></title>
            <link>https://aethermug.com/posts/seamanship-of-life?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/seamanship-of-life</guid>
            <pubDate>Wed, 24 Jul 2024 02:47:38 GMT</pubDate>
            <description><![CDATA[Aye, pass that razor hither, me hearty!]]></description>
            <content:encoded><![CDATA[<p><em>I originally published this on February 17, 2023 on <a href="https://planktonvalhalla.com/" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a>. I&#x27;m moving it here because it fits in with Aether Mug&#x27;s happy chaos of ideas better. Back then the number of readers was a small fraction of today&#x27;s, so hopefully it&#x27;s not a repetition for most.</em></p>

<p>There is a sense, in today&#x27;s society, that things are getting out of control. The world is becoming too much for us to handle. Too complex, too fast, too fake, too inhuman. Leaders and academics tell us that we are in a VUCA (Volatile, Uncertain, Complex, Ambiguous) world and sprinkle their speeches with the word &quot;unprecedented&quot;. We&#x27;re clueless about the future, and clueless about <em>how</em> clueless we are, they say, and might slip off the tip of the hockey stick at any moment now. Others are less pessimistic, arguing that the world is great and getting better and better. Apparently, everyone has been saying that things are getting worse since prehistory—that&#x27;s just something people do—so don&#x27;t worry about them.</p>
<p>The debate rages on and I doubt it will ever stop. Is the world really getting worse? I don&#x27;t know. Trying to settle whether we are living in heaven or hell may be a compelling pastime, but it&#x27;s not very useful. I&#x27;ll leave that kind of discourse to those who enjoy it. To me the interesting bit is the present situation, and <em>what we can do about it now</em>.</p>
<p>No doubt, we&#x27;re living in a lot of uncertainty, and the world is indeed extraordinarily complex. We <em>are</em> pretty clueless, both as a species and as individuals, and it&#x27;s us individuals who need to cope with it. When I think about how to cope, as a human, with this Great Confusion, I can&#x27;t help but think about sailing.</p>
<p>I&#x27;ve been lucky enough to be born in the middle of the Mediterranean Sea. Since my pre-school years, my father took our family on yearly weeks-long sailboat trips around that warm, gentle embrace of a sea. For weeks at a time, we lived afloat, waves rocking us more or less gently at every waking and sleeping moment. My siblings and I grew up with the intuitive sense of what it means to be—literally—in a volatile, uncertain, complex, and ambiguous environment. We were at the mercy of unpredictable weather. One step in the wrong direction and we would fall overboard, into a fathomless expanse of wet ambiguity. We knew that beneath us were natural phenomena and creatures that we would never be able to see fully, let alone understand. On board were several people, sometimes unfamiliar to each other, crammed into very tight quarters, 24/7. The resulting social dynamics were not always easy to navigate either. We lived on an artificial sliver of partial stability in a veritable sea of chaos, and there was only so much we could do to determine our immediate fates.</p>
<p>Thinking about it now, living on a sailboat is an <em>exceedingly</em> good metaphor for our experience of living in this modern world.</p>
<p>Like long-distance voyagers, we&#x27;re all cast out into a whimsical and ruthless environment with no land in sight. We regularly face the limits of our power to control our lives. Some things we just can&#x27;t predict. Some waves make us sick. Every now and then a metaphorical storm hits us, and we get soaked, and go off course, and sometimes we don&#x27;t make it, or our loved ones don&#x27;t make it. Similar to how a ship&#x27;s crew can only get somewhere if they agree to collaborate in one way or another, so do we earthlings benefit from joining our forces without conflict.</p>
<figure><img src="https://aethermug.com/assets/posts/seamanship-of-life/biesty.webp" alt="Two pages from Stephen Biesty&#x27;s Cross Sections - Man-of-War."/><figcaption>I own a copy of the &#x27;Stephen Biesty&#x27;s Cross Sections&#x27; book about a Man-of-War of Great Britain&#x27;s Royal Navy from around 1800. Yes, it&#x27;s supposedly a children&#x27;s book, and yes, you can have one too.</figcaption></figure>
<p>Let me step up the metaphor here. All things considered, even with all the uncertainty, modern leisure sailboats are very safe and relatively comfortable. But this metaphor holds just as well for more extreme instances of sea life. In what follows, I&#x27;ll use the example of the navigators of old, when ships were made of wood, mechanical engines didn&#x27;t exist, and life was way harder than now on almost all possible fronts. <em>That</em> was a crazy kind of life!</p>
<p>Now, if we&#x27;re figurative sailors in an ocean of social, economic, and environmental complexity, we&#x27;re not very good ones. Judging from the public discourse, we seem much more confused and bitter than the average sailor was at sea. We&#x27;re more hysterical about our situation than the members of, say, the British or Portuguese navies in the early 1500s, who were quite adept at their profession. Those voyagers must have been doing <em>something</em> right to fare so well in such a life-threatening world. We might as well try to figure out what it was, for our own sake.</p>

<p>Since before history began, human beings have left the green hills and solid ground of the mainland to spend their days riding the tides. They abandoned the comfort and certainties of land and surrendered themselves to the forces of nature, forever squeezed in that thin boundary layer that separates the atmosphere from the abyss. Nevertheless, they thrived. They managed to populate the remotest of islands, to introduce the goods of one continent to the people of another, to tell a thousand stories about worlds defying the imagination. They mapped the world.</p>
<p>It&#x27;s true that a lot of those seafarers were eaten by the fish. Many of them suffered greatly with their bodies and minds. We&#x27;ll never know the number of those who cursed such a life, wondering why they didn&#x27;t choose to grow pumpkins in the country instead. In the words of Ralph Waldo Emerson, &quot;the wonder is always new that any sane man can be a sailor.&quot;</p>
<p>I think that means that it takes a peculiar state of mind to live a sailor&#x27;s life:</p>
<blockquote>
<p>Lulled into such an opium-like listlessness of vacant, unconscious reverie is this absent-minded youth by the blending cadence of waves with thoughts, that at last he loses his identity; takes the mystic ocean at his feet for the visible image of that deep, blue, bottomless soul, pervading mankind and nature; and every strange, half-seen, gliding, beautiful thing that eludes him; every dimly-discovered, uprising fin of some undiscernible form, seems to him the embodiment of those elusive thoughts that only people the soul by continually flitting through it. In this enchanted mood, thy spirit ebbs away to whence it came; becomes diffused through time and space […] There is no life in thee, now, except that rocking life imparted by a gently rolling ship; by her, borrowed from the sea; by the sea, from the inscrutable tides of God.</p>
<p>— Herman Melville, Moby Dick (Ch. 35)</p>
</blockquote>
<p>The majority of sailors chose that hard life consciously and repeatedly. Perhaps it was because they had no other option, or because they were addicted to that state of opium-like listlessness. Hard to tell. But it&#x27;s not so unlike our own modern predicament, where we seek to live our lives as fully as we can, in spite of all the hardships that entails.</p>
<p>If the people living in one of the most extreme VUCA environments on Earth can become attuned to it, it stands to reason that we should be able to do that with our lives too. The key difference is that while the challenges of the sea are usually visible and even tangible, our modern city life throws at us a more abstract kind of waves.</p>
<p>What can we, caught in the storms of our metaphorical life/sea, learn from the nautical art of living a literal sea-life?</p>
<p>I&#x27;ll try to answer that with a simple observation. As you might expect, the mariners of old needed both <em>know-how</em> and <em>determination</em>. There was no secret ingredient, no amazing ritual or formula that a sailor could use to overcome their crushing cluelessness at sea. And that&#x27;s the point: no tricks, just raw, patient absorption of what the environment had to teach them.</p>
<p>It goes without saying that every sailor knew a lot about ships and the sea. They had to master a vast vocabulary unique to that space, and know how to handle hundreds of tools and solve thousands of problems quickly and effectively. But the know-how of a sailor was that of a manual laborer, not a knowledge worker or a theoretician. They didn&#x27;t learn their profession in classrooms or by thinking about it, but through months and years of direct, practical trial and error, an endless sequence of physical manipulations, attempts, setbacks, injuries, and firsthand observations.</p>
<p>They started learning how to tie knots the moment a rope was put in their hands. The sails and yards and riggings became extensions of their bodies, the wind became a language that spoke to them, and the sky a book they could read, more often than not. It was a know-how molded in intuition, not intellect.</p>
<p>Beyond their practical skills, seafarers had a striking sort of determination. They spent grueling months shaken by the tides, half-starving on tasteless, rotten food, assaulted by disease, crammed in shared quarters like sardines. They never knew exactly where in the world they were, not to mention when they would reach their next destination. Yet again and again, after enjoying the safety and predictability of a harbor, they would step back onto that wooden world of chaos.</p>
<p>I find the Japanese word <em>kakugo</em> (覚悟) to be more accurate than &quot;determination&quot; here. Kakugo means, quite literally, being aware of, or awakened to, the harshness of reality; knowing the worst that could happen, and choosing to take the risk anyway. Sailors had (and probably have today) kakugo to sell, because there&#x27;s no hiding from the worst of the ocean: it <em>will</em> come for you.</p>
<p>But we&#x27;re not talking about saints or even notoriously sage people. These were some of the least-educated and most alcohol-indulging groups of people of their time. Their wisdom was not of the sublime kind sought by Socrates or the Buddha. As far as I can tell, sailors were &quot;awakened&quot; to reality not by meditation or deep reflection, but by love and necessity. They saw something in that floating existence that they couldn&#x27;t resist and, in exchange for that, they were forced to keep their eyes wide open in front of its terrifying realities.</p>

<p>Fast forward to our own lives today. Our sea is made of social interactions rather than water; wars, economic ups-and-downs, and cultural clashes constitute our weather; and technology has become—like it or not—the vessel we use to stay afloat. To thrive in such a world we need something analogous to that raw, no-frills know-how and kakugo mastered by the sailors of old.</p>
<p>This is why we need science now more than ever. Science is the only area of knowledge, outside of handiwork, that is empirically created by interacting with reality. Humanity creates advanced scientific knowledge with the same trial-and-error process that a deck hand went through during their apprenticeship. Science is the seamanship of life.</p>
<p>Just as a 16th-century seafarer needed a visceral knowledge of their environment and an intimate proficiency with their tools in order to survive and prosper, so we need a scientific intuition of the world and a boxful of mental tools to handle. Just as the sailor prospered thanks to their keen awareness of the limits of their powers, so should we all be versed in the scientific expression of error, chaos, and nonlinearity.</p>
<p>Nowadays, the usual reaction to a crisis or an unexpected downturn is the formation of two groups, one claiming that we&#x27;re all going to die, the other that everything will be even better than before. That&#x27;s not what seamen did when uncertainty ran rampant: refusing to acknowledge the destructive power of a storm may doom the whole crew to death, and screaming in panic isn&#x27;t going to make things better either. There&#x27;s a—possibly apocryphal—quote by Joseph Conrad saying, &quot;any fool can carry on, but a wise man knows how to shorten sail in time.&quot; Wise sailors kept their cool, loosened the sails, and remained alert to whatever the winds were going to throw at them.</p>
<figure><img src="https://aethermug.com/assets/posts/seamanship-of-life/science_tweet.webp" alt="Tweet: Science isn&#x27;t a job, nor is it something that science-minded people do. It&#x27;s a tool, and one of the most marvelous we have ever created."/></figure>
<p>Few people today would disagree that humanity, as a whole, needs science more than ever, but what about at the level of the individual? You and me, we&#x27;re sailors too. Do we know, in our day-to-day, how to read the stars and interpret the winds? Do we know what to do when an emergency strikes? Will we be able to communicate with the sailors on distant ships and read their flag signals when they appear on the horizon? Science gives us the best tools to (try to) accomplish all those things.</p>

<p>To be clear, this isn&#x27;t about the academic knowledge of science. School teaches us to memorize scientific facts and methods, which is equivalent to showing us maps and descriptions of ship maneuvers. That approach has an important place—it&#x27;s how scientists do most of their work. But it&#x27;s not necessary for the kind of day-to-day sense-making that we&#x27;re in dire need of. Here I&#x27;m talking about the multi-disciplinary science that every single human being—scientist or not!—needs to stay afloat in this world. We only need a subset of science to do well in our lives, a subset that schools aren&#x27;t teaching us yet.</p>
<p>To be like inveterate mariners, we especially need a rough kind of knowledge, rooted in direct experience and working mostly on the strength of intuition. That&#x27;s a pragmatic side of science, less preoccupied with ultimate truths about the infinitely big and small, and more involved in answering questions like &quot;what the hell does that mean for me?&quot; and &quot;what should I do <em>right now</em>?&quot;. It&#x27;s science, yes, but of a kind made with the same dough as wisdom.</p>
<p>For example, the scientific concept of <a href="https://planktonvalhalla.com/20221006-the-invention-of-systems/" rel="nofollow noopener noreferrer" target="_blank">systems</a> is simple enough, but it&#x27;s a powerful thinking tool to see things in a more objective light. It helps you frame puzzling things like economies, companies, and social groups as systems with specific dynamics, tendencies, and leverage points. This lets you shift your focus away from the confusion and frustration, and towards what you can or can&#x27;t actually do to grok and affect those systems. Plus, it&#x27;s a good stepping stone to more sophisticated tools like network science, dynamical systems, and emergence.</p>
<p>Know-how, remember, is only half of what makes a good sailor. The other half is the determination, the kakugo to accept all the ugliness and the pain and strain without running or hiding. Knot-making knowledge won&#x27;t help you secure a sail if you refuse to climb the mast. Nowadays, this means being able to accept what you can&#x27;t understand without judgement, and admitting that you make many more mistakes than you realize. Stare at your mistakes, fight them with science, and make peace with them.</p>
<p>Kakugo is also a form of humility: stay down to earth. Find what&#x27;s really important, and don&#x27;t burden yourself with more. Use <a href="https://en.wikipedia.org/wiki/Occam&#x27;s_razor" rel="nofollow noopener noreferrer" target="_blank">Occam&#x27;s Razor</a>. “At sea, I learned how little a person needs, not how much.” (The guy who said that, Robin Lee Graham, set out to circumnavigate the world alone on a ship at the age of 16 and completed his trip five years later.)</p>
<p>That kind of science, and that kakugo, is all I really ever write about here. I (will) write essays about mathematics, but not for the sake of enjoying mathematics—that&#x27;s a side effect. I (will) write essays about complexity, cell biology, management, and lots of other fields, always with the goal of distilling insights for our voyage through the turbulent expanses of life. I also craft new thinking tools (or refine existing ones), ideas like <a href="https://planktonvalhalla.com/20220829-the-world-is-a-network/" rel="nofollow noopener noreferrer" target="_blank">differences propagating through the universal network</a>, <a href="https://planktonvalhalla.com/20221216-toying-with-ideas-of-glass-circuits/" rel="nofollow noopener noreferrer" target="_blank">glass circuits</a>, <a href="https://planktonvalhalla.com/20230323-the-demon-and-the-tree/" rel="nofollow noopener noreferrer" target="_blank">trees of possibilities</a>, and more. All with the intent of catalyzing that raw scientific intuition that I&#x27;m comparing to seamanship.</p>
<p>The tricky part is that this know-how can&#x27;t really be taught. A good mariner is first of all a creature of intuitions, whose knowledge is not intellectual but practical, not rationalized but absorbed, and whose thinking rides more on the affordances of language than logic. While I can&#x27;t be an instructor, I hope to be a sailmaker, or a carpenter, churning out nautical tools that may or may not turn out to be useful during navigation. I hope the reader will wield, rather than swallow, the ideas I offer, give them a spin at the next real-life opportunity to do so, and only keep them if they prove to serve any purpose. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h3>Notes</h3>
<ul>
<li>If you want to learn more about the harsh environmental, social, and mental conditions of sailors in the past, there&#x27;s plenty of books about this, beyond Biesty&#x27;s picture book I mentioned above. I especially recommend <a href="https://www.goodreads.com/book/show/174354.Over_the_Edge_of_the_World" rel="nofollow noopener noreferrer" target="_blank">Over the Edge of the World: Magellan&#x27;s Terrifying Circumnavigation of the Globe</a> and <a href="https://www.goodreads.com/book/show/335795.The_Bounty" rel="nofollow noopener noreferrer" target="_blank">The Bounty: The True Story of the Mutiny on the Bounty</a>. Oh, and Moby Dick. Don&#x27;t forget Moby Dick.</li>
<li><a href="https://www.goodreads.com/book/show/3828902-thinking-in-systems" rel="nofollow noopener noreferrer" target="_blank">Thinking in Systems: A Primer</a> is the perfect book to get familiar with the concept of systems.</li>
</ul>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/seamanship-of-life/Ivan_Aivazovsky_Ship_on_Stormy_Seas_thumb.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Blowing Against the Fog of War]]></title>
            <link>https://aethermug.com/posts/blowing-against-the-fog-of-war?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/blowing-against-the-fog-of-war</guid>
            <pubDate>Thu, 18 Jul 2024 09:57:55 GMT</pubDate>
            <description><![CDATA[A little about Plankton Valhalla and Aether Mug]]></description>
            <content:encoded><![CDATA[<p>As a kid I would sometimes play real-time strategy (RTS) video games with my friends. Each of us took on the role of an army—be it the Roman Empire at war with Carthage or an alien species trying to conquer a planet—and built bases, trained troops, and sent battalions around the territory, hunting for enemy camps. The fact that my friends were also trying to surprise-attack my own military bases made my explorations high-risk and high-stress missions.</p>
<p>I think that, for my friends, the fun was in overpowering and obliterating enemy forces. For me, though, the best part was getting to blow away the &quot;fog of war&quot; as I mapped out the land.</p>
<p>Almost every RTS and exploration game has fog of war. At the beginning of the game, you get a map of the battlefield that is entirely black, except for a little circle of land around your first troops—the only part known to you. As you send scouts and regiments around the hills and the valleys, the places they&#x27;re able to observe directly get added to the map, replacing the black background.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/blowing-against-the-fog-of-war/war2.gif" alt="Soldiers in a new base, surrounded by fog of war, in Warcraft 2."/><figcaption>Soldiers in a new base, surrounded by fog of war, in Warcraft 2.</figcaption></figure>
<p>It&#x27;s a gross simplification of a real battle situation (thinking about it now, how can the soldiers know so little about the territory? Did they reach their starting position blindfolded?), but it serves its purpose well: it throws you head-first into a sea of uncertainty, and you have to grope your way around, gather your own intelligence.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/blowing-against-the-fog-of-war/Baldurs-Gate-3-Map.webp" alt="Fog of war isn&#x27;t only for RTS games. This is a partly-explored area map in Baldur&#x27;s Gate 3, an RPG."/><figcaption>Fog of war isn&#x27;t only for RTS games. This is a partly-explored area map in Baldur&#x27;s Gate 3, an RPG.</figcaption></figure>
<p>For a long time I believed the fog of war was something exclusive to video games. It turns out that it is, first and foremost, a military term:</p>
<blockquote>
<p>War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty. A sensitive and discriminating judgment is called for; a skilled intelligence to scent out the truth.</p>
<p>— Carl von Clausewitz, On War (1832)</p>
</blockquote>
<p>There are better and worse metaphors, and there are the delicious ones. The fog of war is in the latter category for me.</p>
<p>I like it not because I have a special interest in wars or video games, but because it works just fine for more or less everything else, too. Substitute &quot;life&quot; for &quot;war&quot; in von Clausewitz&#x27;s quote above, and the remark only <em>gains</em> in depth and significance. We do need a sensitive and discriminating judgment in life, and there aren&#x27;t many more useful ways of passing our time than &quot;scenting out the truth&quot;.</p>
<p>At one level, the scientists are our heroic scouts. We task them with scouring the land, scratching away at the black parts of our map like a giant epistemic scratchcard. Gradually, they reveal the features of the knowledge landscape and expand what we can think and do as a species. They work on what we might call the &quot;Humanity Map&quot;, shared by all of us collectively.</p>
<p>At the individual level, though, our progress is much more limited and patchy. Each of us knows only a tiny fraction of the cumulative human know-how. Any person&#x27;s map is distinct and—compared to the Humanity Map—almost entirely covered with thick fog of war. A scientist&#x27;s work won&#x27;t help you unless you go and study their work in depth, copying the newly-mapped landscape to your own &quot;Me Map&quot;. And no matter how much time you spend on that, you won&#x27;t be able to do it for <em>all</em> scientists that have ever lived. Your brain is a battalion of one.</p>
<figure><img src="https://aethermug.com/assets/posts/blowing-against-the-fog-of-war/johannes-plenio-a72o8w9HC2w-unsplash.webp" alt="A foggy forest, with a few trees visible and a white wall blocking the view beyond them."/><figcaption>Johannes Plenio, Unsplash</figcaption></figure>
<p>This is something that has always bothered me. I want to see the whole map! I want all of us to see it all! That wouldn&#x27;t only be very interesting to see, but it would endow us with superpowers, because the map we&#x27;re talking about shows the workings of reality itself.</p>
<p>I thought about how to achieve that. Eliminating the &quot;fog of life&quot; entirely is not feasible for obvious reasons, but can we at least do <em>much</em> better than now?</p>
<p>The easy answer is that we can do better by improving education. Teach more stuff and more meaningful stuff to more people—have children copy more of the Humanity Map into their own Me Maps. As much as I agree with that, I&#x27;ve found that it&#x27;s roughly as easy to pull off as clearing an Earth-sized scratchcard with a toothpick. Everyone&#x27;s time and brainpower is still limited, and it&#x27;s not like no one ever tried before.</p>
<p>My modest (but still super ambitious) answer is to work on the &quot;field of view&quot; of each individual, rather than on the amount of land they explore. If you have a wider epistemic field of view, you clear up more fog of war with each step you take. In less metaphorical terms, <a href="https://planktonvalhalla.com/20230905-mass-producing-the-mistake-minimizer/" rel="nofollow noopener noreferrer" target="_blank">I believe we need to increase each human being&#x27;s power to interpret and understand the world around us</a>. This would increase our Big-Picture View of the world superlinearly (quadratically?), in the same amount of time and effort.</p>
<p>One way to do that is by devising a powerful <em>language</em> that makes it easier to talk about complex things. Its vocabulary should capture the meaningful patterns that we encounter in our lives, but it should be general enough to be applicable in all kinds of situations, for all kinds of people. Its grammar should be simple and its rules of application few, otherwise no one would bother using it. That&#x27;s all easier said than done, but I wouldn&#x27;t be attempting it if I didn&#x27;t think it was possible.</p>
<p>At the collective level, science has given us many good technical languages pertaining to the separate fields. Specialization is great for humanity as a whole, because we have lots of people specializing in different things. But if you want to fight the individual-level fog of life, specialization doesn&#x27;t help. You need some amount of scientific <a href="/posts/jack-of-all-trades-master-of-something">generalism</a>.</p>
<p>Fortunately, we&#x27;re beginning to develop strange new ways to study the world that transcend the specifics of any field of science, being applicable to just about anything.</p>
<p>Fragments of these universal insights come out of niche-sounding fields like network theory, game theory, dynamical systems theory and cybernetics, others from the more mainstream theories of evolution, behavioral psychology, and economics. You get hints from books on &quot;<a href="https://www.goodreads.com/book/show/13530973-antifragile" rel="nofollow noopener noreferrer" target="_blank">antifragility</a>&quot; and &quot;<a href="https://www.goodreads.com/book/show/45188069-ergodicity-economics" rel="nofollow noopener noreferrer" target="_blank">ergodicity</a>&quot;. Even computer science, which might sound like a very specialized field, has some powerful generalist language. Systems theory and complexity theory are two broader umbrella or &quot;connector&quot; fields for this stuff (this kind of science is inherently about making connections between disparate areas).</p>
<p>Those researchers are thinking hard about wacky-sounding stuff like molecular computation, the Darwinian evolution of ideas, the similarities between lightning and bacterial growth, and the fluid dynamics of crowds and ant colonies. Wacky they are not, though—at least not wackier than the kinds of situations we all meet every day, like chaotic households, political workplaces, group-think media, and collapsing ecosystems.</p>
<figure><img src="https://aethermug.com/assets/posts/blowing-against-the-fog-of-war/adrian-infernus-c4Q-00zQ-3o-unsplash.webp" alt="A mountain road fading into thick fog among the trees."/><figcaption>Adrian Infernus, Unsplash</figcaption></figure>
<p>Knowing the structure of an atom, the curvature of the Universe, or how to make blue LEDs is useful to humankind as a whole: it&#x27;s good new territory uncovered on the Humanity Map, and sooner or later we&#x27;ll find ways to put those discoveries to good use. But for the fog on your personal Me Map, understanding liquid crowds and adaptive ideas is more useful, because it trains you to spot connections and form better <a href="/posts/embedded-prophesy-devices/">models</a> of everything you see.</p>
<p>The only problem is that those &quot;generalist&quot; fields are still highly technical. The language used today to talk about those topics is too specialized for the average person. Applying general rules to concrete problems still requires some understanding of fundamental natural processes that can be hard to visualize. The tools we have to work on them, like the mathematics of dynamical systems and nonlinear differential equations, can be hard to swallow for the non-scientists among us. Despite all that, I&#x27;m convinced the important bits can be made easy enough for everyone.</p>
<p>That&#x27;s my project with <a href="https://planktonvalhalla.com" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a> and, to some extent, with this blog—Aether Mug. I&#x27;m trying to craft a user-friendly mental toolkit that any non-expert can learn in a breeze. This language could then be used every day, as casually as we talk about the weather, sports, and politics—and indeed used to talk about all those familiar things. I think it would have the potential to deepen the popular discourse, squash at least the most blatant mental biases, and loosen the grip that short-sighted faction ideology has on us.</p>
<p>Plankton is a toolkit to understand the physical world with the generalist-science lens. Aether contains, among other things, <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a> about framings for building that toolkit. Both are works in progress, and both are very difficult and very fun to write, requiring constant sorties into the black parts of my own Me Map. I hope they&#x27;ll help blow away the darkness for some readers, too. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/blowing-against-the-fog-of-war/noah-silliman-4KWeFO13MFM-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Jack of All Trades, Master of Something]]></title>
            <link>https://aethermug.com/posts/jack-of-all-trades-master-of-something?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/jack-of-all-trades-master-of-something</guid>
            <pubDate>Thu, 04 Jul 2024 04:49:46 GMT</pubDate>
            <description><![CDATA[Generalism is a specialization]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>... for there is an upstart Crow, beautified with our feathers, that with his Tygers hart wrapt in a Players hyde, supposes he is as well able to bombast out a blanke verse as the best of you: and being an absolute Johannes fac totum, is in his owne conceit the onely Shake-scene in a countrey.</p>
<p>— Robert Greene, established dramatist, about upstart William Shakespeare, 1592</p>
</blockquote>
<p>The common thinking about generalists (<em>Johannes factotum</em>, i.e. &quot;John do-it-all&quot; or Jack of all trades) is that they may be able to do various things, but they&#x27;re not very good at any of them. The reason for this seems self-evident: if you have time to learn the basics of many different fields, you don&#x27;t have time to learn any of their finer nuances.</p>
<p>People use the expression &quot;Jack of all trades, master of none&quot; in a dismissive way, implying that not being specialized in anything is somehow undesirable or embarrassing. Yet generalists are chosen for many important roles in society. When one talks of a &quot;family doctor&quot;, they usually mean a general practitioner with some level of understanding of most disciplines of medicine. Many CEOs, managers, and facilitators in business and tech tend to be generalists, too. Clearly, society reserves special spots for these very people it claims are really good at nothing.</p>
<p>At the evolutionary level, &quot;specialist species&quot; are the first to go extinct when the environment shifts abruptly. The &quot;generalist species&quot; are the adaptable ones, those that can pivot to different diets, different hunting and foraging strategies, and different climates when the need arises. Consider raccoons, cockroaches, and... humans. Whether we realize it or not, we owe our world-bullying success to being natural Jacks of all trades, not specialists.</p>
<p>That&#x27;s the easy case in favor of those with eggs in many baskets. One basket crashes to the ground—no problem, because you have others. But individual humans aren&#x27;t the same as species: Darwinian adaptation is a multi-generational affair based on innate traits, whereas people aren&#x27;t randomly born with an excess of disparate skills.</p>
<figure><img src="https://aethermug.com/assets/posts/jack-of-all-trades-master-of-something/910769absdl.webp" alt="Abstract design based on leaf and feather shapes, Georges Auriol"/><figcaption>Abstract design based on leaf and feather shapes, Georges Auriol</figcaption></figure>
<p>In people, being a generalist seems to be, first and foremost, about a certain attitude or process, with the variety of skills being a secondary perk that comes with time. Generalists may <em>appear</em> to know &quot;all trades&quot;, but they usually acquire that know-how just in time for it to be useful.</p>
<p>All this considered, the &quot;master of none&quot; bit sounds a little off. Something is missing. Why would society reward being mediocre at everything? Why aren&#x27;t all generalists outcasts, at least for the first several years it takes them to build up a large toolbox of skills?</p>
<p>At this point in the discussion some people end up mentioning how fast-changing today&#x27;s world is, and how flexibility and adaptability are more important than ever. Others recommend becoming &quot;T-shaped&quot; instead, i.e. generalists who have both breadth and depth by becoming experts in one field. I will discuss neither of those things, though, because I&#x27;m interested in something else.</p>
<p>I think &quot;master of none&quot; is wrong, not because it is used dismissively, but because it is false.</p>
<p>If generalists are given the key roles they do, it&#x27;s because they&#x27;re really good at some non-obvious skill that the specialists lack or avoid practicing. Generalists must be masters of <em>something</em>—something other than the typical specialization. My question is, what <em>are</em> they masters of?</p>
<h2>Quick Learning</h2>
<p>To return to my previous point, to be a generalist you must be, at the very least, a quick learner.</p>
<p>By &quot;quick learner&quot; I don&#x27;t mean someone who memorizes a lot of information fast—a skill equally valued by specialists—but someone who can grok, in a short time, things that take place in very different contexts. That&#x27;s not a complete answer yet, but a hint: generalists excel at <em>something</em> related to context-switching.</p>
<p>The first step we humans take to understand the world is to <a href="/posts/a-framing-is-a-choice-of-boundaries">frame</a> things—we mentally segment the continuous, seamless network of interactions around us into named black boxes like &quot;car&quot;, &quot;society&quot;, or &quot;Jonathan&quot;. These mental black boxes, in turn, become the building blocks for the second step, the creation of the <a href="/posts/embedded-prophesy-devices">mental models</a> we need to simulate the world and make predictions about what will happen.</p>
<p>Suppose you&#x27;re trying to learn computer programming for the first time. You begin by learning what&#x27;s what, build a new ontology. You&#x27;re told that there are things like &quot;variables&quot;, &quot;functions&quot;, &quot;expressions&quot;, &quot;loops&quot;, and &quot;branching&quot;. If you&#x27;re a beginner, you don&#x27;t need to know how each of these things works inside, the mechanisms that make them behave the way they do. They&#x27;re just black boxes to you: they have a defined behavior and rules of application, and that&#x27;s all you need to know. For example, a <em>for loop</em> is a black box concept, an object that simply repeats its contents a designated number of times.</p>
<p>This naming of the important moving parts, or ontology-building, is what I&#x27;m calling <em>framing</em> the task.</p>
<p>Then you move on to learn how those elements can work together to produce more complex results. If you take the <em>for loop</em> object and put an <em>expression</em> object inside it which adds the number 1 to the <em>variable</em> object called <code>x</code>, you&#x27;ll get an ever-increasing number: the for loop repeats this <code>x+1</code> operation over and over. This simple mechanism may not be useful on its own, but you can combine it with other mechanisms to make useful programs. You could add another expression inside the loop—say, multiply a variable <code>y</code> by itself—and stop the loop when <code>x</code> has reached the value <code>N</code>. In this way (you learn to predict) the program will compute <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msup><mi>y</mi><mi>N</mi></msup></mrow><annotation encoding="application/x-tex">y^N</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.0358em;vertical-align:-0.1944em"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em">y</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8413em"><span style="top:-3.063em;margin-right:0.05em"><span class="pstrut" style="height:2.7em"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight" style="margin-right:0.10903em">N</span></span></span></span></span></span></span></span></span></span></span>, the <code>N</code>-th power of <code>y</code>.</p>
<p>The process of learning to put those building blocks (e.g. for loops, expressions, variables) together in specific ways with outcomes you can predict (e.g. computing <code>y</code> to the <code>N</code>th) is what I&#x27;m calling <em>creating a mental model</em> of computer programming.</p>
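<p>As a concrete sketch of that mental model (in Python; the post names no particular language, and the function name <code>power</code> is my own illustration):</p>

```python
def power(y, N):
    """Repeat "multiply the result by y" until a counter reaches N."""
    result = 1
    x = 0
    while x < N:              # stop the loop when x has reached the value N
        x = x + 1             # the "x + 1" expression from the text
        result = result * y   # multiply by y once more
    return result             # y to the N-th power
```

<p>With <code>y = 2</code> and <code>N = 10</code>, the loop body runs ten times and the function returns 1024.</p>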
<p>If you&#x27;ve never done anything similar to computer programming before, this work of framing and model-building can be slow and difficult.</p>
<p>You&#x27;ll need to forget some previous beliefs you had, like the fact that a loop is something that looks circular in shape. And you&#x27;ll need to learn new ways of thinking, such as the fact that the computer will do exactly what you <em>tell</em> it to do, with no regard whatsoever for what you <em>want</em> it to do. Different contexts have different laws of interaction.</p>
<p>Here is where the generalists seem to excel. Being a &quot;quick learner&quot; must involve being nimbler at reframing and mental model-building tasks. This could be partly due to innate mental faculties, but I think it also has a lot to do with the approach and attitude towards the new areas of study.</p>
<h2>The Mastery</h2>
<p>No field of knowledge is completely unrelated to all others. There is always some overlap, either in the objects defined in two separate framings or in the ways multiple objects interact in those framings. This creates a pretty evident self-reinforcing effect wherein knowing more disparate fields makes it gradually easier to learn even more.</p>
<p>But there is more to it than simple accumulation. A good generalist is necessarily good at spotting patterns: even if a model is different—different labels, different ways to clump things together—it might be fully or partly <em>equivalent</em> to other models in different disciplines in subtle ways.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Many processes are context-independent, amenable to powerful analogies. The ability to recognize a familiar pattern in what looks, at first glance, like an alien world opens up shortcuts to understanding. And even when the pattern is similar but not identical to one previously known, spotting the bits that differ helps you cut to the chase and focus your learning.</p>
<p>Thus, if you&#x27;re very good at cooking, with its complex recipes and accurate procedures, some parts of programming might feel more natural to you; and the art of balancing the organic and the methodical that you learn in a project management job might give you insights about how to maintain a thriving garden.</p>
<p>Another skill required of a true Jack of all trades is traveling light. Things like tradition and &quot;common sense&quot; are heavy mental baggage. Carry it with you for too long, and you&#x27;ll even forget it&#x27;s there to bog you down. To quickly learn a new skill or area of expertise, one needs to drop the baggage and be ready to start from scratch. Empty the Lego bucket on the floor and start playing with a brick at a time. Some call this &quot;thinking from first principles&quot;, but it&#x27;s only part of the story.</p>
<p>In any project and group of people, there are usually many separate goals that co-exist, sometimes in harmony and sometimes in competition with each other. As far as I can tell, generalists tend to be better at navigating those trade-offs. A specialist easily falls into a kind of &quot;tunnel vision&quot;, becoming blind to all but a small set of standard goals, like corporate KPIs or completing a prescribed sequence of steps well. A generalist is never immune to the same pitfalls, but tends to be better at avoiding them. This is helped by the fact that, by definition, they come to each field with fresh eyes, but it&#x27;s still a muscle they get to exercise regularly.</p>
<p>A generalist also needs to be good at some form of systemic thinking—a kind of meta-framing—where the focus is on the structure of the problem and of the relationships at play, rather than predefined procedures and fixed categories.</p>
<p>A specialist can live within one framing their whole life: the ontology is relatively static, and the same kinds of things happen over and over. This allows them to focus on the fine-tuning of those processes, without worrying about cataclysmic changes. But a generalist needs to think about those same topics very differently. Every moving part <em>might</em> move any time, and no process is taken for granted.</p>
<p>Instead of focusing on how any given process could vary, a generalist needs to see how different processes arise from the system&#x27;s interactions. It&#x27;s a slower and coarser way to look at the same field, but it gives them a lot more agility when the basic assumptions change.</p>
<figure><img src="https://aethermug.com/assets/posts/jack-of-all-trades-master-of-something/910775absdl.webp" alt="Abstract design based on leaves, Georges Auriol"/><figcaption>Abstract design based on leaves, Georges Auriol</figcaption></figure>
<p>The last ability I notice in every generalist is that they get a kick out of reframing and remodeling reality. If you dread the sense of confusion and ineptitude that pervades the initial phases of learning, you won&#x27;t get far as a generalist.</p>
<p>The young William Shakespeare, to whom Robert Greene (presumably) refers in the opening quote, was an actor, but he threw himself into playwriting with a passion that, it seems, irritated the incumbent specialists. He then branched out to poetry, theater management, real estate, and mentoring.</p>
<p>But this isn&#x27;t just a matter of preference, a hobby or quirk that can be safely ignored by all others with different interests. Often you&#x27;re <em>forced</em> to learn new things, even if you&#x27;re a specialist. Welcoming the new, rather than resisting it, is going to make anyone&#x27;s life easier.</p>
<h2>The Something</h2>
<p>To sum up, a generalist is someone who knows how to</p>
<ul>
<li>recycle existing framings,</li>
<li>spot analogies and deviations from known patterns,</li>
<li>think from first principles,</li>
<li>apply a systemic meta-framing,</li>
<li>balance trade-offs, and</li>
<li>enjoy the learning process.</li>
</ul>
<p>I&#x27;m not sure this list is complete, and not all generalists are amazing at all these things. But all of these skills can be trained, learned, and mastered with enough time. You can&#x27;t become a generalist overnight. It&#x27;s a gradual process that takes years of practice to excel at.</p>
<p>Even veteran generalists fall into generalist-specific pitfalls sometimes: dipping one&#x27;s feet in too many fields at once, tumbling down rabbit holes, impostor syndrome, and so on. Avoiding these takes dedication, focus, and practice.</p>
<p>In other words, generalists are specialists, too. They are experts at... I&#x27;ll call it &quot;generalism&quot;.</p>
<p>The only difference generalism has from other specializations is that it isn&#x27;t a neatly-defined field with sharp boundaries. Generalism is a meta-specialization, something more universal than anything you can call a &quot;field&quot;. And for this reason, unlike most specialties, it&#x27;s an expertise that you can recommend <em>everyone</em> learns.</p>
<p>Given how fundamental it is, it&#x27;s surprising that, as far as I can tell, no school has a curriculum for anything like generalism yet. There are no generalism teachers in high school, nor a Generalism Department in university, nor job descriptions saying &quot;must have at least 3 years in generalism&quot;.</p>
<p>Of course, some of these abilities are taught in school in one form or another—today&#x27;s generalists learn them somewhere, after all. But this learning seems to happen almost by accident, a side effect of doing things like extra-curricular activities and group projects. Instead, school seems to be designed for the opposite effect: try many disconnected disciplines, then pick one and go as deep as you can in that. That&#x27;s the only way, they claim, to be &quot;job-ready&quot;.</p>
<p>The result, I think, is that those lucky enough to have a natural tendency for it, or to have the right teachers and environments, are the only ones who can hope to &quot;specialize in generalism&quot;. I believe it doesn&#x27;t have to be this way.</p>
<p>Not everyone needs to become a professional generalist, but everyone can benefit from learning a bit—or a lot—more of that skill set. For specialists in other sectors, this would allow them a flexibility and a perspective useful in tearing down those tunnel-vision walls and bridging the gaps between silos. And, very often, being &quot;good enough&quot; at a task, as opposed to being a specialist, is actually enough.</p>
<p>Unlike most other disciplines, I imagine no one would ever call the time spent being taught generalism a waste. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/jack-of-all-trades-master-of-something/910771absdl.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Aphantasia Is No Creativity-Killer]]></title>
            <link>https://aethermug.com/posts/aphantasia-is-no-creativity-killer?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/aphantasia-is-no-creativity-killer</guid>
            <pubDate>Thu, 27 Jun 2024 10:15:50 GMT</pubDate>
            <description><![CDATA[I turned my back to art, then turned again]]></description>
<content:encoded><![CDATA[<p>When I was in junior high school, every child in Italy had a &quot;school diary&quot; to keep track of homework, deadlines, and teachers&#x27; notes. Mine was filled with drawings—fantastic and physically-improbable great-swords, muscular Super Saiyans, and geometrical patterns. I spent more time drawing than most of my classmates, and I wasn&#x27;t half bad at it. I was no precocious Picasso, but my friends would sometimes ask to see my works and commission new ones.</p>
<p>I was also into writing poetry, reading books, and all things artistic: one thing I <em>didn&#x27;t</em> care for, though, was math and science.</p>
<p>As I grew up, I began to feel like I was <a href="/posts/hills-not-walls">hitting against a wall</a>. Granted, I could draw a predefined list of subjects well enough, but I couldn&#x27;t easily come up with new, original imagery.</p>
<p>I wanted to <em>surprise</em> myself with fantastic beasts and conjure characters whose stories had not been told yet—but I couldn&#x27;t. Time and again, I sat in front of a blank page waiting for some ecstatic rapture to begin moving my pen into something truly new, and it never happened. I didn&#x27;t fare much better with prose. Unless I had a clear idea or inspiration to begin with, I found myself staring at a void made of my own ineptitude.</p>
<p>My handle name as an adolescent, when I was just beginning to form an identity on the internet, was something cringey like <code>i_am_not_creative</code>. I don&#x27;t think I ever thought much about it—such unpleasant reflections were to be avoided at any cost—but somewhere deep I must have decided that I wasn&#x27;t cut out for creative work. It hurt to see myself fail consistently at something I&#x27;d believed to be my strength.</p>
<p>At the same time, I began to learn the pleasures of mathematics and physics. Their inevitability felt reassuring. There was no blank page in science, only a never-ending wellspring of hard facts and assumptions, on which I could apply reason to build new understanding.</p>
<p>By the time I turned 18, I had left behind the drawing and the poetry to become the quintessential rationally-minded type. I did well in the scientific disciplines. I chose to study physics at University, then specialized in astronomy and astrophysics, then went through a doctorate in astrodynamics. For more than a decade I lived in a world of evidence-based reason and pragmatism. During that time, my fascination for creative work never completely faded, but I only dared try my hand at fiction, on and off, to my constant dissatisfaction. It felt too hard.</p>
<figure><img src="https://aethermug.com/assets/posts/aphantasia-is-no-creativity-killer/a_flash_and_its_over.webp" alt=""/></figure>
<p>Now I know that I have this neurological condition called &quot;aphantasia&quot;—the absence of any &quot;mind&#x27;s eye&quot; (I wrote about it <a href="https://aethermug.com/posts/aphantasia" rel="nofollow noopener noreferrer" target="_blank">here</a>). Picturing nothing but a blank, featureless world when creating something from scratch, then, is not the result of panic, lack of practice, or overthinking. It&#x27;s my natural state.</p>
<p>But the most surprising thing about aphantasia is how I managed to misuse words like &quot;mental picture&quot; for over three decades.</p>
<p>When you learn the meaning of a word, most of the time you have ways to cross-check that you got it right. You might get it wrong the first time, but every time you see the word in use again you&#x27;ll be able to triangulate and home in on the &quot;real&quot; meaning (to the <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">extent</a> that a word can have a &quot;real&quot; meaning). Others may correct you, show you examples, and describe it more accurately until you get it. This gradual learning process is very hard, if not impossible, with words belonging to the realm of inner experience.</p>
<p>All my life, people around me would say things like &quot;I can picture his expression when he finds out,&quot; and I would nod in approval. When mindfulness instructors told me to close my eyes and visualize myself sitting on a peaceful beach, I just... closed my eyes and thought about being on a peaceful beach.</p>
<p>None of that sounded strange to me because, at some point in my early childhood, I had filed all expressions of &quot;mental seeing&quot; under the definition of &quot;thinking about something and what it feels like to be looking at it&quot;. The latter was easy for me, so the definition seemed to be working fine. No one could peer into my mind and correct me, saying &quot;no, Marco, that&#x27;s not what I meant by <em>seeing mentally</em>&quot;.</p>
<p>I can conjure in my head the feeling of seeing the thing, and I can have a sense of its shape and its position relative to me. That, I concluded, was mental visualization. Of course I never paused to analyze those sensations objectively, never wondered how similar or different they were from actual seeing with one&#x27;s eyes. Only after learning about aphantasia did I realize that none of what goes on in my imagination is literally visual or sensory. When I imagine a sailboat, the sailboat is there in my mind, but it has no contours, no color, no texture.</p>
<p>This redefinition of the meaning of mental imagery was made harder by the co-existence of a &quot;spatial sense&quot;. I have a very strong sense of direction, and a quick glance at the map of an unfamiliar area is enough for me to keep walking confidently for a while. If I close my eyes and imagine my childhood house, I&#x27;m able to probe it thoroughly, visit each room in turn, and point with my finger at this and that piece of furniture. When I do that, my eyes move under my eyelids, because the house is there for me as a topological, three-dimensional object, and I can pretend to look around it.</p>
<p>Again, none of this imaginary probing is visual: if you ask me whether, in my mind, the bedroom is brightly lit or in complete darkness, the very question is meaningless to me. Yet, only a few years ago, I would have told you that I could mentally see the house. I didn&#x27;t know that vision and spatial sense were <a href="https://en.wikipedia.org/w/index.php?title=Place_cell" rel="nofollow noopener noreferrer" target="_blank">entirely</a> <a href="https://en.wikipedia.org/wiki/Grid_cell" rel="nofollow noopener noreferrer" target="_blank">separate</a> <a href="https://en.wikipedia.org/wiki/Boundary_cell" rel="nofollow noopener noreferrer" target="_blank">functions</a> of the brain.</p>
<figure><img src="https://aethermug.com/assets/posts/aphantasia-is-no-creativity-killer/all_the_places.webp" alt=""/></figure>
<p>When I took my first <a href="https://en.wikipedia.org/w/index.php?title=Vividness_of_Visual_Imagery_Questionnaire" rel="nofollow noopener noreferrer" target="_blank">VVIQ questionnaire</a>, designed by psychologists to measure the vividness of one&#x27;s mental imagery, I turned out as mostly but not totally aphantasic. Some questions were easy, like &quot;How clearly can you imagine the exact contours of the face, head, shoulders and body of a familiar person?&quot;. My answer to those was a straightforward &quot;not at all&quot;.</p>
<p>The questions with a stronger spatial component were trickier. For example, when asked to imagine the sun setting over the sea, I saw no colors or shapes, but I did get a <em>spatial</em> sense of the flat expanse of the sea and of the sphere of the sun. It wasn&#x27;t a picture but—because I had equated that spatial sense with the words &quot;seeing mentally&quot; until that point—I answered that I was able to vaguely visualize it.</p>
<p>Only after much more introspection did I learn to distinguish actual mental images from place-like imagination. My VVIQ now returns a verdict of full aphantasia.</p>
<p>Apart from non-visual 3D objects, what do I imagine? In fact, do I even <em>have</em> an imagination? (Someone recently asked me if I was a <a href="https://en.wikipedia.org/wiki/Philosophical_zombie" rel="nofollow noopener noreferrer" target="_blank">philosophical zombie</a>, a possibility that I found hard to reject convincingly.)</p>
<p>The best way I can describe my imagination is a <em>web of meaning</em>. It is mostly abstract and disembodied, but it is all connected. When given a prompt—a word, a sound, an (actual) image—I immediately recall a lot of information and feelings I&#x27;ve accumulated about it. From that, it is very easy for me to pull at the threads and expand in any direction. But I need something to start with, hence the hard time blank pages give me.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>To be clear, I&#x27;m convinced that this &quot;web of meaning&quot; way of imagining is something everyone does, aphantasics or not. But it might be a bit easier for someone like me to describe, because I don&#x27;t have the extra, attention-grabbing feature of mental pictures popping suddenly into my mind. Aphantasia simplifies the landscape a little, makes things easier to separate.</p>
<p>Learning about <a href="/posts/i-used-to-know-how-to-write-in-japanese">aphantasia hasn&#x27;t made the act</a> of drawing or conjuring visual ideas any easier for me. A blank page is still perfectly blank to me, and arbitrary details are quick to escape my memory. But things <em>are</em> a little better. Now that I know what I&#x27;m weak at, I can work around it. There is much less stress involved in the process.</p>
<p>For one thing, aphantasia has no bearing on any kind of art involving the use of a reference. In 2016 I got into artistic photography (all the images in this post are mine). More recently I returned to drawing daily for a couple of years, and my technique has improved drastically.</p>
<figure><img src="https://aethermug.com/assets/posts/aphantasia-is-no-creativity-killer/exist.webp" alt=""/></figure>
<p>For a while I felt guilty for using reference images or objects while drawing. I felt like a cheater, someone resorting to a crutch to make up for their mediocrity. Then I thought about it for five minutes and realized that a majority of all visual artists throughout history routinely used references.</p>
<p>The Impressionist movement was all about getting out <em>en plein air</em> and painting what was in front of them. Some of the most renowned paintings are portraits—think of the Mona Lisa, Rembrandt&#x27;s self-portraits, or the Girl with a Pearl Earring. Most drawing tutorials on YouTube assume the use of one or more reference images as a matter of course. This varies from artist to artist, but it&#x27;s safe to say that many great artists weren&#x27;t confident conjuring entirely new, fantastic subjects out of thin air.</p>
<p>Many artists <em>do</em> put original elements in their art that seem to come directly from their own imagination. This is harder for me to emulate, but it seems to be mostly a matter of convenience, easily replaced by an external reference, rather than a show-stopper.</p>
<p>As I studied drawing techniques, I also learned that most illustrators don&#x27;t begin drawing their art in its final form right away, like printers do. Instead they go by <a href="https://www.cgspectrum.com/blog/5-stages-of-digital-illustration" rel="nofollow noopener noreferrer" target="_blank">iterations</a>, laying down a very quick, sketchy first draft, then gradually refining the drawing and adding detail, returning over and over to each spot to improve the whole picture.</p>
<p>(Of course there are printer-like exceptions like <a href="https://www.youtube.com/watch?v=MGbvhyTZXfs" rel="nofollow noopener noreferrer" target="_blank">Kim Jung Gi</a>, who almost certainly wasn&#x27;t aphantasic, but he was revered as something of a genius even by non-aphantasic professionals.)</p>
<p>This iterative approach is perfect for me, because aphantasia only makes the initial, rough sketch difficult, not the 99% of subsequent work spent adding and tweaking.</p>
<p>A good sense of proportion seems to be more important, when drawing, than whatever the source of the idea was. If you&#x27;re a beginner, it&#x27;s easy to mess up the relative sizes and positioning of the parts, so that the whole comes out wonky and amateurish. This is true even when simply copying a reference image you have in front of your eyes.</p>
<p>Especially when drawing complex subjects like people and animals, the key skill is seeing what&#x27;s already on the paper and adding new strokes or colors that harmonize well with that, <em>not</em> having the &quot;right&quot; image available to refer to. Considering that I&#x27;ve kept improving consistently at this over time, I believe that aphantasia has little bearing on the sense of proportion. It&#x27;s a matter of eyeball-powered observation and deliberate practice.</p>
<p>Writing, fiction or not, comes even easier with the right approach. Words are inherently more abstract than pictures, and my intact &quot;web of meaning&quot; kind of imagination can evoke all the links and details I want.</p>
<p>Sure, if you ask me to describe a never-before-seen Pokemon, on the spot, I will struggle. I need something to get started with. I won&#x27;t struggle for long, though, because I&#x27;ve built up a catalog of visual and textual references from existing media that I can dip into to stimulate my synapses. Once the initial idea is sparked, my imagination can metabolize and flow freely in weird new directions that are deeply mine.</p>
<figure><img src="https://aethermug.com/assets/posts/aphantasia-is-no-creativity-killer/paused_to_think.webp" alt=""/></figure>
<p>All this might sound a bit confusing. Does aphantasia hinder creativity or not? Wasn&#x27;t that the creative wall I hit in my youth?</p>
<p>Although one can&#x27;t say for sure, I think that the answer to both questions is more a &quot;no&quot; than a &quot;yes&quot;. Aphantasia does prevent certain uses of imagination, certain creative pathways that specific artists might rely on for their work. In that sense, there are <em>approaches</em> to art that I can&#x27;t even hope to emulate. Art is an immensely varied and flexible category, though, and creativity is about doing things differently from others in the first place.</p>
<p>An aphantasic artist may not be able to compete with, say, Jackson Pollock or Gustav Klimt at their own styles, but who can? I believe that the same aphantasic person can find their own approach, even one that Klimt or Pollock would be hard pressed to replicate.</p>
<p>My problem when I gave up art in my teens wasn&#x27;t really my aphantasia, but my expectations for what I should be able to produce. I thought that staring at the blank page was the right way to begin a piece, and it didn&#x27;t occur to me that there may be other routes to self-expression. If anything, I lacked imagination about the ways to find my creativity. The good news is that this last part is <a href="/posts/year-two">discoverable</a>, and no longer a problem for me.</p>
<p>Now I create something every day. My drawings and photos are squarely in the amateur category, and my focus on writing means that they&#x27;ll probably stay there. But the pleasure they give me, even in little bursts! I hope no one will ever give up on that pleasure just because of aphantasia. It could mean depriving the world of something good and unique. ●</p>

<p><em>This post is included in the <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>.</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/aphantasia-is-no-creativity-killer/borders_of_the_heart.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[POSIWID]]></title>
            <link>https://aethermug.com/posts/posiwid?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/posiwid</guid>
            <pubDate>Thu, 20 Jun 2024 02:50:00 GMT</pubDate>
            <description><![CDATA[A mosaic]]></description>
            <content:encoded><![CDATA[<h2>I</h2>
<p>If your pants keep sliding down as you walk, you tighten your belt. The intention is for them to stay up at all times, but it takes a structural change—a tighter belt through its loops—for the intention to be realized. Until then, they are <em>Slide-Down Pants</em>.</p>
<p>If people keep on dying in the same curve in the road, month after month, the transportation authorities add warning signs, reflective arrows, rumble strips, and guardrails. They might even change the shape of the curve. All along, the intention is for the road to leave the people that pass through it alive and unharmed, but it takes a structural change for the intention to be realized. Until then, it is a <em>People-Killing Curve</em>.</p>
<p>If a forest keeps catching fire, burning homes and destroying ecosystems and polluting the air, the forest and conservation authorities thin out the trees, create buffer zones and firebreaks, do controlled burns, switch to fire-resistant construction materials, and encode fire prevention habits and response plans in the synapses of the nearby people. The intention was never for the forest to wreak havoc. It takes a structural change for the intention to be realized. Until then, it is a <em>Catastrophe Factory</em>.</p>
<figure><img src="https://aethermug.com/assets/posts/posiwid/506816ldsdl.webp" alt="Painting of tree trunks in a forest, covered with verdant foliage."/><figcaption>Undergrowth, Vincent van Gogh</figcaption></figure>
<h2>II</h2>
<p>If I&#x27;m a prisoner in a labor camp, my intention isn&#x27;t to help my oppressors. On the contrary, my intention is to leave the place forever, and to have nothing to do with them. Yet there is a structure in place that makes me comply with <em>their</em> intentions. This structure is made of mechanisms that detect my disobedience and trigger retaliation, such as violence and deprivation of food, sleep, and comfort. For my oppressors, the camp is working as intended. For me, only a radical change in structure—such as entirely removing myself from the camp—would make it align with my intentions and needs.</p>
<p>If you&#x27;re a plant, your intention isn&#x27;t to be reduced to a single checklist item on a human&#x27;s vitamin chart.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/posiwid/vitaminchart.webp" alt="" title="A list of vitamins needed by the human body, each with pictures food items that provide it."/></figure>
<p>As a plant, you don&#x27;t <em>have</em> an intention, but your genes have something analogous: a plan for growth, reproduction, cross-pollination, speciation. Yet there is a structure in place that makes you comply with those human intentions. This structure is made of mechanisms that redirect your seeds, detect your deviations from an imagined ideal, and relocate your fruits and leaves to the other side of the world. For humans, your species is working as intended. For you, only a radical change in structure—your seeds escaping into the depths of wilderness—would make it align with your genetic tendencies.</p>
<p>If you&#x27;re a political party, your intention isn&#x27;t to be led by incompetent people. On the contrary, your intention is to select the individuals best suited to lead a nation and to solve its multifaceted problems. Yet there is a structure in place that makes you filter out the competent people and promote the unqualified. This structure is made of mechanisms that detect the fanatic and irrational responses of the crowds and trigger more of the actions that caused them, and mechanisms that sink the parties that do that less effectively. Moreover, the intention of the incompetent people is often different: to lead the party. For them, the party works well enough with the current mechanisms, if not perfectly. For you—the party as a group—only a structural change would make politics align with your intentions. Until that happens, you&#x27;re a <em>Moron-Selection System</em>.</p>
<figure><img src="https://aethermug.com/assets/posts/posiwid/529269ldsdl.webp" alt="Painting of a grove with neatly aligned trees and two human figures sketched small in the center."/><figcaption>Undergrowth with two Figures, Vincent van Gogh</figcaption></figure>
<h2>III</h2>
<p>In 2017, near the southern entrance of the Suez Canal, a sailor called Mohammad was aboard a container ship when he put his name on a piece of paper.</p>
<p>A court courier had come with a letter declaring that the ship was being put on hold until an unpaid bill was cleared. The captain was not on board that day, so Mohammad, the second in command, signed the letter as the legal guardian of the ship. That signature marked his fate for the next four years.</p>
<p>Mohammad had nothing to do with the unpaid bill. The owners of the ship, the people who should have sorted the matter out, did nothing. The ship remained there, anchored off the coast of Egypt, without permission to move.</p>
<p>Gradually, the other sailors on the container ship returned to their home countries. When Mohammad tried to go ashore himself, the police took him back to the ship: he was the legal guardian. After two years of living on that artificial island, the last of his crew-mates left and Mohammad remained completely alone on the giant vessel.</p>
<p>At one point, the ship began to sink. He called for help via radio, and hoped he&#x27;d finally be released from his entrapment. The people on land repaired the ship&#x27;s hull and kept him locked up.</p>
<p>When Mohammad pleaded to be put in a prison—a more enticing prospect for him than complete isolation in the middle of the sea—he was told that they couldn&#x27;t do that, because he had done nothing wrong.</p>
<p>During those four years, first Mohammad&#x27;s mother, then his grandmother died, and all he could do was mourn them from afar, with no one to console him. The ship&#x27;s owners sent him food and supplies, but they were less than steady in their shipments. Often, dried bread was all he had to eat.</p>
<p>Mohammad <a href="https://www.bbc.com/news/world-middle-east-56842506" rel="nofollow noopener noreferrer" target="_blank">was finally allowed to leave</a> his dusty, insect- and rat-infested, barely-floating home in 2021, when the International Transport Workers’ Federation found a volunteer to replace him as legal guardian of the ship.</p>
<p>This wasn&#x27;t an isolated Kafkaesque episode. &quot;<a href="https://www.imo.org/en/OurWork/Legal/Pages/Seafarer-abandonment.aspx" rel="nofollow noopener noreferrer" target="_blank">Seafarer abandonment</a>&quot; is a thing. In 2023 alone, 143 new cases like Mohammad&#x27;s were reported around the world, with more than 270 simultaneous cases still unresolved. The pattern is the same: legal or visa-related circumstances prevent the crew from leaving the ship, the ship can&#x27;t be moved, and the people really responsible for the problems are somewhere far away, unconcerned or unhurried. The sailors are often left without salary, innocent but stuck in a gridlock outside their power to resolve.</p>
<p>All along, it was never the intention of those laws and practices to detain innocent people in pitiful conditions. A structural change might avoid that outcome in the future. Until then, those ports are <em>Prisons for Innocents</em>.</p>
<figure><img src="https://aethermug.com/assets/posts/posiwid/529262ldsdl.webp" alt="Colorful painting of fields with flowers and crops, seen from an elevated perspective."/><figcaption>Field with Poppies, Vincent van Gogh</figcaption></figure>
<h2>IV</h2>
<blockquote>
<p>According to the cybernetician, <strong>the purpose of a system is what it does</strong>. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.</p>
<p>— Stafford Beer ●</p>
</blockquote>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/posiwid/535001ldsdl.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA["Bad" Framings]]></title>
            <link>https://aethermug.com/posts/bad-framings?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/bad-framings</guid>
            <pubDate>Thu, 13 Jun 2024 08:08:02 GMT</pubDate>
            <description><![CDATA[And how to spot them]]></description>
            <content:encoded><![CDATA[<p>Of all the problems in physics, one of the thorniest over the course of centuries was figuring out what in the world this thing we call &quot;light&quot; is. In the late 17th century, Christiaan Huygens observed that light has some very wave-like properties, like refraction and diffraction, so he hypothesized it must be a wave, moving in some kind of invisible liquid—a so-called &quot;aether&quot;. Thus the aether debate began.</p>
<figure><img src="https://aethermug.com/assets/posts/bad-framings/Huygens_Statue_Delft.webp" alt="Bronze bust statue of Christian Huygens holding a circular object in front of his face."/><figcaption>Christiaan Huygens with a lens, although it could also be a portable makeup mirror (source: PieterJanR, CC BY-SA).</figcaption></figure>
<p>Only a few years later, Newton rejected Huygens&#x27; theory because it didn&#x27;t explain certain properties of light. On top of that, Newton was an expert in the motion of planets, and imagining all planets immersed in a liquid would have wreaked havoc in his newly-discovered equations. He claimed that light is made not of waves but particles, which don&#x27;t need an aether in order to propagate. He still believed in an aether, though: one that didn&#x27;t interfere with the planets, and only disturbed light in certain cases, explaining its weird wave-like behavior.</p>
<p>A while later other eminent scientists found a way to fix the wave interpretation of light, solving some of the problems pointed out by Newton. As for the question of planet motion, they hypothesized that the aether might be an exotic kind of liquid, one that offers little resistance to slow-moving celestial objects, but is almost solid-like for something as fast as light. This allowed the planets to &quot;drag&quot; the aether along with them, while still letting light propagate as high-frequency waves. This idea sort of worked mathematically, but at the cost of more and more complexity.</p>
<p>The debate went on, swinging back and forth for generations. Every now and then there would be an experiment that cast doubt on the latest theory, and the tide would turn. Very intelligent physicists would make modifications to the theory, throw away some parts and add new assumptions, just to make aether work in <em>some</em> way. Only at the beginning of the 20th century did Albert Einstein appear with his special theory of relativity and explain things without even needing to mention an aether. With that, he unblocked everything, and the concept of a light-carrying medium became mostly obsolete.</p>
<p>What changed throughout this process, and why did it take so long? The boring answer is that they initially didn&#x27;t have enough evidence, and once they did, they puzzled it all out. That&#x27;s certainly true, but one can imagine them getting closer to the &quot;right&quot; theory much earlier, and by doing so they might have designed the necessary experiments earlier, too, confirming their predictions.</p>
<p>The better answer to my &quot;what changed?&quot; question is, I think, that Huygens and Newton and all the others had settled into a bad framing, and Einstein replaced it.</p>
<p><a href="/posts/a-framing-is-a-choice-of-boundaries">A framing is a choice of boundaries</a>: you make up a way to categorize things around you in order to achieve some goal of yours. Those theories were all based on a framing in which something can be a wave <em>or</em> a particle, with a clear, mutually-exclusive boundary between the two behaviors. They also attached a label around this mysterious thing they called &quot;aether&quot;, and treated it as a real object. Because of that, their <a href="/posts/embedded-prophesy-devices">models</a> had to be about the properties and relationships between the aether and the waves rippling on it, or the particles speeding through it.</p>
<figure><img src="https://aethermug.com/assets/posts/bad-framings/Lorentz_Transform_Animation.gif" alt="Animation of the distortion of spacetime as the observer moves close to the speed of light along a random path."/><figcaption>Einstein said that spacetime is a thing, and how fast you travel through it determines how it stretches and compresses relative to you. A new model based on a new framing. (source: Drummyfish, CC0).</figcaption></figure>
<p>At the same time, those theories didn&#x27;t have a labeled boundary around &quot;spacetime&quot;, so the fact that spacetime might have its own properties and interactions with other things was not even contemplated. Spacetime was part of the immutable background. Einstein&#x27;s great leap was to start with a new framing, one without a name for whatever &quot;aether&quot; was meant to be, and to give &quot;spacetime&quot; and &quot;frames of reference&quot; first-class citizenship in his theory. After that, theory and experiments finally clicked into place in a way that surpassed expectations, and the enormous advances of 20th-century physics began.</p>
<p>In my <a href="/posts/a-framing-is-a-choice-of-boundaries">first post explaining framings</a>, I wrote:</p>
<blockquote>
<p>How can a framing be &quot;better&quot; or even just &quot;good&quot;? If they are all subjective simplifications and projections onto reality of boundaries that <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">don&#x27;t really exist</a>, does it make sense to say that a framing is, <strong>in absolute terms</strong>, &quot;good&quot; or &quot;bad&quot;, &quot;true&quot; or &quot;false&quot;, &quot;right&quot; or &quot;wrong&quot;?</p>
<p>No, in fact, I don&#x27;t think it makes sense. How you draw the lines is up to you, and it has nothing to do with morality or accuracy. But a framing can be more or less <em>effective at achieving a given goal</em>. A framing is effective if and when it lets you pick out meaningful differences that you would otherwise overlook. You judge it by how well and how easily it lets you simulate the parts of reality you want to predict. Even when looking at the same corner of reality, another person, with different goals, might find a different framing to be more effective.</p>
<p>[bold emphasis added]</p>
</blockquote>
<p><em>In absolute terms</em>, judging a <a href="/posts/a-framing-and-model-about-framings-and-models">framing</a> as &quot;good&quot; or &quot;bad&quot; is meaningless. The same terms, however, can work just fine if you specify the goal you apply a framing to. When Huygens published his <em>Treatise on Light</em> in 1690, where he first proposed the idea of light waves in an aether, the framing was great (read: effective) for his purpose at the time. Light <em>does</em> behave as waves in many ways, and Huygens&#x27; <em>Treatise</em> was a foundational work of great historical importance for physics. Newton, however, wanted to explain planetary motion with much higher accuracy than had ever been done before, so a liquid aether wasn&#x27;t a good framing for him.</p>
<p>The same goes for most other proposals in the debate, like Newton&#x27;s idea about light as particles—we know today that light <em>also</em> behaves as particles, the wave-particle boundary having been ditched with quantum physics. None of those framings was &quot;bad&quot; in absolute terms, and many were &quot;good&quot; within a limited scope. But every time new evidence appeared from experiments, the goal would be changed to explaining those results, and all past framings came up short.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>One sign of a bad framing (given a purpose) is confusion. Things don&#x27;t add up, they don&#x27;t convince you. This is the easy case, because it&#x27;s easy to recognize your own confusion, and you can keep working on your framing until you aren&#x27;t confused any more. Often, that&#x27;s enough: it took more than 300 years, but we now have a convincing understanding of light that works just fine for most purposes. Physics is very good, or at least inexorable, at fixing these cases of poor framings.</p>
<p>But a sneakier sign of a bad framing is an endless debate. Two or more parties keep on arguing that their conclusions are right and the others&#x27; are wrong. This one is trickier to solve from the inside, because each side of the debate might be very convinced of their logic—there isn&#x27;t much confusion involved, nor clear new evidence to puzzle you too much.</p>
<p>An example of a long-running debate is that of nature vs nurture: are human behaviors and traits due more to genes or to the environment?</p>
<p>Another one is the quest for a perfect scientific definition of &quot;life&quot;: how do you decide if something is alive or not?</p>
<p>Or the debate about liberalism and conservatism: should the government dictate certain things or leave them up to the people?</p>
<p>Or free will vs determinism, or what constitutes &quot;art&quot;, or &quot;why does it feel like something to be conscious?&quot;, and so on with all the famous debates even now raging in all areas of knowledge. Such chronic disputes tend to happen especially in the softer sciences, or on topics for which clear, tangible evidence is scarce.</p>
<p>None of these debates has been resolved, even after decades or centuries of deep thinking and dialog between (sometimes) intelligent people. And most will never be resolved, because the people involved have different purposes in mind, and have sharpened their framings for those separate goals. Given that your opponents have agendas different from yours, of course the framings they propose will be &quot;bad&quot; for you.</p>

<p>The sneaky part is that <em>each side employs <a href="/posts/a-fundamental-framing-of-human-language">the same words</a></em> to refer to boundaries that were drawn differently. The framings they use, as they stand, are incompatible, but the people involved do not (or refuse to) see that, engrossed as they are in explaining their flawless models built on top of those incompatible foundations.</p>
<p>The way you define, say, &quot;intelligence&quot; will have different uses for a roboticist, a zoologist, and a SETI researcher. Why would a single definition be guaranteed to exist?</p>
<p>Some may argue that all debaters have the shared goal of getting to the truth. The catch is that the way they define &quot;truth&quot;—another choice of boundaries—is itself relative. I don&#x27;t think it&#x27;s a cynical take to say that people never really need pure Truth, for its own sake, but something more practical, like a theory that predicts certain things well, or answers to certain questions of personal interest, or money, influence, and all the other worldly desires.</p>
<p>But even if everyone agreed on seeking an ultimate truth, the fact that people can&#x27;t reach it even after spending eons in reasoned arguments smells of bad framing again.</p>
<p>Cases of sneaky bad framings aren&#x27;t that hard to spot from the outside, precisely because both sides go on forever without being able to reconcile their views with each other even a little. That&#x27;s the tell-tale sign, the hint that they should stop discussing their arguments and begin discussing what each faction is trying to achieve—what they really mean when they use those words that they take for granted. Without first aligning on that, no progress can be expected in a debate. <a href="/posts/rationality-fails-at-the-edge">Rationality fails at the edges</a>. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p><a href="https://aethermug.com/posts/bad-framings#comments">Comment on this post&#x27;s page</a></p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/bad-framings/911947absdl.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Borges on Chaos Theory]]></title>
            <link>https://aethermug.com/posts/borges-on-chaos-theory?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/borges-on-chaos-theory</guid>
            <pubDate>Thu, 06 Jun 2024 07:59:01 GMT</pubDate>
            <description><![CDATA[About a weird, lovely short story]]></description>
            <content:encoded><![CDATA[<h2>I</h2>
<p>&quot;Pierre Menard, Author of the <em>Quixote</em>&quot; is part of a short story collection, and that makes it a short story. But this is no typical story. It has no plot, no setting, no descriptions or dialog, no direct interactions between people, and only one character—or one and a half, if you count the narrator. In other words, it&#x27;s a kind of story that Jorge Luis Borges excelled at.</p>
<p>I&#x27;m an avid reader of short stories, and can easily recall several that have left indelible footprints in my psyche: Hemingway, Poe, Buzzati, Tsutsui, Borges, Borges, Borges. Yet, all things considered, this strange story from 1939 might just be my favorite, one my mind has gone back to time and again since I first read it fifteen or twenty years ago.</p>
<p>Borges was a meta-author (which also means that it&#x27;s impossible to spoil a Borges story). His &quot;Pierre Menard&quot; follows a pattern shared by many of his other works from the same collection, because it&#x27;s really a literary essay by an imaginary critic about an imaginary book by an imaginary author. The author in question is the titular Pierre Menard, a French poet and writer who has just passed away, leaving behind a corpus of rather conventional works. But—the narrator laments—Menard&#x27;s masterpiece, his greatest legacy, is also his most overlooked and misunderstood undertaking: several chapters of the &quot;Don Quixote&quot;.</p>
<p>Not a modern reboot of Cervantes&#x27; novel, nor a reinterpretation of it, but a new novel, written by Menard as a 20th-century author, that happens to be perfectly identical, <em>word for word</em>, to the original.</p>
<p>Menard explains in a letter to the narrator (translation by Andrew Hurley):</p>
<blockquote>
<p>This game of solitaire I play is governed by two polar rules: the first allows me to try out formal or psychological variants; the second forces me to sacrifice them to the “original” text and to come, by irrefutable arguments, to those eradications.</p>
</blockquote>
<p>In other words, he arrived at the final form of his text after a proper creative process, in which he strained to express his own ideas, impressions, and themes, as a 20th-century author, only to converge through editing and reasoned re-writing to words that matched exactly those written by Cervantes. This is hard work:</p>
<blockquote>
<p>He dedicated his scruples and his nights &quot;lit by midnight oil&quot; to repeating in a foreign tongue a book that already existed. His drafts were endless; he stubbornly corrected, and he ripped up thousands of handwritten pages.</p>
</blockquote>
<figure><img src="https://aethermug.com/assets/posts/borges-on-chaos-theory/borges.webp" alt="A portrait of Jorge Luis Borges"/><figcaption>Jorge Luis Borges (1899–1986)</figcaption></figure>
<p>If you&#x27;re wondering how one would write such a book in practice, you&#x27;re dancing to Borges&#x27; tune. His eight-page treatise is so methodically crafted, so full of erudite digressions and plausible references and even tedious little rants that, before you know it, you&#x27;ve happily suspended your disbelief and accepted Menard&#x27;s endeavor as something that <em>might</em> happen. Except—this is a sign of the author&#x27;s true genius—you&#x27;re never <em>fully</em> committed to the idea, and are left thinking and wondering, in a corner of your mind, how one would write such a book in practice, and what it would be like to read it. That thin gap between what the story is saying and what you&#x27;re willing to believe is the first layer of Borges&#x27; magical irony.</p>
<p>The French author&#x27;s efforts, anyway, seem to pay off. According to the narrator,</p>
<blockquote>
<p>Menard&#x27;s fragmentary <em>Quixote</em> is more subtle than Cervantes&#x27;. Cervantes crudely juxtaposes the humble provincial reality of his country against the fantasies of the romance, while Menard chooses as his &#x27;reality&#x27; the land of Carmen during the century that saw the Battle of Lepanto and the plays of Lope de Vega.</p>
</blockquote>
<p>A choice that, in the 20th century, is not obvious and must have a deeper meaning.</p>
<p>Moreover, other modern authors of historical fiction, writing about those times, would have indulged in colorful stereotypes and romanticism (says the narrator) but in Menard&#x27;s work there are &quot;no gypsy goings-on or conquistadors or mystics or Philip IIs or <em>autos da fé</em>. He ignores, overlooks—or banishes—local color. That disdain posits a new meaning for the &#x27;historical novel.&#x27;&quot; (How does one not chuckle here?)</p>
<p>The story goes on to analyze some passages of both works, comparing the 1605 version with the Menard one, contrasting their philosophical messages, stylistic quirks, and influences (e.g. years as a soldier in the Spanish Navy on one side, Nietzsche on the other). All, of course, while providing pairs of absolutely identical quotations.</p>
<p>Scholars (the non-imaginary kind) have explained &quot;Pierre Menard&quot; as a humorous critique of literary criticism and the meaning of authorship, and a statement about the role of the reader in creating meaning. The short story ends with the narrator&#x27;s claim that Menard invented a whole new way to read, one where you deliberately imagine the text as written at a different time and by a different author, leading to radically different interpretations of the original text.</p>
<p>I think the scholars are right, and that&#x27;s all very interesting. But the reason I love this story is that I see in it another layer, one that has to do with the chaotic nature of reality. And I mean that literally: it&#x27;s a story (also) about the scientific Theory of Chaos.</p>
<h2>II</h2>
<p>Although they use the same words, the <em>Don Quixote</em> of Cervantes and that of Menard exist in very different contexts:</p>
<blockquote>
<p>Not for nothing have three hundred years elapsed, freighted with the most complex events. Among those events, to mention but one, is the <em>Quixote</em> itself.</p>
</blockquote>
<p>The difference of everything that happened in those three hundred years, along with the different culture (Menard is French, not Spanish) gives the hypothetical <em>Quixote</em> a completely different meaning. Another context, another meaning.</p>
<p>Borges didn&#x27;t invent that from scratch. The same thing happens so often that we&#x27;ve stopped noticing it. Take, for example, proverbs (watch just the first 3-4 scenes):</p>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/ez2dv8qu2gA?si=o1uyGNtFYSBq2xKQ&amp;start=369&amp;clip=UgkxVvcUYKLGSccGJDgOgVT36L4AN2LO1_T8&amp;clipt=EPrDFhjv4xg" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<p>Every time we intone a proverb like that, we&#x27;re faithfully repeating the same words, but they mean different things every time. Each use of a set phrase refers to a unique situation, with different people involved and problems to solve.</p>
<p>We do that with more than words. Around 600 BCE, the Etruscan people in central Italy had the custom of bundling several wooden sticks around an axe to signify honor and authority.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/borges-on-chaos-theory/Ascia_bipenne.webp" alt="" title="A picture of an ancient fasces, several pipes attached to the side of a rusted axe, as found in an Etruscan tomb."/></figure>
<p>The Romans took up that tool, which they called <em>fasces</em>, and used it to remind people of the individual power and authority of a magistrate. Sometimes it was used directly for corporal punishment, but its function was mostly symbolic. After the fall of the Roman Empire, a new idea spread across Eurasia: force can snap a single rod, but not a bundle of them. Following that, kings and nations took up the <em>fasces</em> as a symbol not primarily of strength, but of unity and harmony. It is still in use today, with a similar meaning, in countless official symbols and images.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/borges-on-chaos-theory/lincoln_statue.webp" alt="The Lincoln Memorial statue, with two fasces visible at the two sides of the chair."/><figcaption>The Lincoln Memorial statue, with two fasces visible at the two sides of the chair.</figcaption></figure>
<p>But when it was chosen by Benito Mussolini as the emblem of his National Fascist Party in 1926, the meaning of the <em>fasces</em> was yet another one: not merely authority and unity, but a populist reminder of the (supposed) grand heritage of Imperial Rome. Today, most Italians regard it as a symbol of the atrocities of war, similar to how the Nazi swastika is seen all over the world.</p>
<p>I made this short historical detour to show how the meaning of the same icon will change as history unfolds. The events of the world add their colors to it. Each of its adopters brings in their experiences and ideals. And the very existence of that symbol in the past can alter its significance in the future. This is what Menard meant when he wrote that much has happened since the original <em>Quixote</em>, and &quot;among those events, to mention but one, is the <em>Quixote</em> itself.&quot;</p>
<h2>III</h2>
<p>There are two elements in all this that seem to be at odds with each other. On the one hand, things like a proverb, a symbol, or—as in Borges&#x27; story—a novel have some sort of universality. They transcend the ages and remain applicable in different contexts. On the other hand, they acquire a unique flavor every time, dependent on the specifics of the people and times involved.</p>
<p>This is not a paradox, though, but a typical result of chaotic processes.</p>
<p>In Chaos Theory, a process or system is chaotic if even the tiniest difference in conditions will lead to very different results in a short time. Whether it&#x27;s a <a href="https://en.wikipedia.org/wiki/Butterfly_effect" rel="nofollow noopener noreferrer" target="_blank">butterfly flapping its wings</a> or Adolf Hitler dying young in an alternate universe, the ripple effects spiral out and potentially make a huge difference.</p>
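A quick way to see this sensitivity in action (my own sketch, not something from the post) is the logistic map, a one-line equation that is fully chaotic for the right parameter value:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x); the map is fully chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a butterfly-flap of a difference

# The initial gap of one part in ten billion roughly doubles at every
# step, so within a few dozen steps the trajectories are unrecognizable.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest separation reached: {max_gap:.3f}")
```

The gap grows by roughly a factor of two per iteration here, so even a difference of 10⁻¹⁰ in the starting point saturates to a macroscopic divergence within about forty steps.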
<p>But chaos isn&#x27;t necessarily about endless novelty and disorder. Often chaotic processes lead to predictable behavior—called &quot;attractors&quot;—like coming to a standstill or repeating the same events regularly. For example, a human heartbeat at rest and the Moon revolving around the Earth are pockets of regularity in a chaotic world.</p>
<p>And then there are &quot;strange attractors&quot;. These are a weird mix of the complete unpredictability of chaos and the complete predictability of periodic attractors. When a process is in a strange attractor, its behavior <em>almost</em> repeats, nearly—but not quite—returning to the same conditions as before, except with some small differences that may or may not lead to huge changes in the near future.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/borges-on-chaos-theory/duffing.gif" alt="A strange attractor for a Duffing oscillator (basically a funky flexible pendulum used to study chaos mathematically). The dot represents the position of the oscillator on the x axis and its velocity on the y axis. It almost forms loops, but never exactly. You can&#x27;t predict if it will suddenly switch to one side or the other, yet it sticks roughly to the same general shape."/><figcaption>A strange attractor for a Duffing oscillator (basically a funky flexible pendulum used to study chaos mathematically). The dot represents the position of the oscillator on the x axis and its velocity on the y axis. It almost forms loops, but never exactly. You can&#x27;t predict if it will suddenly switch to one side or the other, yet it sticks roughly to the same general shape.</figcaption></figure>
<p>This is what comes to mind when I read &quot;Pierre Menard, Author of the <em>Quixote</em>&quot;. Every moment in time is unique in many ways, but there are parts of reality that <em>almost</em> repeat. Except they don&#x27;t really repeat, and the small differences may lead to very different outcomes, interpretations, and meaning. History doesn&#x27;t repeat itself, but it might be caught in a giant strange attractor.</p>
<p>I love Borges the author because he appears to have understood, at an intuitive literary level, deep truths about reality that physicists and mathematicians hadn&#x27;t even discovered in his time. Chaos Theory is a rabbit hole that they&#x27;re still busy exploring to this day. His intuition for the mathematical beauty of the world—in its regularity and in its chaos—transpires from his stories, and &quot;Pierre Menard&quot; is a prime example of that. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/borges-on-chaos-theory/the_hoosier_don_quixote.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Human Language Is Not a Network]]></title>
            <link>https://aethermug.com/posts/human-language-is-not-a-network?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/human-language-is-not-a-network</guid>
            <pubDate>Wed, 29 May 2024 06:04:47 GMT</pubDate>
            <description><![CDATA[A visual explanation]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>What my eyes saw was <strong>simultaneous</strong>; what I shall write is <strong>successive</strong>, because language is successive. Something of it, though, I will capture.</p>
<p>— The Aleph, Jorge Luis Borges</p>
</blockquote>
<p>For all the fancy things you can do with words, the one thing you <em>can&#x27;t</em> do is speak more than one word at once. Voice is mono. Every text is the storied coiling of a single string of letters: all lines in a book are turns of the same necklace.</p>
<p>Here lies the impossible challenge of language: <a href="https://planktonvalhalla.com/20220829-the-world-is-a-network/" rel="nofollow noopener noreferrer" target="_blank">the universe is a network</a>, a seamless mesh of things that interact at the same time, at all scales, and in three dimensions. How does one even begin to represent that with a serial succession of words?</p>
<p>Even a picture like this—itself an enormous reduction of the actual place and events that it portrays—presents all sorts of challenges when you try to put it to words:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/redcharlie-xtvo0ffGKlI-unsplash.webp" alt="Photo: redcharlie, Unsplash" title="A savanna scene with rhinos and other animals, described in the text."/><figcaption>Photo: redcharlie, Unsplash</figcaption></figure>
<p>This is how GPT-4o describes it:</p>
<blockquote>
<p>A natural scene in what <span class="text-red-900 dark:text-red-400"><strong>appears to be</strong></span> a savanna or grassland environment. In the foreground, <span class="text-red-900 dark:text-red-400"><strong>there are</strong></span> <span class="text-yellow-800 dark:text-yellow-300">two rhinoceroses walking across a dirt path</span>. The larger rhinoceros <span class="text-red-900 dark:text-red-400"><strong>is</strong></span> likely an adult, while the smaller one <span class="text-red-900 dark:text-red-400"><strong>is</strong></span> a juvenile. Both animals <span class="text-red-900 dark:text-red-400"><strong>have</strong></span> thick, grayish skin and prominent horns on their noses.</p>
<p>In the background, <span class="text-red-900 dark:text-red-400"><strong>there are</strong></span> more rhinoceroses and <span class="text-yellow-800 dark:text-yellow-300">a herd of antelopes, possibly impalas, grazing or moving through the landscape</span>. The vegetation <span class="text-red-900 dark:text-red-400"><strong>consists of</strong></span> dry grasses and scattered trees with autumn-colored leaves.</p>
<p>The scene <span class="text-red-900 dark:text-red-400"><strong>is</strong></span> set against a backdrop of rolling hills or low mountains, <span class="text-red-900 dark:text-red-400"><strong>covered</strong></span> in <span class="text-yellow-800 dark:text-yellow-300">green vegetation</span>, which <span class="text-yellow-800 dark:text-yellow-300">contrasts with the dry foreground</span>. The sky <span class="text-red-900 dark:text-red-400"><strong>is</strong></span> overcast.</p>
</blockquote>
<p>Loosely speaking (this isn&#x27;t a rigorous analysis) the verbs in bold red tell us what is there: they pick some aspects of the picture and assign names—boundaries—to them. These are copular verbs like <em>is</em>, <em>are</em>, and <em>appear</em>, and other linking verbs like <em>have</em> and <em>consist of</em>. Adjectives and adjectival phrases <a href="/posts/a-fundamental-framing-of-human-language">&quot;customize&quot;</a> the general categories mentioned.</p>
<p>Most of the sentences in this description are like that: mere statements about what&#x27;s there. This is typical for descriptions of static things, less typical for real-life scenes, where we usually need &quot;active&quot; verbs—verbs representing interactions. In the photo above, only a few verbs are active, and for those I&#x27;ve marked the whole sentences in yellow:</p>
<ul>
<li><em>two rhinoceroses walking across a dirt path</em></li>
<li><em>a herd of antelopes grazing</em></li>
<li><em>green vegetation, which contrasts with the dry foreground</em></li>
</ul>
<p>They tell not merely what&#x27;s there, but what their relationships are—explicit or implicit:</p>
<ul>
<li>Rhinos → path</li>
<li>Antelopes → (implied grass)</li>
<li>Vegetation → foreground</li>
</ul>
<p>But there are a million interactions in the scene that aren&#x27;t mentioned in this text:</p>
<ul>
<li>Animals breathing air</li>
<li>Antelopes listening for signs of predators</li>
<li>Flies walking on rhino skin</li>
<li>Leaves turning sunlight into chemical energy</li>
<li>Branches holding leaves up</li>
<li>Wind blowing up fine dust</li>
<li>Jupiter pulling at all of them gravitationally</li>
<li>...</li>
</ul>
<p>Notice also how the text works in little bursts. It picks out a specific part of the picture first, tells us something about it, then it &quot;jumps&quot; to another part of the picture, or it adds another burst of information about something it mentioned before. This feels entirely natural, but only because of habit. No such bursts exist in the scene being described. So why does language have to be like this?</p>
<h2>Network Wrangling</h2>
<p>Before we put it into words, the scene is a network of inter-relations, something like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_1.webp" alt="" title="A simple network structure, with a few dozen circles linked randomly by straight lines. The circles near the border of the image have links going out of the frame, indicating that there are more nodes in the network outside of the image."/></figure>
<p>This is of course an enormous simplification—there are many more moving parts in the picture, each with many more &quot;links&quot; for interactions with other parts. But it works well enough to show the operation of language.</p>
<p>As you can see from this first drawing, the nodes (the objects we give names to in the picture) have lots of relationships between themselves, and they&#x27;re also linked with relationships to other things outside the frame. We&#x27;re looking at a small patch of &quot;nodes&quot; in the seamless network that is the Universe.</p>
<p>The colors here have no deep meaning, except to show that there is a gentle continuity between things to begin with. You could see the transition between colors as a fuzzy boundary that <a href="/posts/a-framing-is-a-choice-of-boundaries">we make precise</a> by assigning names and categories to things. For example, certain shades of blue might correspond with what, in the savanna photo, we call a rhino, and purple with an antelope. We&#x27;ll return to the colors later on.</p>
<p>Immediately we face the main obstacle we started with: we can only speak or write one word at a time. In other words, <em>language is one-dimensional, while the world is multidimensional.</em></p>
<p>Computer scientists have found two ways to &quot;serialize&quot;—convert into a sequence—the complex information of a network. The first is called an <em>adjacency matrix</em>. This is a square matrix with one row and one column for each node label. The cell where a given row and column intersect indicates whether or not there is a connection between the corresponding nodes. Wikipedia gives the example of this simple network:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/Simple_cycle_graph.webp" alt="" title="Three circles, labled a, b, and c and each connected to the other two by a line."/></figure>
<p>The corresponding adjacency matrix is:</p>
<table><thead><tr><th style="text-align:center"></th><th style="text-align:center">a</th><th style="text-align:center">b</th><th style="text-align:center">c</th></tr></thead><tbody><tr><td style="text-align:center"><strong>a</strong></td><td style="text-align:center">0</td><td style="text-align:center">1</td><td style="text-align:center">1</td></tr><tr><td style="text-align:center"><strong>b</strong></td><td style="text-align:center">1</td><td style="text-align:center">0</td><td style="text-align:center">1</td></tr><tr><td style="text-align:center"><strong>c</strong></td><td style="text-align:center">1</td><td style="text-align:center">1</td><td style="text-align:center">0</td></tr></tbody></table>
<p>You can read this as</p>
<ul>
<li><strong>a</strong> has zero connections with <strong>a</strong></li>
<li><strong>a</strong> has one connection with <strong>b</strong></li>
<li>etc.</li>
</ul>
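In code (a small Python sketch of my own; the post itself has none), the triangle graph and its matrix look like this:

```python
labels = ["a", "b", "c"]

# Adjacency matrix for the triangle graph above: matrix[i][j] is 1
# when labels[i] and labels[j] are connected, 0 otherwise.
matrix = [
    [0, 1, 1],  # a: connected to b and c
    [1, 0, 1],  # b: connected to a and c
    [1, 1, 0],  # c: connected to a and b
]

# Reading the matrix back: list each connection once, taking only
# the cells above the diagonal (the matrix is symmetric).
edges = [
    (labels[i], labels[j])
    for i in range(len(labels))
    for j in range(i + 1, len(labels))
    if matrix[i][j]
]
print(edges)  # → [('a', 'b'), ('a', 'c'), ('b', 'c')]
```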
<p>This is fine for a computer, because a computer is built to work with data structures just like that, but we humans have a hard time thinking in matrix terms.</p>
<p>The second method to serialize a network is called an <em>adjacency list</em>. It consists in giving a label to each node, then listing, for each label, which other labels it is connected to. For the three-node example above:</p>
<table><thead><tr><th style="text-align:center">node</th><th style="text-align:center">relation</th><th style="text-align:center">connected nodes</th></tr></thead><tbody><tr><td style="text-align:center">a</td><td style="text-align:center">adjacent to</td><td style="text-align:center">b, c</td></tr><tr><td style="text-align:center">b</td><td style="text-align:center">adjacent to</td><td style="text-align:center">a, c</td></tr><tr><td style="text-align:center">c</td><td style="text-align:center">adjacent to</td><td style="text-align:center">a, b</td></tr></tbody></table>
<p>This seems more approachable. When you say &quot;two rhinoceroses walking across a dirt path&quot;, you&#x27;re making the equivalent of an adjacency list item:</p>
<table><thead><tr><th style="text-align:center">node</th><th style="text-align:center">relation</th><th style="text-align:center">connected nodes</th></tr></thead><tbody><tr><td style="text-align:center">two rhinos</td><td style="text-align:center">walking</td><td style="text-align:center">dirt path</td></tr></tbody></table>
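Sketched in Python (again my own illustration, not from the post), the triangle becomes a dictionary, and the rhino sentence becomes a labeled triple:

```python
# Adjacency list for the triangle graph: each node maps to the
# list of nodes it is connected to.
adjacency = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b"],
}

# Natural language adds a label to each link, so a sentence acts like
# a (node, relation, connected node) triple:
sentence = ("two rhinos", "walking", "dirt path")

node, relation, other = sentence
print(f"{node} --[{relation}]--> {other}")  # → two rhinos --[walking]--> dirt path
```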
<p>Like adjacency matrices, adjacency lists are extremely versatile, and can represent even enormously complex networks exactly. They are &quot;lossless&quot;. And they are easy for computers to manage, thanks to machines&#x27; high speed and extensible, exact memory.</p>
<p>Unfortunately, we poor human beings are neither fast nor exact with our memory. Our capacity to retain detailed information is &quot;lossy&quot; and short term. Even though we seem to apply something like the adjacency list method, we can&#x27;t keep long lists clearly in mind, so when the network at hand becomes too large, we tend to forget what we said or heard before.</p>
<p>So we&#x27;re forced to do a lot of pruning and selecting.</p>
<p>Since we can&#x27;t describe the whole Universe every time, the first thing we do is pretend that the scene is an isolated network—we forget the rest of the universe for a while:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_2.webp" alt="" title="The same network with a few dozen circles as before, but this time the circles near the edges don&#x27;t have links to nodes outsides the image."/></figure>
<p>Note how the network is now isolated, disconnected from the broader network it was part of in the previous image.</p>
<p>That makes things much easier, but it&#x27;s still too much. Every tiny point in the savanna picture (or any other picture, or anything else we might ever want to describe in life) can have its own relationships and interactions with the things around it. In the previous step we&#x27;ve removed the problem of the infinitely large surrounding Universe, but we still have the problem of the infinitely small.</p>
<p>To solve that, we pick out only the tiny minority of things that seem relevant or salient for the current communication, and ignore the rest:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_3.webp" alt="" title="The same isolated network as before, but many of the circles, and the links stemming from them, are faded and almost invisible, so that the network looks more sparse and simple."/></figure>
<p>The network is now much more manageable. The number of objects is something we can work with, something we can keep in our human heads. Still, we can do better:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_4.webp" alt="" title="The simplified network of the previous step, where the circles that were faded have been removed completely. Additionally, several of the links between the remaining circles are marked for deletion with double lines."/></figure>
<p>We ignore (the double line marks) a lot of the relationships between the salient objects that we&#x27;ve noticed. Some of these we ignore because they are obvious, and the listener will be able to fill in on their own. Others are less obvious but, hopefully, irrelevant. In the photo, it could be things like &quot;the larger rhino is grunting at the juvenile rhino&quot;, or &quot;the 10th antelope from the right is scratching its own leg&quot;. Probably not the most important things to mention.</p>
<p>So we end up with a vastly simplified, skinny network like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_5.webp" alt="" title="The same network as the previous step, without the links that were marked for deletion. It is now much simpler than the initial version."/></figure>
<p>At this point we&#x27;ve ignored all we could ignore without seriously damaging the message. All we have to do is knead the remaining structure a little, mold it into a simpler shape, even out the distances...</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_6.webp" alt="" title="The same simple network as the previous step, reshaped somewhat to make it a bit more smooth and straight."/></figure>
<p>...and wrench it into a line, as if we were manipulating a metal wire:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_7.webp" alt="" title="The same network again, reshaped even more so that all dots are in a single line. The links between the dots are still shown, but they are flattened and sometimes go back or forward several steps in order to keep everything connected like before."/></figure>
<p>Here we&#x27;ve kept the salient relations intact, and only stretched and compressed them so that everything forms a single file.</p>
<p>Note how we&#x27;re forced to introduce jumps in certain points, so that we can go back and mention additional links that went parallel to what we were saying before.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_7_annotated.webp" alt="" title="The same network in the same linear shape as before, with the addition of several arrows showing where jumps are, i.e. points where there is no link beteen one circle and the next one in the line,"/></figure>
<p>In some cases, the jumps return to things mentioned quite a while before. You see this by the sudden change of color across some of those jumps—yellow to greenish, blue to purple to green, and so on. These are entirely artificial discontinuities, artifacts of our shoehorning work, and they didn&#x27;t exist in the original, smoothly gradated network.</p>
<p>What we end up with is a neat line of words, with a beginning and an end, that our gray matter can comfortably chew on, re-converting it into something resembling the original network in the form of an <a href="/posts/embedded-prophesy-devices">internal model</a>.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_8.webp" alt="" title="The same line as before, with the same circles, but without the links between the circles shown. The words start and end are written at the beginnin and end of the line."/></figure>
<p>At last we&#x27;ve reached a shape that we can put into language.</p>
<blockquote>
<p>A sight, an emotion, creates this wave in the mind, long before it makes words to fit it; and in writing (such is my present belief) one has to recapture this, and set this working (which has nothing apparently to do with words) and then, as it breaks and tumbles in the mind, it makes words to fit it.</p>
<p>— Virginia Woolf, from her diary</p>
</blockquote>
<h2>What&#x27;s Gained and What&#x27;s Lost</h2>
<p>Every newsreel and public speech, every novel and self-help blog and progress report, every written or spoken or gestured word is the result of this wrenching process.</p>
<p>If language were invented (it probably wasn&#x27;t), the inventor was a genius. The fact that we can compress and reformat the infinitely intricate and multi-dimensional reality that surrounds us enough to transmit it with the vibration of vocal cords and finger movements is almost a miracle. That we can put to good use the information so packaged to predict and understand the world is mind-boggling. Maybe we should speak of the <em>unreasonable effectiveness of language</em>.</p>
<p>We should not become complacent, though. We&#x27;re making this bewildering transformation for convenience:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/human-language-is-not-a-network/network_9.webp" alt="" title="Summary image, showing the very first, complex network image and the final linear sequence of circles."/></figure>
<p>Much is lost in the act. Intentionally or not, we ignore the majority of nodes (objects) and relationships that exist in reality; by streaming our words one after the other, we also lose information about the distance between those nodes, and how strongly they interact; we omit inner links based on subjective judgement; we gloss over the three- (or many-) dimensional structure of the thing we describe.</p>
<p>And—perhaps most importantly—we get the impression that the Universe has <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">neat boundaries and jumps, fits and starts</a>, instead of being a continuous mesh of interactions.</p>
<p>Language makes our society possible, but it also inevitably distorts the way we perceive and interpret reality. Sometimes this is good enough, and sometimes, like in poetry and prose, it might even be desirable. But when the goal is a deep understanding of things, language works against us. That&#x27;s why it&#x27;s so important to understand its weaknesses and constraints. Like any tool, we need to become intimately familiar with it. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/human-language-is-not-a-network/Cacatoes_et_magnolia_(1897)_Maurice_Pillard_Verneuil.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Is There Anything Untranslatable?]]></title>
            <link>https://aethermug.com/posts/is-there-anything-untranslatable?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/is-there-anything-untranslatable</guid>
            <pubDate>Fri, 24 May 2024 10:36:01 GMT</pubDate>
            <description><![CDATA[Words do mean things, but not only.]]></description>
<content:encoded><![CDATA[<p>Last month I wrote <a href="/posts/the-beautiful-dissociation-of-the-japanese-language">a post</a> about some unique quirks of the Japanese language. Afterwards, I had some lovely conversations about it. Most people seemed to agree with the things I wrote—which I&#x27;m pleased about—but you know how the brain works. The comments that tickled my curiosity the most were the skeptical ones. One particular point came up a couple of times: was I justified in claiming that some Japanese words and phrases are &quot;untranslatable&quot;?</p>
<p>What an amazing question! I had never thought about it critically, but it has big implications. The answer to that question has a huge impact on the very value of language learning.</p>
<p>My gut reaction was to answer &quot;yes, there are untranslatable words&quot;. But why? It&#x27;s worth breaking that down to see if it makes sense.</p>
<p>But frankly, more than that, it&#x27;s worth it because it uncovers a lot of delicious aspects of language.</p>
<h2>On the Surface, the Argument Against Is Strong</h2>
<p>All modern human languages with enough speakers seem to be capable of expressing any concept, however complex, by means of composition, recursion, and simply adding more clarifying words. To repeat the same thing in computer-science-y terms, all major languages are &quot;Turing complete&quot;. If that is true—and I believe it is—then there can&#x27;t be any idea that can be stated in one language but not in another one.</p>
<p>After all, monolingual dictionaries exist for the precise purpose of expressing the meaning of every word without using that word. It follows that, if anyone claims that word X is untranslatable, all you need to do is translate that word&#x27;s monolingual definition, which usually will be made of translatable words. Hence, every word is translatable.</p>
<figure><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/definition.webp" alt="A definition of the word definition"/><figcaption>This is a definition. (Source: Merriam Webster)</figcaption></figure>
<p>For the most part, this line of reasoning is convincing, and I agree with the conclusion that any concept can be defined, explained, and understood by speakers of any other language.</p>
<p>Something about that logic nags at me, though, as if a piece were missing.</p>
<h2>Language Grafting</h2>
<p>If you&#x27;ve ever shared two or more languages with a friend—both of you fluent in those languages—you&#x27;ve probably experienced an odd phenomenon at least once. You&#x27;re chatting in Language 1 (say, English), but every now and then, almost unconsciously, one of you drops a word of Language 2 (say, Catalan), only to immediately return to English as if nothing had happened.</p>
<p>You&#x27;re both perfectly comfortable with that, because you understand both languages. But a bystander who overhears your conversation (and doesn&#x27;t speak Catalan) would be rather put off by it. The insertion of an alien word in the middle of the dialogue feels like a gaping hole to them. The bystander might ask: why not stick to one language, if you both know it?</p>
<p>This language-hopping behavior is called <em>code switching</em> and, in my experience, it&#x27;s extremely common in multilingual communities. I&#x27;ve caught myself and my friends doing it with—depending on the friend—Italian and English, Italian and Japanese, and English and Japanese. In other words, all the language combinations I have access to.</p>
<p>The interesting thing about code switching is that it happens even when both speakers are native in the same language, and using their native tongue as their basis. Why go to the trouble of mixing in a second language then?</p>
<p>In some cases it&#x27;s a matter of habit, especially when you have some reason to speak the second language every day. You may get so accustomed to it that some words pop up in your mind before the native equivalents.</p>
<p>But often no single, fitting native word exists that works as well as the foreign one in that situation. I often have conversations like this:</p>
<blockquote>
<p>Marco: I&#x27;m looking for a date spot, have you been to that restaurant before?</p>
<p>Friend: yes, it&#x27;s great.</p>
<p>M: nice and cozy?</p>
<p>F: yeah, especially very... shibui.</p>
<p>M: perfect, she&#x27;ll love it then.</p>
</blockquote>
<p><em>Shibui</em> (渋い) means, in this context, quietly refined in an austere way, without pretenses, almost stoic. Saying <em>shibui</em> like that, in a mere second, conveys what would otherwise make a clunky and unnecessarily long digression.</p>
<p>Instead of code switching, we could use a few more words to convey the same meaning. If necessary, we could open an online dictionary and quickly conjure up a clear definition in Language 1. But we don&#x27;t usually do that. In the blink of an eye, we change language and then change back again, confident that the other will follow.</p>
<p>If we make that choice over and over, there must be a reason.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>The Stuff that Is Lost</h2>
<p>Douglas Hofstadter, an American cognitive scientist, told the following story during a lecture titled <a href="https://www.youtube.com/watch?v=n8m7lFQ3njk" rel="nofollow noopener noreferrer" target="_blank">Analogy as the Core of Cognition</a>. It was about a time when he visited an Italian research institute for a year:</p>
<blockquote>
<p>And so the question that I had to face from the very beginning was what do you say to these people when you run into them in the corridor and you don&#x27;t even know quite who they are; [or] you recognize them and you sort of know who they are; [or] you have spoken to them and you know definitely who they are; [or] you know their name or you know them pretty well, they&#x27;re kind of a friend, you&#x27;ve eaten lunch with them once; or you are a friend?</p>
<p>Naively, being American, I would say &quot;ciao&quot;. Ah, that was wrong! You don&#x27;t say &quot;ciao&quot; to people that you are not on familiar terms with.</p>
<p>I mean, &quot;ciao&quot; is something that you can say to somebody that you say &quot;tu&quot; to, but you don&#x27;t say &quot;ciao&quot; to the director of the Institute. That just doesn&#x27;t work. You say &quot;buongiorno&quot;, or you say &quot;salve&quot; and &quot;salve&quot; is sort of intermediary.</p>
</blockquote>
<p>His broader thesis in the lecture—which is fascinating and well worth a watch—was that we always think through analogies, and which words you choose in a given context is also a matter of finding &quot;what this situation is <em>like</em>&quot;.</p>
<p>In this case, for an American, all those various situations were <em>like</em> &quot;hi&quot; situations, and &quot;hi&quot; is usually translated to &quot;ciao&quot; in Italian. But, during that year as a visiting researcher, Hofstadter found that analogy insufficient, way too rough for the local environment. In Italy, some of those same situations are <em>like</em> &quot;ciao&quot; situations, but some others are <em>like</em> &quot;buongiorno&quot; situations, and others still <em>like</em> &quot;salve&quot; situations.</p>
<p>Notice how here we&#x27;re talking about a second language, but the difficulty is not really one of knowing the meaning of words. Hofstadter already knew what the various Italian salutations meant. What he struggled with initially was <em>the right circumstances that warrant the use of each word</em>.</p>
<p>That&#x27;s language: saying the thing you need to say when, where, and how you need to say it. The &quot;what&quot; is only a fraction of the story.</p>
<p>And, of course, such asymmetries in language are one of the things that make translation difficult. When you translate an Italian book or film to English, all those fine distinctions between &quot;buongiorno&quot;, &quot;ciao&quot;, and &quot;salve&quot; get collapsed into &quot;hi&quot;, &quot;hey&quot;, &quot;hello&quot;, none of which have a neat 1-to-1 correspondence in terms of inciting situation. In terms that I&#x27;ve written about before, the <a href="/posts/a-fundamental-framing-of-human-language">Fundamental Framing</a> of one language sets different boundaries from those of another language.</p>
<p>For a more extreme example of the same problem, look at how many ways there are to say &quot;I&quot; in Japanese, based on the circumstance:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/i_01.webp" alt="" title="watashi: universal, jibun: used by athletes, boku: common for men, ore: rough and common for men, oresama: self-important for men."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/i_02.webp" alt="" title="washi: old person stereotype, atashi: common for women, informal, atakushi: rare and pompous for women, atai: frivolous for women, wai: informal and provincial."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/i_03.webp" alt="" title="uchi: women in west Japan, oira: rough and friendly for men, ora: rough for men, orecchi: quirky for men, honkan: only for police officers."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/i_04.webp" alt="" title="sessou: only for monks, heisha: when representing one&#x27;s company, wagahai: obsolete and self important, soregashi: samurai-like, chin: only for emperors."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/i_05.webp" alt="" title="ware: ancient, used in hymns, onore: old-fashioned, yo: ceremonious, shousei: was used by men in letters, asshi: Edo-period peasant flavor."/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/i_06.webp" alt="" title="achiki: for geishas, wacchi: also for geishas, warawa: for queens and goddesses, sessha: for samurai and ninja, watakushime: for servants."/></figure>
<p>Another popular way to say &quot;I&quot; is by simply saying one&#x27;s own name (there is no third-person form in Japanese grammar). Calling oneself by one&#x27;s name is used mostly by children, but many adult women use it in intimate settings, carrying a frank, slightly indulgent tone.</p>
<p>These are the 30 (+1) most common ways to say &quot;I&quot;, those I&#x27;m confident every Japanese can recognize. Each carries its own nuance, and tells you something different about the person who uses it, the relationship they <em>perceive</em> to have with the listener, and the formality of the setting. To make things more interesting, each Japanese speaker switches between several of these throughout the day.</p>
<p>In fiction, Japanese authors are experts at orchestrating these options to convey more than the mere words do, to recall archetypes, and to distinguish the characters&#x27; personalities.</p>
<p>Yet, an English translator is forced to take all those flavorful self-appellations and roll them up into the single, sublimely neutral &quot;I&quot;.</p>
<p>Translation succeeds by degrees.</p>
<h2>Beautiful Moons</h2>
<figure><img src="https://aethermug.com/assets/posts/is-there-anything-untranslatable/Natsume_Soseki_photo.webp" alt="Portrait photo of Natsume Souseki."/><figcaption>Natsume Souseki, 1867-1916</figcaption></figure>
<p>Perhaps the most famous story about translation in Japan is an urban legend featuring Natsume Souseki, considered by many to be the greatest Japanese novelist of the early 20th century.</p>
<p>When he wasn&#x27;t writing future classics, he worked as an English teacher. One day—the story goes—his students were trying to translate the English phrase &quot;I love you&quot; to Japanese. They knew the translation for each of those three words, so naturally they constructed a grammatically valid sentence with them. How hard can it be?</p>
<p>When they asked Souseki to check their translation, however, he told them they&#x27;d gotten it all wrong.</p>
<p>&quot;Japanese lovers don&#x27;t say things straight to each other&#x27;s face like that,&quot; he said. &quot;You&#x27;ll do better to translate it as, <em>isn&#x27;t the moon beautiful tonight?</em>&quot;</p>
<p>The story itself may be apocryphal, but the message rings true. There is a side of translation that has less to do with the meaning of individual words than with the intention of the speaker. You forgo a literal translation in favor of one that uses different words, but achieves the intended meaning more closely. This approach—usually called &quot;free translation&quot;—exists for all languages, but it&#x27;s especially important when the two languages in question are very different. Japanese (to/from) translators are forced to do this kind of work all the time.</p>
<p>(It is telling that Japanese Wikipedia has an article about <a href="https://ja.wikipedia.org/wiki/%E7%9B%B4%E8%A8%B3%E3%81%A8%E6%84%8F%E8%A8%B3" rel="nofollow noopener noreferrer" target="_blank">literal vs free translation</a>, while the term &quot;free translation&quot; isn&#x27;t even mentioned on English Wikipedia)</p>
<p>For example, watch this famous scene from Akira Kurosawa&#x27;s Sanjuro. The protagonist Sanjuro, an able but rough samurai, has done something that humiliated his friend Hanbei. Hanbei wants to regain his honor, so he challenges Sanjuro to a duel to the death.</p>
<div class="lg:col-start-1 lg:col-end-1 aspect-w-16 aspect-h-9 mb-8"><iframe class="w-full p-0" src="https://www.youtube-nocookie.com/embed/ZDGExYWaA3s?si=7VgkPI4EFn3X_VPq&amp;start=35&amp;cc_load_policy=1" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></div>
<p>Sanjuro tries to dissuade Hanbei, saying</p>
<blockquote>
<p>I&#x27;d rather not. If I do, one of us must die. <strong>It&#x27;s not worth it.</strong></p>
</blockquote>
<p>All three sentences are rather loose translations of the original, but that last &quot;it&#x27;s not worth it&quot; is really a free translation. The literal equivalent of the Japanese <em>tsumaranee ze</em> (つまらねぇぜ) would be &quot;it&#x27;s boring&quot;.</p>
<p>Now, someone saying &quot;it&#x27;s boring&quot; about killing a comrade would sound rather odd in English. For an English speaker, this is not <em>like</em> a &quot;boring&quot; situation. The translator correctly extracted the intention of the character—read between the lines—and conjured different words for it, &quot;it&#x27;s not worth it&quot;. This works, although it loses some of the flavor of Sanjuro&#x27;s mannerisms.</p>
<p>Of course, we need free translation in both directions. There&#x27;s a famous quote from The Godfather:</p>
<blockquote>
<p>I&#x27;m going to make him an offer he can&#x27;t refuse.</p>
</blockquote>
<p>In the Japanese version of the film, the subtitles could have used a direct translation of those words, because they are all unquestionably &quot;translatable&quot;. Instead, the translator wrote (I re-translate back literally), &quot;I won&#x27;t make him complain&quot; (文句は言わせん). This sounds weird in English, but in Japanese it carries just the right flavor for the scene, the right balance between sounding innocent at first, but lending itself to a violent reinterpretation later on. On the other hand, the literal translation would sound clunky and overly formal in Japanese.</p>
<h2>What&#x27;s Translation, Then?</h2>
<p>It boils down to this: words are not self-contained packets of information. Every word spoken or written exists in a specific cultural context and in a specific circumstance. Take those away, and the word loses its function. In a sense, every word is its own little inside joke.</p>
<p>Nowadays, developed countries are close enough to each other culturally that roughly the same kinds of situations happen in all of them. Because of that, most of the words of one language will have a rather neat equivalent in those of most others. But there are always corner cases, words and expressions that developed in a peculiar way for historical and cultural causes that did not take place in other languages. How do we square that with the Turing-completeness of language?</p>
<p>At its core, language is a tool to influence other people for some purpose. Even if it&#x27;s just to make the other laugh, or to enjoy the time together, we don&#x27;t use language unless we have a goal to achieve. This is a point expressed well by Nick Enfield, a linguistic anthropologist at the University of Sydney:</p>
<blockquote>
<p>When a surgeon turns to her assistant and says “Scalpel,” this one-word linguistic act—we might call it a command or a request—is, first and foremost, an instruction for mutual coordination. The key to understanding this is that when you use language, you are never just saying something. You are doing something. With words, you act on those around you, to help them, influence them, build affiliations with them.</p>
<p>— Language vs Reality, Why Language Is Good for Lawyers and Bad for Scientists, Nick Enfield</p>
</blockquote>
<p>But if language always has a goal, then the real test for its effectiveness is how well the goal is achieved.</p>
<p>Each word has its own goal, but it also needs to support the broader goal of the sentence containing it. The sentence, in turn, needs to support the goal of the longer text or speech segment around it. A word that conveys its own strict meaning but confuses or weakens the intended effect of the words surrounding it is a failure of language.</p>
<p>If self-contained definitions were translations, you wouldn&#x27;t need to learn a language or hire a translator. Having a dictionary at hand would automatically grant you native level comprehension and output quality, if not speed.</p>
<p>This leads to a simple and general definition for what translation is:</p>
<blockquote>
<p>Translation is achieving the goals of the original text in a different language.</p>
</blockquote>
<p>Now we can tackle the titular question.</p>
<h2>Translatability Tests</h2>
<p>If the original message is good enough for the speaker&#x27;s purposes, but it&#x27;s not good enough in its translated form, the message hasn&#x27;t been translated.</p>
<p>As a corollary to that, a message is translatable only if the translator can convey the same desired nuance in roughly the same amount of time or space. Shedding some of the intended nuance will not cut it. Conveying the nuance at the expense of a much larger number of words will add interference and destroy the proportions and timing of the message, both of which (proportions and timing) are usually instrumental to achieving the original goal.</p>
<p>That&#x27;s why we use code switching: if we can get away with using a word of Language 2 on the fly while speaking Language 1, that&#x27;s far preferable to any alternative approach. In a text, if you need a translator&#x27;s footnote to complement what is common knowledge in the original language, the point may get across, but it&#x27;s an explanation, not a translation.</p>
<p>Another test: if translators regularly give up translating a word, opting for free translation or for words with a distorted or diminished nuance, then that word is untranslatable. (This happens often with certain language pairs.)</p>
<p>Also: if a term would require a significant cultural shift or national re-education program in the target language just for it to be understood seamlessly, it&#x27;s probably safe to call it &quot;untranslatable&quot;.</p>
<h2>It&#x27;s Doubly Worth It, Sanjuro</h2>
<p>In short, I believe that translation is about more than conveying the same meaning. It&#x27;s meaning, plus nuance, plus contextual cues, plus format and timing. Meaning itself is always transferable to other languages; that much is true. But those other things are another story. Untranslatable words, and words that lose most of their intended effect when translated, do exist.</p>
<p>This is good news, because it&#x27;s one more reason to learn other languages.</p>
<p>The first reason, the obvious one, is that it allows you to communicate directly with people who speak that language. That&#x27;s a precious thing on its own. But the second reason, validated by the existence of untranslatable bits of language, is that knowing a language opens up a whole <a href="/posts/language-is-a-bottleneck-for-thought">different world</a> for you that would simply be inaccessible otherwise. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/is-there-anything-untranslatable/godfather.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Hills Not Walls]]></title>
            <link>https://aethermug.com/posts/hills-not-walls?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/hills-not-walls</guid>
            <pubDate>Thu, 16 May 2024 07:52:27 GMT</pubDate>
            <description><![CDATA[The choice of our metaphors matters]]></description>
            <content:encoded><![CDATA[<p>Sometimes you hit against a wall. Your plans are cut off, a link in the chain is broken, and you&#x27;re forced to a standstill, in frustration. A pierced tire, a failed entrance exam, a broken promise. Cases like these happen, and when they happen you either give up what you were trying to do or you look for a <strong>breakthrough</strong>. You look for a new weak spot to hit with a sledgehammer.</p>
<p>Breakthroughs are gambles, and most of the time gambles are a bad idea. Besides, metaphorical walls are rarer than they seem. More often, in life, we aren&#x27;t crashing against a wall, but lost on the way to somewhere else.</p>
<p>The feeling is similar—frustration, helplessness, not seeing the way forward—but the difference is enormous. We need a better metaphor for that state.</p>
<p>Looking back at my own life, most of the key turning points weren&#x27;t breakthroughs but something qualitatively different: something like <strong>hills</strong> I had to somehow cross to continue my journey. I think the same is true for everyone else.</p>
<p>In each of those pivotal moments in my life, I only took an infinitesimal step forward, like all previous steps. Yet suddenly something crucial had flipped. Everything that happened after those moments was like before, but simpler—not easier, but filled with confidence and momentum, with a clear sense of direction.</p>
<p>The sensation is like tipping over the top of an upward slope.</p>
<p>It was like that with my self-study of Japanese. I began learning kanji in the summer of 2006, but for a year I didn&#x27;t know where I was going. I didn&#x27;t know if I would ever get anywhere. I learned at a slow pace, and spent more time reading about learning the language than doing the actual learning.</p>
<p>It took me eleven months to go through the first three fourths of my kanji textbook, and the book wasn&#x27;t even teaching me how to pronounce the symbols. One year in, I could recognize a bunch of Chinese characters, but I was utterly unable to speak or understand <em>any</em> Japanese, I had no one to practice with, and I knew no more than a couple dozen basic words. It felt like counting grains of sand in a desert.</p>
<p>I (barely) remained interested enough to continue for those eleven months, until one day (it was late August, 2007) I read a blog post. It was written by a man who had learned Japanese to written and spoken fluency in a year and a half, all outside of Japan. The post was about how he&#x27;d immersed himself in the language, and how he&#x27;d only used material that actually interested him. I don&#x27;t even remember the contents precisely, and it doesn&#x27;t matter.</p>
<p>That blog post made something click for me. After that, I stopped counting grains. I finished the last fourth of the kanji book in a week—a speed-up of more than 15x—and dove head-first into my full immersion in the language. Although I worked much harder than before, I felt lighter.</p>
<p>It was as if I&#x27;d been slogging up a hill for a long time until I finally reached the top. After that things went downhill, in a good way. The frustration and the feeling of being stranded gave way to elation as I flew down the slope on the other side. Like a ski mountaineer who, after hours of trudging with skis on his back, finally puts them on and hops into a swift descent.</p>
<figure><img src="https://aethermug.com/assets/posts/hills-not-walls/mingheras-cosmin-wVNmylveDVU-unsplash.webp" alt="A green downward slope between misty mountains, covered in sheep."/><figcaption>Photo by Mingheras Cosmin, Unsplash</figcaption></figure>
<p>One year after finding that post, I was already reading my way through my first novel in Japanese. One more year, and I was reading thousands of pages of prose per month, and I understood 90% of all the movies, TV shows, and anime I watched without subtitles. Five years after that August evening I had moved to Japan, and I was speaking Japanese all day as part of my job.</p>
<p>Was that blog post so revolutionary or—to use another violent metaphor—so ground-breaking? Not really. It was persuasive, but nothing prize-worthy. Had I read it a few months earlier or later, I don&#x27;t think it would have had much effect on me. It just happened to be what I needed to enter the inflection point, but it was the eleven months of plodding, searching, wandering that I&#x27;d done before reading it that set me up for the phase transition.</p>
<p>Something of the same kind seems to have happened in many of the so-called scientific &quot;breakthroughs&quot;. For example, the theory of special relativity was epochal, but it was the culmination of centuries of advances, to which Albert Einstein added a single catalytic idea: the speed of light is constant. That set in motion a transformation that only gained speed over the next several decades.</p>
<p>I like the hill-crossing metaphor, because it isn&#x27;t merely a more peaceful variant of the breakthrough metaphor. It offers a different way to look at things, and because of that, it can lead to new ideas.</p>
<p>For one thing, hills differ from walls in that they work in all directions. Imagine an oval hill: you don&#x27;t need to reach the very summit, the highest possible vantage point, before gravity starts to gently pull you down the opposite slope. <em>Any point along the climb could serve as the beginning of a good downhill glide</em>.</p>
<p>Here is an element of free will that the wall metaphor won&#x27;t give you: you get to choose your path up the hill. Climb too steeply, and you&#x27;ll exhaust yourself early on. Jump on your sled or skis too soon, and you&#x27;ll slide right back to where you came from. But you can take slanting trails on your way up, longer but more sustainable trails. While these diagonal climbs may not take you to the summit, that&#x27;s not the goal anyway. Those paths may lead you to a high enough point to get across and on with your journey.</p>
<p>Besides, hills come in all shapes and sizes. They often have multiple summits, varied crests and cliffs, knolls and thickets. This geographical ruggedness is like the vexing unpredictability of real-life projects. We have the free will to choose our path—how we tackle our challenges—but the options we choose from are constrained by the shape and state of the world we&#x27;re in.</p>
<p>Most of the time, we don&#x27;t find walls on hillsides, spots where the only option is to go back or destroy something to open a passage. Usually, in life as on hills, we have the choice of going left or right, along thornier trails or more winding routes, and all come with tradeoffs. In choosing, you&#x27;re guided by the knowledge that all you need to do is find the next ridge from which you can slide down and past the hill—not the straightest path, not the highest peak, not the most predictable course, but one good enough to get on with your life.</p>
<p>Perhaps most importantly, getting over a hill teaches patience. It gives you a way to channel your frustration for something useful: climbing a bit higher and getting creative with the path you take. No brute force required.</p>
<p>What&#x27;s in a metaphor? There are skeptics who will say it&#x27;s a harmful fiction that distracts you from real life. I disagree with them, because a metaphor is a model that works in more than one context, and <a href="/posts/embedded-prophesy-devices">we can&#x27;t help but think with models</a>. To improve your metaphors is, I think, to become wiser. Thinking in terms of &quot;reaching a sliding-off point&quot;, as opposed to &quot;breaking through&quot;, deploys a whole different set of considerations and options that would otherwise remain hidden. Don&#x27;t bang your head against your problem: take a few more steps in it. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/hills-not-walls/topo.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[They Made Us]]></title>
            <link>https://aethermug.com/posts/they-made-us?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/they-made-us</guid>
            <pubDate>Thu, 09 May 2024 10:11:22 GMT</pubDate>
            <description><![CDATA[Natural selection is the opposite of what it sounds]]></description>
<content:encoded><![CDATA[<p>Near the end of his life, Arthur C. Clarke co-authored a novel with Stephen Baxter titled <em>The Light of Other Days</em>. In it, a new technology is invented that can create atom-sized &quot;wormhole cameras&quot; at any point in space and time. The world-upending consequences of that invention, as imagined by the authors, are interesting enough in themselves. But one scene in particular has stayed vivid in my mind for two decades: the scientists eventually adapt their WormCam technology to track one&#x27;s DNA through the generations, capturing on video the smooth transitions from child to mother, from mother to grandmother, and so on backwards in time, in one continuous sequence, all the way to the very beginning of life.</p>
<figure><img src="https://aethermug.com/assets/posts/they-made-us/tlood.webp" alt="Cover image of the book &#x27;The Light of Other Days&#x27;"/></figure>
<p>Retracing one&#x27;s ancestry like that may be a cliché, but back then it unlocked a new perspective for me. Some of those primordial single-celled organisms were <em>my actual relatives</em>! There was not a <em>single</em> interruption in the multi-billion-year-long chain of parents and children in the whole history of life leading up to me!</p>
<p>From this angle, it looks as if everything has been set up exactly as necessary so that I could be here today, typing these words.</p>
<p>It&#x27;s as if you and I were the outcome of painstaking <em>selection</em>—perhaps natural selection.</p>
<p>There&#x27;s something off with that statement, though. We know, through almost 200 years of scientific research, that evolution happens—thrives—in the random mixing of lineages, in the chance appearance of genetic transcription errors, in the unpredictable unfolding of local circumstances. There is no mind at work, no breeder picking the &quot;good stock&quot;, worthy of survival, out from the mess of genetic mistakes. No ultimate goal to evolve towards (least of all... me).</p>
<p>So what&#x27;s so special about the links in that unbroken chain of heredity, of which we&#x27;re the latest? We say that all those grandmothers and grandfathers were &quot;adapted&quot; to their environment, and that they were &quot;fitter&quot; or &quot;fittest&quot; among their populations. But what do we really mean by that?</p>
<p>We must mean, I think, that they all lucked out.</p>
<p>The following is all mathematically true: none of your ancestors, before reproducing, ever got so far from food that they starved; none of them were targeted by predators in conditions too unfavorable; none were the victims of natural calamities deadly enough to kill them; none had their vital organs destroyed by enemy weapons; none had random genetic mutations bad enough to prevent them from becoming parents.</p>
<p>As obvious as all these points are, it&#x27;s interesting that they are all negative statements. &quot;Not being or doing something&quot; is a weak way to categorize things. It&#x27;s usually better and clearer to group things based on positive statements. I prefer saying &quot;all my pens are black&quot;, rather than &quot;none of my pens are red, blue, green, or any other color other than black&quot;. It&#x27;s better to say &quot;Poodles were bred by choosing the pups with the curliest hair&quot;, than &quot;Poodles exist because none of their forebears looked too unlike an idealized Poodle&quot;.</p>
<p>But there&#x27;s no positive way to describe ancestors, except for the tautology that they all survived because they survived.</p>
<p>Nevertheless, we talk about &quot;natural selection&quot;, which makes it sound like there was something &quot;selected for&quot;. Given the above, it seems that they were selected for staying out of harm&#x27;s way long enough.</p>
<p>Imagine a big party buffet, overflowing with more food than the guests can hope to gobble up. Think about the individual food items lined up on the plates. Some are eaten, some are abandoned there, untouched. Which of the two groups would you say is &quot;selected&quot; by the crowd?</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/they-made-us/tye-doring-yAjERuo85E4-unsplash.webp" alt="" title="A party table covered with French cheese, fresh fruit, bread, and other delicacies of every sort."/></figure>
<p>If you answered that the items that got eaten up were selected by the diners, you agree that natural selection is a flawed metaphor. <em>By the logic of natural selection, the crowd &quot;selects&quot; the scraps that get left over by letting them continue existing</em>.</p>
<p>In other words, when you put &quot;natural&quot; before it, the word &quot;selection&quot; undergoes a complete reversal of meaning.</p>
<p>It seems to me that a river might be a better metaphor for the evolutionary process. The water flow carries the thinner soil and the small pebbles away, and leaves the bigger boulders where they are. The boulders form rapids or stepping stones, while the smaller debris might pile up further downstream, reshaping the river&#x27;s course or forming islets at the river delta.</p>
<p>Is the river selecting the boulders to make the scenery nicer? Or is it selecting the finer detritus to make those geographical changes? Because the river has no grand plan or purpose in its endless flow, that very question is meaningless.</p>
<p>The organisms who died out and have no descendants today—the vast majority of all organisms that ever existed—are not out of the equation or irrelevant for us. They may not have left surviving traces in the form of living things, but their past existence—even their premature deaths—affects today&#x27;s world just as much as those who made it here alive, if not more.</p>
<p>Before dying childless, those extinct organisms killed and ate others, biasing the playing field. They were eaten by our ancestors, providing nourishment. By dying, they freed up resources, sometimes saving the lives of our ancestors. They even altered the structure of the environment, changing the conditions for survival of all others.</p>
<p>They deeply impacted the natural history of the Earth, every step of the way.</p>
<p>If we, the living beings, are <a href="https://planktonvalhalla.com/20220818-differences-contraptions-and-a-ripple-universe/" rel="nofollow noopener noreferrer" target="_blank">ripples in the big pond we call Universe</a>, then those that are no longer alive left ripples of another sort, harder to spot and <a href="/posts/a-framing-is-a-choice-of-boundaries">draw boundaries around</a>, perhaps, but just as consequential. Their ripples are—<em>today, right now</em>—propagating, mixing, interfering, resonating with all the other ripples we call &quot;people&quot;, &quot;organisms&quot;, and &quot;things&quot;.</p>
<p>We weren&#x27;t selected, but rather left behind. And they weren&#x27;t eliminated: they made us. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/they-made-us/shane-smith-o_kW5JqL4SE-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[A Medium-Agnostic Taxonomy of Notifications]]></title>
            <link>https://aethermug.com/posts/a-medium-agnostic-taxonomy-of-notifications?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-medium-agnostic-taxonomy-of-notifications</guid>
            <pubDate>Sat, 04 May 2024 05:26:30 GMT</pubDate>
            <description><![CDATA[Or, I Went a Bit Too Far Fantasizing About the Ideal Notification UX]]></description>
            <content:encoded><![CDATA[<p><em>Main intended audience of this post: people interested in UX and information management.</em></p>

<p>Notifications are part of the fabric of modern reality. They wait for us to wake up in the morning. They nag at us to stay awake at night. Throughout the day—every day without end or hope for an end—they rain upon us, vibrating our thighs, overlaying themselves on the other things we&#x27;re doing, interrupting our meals, disturbing our fellow theater-goers, always pulling our hands towards our devices even without the need to emit any signal, even without the need to exist, by the mere possibility of existing.</p>
<p>To me, that means that we should study notifications as something as important as, say, politics, which has a larger impact on our lives but not nearly at such a granular, perennial, visceral level (your mileage, of course, may vary).</p>
<p>This is not a rant against the evils of notifications. Sure, commercial providers have all the incentives to flood us and consume our attention (are they the real &quot;consumers&quot;, in this sense?), and they take no prisoners. But if we use notifications so much, it&#x27;s because most people need them or like them. For those who don&#x27;t, it&#x27;s easy enough to shut them out altogether. What really interests me is how to throttle them by degrees.</p>
<p>The best of both worlds—the world of buzz-slavery and that of asceticism—is, of course, to filter out the useless notifications and only pay attention to the ones you need. That is easier said than done. Some notifications are useless now, but might be useful in the future, and vice versa. Some notifications need to be checked before knowing if they&#x27;re useful, and some must <em>only</em> be checked now and not later.</p>
<p>What follows is my thought process as I put my product manager hat on and attempt to make sense of all those intricacies. Along the way, I try to come up with guidelines for managing one&#x27;s own floodgates, and with a wish list for better UX design.</p>
<h2>Your Dog Invented It, Too</h2>
<p>Someone wrote, near the top of the Wikipedia page for &quot;Notification System&quot;:</p>
<blockquote>
<p>Within the 20th century, one of the most important scientific breakthroughs has been the invention of notifications.</p>
</blockquote>
<p>This is not what I expected to read, and it&#x27;s wrong in many ways, but it still works as a starting point.</p>
<p>One thing that stands out in that sentence is the claim that notifications are a scientific breakthrough. I wouldn&#x27;t categorize them as one. Maybe the author is referring to all the psychological research that corporations are doing to push their products in front of our eyes, but—even assuming we can call that science—that&#x27;s not how notifications were invented.</p>
<p>Before all of the attention hacking began, notifications existed as honest ways to inform people of things they actually needed to check. That, of course, started well before the beginning of the 20th century. Cell phones in the nineties had little LED lights that blinked to notify you of messages and missed calls. The first SMS was received by an Orbitel 901 (wired) phone in 1992 and, although I can&#x27;t find more details about it, it must have had some way to show it had arrived. Answering machines had light-based notifications <a href="https://collection.sciencemuseumgroup.org.uk/objects/co8054920/juli-phone-telephone-answering-machine-about-1970-telephone-answering-machine" rel="nofollow noopener noreferrer" target="_blank">at least as early as the 1970s</a>.</p>
<p>But the more you try to look back for examples, the more you realize that what we call a &quot;notification&quot; is, at its core, something much more universal and ancient than the little icons that come to mind today, perched up there at the top of a smartphone&#x27;s screen.</p>
<p>An email sitting in your inbox is a notification, and so are the recent posts in a social timeline, and even a ringing phone. Is it limited to modern technology, then? No, because a letter made of paper, in your home&#x27;s mailbox, serves exactly the same function, and as such it&#x27;s another instance of notification. A mother calling her children for dinner is a notification. Your dog whining to get out for a walk, too, is a notification.</p>
<figure><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/the-blowup-xBqM6cfgP4U-unsplash.webp" alt="The paper in the mail box is its own notification."/></figure>
<p>We&#x27;re just using a new-sounding name for something that is, literally, older than humanity itself: <em>information meant for you to consume</em>. Notifications weren&#x27;t &quot;invented&quot; any more than aggression or feeding were &quot;invented&quot;. The only thing that changed is that, in the 20th century, we have a lot more of them.</p>
<p>Unless we look at notifications from this broader perspective, as special cases of a kind of phenomenon that transcends any medium, we can&#x27;t fully wrap our heads around them.</p>
<p>As a first step, I&#x27;m going to define three axes to classify notifications. Then we&#x27;ll see what the most reasonable actions are for each class of notifications. Unfortunately, not all of those actions are easy to take on a smartphone or PC today, so I&#x27;ll conclude by offering some ideas for better software UX for each of them.</p>
<p>(I later edited the Wikipedia sentence cited above into &quot;<em>The widespread adoption of notification systems was a major technological development of the 20th century.</em>&quot;)</p>

<h2>Step 1: Eisenhower, but in 3D</h2>
<p>So, in the broadest sense, notifications are simply &quot;information meant for you&quot;. You&#x27;re expected to check them. Whether it&#x27;s someone else who wants you to look at them, or you requested them yourself, doesn&#x27;t matter very much. They&#x27;re notifications if they come to you to be looked at. So, should you?</p>
<p>As always when discussing <a href="/posts/a-framing-is-a-choice-of-boundaries">framings</a>, there isn&#x27;t a single or &quot;right&quot; way to categorize notifications to make them more manageable. Different framings will work in slightly different ways, and the important thing is that the framing you choose is useful &quot;enough&quot;. Instead of reinventing the wheel, then, we can find a framing that has worked well for similar goals, and try to adapt it to this one.</p>
<p>One framing that has helped me in the past is the <em>Eisenhower matrix</em>, or <em>Eisenhower method</em>. It&#x27;s so popular <a href="https://asana.com/resources/eisenhower-matrix" rel="nofollow noopener noreferrer" target="_blank">on the internet</a> that I don&#x27;t think I need to explain it in depth. It divides your pending decisions into levels of <em>urgency</em> and <em>importance</em>. As Dwight Eisenhower put it:</p>
<blockquote>
<p>I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent.</p>
</blockquote>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/eisenhower_matrix.svg" alt="" title="A rectangle divided into 4 quadrants: urgent and important (emergencies: do), urgent and not important (interruptions: delegate), not urgent and important (planning: schedule), not urgent and not important (time-wasters: eliminate)."/></figure>
<p>We can use this with notifications, but there are two important adjustments to make. First, the Eisenhower matrix is usually explained as a decision-making framework. While handling notifications is a form of decision-making, the matrix is usually explained in the context of management and teamwork. For example, you&#x27;re told to &quot;delegate&quot; urgent-but-not-important decisions to other people. In the following sections I&#x27;ll adapt the concept a bit for the specific case of notifications.</p>
<p>Second, there is a whole dimension to notifications that is not captured by Eisenhower. It has to do with how quickly or easily the notification disappears after reaching you. We could call it the <em>transience</em> axis.</p>
<p>Thus we get a three-dimensional space to think about notifications: the space of urgency, transience, and importance. What can it do for us?</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/3d_axes.webp" alt="" title="A 3D space with three perpendicular axes: urgency, transience, and importance."/></figure>
<h3>The Urgency Axis</h3>
<p>Some notifications are intended for you to see as soon as possible, while others you could ignore for a while without consequences.</p>
<p>As an example of the urgent extreme, think of a fire alarm ringing in your house or a tornado siren when you&#x27;re in Kansas. On the non-urgent end are things like new videos from a Youtube channel you follow.</p>
<p>Note that urgency is not the same thing as importance to you: it&#x27;s only about how soon you need to see the message for it to fulfill its purpose. A one-day 80% discount off your favorite lip cream may be urgent, but probably not very important.</p>
<figure><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/fax.webp"/></figure>
<p>In general, you want to be able to control which notifications you get based on their urgency. You may want to get the urgent ones as soon as possible, no matter how busy you are, and silence the less urgent ones until later. You&#x27;re controlling notification urgency when you stick a long-lasting discount coupon to your refrigerator, and when you configure your calendar to send you a reminder email the night before an important event.</p>
<p>You don&#x27;t always know the urgency of a notification before looking at it, so you&#x27;re often forced to look at many less-urgent ones, just in case. But you sure would like to know. On the other hand, marketers love to create a false sense of urgency. Only a third party can mediate this tension, which is why I think we should have protocols in place to clarify the urgency of each notification. More on this later.</p>
<h3>The Transience Axis</h3>
<p>The second axis is transience, or &quot;how temporary is the notification?&quot;</p>
<p>The most transient notifications disappear shortly after reaching you, perhaps even before you&#x27;ve had the time to look at them. Radio programs are very transient, because they keep going whether you understood what they said or not. There is no pause or rewind on a radio. The turn signals of the car in front of you are transient, too, because they might switch off at any moment.</p>
<p>At the other extreme, the least transient notifications are the permanent ones, which will stay there in your face until you do something about them. Physical mail in your home&#x27;s mailbox will keep piling up and won&#x27;t go away unless you take action yourself. A death threat painted on your shop&#x27;s shutters overnight is something you can&#x27;t easily dismiss either.</p>
<p>Phone calls were originally completely transient notifications. Either you answered immediately or their contents were lost to you, shrouded in mystery. Then came inventions like caller identification, which shows you the number of the caller, and voicemail, which stores people&#x27;s messages for you to listen to whenever you want. Effectively, these inventions added less-transient notifications to the innately ephemeral ones of ringing phones.</p>
<p>How transient you want your notifications to be depends on your tradeoffs. Too permanent and they&#x27;ll pile up, take up your physical or mind space, and become a burden. Too ephemeral and they&#x27;ll pass right under your radar and you&#x27;ll never know you missed them. In general, it&#x27;s best to have less transient notifications for information that is important to you.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h3>The Importance Axis</h3>
<p>This one is the clearest, but the trickiest to classify. Importance is very subjective, and potentially any message, regardless of format, could be important or not. In theory, something that affects your life and that you can do something practical about is important, while the rest isn&#x27;t.</p>
<p>You&#x27;re the only one who can assign things along the importance axis. By definition, unimportant information should have low priority, and ideally you want to eliminate it altogether from your life. But you may want to access some of it for your enjoyment, curiosity, or archival reasons.</p>

<h2>Step 2: Charting the Landscape</h2>
<p>Following the matrix approach, we can now divide each of these axes into two sections (urgent/non-urgent, transient/permanent, important/not important) to get eight combinations. Drawing these in a fancy 3D plot would be confusing, so I&#x27;ll use two urgency-transience matrices, one for non-important notifications and another one for the important ones.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_0_important.webp" alt="" title="Empty four-quadrant matrix for urgency and transience, important"/></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_0_unimportant.webp" alt="" title="Empty four-quadrant matrix for urgency and transience, unimportant"/></figure>
<h3>Urgent &amp; Transient Notifications</h3>
<p>Non-software examples:</p>
<ul>
<li>Someone asking you a question</li>
<li>Sirens (when they affect you, e.g. you&#x27;re blocking an ambulance&#x27;s way)</li>
<li>Earthquake alerts on your smartphone</li>
<li>Screams for help</li>
<li>Street signs when you&#x27;re driving</li>
</ul>
<p>Software examples:</p>
<ul>
<li>&quot;A live stream is happening now!&quot;</li>
<li>A &quot;looking for group&quot; message in a social RPG game</li>
</ul>
<p>Urgent notifications, by definition, have a short expiry date: they lose meaning if you look at them too late. That&#x27;s why it is often acceptable to keep them transient.</p>
<p>Any urgent &amp; transient notification can be important to you or not, and that affects what you should do with it. <strong><em>Important</em> urgent &amp; transient</strong> notifications are what you want to dedicate all of your attention to, right now. They should override basically anything else, so they need to be in your face, loud, and give you all the information you need immediately.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_1_important.webp" alt="" title="Four-quadrant matrix for urgency and transience, important. The urgent and transient quadrant says &#x27;check now&#x27;."/></figure>
<p><strong><em>Unimportant</em> urgent &amp; transient</strong> notifications, on the other hand, are completely optional. If you don&#x27;t want to remove them altogether, then you want to receive them in a non-intrusive way, somewhere you can find them when and if you wish, but where they won&#x27;t get confused with the important ones. If they go away before you see them, never mind. You&#x27;ll never know, and never feel much pain for missing them.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_1_unimportant.webp" alt="" title="Four-quadrant matrix for urgency and transience, unimportant. The urgent and transient quadrant says &#x27;block or ignore&#x27;."/></figure>
<h3>Urgent &amp; Permanent Notifications</h3>
<p>Non-software examples:</p>
<ul>
<li>Suicide letters, blackmail letters, letters/emails about approaching expiration dates</li>
<li>Newspaper delivered to your mail box</li>
<li>Voice-mail asking to call back ASAP</li>
</ul>
<p>Software examples:</p>
<ul>
<li>Chat conversation you&#x27;re participating in</li>
<li>Emails from your boss</li>
<li>Blog post in your feed reader/inbox about a new concert with tickets quickly running out</li>
</ul>
<p>There is no use, in general, in having urgent notifications that keep existing after you&#x27;ve looked at them, except perhaps for sentimental or record-keeping value. In that case, they&#x27;re probably <strong>important</strong> to you.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_2_important.webp" alt="" title="Four-quadrant matrix for urgency and transience, important. The urgent and permanent quadrant says &#x27;check now, then destroy or archive&#x27;."/></figure>
<p>As for the <strong>unimportant</strong> ones, you need to get rid of them once you&#x27;re through with them, or prevent them from reaching you in the first place.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_2_unimportant.webp" alt="" title="Four-quadrant matrix for urgency and transience, unimportant. The urgent and permanent quadrant says &#x27;block or destroy&#x27;."/></figure>
<h3>Non-Urgent &amp; Transient Notifications</h3>
<p>Non-software examples:</p>
<ul>
<li>Brand ads on the street</li>
<li>Most overheard conversations</li>
<li>Spotting a flock of birds making strange patterns in the sky</li>
<li>Sounds of the city</li>
</ul>
<p>Software examples:</p>
<ul>
<li>Bad task reminders (&quot;oops, I swiped it away, then forgot all about it!&quot;)</li>
<li>Time-bounded social-media posts (e.g. Snapchat)</li>
<li>A nice picture or sentence that hit you while browsing the web</li>
</ul>
<p>Almost every moment we spend in society, we&#x27;re flooded by transient, non-urgent notifications. They compose most of the noise in our lives. Thankfully, they go away on their own.</p>
<p>The <strong>important</strong> ones are a problem. You may miss them, and live to regret it. In general you want to take note of the important ones before they go away, for as long as you need—until you have time to act on them. In other words you&#x27;ll have to devise ways to catch and convert them into permanent notes.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_3_important.webp" alt="" title="Four-quadrant matrix for urgency and transience, important. The non-urgent and transient quadrant says &#x27;convert to permanent&#x27;."/></figure>
<p><strong><em>Unimportant</em> non-urgent &amp; transient</strong> notifications are something you can safely ignore. Look at them only when you really have nothing better to do.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_3_unimportant.webp" alt="" title="Four-quadrant matrix for urgency and transience, unimportant. The non-urgent and transient quadrant says &#x27;block or ignore&#x27;."/></figure>
<h3>Non-Urgent &amp; Permanent</h3>
<p>Non-software examples:</p>
<ul>
<li>Most letters, promotional leaflets in your mailbox</li>
<li>Storefront signs in your area</li>
<li>Books on your shelves</li>
</ul>
<p>Software examples:</p>
<ul>
<li>Those eternally-open tabs in your browser</li>
<li>Most emails, chat messages, and social-media posts</li>
<li>Good task reminders (&quot;it&#x27;s staying there until I tick it off.&quot;)</li>
</ul>
<p>These are the things that we need to spend the most time managing, otherwise they will drown us. Most of us, it seems, are losing this battle.</p>
<p><strong><em>Important</em> non-urgent &amp; permanent</strong> notifications may not need immediate action, but they need to be there when you eventually want to do something about them. For this reason you need to organize them well, store them neatly. It is easy for them to slip through the cracks, or to be forgotten for a little too long. The temptation is to keep them all, but space and memory are finite: it&#x27;s often better to assign deadlines to them, act on them, and then destroy or archive them.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_4_important.webp" alt="" title="Four-quadrant matrix for urgency and transience, important. The non-urgent and permanent quadrant says &#x27;organize and/or set deadline&#x27;."/></figure>
<p>The <strong>unimportant</strong> ones are your enemies. They are the invader weeds. They constantly threaten to sideline the important ones, and can get in your way. A complete waste. It&#x27;s best to block them as much as you can, and destroy the others. You may want to keep some of them for entertainment or sentimentality, but do it at your own peril. (This is more of a problem with physical notifications, but care is needed for the digital case, too. See below.)</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/chart_4_unimportant.webp" alt="" title="Four-quadrant matrix for urgency and transience, unimportant. The non-urgent and permanent quadrant says &#x27;block or destroy&#x27;."/></figure>

<h2>Step 3: Better Notification UX</h2>
<h3>The Core Actions</h3>
<p>The Urgency-Transience-Importance framing makes clear the kinds of actions we need. In broad strokes, there are eight: <em>destroy, ignore, check now, block, archive, organize, set deadline</em>, and <em>convert to permanent</em>. But do our devices support those actions?</p>
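<p>As a summary of the two matrices above, here is a minimal sketch, in Python, of how the three axes map onto those recommended actions (the <code>Notification</code> and <code>recommended_action</code> names are mine, purely for illustration):</p>

```python
from dataclasses import dataclass

# Hypothetical sketch: place a notification along the three axes,
# then look up the action suggested by the matrices above.

@dataclass
class Notification:
    urgent: bool
    transient: bool
    important: bool

def recommended_action(n: Notification) -> str:
    if not n.important:
        # Unimportant: keep it out of your way, whatever its urgency.
        return "block or ignore" if n.transient else "block or destroy"
    if n.urgent:
        # Important and urgent: deal with it now; archive only if it persists.
        return "check now" if n.transient else "check now, then destroy or archive"
    # Important but not urgent: make sure it survives until you can act on it.
    return "convert to permanent" if n.transient else "organize and/or set deadline"

# An earthquake alert: urgent, transient, and (very) important.
print(recommended_action(Notification(urgent=True, transient=True, important=True)))
# -> check now
```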
<p>In part, they do. <em>Check now</em> and <em>ignore</em> come for free: everything is built around allowing you to check things immediately, and ignoring is as easy as scrolling down a little or looking away. While the granularity offered varies, all OSes also let you mute notifications either temporarily or permanently. I treasure this ability and exercise it aggressively.</p>
<p>Fortunately most apps also give you ways to <em>destroy</em> things. You can delete emails and chat threads, put files in the trash can, and so on. <em>Archiving</em> functionality is also very common, although not as much as it should be, in my opinion.</p>
<p>You could argue that many digital objects like emails and chat messages don&#x27;t even need deletion, because they &quot;self-archive&quot; simply by remaining there in a sequence. In a sense, it&#x27;s as if they were deleted, because you don&#x27;t need to give them a second thought (unless you hit your email provider&#x27;s storage limit). It&#x27;s also nice, in theory, to be able to go back to them if you ever wish to.</p>
<figure><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/mail_call.webp"/></figure>
<p>There&#x27;s a hidden cost, though: those thousands of past interactions soon pile up into a huge haystack you have to sift through. This becomes a deterrent against your searching for specific things from the past, and defeats the purpose of keeping them. Most of the time, it&#x27;s safe and a better idea to delete.</p>
<p>There is one aspect of the <em>destroy</em> action that, I feel, hasn&#x27;t been explored fully yet. Recall the <strong>urgent &amp; permanent</strong> notifications. If one such notification is even a little bit important to you, you will want to keep it for a while, but you may want to get rid of it once its expiration time comes.</p>
<p>With the current UX of most programs, you&#x27;re forced to wait until the right time, and <em>then</em> delete the message. This is fine and even preferable in many cases (the messages act like checkboxes that you &quot;tick&quot; by deleting them), but there are cases where you&#x27;re not even sure you want to act on them. Think of the time-sensitive news in your inbox: maybe you&#x27;ll get around to reading it over the weekend, but if you don&#x27;t, you&#x27;re fine skipping it.</p>
<p>In this case, wouldn&#x27;t it be nice to have a &quot;scheduled removal&quot; function? You see the notification and think you might consume it later, but not after a certain date. So the notification sits there, waiting for you, but it &quot;shows itself out&quot; once it has become obsolete. No further action is needed from you unless you choose to take one.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/autoremoval.webp" alt="An example implementation of the &quot;scheduled removal&quot; UX, applied to Google Keep. Besides the usual &quot;delete&quot; option, you&#x27;d have an &quot;auto-remove&quot; option, where you can set the date and time, and whether you want to delete or just archive the memo." title="A screen showing a Google Keep note with the actions menu open. Among the various actions like &#x27;make a copy&#x27; and &#x27;delete&#x27; is a hypothetical &#x27;Auto-remove&#x27; action. In the next screen a menu is open titled &#x27;Add auto-removal&#x27;, where the user can select a date and time, and the action to take: delete or archive."/><figcaption>An example implementation of the &quot;scheduled removal&quot; UX, applied to Google Keep. Besides the usual &quot;delete&quot; option, you&#x27;d have an &quot;auto-remove&quot; option, where you can set the date and time, and whether you want to delete or just archive the memo.</figcaption></figure>
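<p>This &quot;scheduled removal&quot; idea is simple enough to sketch in code. What follows is a minimal, purely hypothetical model (the names and API are invented for illustration and don&#x27;t correspond to any real app): each notification carries an optional expiry, and a periodic sweep deletes or archives whatever has gone stale.</p>

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Notification:
    text: str
    # Optional "auto-remove" settings: when to remove, and how.
    expires_at: Optional[datetime] = None
    on_expiry: str = "delete"  # or "archive"

def sweep(inbox: list, archive: list, now: datetime) -> list:
    """Return the inbox with expired items removed; archive those marked so."""
    kept = []
    for n in inbox:
        if n.expires_at is not None and now >= n.expires_at:
            if n.on_expiry == "archive":
                archive.append(n)
            # on_expiry == "delete": simply drop it -- it shows itself out.
        else:
            kept.append(n)
    return kept
```

<p>You&#x27;d run <code>sweep</code> on a timer or at app launch; the user never has to revisit the item.</p>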
<p>This might only be a minor improvement, but I think it shows how this framing can help generate new UX ideas.</p>
<h3>Steering the Flow</h3>
<p>Given the constant assault we&#x27;re under, <em>blocking</em> notifications is perhaps the most critical action for our sanity, and it&#x27;s well supported across the board. In most apps, you can filter out emails with custom rules, you can unfollow accounts, and you can explicitly block them.</p>
<p>However, according to the Urgency-Transience-Importance framing, the biggest factor in deciding whether to block something or not is its <em>importance</em>, not the source!</p>
<p>Most people and apps say things that are unimportant to you 90+% of the time, and only sometimes produce something that you consider important. Blocking whole accounts seems like too blunt a tool, and a great way to miss opportunities.</p>
<figure><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/garage.webp" alt="It might be the right time to found your startup so that you&#x27;ll be ready to solve this (I believe) billion-dollar problem when AI catches up. I wouldn&#x27;t be surprised if someone were attempting that already."/></figure>
<p>The ideal blocking feature would filter based on a customized &quot;value function&quot; that only lets the things you might care about reach your attention. Deterministic algorithms would never work here; AI is more promising. I don&#x27;t think it&#x27;s feasible with the current state of the art, but we&#x27;re very close.</p>
<p>Next in line is <em>organizing</em>, the thing you do to make your important, non-urgent stuff easily accessible. This is an extension of mere archival. We have lots of tools for this one:</p>
<ul>
<li><strong>File systems</strong> let you structure data into hierarchies, which are easier to navigate.</li>
<li><strong>Indexing and search</strong> are the specialty of databases. If you know exactly what you&#x27;re looking for, indexing is all you need to find things really fast.</li>
<li><strong>Tagging</strong> lets you transcend the strict limitations of folders, and makes search easier.</li>
<li><strong>Cross-linking</strong>, like wiki links to other pages, is a powerful and quick way to organize things in a more organic way while keeping it semantically navigable.</li>
<li><strong>Snoozing</strong> is a relative newcomer to the arena, but one of my favorites. It allows you to archive notifications not (only) based on their contents, but on when you&#x27;ll need to see them again. You can forget about them until they pop up again at the right time.</li>
<li>And more.</li>
</ul>
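<p>Snoozing, in particular, is mechanically simple: under the hood it&#x27;s little more than a priority queue keyed on wake-up time. A toy sketch (hypothetical names, not any real app&#x27;s API):</p>

```python
import heapq
from datetime import datetime

class SnoozeQueue:
    """Hide items until their wake-up time, then surface them again."""

    def __init__(self):
        self._heap = []  # entries are (wake_time, sequence, item)
        self._seq = 0    # tie-breaker so items themselves never get compared

    def snooze(self, item, until: datetime):
        heapq.heappush(self._heap, (until, self._seq, item))
        self._seq += 1

    def due(self, now: datetime):
        """Pop and return every item whose wake-up time has arrived."""
        woken = []
        while self._heap and self._heap[0][0] <= now:
            woken.append(heapq.heappop(self._heap)[2])
        return woken
```

<p>The interesting design choice is that the key is a <em>time</em>, not anything about the item&#x27;s content, which is exactly why snoozing composes so well with any kind of notification.</p>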
<p>Yet, I consider <em>organizing</em> an under-solved problem. Besides file systems (and maybe indexing), most of the powerful approaches are left to the developers of each individual product. Each app implements its organization mechanisms differently, and most of them are siloed—you&#x27;re forced to treat each of them as its own separate archive.</p>
<p>Why not have universal, inter-app tagging? Where is the one-click cross-linking between apps? Why are 99% of the things I see on my screen not snoozable?</p>
<p>These are such fundamental operations that I&#x27;d love to see implemented at the OS level on each device, and using standard protocols over the internet.</p>
<h3>Transience Tuning</h3>
<p>The two remaining actions, <em>set a deadline</em> and <em>convert to permanent</em>, are meant to change the transience of your notifications. Similar to the &quot;scheduled removal&quot; function I mentioned above, these functions would allow you to alter the life-span of a notification.</p>
<p>Calendar apps and certain to-do apps allow you to set deadlines for your <strong>important non-urgent &amp; permanent</strong> notifications. In theory, you can create a task that reminds you to work on the item and then to destroy or file it away. But it&#x27;s a manual process: choosing dates, typing in a description, setting up the reminders, and so on. For obvious reasons, that&#x27;s not sustainable for most people.</p>
<p>We can think of a better solution that builds on the auto-removal idea. Instead of only letting you delete or archive the item (e.g. a note, an email, a chat notification), it could also give you the option of receiving reminders of the deadline some time before it actually arrives. It could even add them to your to-do list when the time approaches.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/autoremoval_reminder.webp" alt="Building on the previous example with Google Keep, some options might involve reminders before the actual automated archival/deletion. A confirmation request of the archival/deletion, when the time comes, could allow you to bump the deadline to later, if you want. Procrastination would be opt-in, rather than opt-out." title="The same auto-removal selection screen as before, but now, instead of the delete action, there is an action titled &#x27;remind me 1 week before, then delete&#x27;."/><figcaption>Building on the previous example with Google Keep, some options might involve reminders before the actual automated archival/deletion. A confirmation request of the archival/deletion, when the time comes, could allow you to bump the deadline to later, if you want. Procrastination would be opt-in, rather than opt-out.</figcaption></figure>
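<p>To make the reminder idea concrete, here is a minimal sketch (purely hypothetical, not any real app&#x27;s API): given an item&#x27;s removal deadline and a set of reminder offsets, it computes which reminder times have already arrived.</p>

```python
from datetime import datetime, timedelta

def pending_reminders(deadline: datetime, now: datetime,
                      offsets=(timedelta(weeks=1), timedelta(days=1))):
    """Return the reminder times (deadline minus each offset) that have
    arrived, for items whose deadline hasn't passed yet."""
    return [deadline - off for off in offsets
            if deadline - off <= now < deadline]
```

<p>A client could call this on each sync and surface the returned times as reminder notifications, or use them to push the item onto your to-do list as the deadline approaches.</p>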
<p>(In the absence of all that, snoozing could be a partial solution.)</p>
<p>As for the <em>convert to permanent</em> action that I suggest for <strong>important non-urgent &amp; transient</strong> notifications—temporary bits of information that you don&#x27;t want to lose—we have a few partial solutions.</p>
<p>If it&#x27;s a website you want to retain, you can bookmark it, but good luck keeping those organized. There are specialized save-for-later apps like <a href="https://readwise.io/read" rel="nofollow noopener noreferrer" target="_blank">Readwise</a>, <a href="https://getpocket.com/collections" rel="nofollow noopener noreferrer" target="_blank">Pocket</a> and <a href="https://omnivore.app/" rel="nofollow noopener noreferrer" target="_blank">Omnivore</a>, and they mostly focus on written content. There is Pinterest for images, but it&#x27;s a nasty SEO-spammy walled garden. Nothing really general.</p>
<p>A few note-taking apps do have pretty good data ingestion systems allowing you to throw information in and easily categorize it. I&#x27;m thinking especially of Evernote&#x27;s excellent quick-capture functionality, which provides first-class input interfaces for text, emails, voice, pictures, and other formats. (Sadly, the company is now <a href="https://www.sfgate.com/tech/article/evernote-layoffs-moving-to-europe-18190083.php" rel="nofollow noopener noreferrer" target="_blank">nearly defunct</a>, thanks to bad management and, among other things, to being so rigid compared to newer options.)</p>
<p>I wish modern tools like Obsidian and Logseq would take a lesson from the old guard here. It would turn them into powerful permanence wells, and their rich features would make organizing and rediscovering things a breeze.</p>
<h3>Urgency Flags</h3>
<p>There is one last notification problem I hope future software infrastructure will solve: <em>there is no way to automate actions based on urgency</em>.</p>
<p>Unlike importance, urgency is a rather objective property of a message. It&#x27;s either time-sensitive or it isn&#x27;t. In principle, the producer of a notification knows its true level of urgency, so what if the system allowed them to flag it accordingly?</p>
<p>This would let the client automatically apply different strategies on a case-by-case basis. The OS or app could give you the option to mute only the non-urgent notifications while letting the pressing messages through. You could have advanced options that auto-snooze non-urgent items until the weekend, or set deadlines for them based on certain criteria. Urgent items within certain categories (e.g. news) could be made transient by default. A lot of possibilities would open up.</p>
<p>Of course, certain actors would still have an incentive to fake urgency. But by incorporating urgency into the protocol, we would have methods for punishing the liars. At least for email, we already have <a href="https://mailtrap.io/blog/email-domain-reputation/" rel="nofollow noopener noreferrer" target="_blank">mechanisms</a> in place to track and affect the reputation of email senders: if they spam too much, they&#x27;ll eventually be sent to the spam folder. Imagine having a second button alongside &quot;report spam&quot; saying &quot;this isn&#x27;t urgent&quot;. It could make for a softer version of spam control, adding granularity for the senders you don&#x27;t necessarily want to block out altogether.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/wrong_urgency.webp" alt="Email messages are among the notifications that would benefit a lot from having an urgency or expiration date provided by the sender. If it&#x27;s wrong, you can flag it and penalize the sender." title="Gmail email screen. Under the title, there is a small section titled &#x27;urgency&#x27;, saying that the deadline is May 6th, the day a discount will expire. Next to it is a button to flag the email as having the wrong urgency."/><figcaption>Email messages are among the notifications that would benefit a lot from having an urgency or expiration date provided by the sender. If it&#x27;s wrong, you can flag it and penalize the sender.</figcaption></figure>
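<p>To make the urgency-flag idea concrete, here is one way a client could act on a sender-supplied flag. The <code>X-Urgency-Deadline</code> header is entirely made up (no such standard exists); the sketch only shows the kind of routing logic such a protocol would enable.</p>

```python
from email.message import EmailMessage
from datetime import datetime

def route(msg: EmailMessage, now: datetime) -> str:
    """Decide what to do with a message based on its (hypothetical)
    sender-supplied urgency deadline header."""
    deadline = msg.get("X-Urgency-Deadline")  # e.g. "2024-05-06"; made-up header
    if deadline is None:
        return "snooze-until-weekend"  # non-urgent: batch it for later
    if now > datetime.fromisoformat(deadline):
        return "auto-delete"           # expired: it shows itself out
    return "notify-now"                # genuinely urgent: let it through
```

<p>The point isn&#x27;t the specific actions, which every client could configure differently, but that the decision becomes automatable at all once the sender declares the urgency.</p>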
<h2>Notifications Are Hard</h2>
<p>Notifications are surprisingly hard to get right. This is how Slack does it:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/slack.webp" alt="Slack&#x27;s notification logic is complex and it&#x27;s above-average in quality. I especially love the ability to snooze messages. (Note, this chart is several years old.)" title="Complex flowchart of Slack&#x27;s notification logic. Lots of conditionals and possible states."/><figcaption>Slack&#x27;s notification logic is complex and it&#x27;s above-average in quality. I especially love the ability to snooze messages. (Note, this chart is several years old.)</figcaption></figure>
<p>More than once in this post I&#x27;ve dreamed out loud of OS-level solutions and new universal protocols. Both are unlikely to happen any time soon. At the end of the day, every program and every communication channel is different, and a full integration like that may be impossible. Still, if this &quot;Urgency-Transience-Importance&quot; framing helps clarify even one product designer&#x27;s—or anyone else&#x27;s—ideas, it has done its job. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-medium-agnostic-taxonomy-of-notifications/super-snapper-ZsimLXUS_cY-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Language is a Bottleneck for Thought]]></title>
            <link>https://aethermug.com/posts/language-is-a-bottleneck-for-thought?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/language-is-a-bottleneck-for-thought</guid>
            <pubDate>Thu, 25 Apr 2024 12:27:33 GMT</pubDate>
            <description><![CDATA[The definitive case in favor of headjacks?]]></description>
            <content:encoded><![CDATA[<p>Why is it so hard to convey your feelings? Why do words so often fail you when you try to infect a friend with your enthusiasm for a person, a place, an idea? How do you teach someone file version management in the fewest words? For someone who writes, this kind of frustration is part of the process. I feel it right now. But anyone who uses words or symbols to commune with others, no matter the format, is intimately familiar with the feeling. We take it for granted. But why is it so?</p>
<p>I believe the answer is not in the complexity of those concepts, nor in a nihilistic worldview where we&#x27;re all alone in this world and beyond salvation, but in a system-centered interpretation of language.</p>
<p>To see that, we need only two concepts I&#x27;ve written about before (but will summarize below): models and framings.</p>
<h2>The Problem with Words</h2>
<p>A model is <a href="https://aethermug.com/posts/embedded-prophesy-devices" rel="nofollow noopener noreferrer" target="_blank">a system that predicts the future</a>. It predicts only a small subset of the future, and it comes with a good deal of uncertainty, but it&#x27;s an invaluable tool for any living organism. This includes mental models, the more-or-less-conscious, customizable models we humans cultivate in our heads. They&#x27;re how you know that a sip of water will make you feel less thirsty, and that a certain kind of joke would not be appreciated by Linda. Ah, Linda.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p>Models are able to make predictions of things before they happen because they are simpler than the things being simulated. They do away with the irrelevant details, and for that they are faster. That doing away is done with framings. The way I define it, <a href="https://aethermug.com/posts/a-framing-is-a-choice-of-boundaries" rel="nofollow noopener noreferrer" target="_blank">a framing is a choice of boundaries</a>, or a choice of what to treat as a black box and what to see as interactions between black boxes. We don&#x27;t need to think about what goes on inside the boundaries of those black boxes, only what they are prone to do, and how they can interact with other black boxes. Often we treat people as black boxes, for example, but also machines, places, the weather, you name it.</p>
<p>Now, all of this has little to do with language. The way we think (and a lot of what our bodies do without our need to think about it) is based on this kind of black-box mechanism, where we abstract away the things that don&#x27;t matter at the moment in order to make our much-needed predictions. It happens inside each of our crania. We grow our mental models and tend to them as our most precious instruments—appendages, almost.</p>
<p>But as long as there&#x27;s more than one human being on Earth, we&#x27;ll want to communicate. Sharing our good framings is a powerful way to help each other. This leads to the question: if your mental models exist as the firing of neurons and the shifting of chemicals, how do you transfer them to someone else <em>using only your muscles</em>?</p>
<p>It&#x27;s not really possible. Or at least, it&#x27;s not feasible.</p>
<p>One problem is that language, whatever its form, comes with its own framing built in, what I call the <a href="https://aethermug.com/posts/a-fundamental-framing-of-human-language" rel="nofollow noopener noreferrer" target="_blank">Fundamental Framing of Human Language</a> (FFHL). This is dictated by the use of words and sentences to communicate, and by the need for those words to be understood by many. It means that, the moment we try to put our thoughts into words, we&#x27;re forced to map our framings into the framing of the shared vocabulary. This is rarely a neat one-to-one mapping.</p>
<figure><img src="https://aethermug.com/assets/posts/language-is-a-bottleneck-for-thought/iknowkungfu.mp4" alt="When kung fu is uploaded to Neo&#x27;s brain with a headjack cable, I bet it wasn&#x27;t verbal information."/></figure>
<p>If I wanted to explain to you what the excellent pizza I had last night tasted like, I&#x27;d have to reach for a series of words like &quot;crispy&quot;, &quot;fresh&quot;, &quot;earthy&quot;, &quot;harmony of flavors&quot;, and so on and so forth. These are emphatically <em>not</em> the terms in which I think about the flavor of pizza while I&#x27;m savoring it. I <em>don&#x27;t</em> use a black box for &quot;Mozzarella Cheese&quot;, which interacts in some way or another with the &quot;San Marzano Tomato sauce&quot; black box, and so on. My experience is purely non-verbal—with boundaries and simplifications, of course, but not those implied by those words.</p>
<p>If I were the one <em>baking</em> the pizza, my framing would be different, and I might indeed have black box concepts for each separate ingredient. But, even in that case, I would probably tune their boundaries to my needs, for instance I might think of the cheese only in its custom-sliced form, and the sauce as really whole tomatoes picked no more than N days ago. In explaining the pizza-making process to others, I would have to leave off some of that nuance.</p>
<p>In other words, we always make a lossy conversion from one framing to another. That doesn&#x27;t mean that we can&#x27;t build more sophisticated and accurate framings by piling on more words—only that the very foundations are different. It&#x27;s a bit like converting an analog photograph to digital. It often works just fine, but there are differences that can be of consequence (tiny details lost due to pixelation, color space reduction, and digital artifacts, to name a few).</p>
<p>Another problem with language is that the best way to frame and model a system depends <em>entirely</em> on your purpose, instant per instant. You need to make predictions because you have a purpose you&#x27;re trying to achieve, be it removing thirst or getting hired by a company. Purpose determines what the &quot;irrelevant&quot; bits are that you can stuff into the black boxes. It doesn&#x27;t make sense to speak of framing and modeling the world without knowing what your goal is.</p>
<p>We churn through our purposes really fast, too. Once the thirst is quenched, your new goal might be to wash the glass: and suddenly what was the refreshment-yielding &quot;Glass of Water&quot; black box a minute ago has been reframed into a bacteria-collecting &quot;Dirty Glass&quot; black box. You&#x27;re trying to predict another aspect of reality, so you dynamically change your framing.</p>
<p>But what are the chances that the people you&#x27;re talking or writing to have the same purpose as you, at the same exact time?</p>
<p>Scant, the chances are.</p>
<p>Other people usually have different purposes, so what for you is an optimal, well-functioning framing and model, for other people is less than optimal.</p>
<figure><img src="https://aethermug.com/assets/posts/language-is-a-bottleneck-for-thought/littlewomen.webp" alt="In the 2019 movie Little Women, two of the main characters (Jo and Laurie) are so close that they often borrow each other&#x27;s clothes. You might think they suit both well enough... until you learn that the costume department had to tailor two versions of each garment due to their very different proportions."/></figure>
<p>It&#x27;s a bit like sharing clothes with a sibling or a friend. Unless you both have exactly the same body size and shape, the same hair and skin tone, the same personality, it&#x27;s likely that whatever suits one of you very well will look less than stellar on the other.</p>
<p>We can&#x27;t make that kind of relativism go away with language. All we can make is compromises. How do we do that, and does it work?</p>
<h2>Mitigating the Bottleneck</h2>
<p>Of course, there&#x27;s always the show-don&#x27;t-tell approach. Instead of describing the pizza, have the other taste it directly. Instead of writing instructions on how to swing a racket, have the kid play tennis. But we&#x27;re talking about language now.</p>
<p>Our default language-based strategy is to lean into the FFHL and pick vocabulary with the best balance between being fit to the purpose of your specific audience and being accessible, i.e. known to them. For example, the Wikipedia article about the videogame Baldur&#x27;s Gate 3 begins like this:</p>
<blockquote>
<p>Baldur&#x27;s Gate 3 is a role-playing video game with single-player and cooperative multiplayer elements. Players can create one or more characters and form a party along with a number of pre-generated characters to explore the game&#x27;s story.</p>
</blockquote>
<p>The assumption of the author is that the reader knows these specialized terms:</p>
<ul>
<li><em>role-playing</em></li>
<li><em>video game</em></li>
<li><em>single-player</em></li>
<li><em>multiplayer</em></li>
<li><em>party</em></li>
<li><em>pre-generated character</em></li>
<li>...</li>
</ul>
<p>That&#x27;s a fair assumption. For most of the visitors on that page, the goal is something like &quot;decide whether the game is worth playing for me&quot; or &quot;understand the properties that set this game apart from the others&quot;. In other words, they&#x27;re either gamers or game researchers (?). Given that context, framing &quot;role-playing video game&quot; as a black box (for instance) is a decent framing, and one that people are already used to converting to and from in their heads. But it&#x27;s still not even close to conveying what every player of that game understands directly: what this particular game feels like to play, and how you play it in practice.</p>
<p>So, besides employing off-the-shelf terminology, we use two more strategies: increasing the resolution and making up new terms.</p>
<p>Words are cheap, so when the first few aren&#x27;t enough to approximate the framings inside our heads, we can always add more words after them to increase the resolution. We can break down the concept and go into detail about what we really mean. This has the benefit, usually, of making the message accessible to a broader audience—more multi-purpose—because with more moving parts, people will be able to make them interact in more ways in their own mental models.</p>
<p>The downside of increasing the resolution is that it takes an unbelievable amount of time compared to the immediacy of the thoughts you&#x27;re thinking. (This is when they say that a picture is worth a thousand words, and things to that effect.)</p>
<p>The other strategy is to invent new words. This doesn&#x27;t solve the problem of explaining something for the first time, but it saves you a lot of work once the new term is established within the group. It&#x27;s why scientific disciplines—and really all groups with common purposes—are so rife with jargon. They cluster in groups that have similar purposes, and make words that define boundaries that work well enough to be shared.</p>
<p>In Tetris, some really good players sometimes use this kind of crazy maneuver with the T-shaped piece to win the game:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/language-is-a-bottleneck-for-thought/tetris.gif" alt="" title="A complex move involving the rotation of a T-shaped tetris piece quickly as it touches down."/></figure>
<p>It would take a lot of words every time one wants to refer to this move, so they&#x27;ve come up with a name for it: &quot;T-spin&quot;. Now anyone in the Tetris community can mention a T-spin in passing and the others will immediately map it back to their own framing of the maneuver.</p>
<p>Of course, neologisms have the opposite downside compared to breaking things down with more words: they make the message less accessible to outsiders.</p>
<h2>Is Language an Obstacle to Thinking?</h2>
<p>What about the thinking you do inside your head, when you&#x27;re all alone? Ideally, that should allow you to disregard communicability and make the trade-off 100% in favor of optimizing your mental models. Only in your own head, with your uncompromised and well-understood—or at least &quot;felt&quot;—purposes, can you really feel free to redraw all the boundaries, to ignore whatever prepackaged framings your culture may have forced upon you, and finally think clearly.</p>
<p>That&#x27;s the ideal. In reality, things aren&#x27;t so straightforward. For one thing, it&#x27;s unclear how much one&#x27;s &quot;thought language&quot; (the language one uses inside one&#x27;s head) affects the way one thinks and forces one into predetermined patterns of thought. This is one of the longest-running debates in linguistics, and there are good arguments on both sides.</p>
<p>With my limited knowledge of the topic, I&#x27;d conservatively settle for a midway point: our language probably affects our thinking, even when alone, at least in a &quot;soft&quot; way, not preventing us at all costs from forming certain thoughts that it doesn&#x27;t support, but making them more difficult and less intuitive.</p>
<p>I also believe we always keep in a corner of our minds, even when trying to figure things out independently, the knowledge that we&#x27;ll probably have to explain this stuff to someone else later on. Some people may unpack their thinking by imagining that they&#x27;re actually talking to someone. This might encourage us to remain in the shared playing field of the language, without overthrowing too many of its forms and rules. We&#x27;re inherently social creatures, and that probably affects what we do even when we are alone.</p>
<p>We don&#x27;t know enough to say anything clear about this, but if the above hypothesis is true, then language could pose some obstacles to clear, free thought. On the other hand, we would only get a tiny fraction of our knowledge if we didn&#x27;t have language, so it&#x27;s an easy crime to forgive.</p>
<h2>Pick Your Own Answer</h2>
<p>In summary, language is incapable of transferring &quot;optimal&quot; framings and models between people. At most, it can transfer the best framings <em>among those feasible to share among people</em>.</p>
<p>That&#x27;s a big difference. Does it mean that we&#x27;re doomed to be dumber because of all the limitations of words? Let me answer four times.</p>
<p><strong>Yes</strong>, we&#x27;re doomed, because we lose something in the conversion to and from sub-optimal language-based framings. Probably all major breakthroughs in thought have happened inside individual heads ruminating on thoughts beyond language—the Einsteins and the Shakespeares, the Maxwells and the Aristotles. Only <em>after</em> having made the breakthrough did those people make the effort to put it into imperfect words for others to clumsily word-grapple with. We don&#x27;t know if they were ever <em>fully</em> understood by anyone.</p>
<p><strong>No</strong>, we&#x27;re not doomed, because we have the ability, with enough exposure to the intended context and purpose, to re-hydrate the sub-optimal framings of language into good models for ourselves. You can feel this when something that has been explained to you at length finally clicks for you. It&#x27;s not the words carrying the message that have made it click, but your own language-less puzzle-solving inside your head. What&#x27;s more, even if the framings and models you reconstruct in your head are a bit different from the ones in the source&#x27;s head, that&#x27;s often a good thing. Your model might be a novel upgrade on the original one, or it could be better adapted to your own variant of the purpose.</p>
<figure><img src="https://aethermug.com/assets/posts/language-is-a-bottleneck-for-thought/precious.webp" alt="Language bottlenecks were a more acute problem during WW2."/></figure>
<p><strong>Yes</strong>, we&#x27;re doomed because &quot;languagification&quot; is a big overhead. A lot of the time, when we say &quot;I need to think it through&quot;, we mean that we need to spend time working out how <em>to put it into words</em>, not to understand or solve a problem. I think much less &quot;thinking through&quot; would be necessary if we didn&#x27;t have to eventually share what we&#x27;re thinking.</p>
<p><strong>No</strong> (this is the last one), we&#x27;re not doomed, because of the obvious network effects. If the overhead for language were really too big to be worth the trouble, it would never have evolved in the first place. The effort and the data-loss implied in communication is usually offset by the greater number of thinking heads, each with its own chance at making the necessary breakthroughs.</p>
<h2>So What?</h2>
<p>After all those words, I hope that at least the following (tentative) takeaways have made it to your head more or less intact.</p>
<p>First, intended context and goal are paramount when communicating. They&#x27;re the basis for any thought process. If the speaker/writer/gesturer/... isn&#x27;t clear on the receivers&#x27; assumed context and purpose, or vice versa, the communication will likely fail. Always clarify your assumptions.</p>
<p>Second, it&#x27;s tempting to &quot;think in words&quot;, using categories familiar to many, but that might be a mistake. Try to break free from the external constraints of language when thinking by yourself. Use imagery, sounds, emotions, space, and gut instead: they have more dimensions and more elastic boundaries.</p>
<p>Third, demanding &quot;exactitude of language&quot; is a laughable idea.</p>
<p>Fourth, communicating your thoughts is hard and it&#x27;s fraught with errors and frustration and, more often than not, it&#x27;s well worth it all. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/language-is-a-bottleneck-for-thought/telephone.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[The Beautiful Dissociation of the Japanese Language]]></title>
            <link>https://aethermug.com/posts/the-beautiful-dissociation-of-the-japanese-language?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/the-beautiful-dissociation-of-the-japanese-language</guid>
            <pubDate>Sat, 20 Apr 2024 10:27:25 GMT</pubDate>
            <description><![CDATA[An extra dimension that you can play with]]></description>
            <content:encoded><![CDATA[<p>When I tell people around the world that I&#x27;ve been living in Japan for over a decade, many look both impressed and mystified at once. The place has a good reputation. Some folks are in awe at the temples and the gardens, others at the nature or the food. The extreme tidiness and civility of the local culture are the target of universal admiration. But many of those same people see the local language as an almost impenetrable barrier, a world of pain that one must go through in order to be allowed to live here. I must be so patient and smart, they think. But I (begrudgingly) have to tell them that it&#x27;s not entirely true.</p>
<p>The thing is (I tell those people) the language is part of the wonders of the place. It was the biggest charm for me in the first place. It&#x27;s complex, yes, but it&#x27;s rich and quirky and different. In particular, a whole realm of consciousness exists in the sphere of Japanese speakers that&#x27;s perhaps <em>truly unique</em> in the world, more so than the sushi and the nature and decorum. It even allows for new literary techniques that are unimaginable in any other language.</p>
<p>Usually this is the point when I lose my interlocutor. I might as well be speaking Japanese to them.</p>
<p>I&#x27;ve always wanted to explain that realm, to show what a strange and mind-bending world is accessed by learning Japanese. It feels almost impossible. In this rather long and winding post, I&#x27;ll try anyway. I&#x27;ll do my best to convey something that&#x27;s damn near untranslatable. Whether this is a rabbit hole you want to tumble down with me, that&#x27;s up to you.</p>
<h2>The Things I&#x27;m Not Talking About</h2>
<p>Unless you&#x27;ve studied the language for a good while, you might only be aware of one or more of the following strange-sounding facets of Japanese:</p>
<ol>
<li>It uses <em>kanji</em> characters for writing (more on this later), and it uses a whole lot of them. Depending on who you ask, there are four, five or more <em>thousand</em> characters in use, and you can&#x27;t read a newspaper if you don&#x27;t know at least 2,000 of the more common ones.</li>
<li>There are also two syllabic scripts (syllable-based alphabets) in use, called <em>hiragana</em> and <em>katakana</em>. These two, plus <em>kanji</em>, are used all together, sometimes but not always interchangeably.</li>
<li>There are lots of &quot;untranslatable&quot; words in Japanese, like the salutation お疲れ様 (<em>otsukare-sama</em>), roughly meaning &quot;I appreciate the hard work you&#x27;ve been doing&quot;, and もったいない (<em>mottainai</em>) for &quot;it would be a pity not to enjoy that to its fullest value&quot;.</li>
<li>Exotic-sounding grammar features like the subject-object-verb sentence structure (i.e. &quot;anteater ant eat&quot;) and the lack of important-sounding grammatical elements like articles (&quot;the&quot;, &quot;a&quot;), any singular-plural distinction, and most verb tenses familiar to English/Romance language speakers.</li>
<li>It&#x27;s very vague and context-dependent.</li>
</ol>
<p>These aspects give the language an arcane and difficult-sounding aura, but none of them is truly unique to Japanese. Chinese, for instance, uses way more characters and has an even more bare-bones grammar. All languages have untranslatable terms. And several other cultures routinely employ <a href="https://en.wikipedia.org/wiki/Digraphia" rel="nofollow noopener noreferrer" target="_blank">multiple writing systems</a>. These aren&#x27;t the things I mean by &quot;a truly unique realm&quot;, and they&#x27;re not the topic of this post.</p>
<p>What&#x27;s usually not known are the subtle effects of the strange history of the Japanese tongue. These effects are, I believe, absolutely unique to this language, with no parallel anywhere else in the world. On top of that, native Japanese speakers are usually so accustomed to these quirks that they never give them a second thought. So no one—except, I guess, some linguists—ever talks or thinks about these fascinating aspects of Japanese. I write about them here because, well, that&#x27;s <em>mottainai</em>.</p>
<p>It took me a while to put my finger on it, but now I know what the source of that uniqueness is: it&#x27;s the unstoppable, wonderful <em>dissociation</em> between what&#x27;s written and what is spoken in Japanese. To see how that could have happened, we need to take a step back.</p>
<div class="col-start-1 col-end-1 m-10"></div>
<h2>Dissociated from Birth: a History</h2>
<p>The Japanese language existed in purely oral form for centuries, until mainland scholars brought Chinese characters to the Japanese islands. That happened in the 5th century C.E. These characters are, unlike the western alphabets, &quot;logograms&quot;, that is, each character is associated with a specific meaning.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/01_mountain_blood.webp" alt="" title="A three-stroke character corresponds to the meaning of &#x27;mountain&#x27;, a different character with six strokes corresponds to &#x27;blood&#x27;."/></figure>
<p>Chinese and Japanese are enormously different spoken languages. Except for a large number of words imported directly into Japanese (but evolved to sound quite unlike the originals), the two languages have essentially nothing in common. The pronunciation, the grammar, everything is 180° different. A consequence of this is that those Chinese characters, evolved over millennia to fit the Chinese language like a glove, were a bad match for the way the islanders spoke.</p>
<p>Imagine those poor scholars of the Yamato court in Western Japan in the 7th century. They must have been intrigued by this revolutionary technology called &quot;writing&quot;, where you could freeze your words onto a stone or the blade of a sword so that others may understand it later. Why leave it to the Chinese immigrants? Why not master it for their own native language?</p>
<p>Except it must have been excruciatingly difficult. The characters were meant to be used as modular building blocks—a kind of modularity that Japanese just didn&#x27;t have.</p>
<figure><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/haniwa.webp" alt="Not many depictions of scholars from the Kofun period survive (they must have been too busy wrangling the kanji), but warrior statues abound. So here is a warrior."/></figure>
<p>While Chinese uses fixed &quot;plug-and-play&quot; markers to indicate tenses and grammatical functions, on this front Japanese is more similar to English, because it modifies the very shape of words for those things. So in Chinese you say &quot;chī&quot; for &quot;eat&quot; and &quot;chī<em>le</em>&quot; for &quot;ate&quot;. The &quot;<em>le</em>&quot; part indicates an action that has completed, and it can be strapped onto any verb to turn it into a past tense. So those two spoken words can be neatly segregated into written logograms: 吃 (&quot;chī&quot;) for the present tense, and 吃了 (&quot;chī&quot; + &quot;le&quot;) for the past.</p>
<p>Compare that to English where we transform &quot;eat&quot; into &quot;ate&quot;, and the equivalent Japanese modification of &quot;tabe<em>ru</em>&quot; into &quot;tabe<em>ta</em>&quot;. In both cases we&#x27;re not adding or removing blocks but changing part of the word to convey the difference. In techie terms, the Chinese script doesn&#x27;t support the structure of languages like English and Japanese. It doesn&#x27;t have what it takes.</p>
<p>The only solution for those early Japanese scribes, then, was to do a lot of shoehorning. And boy, did they shoehorn.</p>
<p>The Japanese scholar-aristocrats began repurposing the Chinese characters, which they called <em>kanji</em> (for, well, &quot;Chinese characters&quot;). Sometimes, instead of using them for their meaning, they used them for (gasp!) their pronunciation. By ignoring the original meaning of a <em>kanji</em>, they could string them together to form almost any sound.</p>
<p>To a Chinese reader, such words would have looked utterly random, devoid of any coherence or structure. But to a trained Japanese reader, they translated into familiar words.</p>
<p>Over the centuries those &quot;sound-only&quot; <em>kanji</em>, called <em>man&#x27;yougana</em>, evolved into something else entirely. They became simpler, more streamlined, and more standardized. Where the symbols were originally composed of many short strokes, they gradually lost detail and complexity. Where the scribes could choose between a slew of different <em>kanji</em> for any given sound (for instance, the sound <em>pa</em> could be represented by any of 20 characters), later the number of options dwindled and eventually settled to two.</p>
<p>That&#x27;s how the two syllable-based alphabets in use today, <em>hiragana</em> and <em>katakana</em>, came about (collectively <em>kana</em>). For example, this is how the sound for &quot;i&quot; (pronounced &quot;ee&quot;) evolved from two separate <em>kanji</em> into respective <em>kana</em> pronounced exactly the same.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/02_kana.webp" alt="" title="Two Chinese characters that were pronounced &#x27;i&#x27; were eventually simplified into phonetic characters for hiragana and katakana."/></figure>
<p>With this new sound-based tool invented by the islanders, finally the Japanese language had a suitably flexible way to write anything one could pronounce. Today full-blown <em>kanji</em> are used for their meaning, while hiragana and katakana are used for sound-based writing and grammar stuff.</p>
<p>(By the way, the application of Chinese characters to other languages happened in several other places, like Korea and Vietnam, but only Japanese still retains the writing system today, making it a unique case among contemporary languages.)</p>
<p>And so, thanks to this grafting of one language&#x27;s way of writing into a vastly different tongue, a dissociation was born. Japanese is a language where the spoken and the written co-evolved in directions never seen elsewhere. The differences in grammatical structure are only part of the story. Let&#x27;s start with the basics.</p>
<h2>Anomaly 1: One Way to Write It, Many Ways to Read It</h2>
<p>As with most language pairs, there was rarely a one-to-one correspondence between Chinese and Japanese words. Often a single Chinese word or character could merely approximate the meaning of several spoken Japanese words. Each of those local words might have been related to the others, but it carried a different nuance. Even so, for lack of a better solution, often the same <em>kanji</em> was used for all the various meanings.</p>
<p>This had two major effects. First, while in any Chinese language each character is associated with a single way to speak it, in Japanese every <em>kanji</em> can be pronounced in multiple, very different ways. This part is perhaps the biggest bane of Japanese students.</p>
<p>Extreme cases have <strong>15+</strong> different pronunciations (readings) for a single <em>kanji</em>! For example, the <em>kanji</em> for &quot;life&quot;:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/03_life.webp" alt="" title="The same character, meaning generically &#x27;life&#x27;, is pronounced &#x27;shēng&#x27; in Mandarin, while in Japanese it can be pronounced as &#x27;sei&#x27;, &#x27;shou&#x27;, &#x27;ikasu&#x27;, &#x27;ikiru&#x27;, &#x27;ikeru&#x27;, &#x27;u&#x27;, &#x27;umare&#x27;, &#x27;umareru&#x27;, &#x27;umu&#x27;, &#x27;ou&#x27;, &#x27;ki&#x27;, &#x27;nasu&#x27;, &#x27;naru&#x27;, &#x27;nama&#x27;, &#x27;haeru&#x27;, &#x27;hayasu&#x27;, &#x27;musu&#x27;, ..."/></figure>
<p>Some of these pronunciations are common, others quite rare, but you need to know and be able to discriminate most of them in order to correctly read and write modern Japanese.</p>
<p>The second effect of the imperfect match between the written word and the meanings it is associated with is a kind of &quot;chronic looseness&quot; in the conversion of language to and from writing. A Japanese reader isn&#x27;t expected to correctly pronounce everything. New, unfamiliar words will be opaque to them. In a sense, this is similar to English, only much worse.</p>
<p>In English, the question is usually about the right way to pronounce a vowel or two. As is clear from the example just above, in Japanese, sometimes you don&#x27;t know if the <em>kanji</em> for life has to be read as &quot;nasu&quot; or &quot;shou&quot;. Add to that the huge number of <em>kanji</em> in circulation, and in many cases you have absolutely <em>nothing</em> to work with. If you haven&#x27;t seen the <em>kanji</em> before, you have zero hints about the right sounds to make.</p>
<p>Japanese has a trick up her sleeve to solve this problem, called <em>furigana</em>. These are tiny <em>kana</em> characters showing you how to pronounce a difficult <em>kanji</em>.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/furigana.webp" alt="A sign in a train station. It&#x27;s often impossible to guess the pronunciation of place names, hence the furigana above the official kanji name. Source: Wikipedia." title="A sign in the Higashi-Fushimi station in Tokyo. The kanji name is written in big characters, but above those are hiragana characters showing the pronunciation of the word."/><figcaption>A sign in a train station. It&#x27;s often impossible to guess the pronunciation of place names, hence the furigana above the official kanji name. Source: Wikipedia.</figcaption></figure>
<p>Kids keep on learning new fundamental <em>kanji</em> until the end of high school, so they wouldn&#x27;t be able to read without extensive application of <em>furigana</em>. Adults are able to survive with less <em>furigana</em>, but you&#x27;ll still find them on the rarer words that people might not have encountered before (or for which they might have forgotten the reading). For example, this is a page I opened at random from a Haruki Murakami non-fiction book.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/murakami-a.webp" alt="" title="A page in vertical Japanese script, from a Murakami book. Two tiny hiragana characters are highlighted between the lines of normal-sized text."/></figure>
<p>In the whole page, only one <em>kanji</em> word (a somewhat rare way to say &quot;mock&quot;) has the pronunciation spelled out with <em>furigana</em> (those two tiny squiggles between the lines).</p>
<p>Remember this part about <em>furigana</em>, because they&#x27;ll come up again in the following sections.</p>
<p>Now for a practical consequence: people need to get creative just to teach others how to write their names.</p>
<h2>Anomaly 2: Spelling Bee, but Creative</h2>
<p>There is one problem that arises because of how <em>kanji</em> work: how do you explain which character you&#x27;re talking about without writing it down?</p>
<p>This happens all the time with people&#x27;s names. The Japanese like to choose nice and distinctive <em>kanji</em> for their names, even when using common name pronunciations (another instance of dissociation: common name readings on unheard-of <em>kanji</em> choices are all the rage this century). This means that, just by hearing what someone is called, you&#x27;re usually unable to write it down.</p>
<p>So people have to explain the <em>kanji</em> to you, and they do it by telling you which other well-known words each <em>kanji</em> appears in, or how it is built from simpler components.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/deathnote.webp" alt="Perhaps the most famous scene showing the hoops you have to jump through just to explain your name&#x27;s spelling. It&#x27;s from the manga &quot;Death Note&quot;. Here the woman is giving a fake name, but soon afterwards Raito finds out her real name and uses it to kill her real quick."/><figcaption>Perhaps the most famous scene showing the hoops you have to jump through just to explain your name&#x27;s spelling. It&#x27;s from the manga &quot;Death Note&quot;. Here the woman is giving a fake name, but soon afterwards Raito finds out her real name and uses it to kill her real quick.</figcaption></figure>
<h2>Anomaly 3: Breaking Out of the Box</h2>
<p>Reading is made more difficult by the frequent use of <em>jukujikun</em>, words whose pronunciation is not the combination of the normal readings of the individual <em>kanji</em> they contain. In these cases there isn&#x27;t any correspondence between parts of the spoken word and the <em>kanji</em> that represent it.</p>
<p>Japanese has a lot of compound words of Chinese origin, where two or more <em>kanji</em> appear as a set. These compounds are usually very straightforward sequences of <em>kanji</em> readings. So 美術 (<em>bijutsu</em>), meaning art, is the combination of 美 (<em>bi</em>, beauty) and 術 (<em>jutsu</em>, skill). If you&#x27;re confident about the individual sounds, you just have to say them one after the other. Finally something simple!</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/04_bijutu.webp" alt="" title="The word for art, &#x27;bijutsu&#x27; is the simple composition of &#x27;bi&#x27; for beauty and &#x27;jutsu&#x27; for skill. Similarly, the word for science, which is pronounced &#x27;kagaku&#x27;, is the sum of &#x27;ka&#x27; for subject and &#x27;gaku&#x27; for study."/></figure>
<p>But no, you can&#x27;t let your guard down. The shoehorning work of the ancients has left deformations that survive in the modern language.</p>
<p>Take, for example, the word for &quot;adult&quot;. It&#x27;s pronounced <em>otona</em> and written 大人, the <em>kanji</em> for &quot;big&quot; and &quot;person&quot; respectively. Following the usual method, you might wonder, is <em>oto</em> a pronunciation of the first <em>kanji</em>, and <em>na</em> of the second? Or is it <em>o</em> and <em>tona</em>?</p>
<p>Well, neither. There is no way to split them. The two-<em>kanji</em> word exists as a single block, and looking up each <em>kanji</em> separately won&#x27;t yield even a bit of this word&#x27;s true reading. There are many examples like that.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/05_kyou.webp" alt="" title="Unlike most kanji compounds in Japanese and all words in Chinese, jukujikun like the word for today (&#x27;kyou&#x27;) and tomorrow (&#x27;ashita&#x27;) cannot be seen as the composition of sub-parts corresponding to the kanji."/></figure>
<figure><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/Lanius_bucephalus.webp" alt="A bull-headed shrike. Source: Alpsdake, Wikimedia Commons (CC BY-SA 4.0)."/></figure>
<p>In some cases, the number of <em>kanji</em> is greater than the number of syllables in the word! Try splitting <em>that</em> up.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/06_mozu.webp" alt="" title="The name for the bird called &#x27;bull-headed shrike&#x27; is pronounced mozu, with two syllables, but it&#x27;s written with three kanji."/></figure>
<h2>Anomaly 4: Two Words in a Trench Coat</h2>
<p>But, despite all of its difficulties, this dissociation of writing and speaking opens up some interesting opportunities.</p>
<p>For example, there is a category of verbs that I find fascinating. As far as I know, it has no official academic name, and I&#x27;ve never heard anyone else even mention this. I would describe them as &quot;two words in a trench coat&quot;. In each case, you can see how two simpler verbs were chained and wrapped with a single <em>kanji</em>.</p>
<p>For example, the verb 司る (<em>tsukasadoru</em>), meaning &quot;to be in charge&quot;, seems pretty innocent at first. It&#x27;s one <em>kanji</em> plus its <em>hiragana</em> ending indicating the tense, like all other verbs. But look up the etymology, and you&#x27;ll find that it used to be two words: 官 (<em>tsukasa</em>) for &quot;position of authority&quot; and 取る (<em>toru</em>) for &quot;take&quot;. Each of these two words has its own <em>kanji</em> and independent meaning. But over the years, their combined form became so routine that someone decided to give it its own different <em>kanji</em>, probably for no other reason than convenience. Here are a couple more cases:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/07_kokorozasu.webp" alt="" title="The verb &#x27;kokorozasu&#x27;, which means to aspire, is written with a single kanji but can be seen as the fusion of the separate words &#x27;kokoro&#x27;, for heart, and &#x27;sasu&#x27;, for to point, each with their different kanji. Similarly for the verb &#x27;mitomeru&#x27; (to confirm), a combination of the verbs &#x27;mite&#x27; (see) and &#x27;tomeru&#x27; (retain)."/></figure>
<p>These fusions are like language fossils. They show that <em>kanji</em> are only an after-the-fact addition to a pre-existing vocabulary. But at a practical level, they&#x27;re a way to exploit the looseness of Japanese writing for convenience (notice how short the final written words are compared to the originals).</p>
<h2>Anomaly 5: Make the <em>kanji</em> Work for You</h2>
<p>In Anomaly 1 I said that one <em>kanji</em> can be ambiguous in terms of its <em>pronunciation</em>. But the flip side of the same phenomenon is that <em>kanji</em> can help reduce ambiguity in <em>meaning</em>, increasing the precision of the written word.</p>
<p>Spoken Japanese is actually rather poor in vocabulary. A lot of its verbs are reused in very different contexts with different meanings that are only related in a very abstract way. Thanks to the unique slap-it-on-and-you&#x27;re-ready-to-go mindset of Japanese writing, however, the vagueness can be pared down a lot.</p>
<p>There is a surprising number of verbs that have exactly the same pronunciation, but are written with different <em>kanji</em> in different contexts. For example:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/08_yomu.webp" alt="" title="The verb pronounced &#x27;yomu&#x27; can be written with two different kanji depending on whether it means to read or to compose a poem. The verb pronounced &#x27;okuru&#x27; has different kanji for the meaning of sending and gifting. And the verb &#x27;toru&#x27; has at least 7 different kanji forms, for the following meanings: to take, to take a picture, to harvest, to capture, to ingest a substance, to steal, and to record."/></figure>
<p>The cool thing here is that these are all different meanings, but if you squint you can see how they must have originated from the same primordial word. Each started as a generic, blunt verb (e.g. <em>toru</em>, to take), and later acquired different <em>kanji</em> to distinguish its nuances. Handy!</p>
<p>There are also non-verb examples of this trick. My favorites are all the versions of the word &quot;cousins&quot; meant in a reciprocal sense, as in &quot;she and I are cousins (of each other)&quot;. In spoken language, you just say <em>itoko</em> in all cases, and that&#x27;s it. In written language, you use the appropriate combinations of the <em>kanji</em> 兄 (<em>ani</em>, older brother), 弟 (<em>otouto</em>, younger brother), 姉 (<em>ane</em>, older sister), and 妹 (<em>imouto</em>, younger sister), preceded by the <em>kanji</em> 従 (<em>shitagau</em>, accompany), to specify the exact genders in the relationship. (These are also <em>jukujikun</em>, come to think of it.)</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/09_itoko.webp" alt="" title="The same noun &#x27;itoko&#x27;, meaning cousins, is written in four different combinations of kanji: one for cousins who are both male, one for cousins who are both female, one where the older cousin is male and the younger is female, and one that&#x27;s the other way around."/></figure>
<p>You can use <em>kanji</em>, then, to add a layer of meaning that doesn&#x27;t exist in the spoken language. Which brings us to the last and, in my opinion, most interesting point.</p>
<h2>Anomaly 6: Dissociation as Canvas</h2>
<p>Finally we come to <em>gikun</em>, the most exquisite (ab)use of the rift between written and spoken Japanese. It&#x27;s based on the clever use of <em>furigana</em>, the little pronunciation marks explained above. Ninety-nine percent of the time, people use <em>furigana</em> as you would expect—plainly indicating the correct dictionary reading of each word. But once you have a tool, who can resist playing with it?</p>
<p><em>Gikun</em> is the replacement of a <em>kanji</em>&#x27;s or word&#x27;s normal pronunciation with something else through <em>furigana</em>. Novelists and <em>manga-ka</em> use it to inject an almost subliminal layer of meaning beyond what is afforded by the words and <em>kanji</em>. It achieves an effect similar to a textual voice over, <em>at the same time as the actual text you&#x27;re reading</em>.</p>
<p>You see it a lot in manga: the actual <em>kanji</em> say something, but the <em>furigana</em>, instead of giving you the true pronunciation of the word, give you something else entirely. Sometimes it&#x27;s a synonym of the word with a more pungent nuance. For example:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/hidari-atsu-todoroki-boku-no-hero-ch28.webp" alt="Source: Boku no Hero Academia, cited by japanesewithanime.com (CC BY-SA 4.0)"/><figcaption>Source: Boku no Hero Academia, cited by japanesewithanime.com (CC BY-SA 4.0)</figcaption></figure>
<p>The author of the excellent <a href="https://www.japanesewithanime.com/2017/12/gikun.html" rel="nofollow noopener noreferrer" target="_blank"><em>gikun</em> explanation on japanesewithanime.com</a> clarifies the context:</p>
<blockquote>
<p>Todoroki Shouto 轟焦凍 has both cold and heat abilities, which come from the sides of his body: from the right comes cold, from the left comes heat.</p>
</blockquote>
<p>Here Shouto, the protagonist, is telling the flame-y man &quot;during combat, I won&#x27;t use my <strong>heat</strong> power for any reason at all.&quot; Being a kids&#x27; comic, all <em>kanji</em> have <em>furigana</em>. But the <em>kanji</em> for &quot;heat&quot; (highlighted in red) comes with an unexpected reading. Instead of the official <em>netsu</em>, the <em>furigana</em> reads <em>hidari</em>, which means &quot;left&quot;.</p>
<p>So the reader gets two messages at the same time: the character says &quot;I won&#x27;t use <strong>the left</strong>&quot;, but the text is saying &quot;I won&#x27;t use <strong>heat</strong>&quot;.</p>
<p>Sometimes, authors use <em>gikun</em> for the luxury of introducing cool foreign-sounding words while simultaneously providing their meaning through <em>kanji</em>.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/gikun-automail-fullmetal-alchemist-ch04.webp" alt="Source: Full-Metal Alchemist, cited by japanesewithanime.com (CC BY-SA 4.0)"/><figcaption>Source: Full-Metal Alchemist, cited by japanesewithanime.com (CC BY-SA 4.0)</figcaption></figure>
<p>And the context from <a href="https://www.japanesewithanime.com/2017/12/gikun.html" rel="nofollow noopener noreferrer" target="_blank">japanesewithanime.com</a>:</p>
<blockquote>
<p>Good guy with mechanical arm fights bad guy with mechanical arm.</p>
</blockquote>
<p>The blond guy says, &quot;Oh, an <strong>automail</strong> colleague?&quot; to say that they both have mechanical arms. But &quot;automail&quot; isn&#x27;t a real word, and the Japanese reader may not guess the etymology of &quot;automatic&quot; + &quot;mail (armor)&quot;. So the meaning is provided by the <em>kanji</em> (literally &quot;mechanical armor&quot;), and the word &quot;automail&quot; comes as <em>furigana</em> above that. Again, you get two things in one swoop: a neologism and its meaning.</p>
<p>Other times it&#x27;s simply a clarification of the word in that specific context.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/koko-deictic-gikun-noragami-ch02.webp" alt="Source: Noragami, cited by japanesewithanime.com (CC BY-SA 4.0)"/><figcaption>Source: Noragami, cited by japanesewithanime.com (CC BY-SA 4.0)</figcaption></figure>
<p>The context:</p>
<blockquote>
<p>Yato 夜ト, who is a God fighting spiritual beings related to human&#x27;s negative feelings, goes to the hospital make [sic] a visit to someone...</p>
</blockquote>
<p>The off-screen speaker is saying, &quot;Tonight will be rough too. <strong>This place</strong> is their nest. Now that the regalia aren&#x27;t with us, we shouldn&#x27;t stay long.&quot; (I assume the &quot;regalia&quot; are some kind of warrior.) In this case the words &quot;this place&quot; appear as <em>furigana</em> for the word &quot;hospital&quot;. The <em>kanji</em> say &quot;<strong>hospital</strong>&quot;, the reading says &quot;<strong>here/this place</strong>&quot;. Through <em>gikun</em>, the author is avoiding confusion on the current location of the characters.</p>
<p>Some novelists use this dissociation for artistic effect, too. Horror-mystery writer Natsuhiko Kyogoku, who loves to create the Japanese version of a Gothic atmosphere, constantly uses archaic, long-forgotten <em>kanji</em> in his brick-sized novels. You, as the average Japanese reader, probably have never seen most of those <em>kanji</em> before, but you&#x27;re able to follow without problems thanks to his <em>gikun</em>. The <em>furigana</em> supply modern, recognizable readings, even though those are not the real readings of the obsolete <em>kanji</em>.</p>
<p>Kyogoku&#x27;s <em>gikun</em> shenanigans are a bit too much at times. Someone who uses them more sparingly and subtly is Haruki Murakami.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/murakami-b.webp" alt="" title="Another page in vertical Japanese script, from a Murakami book. Again, tiny furigana characters between the lines are highlighted."/></figure>
<p>This is a page from his novel Norwegian Wood. Jay Rubin, the novel&#x27;s English translator, worded the highlighted sentence like this:</p>
<blockquote>
<p>What if somewhere inside me there is a dark limbo where all the truly important memories are heaped and slowly turning into mud?</p>
</blockquote>
<p>But the word &quot;limbo&quot; (indicated by the arrow) in the original is actually a <em>gikun</em>. The Japanese reader sees two things at once: the <em>kanji</em> with the meaning of &quot;a remote region&quot;—normally pronounced <em>hendo</em>—and the <em>katakana</em> pronunciation of the foreign word &quot;limbo&quot; as <em>furigana</em> next to it.</p>
<p>The Japanese language does have more accurate words for &quot;limbo&quot;, words with a stronger link to the original Catholic meaning of the word, but Murakami decided not to use them.</p>
<p>This little choice, seen by the reader in a fleeting instant as they devour the pages, and likely not even noticed consciously, is doing a lot of work. It&#x27;s indulging in the mystical-sounding foreign word &quot;limbo&quot;, but it&#x27;s also clarifying the general meaning of liminal space for the readers unfamiliar with it. It&#x27;s making the metaphor of an actual hidden place within oneself stronger, while avoiding overly religious undertones. In short, this <em>gikun</em> alters the flavor of the word a tiny bit, just enough to achieve the thematic and stylistic goals of the sentence. Like a pinch of nutmeg in your butter cake.</p>
<p>How is one to translate that?</p>
<p>It&#x27;s difficult to convey what it feels like to read in this way. For me, back when I started reading in Japanese some 16 years ago, it was a totally new experience, something that I never thought would be possible with text. It&#x27;s like reading in stereo, where sometimes the same message is conveyed to you in two different formats on separate channels, and sometimes two messages blend together as something new.</p>
<p>Because of the unique dissociation between the written word and the way it is pronounced, Japanese is not only harder to learn, but it&#x27;s also more malleable and richer in a way that cannot be imitated. It&#x27;s an extra dimension of language and a happy historical accident. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/the-beautiful-dissociation-of-the-japanese-language/hokusai.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[You Impersonate What You Think You Are]]></title>
            <link>https://aethermug.com/posts/you-impersonate-what-you-think-you-are?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/you-impersonate-what-you-think-you-are</guid>
            <pubDate>Thu, 11 Apr 2024 05:36:42 GMT</pubDate>
            <description><![CDATA[Can you do me a favor?]]></description>
<content:encoded><![CDATA[<p>In <em>Are Your Lights On?</em>—a book I&#x27;d recommend were it not utterly consumed by a desire to be funny—authors Gause and Weinberg propose a curious notion about bureaucrats. The stereotype of a state functionary is grumpy, unhelpful, and irritable. Are they picked like that on purpose? More likely, they become like that as they assume their roles.</p>
<p>The reason they behave like misanthropes, the authors write, is that they don&#x27;t have the authority to help you. If you didn&#x27;t bring the right number of signed copies required for your visa application, the immigration desk clerk is not allowed to accept it. It&#x27;s not that they wouldn&#x27;t be glad to help you, nor that they couldn&#x27;t physically process the document: they must not. This turns them into the harbingers of bad news. It elects them as your official enemies. So they can&#x27;t help but act as such.</p>
<p>When you spend enough time controlling an avatar very different from you—for example the armed protagonist of a battle video game—you&#x27;re hit by the Proteus Effect, where your behavior and personality change to be more consistent with your virtual appearance. Psychologists have <a href="https://journals.sagepub.com/doi/abs/10.1177/0093650208330254" rel="nofollow noopener noreferrer" target="_blank">found</a>, for example, that players who were assigned taller and more attractive characters in a fantasy video game performed better at the game, and were more aggressive negotiators both within the game and, for a while, outside of it (the effect was small but consistent).</p>
<p>It&#x27;s as if we unconsciously adapt to what we think we&#x27;re supposed to be like. This aligns with Benjamin Franklin&#x27;s famous <a href="https://en.wikipedia.org/wiki/Ben_Franklin_effect" rel="nofollow noopener noreferrer" target="_blank">hypothesis</a> that, to win someone&#x27;s favor, the best method is to <em>ask</em> them for a favor. From his autobiography: &quot;He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.&quot;</p>
<p>In other words, make people feel like they&#x27;re the kind of person that is generous to you, and they&#x27;ll act accordingly.</p>
<p>There is more to the definition of self than a name, a face, and a collection of memories. You (and everyone else) are constantly observing your current state as it relates to your environment and especially to other people, and you adjust your behavior to account for it. Whenever your <a href="/posts/a-framing-is-a-choice-of-boundaries">framing</a> and <a href="/posts/embedded-prophesy-devices">model</a> of the world change, so does the <a href="https://planktonvalhalla.com/20230323-the-demon-and-the-tree/" rel="nofollow noopener noreferrer" target="_blank">Tree of Possibilities</a> you perceive for yourself. An action that was previously inhibited might become more likely. Something you&#x27;d normally have done before now looks inappropriate.</p>
<p>It doesn&#x27;t matter if there is little or no change &quot;in reality&quot;. When you play a video game the only thing that actually changes is which pixels light up on a screen, and when the bureaucrat refuses to accept your papers there is no physical law preventing them from doing so. It doesn&#x27;t matter, because while we live in real systems, we always act in imaginary ones. This is the Thomas Theorem:</p>
<blockquote>
<p>If men define situations as real, they are real in their consequences.</p>
</blockquote>
<p>It means that invisible forces, conjured by shared imagination, can sway and condition and even define us.</p>
<p>It also means that, often enough, changing yourself is as easy as reframing a thought. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/you-impersonate-what-you-think-you-are/schiele_reinerbub.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[A Fundamental Framing of Human Language]]></title>
            <link>https://aethermug.com/posts/a-fundamental-framing-of-human-language?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-fundamental-framing-of-human-language</guid>
            <pubDate>Thu, 04 Apr 2024 10:02:28 GMT</pubDate>
            <description><![CDATA[The very use of words implies a certain way to segment the world]]></description>
            <content:encoded><![CDATA[<p>If <a href="/posts/a-framing-is-a-choice-of-boundaries">a framing is a choice of boundaries</a>, and we&#x27;re forced to pick one in order to build our mental models, what do we usually choose? I don&#x27;t think we have a full, satisfactory answer from science. Finding a complete answer would require some method of probing people&#x27;s thoughts, of studying the way we <em>actually</em>—not <em>allegedly</em>—make things click and interact in our heads as we <a href="/posts/embedded-prophesy-devices">prophesy</a> external phenomena. We&#x27;re getting there. We aren&#x27;t quite there yet.</p>
<p>But there is one low-hanging fruit we can talk about, one that happens conveniently outside our skulls where everyone can observe it. That fruit is language. Words are how we transmit thoughts to other people, and as such they must reflect one framing or another. So what framings can we spot in the oceans of words we exchange?</p>
<p>Granted, language has no hard limits on length and is infinitely extensible. Any framing could, in theory, be represented in language. Language is also the <em>only way</em> we can teach each other novel framings, original ways to draw boundaries on the seamless mesh of interactions that we call the Universe. But language seems to be rooted in a fixed, fundamental framing that we all accept. It would be great if it didn&#x27;t, if it were fully malleable to its very core, but I doubt that&#x27;s possible. That language should employ a fixed framing at its base is a <em>technical constraint</em>. After realizing that, we can further refine our previous question: what framing are we absolutely, unavoidably forced to use when we use a word-based language?</p>
<p>We could call the answer to that the &quot;Fundamental Framing of Human Language&quot;.</p>
<p>For starters, languages—all languages of all cultures past and present—make an assumption: systems consisting of material aggregates that feature sharp interfaces with the surrounding fluid (air, water) are black boxes. Any spoken mental model has to use those objects as building blocks. Those systems get assigned a category of words that we call <em>concrete nouns</em>.</p>
<p>Like all choices of boundaries, this one is arbitrary. It&#x27;s true that within that material surface—the skin of a mammal, the smooth surface of a rock—the interactions are especially tight-knit and correlated, so that the sub-parts usually move together as a single block. However, those &quot;objects&quot; have roughly the same amount of interactions with the rest of the universe as they do within.</p>
<p>A trillion trillion air molecules pound on your skin every second, electromagnetic waves heat the pebble, other &quot;objects&quot; push and pull and chip away at the interface without end.</p>
<p>Assigning the static label &quot;hamster&quot; to a furry group of atoms isn&#x27;t as obvious a choice as we tend to think. A sentient being with a very different kind of intelligence, with a language fundamentally unlike ours, may not see any reason to assign labels the way we assign nouns to &quot;things&quot;.</p>
<p>The moment you <a href="/posts/in-japanese-you-need-a-dictionary-to-count-things">use a noun to refer</a> to an object, you commit to simulating a world where objects are the preferred building blocks—the black boxes.</p>
<p>What about all those nouns that don&#x27;t refer to physical, tangible things? How do words like <em>literature</em>, <em>birth</em>, and <em>Canada</em> fit into this framing? Here, I think, language employs another one of its sleight-of-hand tricks. When we observe phenomena that just won&#x27;t fit into a physical vessel, we pretend that an invisible, fuzzy, spread-out vessel exists anyway.</p>
<p>So the process is reversed. With physical objects, we usually start by noting their tangible boundaries and then try to figure out what they can do, i.e. what their <a href="https://planktonvalhalla.com/20230323-the-demon-and-the-tree/" rel="nofollow noopener noreferrer" target="_blank">Trees of Possibilities</a> look like. With abstract objects, we start with the Tree of Possibilities, and we make up a &quot;virtual object&quot; that we can conveniently refer to with a word.</p>
<p>Here the Tree of Possibilities <em>defines</em> the object, rather than being observed in it.</p>
<p>The virtual object we call <em>Canada</em> doesn&#x27;t exist in reality: it&#x27;s a pawn we invent in our language to succinctly refer to a bunch of correlated physical objects like territories, accents, and passports, and to lots of other virtual objects like laws and behaviors. Almost paradoxically, abstraction is the reification of correlation.</p>
<p>Of course, we don&#x27;t have nearly enough nouns for all the objects we want to label. No two things are exactly the same, so even memorizing a billion nouns wouldn&#x27;t be enough to convey the variety around us. So we have another category of words, called <em>adjectives</em>, that we use to qualify the nouns, to show how a specific object deviates from the average.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-fundamental-framing-of-human-language/hamster_1.webp" alt="Tree of possible outcomes for a &quot;hamster&quot; (noun only, thicker branches means more probable)..." title="Tree diagram showing some possible near-term outcomes of a hamster. Major ones are &#x27;move around&#x27; and &#x27;feed itself&#x27;, but two thin outcome branches are &#x27;make babies&#x27; and &#x27;die&#x27;."/><figcaption>Tree of possible outcomes for a &quot;hamster&quot; (noun only, thicker branches means more probable)...</figcaption></figure>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-fundamental-framing-of-human-language/hamster_2.webp" alt="...and for a &quot;sickly female hamster&quot; (two adjectives on the noun)." title="Same tree as before, but now the branches for &#x27;make babies&#x27; and &#x27;die&#x27; are relatively thicker, because we know that it&#x27;s a female and that it&#x27;s sickly."/><figcaption>...and for a &quot;sickly female hamster&quot; (two adjectives on the noun).</figcaption></figure>
<p>Put differently, using language means remembering a long but manageable list of fixed archetypes (nouns), then patching those archetypes up with caveats (adjectives) until we get to a passable approximation of reality. It&#x27;s a kind of &quot;perturbation theory&quot; of language.</p>
<p>Perturbation theory is a mathematical technique used in many fields of science. For example, celestial mechanics. The Kepler orbits we study in high school, with their neat ellipses, are a solved problem. We have a perfect equation that describes them without error. Except, Kepler&#x27;s equation only applies to the ideal case when you have exactly two point-like celestial objects. As soon as you begin considering more realistic conditions, the equation fails. Not a single one of the objects of the solar system traces precise ellipses. They just resemble ellipses at a distance.</p>
<p>To better predict these less-than-ideal quasi-Keplerian orbits, we can use perturbation theory. Its principle is very simple. You start with the easy, exact equation of the ideal case, then add to it new <em>small</em> terms (&quot;perturbations&quot;) that account for a <em>rough</em> approximation of a previously ignored factor. For example, you begin with the idealized solution for the orbits of the Earth and Moon, then you add a <em>small and rough</em> term for their non-spherical shapes, then one for the gravity of the Sun, then more for the other planets, and so on.</p>
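<p>Schematically (a minimal sketch of my own, not from the original essay), perturbation theory writes the true solution as the ideal solution plus a series of ever-smaller correction terms, where <em>&#x3B5;</em> is a small parameter measuring the strength of the perturbation:</p>

```latex
% True orbit = ideal Kepler solution + ever-smaller corrections
% x_0(t): exact solution of the ideal two-body case
% \epsilon: small parameter (strength of the perturbing factor)
x(t) = x_0(t) + \epsilon\, x_1(t) + \epsilon^2 x_2(t) + \cdots
```

<p>Each extra term refines the prediction, but only as long as every perturbation stays small.</p>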
<figure><img src="https://aethermug.com/assets/posts/a-fundamental-framing-of-human-language/pat-whelen-Lzcg10-pX-E-unsplash.webp" alt=""/></figure>
<p>In this sense, the Fundamental Framing of Human Language uses its own version of perturbation theory to approximate reality. To speak with each other, we postulate &quot;perfect&quot; archetypes like <em>platypus</em>, <em>prow</em>, and <em>stream</em>—each carrying with it a default Tree of Possibilities—then we apply &quot;small and rough&quot; perturbations to indicate deviations from that archetype: the platypus is <em>excited</em>, the prow is <em>wooden</em>, the stream is <em>cold</em>.</p>
<p>If nouns, with the help of adjectives, define the units of interaction for our shared mental models, <em>verbs</em> can indicate the outcomes of those interactions. Dynamic verbs—those that involve action and change, rather than state—answer the question, <em>what outcome of the Tree of Possibilities actually materialized?</em></p>
<p>When I say &quot;the platypus snarled&quot;, I&#x27;m selecting one of the many branches in the Tree of Possibilities attached to the noun <em>platypus</em>.</p>
<p>The platypus could have also jumped, swum, dived, hunted, foraged, burrowed, dug, waddled, climbed, groomed, died, and more. But in this case, I&#x27;m telling you that it snarled. This has an effect on your mental simulation, because you know the verb &quot;snarl&quot; and you know the kind of transformation the &quot;platypus&quot; system has gone through. It&#x27;s not just a generic platypus: it&#x27;s a platypus that has snarled. You can update your mental Tree of Possibilities for it, and your ability to predict its next actions increases a little.</p>
<p>There would be something to say about other types of verbs, about adverbs, and all the other elements of a language, but I&#x27;m going to gloss over them. I won&#x27;t attempt a full theory of language beyond this point. We&#x27;re just taking the &quot;framings = choices of boundaries&quot; framing for a ride, to see where it might lead us.</p>
<p>I&#x27;m also not trying to make statements about how this Fundamental Framing of Human Language depends on and affects the way we think. That is a very interesting topic for another time.</p>
<p>But I think we already have enough interesting, if crude, tools to attempt some observations about <a href="/posts/philosophy-is-the-battle-against-the-bad-framing-kraken">the failure modes of language</a>, so here it goes.</p>
<p>First, concrete nouns are a pact with the devil. With nouns, we gain the <em>extreme</em> convenience of indicating things quickly with a couple movements of our lips or fingers, and they work fine 99% of the time. In exchange for that, the remaining 1% of the time is incredibly confusing for us.</p>
<figure><img src="https://aethermug.com/assets/posts/a-fundamental-framing-of-human-language/shannon-potter-bHlZX1D4I8g-unsplash.webp" alt=""/></figure>
<p>When your partner dumps you saying &quot;I feel that we&#x27;re not compatible&quot;, if that baffles you it&#x27;s not because such a statement is impossible. It baffles you because you&#x27;ve been treating them as a stand-alone, autonomous system with a boundary coinciding with their skin, instead of the open system of far-reaching interrelations and feedback loops that they are.</p>
<p>Confusing the boundaries prescribed by a concrete noun with the actual target of your emotions cursed you with a huge blind spot. The Tree of Possibilities you want to simulate depends on much more than that physical being you can touch and smell (hint: it also depends on your own actions).</p>
<p>Second, <em>abstract</em> nouns exist only in our heads, but language treats them as equivalent to physical things. It&#x27;s only natural that this leads to more confusion. We use abstract concepts to more conveniently explain processes and correlations, but we often fall into the trap of thinking that <em>there is actually something out there that does those things</em>. There isn&#x27;t. Sorry, Canada.</p>
<p>This is why arguing over the definition of &quot;life&quot; in abortion debates, &quot;marriage&quot; in same-sex marriage debates, &quot;art&quot;, &quot;justice&quot;, and so on is a waste of breath. When you get far enough from the original context, the important phenomena and their correlations change, so you can&#x27;t expect the original word to keep making sense.</p>
<p>Even in the case of adjectives, the third category I mentioned above, the limits are easy to spot. In any field where it is applied, perturbation theory fails as soon as the perturbation stops being &quot;small and rough&quot;.</p>
<p>If you want to predict the trajectory of a satellite as it re-enters the Earth&#x27;s atmosphere, you can&#x27;t treat air drag as an approximate perturbation on top of a perfect elliptical orbit. You need the full, detailed equations describing air drag, and you must solve them the hard way alongside the effects of gravity. Similarly, no amount of adjectives will properly convey to you what a naked mole rat might do unless you observe it yourself for a while. We say that a picture (better still, a <a href="https://www.youtube.com/watch?v=kfzhoz8JDwg" rel="nofollow noopener noreferrer" target="_blank">documentary</a>) is worth more than a thousand words for a reason.</p>
<p>Besides, perturbation theory also breaks down in chaotic systems. When the outcomes are highly sensitive to initial conditions, even a tiny perturbation would need to be highly accurate to be of use. Since we live surrounded by chaos, adjectives often turn out to be sorely insufficient.</p>
<p>Fourth and last on my rough list are (dynamic) verbs. On the one hand, their dynamic nature means that they&#x27;re probably the most &quot;realistic&quot; aspect of language, because the universe never stays still: if everything is changing all the time, transformative verbs are more meaningful than static nouns.</p>
<p>On the other hand, verbs have to co-exist with nouns and play by their rules. The risk is that a verb attributes too much causal power to its arbitrarily-bounded subject, ignoring all the tangled external causes spread wide around it.</p>
<p>If I only tell you that the parliament passed a horrible law, by default you&#x27;re forced to assume that the passing of the law is all the parliament&#x27;s fault, which it almost certainly isn&#x27;t. My language is hiding things from you. The burden is on <em>you</em> to remember that there are many other factors at play, like the pressure from voters and lobbyists, existing issues, and whatnot. Language doesn&#x27;t support conveying such nuances in a single sentence.</p>
<p>I find that these hand-wavy interpretations for nouns, adjectives, and verbs more or less match the problems we actually have with language. But if this exercise at reframing language seems too ambitious, it&#x27;s because it probably is. Please don&#x27;t take it too seriously. It may fall apart in certain applications, and I didn&#x27;t even back it up with solid empirical evidence. To the extent that it <em>might</em> produce a few true statements, none of them are especially new.</p>
<p>The point is not to provide a novel understanding of language, but to show that the mere use of a language, <em>any</em> language, comes with batteries included and hard-wired, whether we like it or not. The real &quot;Fundamental Framing of Human Language&quot; might be more sophisticated, more complex, or very different from the one I proposed above, but <em>some</em> framing must be there, because of the very nature of human language. That realization alone, I think, is worth something. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-fundamental-framing-of-human-language/michael-jerrard-Vv019m_ClpY-unsplash.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Don't Think and Write, They Say]]></title>
            <link>https://aethermug.com/posts/don-t-think-and-write-they-say?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/don-t-think-and-write-they-say</guid>
            <pubDate>Thu, 28 Mar 2024 07:46:47 GMT</pubDate>
            <description><![CDATA[Online writing beyond bowel movements.]]></description>
            <content:encoded><![CDATA[<p>Even the ancients knew that writing helps you think and understand. Seneca recommended an alternation between reading and writing, &quot;so that the fruits of one&#x27;s reading may be reduced to concrete form by the pen.&quot; The Renaissance statesman and philosopher Michel de Montaigne—the father of the modern essay genre—said that writing essays was his way of studying himself. There are also the famous quotes by Forster—&quot;how do I know what I think until I see what I say?&quot;—and Didion—&quot;I write entirely to find out what I&#x27;m thinking, what I&#x27;m looking at, what I see and what it means. What I want and what I fear&quot;—so you can pick the flavor you fancy the most.</p>
<p>More recently, Paul Graham has made this point repeatedly, for instance in his <a href="https://paulgraham.com/writing44.html" rel="nofollow noopener noreferrer" target="_blank">&quot;Writing, Briefly&quot;</a> mini-essay:</p>
<blockquote>
<p>Writing doesn&#x27;t just communicate ideas; it generates them. If you&#x27;re bad at writing and don&#x27;t like to do it, you&#x27;ll miss out on most of the ideas writing would have generated.</p>
</blockquote>
<p>I&#x27;ve seen this over and over myself with my own work. I had vague ideas floating around in the back of my head for years, but for three decades I did nothing about them. They were disconnected sketches that would pop up in my consciousness and immediately disappear, never to be seen again.</p>
<p>Then, the instant I found a convenient way to write things down in an organized way (the note-taking tool <a href="https://obsidian.md/" rel="nofollow noopener noreferrer" target="_blank">Obsidian</a>), the ideas started pouring out in text form at an astounding rate. I began writing an amount of notes equivalent to more than a full book every year, without even trying to be prolific.</p>
<p>Through writing, I can not only make my thoughts more concrete; linking and assembling them into larger, novel concepts has also become much easier than I ever hoped. New, sharp ideas sometimes emerge as if by themselves, all because of the habit of typing things down.</p>
<p>Writing for oneself is invaluable, but writing for others is an even stricter test. Every <a href="https://planktonvalhalla.com/" rel="nofollow noopener noreferrer" target="_blank">Plankton Valhalla</a> essay I write—and I consciously call them &quot;essays&quot; for this very reason—is a titanic effort for me. I only start a new piece when I know exactly what I want to write. I always have lots of notes about each idea I want to convey in the essay. Yet the work of piecing them together into something coherent and easy to follow is full of false starts and of going back to the drawing board.</p>
<p>It is not rare for me to take a PV essay through three or four or even five drafts. In one case, I&#x27;ve taken more than six months to finish an essay, because I realized that my <a href="/posts/a-framing-is-a-choice-of-boundaries">framing</a> wouldn&#x27;t work—either in general or for that website&#x27;s intended audience—so I had to radically re-frame it a couple of times until I was happy with it.</p>
<p>That writing can be slow, painful and emotionally exhausting is another old story we&#x27;ve all heard and witnessed, but I think it&#x27;s another face of the same coin. Thanks to those laborious months of rewriting, I now have much better framings in my hands to talk about those complex ideas.</p>
<p>This feels very healthy. If you like or need to think deeply, all you have to do is spit some blood writing. But—as is often the case—there are strong incentives in the <em>unhealthy</em> direction. Listen to the online gurus, and you&#x27;ll soon find that the First Commandment of Online Writing is to <em>publish as fast and as regularly as you can</em>. So Max Read <a href="https://maxread.substack.com/p/matt-yglesias-and-the-secret-of-blogging" rel="nofollow noopener noreferrer" target="_blank">tells us</a>:</p>
<blockquote>
<p>The internet and feed-based social platforms have constructed an insatiable demand for content, so if you can produce content mechanically, without requiring expensive resources (such as time, wit, or subject-specific knowledge), you’re in an excellent position to take advantage. But most importantly, this demand is so insatiable that there is currently no real economic punishment for content <strong>over</strong>production. You will almost never lose money, followers, attention, or reach simply from <strong>posting too much</strong>. [Emphasis theirs.]</p>
</blockquote>
<p>In other words, if you want to be successful with your writing, don&#x27;t waste your time thinking while you type: just type something, <em>anything</em>, and throw it at your readers. They won&#x27;t mind.</p>
<p>I find this mindset mildly infuriating, because it seems to undermine the wonderful thought-focusing side of writing that I described above. At the same time, I find it infuriating because he&#x27;s right. It&#x27;s true, as Read writes in the same post, that the most successful online writers all share the trait of logorrhea.</p>
<p>In a footnote, Read explains that &quot;regularity&quot; expresses the skill necessary for an online writer better than &quot;consistency&quot;:</p>
<blockquote>
<p>“consistency” sort of implies quality to me, and reliable quality is of only passing importance compared to reliable production. “Regularity,” on the other hand, reminds me, appropriately, of bowel movements.</p>
</blockquote>
<p>This reminds me of a passage from one of <a href="https://en.wikipedia.org/wiki/Natsuhiko_Kyogoku" rel="nofollow noopener noreferrer" target="_blank">Natsuhiko Kyogoku</a>&#x27;s novels (a passage that I read some 15 years ago and that I&#x27;ll have to paraphrase because I can&#x27;t find it any more). In it, the protagonist—a minor post-war author—reflects on the boastful claim of one of his fellow writers, who told him that he wrote so regularly that his work came as automatically as his bowel movements. The protagonist mulled it over for a while (and not without some envy), until he came to an unexpected realization: &quot;isn&#x27;t that equivalent to admitting that you&#x27;re selling your feces to your readers?&quot;</p>
<p>To be fair, Read is writing in the very specific context of &quot;how to aggressively monetize your writing&quot;, preaching to an audience of aspiring online creators, so he has to approach it from a pragmatic, businesslike angle. &quot;<em>Thinking is great and important</em>,&quot; he seems to be saying, &quot;<em>but it&#x27;s not going to buy your baby&#x27;s diapers if people can&#x27;t find what you wrote.</em>&quot;</p>
<p>He would be right to say that. The reason that kind of reasoning annoys me is not that Max Read is a corrupted soul or a liar (he seems like a nice guy and writes lots of good stuff). The reason it annoys me is that he&#x27;s plainly exposing a sad reality about the world: the internet—and perhaps the world of publishing at large—is built to punish thoughtful writing. Being prolific, not reflective, is what earns you an audience and possibly your bread. Implicit in this seems to be the observation that <em>those successful logorrheic writers just happen to have interesting things to say without the need to spend much time thinking about them</em>.</p>
<p>The cognitive dissonance within me, not Max Read, is what bothers me.</p>
<p>But there&#x27;s another side to this realization, something that affects not only the aspiring writers but all readers on the planet. It&#x27;s that we, the readers, are the victims of a race to the bottom for the amount of re-thinking and re-evaluating of one&#x27;s ideas.</p>
<p>A smart, prolific writer will always have some interesting idea off the cuff, something that they can write in half a morning and receive kudos for. If <em>that</em> was interesting, imagine what they could have produced if only they had spent five, ten, twenty more hours thinking, doubting, musing about it.</p>
<p>Of course, you might say, they can always re-think and deepen their thoughts later on in their rich torrent of future writing. I agree that could happen, at a certain level, and for certain writers. But it&#x27;s against the social side of human nature. People hate admitting they were wrong, and smart, successful writers tend to hate it even more. How many popular authors have you seen write &quot;thinking about it, my previous opinion was superficial and short-sighted, so I&#x27;ve re-thought it from scratch&quot;? There are exceptions (Wittgenstein famously did just that), but that&#x27;s what they are: exceptions.</p>
<p>Even assuming, however, that the best fast writers were all eager to denounce their own mistakes and disown their old thoughts (aka &quot;working in public&quot;), that might not be very good news for us readers. Sure, it might be interesting and instructive to see the evolution of an idea, and small tidbits of past iterations may still bring usable insights even after the core ideas have been rendered obsolete. But I&#x27;m not sure it&#x27;s worth the opportunity cost.</p>
<p>Having all writers iterate in public would mean that we&#x27;re forced to read their half-baked, interesting-but-provisional views most of the time, only to be told, later on, that we might as well not have read them at all. And for what? Only because those writers want to jam a foot in the door, because they want to avoid being buried too deep under the daily steaming pile of other people&#x27;s half-baked writing. It&#x27;s nothing short of a tragedy of the commons. All roads seem to lead to the intellectual&#x27;s equivalent of TikTok.</p>
<p>Yet they&#x27;re right. People&#x27;s memories <em>are</em> ultra-short, and if you, the writer, don&#x27;t publish something, the readers&#x27; eyeballs <em>will</em> read what another, less inhibited writer has published. You should, indeed, publish more often if you want people to read what you write. So what can the writer who wants to write thoughtfully do in all this?</p>
<p>Short of rebelling against the system (ping me if you have ideas), the aspiring thoughtful writer can <a href="https://aethermug.com/posts/a-framing-is-a-choice-of-boundaries" rel="nofollow noopener noreferrer" target="_blank">reframe</a> things. I especially like the framing given by Visakan Veerasamy in some of his Twitter threads.</p>
<blockquote>
<p>the cool thing about writing is that very often the coolest things you say are not the things you&#x27;re trying to say, but the things you accidentally-peripherally invent on the fly to support you trying to say the thing you&#x27;re trying to say</p>
<p>— <a href="https://twitter.com/visakanv/status/1085352294290415616" rel="nofollow noopener noreferrer" target="_blank">Visakan Veerasamy</a></p>
</blockquote>
<p>Thinking deeply while writing might be great for you, but it&#x27;s not necessarily or always how the <em>good writing</em> happens.</p>
<blockquote>
<p>If you want to write well, I think you may need to spend more time identifying good writing than actually writing.</p>
<p>— <a href="https://twitter.com/visakanv/status/1088347054974201858" rel="nofollow noopener noreferrer" target="_blank">Visakan Veerasamy</a></p>
</blockquote>
<p>Finding the gems in the midst of things you&#x27;ve already written is faster than iterating through a single piece until it&#x27;s perfect.</p>
<blockquote>
<p>There’s almost always something interesting about everything if you find the right angle on it. The cool thing is, you don’t actually need to find the smart angle! Writing is cheap, basically free. Write all the angles. Whatever comes to mind, whatever tickles your fancy. Write stupid, edit smart. By using your own taste to retrospectively identify what’s good.</p>
<p>— <a href="https://twitter.com/visakanv/status/1088347054974201858" rel="nofollow noopener noreferrer" target="_blank">Visakan Veerasamy</a></p>
</blockquote>
<p>Visa (the author of these quotes) produces a stunning amount of insights, mostly on Twitter but also in the form of books, videos, and more. He&#x27;s clearly doing something right. What he&#x27;s saying is superficially similar to Max Read&#x27;s bleak message: you have to write <em>a lot</em> and <em>fast</em>. Maybe too fast to think very deeply about it. But his explanation is very different. He proposes that writing fast can still lead to good writing—not out of an innate ability to spit out good ideas, but out of serendipity, contingency, and good taste.</p>
<p>More than that, maybe thinking <em>too</em> deeply about what you&#x27;re writing isn&#x27;t a guarantee of good results. There is a point beyond which thinking more gives you diminishing returns. You get to the sweet spot by writing fast enough to roughly explore a vast space of ideas, then you learn to filter the good ideas—and only those you actually want to develop—at the editing stage. No need to throw stuff that you know is crap at your readers, trusting that the next micro-video they watch will make them forget about it. With this approach, you can maintain a certain level of quality and polish, but it&#x27;s probably going to take less effort than writing one painstaking, final piece at a time.</p>
<p>There is one more, complementary, approach that might be worth trying, though. Why not look for a middle ground between the fast-and-mechanical and the tortuous-and-deep? Why not do the thinking on the page, not as a long series of pedantic re-writings, but as a relatively self-contained &quot;stream of reasoned thought&quot;? Begin with a question, try to answer it, try to answer it again and again, all in the same text, discovering as you go where it will take you. And don&#x27;t expect to always find a definite answer.</p>
<p>This way of writing might not be suitable for all contexts and audiences, but it&#x27;s faster and allows for some healthy evolving uncertainty. In fact, I tried that on this very page, and I&#x27;d love to know from you how it went. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/don-t-think-and-write-they-say/claire-mueller-unsplash.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Aphantasia]]></title>
            <link>https://aethermug.com/posts/aphantasia?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/aphantasia</guid>
            <pubDate>Fri, 15 Mar 2024 06:20:49 GMT</pubDate>
            <description><![CDATA[A subjective description of the experience]]></description>
            <content:encoded><![CDATA[<p><em>Note: copied almost as-is from a <a href="https://threadreaderapp.com/thread/1679483345552707586.html" rel="nofollow noopener noreferrer" target="_blank">Twitter thread</a> I shared last year. I&#x27;m putting this here for archival purposes.</em></p>
<p>I have aphantasia, meaning that I can&#x27;t form mental images at will. Nor, in my case, sounds, smells, flavors, tactile sensations, or emotions. When I close my eyes, I enter a blank world. Here I&#x27;ll try to express what that is like for those who don&#x27;t have aphantasia.</p>
<figure><img src="https://aethermug.com/assets/posts/aphantasia/mri.mp4" alt=""/></figure>
<p>I&#x27;ve had the opportunity of participating in a research project on aphantasia at a major university. By &quot;participating&quot;, I mean &quot;letting them take detailed MRI snapshots of my brain as I look at pictures, try to imagine killer whales, and other things that brains do&quot;.</p>
<p>I&#x27;ve also been engaging with some aphantasia communities online for a few months and showering questions on my friends and family members who don&#x27;t have aphantasia. These are my notes on what I&#x27;ve observed.</p>
<p>What is it like to not have a mind&#x27;s eye (and &quot;mind senses&quot;)? Does it mean that aphants (slang for people with aphantasia) have no imagination? Not at all!</p>
<p>Inside the fMRI machine, I&#x27;ve been asked to do one exercise more than any other: I close my eyes and try imagining a snowmobile, then a top hat, then a camel, etc. I&#x27;d never tried to imagine specific things so intensely before, so I took the opportunity to study my experience.</p>
<p>(Disclaimer: what follows is my own experience and interpretation. From my conversations with other aphants I&#x27;ve found that, even within this minority, there&#x27;s a lot of variation in experience. I can&#x27;t say I represent all of them. Also, I know very little neuroscience!)</p>
<p>Anyway, describing this stuff is exceedingly difficult, and I can only begin to do it now after months of daily, intense thinking and self-observation. So here we go.</p>
<p>When I actually see a cat with my eyes, something like this is happening.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/aphantasia/process_1.webp" alt="" title="Photons on retina lead to cat image forms in the brain lead to cat qualia lead to conscious feelings/reactions."/></figure>
<p>(Here by &quot;qualia&quot; I mean the low-level sensation of perceiving something, i.e. &quot;ah, I&#x27;m seeing a cat&quot;, with all baggage that my past experiences of cats might have attached to it. This definition might be a bit different from that used by some philosophers.)</p>
<p>Now, when a non-aphant imagines a cat, something like the following seems to happen. The photons aren&#x27;t there, but the experience starts from the mental image (or at least, it happens in parallel with the qualia).</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/aphantasia/process_2.webp" alt="" title="Cat image forms in the brain lead to cat qualia lead to conscious feelings/reactions."/></figure>
<p>On the other hand, when I try to imagine a cat, the following seems to be happening. I still get the &quot;ah, I&#x27;m seeing a cat&quot; feeling, I still recall a lot of concepts related to cats, and I may even half-believe that I have a cat in front of my closed eyes. But I have no image in my head.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/aphantasia/process_3.webp" alt="" title="Cat qualia lead to conscious feelings/reactions."/></figure>
<p>The same is true if you replace &quot;cat&quot; with &quot;house&quot;, &quot;beach&quot;, &quot;my mother&#x27;s face&quot;, etc. I can imagine all those things in the sense that I focus on them and I can retrieve lots of information about them, <em>except</em> the sensory data. Also, the data feels very faint, distant.</p>

<p>Is it all completely abstract, then? Not exactly. Not in my case, at least. I have a rather well-developed spatial sense, meaning that I can remember the relative positions of things in space, and their rough proportions. I can position myself relative to the things I&#x27;m imagining and imagine myself traveling along them, turning them around in space, etc. I can even orient my (closed) eyes in the direction of any part of an object.</p>
<p>So when I imagine something physical like a cat, I don&#x27;t see it, but I &quot;feel&quot; it as a collection of spatial relations. Something like this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/aphantasia/cat_1.webp" alt="" title="Words floating in space for basic parts of the cat: body, head, pointy ears, tails, and the words &#x27;cat legs&#x27; below it."/></figure>
<p>(note: I don&#x27;t literally see words/letters; and yes, the legs are literally a single fuzzy concept instead of four, until I focus on them.)</p>
<p>So I can tell you (very roughly) how far from me I&#x27;m imagining it, whether its head is pointing left or right, and how it is moving. Of course, I don&#x27;t see any of that, I have it as a quasi-3D spatial model only.</p>
<p>The thing is, this imagined object has almost no details. I can see more details, but I have to work to summon them consciously. I can think &quot;now I&#x27;m going to focus on the head of the cat... hmm, a cat has whiskers, its nose is triangular and its cheeks are like two round furry pads&quot;. So my spatial model fades into this:</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/aphantasia/cat_2.webp" alt="" title="The same as the last image, but focused on the head: triangular nose, whiskers, cheeks."/></figure>
<p>In other words, I can focus my consciousness on certain parts of the object I&#x27;m imagining, and actively recall some of their sub-parts and their relative positions. But they come a few at a time, and they&#x27;re limited to the details that I have consciously recalled.</p>
<p>This spatial model isn&#x27;t an organic whole, each part smoothly attached to the next: it&#x27;s a patchwork of loosely linked &quot;islands&quot; of concepts floating in a certain pattern. And it lacks so many details, big and small, that I&#x27;m not focusing on at the moment.</p>
<p>Often, after trying to imagine an object, I look up an image of it (I just did that for the cat) and slap my forehead: &quot;ah, of course a cat also has eyes!&quot; I knew that all along, but I didn&#x27;t focus my attention on those parts, so they were absent from my previous spatial model.</p>
<p>If I try again now with my eyes closed, my imagination will be like this, because those details are fresh in my memory.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/aphantasia/cat_3.webp" alt="" title="The same as the previous one, but with the words &#x27;cat eye&#x27; repeated twice near the center."/></figure>
<p>I&#x27;m good at drawing, but I have to either look at a reference, or do several sketch iterations to gradually turn my un-seen spatial model into a coherent picture on paper. (Many aphants claim they can&#x27;t draw at all, while others are great artists, so lots of variation here too.)</p>
<p>That&#x27;s all for now: aphants can imagine things just fine, but it seems to be different from how others do it. My hope is that by studying what does and doesn&#x27;t differ between aphants and non-aphants, researchers will be able to deepen our understanding of everyone&#x27;s brains. ●</p>

<p><em>This post is included in the <a href="/posts/a-list-of-introspective-descriptions">List of Introspective Descriptions</a>.</em></p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/aphantasia/void.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[A Framing Is a Choice of Boundaries]]></title>
            <link>https://aethermug.com/posts/a-framing-is-a-choice-of-boundaries?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/a-framing-is-a-choice-of-boundaries</guid>
            <pubDate>Thu, 14 Mar 2024 09:51:00 GMT</pubDate>
            <description><![CDATA[One way or another, you've gotta chop it up.]]></description>
            <content:encoded><![CDATA[<blockquote>
<p><strong>Neo:</strong><br/>
What are you trying to tell me? That I can dodge bullets?</p>
<p><strong>Morpheus:</strong><br/>
No, Neo. I&#x27;m trying to tell you that when you&#x27;re ready, you won&#x27;t have to.</p>
<p>— The Matrix, 1999</p>
</blockquote>
<p>In <a href="/posts/embedded-prophesy-devices">Embedded Prophesy Devices</a> I wrote about internal models. We, the living organisms, always simulate internally the bits of the external world that matter to us, and act based on the results of our simulations. We take sensory inputs, feed them into our internal models, and see what&#x27;s likely to happen. It doesn&#x27;t matter if we have a nervous system or not, a version of this process happens in every living thing, from microbes to insect colonies to tenured university professors.</p>
<p>That last group is especially interesting—not just the professors, but <em>homo sapiens</em> in general. The jury is still out on other animals, but humans certainly have what we can call &quot;mental models&quot;, which is a very sophisticated kind of internal model.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-is-a-choice-of-boundaries/20240314_model_hierarchy.webp" alt="Non-living things: no models. Living things: internal models. Humans: mental models."/><figcaption>Non-living things: no models. Living things: internal models. Humans: mental models.</figcaption></figure>
<p>It&#x27;s difficult to reason about mental models, because we usually equate them to the things they&#x27;re made to simulate: in everyday speech we say &quot;it&#x27;s going to rain,&quot; not &quot;my mental model predicts rain,&quot; although the latter is arguably more accurate. This conflation of model with reality only happens with mental models, because the &quot;non-mental&quot; kind of model only drives unconscious, automatic behaviors. The moth that flies into a street lamp at night isn&#x27;t thinking &quot;hmmm, I&#x27;m finally making it to the moon!&quot; Instead it acts automatically on the output of its internal simulation, which (implicitly) assumes that the moon is very very far away, so that flying at a fixed angle towards it will allow the moth to fly straight for long distances.</p>
<p>We people, on the other hand, actually think that the world we simulate <em>is nothing but reality</em>. Even though our mental simulations run entirely inside a dark and moist environment covered by flesh and bones, receiving nothing but electrochemical signals from the outside, we somehow convince ourselves that we&#x27;re experiencing reality directly, whatever that means. Said like that it sounds silly, but hey, it works just fine. In almost everything we do in life, pretending that our models are the actual things they refer to makes everything easier with no big downside.</p>
<p>But there <em>is</em> a subtle catch. By convincing ourselves that we&#x27;re reasoning about the world directly, we usually fail to see the huge difference in how external reality and mental models of that reality work. Outside, everything follows the laws of physics. Inside, our mental models use simplifications and heuristics to simulate the same patterns.</p>
<p>The thing we call &quot;weather&quot;, for example, is a process involving an unimaginable number of interactions all happening at the same time, from the subatomic level to large-scale wind and cloud phenomena. The &quot;weather&quot; model in your brain, however, can&#x27;t simulate every air molecule bouncing around the atmosphere in order to make predictions about the weather. It doesn&#x27;t have nearly enough neurons for that, and it doesn&#x27;t need to. It only tracks a small number of variables, like the color of the sky, the humidity of the air, and especially what the weather app on your smartphone is telling you will happen. Based on these few inputs, your model makes a prediction about what are the most likely things that can happen (in something like a <a href="https://planktonvalhalla.com/20230323-the-demon-and-the-tree/" rel="nofollow noopener noreferrer" target="_blank">tree of possibilities</a>) and you use that to decide whether you should take your umbrella out with you or not.</p>
<p>In order to be usable, mental models must reduce most parts of the things they model to structure-less &quot;points&quot; with predefined behavior. Only then can they focus on simulating the interactions <em>between</em> those points, ignoring anything that happens inside them. In other words, mental models are networks of black boxes.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-is-a-choice-of-boundaries/20240314_bicycle_model.webp" alt="An actual bicycle is a terribly complicated system. A bicycle in your head is cut up into very few &quot;black boxes&quot;, each connected with the others and each with its own tree of possibilities for the expected conditions."/><figcaption>An actual bicycle is a terribly complicated system. A bicycle in your head is cut up into very few &quot;black boxes&quot;, each connected with the others and each with its own tree of possibilities for the expected conditions.</figcaption></figure>
<p>At least, something very close to that must be happening, based on the fragmentary evidence we have from psychology. Strategies to plug models we know well into new models to act as black boxes, like <a href="https://www.goodreads.com/book/show/34459.Metaphors_We_Live_By" rel="nofollow noopener noreferrer" target="_blank">metaphors</a> and analogies, seem to be central to the way we think. And we now know that <a href="https://www.goodreads.com/book/show/11468377-thinking-fast-and-slow" rel="nofollow noopener noreferrer" target="_blank">heuristics</a> play a major role in our perception and decision making. We even have <a href="https://en.wikipedia.org/wiki/Predictive_coding" rel="nofollow noopener noreferrer" target="_blank">promising theories</a> for the concrete mental process of our simulation of reality.</p>
<p>Seeing mental models as made of black boxes explains a few mysteries:</p>
<ol>
<li>Why we can make predictions about systems that result from the interaction of Avogadro-scale numbers of particles.</li>
<li>Why we imagine riding a car without knowing how it works, or make someone laugh without understanding the neuronal processes of hilarity, or sing without ever having seen a vocal cord.</li>
<li>Why we tend to see everything as &quot;things&quot; doing stuff, rather than the <a href="https://planktonvalhalla.com/20220829-the-world-is-a-network/" rel="nofollow noopener noreferrer" target="_blank">uninterrupted network of interactions</a> that it really is.</li>
</ol>
<p>There is a problem with this view: which interactions should we put <em>inside</em> a mental black box, so that we can safely ignore them, and which should we keep as interactions <em>between</em> the black boxes? Or, to put it differently, <strong>where should we draw the boundaries between things?</strong> Unlike most other species, we have a certain freedom to <a href="/posts/designing-for-emergence">draw and redraw the boundaries</a> of our black boxes to fine-tune how effectively our mental models simulate reality.</p>
<p>Suppose, for example, that one day you decide to read up on the Israel-Palestine conflict, because you&#x27;ve heard about it a lot but have no idea how something like that might be going on. You might begin with a hazy mental model of &quot;Population 1 black box&quot; (Israel) and &quot;Population 2 black box&quot; (Palestine), each with its religious beliefs and claims to a piece of land. This isn&#x27;t enough to understand why the conflict could be so violent and bitter and persistent. Can&#x27;t they just live together in that place, each side minding their own rites? It doesn&#x27;t make any sense.</p>
<p>Then you read about how Jews have been treated over the centuries, and the way the state of Israel was established in 1948, and how hundreds of thousands of Palestinians were displaced in the process. This leads you to draw a few new lines in your mental model: other countries, like the UK and the US, have had a role in the conflict, so you can&#x27;t just treat them as part of a generic &quot;rest of the world&quot; black box. You need a separate black box for each of them, because their direct interactions with the initial two black boxes are specific and important. Then you might learn that there are factions and organizations <em>within</em> both sides of the conflict—for example the militant organization Hamas, as related to but distinct from the broader Palestinian population. Now you have to split things up again, with a black box for Hamas and another one for the Palestinians who are not directly affiliated with it. And so on and so forth. As you learn more about the geopolitics of the war, you keep revising your boundaries, finding new things you should be tracking and simulating in order to make better predictions on the matter. Your predictions will never be perfect, but they can be better than before.</p>
<p>We can call a specific choice of boundaries a <em>framing</em>. Every model we use in our heads is built on <a href="/posts/there-is-thinking-and-there-is-thinking-and-there-is-thinking">one framing or another</a>. As with our use of models, framings are absolutely necessary to our interpretation of reality, but we&#x27;re not often conscious of them. Most of the time, they are invisible to us, like the frame of your glasses becomes invisible after you&#x27;ve worn them long enough. When you assign a name to something—like I just did by defining &quot;framing&quot;—you&#x27;re updating a framing: you&#x27;re changing the number and the specifics of the possible interactions you can simulate with your mental models.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/a-framing-is-a-choice-of-boundaries/Tectonic_plates_(2022).svg" alt="Some recent boundaries."/><figcaption>Some recent boundaries.</figcaption></figure>
<p>New framings pop up when people make important realizations. Before the discovery of plate tectonics, geologists had no reason to divide different parts of the Earth&#x27;s crust into sub-regions. Everything was one block in their mental models, and as a consequence of that lots of things didn&#x27;t make sense. Then, over the course of several decades, they acquired a new framing, in which the crust is actually a network of many continent-sized plates, each capable of moving somewhat independently, but all connected to each other with certain mechanisms. New boundaries were drawn inside what used to be the &quot;Earth&#x27;s crust black box&quot;, and suddenly things made more sense: mountain formation, earthquakes, and volcanoes now fit in the framework as different outcomes of the same tectonic processes.</p>
<p>You can see all big scientific revolutions in this light: new &quot;things&quot; or terms were introduced, new relationships and processes extracted out of black boxes and into &quot;plain sight&quot;, so that we could employ them in our models (theories). Think about the novel concepts of &quot;germ&quot; and &quot;immune system&quot; introduced by the Germ Theory of Disease, and the words &quot;natural selection&quot; and &quot;fitness&quot; that Darwin drew up, and so on. In every case, new boundaries led to new interactions being examined, and better models of reality.</p>
<p>Sometimes two framings seem to differ not in where you&#x27;re drawing the boundaries between black boxes, but in the relationships between those black boxes. The &quot;glass half full&quot; vs &quot;half empty&quot; trope is really about two different framings that draw the same water/air boundary, but give different causal powers to each side of that boundary. Is it the water-filled part of the glass that is more consequential, with your ability to drink it and hydrate yourself, or is it the empty part that you should focus on, as an omen of what is unavailable to you?</p>
<p>I think that these are still different ways to draw boundaries. Locally, when you look only at the glass, it appears <em>as if</em> the water/air surface is the same in both the pessimistic and the optimistic views. But expand your field of view and you&#x27;ll find that those are but little segments of a much bigger boundary, one that encompasses very different things in each case.</p>
<p>The &quot;half-full&quot; view draws a boundary around myriad things and events that to you, taken together, mean that life is full of opportunities. In this view, the other things outside of that boundary are separate things, devoid of an overarching meaning or coherent powers. When you see the glass as half full, the empty half is &quot;just a bit of air&quot;, with no relation to yesterday&#x27;s inopportune downpour or to your flat tire this morning. On the other hand, a pessimist sees the half-empty part of the glass, the rain, and the flat tire as parts of the same black box—one that is working against them to make them unhappy—and the water in the glass is &quot;just a bit of water&quot;, unable on its own to comfort them even a little.</p>
<p>Another example of a boundary that seems to stay the same is the difference between the heliocentric and the geocentric models of the solar system. You could argue that Copernicus introduced no new &quot;things&quot;, no new black boxes to simulate independently. The Sun-Earth boundary appears to have remained the same, and only the way the sun and earth relate to each other has changed. But what Copernicus did was to say, &quot;hey, the line surrounding the things we call &quot;planets&quot; should include the Earth too, because it does the same kind of things Venus and Mars and the other ones do; but it shouldn&#x27;t go around the Sun, as it&#x27;s a different kind of beast&quot;. In other words, the Earth and the Sun swapped places around the boundary. That&#x27;s to show how tricky it can be to examine and reason about our framings, let alone change them into something &quot;better&quot;.</p>
<p>And, by the way, how can a framing be &quot;better&quot; or even just &quot;good&quot;? If they are all subjective simplifications and projections onto reality of boundaries that <a href="https://planktonvalhalla.com/20240227-boundaries-are-in-the-eye-of-the-beholder/" rel="nofollow noopener noreferrer" target="_blank">don&#x27;t really exist</a>, does it make sense to say that a framing is, in absolute terms, &quot;good&quot; or &quot;bad&quot;, &quot;true&quot; or &quot;false&quot;, &quot;right&quot; or &quot;wrong&quot;?</p>
<p>No, in fact, I don&#x27;t think it makes sense. How you draw the lines is up to you, and it has nothing to do with morality or accuracy. But a framing can be more or less <em>effective at achieving a given goal</em>. A framing is effective if and when it lets you pick out meaningful differences that you would otherwise overlook. You judge it by how well and how easily it lets you simulate the parts of reality you want to predict. Even when looking at the same corner of reality, another person, with different goals, might find a different framing to be more effective.</p>
<p>I&#x27;ll repeat that. The way you chop up reality into black boxes has huge consequences for the way your models will work and what they&#x27;ll predict, but it is subjective and arbitrary. How &quot;good&quot; a framing is depends entirely on what your purpose is when you use it.</p>
<p>So if you find yourself baffled by the arguments of someone disagreeing with you, chances are that the two of you are thinking off of different framings. And, more often than not, your framings are different not (necessarily) because one of you is stupid, but because <em>you have different goals</em>. In that sense, you might be both right within your respective framings, and it&#x27;s pointless to argue very long about what your models predict: align on the goals first, and the framings might synchronize on their own. Framings are tools for us to manipulate.</p>
<p>If you don&#x27;t realize you&#x27;re using a framing, you can&#x27;t even begin to understand its limitations, not to mention improve it. Reality is whatever is out there beyond your cranium walls, impossibly far and dim, and you can only hope to learn a little about what it was like in the past. What <em>feels</em> like reality to you, on the other hand, exists only in your mind, and you—being human—have the ability to partly modify the way it works. The upshot of that knowledge is that it might spare you the need to dodge bullets. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/a-framing-is-a-choice-of-boundaries/stainedglass.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Software Projects Will Be Late]]></title>
            <link>https://aethermug.com/posts/software-projects-will-be-late?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/software-projects-will-be-late</guid>
            <pubDate>Thu, 29 Feb 2024 09:03:33 GMT</pubDate>
            <description><![CDATA[Blame the thickets and the paradoxes]]></description>
            <content:encoded><![CDATA[<p>I&#x27;ve been a product manager by trade, and I&#x27;ve done various kinds of engineering over time, especially software engineering. Back-end, front-end, infrastructure, architecture. That means that I&#x27;ve had to deal both directly and indirectly with the eternal struggle of <em>software estimation</em>. This is the tug-of-war between the non-engineers who call the shots the product teams about when exactly Product A or Feature B will be ready. (Some call those non-engineers &quot;the business people&quot;, but I&#x27;ll use the nicer &quot;project sponsors&quot;.)</p>
<p>Everyone in software is familiar with that struggle, and understands instinctively why it happens. No one on the other side has a clue (when they do, the struggle doesn&#x27;t happen). So here I&#x27;ll attempt a very brief and simplified explanation of this woeful corporate trap. If you&#x27;re about to punch a sponsor, consider linking this page to them instead.</p>
<p>I&#x27;ll start with the upshot: software projects <em>will</em> be late, because of their very nature.</p>
<p>To people who have never written production software, this feels like a cop-out, an excuse to be sloppy with one&#x27;s estimates. This is understandable for inexperienced managers. <em>&quot;You told me you&#x27;d finish it in three weeks, and it&#x27;s been over a month. Therefore you&#x27;re either lying or bad at your job.&quot;</em> It does sound like good, straightforward reasoning. The catch is upstream: expecting to have neat dates to mark in a calendar in the first place.</p>
<p>The tendency to compare software work to other kinds of engineering makes the misunderstanding even more insidious. The argument goes like this:</p>
<blockquote>
<p>Sure, construction projects are known to be always late too, but they involve huge teams, they have to deal with the weather, they need to provision materials in the right order, etc. Software usually lacks all of those complications and <em>force majeure</em> factors. It&#x27;s just a few people (sometimes very few) typing at a keyboard. It&#x27;s supposed to be the epitome of autonomy. Who else is there to blame?</p>
</blockquote>
<p>Fred Brooks wrote a famous book about the pitfalls of software estimation in the 1970s called <em>The Mythical Man-Month</em>. After almost 50 years, it&#x27;s still very relevant. His most famous statement is what software people know as Brooks&#x27;s Law:</p>
<blockquote>
<p>Adding manpower to a late software project makes it later.</p>
</blockquote>
<p>His argument is that software output is not additive. More people in a team means more people everyone has to talk to and coordinate with, which actually slows everyone down. Also Brooks: &quot;the bearing of a child takes nine months, no matter how many women are assigned.&quot;</p>
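Brooks&#x27;s coordination argument can be made concrete with a bit of arithmetic (this illustration is mine, not from the book): a team of n people has n(n-1)/2 pairwise communication channels, so coordination overhead grows quadratically while headcount grows linearly.

```python
# Pairwise communication channels in a team of n people: n*(n-1)/2.
# Doubling the team roughly quadruples the coordination overhead.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(n, channels(n))  # 3->3, 5->10, 10->45, 20->190
```

A 20-person team has 19 times the channels of a 5-person team, with only 4 times the hands, which is one way to see why output is not additive.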
<p>Another point that Brooks makes is that all programmers are optimists, so they tend to underestimate the problems and dead-ends ahead of them and overestimate how quickly they can get things done. He is right that optimism is common and makes things worse, but I don&#x27;t believe that programmers are particularly more optimistic than any other category of people. It&#x27;s just what humans do when faced with software. The real culprit is in the peculiar nature of the job.</p>
<figure><img src="https://aethermug.com/assets/posts/software-projects-will-be-late/temple_of_thought_francois_galas.webp" alt="Temple of Thought, Dedicated to Beethoven, under construction (1897-1914) - François Garas"/></figure>
<p>A key difference between software and other engineering tasks is that, whenever you encounter a new challenge that you have already solved before, you only need to copy and paste the code to solve it. Unlike houses and robots, creating a new copy of past work is effectively free and instantaneous. This usually removes 99% of the <em>repeatable</em> parts of the process, so that all of the things you actually have to do are the non-repeatable parts.</p>
<p>Paradoxically, this doesn&#x27;t save you project time, although it does save you <em>total</em> time. Usually the engineers know what parts they can reuse instantly, and will only include in their estimates the other parts, those that are new and strange and full of unknowns. By definition, software development is composed almost entirely of solving problems you&#x27;re not sure how to solve.</p>
<p>To put it another way, you should think of software work as equivalent not to the <em>construction</em> of a bridge, but to its design (I borrow this idea from <a href="https://news.ycombinator.com/item?id=25828923" rel="nofollow noopener noreferrer" target="_blank">an insightful Hacker News comment</a>). The actual building part is more similar to the act of &quot;<a href="https://en.wikipedia.org/w/index.php?title=Compiler" rel="nofollow noopener noreferrer" target="_blank">compiling</a>&quot; code, which is almost entirely automated and only takes a few minutes. Or, if you really want to equate software to <em>building</em> the bridge, then it&#x27;s like building the first bridge ever, or the first bridge with a new set of materials or never-before-tried environmental conditions or new physical laws. Every single time. This means that you can&#x27;t predict with confidence what&#x27;s going to be possible at every stage until you&#x27;ve done the previous work leading to that stage, nor can you predict all the problems that will surface once you attempt to solve the problems that you <em>can</em> predict.</p>
<p>If that metaphor didn&#x27;t resonate, here&#x27;s another. Software product development is like having to climb an unexplored mountain. Usually it is not impossible, and you may even know pretty accurately where you want to end up (e.g. the mountaintop), but you have to figure out the way as you go, one or a few steps at a time. If you don&#x27;t even know the path you&#x27;re going to take to get there, good luck guessing how long it&#x27;ll take you to walk it. You have to make the map as you go—and in software the map will become unnecessary the moment you&#x27;re finished, because you never climb the same mountain twice.</p>
<p>That is not to say that no time estimate is ever possible in software. There are methods to increase the accuracy, and good engineers use them. But for all the effort that you might put into it, the accuracy is never going to be high, and it&#x27;s never even going to be medium. You have to live bravely with poor estimates.</p>
<p>So how do you do it then? Seen in this framing, the Agile movement that everybody knows—including the project sponsors—makes more sense, I think. The core values of the Agile Manifesto go like this:</p>
<figure><img src="https://aethermug.com/assets/posts/software-projects-will-be-late/manifesto.webp" alt="You&#x27;ve got to love the Agile Manifesto website, essentially unchanged since its creation in 2001."/></figure>
<ul>
<li><strong>Individuals and interactions</strong> over processes and tools</li>
<li><strong>Working software</strong> over comprehensive documentation</li>
<li><strong>Customer collaboration</strong> over contract negotiation</li>
<li><strong>Responding to change</strong> over following a plan</li>
</ul>
<p>Agile is all about improvisation and trial and error. It&#x27;s about being good at orienting yourself in the wilderness rather than following a map that doesn&#x27;t exist.</p>
<p>Exactly how to implement this philosophy is a tougher nut to crack. People have come up with many approaches, of which Scrum is the most popular. They have all been criticized, because very often they don&#x27;t &quot;work&quot;. Now, as much as I like being the balanced, unbiased guy, in this case I have to lean heavily toward the programmers&#x27; side. Agile is a way of thinking about software development, and it&#x27;s a very wise way to think about it. But it&#x27;s never going to work if only the engineers adopt it. If the sponsors of a project keep on asking for calendarized roadmaps and Gantt charts, it will all be for nothing. It won&#x27;t matter what practical variant of Agile the team uses: your project <em>will</em> be late.</p>
<p>The sponsors who embrace the call of the wild, on the other hand, gain the presence of mind to set up things so that delays won&#x27;t spoil everything. That&#x27;s where Agile will work. Of course, the software projects will still be late, but it won&#x27;t matter very much. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/software-projects-will-be-late/jp-valery-EAx4OpPCSrs-unsplash.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Internal Model Translation Table]]></title>
            <link>https://aethermug.com/posts/internal-model-translation-table?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/internal-model-translation-table</guid>
            <pubDate>Wed, 21 Feb 2024 03:27:16 GMT</pubDate>
            <description><![CDATA[There's something for every occasion!]]></description>
            <content:encoded><![CDATA[<p>Living things have models of the world inside them, and those models are <a href="/posts/embedded-prophesy-devices">embedded prophesy devices</a>. So why not talk about them in those terms? Here is a convenient list of translations from the usual, mundane speech to the equivalent expression in proper <em>modelese</em>.</p>
<h2>It looks like it&#x27;s going to rain.</h2>
<p><em>The mental model of the weather in my head shows that rain in the near future is highly correlated with the current appearance of the sky.</em></p>
<h2>You&#x27;re so good at darts!</h2>
<p><em>You have a great internal model for the effect of your muscles and the flight path of darts when you throw them with the purpose of hitting a target!</em></p>
<h2>I&#x27;m afraid of heights.</h2>
<p><em>My internal model of the risk of falling guarantees I&#x27;ll meet with certain and painful death if I climb all the way up there. I swear I&#x27;d like another model but I&#x27;m stuck with this one.</em></p>
<h2>Let&#x27;s meet there at 10 AM on Saturday.</h2>
<p><em>Let&#x27;s update our mental models to both expect that we&#x27;ll find each other there at 10 AM.</em></p>
<h2>I can&#x27;t believe he did that!</h2>
<p><em>His actions contradict the mental model I had of him!</em></p>
<h2>He&#x27;s evil!</h2>
<p><em>My mental model of him predicts inexplicable actions that go against my most fundamental social decision-making heuristics.</em></p>
<h2>The region of Sápmi was formerly known as Lapland.</h2>
<p><em>I&#x27;m giving you the old name for the region of Sápmi so that you may hook it up to your mental model of Scandinavian geography and increase your predictive power if you so wish.</em></p>
<figure><img src="https://aethermug.com/assets/posts/internal-model-translation-table/le-mois-2.webp"/></figure>
<h2>Get out of my house!</h2>
<p><em>My mental model of you predicts a good chance that the curtness of my wording and the volume of my voice will make you leave the perimeter of my residence.</em></p>
<h2>1+1=2</h2>
<p><em>If I apply my mental model for arithmetic to the numbers one and one and the operation of summation, my prediction is the number two with 100% confidence, meaning that I believe it to be the only branch in its tree of possibilities.<sup><a href="#user-content-fn-1" id="user-content-fnref-1" data-footnote-ref="true" aria-describedby="footnote-label">1</a></sup></em></p>
<h2>It hurts!</h2>
<p><em>My innate internal model for the functioning of my body predicts bad things if whatever is unusual in my body remains there much longer.</em></p>
<h2>The dog is nagging me to go out for a walk.</h2>
<p><em>The dog&#x27;s internal model predicts that making the motions and sounds that I describe as &quot;nagging&quot; gives her a good chance that I will take her out for a walk.</em></p>
<h2>I&#x27;m not ready to talk about it.</h2>
<p><em>I am reworking my mental model of this situation for emotional or other reasons, and it&#x27;s not yet complete or consistent enough to be put into words effectively.</em></p>
<h2>I think, therefore I am.</h2>
<p><em>I was questioning the universal applicability of my mental models because I predict the existence of a perfect model where everything is predictable and I want it. With that goal, I looked at how well my internal models predict my own existence and found them lacking. If my models fail at predicting something as basic as me existing, it&#x27;s worth spending years trying to fix them and writing some books on the topic. That led me to stumble on the heuristic I was looking for, one that makes my models self-consistent and self-predicting: the fact that I have mental models means that the holder of those models, which is me, must exist too.</em></p>
<h2>Will you marry me?</h2>
<p><em>My internal models predict much happiness lasting the rest of our lives if only we remain together as a couple. They also predict that your models are telling you the same.</em></p>
<p><em>So we should set in motion a series of legal and social mechanisms that will greatly penalize us in case of separation, as a way to reduce the branches of our joint tree of possibilities that would lead to us separating on a whim. (This has the effect of updating our mental models so that they&#x27;ll predict unpleasant and inconvenient things if we ever try simulating a separation in our heads.)</em></p>
<p><em>Also, the very act of accepting to go through that process will serve to greatly thicken the branch of the trees of possibilities of each of our separate models predicting &quot;my partner also has a similar mental model as me, at least with regards to my prediction of us being happy together in the future,&quot; which is what made me begin this whole statement in the first place.</em></p>
<p><em>Here&#x27;s a ring to intensify the predictions of your models. Is that expression a yes?</em> ●</p>
<div class="col-start-1 col-end-1 m-10"></div>
<section data-footnotes="true" class="footnotes"><h2 class="sr-only" id="footnote-label">Footnotes</h2>
<ol>
<li id="user-content-fn-1">
<p>The &quot;with 100% confidence...&quot; part applies to all mathematical statements. <a href="#user-content-fnref-1" data-footnote-backref="" aria-label="Back to reference 1" class="data-footnote-backref">↩</a></p>
</li>
</ol>
</section>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/internal-model-translation-table/le-mois.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Conversational Humor Is Funny Only After You've Seen It Click]]></title>
            <link>https://aethermug.com/posts/conversational-humor?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/conversational-humor</guid>
            <pubDate>Thu, 15 Feb 2024 08:00:30 GMT</pubDate>
            <description><![CDATA[Most funny interactions betray the usual categories and seem to be evolutionary or emergent]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>Explaining a joke is like dissecting a frog. You understand it better but the frog dies in the process.</p>
<p>― E.B. White</p>
</blockquote>
<p>If you want to become integrated in a different culture, the last thing you can hope to master is the local humor.</p>
<p>This was a surprise for me when I became proficient enough in Japanese to understand 100% of what was being spoken around me. I could understand the content of the jokes and gags popular in Japan, yet many of them simply fell flat for me. The opposite was true, too. Much of the humor that I always found to be effective in Italy would simply implode when translated to Japanese. I wondered why and, I hate to admit, I looked up the scientific literature. My verdict: the science on this seems incomplete.</p>
<p><a href="https://www.semanticscholar.org/paper/Comprehension-of-English-Jokes-in-Chinese-EFL-%E5%BF%BB%E6%84%9B%E8%8E%89-Hsin/fbb6417984e59e0130c47c526752927692a52418" rel="nofollow noopener noreferrer" target="_blank">Some</a> joke researchers—all very serious frog-dissectors—separate jokes into three categories: <em>linguistic jokes</em>, <em>cultural jokes</em>, and <em>universal jokes</em>. Linguistic jokes are those that play with words, meanings, and sounds in the local language to humorous effect. Cultural jokes are mostly about stereotypes for nationalities, religions, and things like that (I guess that Chuck Norris-type jokes fit in this category as well). And the universal jokes are those that stand on their own legs, based only on logic (or logical contradictions) and on knowledge that everyone in the world has.</p>
<p>The universal and linguistic jokes were never a problem once I understood the language. If anything, I noticed that the linguistic jokes are <em>more</em> funny when you&#x27;re not entirely fluent, maybe because they have an aura of novelty, or because they give you the satisfaction of &quot;getting&quot; them all on your own. Here&#x27;s a cookie, me.</p>
<p>The cultural references took longer, but I&#x27;m more or less able to get them now, after being immersed in Japanese culture for many years. With time, the local comedians&#x27; culture-grounded jokes became gradually more intelligible and funny to me. Those compose perhaps 40% of all the humor I hear in the language.</p>
<p>But most cases of humor turn out not to depend on linguistic play, cultural tropes, or canned jokes at all. The straightforward understanding of their contents is all there is, which would make them fall in the &quot;universal joke&quot; category, except they depend completely on the social context. They&#x27;re funny for some and meaningless for everyone else. There seems to be this other category that I haven&#x27;t seen mentioned by any scientist. Here, I think, lies the final frontier for the language learner.</p>
<p>When I&#x27;m the one making jokes in Japanese, I have full control of how much cultural context I assume. Obviously, I try to stick to the self-contained type of humor, yet a lot of it is lost on the Japanese listener. The contrast is stark when I&#x27;m in a mixed-culture group. I have a Spanish friend and an Italian friend that I hang out with often, and all of us have Japanese partners. Every time, we end up with a clean split: our &quot;Mediterranean jokes&quot;, as we&#x27;ve come to dub them, are normally-funny for the three Europeans, and (wearily) resisted by the Japanese.</p>
<p>(Fortunately, that rift has turned into its own shared running joke: we settle for bantering about which culture has no taste for what&#x27;s really funny.)</p>
<p>I&#x27;ll give a few examples. One of the first and most shocking realizations of every Western learner of Japanese is the almost complete lack of what we call &quot;irony&quot; in this country. Saying the opposite of what you think for laughs simply doesn&#x27;t occur to most Japanese people.</p>
<p>If, for instance, you see a snow blizzard outside when you&#x27;re about to go out, you might say &quot;ah, so sunny, we should change into our swimsuits!&quot; In Japan, that will often earn you a look of concern for your sanity, and a heartfelt recommendation <em>not</em> to do that. (The Japanese do have a form of sarcasm, called <em>hiniku</em>, but it is only used to mock others, and very sparingly.)</p>
<p>Coming from a family, on my mother&#x27;s side, where irony and sarcasm are the main sport, and pushing the line between &quot;funny&quot; and &quot;offensive&quot; is a fine art (sometimes successful), sealing all of that away from my Japanese social life is a constant battle.</p>
<p>There is also a kind of humor where you say something nonsensical, impossible, or utterly exaggerated instead of the real thing. It&#x27;s similar to irony, but it requires the other to play along to make it worthwhile. My Italian friend gave me a good example this very morning. He&#x27;d gone on a short walk with a friend and got back home only twenty minutes later; when his Japanese partner (also my good friend) asked him where they&#x27;d been, he replied that they&#x27;d made it almost to the border with Ibaraki, a neighboring prefecture.</p>
<p>Now, given that Ibaraki is some 44 kilometers away from their house in central Tokyo, I or another of our Western friends would have promptly responded &quot;oh, were the corn fields in bloom over there?&quot; or &quot;man, your feet must be hurting bad right now!&quot;, or something along those lines. My friend&#x27;s partner, on the other hand, was confused: how was it even possible for them to walk that much in <em>twenty minutes</em>?</p>
<p>At this point I should make it clear that my friend&#x27;s partner, and all the other Japanese that I describe not getting jokes, are highly intelligent, reasonable, and funny people. On top of that we (the blundering Western jokers) have swallowed our pride and explained our humor to them inside out more than once. They &quot;get&quot; what we&#x27;re trying to do—they just tend to reject the premise. Attempting such rhetorical devices feels so pointless and unfunny to them that they forget their existence and end up being caught off guard the next time it comes up. Even when they later get the speaker&#x27;s intention, it&#x27;s too late to fix the humor. The joke has fallen flat, and its pointlessness seems plainer than ever. A perfect vicious cycle.</p>
<p>On the reverse side, I was initially baffled by the classical <em>manzai</em> genre of Japanese comedy. It requires two people to assume complementary roles: the <em>boke</em> (idiot) says something ludicrous or silly, and the <em>tsukkomi</em> (dunker, shover) loudly and often violently calls them out for their silliness. The two comedians repeat this sequence over and over throughout each skit. <em>Manzai</em> makes up the vast majority of Japanese comedy, and it&#x27;s been repeated in infinite variations for decades. It&#x27;s the kind of thing that never goes out of fashion.</p>
<p>In my early days of learning the language, I didn&#x27;t get why doing that same stunt again and again could be funny to anyone. Was it a form of sadism, watching the <em>tsukkomi</em> slap the <em>boke</em> behind the head every ten seconds for two minutes? But I kept watching them for the pleasure of seeing my language comprehension improve, and over time I started to get it. It wasn&#x27;t the content of what they said (although the good ones do say interesting things), nor was it the physical attacks themselves, but the tension in the small gap between error and correction, the rising expectation of exactly when and how the <em>tsukkomi</em> comedian will be forced to react to the current joke.</p>
<figure><img src="https://aethermug.com/assets/posts/conversational-humor/funny-owl.webp" alt="I guarantee that there is a place and time in which this is funny."/></figure>
<p>And it&#x27;s not something specific about Japan, or about very different cultures, either. I used to be mildly annoyed by a French friend of mine—i.e. culturally quite close to me, an Italian—who never misses an opportunity to make dirty double entendre jokes. You know, stuff like, &quot;hey, she said she loves the color of &#x27;Uranus&#x27;, <em>wink wink</em>&quot;. Rather condescendingly, I believed his sense of humor was stuck at adolescent levels... until I learned that the double entendre is a well-known staple of (grown-up) French humor. (In retrospect, I should have guessed from the name of the joke category.)</p>
<p>More surprising still for me was seeing that even my friend from northern Italy, a <em>compatriot</em>, simply doesn&#x27;t get the kind of teasing/bantering that I took for granted and always loved during my whole life in central Italy. It didn&#x27;t register as humor for him, and I ended up upsetting him more than once because of that. After I explained to him that my obviously exaggerated remarks were meant to be the start of a funny back-and-forth with him, he tactfully asked me to never do that again.</p>
<p>All of this leads me to a hypothesis: what makes most types of humor hilarious is not their content in itself, nor the cultural references, but <em>seeing others laugh at it</em>. Even when you ensure that the receiver has fully understood the context and language involved, the difference between laughter and a blank stare hangs on whether they have seen the humor actually cracking someone up or not.</p>
<p>All humor is an emergent phenomenon. It&#x27;s not decomposable into an inventory of &quot;funny bits and bolts&quot;, each hiding in it a portion of the overall comicality. And this specific kind that I&#x27;m talking about is its own self-fulfilling prophecy: observing others doing it well is the catalyst required to do it well in turn. You can&#x27;t create it unless you believe you&#x27;ll enjoy it, and you can&#x27;t believe such a thing until you&#x27;ve somehow verified that it is enjoyable. It&#x27;s a catch-22 that only dissolves when others perform the humor in front of you.</p>
<p>I&#x27;ve seen this with the Japanese <em>manzai</em> skits: they became funny after I watched my wife laughing to tears at them. I also catch our Japanese friends laughing furtively at our &quot;Mediterranean jokes&quot; (although they are quick to recover their unimpressed attitude). We could call this strange category <em>acquired humor</em>.</p>
<p>This hypothesis gives us a turtles-all-the-way-down problem, though. If everyone needs to see a humor sample fully formed and performed before picking it up and spreading it to others, where did it come from in the first place? It can&#x27;t be an infinite regression! (I don&#x27;t think it likely that the first living microorganism just happened to be a very funny fellow; not before dividing a few times, at least).</p>
<p>The answer can only be serendipity, the sudden appearance of hilarity that surprises even the people who produced it. A &quot;random mutation&quot; in the way we interact, a lucky convergence of all the right elements, crucially including the good mood and readiness of those involved to laugh at something new. When that happens, the originators of the new format observe themselves laugh about it—without initially understanding why—and can start <em>believing</em> in it. They plant a seed for a new strain of humor to spread.</p>
<p>Having an origin in happenstance doesn&#x27;t mean that it&#x27;s all about luck, though. Just as you can hope to eventually score a point if you kick the ball towards the goal a sufficient number of times, you can &quot;willfully&quot; stumble on usable new humor by sheer perseverance.</p>
<p>This brings me to propose three rules you&#x27;ll need to follow if you want to sow the seeds that will populate the future joke-space:</p>
<ol>
<li>Be playful and creative in the way you communicate with others, introducing surprising, whimsical, and unexpected patterns and points of view.</li>
<li>Be very patient with the strange quirky behaviors of those around you. Most new patterns will not be funny.</li>
<li>Be ready at all times to see the humorous in everything around you; if unsure how to react, laugh. You might just start something new.</li>
</ol>
<p>On the moral <em>need</em> to go through all that just to increase the catalog of workable joke types, I won&#x27;t comment. Either way, I think we should do those things. To me, those three rules sound like a good way to enjoy life <em>regardless</em> of how much you care for the variety of humor out there. Maybe joke variety is, after all, one measure of a community&#x27;s ability to be happy. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/conversational-humor/kaminari.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Embedded Prophesy Devices]]></title>
            <link>https://aethermug.com/posts/embedded-prophesy-devices?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/embedded-prophesy-devices</guid>
            <pubDate>Thu, 08 Feb 2024 05:59:30 GMT</pubDate>
            <description><![CDATA[Without a vast, detailed replica of the world encoded as models inside your body, you wouldn't survive one day.]]></description>
            <content:encoded><![CDATA[<p>When you walk on a sidewalk you are pretty sure it won&#x27;t kill you as you step on it. Somehow, you know that moving your foot forward and shifting your full weight on it will lead to you being one step closer to where you want to be, with no adverse effects. It won&#x27;t, for instance, bite your foot; it won&#x27;t move suddenly out of your way; it won&#x27;t propel you backwards; and it won&#x27;t turn into a 10-thousand-strong procession of Brazilian carnival dancers that trample you to death at the beat of a Samba. Not in the next step, nor in any of the steps that you intend to take until you reach your destination.</p>
<figure><img src="https://aethermug.com/assets/posts/embedded-prophesy-devices/sidewalk.webp" alt=""/></figure>
<p>But you (as in &quot;the biological system that is now parsing and comprehending these words&quot;) are a bunch of neurons and other connected cells nested deep inside the reddish darkness of flesh and bones. All you know about the sidewalk is that it reflected photons in a certain pattern—shades of gray, rectangles, perspective effects...—in the past. Even assuming the configuration of those photons you scooped up with the two pinholes on the surface of your skull is perfectly accurate, all you have is outdated information. You have zero information about what the sidewalk will be like when you set your next foot down.</p>
<p>Of course, you somehow know it won&#x27;t have changed much. Not in most cases. <em>How</em> do you know that? The answer is that you have a <strong>model</strong> of the sidewalk inside your head.</p>
<p>Somewhere, spread across a great number of interlinked neurons, you have a replica of the sidewalk, and of every other solid walking surface, that you use to <em>predict the future</em>. As soon as you get enough photons in patterns that tickle your &quot;sidewalk&quot; neural network strongly enough, a prophecy is made: you can move your feet in a walking motion and all that will happen is that you move forward, no explosions, no harm, no one dressed in glittering costumes shaking their behinds at you.</p>
<p>The &quot;sidewalk model&quot; is there in your head first and foremost so that you can predict the future and get to where you&#x27;re trying to get. And, similarly, prophecy is the function of the &quot;door handle model&quot; that you fire up when you feel like being on the other side of the door; and of the &quot;Jim model&quot; when you&#x27;re considering inviting your friend Jim to a jog in the park; the &quot;H key model&quot; when you want the letter H to appear on your monitor; the &quot;my skin&quot; model when you decide how hard to scratch your itchy arm without shredding it; and all the other things you do, second by second, as you live your life.</p>
<p>Without a vast, detailed replica of the world encoded as models inside your body, you wouldn&#x27;t survive one day.</p>
<p>Those models are not the same as the physical things they are modeling. In fact, they are very different from most points of view—a network of a few thousand neurons looks and feels nothing like a door handle. That&#x27;s alright. They don&#x27;t need to be the same, as long as they behave similarly enough to produce reliable prophecies when and where you need them.</p>
<p>How do these models work? We don&#x27;t know the details yet but, from first principles, we can make a couple of assumptions.</p>
<p>First, the predictions made by these models are not, in general, on/off statements but probability distributions. I find that a &quot;<a href="https://planktonvalhalla.com/20230323-the-demon-and-the-tree/" rel="nofollow noopener noreferrer" target="_blank">tree of possibilities</a>&quot; is a great way to think about it.</p>
<figure class="rehype-figure"><img src="https://aethermug.com/assets/posts/embedded-prophesy-devices/sankeymatic_cake_and_oven.png" alt="" title="Tree of possibilities: the &#x27;ingredients in oven&#x27; system leads to a branch for &#x27;delicious cake&#x27;, a thicker branch for &#x27;meh cake&#x27;, and smaller branches for &#x27;barely edible cake&#x27; and &#x27;inedible crisp&#x27;."/></figure>
<p>Given what we know about the system under consideration and the specific circumstances, we have a range of possible outcomes, some more probable than others. This is the part that really needs to match the modeled system well for the model to be of any value.</p>
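<p>The cake figure above can be made concrete in a few lines of code. This is only a toy sketch: the outcome names and probabilities are invented to match the figure, and <code>prophesy</code> is a hypothetical name for &quot;sample one possible future&quot;.</p>

```python
import random

# Hypothetical "ingredients in oven" model: a probability
# distribution over possible outcomes (a tree of possibilities).
oven_model = {
    "delicious cake": 0.25,
    "meh cake": 0.50,
    "barely edible cake": 0.15,
    "inedible crisp": 0.10,
}

def prophesy(model):
    """Sample one possible future, weighted by how probable it is."""
    outcomes = list(model)
    weights = [model[o] for o in outcomes]
    return random.choices(outcomes, weights=weights)[0]

print(prophesy(oven_model))  # most often "meh cake", sometimes the others
```

<p>The point of the sketch is that the <em>shape</em> of the distribution, not any single branch, is what has to match the real system for the model to be worth anything.</p>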
<p>The second thing we can say about how <em>mental</em> models work is that they must be made up of heuristics. Since the models in our heads have physical configurations very different from anything existing outside mammal skulls, they can&#x27;t possibly be subjected to the same mechanisms of energy transfer and physical forces as the things they are meant to prophesy about. In other words, you don&#x27;t need to actually rotate your synapses 30 degrees in order to imagine turning a door handle. What&#x27;s probably happening is a collection of if-this-then-that rules of thumb.</p>
<ul>
<li>&quot;If the apple has nothing to support it, then it falls down&quot;</li>
<li>&quot;If I put my hand on that flame, then I will feel great pain&quot;</li>
<li>&quot;If I yell at Lucy, then she will be upset and this and that other bad thing will happen&quot;</li>
</ul>
<p>When we say that we &quot;learn&quot; something, what we really mean is that we&#x27;re creating or upgrading our mental models. You can do that by being taught by others or simply by observing what happens around you. The vast majority of what you learn comes through the latter, direct experience. You don&#x27;t need to be taught that the apple will fall: you observe all sorts of things fall from a young age, and form your mental model of that phenomenon on your own. Reading about Newton&#x27;s theory of gravity might give you more nuance and expand the applicability of that model, but the core of your predictive power takes root well before you even learn to read.</p>
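<p>One minimal way to picture this learning-by-observation is a model that starts empty and whose predicted probabilities are simply the frequencies it has observed so far. Everything here is hypothetical and illustrative, not a claim about how brains actually store heuristics.</p>

```python
from collections import Counter, defaultdict

class ObservationModel:
    """Learn if-this-then-that heuristics by tallying what follows what."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, condition, outcome):
        # Each observation creates or upgrades a rule of thumb.
        self.counts[condition][outcome] += 1

    def predict(self, condition):
        # Predicted distribution = observed frequencies so far.
        seen = self.counts[condition]
        total = sum(seen.values())
        return {outcome: n / total for outcome, n in seen.items()}

model = ObservationModel()
for _ in range(9):
    model.observe("apple unsupported", "apple falls")
model.observe("apple unsupported", "apple caught mid-air")
print(model.predict("apple unsupported"))
# {'apple falls': 0.9, 'apple caught mid-air': 0.1}
```

<p>No teaching is involved: the heuristic and its tree of possibilities grow out of raw repetition.</p>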
<p>Some cognitive psychologists <a href="https://www.goodreads.com/book/show/34459.Metaphors_We_Live_By" rel="nofollow noopener noreferrer" target="_blank">speak</a> of the huge role that metaphors play in the way we talk and reason about the world. Perhaps metaphors are how the brain optimizes its memory storage and access: if I already have a &quot;model for X&quot; with a tree of possibilities and heuristics that are similar to those of system Y, why forge a new one?</p>
<p>Notice that I made a silent conceptual leap a few paragraphs ago. I went from talking about models in general to <em>mental</em> models, happening inside brains. Those are the kinds of models we are most familiar with, and those we can do something about. But mental models are only a subset of all possible future-predicting models. What does a &quot;non-mental&quot; model look like?</p>
<p>A first category of non-mental models is the collection of things we know instinctively, without the need to learn them through experience. These are the models hard-wired into our nervous systems by evolution, like the innate fear of wild predators, and the sense of disgust at dirty or decaying things. While they are still centered on neural processes, these models don&#x27;t require consciousness. They make predictions—the tiger will eat me, the rotten fruit will make me sick—and those predictions automatically trigger an appropriate physical reaction. We share this toolkit of prophecy devices with all other animals.</p>
<figure><img src="https://aethermug.com/assets/posts/embedded-prophesy-devices/colony.webp" alt="An E. Coli colony."/></figure>
<p>But we can expand the category further. Forget about a brain—even a nervous system is optional for the formation of internal models of the world. For example, most bacteria are capable of <em>chemotaxis</em>: directed movement in response to a chemical signal. Take <em>E. coli</em>, the most widely studied single-cell organism. <em>E. coli</em> has sensors on its surface to detect favorable chemicals, and can swim to where the concentration of those chemicals is higher. That means that, somewhere inside it, the bacterium is making predictions about the future: &quot;moving now might get me to a place with more food.&quot; No thinking is involved, of course. The prediction directly triggers, by chemical means, the activation of the flagella on the cell&#x27;s membrane, causing it to move (it needs some trial and error to find the right direction, though).</p>
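<p>The trial-and-error strategy <em>E. coli</em> uses is often described as &quot;run and tumble&quot;: swim straight while things are improving, re-orient randomly when they aren&#x27;t. A toy simulation (all numbers invented, and the chemical field reduced to a simple gradient) shows how that single embedded prediction is enough to find food:</p>

```python
import math
import random

def concentration(x, y):
    # Toy chemical field: food concentration peaks at the origin.
    return -math.hypot(x, y)

def chemotaxis(steps=2000, step_size=0.1):
    x, y = 10.0, 10.0                      # start far from the food
    heading = random.uniform(0, 2 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        # Run: take a small step along the current heading.
        x += step_size * math.cos(heading)
        y += step_size * math.sin(heading)
        c = concentration(x, y)
        # The embedded "prophecy": a rising concentration means the
        # current heading is promising, so tumble (re-orient) less often.
        tumble_prob = 0.1 if c > last_c else 0.9
        if random.random() < tumble_prob:
            heading = random.uniform(0, 2 * math.pi)
        last_c = c
    return math.hypot(x, y)  # final distance from the food source

random.seed(0)
print(chemotaxis())  # ends up much closer than the starting distance of ~14.1
```

<p>There is no map and no memory beyond &quot;was the last step an improvement?&quot;, yet the biased random walk reliably drifts toward the food—prediction doing its job without a single neuron.</p>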
<p>This is quite amazing when you think about it. Not only is a rule-of-thumb-driven simulation of the external world constantly running inside every skull in existence, but that same kind of simulation, to various degrees of sophistication, is running somewhere in the body of <em>every living being</em>. In this sense, prophecy is one of the core functions of what we call Life. ●</p>
<div class="col-start-1 col-end-1 m-10"></div>
<p><em>Edited on Dec 12, 2024 for better grammar.</em></p>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/embedded-prophesy-devices/japanese-folk-toys.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Designing for Emergence]]></title>
            <link>https://aethermug.com/posts/designing-for-emergence?utm_source=rss&amp;utm_campaign=rss</link>
            <guid>https://aethermug.com/posts/designing-for-emergence</guid>
            <pubDate>Thu, 01 Feb 2024 10:31:30 GMT</pubDate>
            <description><![CDATA["If we are ever to advance beyond simple computational artefacts, we need a science of emergence"]]></description>
            <content:encoded><![CDATA[<p>I was reading a fascinating website (23 years old!) of University of York Professor Susan Stepney, and this call to action caught my eye.</p>
<blockquote>
<p>If we are ever to advance beyond simple computational artefacts, we need a <strong>science of emergence</strong>.
...
Not only do we need to engineer the initially desired emergent properties, we need to engineer ability to change in desired, if unanticipated, ways.</p>
<p>— Susan Stepney, <a href="https://www-users.cs.york.ac.uk/~susan/complex/nstdcomp.htm" rel="nofollow noopener noreferrer" target="_blank">Non-Standard Computation: an overview</a> (2001)</p>
</blockquote>
<p>Stepney is—still, 23 years later—an expert in &quot;non-standard computation&quot;, which I suspect is a euphemism for &quot;contrarian computer science&quot;. She works on how to compute stuff, but without using all the tried-and-true techniques, algorithms, and hardware that we&#x27;ve perfected over a century. The page linked above explains it better than I ever could: non-Turing, non-von-Neumann machines, cellular automata, and cool things like that. It&#x27;s worth a read despite its age.</p>
<p>In this passage, she argues that, while emergence is fundamental in everything, we have no theory for how to design and <a href="/posts/water-lilies-water-lilies-everywhere">create emergence</a> as we like.</p>
<p>Here, as is usually the case among scientists, she uses &quot;emergence&quot; to refer to what <a href="https://planktonvalhalla.com/20230525-a-non-magical-example-rich-introduction-to-emergence/" rel="nofollow noopener noreferrer" target="_blank">I call &quot;context emergence&quot;</a>—the most interesting kind. In the broader sense of the word, we <em>do</em> know how to engineer for emergence, and very well. A pair of scissors is engineered (feature) emergence: the ability to make straight cuts in paper that would be nearly impossible with the same two blades handled separately. And the same holds for every single product of technology. In this sense, engineering is nothing but the design of emergence.</p>
<p>But Stepney, of course, is thinking about something much more sophisticated than that. <em>She wants a theory that makes new physical laws emerge</em>—as the laws of thermodynamics emerge from the assemblage of simple atoms, as the self-organized mechanisms of embryonic development lead from a single cell to a grown adult, or as the &quot;laws&quot; of social behavior emerge from individuals interacting. Stepney wants us to learn how to engineer complexity.</p>
<p>That&#x27;s a whole different level of difficulty. She admits that there are high barriers against that dream, like the computational irreducibility of certain processes, and the huge role that contingency plays in complex systems. So how do we do that? Stepney&#x27;s essay proposes a full research program to tackle that problem, which makes a lot of sense. I&#x27;m especially interested in one point that she makes:</p>
<blockquote>
<p>Complex systems should be &quot;grown&quot; rather than &quot;switched on&quot;.</p>
</blockquote>
<p>It immediately reminds me of &quot;Gall&#x27;s Law&quot;:</p>
<blockquote>
<p>A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.</p>
<p>— John Gall, Systemantics (1977)</p>
</blockquote>
<p>This is a very different kind of engineering from what we&#x27;re used to. You can&#x27;t make a &quot;blueprint&quot; for a complex system, nor a step-by-step instruction plan that can be replicated accurately to mass-produce it. You have to begin from a seed, and tend to it like a tree, letting it do its own thing while finding gentle, focused ways to influence its growth.</p>
<p>Perhaps the closest thing to an engineering of emergence that we&#x27;ve come up with are the latest advances in artificial intelligence. We don&#x27;t know exactly why a specific language model does what it does, and we have no hope of &quot;planning&quot; its behavior beforehand. We are forced to &quot;grow&quot; (train) it, feeding it the nourishment (data) it needs, and discover what it does after the fact. We can try to steer it a little by selecting the inputs we give it, pushing it away from its biases and towards the kind of knowledge that we would want it to talk about, but it&#x27;s at best a trial-and-error process.</p>
<p>My gut feeling is that we&#x27;re already good enough—as good as we can hope to get—at the &quot;growing&quot; part. We have enough experience of that with all the natural complex systems we&#x27;re surrounded with, from gardens to groups of people. The skill we really lack is creating that initial seed, Gall&#x27;s &quot;simple system that works&quot;. Deep neural networks are one example of that: they began as an imitation of brain structure and were improved over decades of experimentation. That, then, is one path: copy nature, and see where it leads you. Is there any other, much faster way? ●</p>
<div class="col-start-1 col-end-1 m-10"></div>]]></content:encoded>
            <enclosure url="https://aethermug.com/assets/posts/designing-for-emergence/Rose_shrub_(1909)_Henryk_Szczyglińskia.jpg" length="0" type="image/jpg"/>
        </item>
    </channel>
</rss>