Tír na nÓg (April 18, 2026)
Image by Author, via MidJourney

I. Electric Guitars

I don’t often review recent articles, but David Cain just put one together that hit that sweet spot between “this is a classic thought” and “this is a novel take.” So I find myself bubbling with thoughts about it. You can find it linked below:

Social Media is the Opposite of Social Life.

I have been following David Cain’s work for fifteen years now. David has a unique superpower: every article he writes feels like it applies immediately, viscerally, to my life. There are no other writers I know who so consistently track my personal experience.

Now, I’m not a fool. I don’t believe that Cain has a literal superpower that allows him to know exactly what it is that James Horton, PhD needs to hear on a given day. No, Cain’s superpower is… what’s a good word?

Ah. Resonance.

When David reacts honestly to the world, his base frequency resonates. Readers close their eyes and nod, and say to themselves “hey, this guy is my vibe.” At the same time, he is also just discordant enough that his writing adds a bit of contrast—enough to roughen his texture. The effect is electric.

Actually I think “electric” is the perfect word here: Underlying both Cain’s writing and the magic of that Fender electric guitar you bought, back when you were a teenager, is the principle of gain, or how much signal your audio input picks up. David is a master of gain, the king of audio-in. He picks up every little sound in the room, with the result that he can make walking across a parking lot feel like a guitar solo.

II. Is Social Media the Opposite of Social Life?

Anyhow, Cain’s thesis in his latest article is simple: Social media promised to connect us but has done the opposite, because real human connection is a body thing.

Social media, on the other hand, is not a body thing. It replaces the squishy, bodily, time-bound interactions that meat-creatures like us require in order to actually connect to other meat-creatures. You know what I’m talking about. Complex meat skills, like eye contact, reading emotions, pacing, taking turns, listening attentively. The list goes on.

That’s the core of it: You may be a demigod who juggles the lightning of awareness across a quadrillion synaptic gaps, but also? You’re made of bacon. Sorry, but you can’t escape baconspace: The bacon makes the lightning.

And the internet? It’s all lightning, no bacon. Spend too much time there and you will forget how to go back to your bacony ways.

The simplest example of this is eye contact. Far from simply removing eye contact from conversation, social media works against eye contact by allowing you to exist in a world of messages that arrive without eyes. In order to make up that gap you have to mentally emulate the person on the other end.

The more body-free conversation you have, the more you have to fill in the blanks using your imagination, but if you’re not making regular contact with people out in meatspace your imagination starts to slip out of tune.

One side effect of this is you can have people who vibe purely on the platonic level of words and ideas who do not vibe as two bodies sharing a room — hence the perennial e-dating advice: meet them in real life, quick, or you’ll never know how much of them is a fantasy you projected onto your screen.

But it’s not just your expectations that slip out of alignment. You can slip out of alignment, too. That de-tuning effect can extend to your whole life, weakening your connection to everything.

III. Baconspace

I don’t agree entirely with Cain. He argues that social media is the opposite of social life, but I think the reality is more complex: Connection is a multi-channel problem and you can’t focus too much on one channel without neglecting another. For example, it’s possible for this dissociation to go in the opposite direction: As a certain class of very beautiful, very active people often find out, there are people out there who are electric at the body-to-body level but you can’t much talk to them about anything.

So I’m skeptical about using the language of opposites. It falls apart when you slot social media into its rightful place in the larger body-to-mind spectrum.

On one end you have pure body language, absent any words at all, where our communication is at its most animal; where eye contact means only that you plan to fight or rut. A step beyond that is the human realm, the golden mean, where our body and soul are both present and conversations unfold amid gestures and glances and negotiations over who gets to hold the popcorn during the movie.

Beyond that ideal space you can subtract out the body one part at a time, and each subtraction leaves you with a well-known form of communication that we engage in every day:

  • Remove physical presence and the in-the-room-ness of bodies and you’ve got video chats.

  • Remove visuals from your videos and you have telephone calls.

  • Remove the voices but keep your words and you have texting and messenger apps.

  • Remove the temporal synchrony of your messenger apps—the part where two people are communicating at the same time—and you have message boards. This is social media in the form that Cain envisions.

But you can go a step further still. You can remove the two-ness of conversation. Turn yourself into a simplified homunculus — a lone ear, listening; a single throat, speaking — and suddenly you are in the realm of… books! Books and blogs, where Dostoevsky and David Cain write (or wrote) for nobody in particular, and millions of people listen to words that weren’t meant for them but that reached them by happenstance. And yet those aimless messages sometimes produce genuine soul-to-soul resonance when they reach the right person.
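
If you like your metaphors executable, the whole ladder fits in a few lines of Python. This is a toy sketch, nothing more; the channel names are just my own labels for what each step of the list above removes:

    # A toy model of the body-to-mind spectrum: start from a fully
    # embodied conversation and subtract one channel at a time.
    channels = {"physical presence", "visuals", "voice", "synchrony", "two-ness"}

    ladder = [
        ("physical presence", "video chat"),
        ("visuals", "telephone call"),
        ("voice", "texting / messenger apps"),
        ("synchrony", "message boards (social media)"),
        ("two-ness", "books and blogs"),
    ]

    for removed, modality in ladder:
        channels.discard(removed)
        print(f"minus {removed}: {modality} ({len(channels)} channels left)")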

This is why I object to the “opposites” framing. What seems to matter, at least as much as the modality of the communication, is the quality of the soul that passes through it.

There is, I think, a better metaphor. If I had to pick one, I would lean on the metaphor of nutrition. Your body is designed to thrive on any number of diets but if you leave out certain vital nutrients that it can’t compensate for it will start to break down in predictable ways. Similarly, I think a soul can thrive as long as it gets a good balance of life across the vital channels we’re meant to live in. As long as we get enough.

The catch, I think, is that last part. An enormous number of us are not getting enough and we are feeling it keenly, like a ravenous gnawing in our bellies. And I think that for some reason in the past few years a large number of people have been trying to get back out into the real world and are finding that some part of them has withered from neglect.

There are a hundred small rules governing face-to-face interaction with other people, all of which you get worse at if you’re away for too long. The deficiencies accumulate and leave you feeling like a fraud, LARPing at real life instead of participating in it.

So, maybe that’s why David Cain feels like social media is the opposite of social life. Who wouldn’t feel that the thing which stole that social grace from them—that marked them with the scarlet letter—was the enemy? The opposite?

It’s a natural conclusion. Easy as breathing. Or as looking someone in the eye.

IV. The Land of the Young

In Irish mythology there was an island off the western coast of Éire where the gods — the Tuatha Dé Danann — lived. The island went by many names: Tír Tairngire (the Land of Promise), Mag Mell (the Plain of Delight), or Ildathach (the Multicoloured Place). But its most emblematic name is Tír na nÓg, the Land of the Young.

In the old tale of Tír na nÓg the warrior poet Oisín, son of Fionn mac Cumhaill, king of the Fianna, is visited by the faerie maiden Niamh, daughter of the sea god Manannán mac Lir. She invites him to the island of Tír na nÓg. Taken in by her incredible beauty, he follows.

They marry, have children, and live blissfully: The days are full of hunts and the nights full of stories. But time passes and Oisín begins to long for home, and one day he tells his wife of his desire to visit his family. Niamh, pensive, agrees to loan him her white horse so that he can make the trip across the sea, but she warns him that he must not touch the ground of Éire, or he will never be able to return.

He agrees, and thunders across the sea on the white horse, elated at the thought of seeing his family again, but when he arrives everything has changed. The Fianna are gone: Where their castle once stood, there are only ruins; where their town had been, the buildings stand abandoned. He learns that the time he spent in Tír na nÓg, which felt to him like a short three years, was in fact three hundred. He has been gone for centuries. His family has passed away. His people have moved on. And so he leaves, heartbroken. On the way home he encounters farmers trying to move a boulder in the road. He tries to help them, but in doing so he loses his balance and is cast from his horse, hitting the ground.

The moment he touches the ground the white horse flees back to Tír na nÓg and leaves him behind. The weight of the three hundred years he had been gone catches up to him. He ages rapidly, becomes decrepit and bedridden, and dies shortly afterwards, telling stories of who he had been; of the Fianna; of the princess Niamh and the Tuatha Dé; of the land of Tír na nÓg.

V. Tír na nÓg

Katherine Dee is probably one of the few people on the internet who actually understands the internet. She’s also the first person I know of who made the explicit connection between the internet and the spirit world, as well as to the myth of Tír na nÓg. You can find her article, The Internet as the Astral Plane, at this link.

The structural parallels between the internet and cultural representations of the afterlife are surprising. If you’re conscientious about it, I think you can make the case that, psychologically, the two have the same “shape” — in ancient times people spent great amounts of time imagining the pleasures and perils of a world free from the shackles of the body. About thirty years ago the internet actually gave us that world, and when we explored it we found that many of those ancient intuitions were true, and they haunted us.

I think what is happening in David Cain’s essay is that we are porting old intuitions over to the digital age. Many cultures treat the world of spirit and the world of flesh as opposites. It’s an old intuition, and a resonant one — the same that makes people wary (or derisive) about those who spend too much time reading books.

The reality is more nuanced. But the principle of neglect is true: Spirit and body may not be true opposites, but to neglect one puts all of you at peril, and if you’re not careful you won’t know until it’s too late.

What’s happening now, I think, is that many of us are growing tired of this digital otherworld that has kept us suspended and ageless for so long. We’re hungering for the wild grass of Éire, and the memories of being strong once, and brave. And when we do go back—quite literally, when we touch grass—we have to reckon with the weight of those lost years as they lay themselves upon our shoulders.

The biggest mercy is that unlike Oisín our debt is measured in decades, not centuries. We don’t have to crumble to dust. There’s still time to build, to tell new stories alongside the old, to make sense of who we are and where we are going.

If you made it this far, you should subscribe. ;-)


Index Page: The Affirmations (March 27, 2026)

In 2023 I started writing a regular publication on Medium titled The Affirmations. Eventually it would become my first success as an online writer. I’ve made the decision to start backing out of Medium, and I will be un-paywalling all of my articles there and porting them here to Substack for readers who are interested.

This is a unique type of article: An index page. I use these to create collections of thematically connected articles so that interested readers can browse through them to see if there is anything that they want to read. I will be posting Affirmations every week or so for readers until I have uploaded my full archive.

You and Your Shadow (March 24, 2026)
Image by Author, via MidJourney

Author’s Note: This article is dedicated to all of you who have ever had to hold yourself in reserve and smile politely while some jackass explained to you that your problems would be solved if you were just more disciplined.

I. The Central, Lateral, and Basal Nuclei of the Amygdala

Fear is the shadow of love.

Like all shadows the length and depth of it are determined by the positioning of a greater light, the science of which I will never fully understand. What I do know is that most of the time the shadow is cast by real things, and yawns behind them, like a small tear in the substrate of God.

Shadow is an apt metaphor—fear as the flat, dark, negative geometry cast on the ground behind something true. The accuracy of this kind of poetry is alarming. Only a few other metaphors, mostly simple and elemental, fit their subject so well. Anger as fire. Life as motion. Love as light. I think because of this aptness, the fear-as-shadow metaphor has a long history. The earliest example I know of is an old medical text—Galen’s De Locis Affectis, written when medics thought that the despondent emotions were caused by black bile (literally, melancholia). Galen described fear as a shadow that melancholia cast on the mind.

The metaphor stuck, mostly because whatever was going on in there, it felt like something was casting a shadow. And in the past hundred years or so we clued in to what was really happening, and holy shit, it really is like a shadow.

So let’s talk about the amygdala. It is the central structure in everything else I’ll be describing here, so… what is it?

The high-level answer is: it’s a gray blob that’s important. So maybe it’s better to start by asking where it is. To answer that, put your fingers on your temples. Move your fingers back an inch, and then up an inch, until they’re above the top of your ear-line, in front of your ear.

Okay. Now, importantly, don’t actually do this, but imagine it: Push your fingers into your temples until they get about two joints deep. The spot in your brain where your fingertips are? That’s right about where the amygdala is. It’s a blob of brain-matter that’s about the size of an almond and, while it doesn’t create your emotions, exactly, it does anchor them: If it is damaged your emotions become unmoored from any meaningful hierarchy of mattering that corresponds to the outside world.

The emblematic case is Klüver-Bucy Syndrome, which involves damage to the amygdala and some of its surrounding structures—the hippocampus, select parts of the cortex. A monkey with Klüver-Bucy stops feeling things, not because the “I feel things” part of the brain is broken, but because the “I should feel things” part of the brain is broken. A monkey with Klüver-Bucy syndrome will try to stuff into its mouth anything that fits because the little voice that says “maybe that’s a bad one” doesn’t work. It will try to copulate with anything that moves (and a surprising number of things that don’t) because there’s no sense of meaning-ness to get in the way of the desire for stimulation.

For several decades we thought that this was largely due to the amygdala being the “fear module” in the brain, and the architecture of the amygdala makes a compelling case for that. Information comes into the amygdala from the sensory cortices through the lateral node (LA). It pulls in information from the hippocampus, the brain’s “here’s the context” system, through the basal (B) and accessory basal (AB) nodes. And it projects outwards from the central node (CE) to the parts of your brain that actually activate your fear response.

Again, those are fancy words for something that’s pretty intuitive when we use simpler language. Senses come in from the side port, memories from the back port, and the central port is the hookup that projects a signal outward. If you can hook your Nintendo Switch up to the television you’ve got enough experience to get a decent working model of the amygdala.
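
For the tinkerers: here is the same console metaphor as a toy Python sketch. The node names follow the ones above, but the arithmetic is invented purely for illustration; it is not a neural model:

    # Side port (LA): senses in. Back port (B/AB): context in.
    # Central port (CE): output projected to the fear machinery.
    # The multiplication is a placeholder, not neuroscience.
    def amygdala(sensory_signal: float, context_alarm: float) -> float:
        la = sensory_signal    # lateral node: what the senses report
        b_ab = context_alarm   # basal nodes: what the context suggests
        ce_output = la * b_ab  # central node: what gets projected outward
        return ce_output

    # The same loud noise, in a dark alley vs. your own kitchen:
    print(amygdala(0.9, 0.8))  # roughly 0.72: the fear system spins up
    print(amygdala(0.9, 0.1))  # roughly 0.09: barely a flicker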

The magic happens inside the system itself. In 2003 a cross-national team of researchers published a challenge to the prevailing wisdom—everybody knew that the amygdala was involved in fear, but evidence was slowly mounting to suggest that it was involved in other emotions as well. Happiness. Anger. Sadness. As an alternative they suggested that the amygdala should be thought of as something like an importance detector, responsible for deciding what matters, and how.

The upshot of all this—the thing I hope you take away from this, if nothing else—is that the system that assigns meaning to the world around you is the same system that projects everything from mild anxiety to mortal terror outward into your body. The moment a thing matters to you the entire defensive architecture of your mind is primed to fear its loss. One follows the other as surely as a shadow follows the object that casts it. If it was any other way, you’d be a much worse type of broken than you are now.


II. The Sympatho-Adrenal-Medullary System

Now, to work.

I’m writing this because there are a lot of comfortable stories that people like to tell about what it means to be stuck on a creative project. Most of the stories are true, some of the time. There are times when the reason you can’t move forward really is a lack of passion. There are times when you really do lack discipline. There are times when you really are sabotaging yourself. But none of those stories are true all of the time, and there’s one very common story—the ‘dammit I really wanna do the thing but for some reason I just can’t’ story—that a lot of people are eager to dismiss. I’m not sure about you but for me the worst offender, by a wide margin, is myself. The voice inside of my head, which happily informs me that I am a shitbag every day of the week, and twice on Tuesdays.

One of the voice’s favorite plays is ‘this clearly must not matter much to you if you’re not doing anything about it.’ And again, sometimes? That’s true. But if there’s any part of you that felt a gentle buzz of resonance to the opening segment of this article I am willing to bet that at least once you’ve had the experience of wanting to do something so much that your body impels you away from it.

Here’s what happens: The moment you want something, your hope is bound up in the outcome. For most art—whether you are a writer, a sculptor, a director—the outcome is sharing your vision with others and hoping that they delight in it the way you do. Generally speaking, that’s what we mean when we say a person’s ego is caught up in something—I mean, sure, they could be a narcissist, but most run-of-the-mill ego situations are much closer to a three-year-old bringing a crayon drawing to their parents and hoping that mommy and daddy will love it. It’s such an innocent and vulnerable thing, hoping that others will share the joy with you. So easily crushed. And the amygdala senses that exquisite vulnerability.

Now, this bit is important: Everything that follows that first moment of vulnerable hope is predictable. It is your system behaving exactly as built. Thousands of studies, stretched out across a half-dozen different species, all converge on the same sequence of events:

The moment that hope is threatened—by insecurity, frustration, or logistics—your amygdala sends a rapid warning message to your adrenal medulla—the inner portion of your adrenal glands—which in turn dumps catecholamines (epinephrine and norepinephrine) into your bloodstream. The norepinephrine causes the smooth muscle in your outer blood vessels to contract, squeezing the blood out of them and into the middle of your body. At the same time, acetylcholine signals sent from your brain cause the eccrine glands in your fingers and toes to activate.

Or, in normal English: Your hands and feet get cold and start to sweat. The sweat is one of nature’s great ironies; in the wild, fingertips that are slightly wet with sweat are better for gripping things like spear hafts if you have to fight, or tree branches if you have to climb. If you have to write a five page term paper, however, sweaty fingertips are not helpful for gripping a pen or perusing Google Scholar on your cell phone.

The cold hands are just a part of a wave of initial symptoms that prime you for fighting or fleeing. Your shoulders and jaw muscles tense—the mechanism is the same as the one that squeezes the blood from your toes but as best as we can tell the purpose is different: Your neck and shoulders are catastrophically vulnerable in a physical fight, and by tensing them your body encourages you into guard position, head down, shoulders up, braced. Except fights are supposed to be over quick: Holding guard posture for five hours while writing is a guaranteed stress headache.

There are more changes. Your heart starts to beat harder and your breath speeds up. The muscles in your thighs tense in preparation for motion—your body is quite literally priming you to run, even if there’s nothing to run from, because your fear system is old, and stupid, and it wants your legs to be ready to bolt just in case a jaguar jumps out of the half-written draft of your novel and attacks your face.

If you are sitting in a chair this manifests as an intense physical restlessness, a need to leave. You fidget. Your feet rapidly tap tap tap under your desk. You self-stimulate, chewing your hair, scratching your face, picking at your scalp or massaging your wrists. You snack, because taste is comfort. And if you successfully hold yourself in writing position for an hour you will be rewarded with all of the muscles in your lower back and your ass tensing to the point of bone-deep pain. Importantly, this urge to move, to leave, is unconnected to the psychological desire to avoid your work: You’re not fidgeting because you want to avoid your task. You’re fidgeting because your body has decided, since this is a high stakes situation, to prime your hips to run—and you are forcing them to sit still instead.

And this is to say nothing of the cognitive effects. Your amygdala doesn’t just project to your body; it projects to your brain as well, subtly altering your patterns of attention and motivation. Your brain starts monitoring for threats, which mostly means that you indulge in lurid rumination about the shame of failure. Your brain demands that you find a way to ward it off, as if the correct punctuation, or a perfect arpeggio, or a line inked in just the right way, will protect you from attack.

Ironically, the signal that you most need at this point to break the cycle—a feeling of safety and confidence—is the precise thing denied to you by your body’s fear system.

Worse, your brain doesn’t know where the threat will come from. If it was clear, you could do something about it. Since it’s not, your brain shifts into a well-known “diffuse scanning” mode, where your thoughts bounce around quickly, increasing the odds that they will notice an arriving jaguar, and decreasing the odds that you can focus on your project.

And, of course, underlying all of this there is, in fact, the ever-present desire to avoid. Because the project is daunting. Because you have trouble handling the nerves. Because the frustration drives you up the wall. Because you need to calm down so you can focus. Occurring as it does in the middle of an entire constellation of other symptoms, is it any wonder you can barely attend to the thing in front of you?

Most people experience this as a personal failure. But again, I can’t emphasize this enough: What is actually happening is that your body is responding exactly as it was built to. Precisely. Down to the last catecholamine.


III. The Central Executive

So, why the detailed descriptions of catecholamines, jaguars, and clenching ass muscles?

And the answer is... I don’t know. I’m not sure who I’m fighting.

Somewhere out there at this moment there is a dopamine bro broadcasting a YouTube screed to his passel of credulous young men explaining, with near-pornographic levels of eye contact, that their real problem is a lack of discipline. A few channels down a confident woman with a power suit and a bob is explaining to a crowd of entry-level Silicon Valley assistants that the real secret to a successful career is passion. All of your problems can be solved, guys, if you just feel the right thing forever!

Both of these people annoy me. They’re curiously flat. More like caricatures of advice than something real. I find it hard to hate them, though, because like most influencers they are just echoes of their audience—their advice is tailored, not to solve their audiences’ frustrations, but to resonate with them. When you’re an eighteen-year-old male the defining feature of your psyche is raw, masculine agitation—you want to do something, anything, all of the things, all of the time. So of course you snap to order when a badass with biceps tells you “Know what you need to do, son? Just DO THINGS OVER AND OVER AGAIN FOREVER WITHOUT STOPPING.” He is exactly what testosterone would say if it could speak.

The advice that you just need passion comes from a similar place of longing—this time, for that raw, unfettered joy that powered you through a decade of piano practice without a teacher, a schedule, or internal conflict, until you could play Chopin and your wrist was developing nerve problems. “Gee, what if I could tap into THAT, and point it at my day job?” is one of the most understandable sentiments in the world.

But most importantly, both of those impulses exist as resonant echoes inside the heads of most people. They’re the voice that says things like “If you actually wanted this, wouldn’t this struggle go away?” and “Why can’t you just force yourself to do the thing?”

So maybe I’m arguing against the phantoms in my head.

That tracks. Okay.

So here’s what I’d say to the phantoms in my head:

Discipline and motivation are always enacted against something. If there is no “against” getting in your way, you can move freely, powered by nothing more than whim. Put your finger on your nose (not up it, please). Did you require discipline or passion to do that?

No. It was small.

The voices of discipline and passion—and the crowds who accept them without question—almost never engage seriously with this question of “against.” But against-ness is carved into your neurological architecture. I’ve gone to great lengths to describe the neurology of your fear system, to map out what we know of its physiological and behavioral consequences. But did you know that you have a system in your brain that is built to control it as well? Your executive function, most commonly associated with your cerebral cortex, is responsible for reining in your fear when you need to repress it.

The thing that stuns me about this system is that the way that the cortex fights back against the amygdala is through inhibition. Let me explain just a bit, because this is worth understanding if you deal with anxiety problems in any major capacity:

Your amygdala, once it gets up and running properly, is a bit like a large pickup truck with a full tank of gas. Shift that thing into drive and it will move forward at a speed proportionate to how much gas you give it.

You might imagine that the thinking, planning parts of your brain are like a normal driver in a normal truck, moving his foot between the gas and the brake pedal to speed up and slow down as needed on the road. But you would be wrong.

If we built it to match your fear system, the actual setup in your pickup would be much more like having a truck that decides for itself how much to throttle the gas based on criteria that were decided before you got behind the wheel, and the only control you are able to exert over your speed is the brake near your foot. We call this an opponent-process system—two forces that act against each other at the same time, instead of alternating.

In our imaginary truck—let’s call it the Chevy Amygdala—using the brake is costly, and it starts to lose its power fast. It’s also pretty weak, and can be overwhelmed: If something hits the gas hard enough you are going to move forward, fast, regardless of how hard you press the brake. This is, in fact, what happens in situations of raw panic: Your sympathetic nervous system hits the gas so hard that no brake can stop it. But even in more moderate situations? Your cortex-brake will fail long before the car runs out of gas.

And the sickest fact of the system is that under the weight of long-term stress, the system itself is altered so that your brake is less effective. Your fear response follows a two-wave pattern. The first wave, the SAM system, is responsible for most of the classic symptoms of fear and anxiety, detailed above. The second wave, the HPA axis, is something like the “long-term system.” It peaks later and is capable of remaining active for weeks on end. Its primary purpose is to shift your body into something like a “long-term threat orientation.” It does this primarily through its mediating hormone, cortisol.

And the long-term effect of cortisol on your brain is to down-regulate your cortex. Because, to help prepare you to handle threats over the long-term? Your brain is wired to make you twitchy.
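
If it helps to see the shape of the failure, here is a back-of-the-envelope simulation of the Chevy Amygdala. Every constant is invented for illustration (the decay rate, the cortisol penalty, all of it), but the dynamics are the point: the brake fatigues with use, and chronic stress weakens it from the start:

    # The gas (amygdala) is set before you get behind the wheel; the
    # brake (cortex) is your only control. The brake fatigues with use,
    # and chronic cortisol exposure weakens it from the very first step.
    def drive(gas, brake_effort, steps, chronic_stress=False):
        brake_strength = 0.5 if chronic_stress else 1.0
        speed = 0.0
        for t in range(steps):
            speed = max(0.0, speed + gas - brake_effort * brake_strength)
            brake_strength *= 0.9  # braking is costly; it loses power fast
            print(f"t={t}  speed={speed:.2f}")
        return speed

    drive(gas=0.5, brake_effort=0.5, steps=5)                       # rested: holds briefly, then slips
    drive(gas=0.5, brake_effort=0.5, steps=5, chronic_stress=True)  # chronically stressed: never in control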

So the progression: The act of caring about something means that you will stress out about it. That stress has a series of predictable outcomes that, by design, will get in the way of your ability to work. And the same stress will undermine your ability to press through the noise.

Wonderful. So, what can we do about it?


IV. How to Control Your Shadow

For starters, don’t get too enchanted with the system. Really.

Remember that fear is a shadow, cast. There are a few ways a person can get entangled in that shadow—getting caught up in the story being told by your own internal ruminations is one. Scapegoating your nervous system for solvable problems is another. Obsessing over the details, as if reciting the right catecholamine will deliver you from evil, is a third. Or, the most ridiculous one, and the bane of influencers—bonding over the misery instead of the greater joy it’s derived from.

Usually there’s some combination of toxic beliefs at play when a person gets too caught up in their own fears: That their problems are unique; that the fear is the whole problem and can only be solved on-site; and that mastering it requires uncovering some sort of secret—a special key that will unlock it. There are good counterpoints to each of these lies.

  1. Your problem is not unique: the “fear response” that paralyzes you is shared by every other member of the human race, including the ones who succeeded in mastering themselves. Your system may be more active, it may flare up at really bad times, in really unfortunate patterns, and the social system you’re embedded in may be especially triggering (ask me about graduate school). But your problem is not special. The timing, the triggers, and workarounds for your fear response may be idiosyncratic, but the mechanisms are old. Species-level-old. You share them with dogs.

  2. Your fear exists in a larger context. If fear reliably blocks you from working and can’t be fought directly then you may need to re-envision your workflow itself so that fear happens less and new opportunities for dealing with it sprout naturally out of your new structure. In fact this is sometimes the only way you can do things—the systems that trigger your fear response are powerful and can keep you locked in a self-defeating cycle for years (again ask me about graduate school).

  3. The nuances don’t matter. A lot of this article is a high-level description of the amygdala and some features of the fear system, but most of what I chose to focus on here are the human-level things that the system does, because really, those are the only events that matter. The big shift—the one that helps you break the cycle—doesn’t happen after you learn all the system’s parts. It happens the moment you learn that the system exists and acts separate from your choices. At that point you move from a moral understanding where your primary option, for reasons I’ve never fully understood, is almost always some variation of “Try super hard to muscle through this and be a good boy”, to a mechanical understanding where you have options. The moment you start thinking mechanics you’re on your way to control.

And, really, that is the point of this whole article. I want you to know that the system exists, because once you know, you can take that crucial leap from thinking of all your problems as “behavior of myself, a subject” to thinking of them as “behavior of my fear system, an object.” And once you do, paths start to open up. Productivity strategies start to present themselves as “ways of navigating the fear system,” and new, surprising avenues open up. I’d like to offer a few, in closing.

Care less

Fear is caused by care. You can build surprisingly effective strategies around the principle of “care less.”

For example, if you’re blocked as a writer, a lot of times the big problem is that you don’t want to write something bad. Which raises the question… bad for whom? The answer to that is that there’s almost always a phantom audience in your head that you don’t want to let down.

Okay. Fine—so, how do we care less about a phantom audience? Two ideas spring immediately to mind:

  1. Write things that will never be seen by an audience, ever. I have a folder in my email account titled “Mocks.” When I’m learning things, I write mocks using The Most Dangerous Writing App, which will delete all my text if I stop writing for more than ten seconds. I use mocks to extract my early thoughts on whatever I happen to be reading. I can usually write about 1500 words in a half hour. Most of these will never see the light of day. But the ideas will occasionally be upcycled.

  2. Explicitly disavow the goal of pleasing an audience. Write for other reasons instead. Write to learn. Write to describe. Write to disentangle spiritual knots. Sometimes after you’ve done this you’ll look over your writing and realize it’s shareable. Share then.

That last bit, by the way, doesn’t mean you should never write for an audience. It means you should have a space where you can escape them. Most disciplines have some form of exploratory, self-edifying work, precisely for this reason.

Shadow Mechanics

My favorite solution, and really, for most people I know, the only workable one. Once you’ve got a good idea of how the fear system works—the behaviors it impels you to, the things it wants from you—then you’re in a good position to manage what’s happening to you, directly. Some examples:

  1. Vacillation. Fear WILL cause you to vacillate back and forth. You WILL leave your chair. That part of the system is inevitable. But how long you stay out of your chair is much more controllable. A very modest amount of control exerted to ensure that you don’t end up doomscrolling for five hours can pay huge dividends.

  2. Self-Soothing. Figure out what your body reaches for when you’re stressed and give it the least damaging, most productive version of that. Want to know one thing that reliably helps me work better on my bad days? Having a tin of salted almonds at my desk. I almost always bail, looking for salty food, when I get stressed. Almonds short-circuit that.

  3. Stretch. And, more broadly, keep up your self-care. It’s a productivity hack, I swear. All of those physiological responses I listed earlier? When you’re trying to work, they create noise that your brain has to exert great effort to repress. You can take a load off of your mind by tending to your small physical discomforts so they reduce your cumulative “noise” load.

A Note on Becoming Stronger

There is a final thought I’d like to add, which I’ve left out of the piece thus far. There is a counterbalancing force in the amygdala that is capable of down-regulating your fear response.

One of the most important questions in the study of fear and stress is: what goes into the brain’s decision to fear something? Most theories converge on a few key criteria—a system of hurdles that an event has to jump before it triggers the amygdala. The first two hurdles are relevance and threat—is something that matters to me under threat? The third hurdle is vulnerability: “Can I manage this?” If you can, your fear system eases.
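
In mechanical terms, and with the caveat that this is my simplification rather than any published model, the gate looks something like this:

    # The three hurdles, in order. Booleans are a gross simplification
    # of real appraisal, but the structure is the interesting part:
    # competence sits at the end and can veto the other two.
    def fear_fires(matters_to_me: bool, under_threat: bool, can_manage_it: bool) -> bool:
        if not matters_to_me:     # hurdle 1: relevance
            return False
        if not under_threat:      # hurdle 2: threat
            return False
        return not can_manage_it  # hurdle 3: vulnerability

    # The same deadline, early and late in a career:
    print(fear_fires(True, True, can_manage_it=False))  # True: the system spins up
    print(fear_fires(True, True, can_manage_it=True))   # False: experience quiets it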

So the final thought I want to offer is this: Experience will make things easier for you automatically. I promise you. The fear never goes away exactly, but there’s a sweet spot you can hit after a long familiarity with your craft—you will have faced the same ambiguities and fears so many times that you will know the road through them, and that makes an enormous difference. The shadows really do lose their grip, with time.

Did you really read this whole article? If you did, wow. You are either a literary marathon runner, or you were really interested. In either case, I’d be honored if you subscribed.

Comments Thread: An Atlas of the Self-Help World? (March 9, 2026)

Comments thread for all readers of the “Atlas of the Self-Help World” article.

Any thoughts or comments? Post here! Substack actively blocks me from allowing free readers to comment on my work if any part of it is behind a paywall. That seems stupid to me. So I’ve engineered this workaround—any thoughts, comments, or ideas you have about the post, feel free to leave them here and I’ll be happy to discuss them with you.

James

An Atlas of the Self-Help World? (March 9, 2026)
Yarr…. (Image by Author, via MidJourney)

Nobody I know of has done a systematic study of self-help, personal development, and productivity literature. I’m not sure why, either, because it seems… well, important.

Let me explain. In my second year of graduate school I burnt out—an episode I mostly remember as “That time my internal monologue just screamed in pain for eleven months.”1 Predictably, I had trouble being productive during that time because of, y’know. The screams. It took years to get out, and I suspect that I never healed fully. But I was able to direct the healing process a bit, at least, and since I value work I focused on that.

Two years later I spoke with a colleague who had burnt out the same way I did. Our stories were similar, in some regards. But notably, the experience had led us to different conclusions—especially regarding writing.

Academics write. The academic market is a brightly colored bazaar of words. As an academic, you write books. You blog. You write lectures and obsess over the content of PowerPoint slides. But mostly you write academic journal articles, which are 8000-word-long sins against human language.

A journal article is a linguistic Frankenstein. It is a charnel heap of statistics stitched together by sinews of jargon. The job of a professor is to add lightning to these lifeless piles and get them slouching off towards the village to greet the local pitchforks. It’s all very mad-science, in the sense that you’re doing science and most of the time, you’re mad at it.

My colleague was discouraged by her struggle to produce these reports. On reflection, she had concluded that her problem was a lack of passion. Surely, if she cared about academia, it wouldn’t be such a fight?

I was skeptical. Writing academic papers is like lugging a cow uphill; gravity is against you. It doesn’t matter how much you like the cow. But it wasn’t my place to gainsay her, and she’d already made her decision: She found meaning elsewhere after graduation, moving quickly into the business world. Last I heard, she is happy, successful, and very well-paid.

In contrast, I doubled down: I considered the process of technical, disciplined writing to be a challenge, and I wanted to win, burnt out or otherwise. Today I write close to a thousand words per day.

The Many Varieties of Self-Help

At this point I could flutter off into the woo-woo like a manic pixie.2 I’m a psych major: we can generate enough magical pixie dust in a single conversation that you could snort a line of it off of a mirror. So, I could talk about positivity, reduce my colleague to a parable, talk about mindset and life outcomes and so on. Sprinkle, sprinkle, snort that dust.

That doesn’t sit right, though. She’s better paid than I am. Also she’s smarter than I am: I stack up well, but every now and again someone comes along who makes me feel like the dull child sitting in the corner with the pointy cap. She’s that person. So, I’m pretty sure that her conclusion came from a place of deep understanding rather than a sub-optimal “mindset.”3

Just as important, though—the conclusion that was true for her wasn’t true for me. After contemplating my own soul I decided I had a different problem, which required a different vocabulary. I wasn’t passionate, exactly, but motivation wasn’t my issue: My internal parliament had voted and the yeas had it. My language was logistics, and budgeting, and infrastructure. It’s served me well in the years since.

My point, I suppose, is that everyone must find the right language to articulate the path through their struggles. And self-help is the archive of that process. There are hundreds of languages available to you, free for you to try on like pants at a thrift store. Morals? Hard no; doesn’t zip. Medication? Ditto. Meditation? Zen feels good on these legs. Mechanics? Perfect fit. Two pairs, please; out the door.

And I can’t, for the life of me, figure out why we haven’t systematically documented this advice. If this stuff is the trace record of our personal solutions to our greatest frustrations, shouldn’t it matter enough for someone to, I dunno… explore it? That’s what I’ve been working on lately.

Anyhow, there are two ways to divide up the map of self-help. One is conceptual: I’m working on that now. But today? I want to talk about genre. There are distinct “traditions” of self-help literature, and I wanted to share my thoughts on a few here.

Image by Author, via MidJourney

I. Main-Line Self-Help

The idea of genre usually implies some sort of artistic or literary trope. Fantasy implies elves and dragons. Sci-fi implies a deep exploration of the societal implications of technology. But what do you do if a genre’s primary aesthetic signature is “tacky aspirational kitsch”?

For that reason, I think it is hard to imagine self-help as a genre of its own. It is not artistic, nor does it aspire to be. It is banal. It feels like a genre in the same way that B2B marketing copy is a genre; it’s an abdication—the hole where a genre could be.

But if you can get over that and squint a bit, self-help behaves suspiciously like a genre. It has embedded values, assumptions of form, and conventions that have persisted for a long time—sometimes centuries. It also has a history. Self-help, as you and I recognize it today, almost certainly descended from a single book—Samuel Smiles’ best-seller Self-Help, which was written in 1859, and (unsurprisingly) is also the source of the genre’s name.4

A few features of Smiles’ Self-Help, which have persisted in the genre since he wrote it:

  • It is implicitly framed as a successful man offering moral instruction to enterprising young men.5

  • The core mechanic is character portraits. The author repeatedly points to examples of successful professional men and highlights features of their character that were supposedly responsible for their success. “Look here, young man! This man worked hard! You could be like him if you work hard, too!”

  • The genre is devoid of technical information. It is primarily a moral and motivational work.

I’ve been reading Smiles’ book and it’s actually quite endearing—expect a review some time in the future. But the genre that grew out of it concerns me: I think it is ill. Anemic. A shadow of what it should be. To my delight, G.K. Chesterton cottoned on to the genre early, as it grew, and was absolutely vicious in diagnosing its problems. I want to discuss that here, because my concerns echo Chesterton’s:

In his article The Fallacy of Success, Chesterton argued (correctly) that self-help writers invariably omitted the most important part of success, which was having the technical knowledge required to do a quality job. Instead, self-help writers wrote fawning portraits of admirable people, reveling in the mystery of them. The enchantment was the point.

Put another way, self-help literature is admiration porn. Its mechanics are similar to regular porn: In both, a real person is stripped and reduced to key features presented at enticing angles, carefully sequenced for maximum excitement. The big difference is the target drive. Regular porn targets the sex drive. But self-help? It targets the biological drive to mimic, which is the basis of learning.

And, like porn, most self-help offers nothing real. There’s a switcheroo involved—the portraits presented are those of successful entrepreneurs, but the behaviors emphasized are the ones that, if copied, make for a good employee of the same. It’s a twofer, producing admiration of the tycoon and a character suited to serving him well rather than competing with him.

I’ve got a lot to say about this dynamic, but for now I’ll say that I think a large part of the reason so much self-help literature fails to help people is that it is written to produce inspiration rather than to provide information. A person who needs inspiration will buy twenty books. A person who gets information will buy just the one.

Smiles, at least, seems like a genuinely good man who thought of himself as providing moral instruction to young men (a tradition taken up more recently by Jordan Peterson), but he laid down the forms of the genre and others ran with it. Let’s talk about them next.

Representative Works:

Self-Help (Samuel Smiles)
How to Win Friends and Influence People (Dale Carnegie)
How to Get On in the World: A Ladder to Practical Success (Major A.R. Calhoun)


II. New Thought

Skeptical as I am about self-help, it still shines in one area: Moral instruction is good, and each new generation needs principles of social connection and industriousness translated into language that reaches them. Your life really will get 70% better if you shower daily, finish projects you start, and try to understand the perspectives of other people. If the language that gets you there sounds like “Murdershank the dragon of self-doubt, my son, and you too can rise to the lobster throne!” or “By expressing your cerebellar chakra in reverse you can assert your divine feminine against the patriarchy” then that’s what gets you there. Every generation needs a legend, and it must be theirs.

Young men in particular are wired so that if you tell them showering is important, they’ll go “Oh adults want me to do that,” but if you convince them it’s epic they’ll go “F*CK YES I SHALL CHEW MY WAY UP THE MOUNTAIN OF CONQUEST WITH MY TEETH” and (lamentably) that is how it is.6 It’s pretty clear that’s what Smiles was doing in Self-Help: It was Victorian England, and science was epic. Smiles was dangling character portraits in front of young men and saying “See, son? You could be a f*cking ORNITHOLOGIST. Isn’t that cool? Work hard and be good! This could be yours.”

So, I get that. I’m not sure that it’s the path to solving real productivity problems like writer’s block. I’m pretty sure the genre isn’t helpful for self-diagnosis. But it has its purposes and all the books are worth it if you stumble across a single, golden piece of advice.

New Thought, though, is where I draw the line. New Thought formed shortly after mainline self-help: It’s an amalgamation of Smiles’ moral instruction and Victorian mysticism, and is predicated on the idea that if you cultivate positivity the world will hand itself to you. It is Think and Grow Rich, by Napoleon Hill, and a dozen books like it. And I’ve watched it wreck too many people, watched it be adopted as the lingua franca of too many hucksters and abusers, to ever be comfortable with it.

Modern newthink is the product of many influences—1960s crystal tie-dye ganesha, a whorl of chaos theory covered in a thin foam of quantum mystique—but the core of newthink is Victorian mesmerism. It started with Phineas Parkhurst Quimby, who popularized the theory of mind-over-body and psychic healing. Quimby’s primary interest was medicine—he argued that incorrect beliefs could produce illness, and that correcting those beliefs could restore a person to health.

I am pretty sure about two things from my readings of Quimby:

  1. He was a kind, honest man, enthusiastic about his cause.

  2. (This bit drives me mad) He was correct, at least in the sense that he correctly observed the placebo effect in vivo and concluded that it must be important.

The problem was that the mechanisms behind the placebo effect were poorly known and its boundaries were unexplored. Quimby and others filled the gaps with metaphysics and the movement tried to extend the principle to all aspects of life. From this extrapolation emerged a philosophy arguing that correct mental alignment—positive thought, focused desire, unwavering belief—could attract success.

I understand from a long history with this movement that it is formed around a kernel of truth, and has some useful advice. But I’ve never seen more hucksterism and fraud concentrated in a single lineage than in the New Thought tradition and its derivatives. I will withhold judgment on individual works until I read them, but I’m skeptical of newthink as a whole.

Anyhow, some representative works.

Think and Grow Rich (Napoleon Hill)
As a Man Thinketh (James Allen)
The Law of Attraction (William Walter Atkinson)


If you have any thoughts or comments on what you’ve read here, you can visit the comments thread at the following link:

Image by Author, via MidJourney

III. Military Self-Development

If we define self-help as literature that instructs people on how to develop their character with the aim of being successful, then it seems evident that there must be a lot of literature that is essentially self-help even if it goes by other names. Another way to think about it might be to treat self-help as one branch of a larger genre—say, personal development. Once defined that way there are many self-help adjacent spaces we can look to for insightful advice. The most obvious one, to me, is the military.

I think that many in the military would dismiss modern self-help because of the shameless self-promotion the genre now implies. But if you go back to the Victorian self-help standard, where personal advancement was considered a consequence of virtue and usefulness, then there is some clear overlap between self-help writing and mid-20th century officer development literature.

The most noteworthy feature I’ve seen in military self-development literature thus far, though, is its motivational component. This is hard to explain (and I note that my understanding of the military literature is very incomplete, so this is just a vague first impression) but I’ll try to elaborate.

When I left academia I had to grapple with questions of motivation and direction. When do I go? What do I do? Why? How do I generate the will to persist in an uncertain world?

From what I’ve seen, the military renders these questions pointless. You don’t need to think too hard about role-craft when you have a manual. Intrinsic motivation is easier to summon when fellow soldiers are staking their lives on you being the best version of yourself. Questions of self-definition, then, are addressed with the simple advice “follow the guidelines” and questions of motivation are addressed with the simple advice “right face, son—look at the soldier next to you. That’s your motivation.”

For that reason, you just don’t see navel gazing in military literature the way you do in civilian writing. Instead you see appeals to the superordinate body—duty, honor, loyalty; to god, country, your fellow soldier. That’s enough.

The little I’ve seen of military self-help, I find inspiring, because it is suffused with that esprit de corps. But that same spirit suggests a limitation — for civilians who lack that sense of immediate connection and camaraderie, this type of motivation is probably much tougher, so a lot of military advice may only port to civilian life in a limited way—with the result that a lot of soldiers must be super frustrated when their strong intuitions about duty, self-sacrifice, and discipline aren’t shared widely outside of the military.

But I imagine that, for soldiers who are able to carry that sense of belonging outside of the military—who can transfer that connection to their family and community—a basic set of military principles are probably all that they need to tackle almost any set of motivational challenges.

With that said, I suspect soldiers who lose that sense of connection might flounder. I know from my own experience—different in almost every regard, except perhaps the part where I had to rebuild—that constructing a new set of motivations from scratch is fiendishly difficult, and takes years.

Anyhow, I’ve got a lot more reading to do regarding this but the small bit I’ve done has gone a long way to reminding me why I find soldiers’ character so admirable.

Some literature from this tradition. The first is a good example of enlisted soldiers’ attitude towards orders. The second, for officers, is more holistic:

a. Field Service Regulations 1905, Article II: Orders
b. The Armed Forces Officer — Department of the Army Pamphlet 600-2 (1956)


Anyhow, peace and love to all of you—the extra thought is below, for subscribers. If you like what you read here, consider sharing.


And if you’d like to support the work I’m doing here, taking the methods and techniques of social science and applying them to the public sphere, consider subscribing! I’ve collected the first round of data for the Lexicon project and will be sharing some preliminary results in the near future.


Extra Thought: Christian Self-Help


Index: Lexicon of Struggles (February 28, 2026)

Author’s Note: Hello to my readers. I’m introducing a new type of post—an Index, which will serve as a navigational hub for an arc (also: a “sequence”) of research topics for my Moonshot newsletter.

This is a purely administrative article. Think of it as the equivalent of a Table of Contents.


Introduction:


Arc 1: The Lexicon of Struggles (February 26, 2026)

Author’s Note: This is the introductory post for my first arc of this project. If you’d like to see the other articles in this arc, you can find the index page here:

Image by Author, via Midjourney

Self-Help

I was raised in American self-help culture.

That phrase, self-help culture? Well, it seems a little bit strange as I type it. But really I can’t imagine a better term for it than “culture,” because self-help culture is a culture, in almost every way that the word “culture” means something—except, of course, the part where our society has decided that it is illegitimate and chosen to look the other way.

Like other cultures, it has a history. The phrase “self-help” itself comes from the saying “God helps those who help themselves.” Samuel Smiles chose that quote as the inspiration for his book Self-Help, which was released the same year as Charles Darwin’s The Origin of Species. And, like Darwin, Smiles is the grandfather of an entire intellectual lineage. His lineage is less savory. It is less prestigious. It contains more grifters and fewer prize-winning biologists. But it is no less real.

Like other cultures, it is inherited. It was passed to me through my dad, who introduced me to Kevin Trudeau’s audiobooks Mega Memory and Mega Speed-Reading when I was a kid, and it has been a part of my life since then.1 Trudeau is, himself, a fascinating case study—in many ways a foreshadowing of Donald Trump—and both of them inherited their hustle-hustle attitude from the lineage of salesmen and businessmen that came before them. I’ll probably write more about this history and its connections to Trump someday2, as the downstream effects of mid-American sales culture are a running theme in my life.

Like other cultures, it’s marked by deeply rooted cultural practices and values, and is embodied in specific cultural artifacts. It has its quiet codes and its overt signals. You recognize this culture, even if you haven’t thought about it much. It’s the culture of Amway and traveling salesmen and old-style direct-mail copywriting. It’s the culture of chain e-mails and vitamin supplements and seminars that promise to teach credulous young men how to “win” as day traders. It’s the culture of self-help books. It is the culture that spawned the audiobook scene and probably the podcast scene. It lingers in conference centers, at hotel bars, and in corporate boardrooms. If it were an aesthetic it would be American Salesman, except for the part where the whole thing (bizarrely) originated in Victorian England and sailed here on steamships.

Finally, like other cultures, you can’t ever really shake it if you were raised in it. It’s the culture that propelled me into graduate school to study the mechanisms of charisma and, later, productivity. Its history is fascinating to me, I love its preoccupations, and while I detest its excesses I have a surprisingly large number of good things to say about it, as well. It’s not the people on the top of the multi-level-marketing scam that make the culture beautiful. It’s the people on the bottom.

They’re my people, even if I’ve floated away from them, like a lone dandelion seed carried away by an errant gust. I like the way they smile. I like the way they care. I like the way they believe in the power of community to lift them up and the way that they give back to it. I like the way they keep hustling. I like the way their business isn’t all that different from their religion, and the way that they believe that success at either requires the same set of lessons.

I hate the people who see this as weakness and find ways to exploit these virtues.

Much as I would love to keep talking about self-help culture, though, there are really only two parts of it right now that are pertinent to us. The first is that it is the parent of many of the greatest monstrosities of the modern age. Silicon Valley’s “find your passion” culture is probably its ugliest (and certainly its most corpulent) child. It (probably) spawned the world of cryptocurrency—though I need to investigate further. It smells familiar, at least.

And it is also the progenitor of the self-help industry and its endless books. If you’re here wondering about how to be more productive, you’ve probably spent more time steeped in it than you’d care to admit. Have you read The Seven Habits of Highly Effective People? Or How to Win Friends and Influence People? Or Think and Grow Rich? Congratulations. It’s your culture too, at least a little bit.

Second, also pertinent to our talk: Nobody has really studied this culture. Not that I know of, at least. Academics seem to avoid it, which is odd, because from an anthropological perspective it’s absolutely wild.

A Map of the Territory

Anyhow, I find myself coming back here for a reason. I am interested in the study of work, creativity, and productivity, and one of the things I’ve been realizing is that we have very little formal knowledge about the landscape of work done on this topic.

Don’t get me wrong—you and I understand this landscape intuitively. If you’ve ever tried to be more productive, if you’ve tried different time management techniques, or read books on how to follow your passion, or watched YouTube videos on dopamine, then you’ve experienced this culture firsthand and you’ve probably gotten at least a sense of the looming shape of it.

But to my knowledge there aren’t a lot of people who have tried to articulate the contours of the world of self-help, to consider the ideas that compose it, or to map them. This leaves a lot of weird empty spaces—open questions that have been left to marinate in their own intrigue for decades. A couple:

  • You may be familiar with dopamine, since it’s become an integral part of the language that people (young men especially) use to describe discipline and productivity. For all its importance to public discourse, though, it’s curiously absent from the academic literature on productivity-adjacent topics. Why?

  • You probably haven’t heard many people talking about self-efficacy lately, at least not as a “big deal” theory of motivation and behavioral change, but it is overwhelmingly the most common subject that academic researchers reach for when studying achievement. Again, why?

I’ve been thinking about where to even start with my study of work and productivity. Say that you are a writer, like me, and you have a secret dream of writing an article per week, but for some reason you just can’t seem to get there. You want to know how to go about it. You want some place to start—you’re reasonably confident that if you can just figure out where the base of the mountain is, you can chew and claw your way up to the top yourself. Where do you look?

Most people that I know just kind of buy books and browse web articles until they find something that resonates with them. And that’s fine. But what’s surprising to me is that millions of people do this all the time and nobody has ever, to my knowledge, tried to map the territory that they’re covering.

  • Say your path up the mountain of productivity is something like passion. You read books on how to discard peripheral distractions and focus on the one thing that matters. In your free time you try to figure out ways to ensure that you are properly psyched up each morning. You spend a lot of time thinking about how to find a way to connect with the deeper meaning of your work, searching for that well of enthusiasm that will power you through it. How many other people are trying what you try? Do we have any clue?

  • Or say you’re a dopamine bro and you’ve got a wealth of YouTube videos that have explained the neuroscience of discipline to you in excruciating detail. How big is your niche?

  • Or say you’re a procrastinator (*cough, cough, ohgodimsorrythisarticletooksolong, cough*). How many other people like you frantically google procrastination advice each week, trying to figure out how to manage their time better?

Which of these approaches are more popular? Which are more successful? What are people saying in each one? Has anybody ever tried to create a “master taxonomy” of the approaches that people use to try to be more productive? Do we even have any way of knowing about these things at all?

The Lexicon of Struggles

With the above questions in mind, I’m introducing the first arc of my project on work. You may recall a couple articles ago that I introduced you to my first real project as a public academic, which is to study work and productivity in the hopes that I might find something useful to say about it for people who want to be creative, but feel blocked. That “project” is split into smaller arcs, like this one, where I tackle a specific question, or challenge.

In this case, the first thing I want to do is map the territory of advice that people give for how to overcome work problems. I can’t map it completely, of course—writing an article about every possible answer that people have given to the question of productivity would take an incredibly long time—but I can at least get an overview of the field for you.

My current project is… well, it’s not simple, exactly. But it’s straightforward. As best as I can tell, there are two big worlds where people talk about productivity. One is hustle culture—the world of self-help. The second is academia. I’m interested in mapping the concepts that the two cultures use to explain productivity (or lack thereof). And then I want to see how much they overlap.

My plan, to start, is to gather a list of the hundred most popular key concepts that each culture uses to describe productivity. I’m well on the way to doing this already. Once it’s done I plan to construct a “prevalence index” that allows us to rank the topics in terms of how popular they are in pop culture and in the world of psychological research.
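To make “prevalence index” concrete, here is a minimal sketch of one way such an index could be computed. Every concept name, count, and weighting choice below is a hypothetical placeholder, not actual Lexicon data, and the real index will almost certainly be built differently:

```python
# Hypothetical sketch of a "prevalence index": rank productivity concepts
# by how often they appear in two corpora (pop self-help vs. academic).
# All names and counts below are made-up placeholders, not Lexicon data.

pop_counts = {"dopamine": 950, "procrastination": 700, "self-efficacy": 40}
academic_counts = {"dopamine": 60, "procrastination": 800, "self-efficacy": 900}

def normalize(counts):
    """Convert raw counts to shares of the corpus, so corpora of
    different sizes can be compared on the same scale."""
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.items()}

pop = normalize(pop_counts)
academic = normalize(academic_counts)

# One simple index: average the two shares, and keep the gap between
# them as a signal of which culture "owns" the concept.
index = []
for concept in sorted(set(pop) | set(academic)):
    p, a = pop.get(concept, 0.0), academic.get(concept, 0.0)
    index.append((concept, (p + a) / 2, p - a))

for concept, prevalence, gap in sorted(index, key=lambda row: -row[1]):
    side = "pop-leaning" if gap > 0 else "academia-leaning"
    print(f"{concept:16s} prevalence={prevalence:.2f} ({side})")
```

The ranking itself is only half the point; the gap between the two shares flags which culture “owns” a concept, which is exactly the kind of mismatch the footnotes below hint at.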

That’s not the only thing I plan to do, of course. I don’t want to yank the conversation here into the world of mathematics—the prevalence study is supposed to be a sort of preliminary guide to the territory of productivity advice. Once we’ve mapped the territory, I want to explore the biggest spaces there. Which concepts are talked about by academics and pop culture alike?3 Which ones are big in pop culture but nobody in academia links them to productivity?4 Which ones do academics obsess over that most people outside don’t hear about at all?5 Are there any that both are ignoring, which might be useful to know more about?6

In the end, my goal is simple: I want to give you a guided tour of the major ways that you can find yourself blocked. And I’d like to try to cover each of those in a way that brings something new and valuable to you. So, here goes.

If you’d like an easy link to the other posts in this arc, the index page is linked below:

Footnotes

1

For more on Kevin Trudeau you can simply see his Wikipedia page. It’s quite the read.

2

When I first saw Trump, the parallels between him and people like Kevin Trudeau were very apparent to me. I was exasperated with my family members who were taken in by him—how could they not have seen that he was basically the same archetype? But I realized much later how stupid I was. They did see. That’s what attracted them: if you want to understand the phenomenon you need to understand that the “Rebel Businessman” archetype runs incredibly deep in American culture, and so does the “Whistleblower” archetype. And the “Shadow Government” archetype. Really the entirety of Trump’s marketing strategy makes deep sense in light of this culture, as does the trust that people from that culture have in him.

3

Hint: Procrastination

4

Hint: Dopamine (probably—still looking into this one)

5

Hint: Self-efficacy.

6

Hint: Pre-crastination. Yes, you read that right.

]]>
<![CDATA[Community Comments: Buddha, Love-Work & Hate-Work]]>https://jimhorton.substack.com/p/community-comments-buddha-love-workhttps://jimhorton.substack.com/p/community-comments-buddha-love-workMon, 02 Feb 2026 10:48:19 GMT
Image by Author, via MidJourney

Author’s Note: Hello, everyone. I need your help! I’m going to start engaging with commenters in my articles. I want to do this for two reasons. The first is that conversations (much like books) take me out of my head, which helps me grow as a writer and also helps me write to others, rather than just writing at them.

The second reason is that I want to showcase other people and link to their work. If you’re adding thoughtful ideas to the world, I’d like to use my platform to draw attention to you. For that reason, I’ll be writing community-response articles like this one. If you’re interested in joining the dialogue, please add a comment to any of my main-line articles. I can’t make guarantees about who I’ll respond to, but I’ll try to engage and boost people regularly.

James

Community Comments

I. ‘Twas Buddha

Right. So, this first comment is just incredibly fun. In response to my last article, Baird pointed out the following:

American psychologists lack that historical perspective you write about because they think “psychology” began with William James. Buddhists and Stoics and others were doing fine-grained cognitive analysis centuries before Lazarus et al.

If you’re not familiar with Baird’s work already, you should go check out his publication, Human Nature. He’s wonderfully insightful and a natural curator—follow him for any amount of time and he’ll almost certainly point you to something new and worth seeing.

In this case Baird’s comment touches on a missing part of the story of the cognitive revolution—one that I had left out of my account almost entirely. Baird is correct that Buddhists got to cognition first, long before the West. But also—if you want to understand what went on in the 1960’s and 1970’s, not just in cognitive psychology, but in the culture at large, you need to understand Buddhism and, to a degree, the larger mid-century obsession with mental states.

If you want a picture of the cognitive revolution, specifically, I think a good place to start is the following three people. Each will send you down a different historic rabbit hole. Each contributes in a major way to the cognitive revolution.

  • Herbert Simon. Look into Herbert Simon if you are interested in understanding the genesis of Cognitive Psychology and its departure from the Behaviorist paradigm of the 1950’s. Working in 1956 and 1957 Simon began arguing that it was both possible and necessary to model human cognitive processes. He was also mindful of the pitfalls of doing so, since cultural attitudes toward the mind were skeptical, still tainted by the memory of Freudian woo-woo. Simon’s contribution was to use computers to model human logical processes—a method which he hoped would help the new science legitimate itself.

  • Al Hubbard. If you had to pick a single person to designate as “most responsible for the 1960s” I don’t think you could do better than the man who introduced over 6000 of society’s elites (by his count) to their first LSD trip between the years of 1951 and 1966. There simply are not words to describe this man. He radiates major Main Character energy—as in, “Man decides to change world. Succeeds.”

    You can read about Hubbard in Michael Pollan’s How to Change Your Mind. Other figures such as Timothy Leary typically get credit for introducing psychedelics into mainstream culture in the 1960’s but that’s not really accurate: If I want to understand why a boulder rolled downhill and crushed a house, I don’t care nearly as much about the tipping point where gravity took over as I do about the man who spent fifteen years lovingly inching it toward the precipice.

    For that reason, if you learn about Al Hubbard, Leary seems like an afterthought.1 To the degree that Hubbard popularized LSD among the movers and shakers of world society, he also had a second effect, one directly pertinent to the cognitive revolution: through choice, and work, Hubbard and (I shit you not) his leather satchel full of LSD persuaded an entire swath of society’s elite to become deeply interested in the mind.

  • Shunryu Suzuki. In the mid-20th century Suzuki migrated to the United States from Japan and opened the first Zen meditation center in San Francisco in 1961. He was greeted by a culture of young people (largely Beatniks, fresh off their latest reading of Alan Watts) who were eager to know more about spirituality and the mind, and who likely had at least a passing familiarity with hallucinogens. In Suzuki they found a teacher who showed them the depths of Buddhist tradition and also that it was possible to explore the mind in a systematic and disciplined way. Also important—I think that many of the concepts explored by Buddhism ultimately permeated cognitive psychology, including attention, the self, and the interplay between mind and body.

I mentioned in Why Your Brain Fights You that academic questions (and arguments) are the bubbles that surface atop the froth of human concern. We know that Buddhism itself didn’t really flourish in psychological research until the 1970’s (it is now fully entrenched, though perhaps in reduced form, as the study of mindfulness) but what do you think was happening in Psychology in the late 1950’s and early 1960’s?

The answer is that the new research on the subject of mind, rigorously developed by Herbert Simon and others, was meeting a generation of young students who were familiar with Buddhism both philosophically (from the Beat generation and Alan Watts) and sometimes practically (from Suzuki or other Buddhist teachers). Many of them would have known somebody who had tried hallucinogens. Some would have tried hallucinogens themselves. All of them would have been part of a culture that was buzzing about mind.

There are other names from that time who deserve mention. Timothy Leary and Ram Dass are such obvious names that they hardly need comment. Noam Chomsky looms over the entire era, the towering intellectual giant who went toe-to-toe with the aging champion, behaviorism, and knocked it out cold. Ulrich Neisser researched the mind rigorously for a decade and gave the field of Cognitive Psychology its name in 1967.2

But if you want to imagine how it started? Picture a professor in a tweed jacket and horn rimmed glasses lecturing on computer models of mind to a class of bright-eyed, eager graduate students who argue about hallucinogens and Buddhism in their free time. That image may not be spot on, but it’s not far off.

Image by Author, via MidJourney

II. On Hate-Work

One of the most important lessons I have ever learned regarding work and procrastination is that work done immediately before a deadline is fundamentally different than work done in advance of a deadline. They draw on different sources of motivation, require different tricks for navigating pressure and organizing, and require different attitudes towards a range of other life functions such as distraction, sleep, and momentum.

For that reason, if you’re trying to stop procrastinating, you need to stop trying to think of your endeavor as “the same work, done earlier.” You need to learn how to work in a different way.3

Generally speaking, when I refer to these different forms of work I treat them as a spectrum. On one end of the spectrum is “love-work,” which you can think of as work that you feel deeply engaged with. On the other end of the spectrum is “hate-work,” which is the type of work that is boring, obligatory, and which you wouldn’t usually do unless some sort of external incentive is involved.

In between the two is the murky middle. And, to be honest, the murky middle is kind of where you want to be about 90% of the time you’re working. There is no form of in-the-moment passion that lasts long enough to keep you eternally engaged in the boring minutiae of creation, and that’s true whether you’re creating quarterly reports or writing the next great American novel. What you really want, then, is a middling approach that allows you to move forward on any project, no matter how arbitrary it feels, in a way that is pleasant and steady, and on your own terms.4

That being said, Elkay brought up a distinction that isn’t really captured by the love-work/hate-work spectrum—it runs alongside it and occasionally intersects with it, but it’s different. It also happens to be one of the most important distinctions about work motivation made in the social sciences. Some work is driven by approach motives—you do the work because something intrinsic about it draws you. Other work is driven by avoidance motives—you do the work because you are afraid of some distal consequence that will happen if you fail. To quote Elkay:

Great article. The lines between “love-work” and “hate-work” are not clearly defined for me; my day job is obviously hate-work, nevertheless the underlying fear of letting colleagues down / not being good enough drives me through with surprising speed and motivation.

You might think, based on my description of love-work and hate-work, that the latter is less productive. People hate it after all, right? But actually that’s not the case. My experience is that most people are far more productive at hate-work, and barely know how to move forward on love-work at all. That’s why so many people wind up looking at articles like my Nonwriter’s Guide to Writing a Lot: They don’t know how to push themselves to do the work they would love.

Most forms of hate-work are enacted around other people. They have built-in rewards, punishments, and accountability: basically, the full cluster of social levers that motivate work that you wouldn’t normally do on your own. And those levers are incredibly powerful. So I think that, when doing hate-work, you can complete enormous amounts of work incredibly fast if you have the right incentive structure: much more than with love-work, especially in short bursts.

Cultivating high productivity on projects you love is much tougher because you often lack this network of external incentives and, whenever the positive emotions break down, you have to figure out ways to move forward under your own steam without them. This is a surprisingly difficult thing to do. After my graduation, a large part of the reason I left academia was that I recognized I didn’t have the skill to do work that I loved in a steady, disciplined, and productive way. I was so conditioned to respond to fear that I struggled to respond to choice, and I knew that if I wanted to become a writer it would be an uphill battle. It took me a few weeks to write my first proper article as a dedicated writer on Medium.

Thankfully you can improve at it, with effort.

Subscriber Section

Hello! I’ve been experimenting with different ways of adding value for subscribers and ultimately decided on an approach that is built around endnotes and footnotes. Basically, I will continue to write Moonshots in the format that I intended them, which is usually something like “Introduce an artifact, and then write three points about that artifact.”

During the writing process, however, I often stumble across a bunch of small rabbit trails that make for amusing footnotes, and I often have to select three points out of a list of four or five that I could write. So, I’ve decided that those extras—the footnotes and additional notes and ideas—will be extra material available to subscribers in each Moonshot.

For community notes like this, subscriber extras will also include things like administrative notes. Interested in looking further? In today’s extras, I fill in subscribers on my plans for the first two major arcs of this season.

In a perfect world, I would be able to pursue a full-time career as a public academic, funded by the public. You could help make that happen. If you’d like to become a paid subscriber, then, I’d be deeply grateful for your support.



]]>
<![CDATA[Why Your Brain Fights You]]>https://jimhorton.substack.com/p/why-you-feel-dividedhttps://jimhorton.substack.com/p/why-you-feel-dividedTue, 20 Jan 2026 20:20:11 GMT
Image by Author, via MidJourney

I. Introduction

Psychology has a history. Some academics I know don’t appreciate this until, late in their career, they realize with a start that “hey, things are different now…” A chilling clarity descends on them: most of what they felt, for decades, to be the present is suddenly history, and always was.

One way to vaccinate yourself against this inevitable epistemic shock is to just get comfortable with the idea that your field has a history and you are the inheritor of it. Psychology is not a morass of free-floating ideas—they connect in a temporal sequence and encode the concerns of the past within them. The web of scholarly research is more than just a bucket of Legos that you grab by the fistful to construct arguments that you want to make. It has stories to tell. It can whisper to you.

One particularly pleasant thing you can do as a researcher, then, is to get arms deep into the history of your interests. When you do that, it stops being isolated pieces of information and becomes stories. Stories are a comforting and intimate way to organize information. They lift you up. They remind you that you are not alone. And they point the way for you.

In the spirit of waypointing I want to introduce you to a paper by Richard Lazarus that lies at the nexus of a meaningful moment in psychological history. It was written in 1982, at a time when our understanding of the mind was changing, radically.

We refer to this period as the Cognitive Revolution, and textbooks record it as the point where psychology pivoted from trying to define everything in terms of behavior to acknowledging that it was both possible and necessary to study cognition.

This was supposedly a major victory in the field. I have my reservations: About 40% of Psychology got smarter after we moved past behaviorism. The other 60% got dumber. But in any case, less discussed is the massive upheaval our understanding of cognition went through at that time, as well as its effects.

Lazarus’ paper, Thoughts on the Relations Between Emotion and Cognition, is a great artifact of that time. If you’re keen on it, here’s a link to a copy of the original, housed at the website for Dr. Jane Gruber’s Positive Emotions and Psychology Lab. Or, alternately, here’s the APA record. And don’t worry—if it’s too dry for you, I’m going to unpack it right now.

II. Lazarus (1982): Thoughts on the Relations Between Emotion and Cognition

I first encountered Richard Lazarus and his work midway through graduate school when I decided to focus my research on negative emotion and how it cripples productivity. One of the first questions you have to address, to understand negative emotion, is something like “So how do I know to be scared, in the first place?”

The normal pass-the-bong level answer to that question is something like “Oh, that’s simple. There’s a tiger chasing you and you don’t want to die” which is, of course, true. But that’s a description of what’s going on at a really high level of cognitive abstraction. Psychologists love that level just as much as (if not more than) the others, but if you want to address those weird edge cases (“Johnny jumps right out of his skin every time he sees a tiger plushie”) you need to get deep down into the mechanics of things.1

From an evolutionary perspective “I’m terrified of my upcoming term paper” is one of those weird edge cases. So, mechanics it was, for me. And just one level of abstraction down, you bump into Richard Lazarus, who is sort of the gateway theorist for understanding emotion.

Base-level emotion research from the 80’s and 90’s usually starts by trying to identify three properties of emotions:

  • What does a person have to interpret in the environment around them (e.g. draw conclusions about what they see or experience) in order for the emotion to be triggered? These are called appraisals.

  • What changes physically when a person experiences the emotion? How does it alter their body? How does it alter their face?

  • What does the emotion push a person to do? How does the emotion “want” them to act? These are called action tendencies.

Lazarus is well known for his work on the first part, appraisals2, and here he was defending the importance of those appraisals against an eminent scientist, Robert Zajonc, who, two years earlier, had published an article titled Feeling and Thinking: Preferences Need No Inferences—a twenty-two page manifesto aimed with millimeter focus toward a single goal: dismantling the argument that emotions (specifically preferences—think liking and disliking) are the product of thinking. The empirical and societal contradictions that cropped up because of this belief (I address some of these below) frustrated Zajonc, and he wanted to correct the record, since a new, uppity branch of psychology had appeared that was once again arguing that cognition preceded emotion.

Lazarus’ response was much shorter—only six pages. And rather than a direct refutation of Zajonc, he made a simple argument that can be summed up in three points:

  1. Zajonc assumed (incorrectly) that “appraisals” referred to a form of thinking that was willful, rational, and self-aware.

  2. Zajonc assumed (incorrectly) that Lazarus and his colleagues were claiming that emotions were the end point of a long chain of cognitive meaning-making, starting from raw, meaning-less sensory input, and ending with a conclusion.

  3. If readers understood (correctly) that Zajonc was actually talking about one small part of cognition—late-stage, rational, conscious thinking—then he and cognitive psychologists actually agreed with each other. The issue at hand was that Zajonc was ignoring the rest of cognition—the brain appeared to be doing quite a bit of cognitive work before conscious thinking emerged. Their disagreement wasn’t about emotion vs. thinking—it was about what to define as “thinking.”

If anything, Lazarus undersold his point: He suggested that the brain assembled interpretations from partial information, but research since then suggests that much more is happening than that. The work of the brain is only “partial” if you think of rational, conscious building blocks as the end-point of mental activity. In any other context it looks like the brain is doing a lot more cognitive work before you become aware of it. Most of this is structural—you can think of it as the pyramid base that the pointy capstone of rational thinking sits on top of.

This paper, then, is a snapshot of the transition from a model of mind that partitioned reason from passion—where the work of form and classification was done entirely by the conscious, thinking part of a human—to a model of mind that blended them. In this new model, form, connection, and meaning happened long before we became aware of them. Long before we thought, in any sense of the word that people would have recognized prior to the 20th century.

It was a strange new atlas of human meaning—turtles all the way down3. And accepting it has changed us.

III. A War of Titans

This whole paper must sound incredibly dry. I hope that it doesn’t, but I get insecure about these papers, because they sound like the things that eggheads argue about when they’re tucked safely away in their carton in the back of the fridge.

Let me try to communicate to you why I like these so much.

The big reason is that papers like these are not isolated from the world. They influence it, but also, they reflect its depths. Academic arguments form like bubbles atop the froth of human concern. For each major preoccupation of an era there is a small crowd of bright and inquisitive people who feel something about that preoccupation and say “I want to understand that better than anyone else alive.” And then?

They actually try! And then they argue about it! And in their arguing they recapitulate their roots. Or grapple with them. Or betray them altogether. But in any case, holy hell, what a drama!

Those arguments, in turn, reveal strange things to the rest of us. Not all of them, of course. Some of them really are just carton-level egghead debate. But, more often than you’d think, you have episodes like the argument between Zajonc and Lazarus.

That argument came at the end of a massive, century-long fight between titans—two enormous bodies of philosophy that wrestled for decades, jaws around each other’s throats, over a single question:

What does it mean that I am?

Each titan was an answer to that question. On one side, the clockwork automaton of the old Cartesians: “I think, therefore I am.” In that philosophy the human mind formed a nice, tight lattice, each mortise and tenon joined flush around a sturdy core: “I am the part of me that thinks, reasons, judges, and wills. The rest of me provides sense, and urge, but is subject to me.” This view was embodied in movements like rational actor economics, which posited that people act in their rational self-interest, and that the laws governing that interest can be understood by eggheads.

The other titan was… weird. Lovecraftian, even—made up of fuzzy edges and unsettling angles. Stare too long at it and its contradictions could wreck you and refashion you in their image, so that you could no longer talk to normal people anymore—they’d sense the madness on you and leave you standing at the punch bowl, drinking alone. If the first titan said things like “I am reason. I am will. I am choice!” the second replied “Yes, yes, so are we. But we are also a trillion flickers of lightning, and only a few of those bother with reason.”

When these two titans fought, the first one broke. It was brittle. It splintered at the joints, collapsed inward into a heap of its own contradictions. We have been raised in its debris, wandering among clockwork gears that still fit together and function a little bit, but not with the same titanic force that used to animate them.

What does this look like? Well, break away from the titan metaphor, and think of these instead as titanic meme-plexes made out of ideas, and it looks like regular encounters with old ideas that have lost the vital core that once made them formidable. Some examples:

  • For much of the 1970’s and 1980’s a large number of people were preoccupied with the idea that not only were humans selfish, but that it was impossible for them to be anything else. The chain of logic was clear: If everything passed through your rational mind, and your rational mind acted to maximize its own interest, then didn’t that mean that even apparently “selfless” acts must have been chosen for personal benefit?

  • For much of the early 1900s it was common to blame people for their own pain. Again, the chain of logic was clear: If something horrible happened but that experience was processed through your rational mind, shouldn’t you be able to control your emotions by controlling your interpretation of the event? And if you didn’t… what did that say about your character? This chain of thinking is part of the reason that the armed forces took so long to accept PTSD. In one of the most famous incidents, General George S. Patton slapped the helmet right off of a soldier who had been hospitalized for severe PTSD, accusing him of cowardice.4

These ideas lack teeth today. It’s not that the old world lacked compassion, or that the modern world disparages willpower. It’s that, until about 1970, we lacked a detailed, compelling neurological account of why normal human beings are internally divided, and why you can’t simply clamp down on and control your mind on a whim. Absent that, propositions like “soldiers with PTSD suffer from a medical condition, not a lack of character” or “compassion is a beneficial evolutionary adaptation but is not calculated to maximize personal benefit” were defensible, not victorious. The difference between those two in shaping institutions, and intuitions, is immense.

Without that body of knowledge, you could argue that nerves overwhelmed will, or that emotion was more responsible for altruism than calculation. But your opponent could also respond “lies; you acted that way because you considered it in your best interest” and the implication just hung there because it was too plausible to refute.

Now? We’ve accumulated such a massive body of neurological evidence for the primacy of non-conscious processing that willpower looks more like a brake than a steering wheel.5

This argument between Lazarus and Zajonc happened right at the tipping point of that transition. Here, in this article by Lazarus, you get to watch two guys bickering at each other as an entire tradition of human philosophy gets flipped on its head.

That’s why I love this stuff.


Want to read more? You can find my article on empathy and altruism at this link


IV. Understanding Work

I’ve gone long, here, but I’m going to try to bring this home for you in a way that is pertinent to the research on work that I’m doing for Moonshots this year. This fight between titans also influences how we think about work.

So, let me start with something that you know, deep down, but that you may not have thought about formally. You must know, on some level, that doing work you love is psychologically different from doing work you hate. They don’t just feel different, emotionally. The two forms of work have different psychological textures. They involve different actions. They have a different cadence, different aims, different outcomes, and draw upon different wells of energy:

  • Love-work is spontaneous and exploratory. It often unfolds without a plan and doesn’t require a top-down imposition of clarity or form in order to make it more manageable. It delights in tangles and puzzles. It is not subject to the same rules of fatigue that other forms of work are, or the same rules of habit. It selects and discards goals rapidly, on a whim, and yet strangely manages to progress in spite of this. It is often reflected in the quality of a final product, and leads to outcomes that are reflective of the creator’s personality and idiosyncrasies.

  • Hate-work is effortful and unpleasant. Unlike love-work, it often requires a plan, since some measure of clarity and partitioning is required in order to break a hated task into manageable doses. Hate-work follows a classic fatigue curve, since it requires a lot of top-down monitoring to push through the unpleasantness. It proceeds by measured goals because it requires clear external signals of progress in order to feel worthwhile. It is also reflected in the quality of the final product, which is often curiously devoid of personality and emotion. Done poorly, it’s possible to spend days on it without feeling like any progress is being made at all.

So, one question that should pop immediately to mind when you think about work this way is, well, what if I take hate-work and do it in the way I would normally do love-work? Like, what if I did tasks I hate in an open, exploratory, spontaneous way? Couldn’t that make them more tractable? Even a little bit more enjoyable? And the answer is absolutely, it can.6

But also, you can put in a good faith effort to do this and find that you can never really sink completely into a piece of hate-work the same way that you do with love-work. Why?

Well, the answer to that goes back to Lazarus and Zajonc and the strange and wild things we discovered about the human brain during the 20th century. As it turns out, a lot of the decisions that your brain makes about a task are decided by parts of you that are outside of your immediate conscious control.

These parts of you aren’t reptile-brain parts. They’re all you, all modern: there was no paperwork on the ancient savannah, so your interpretations of that office report are probably governed by phylogenetically recent systems of cognition that can weigh modern life. But they still happen early in your cognitive loop, deep in your brain, on a level that you don’t usually work through consciously to draw conclusions.

And therefore, if you are facing major work motivation problems? You have to contend with the fact that there’s a second character in your head making decisions before they get to you.

That character isn’t quite the same genre of being that you yourself are. Talking to it is not the same as geeking out with a friend over Discord. But it’s a being-made-from-brains, and you can find common ground with it—and, in fact, you have to in order to move forward. After all, it’s made from your brain: it’s the trillion flickers of lightning that the “I am reason! I am will! I am choice!” part of you ignores. It’s the second titan.

This is, at the root of it, the part that I try to address when I talk about things like toxic preconditions. One of the things that I found genuinely amazing in the response to my article on toxic preconditions is the way that so many people commented to let me know that it was causing them to re-think parts of their attitudes that had kept them paralyzed for a long time. These things, once brought to the surface, were perfectly legible—the kind of thing that could be thought about, and addressed, and changed. But also, these were almost certainly attitudes that were formed early, that were formed deep, separate from the light of conscious awareness.

And that, for me, is the most interesting thing. Thus far I’ve discussed these two titans as if they were fundamentally different creatures. But, of course, they’re not. They overlap—and that means that it is possible for at least some of those pre-attentional parts to be surfaced so that you can dialogue with them, and re-think them.

In a nutshell, I’m inclined to think that is the difference between Zajonc and Lazarus. You might think that, since they were on opposite sides of this debate, one of them represented the old clockwork titan and the other represented its challenger. But that’s not the case. Both men were squarely on the side of the new understanding of mind. They were just quibbling over particulars.

But what wild particulars! Making allowance for the fact that I don’t know Lazarus and Zajonc—that I am taking two scholars more intelligent than myself and simplifying them to fit an argument—I imagine them saying something like this:

  • On the one side, Zajonc argues: There is our rational, calculating brain, and underneath it is a different system. An “affect” system. It is more primitive and basic; it responds with feeling according to patterns it recognizes in the environment. And these patterns are almost mechanical—not penetrable to our cognition. We’re stuck accepting their output and working with it at a later phase of cognition.

  • And on the other side, I imagine that Lazarus says: No, no, that’s not the case at all. The truth is weirder! Yes, whatever is down there has its own rules. Yes, we must accept them sometimes. But also, it’s a brain! It’s cognition, all the way down. We don’t know what we’ll find there, but we know that at least some of it is something that we can talk to, and isn’t that crazy?

And it is. It’s crazy, wild, and oh-so-very human. Because it’s a brain. Our brain.


Thoughts? Comments? Leave them here! I’m trying to strike up an ongoing dialogue with the people who follow my writing, so please share your thoughts if you’re inclined.


A final housekeeping note. I’ve been trying to figure out how to say thank you to my paid subscribers while staying true to my commitment to keep these stories free. And then I realized that paywalls hide footnotes. Soo… I’ve paywalled my personal commentary on this article. In future articles I will paywall footnotes and the occasional bonus section as a kind of thank-you to those who are helping support this work—all ten of you. ;-)

If you’re interested in supporting this work too, I’m changing and updating the structure of my work to include additional gifts for paid supporters. More on that over the coming weeks. For now, peace and blessings to you, and consider subscribing if you’re keen on following.

James


Image by Author, via MidJourney


FOOTNOTES

1

When I talk about levels of abstraction, what I mean is this—everything you see starts as a blur of light hitting cells in your retina. From this blur your brain learns that some parts of the blur are one continuous thing. If I look at a green light, then look to the left of it, the light is hitting different cells in my retina, but my brain interprets it as the same light.

From that level our brain moves up to inferring features, and then patterns, then concepts, etc…—at some point it then starts to interpret those concepts in terms of what they mean for things like not dying from tiger claws.

Why? When? How do you move from “experiencing an orange blur in my retina” to “seeing a mortal threat?” Those are the things you tackle as you move from a “motorist mode” of understanding psychology, where you interpret people in terms of narrative, causality, and benefit, toward a “mechanic mode” of understanding, where you look under the hood.

2

Just some background. It’s not strictly necessary, but you might find it interesting. Lazarus’ basic theory is that there are “phases” of appraisal. The first happens instantly and can be boiled down to the following:

  • Is this relevant to me?

  • Is this good or bad?

  • K how much tho?

Some of the more primitive emotions branch off at this phase—basic preferences like liking, disliking, and really ancient emotions like fear, anger, and want. These emotions can be modified by basic cognition but they’re old and simple from an evolutionary perspective.

After this come phase-two appraisals, which are much more nuanced. They also result in much more nuanced and complex emotions: pride, shame, guilt and others. These phases show up in surprising ways in everyday life. Phase-two emotions show up later in childhood development; babies don’t appear to learn pride, shame, or guilt until later, after they develop deeper and more self-reflective cognition. Likewise, phase-two emotions are more recent, emerging much later along the evolutionary chain.

3

This is a fun metaphor, but is probably also overextending. Take it in the sense that it was meant; cognition, long thought to be the “top” function of the brain, appeared to be happening a layer below where we normally expected to see it. And a layer below that as well. And a layer below that. And so on. It changes at each level: The rules are different. The old metaphors we used to describe things are often wholly inadequate. But it’s there. Wow. Most exciting is this—we have so much more to learn.

4

This story is wild. Patton was visiting hospitalized soldiers behind the front lines in Italy, commending the wounded. Eventually he came upon a soldier with severe shell-shock—such a bad case that he was hallucinating the sound of phantom mortar attacks. Patton didn’t believe in shell-shock, so when the soldier explained his situation, sheepishly suggesting that it might be his nerves, Patton got pissed and backhanded him so hard that it knocked off his helmet lining. He stormed off and a few minutes later came back around and saw the soldier sobbing from the first strike, which pissed him off more, so he strolled up and slapped him again. A nurse had tried to intervene after the first strike, lunging at Patton, but was restrained and led away, weeping.

Was this overkill? Clearly. But… if you start from the understanding that Patton really didn’t believe that shell-shock could happen? What must that soldier have looked like? Well, he’d have looked like a coward who told a dumbass lie about ghost sounds so that he could abandon his friends at the front, knowing that they would have to carry his share of the work, and be in more danger for his absence. I’d have been pissed too, if that’s what I saw.

But that’s not what we see. We know better. But back in 1943 they were learning better and that’s not the same thing. I share this as an example of how dramatic an effect these differing views of mind and will can have on how we interpret what’s right in front of us. Eisenhower, for his part, threatened Patton with removal if he ever did anything similar again, but kept him at his post since, well, he was Patton. Under orders, Patton apologized to everyone involved.

5

I don’t want to overplay the power of pre-conscious thought here, or to disparage the importance of choice and will. Clearly personal agency matters, a lot! But one way you can think about it is that over time we have shifted from a conception of personal agency as a force imposed on an irrational body, to personal agency as skill at piloting a complex system. That is, we no longer assume that agency should work unchecked. There are limits that require skillful navigation.

6

Yes of course I’ll be writing about love-work and hate-work in the future. Not sure when. But I’ll be touching on it at some point.

]]>
<![CDATA[Project #1: Creative Work (and Blocked Dreams)]]>https://jimhorton.substack.com/p/project-1-on-making-thingshttps://jimhorton.substack.com/p/project-1-on-making-thingsSun, 04 Jan 2026 13:07:17 GMTAuthor’s Note: Hello, everyone. With this first article of the new year I’m announcing a new project. Several months ago I posted a note regarding my intent to become a public academic—this will be my first formal project along those lines, and should last for the next year. As it stands, the plan is to study… work! Read further for more information.

J

Image by Author, via Midjourney. If you’re wondering if it feels kind of sacrilegious to make AI Art of a cave painting — yes, yes it does.

Introduction

We don’t have access to the personal correspondence of Marcelino Sanz de Sautuola. As such, most of what we know about him has to be inferred from public documents and the testimony of others. For those who are willing to spend the time needed to find and read those sources, however, three facts stand out, and quickly:

  • First, Marcelino Sanz de Sautuola was a truthful man, and not just in the watered down, American, “I cannot tell a lie” sense of the word. He was inquisitive, thorough, and brave, and those traits in turn made him well suited to finding the truth, to documenting the truth, and to defending the truth.

  • Second, Mr. Sautuola loved his daughter Maria very much, and took her on adventures.

  • Third, because of #1 and #2, the world will forever remember Maria as the young girl who shattered our understanding of humankind.

There are only a few events in history, I think, which represent a true break point in the story we tell ourselves about humanity. I’m attracted to such events because of their liminality—they represent a sort of “between space” in our understanding of mankind, and when you look at the ideas that do battle in that space you can learn a lot about what makes us who we are.

The story of the Sautuolas contains one of those insights—and it is the core of the work that I will be doing for the next year. I wanted to share it with you. So let’s talk about Santillana del Mar.

I. The Man in the Cave

The hills surrounding Santillana del Mar are ancient.

Technically, of course, every landscape is ancient. But some landscapes hide their age. Alaska, where I live, is youthful; the mountains here are old but the surfaces that humans make contact with are young. Anchorage is quilled with trees a few centuries old, at most. Its bays are full of mud-flats made of silt that shifts and renews with the tides. Its lakes are fed by January’s snow. The glaciers, at least, are ancient, but also, they’re melting.

Santillana del Mar, in contrast, lies on the northern coast of Spain, between the Sierra del Escudo de Cabuérniga1 and the Cantabrian Sea. Millions of years ago the sea hosted endless generations of sea life that bloomed and then died, depositing their shells on the seabed below. Aeons of geological activity compressed those shells into a layer of limestone, which in turn was forced inland by tectonic activity, and carved into a porous mess of karst by millennia of running water.

The limestone caverns that pock the landscape, then, are truly ancient. To enter them is to enter spaces so old that the numbers we assign to them cannot carry the meaning of the time that has elapsed. Step into one and you are a lonely little grain in the hourglass of God, and if you have any sense at all, you feel it.

Periodically some external force such as an earthquake, or erosion, causes a part of a cave to collapse. When that happens the space inside is sealed off like a time capsule, and can remain hidden indefinitely. There are spaces in the Cantabrian foothills that have kept themselves from us for the entirety of written history. We know this for a fact because every few years an inquisitive soul opens one of them up, and we find what was hidden there.

In 1879, Marcelino Sanz de Sautuola and his daughter Maria entered the cave on their property in search of hidden things. They had barely an inkling of what they would find. Marcelino Sautuola was dreaming of trinkets; a year prior he had attended the 1878 World Fair in Paris, France, where prehistoric archaeologists had started to make their academic discipline known. Of particular interest to Sautuola were the exhibits—small artifacts of engraved bone, antler, and stone. Scholars argued that they might be decorative, which was an odd assertion, contradicting the conventional wisdom that prehistoric man was little more than a beast, capable of creating tools, but without the capacity for art or whimsy.

A decade earlier, in 1868, Sautuola had been approached by an acquaintance of his—a local telero named Modesto Cubillas, who sometimes requested to use his land for hunting. While Cubillas had been exploring the property, his dog had disappeared briefly into a fissure in the landscape. Cubillas, who knew of Sautuola’s interest in prehistory and the natural sciences, thought the fissure would be of interest to him.

And it was. Sautuola had entered the cave before and found evidence of prehistoric occupation—tools of flaked stone. Scattered animal bones. Now, however, mind fresh with new theories, he purposed to visit it again. His daughter Maria came with him.

The next part of the account is perhaps the most charming. The ceiling of the main gallery of the Altamira cave was low. Sautuola was a fully grown adult, and it was likely that he had to stoop or crouch in order to make his way in the dark. This was little problem to him, because the only parts of the cave that were of interest to him were the small artifacts of prehistoric occupation strewn about the floor.

Maria, meanwhile, was small, and inquisitive, and had plenty of room to look wherever the lamplight fell—including the ceiling. And when she did look up? She saw this.

Actual art from the Altamira Cave, via Wikimedia Commons

Her next, breathless words, “Mira, papa! Bueyes pintados!”2 mark a tide-change in our understanding of prehistoric man. Up until that point, it was thought that early humans were savages whose lack of civilization made art impossible. In this view, art was a gift granted by human culture—and early man, being in a “state of nature,” should barely have been able to think about art, let alone make it.

The bueyes pintados suggested that something was wrong with this account—perhaps just the timeline? Just how much civilization did mankind need in order to produce art? Because the ceiling at Altamira suggested that all humans really needed was a steady stream of prehistoric beef and some free time.

Her father Marcelino was humble enough to recognize what it was that he was looking at, and reached out to scholars in Madrid. The next year, in 1880, the world received the news that prehistoric man was an artist.

II. The Signature of Man

That last line—prehistoric man was an artist—is the departure point for my newest project. I’m not quite sure that people appreciate what a radical idea it is. But to give you an idea, it will help to know that, shortly after the publication of his 1880 paper, Marcelino Sanz de Sautuola was widely accused of fraud (or at least self-deception) by the leading academics of the time.

Why? Well, there were some legitimate reasons for skepticism, not least of which was the incredible state of preservation of the artwork. But the problem was also philosophical: Sautuola’s opponents simply did not have the interpretive framework needed to accept that cave-men might be capable of art.

The emerging sciences of the time prided themselves on doing away with sentimental notions about human nature. If humankind emerged from the rough stuff of evolutionary processes, then prehistoric man, being a few rungs down on the ladder of evolution, should be more animal than man—or that was the thought. The public sphere was already bustling with tales of prehistoric savagery.

The greatest source of skepticism, then, was that some of those cave drawings were really, really good. They showed an artistic attention to form and line that, by the modern account, prehistoric man simply shouldn’t have been capable of.

This debate was, I think, captured best by G.K. Chesterton. He was one of the most insightful (and scathing) of the late Victorian literati. In the opening chapter of his book The Everlasting Man he highlighted an absurdity in the modern position: every man of science at the time believed that early man was a savage, in spite of a complete lack of evidence supporting that belief. And the same men of science couldn’t bring themselves to believe that he was an artist—in spite of the fact that there was overwhelming evidence that early humans made things, and were often surprisingly good at it.3

One way to view the troubles of Sautuola, then, is to say that he got caught in the crossfire of an ongoing war. At the core of it is a single question: how deep does the art run?

On the one hand there is the skeptical impulse, the nature-red-in-tooth-and-claw crowd, who believe that the fundamental truth of man is that he’s a violent ape, and that if you trace the thread back far enough, the art will eventually run out and you will be left with humans that are nothing but (as Stephen King so eloquently put it) “the most murderous motherfuckers in the jungle”.

And on the other hand you have those who believe, to quote Chesterton, that “art is the signature of man.” According to this crowd, creation is the defining feature of humankind. Violence may be part of us due to our animal nature4 but there is no point in our past where you will find humans who were all violence, without their corresponding urge to create things, and perhaps even beautiful things.

Today we have mostly settled on the latter. Art and tools are among the primary markers of humanity in the fossil record. While other branches of hominid, such as the Neanderthals, also used tools, the presence of tools remains one of the surest pieces of evidence that we are dealing with a creature that is “like us”.


Perhaps the most interesting thing about MidJourney’s attempts at cave art is its failure to capture how the art was arranged. (Image by Author, via MidJourney)

III. What I’ll Be Doing For the Next Year

I’m pretty sure that, if you’ve read this far, there are a couple things that are true about you. Normally I don’t make guesses about my readers like this, but if you’ve really followed the story above, one of the implications is that the urge to leave your art on the wall is a fundamental human impulse—a desire that lies at the core of our species.

So, with that in mind, I feel safe assuming that:

  1. You probably have a project that you’ve wanted to do for a long time—one that, to you, is amazingly cool.

  2. You haven’t done it yet.

  3. That kinda frustrates you.

If those three things are true, I’d like to help get you there. If the desire to create is really, as Chesterton observed, “the signature of man,” then I think that it deserves to be taken seriously. I think we should understand more about what it is, how it works, what blocks it, and how to overcome those blocks. And I plan to take a shot at that.

That sounds ridiculous now that I’ve written it down, because of course we take work seriously. We work ourselves to the bone. We obsess over agency. We worship discipline and try to medicate away its opposite—and that’s just one side of the societal dialogue. The other side is full of all the people who write endlessly about why productivity worship is dysfunctional and how it is chewing up an entire generation of people and spitting them out. There are thousands of research articles on procrastination alone.

What I haven’t seen is somebody who tries to do the following:

  • Take a holistic view of the topic, drawing from as many different disciplines and areas as possible.

  • Synthesize the insights from the disparate areas.

  • Break it down into insights that might be useful for normal people who are just trying to do that one cool thing in the middle of all of the other urgent things.

Most of the work I’ve seen on this topic5 is done by people whose aim isn’t to understand work in its own right, but to leverage it for some other purpose. People think about work when they have a pithy idea for a self-help book. They think about work when they are a consultant who is hired to boost morale in the middle of the latest round of layoffs. They think enough about work to accomplish their aims, and they don’t think much beyond that.

So... I’m going to take a shot at that. I’m going to throw everything I have as a scholar at it, and see what happens.

My plan, then, is to start researching, and to report what I find to you across the different newsletters that I run. I’ll update you more on the specific research plan in the near future, but at the outset I wanted to give you a sense of what to expect. So, a few points here just to get the ball rolling. My goal is to turn this project into something that is:

A. Service-Focused

Most of my followers here on Substack have come to me via one article: The Nonwriter’s Guide to Writing a Lot.

Apparently in the middle of all of my fighting to get myself to write, I stumbled across a couple insights that other people consider worthwhile. To be honest, I don’t know that I have many more of them—but I do know that one of the most useful things I can do with my writing is to search for them and share them. So I consider that to be the primary goal of this project.

B. Writer-Centric

I’m a writer. It is my primary job and my major creative ambition, and that will probably shape this project in predictable ways. You can therefore expect writing to show up frequently in my research.

This doesn’t mean I’m going to talk exclusively about writing (although I will start publishing regularly to my Psychology For Writers blog). What I hope is that as I conduct my research I will be able to find and share deep principles that can be applied to most of the personal projects that my readers have in mind—everything from painting to planning a vacation. But writing is where I start from, so if you’re a frustrated writer, you’ve got company here.

C. Arc-Based

This is going to be a large project. As such, I’m going to be using some organizational approaches that I have borrowed from the inimitable author of Everything Is Amazing (you should visit him over there).

My plan is to organize my research into arcs. Each arc will focus on one particular theme or sub-question. I’m not sure how long each arc is going to be, exactly; I’ll aim for anywhere from eight to ten articles, but my intuition tells me that different topics will demand more or less.

If you joined because you appreciate my randomness, that’s not going to go away—I need room for my mind to wander or else I go crazy, and the purpose of this Substack was (and still is) to provide room for that. So expect that in addition to building each arc, I will also be posting with some regularity on other topics.

IV. Conclusion

My intent is for this first major project to be the “capstone” of my research on work. I’ve studied work in one form or another for over a decade and a half now, and there is so much I wanted to say that I never got the chance to. This is my chance to say it—and I’m sure that as I go I will learn much more and find other things to talk about.

I’m feeling pensive. I have been writing for about six years now. A lot of it was directionless. It’s… strange, to me, that I’m narrowing my focus now. But if I look over my history of writing a lot of the signs were there—this is one of those topics I kept circling back to, over and over again.

To get to this point I had to overcome some deep concerns about the topic itself. A lot of the writing on work—pretty much the entire self-help industry—is a bit… skeezy. I’ll offer my thoughts on why as I go (I’m thinking of devoting an entire arc to it, actually). But for a long time I felt like the topic was tainted by the self-help industry. And I still worry that it is.

So, I’m going to have to find a way to differentiate myself. I’m not a coach. I’m not your guru. I’m not the type of guy who can promise you a five-figure income from Substack if you follow my ten-step plan.

But, even without being any of those things, maybe I can still be useful. Let’s find out, shall we?

Interested in seeing what happens with this project? Follow along! You can subscribe below.

Image by Author, via Midjourney
1

A sub-range of the Cantabrian mountains. Santillana del Mar lies in the foothills of this range, approximately five kilometers from the ocean.

2

This version translates to “Look, papa! Painted bulls!” While we do not have absolute evidence of this specific phrasing, it is the one that appeared most commonly in my research and is well accepted.

3

Chesterton was quick to point out, here, that the lack of evidence didn’t mean that prehistoric man was peaceful. It just meant that, whatever prehistoric humans were, the only irrefutable pieces of evidence they passed us across the archaeological divide were their tools and their art. Evidence of their violence, at the time, was sparse.

4

That early humans were violent is, as best as I can tell, completely non-controversial. The question at hand is whether that negates their ability for creation and beauty. Today we would argue, correctly, that there is no opposition between the two. But the Victorians seemed to think that violence was the nature of brutes, which precluded art.

5

This may reflect my ignorance. If you can think of someone who has a lot of great stuff to say on the topic, point me to them? I’ll review them.

]]>
<![CDATA[On Writing a 700 Page Novel, Stark Naked, In the Middle of a Bloody Insurrection]]>https://jimhorton.substack.com/p/on-writing-a-700-page-novel-starkhttps://jimhorton.substack.com/p/on-writing-a-700-page-novel-starkTue, 16 Dec 2025 00:43:36 GMT
Watercolor portrait of Victor Hugo. These are the eyes of a man who would do anything to complete his novel. Anything. No matter how horrifying. (Image by Author, via MidJourney)

Introduction: A Clean Brain

I had an idyllic experience the other day. It went like this: I woke in the early morning and had a quick, uneventful breakfast while my car warmed up against the winter chill. Afterwards, I drove through an early morning ice-fog down to my favorite coffee shop—a comfy little bistro on the eastern rim of town, with wide bay windows that take in the Chugach mountain range.

I ordered a breve and sat down with my favorite pen and a good notebook, and worked on a draft of an article as the sun rose. I finished one page. Then a second.

The coffee was sweet in that understated way you only discover after you’ve given up the candy drinks. I had left my computer at home, so the writing came easy. My mind, usually a scattered mess, was undivided. My shoulders, usually a knot of tension, were loose and relaxed—the ease radiated downward into my handwriting, the letters crisp and clear.

When the sunrise hits an ice-fog early in the morning there’s a moment, before the fog lifts, when everything turns to gold. For a while the inside of the coffee shop was set aglow, the sunlight whelming the lamps hanging from the rafters. I finished a third page. Then I finished the fourth page and, to my surprise, my draft was done. I had said what I wanted to say, and all that remained was editing.

So I pulled out my phone and snapped four pictures: *snap, snap, snap, snap*—one of each page—then I skimmed over them briefly in my transcriber to make sure they were in order, and pushed a button.

A second later my cell-phone vibrated as the transcript of my morning writing landed in my inbox. It has been mellowing in my drafts folder since, waiting for a final edit.

Broadly speaking, I call this experience—or at least the state of mind that accompanies it—having a “clean brain.” A mind, undivided, working on the task at hand. It goes by other names—flow, absorption, the groove—but for me the defining feature of it has always been the sense that my mind is orderly and smooth. So, “clean brain” it is.

I’m going to talk more about having a clean brain in a bit—especially that part about the transcription, because for me it was the last piece of the puzzle that helped me get from “internet brain” to “clean brain,” and I’m proud to be a part of the team that helped make it happen.

But before I go there? I’m going to tell you about the batshit crazy French author who wrote a 700 page script for a Disney film stark naked while his countrymen killed each other with muskets outside of his window.

I think it adds context.


He Wrote a Novel Stark Naked

There is a version of this story passed around the back-channels of the internet as a gesture of solidarity between writers. “You may be a fuckup who can’t focus enough to write a single chapter,” it goes, “But at least you’re not Victor Hugo.”

As legend has it, Victor Hugo spent four months naked as a cupid during the winter of 1830-1831 because it was the only way he could trap himself inside his house long enough to work on his novel.1

Here’s what we know. Late in 1828, Victor Hugo was a rising star on the Parisian literary scene, receiving a handsome pension from the crown for his talents. He pitched his publisher, one Monsieur Charles Gosselin, on a grand romantic epic spanning the whole society of medieval Paris. It would have soldiers, poets, and priests, and mysterious bell-ringers and gypsies and goats. But the real main character of the story would be Paris herself, and at the center of it all? Her beating heart, the grand Cathedral of Notre Dame de Paris.

There’s not much detail about Gosselin in the story, but his favorite sins appear to have been avarice and greed, and Hugo’s offer appealed to both. Gosselin was already working on publishing Hugo’s Last Day of a Condemned Man and Hugo had a proven track record of writing things that Paris wanted to read. He had the potential to generate buzz. And money. Gosselin agreed, eagerly.

Unfortunately, Hugo favored the artist’s sins—lust, gluttony, and sloth2—which were less conducive to getting things done. Over a year later Hugo had nothing of his novel written except for some notes, and many of those, he had lost.

Gosselin had been patient at first but the relationship between the men slowly grew more acrimonious until finally he threatened to sue. Hugo was trapped—he had accepted a generous advance, and Gosselin insisted that if Hugo didn’t produce a manuscript by February 1st, 1831, he would sue for repayment.

This created a conundrum for Hugo. He needed to produce, but his social life was so tumultuous that it intruded constantly on his focus. What he needed was a chance to escape, but if he were any good at escaping to focus on his work, he’d have already finished the manuscript.

So he devised the dumbest plan I’ve ever heard. As the legend goes, he paid an (apparently very loyal) servant to steal all of his clothes each morning and hide them away. Thus denuded, Victor Hugo had a choice: He could step outside, completely starkers, and let all of Paris have a glimpse of his bourgeoisie. Or, he could sit his naked ass in his chair and write.

And... okay. Maybe it wasn’t so dumb? It worked!

Hugo’s wife Adèle, in her memoir, reported that he moped and whined for a couple weeks,3 but after that he became furiously interested in his manuscript. He wrote like a lunatic for months, rapidly burning through the bottle of black ink and the stack of foolscap paper he had purchased. Periodically Adèle found him writing with the window open and the winter chill rolling in, oblivious to the cold.

And on January 14th, 1831, he was done. He had written through his entire bottle of ink. His manuscript was immense, with some sources putting it at 190,000 words long. And when it was published later that same year, The Hunchback of Notre Dame quickly became one of the defining novels of the century.

Okay, But Did He Really?

There are a few questions I have whenever I read this story. The first is, was he really naked? Can we get a snopes-check here?

If you’re curious about this I’ve got good news—I’ve looked into it for you already because I also found it unbelievable, and it turns out it’s at least half-true.

The official account, sourced from the memoir of Hugo’s wife, Adèle, is that Hugo spent the winter of 1830-1831 wearing only a large, ugly, gray shawl, too heinous to be seen in public. So, he quite literally spent the winter tottering around in his room sans culottes.

There’s no word from Adèle’s memoir about whether he was letting his working men roam free (that’s probably not the kind of thing you disclose in a memoir, after all), but Hugo was an odd duck who was known for his excesses. I can’t prove it, but my money says he freeballed the whole epic, and Adèle just left that part out of her memoir so that she wouldn’t have to explain herself to her friends whenever they met for a game of écarté.

Now, whether this is the truth, history does not record—and perhaps wisely. But rumors start somewhere, and I think the reason the naked-Hugo story became an urban legend is that the people who knew of his personality all heard it and thought “Yeah. He’d totally do that.”

Lately though I’ve been thinking on this story and I’ve been preoccupied with a different question. Is it even possible for something like this to happen today?

I mean, sure, sure: there are eight billion people on this planet and it is statistically inevitable that somewhere out there at this moment at least one of them is a) writing and b) starkers. But I’m thinking more about the gestalt. The scenario as a whole—the dramatic decision. The seclusion. The four-month burst of heroic focus.

Is this kind of story one that can even be told, today? Because the whole of the modern world seems to be stacked against it.

Hyperbolic Discounting

One of the big mysteries of Hugo’s nude-writing technique is that he could have quit any time he wanted. There was nothing physically constraining him—he could easily have decided to keep his clothes one day, and then gone wherever he pleased. But his wife’s journal reports that he did so only once, to attend the trial of the ministers of the former French king, Charles X.

So, this raises a confusing point. The clothes-trick was easy enough to bypass that Hugo could have done it at any time, but he didn’t, which meant he had enough willpower to refrain from doing so. But if he had enough willpower to refrain from reclaiming his clothes, shouldn’t that mean he had enough willpower to just… write his manuscript? Without all the crazy plotting? What gives?

We have a better understanding of the reasons for this now thanks to the work of behavioral psychologists like George Ainslie in the 1970s. Psychologists call the phenomenon hyperbolic discounting, and it is a surprisingly simple, elegant concept.

Picture giving someone a choice between two rewards—a smaller, sooner (SS) reward of $100 or a larger, later (LL) reward of $110. Generally people will hold out for the LL reward. If the choice is between $100, thirty days from now, or $110, thirty-one days from now, it’s an easy choice, right?

But if you shorten the distance to, say, twenty days vs. twenty-one days from now, and then to ten vs. eleven, and then to five vs. six, and so on, there will eventually come a point where many people’s preference will flip, and they will prefer the SS reward. Would you prefer $100 today vs. $110 tomorrow? Many would. Why? Because immediacy is a “hot” psychological phenomenon. It follows a different set of rules than more distant, rational choices.
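(If you like seeing the machinery, here is a minimal sketch of the idea in Python. It assumes the standard one-parameter hyperbolic value function, V = A / (1 + kD), from the behavioral literature; the discount rate of k = 0.5 per day is purely illustrative, chosen to make the flip visible, not an empirical estimate.)

```python
# A minimal sketch of hyperbolic discounting. Assumes the standard
# one-parameter value function V = A / (1 + k*D); the discount rate
# k = 0.5 per day is illustrative, not an empirical estimate.

def discounted_value(amount: float, delay_days: float, k: float = 0.5) -> float:
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# Smaller-sooner (SS): $100 at some delay.
# Larger-later (LL):   $110 one day after the SS reward.
for delay in (30, 10, 5, 1, 0):
    ss = discounted_value(100, delay)
    ll = discounted_value(110, delay + 1)
    winner = "LL ($110)" if ll > ss else "SS ($100)"
    print(f"{delay:2d} days out: SS = {ss:6.2f}, LL = {ll:6.2f} -> prefer {winner}")

# At 30 and 10 days out, LL wins; at 5, 1, and 0 days the preference
# flips to SS. An exponential discounter would never flip: whichever
# reward wins at thirty days also wins at zero.
```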

Viewed in this light, Hugo’s productivity trick has a psychological explanation. By hiding his clothes he reduced the immediacy of his social life, putting enough distance between him and a trip outside that he could weigh his desire to leave calmly against his desire to finish his book.

What is most amazing is just how well this worked. The part that is typically left out of the Victor Hugo story is that this took place in Paris between September 1830 and January 1831—immediately after a major revolution in France. In the summer of 1830 the July Revolution unseated Charles X, the King of France, just two months before Hugo started writing. There was open fighting between soldiers and civilians in the street. Citizens were cut down by rifle fire.

Much of this happened in downtown Paris, a modest walk from Victor Hugo’s home at 9 Rue Jean Goujon, in the Champs-Élysées district. It’s likely that Hugo heard the sound of gunshots as he sat in his writing room. The months after the July Revolution weren’t as bad, but there was a great deal of uncertainty and periodic outbreaks of violence, including a major one in November of 1830, while he was writing.

And yet Hugo maintained monastic focus through the tumult and the bitter, cold winter that followed. For those few months, his life was his novel. In a sense, then, you could argue that this technique of putting distance between himself and the outside world helped him block out an entire revolution.


On Focus

I am not Victor Hugo. Unlike Hugo, I do not write naked (much to the great relief of everybody I know). Unlike Hugo, I do not have the aftershocks of a revolution reverberating through the streets outside my window.

But, also unlike Hugo, even during my most intense and driven periods of work my mind is fragmented and fractured, with filaments of attention radiating outward from the task at hand like cracks in a windowpane. Under deadline pressure I lose hours of work to distraction, and it’s only stubbornness that keeps me in my chair long enough to overcome it. It feels, well, dirty—like dragging my brain through a pit of gravel.

Why? Because when I lock myself in a room to work on a project, I have the entire internet locked in with me.

On balance, my world is kinder and quieter than the one Hugo lived in. Even with the tumult of the current administration I don’t have to linger at my window to investigate whether the gunshots in the distance may signify trouble for me, as Parisians must have done late in 1830.

But in spite of this, I am pretty sure that the advent of computers and the internet has left me far more prone to distraction.

The reason for this is simple. The same principle of hyperbolic discounting that worked so well to keep Victor Hugo’s brain clear and focused in the middle of months of political upheaval can also work in reverse to make a brain dirty and distracted in an otherwise uneventful world. Why? Again—immediacy.

To get his treasured focus, all Hugo had to do was reduce immediacy by putting a barrier between himself and the noise. Granted, it was a dramatic barrier—but once there, it walled out a space for him.

In contrast, in the modern world the major commercial forces have worked diligently to increase immediacy. The tools we use to write are overwhelmingly digital, and they come with distraction built into them on purpose, because the internet runs on attentional capture: Click-through rates. Eye time. Lizard-brained stares, aimed at chameleon screens, which change their colorful display into whatever you ache to see. It’s maddening.

If you’re enterprising, there are ways to deal with this, of course. Jonathan Franzen has an air-gapped4 laptop devoted solely to his writing. Apps like ‘Freedom’ will wall away the internet for you for a time. You can use an electric typewriter, which has a retro charm and makes a satisfying *click, click* noise. You could also use a dedicated writing device, such as a reMarkable tablet.

But to my mind? The best solution has always been the simplest. Get a good pen and a pad of good paper. Sit down at your favorite coffee shop with your favorite coffee drink. Leave your computer at home, and your cell phone too, if you’re feeling brave. Print your notes ahead of time and bring them with you. And then? Write.

In my mind this has always been the superior solution, for several reasons.

  • Writing is linear. There is no delete key, so you can’t get trapped in a delete-edit-delete loop. You have to move forward.

  • If you’re using a fountain pen you avoid hand cramps and the writing has a slow, relaxed, hypnotic rhythm.

  • It’s slow. And after a while your thoughts slow, to match.

  • Above all, with a small act of will you can put up a true barrier, by leaving the computer at home.

It’s this enthusiasm for analog writing that led me and my colleague to try to find ways to make it more accessible to writers.

Of course, writing with a pen is accessible to anyone who can scratch up a Bic and a table napkin, but that’s not quite what I mean. Analog workflows suffer from a big flaw: The time it takes to transcribe handwriting is, for many people, prohibitive. This has gotten much better thanks to AI, but it remains the primary bottleneck for most people who want to compose using pen and paper.

To give you an idea of just how much of a bottleneck this is, we can use Victor Hugo’s work as an example. Hunchback of Notre Dame, at the high end, was estimated to be about 190,000 words long. If Victor Hugo had to transcribe his handwritten work himself, how long would that take?

Well, for typing, if you typed 80 words per minute without ever once hesitating or stopping to eat, pee, or think, it would take you… 2375 minutes. Or about forty solid hours of typing.

What about voice? Well, if you use a good voice transcriber and talk non-stop at about 160 words per minute? About twenty solid hours of talking.
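(For the curious, here is the arithmetic as a back-of-the-envelope sketch in Python; the word count and the words-per-minute rates are just the rough figures quoted above.)

```python
# Back-of-the-envelope check of the transcription estimates above.
# All figures are the rough numbers quoted in the text.
words = 190_000  # high-end estimate for Hugo's manuscript

for method, wpm in [("typing", 80), ("dictation", 160)]:
    minutes = words / wpm
    print(f"{method}: {minutes:,.0f} minutes (~{minutes / 60:.0f} solid hours)")

# typing: 2,375 minutes (~40 solid hours)
# dictation: 1,188 minutes (~20 solid hours)
```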

AI is better. But it has limits; you can only transcribe a modest number of pages simultaneously before it starts to hallucinate, which means you have to process one or two pages at a time. Hugo’s manuscript came out to something like 700 pages, minimum. That’s a lot of ChatGPT queries, and a lot of loading time.

AI is probably the best option, especially if you want to scan-as-you-go. But a few months ago my colleague showed me a prototype of his newest batch transcriber, Scribbles. The two of us geeked out for several hours about its utility for archivists and people transcribing old diaries. Then we realized that it would be just as useful for writers who want a no-hassle way to bridge the gap between their notebook and their Substack account.

So I pitched a long-dormant dream to him: I’ve always wanted to be able to spend a day writing in my notebook, and when I’m done, snap a photo of my writing, press a button, and have it appear in my email, transcribed, a moment later.

And holy hell, he did it. He made it happen.

This article isn’t exactly meant for marketing, but if you are one of those people who likes using pen and paper, and hates (or just avoids) the hassle of getting it into the computer afterwards, you should give it a shot. We’ve set it up so that people can scan a few pages free of charge as an experiment, and we’d love your feedback.

Mostly, though, I have been wanting to log an observation. For most of my life the inexorable progression of technology has been toward greater intrusion. Every new development has been designed to push me further into the world of bright screens, music, pixels, and impulse purchases—and with that, the feeling that my mind was growing progressively messier and more fragmented.

This small piece of AI-powered tech, on the other hand, has given me a simple, elegant way to step out.

I am not sure, yet, if this is just an evanescent victory, or if it is a hint of a possible future. I remain hopeful. But for now, a simple solution to the transcription problem has permitted me to step out of the digital world in a major way. Most of my composition, now, is done with a pen.

There are other parts, of course. Editing. Formatting. Commenting and responding. Those remain online.

But the part most important to me? A coffee. A sunrise. A pad of good paper, and a pen. And a brief respite from all the chaos. My own quest for a clean mind—a chance to focus, and to write in peace—has been fulfilled.

And I didn’t have to shed a single layer of clothing to do it.

Vive la révolution.

Not Esmeralda. But maybe Esmeralda. Image by Author, via MidJourney
1

The winter of 1830-1831 was bitter and drafty. The year before, the Seine had frozen over completely, and while 1830 wasn’t as bad, the government of Paris had to open public halls where citizens could warm themselves so they didn’t freeze to death. Some did anyway.

2

Most people think of sloth as laziness or inactivity—but in early medieval literature one of the primary components of sloth was acedia, or an aversion toward productive work. Oftentimes this aversion was very active, and took the form of busy-work and compulsive distraction, or of locking up and retreating in the face of a daunting task.

3

Self-help bros would recognize this as a dopamine detox.

4

Air-gapping a computer simply means completely disconnecting it from any network that links it to other computers. It’s a great technique. I’ll be doing it as well.

]]>
<![CDATA[Notes on the World Beneath Us]]>https://jimhorton.substack.com/p/notes-on-the-world-beneath-ushttps://jimhorton.substack.com/p/notes-on-the-world-beneath-usThu, 04 Dec 2025 10:27:47 GMTTo my readers: Early in the process of reading Deb Chachra’s How Infrastructure Works I recognized that she was writing a book on one of those lifelong themes I keep returning to—the multi-layered world, and the importance of details to a life well lived. I have written about this extensively, about what it means for health, and connection, and writing. I address that last part, writing, here. If you are following my work for that reason, hold to it—about halfway through the piece you will find, suddenly, that it’s written for you.


In China Miéville’s The City and the City, the twin metropolises of Besźel and Ul Qoma occupy the same geographic coordinates.

In almost any other situation we would imagine that this means they share a border, but not Besźel. Not Ul Qoma. By dint of history, those cities share more than a border; they share the exact same space. Some boroughs are unique to one city or the other, and small filaments of each stretch away from the downtown into isolated suburbs, but otherwise Besźel and Ul Qoma overlap.

In spite of this, the cities remain distinct—even in their shared parks and roads (called “crosshatched" zones) the citizens of Ul Qoma experience their city entirely as their own, and Besźel simply does not exist to them. Similarly, in Besźel, if you ask a person on the street to point to an Ul Qoma resident, they will look at you, confused. “Surely you must be mistaken,” they will protest, “Nobody from Ul Qoma is here.” And they will insist on this as citizens of Ul Qoma walk past them, three feet away.

Now, Miéville’s story is weird, but it’s not fantasy or sci-fi weird, so there is no twist of magic or five-dimensional spacefuckery that accounts for the shared-but-not-shared nature of the Cities. The divide between Besźel and Ul Qoma is purely attentional. And if you are thinking “Aha! So, it’s a matter of attunement! If you just will yourself onto the right wavelength, Ul Qoma will appear!” you’d be right, technically. Sort of.

But no. Miéville’s story is weird. You’re thinking Lovecraft weird, or maybe Scientology weird. But this is weirder. This is Miéville weird, which, to those in the know, usually means something completely new that can’t be classified by ordinary standards, except that it will usually contain more than a hint of communism. And here, Miéville is true to form. In The City and the City, Besźel and Ul Qoma are separated, not by dark wizardry folding space like aethereal origami, not by cosmological flaking of branes in the multiversal bulk, but by a more sinister force—the fetid egregore lurking behind the darkest nightmares of the twentieth century.

Bureaucracy.

Or, to be precise, a shadowy alchemical mix that is approximately 60% bureaucracy, 10% “that one weird psychological trick” and 30% terror at the prospect of being carted away by the local Stasi for the unforgivable crime of seeing the other city.

A Pollution of Cities

This article isn’t about The City and the City. But indulge me a bit, please? There’s a reason I put it here and it will make sense soon. Let’s talk logistics: How do two cities remain separate while occupying the same space?

In TC&TC, the answer is that the citizens of Besźel and Ul Qoma learn from childhood to intuit the signs of the other city—subtle differences of gait or gesture; small variations in decoration and architecture—and to simply… unsee anything that is not their home. The other city becomes background noise.

Most citizens grow adept at this. Some of it is selection; those who fail are whisked away by Breach, the secret police tasked with enforcing the separation between the two cities. But mainly, the habit of unseeing is so deep that people can even travel between cities. Copula Hall, the one structure that exists in both Besźel and Ul Qoma, serves as the recognized border between the two. Travelers enter Copula Hall, present their paperwork and, on obtaining a visa, step out through the door they just came through.

But now? They unsee the city they just came from, and attend to the new one. Done right, the attentional shift is so complete that home is lost from their view, until they cross again.

This unseeing is confusing, of course, until you realize that you do it every day. In fact, that was Miéville’s point. Our native capacity for unseeing is immense.

For example: I live in Anchorage, Alaska. Every day I pass by a myriad of cities that exist parallel to my own. Each of these cities is also named Anchorage: In one of them a network of some three to four thousand homeless people pitch tents in the woods, form communities, and brace themselves to weather the Alaskan winter, stripped of the shelter which protects richer citizens. In another Anchorage, thousands of believers sing hymns in buildings I will never set foot in and will barely even register. In another Anchorage, moose wander through narrow strips of forest, searching for twigs from paper birch and quaking aspen, gone dormant in the fall. For the most part, I do not see these cities. They are not my city.

Similarly, while I do not know what city you live in, I know for certain that you have your own Ul Qoma. Less than a mile from where you live there is a building you have passed a hundred times this year that you have never once registered. It glides through your visual field as you drive to work, but otherwise withholds itself from you completely. It intimates nothing about its color, architecture, or purpose to your senses, even though it’s right there, just a few feet away.

You’ve got no idea what that building is. You barely even have a clue that that building is. And why would you? It’s not your city.

But it’s somebody’s.

And, really, that is the central conceit of Miéville’s world. That city is somebody’s. You don’t know them. But if you did? They could tell you about it. They could take you there. You could see this world that exists, nested within your own, if only you could just get your eyes to focus on the right set of details.

How Infrastructure Works

Lately I’ve been thinking about those details. Those who follow me will recognize that the details are a long-running theme of mine, as well as our reaction to them, and what that says about the state of our souls.1 So I suppose it was inevitable that when I started reading Deb Chachra’s How Infrastructure Works I would rapidly get caught up in her world.

In the opening pages of How Infrastructure Works, Chachra defines infrastructure as “all of the stuff you don’t think about,” and with those words it becomes clear that her book is strange. Miéville-type strange. Her work is a journey into The City. Read it, and you are entering Copula Hall: A bit of shuffled paper, a few moments of your attention, and you will step outside again—like a native of Besźel traveling to Ul Qoma, you will walk out the door you entered through. But you will be in an entirely different town, because now you are paying attention to a different set of markers.

Chachra positions herself as your guide to Ul Qoma, except the metaphor breaks there, because Ul Qoma and Besźel are only two cities. A chapter into Chachra’s writing it becomes apparent that the strange geography she lives in consists of dozens or even hundreds of cities, all overlapping with each other in wild and twisted fractal geometries that spiral outward in self-similar patterns, from coils that span continents, to tiny whorls of copper pipe that terminate in the gas burner on your stovetop.

Chachra, a professor at Olin College of Engineering, in Needham, Massachusetts2, is well positioned as a guide to all of these cities: a self-professed infrastructure geek with a doctorate in materials science, Chachra authors the Metafoundry newsletter and has a history that spans Canada, India, and the United States.

Having lived across a range of cities—some where the infrastructure is working, others where parts of it barely function at all—Chachra is attuned not only to infrastructure but to what it does for (and to) the people it serves, and what happens when it is taken away from them. Her work, then, is a lively meditation on a wide range of themes: How does infrastructure transform us, granting us exceptional agency in some areas while robbing us of it in others? What does infrastructure mean on a warming planet? How are principles like justice and fairness embedded in physical infrastructure? Why does everybody in the world hate their internet provider?

I’m only four chapters in, and I’ll likely have more to say about this as I go, but I wanted to start here with a discussion of a few themes that have popped out to me already. Some, Chachra discusses extensively. Others are my own extrapolations. But they center on the common theme of the for-granted-ness of infrastructure.

You and I live in the middle of a confluence of cities. In your town alone there is the city of copper pipes and the city of storm drains. There is the city of stray pets and the people who catch them for a living. Tucked away in the high floors of the high buildings there is the city of money and its derivatives. There is the city of tents. The city of maternity wards where each day a newborn draws their first breath. The city of hospice care where each day an old soul draws their last. All of those worlds are there for those who have the eyes to see them. With a single decision you could just… look, and completely transform your world.

But most of us barely notice them. Why?

The World Beneath

The word infrastructure means ‘the structure beneath us,’ which I think is an apt metaphor. But Chachra introduces a new metaphor that I think captures not just the element of support but the ways in which it empowers us.

You are a cyborg. This is true in some senses more than others: Driving a car that is responsive to a roll of your wrist is not so different from piloting an honest-to-god war-mech, though if the latter existed we probably wouldn’t use it to get our Tuesday groceries.3 Generally, however, if you think of cyborgs as humans augmented by machines, we are. We are augmented by entire networks of machines; we just haven’t breached the skin barrier—much.4

Generally we don’t think of ourselves as augmented beings but this is largely a quirk of our psychological make-up. We are wired to attend to desires and frustrations. Tools—stable, boring tools—are neither.

Instead, we experience tools as extensions of our will. Not only do tools fade into the background; they become a part of “us” at the neurological level. If you have ever tried a VR headset, one of the most astounding effects is how rapidly we come to perceive the digital hands of the in-game avatar as our own. Evidence from neuroscience suggests that the underlying mechanism is synchronization—when a hand in our visual field moves in perfect time with our will we quickly incorporate it into our self-concept.5

Is it so different with tools? This basic psychological effect, the embracing, acceptance, and forgetting that comes with treating a tool (like a pen) as an extension of me, extends, I think, to most of the things I use, to the point where they easily fade into the background.6

I may be wrong but I think that this incorporation of tools into the self extends to infrastructure, as well. Mentally, once we have electric light within easy reach we stop thinking about the systems that grant it to us. All we know is that when we want light, it’s there. The massive societal infrastructure required to deliver light on demand to millions of citizens sinks from view because, in our mind, light is an action—the simple flip of a switch.

Chachra, on the other hand, dredges these systems back up. Once she does it becomes immediately obvious that most of what we experience as agency is the far end of a densely intertwined network of other people doing things for us.

For example, one of my ambitions is to succeed as a writer, by some nebulous metric that I have not pinned down yet.7 The simple fact that I can want that is predicated on enormous amounts of human work braided into an intricate system that permits me to want that.

  • I can want to be a writer because in the middle of a cold Alaskan winter I am surrounded by a system that allows me to remain warm and fed. Without that system, which is the product of thousands of people working, I would have other preoccupations, like not dying.

  • I can imagine success as a writer because there is another system entirely, or rather, several dozen, that all make an ecosystem where I have the potential to get paid for my efforts.

  • I can write because over a hundred years ago many thousands of people built a densely regulated system that allows me to banish darkness with the flick of a switch, which is why I can write this at 3am.

It is true that there is always a system. If it wasn’t this system of natural gas and hydroelectric power and copper cables delivering light to me, for example, it might instead be a system of whalers and oil lamps, or beeswax candles and local chandlers. It’s also true that a person’s actions matter. But I think that we are often caught up in this illusion of alone-ness, precisely because these systems that grant us agency fade into the background. In our forgetting, we become creatures of goals and discipline and targets. We pride ourselves on being self-made, forgetting that the materials we fashioned ourselves from were provided by others. And yet, on even a brief consideration, how is it possible to not be awed and humbled by this vast, complex, utterly stupefying network of us-ness that makes ‘me’ possible?

For Writers: A Note on Contempt

One of my favorite stories about writing is about a young professor teaching composition to a class of freshmen. For her first assignment, students were to write an essay about their hometown.

One student sent her an email the first evening: “I can’t think of anything to write about my town.” The teacher, hoping to be helpful, responded with a few suggestions. Find a building, or a park, or a small roadside monument—write about that.

“That’s pointless,” the student said. “Everything here is boring. There’s nothing to write about.” She responded again but her student swatted down each suggestion. Too boring. Pointless. Not exciting. Finally the teacher got irritated: she sent her student an email with very specific instructions:

Sit down in front of your old school and look at the front door. Start with the brick in the upper left-hand corner. Write about that first, then move right from there.

Finally the student seemed to get the point of the assignment. They were supposed to describe their hometown, not judge its worth. Their essay wound up in her inbox the next day—it was about something other than bricks, suggesting the student had learned their lesson, and actually looked.

I know the takeaway that most people offer on the internet is that you should look deeper, or that you shouldn’t take the world for granted, or that life is in the details. But I think the more useful thing to do here would be to interrogate just what it is we’re feeling when we fail at that. What is the counterforce that impels us away from settling into the details of a thing, pulling out a pen, and writing?

There are different types of writer’s block, I suppose. I think a lot of writers get so keyed up by anxiety that their brain can’t rest on any one thing long enough to write about it. But there is a second type of writer’s block—one accompanied by a deep sense of ennui. It goes by many names. Indolence. Listlessness. My favorite is Acedia. But most of us just recognize it as boredom.

Boredom is associated with idleness but it is not a passive emotion. Quite the opposite; boredom asserts itself aggressively. Boredom—at least the type that lasts for more than the five seconds it takes to find something to do—is an internal conflict, where one part of your soul proposes an endless series of ideas while another rejects each of them in sequence because they are not stimulating enough. If you interrogate the feeling deeply it appears to be an aggressive, internally directed form of contempt.

I’ve made my stance on contempt clear elsewhere. I think it’s a fool’s emotion, an idiot affect, and that the simple act of feeling it diminishes a person. I’ve never seen one person sneer at another who didn’t immediately make themselves look like the dumber of the pair. But why?

And the answer goes back to Chachra’s infrastructure. To feel contempt is to lose sight of those many hidden cities that support you. It is to forget that you are augmented—that the work of countless others made you able to want more than a day’s meal and a shelter from the cold, and still does.

And, from a writer’s perspective—to feel contempt for anything is to lose sight of Ul Qoma. There are no secret police who will cart us away for the crime of seeing the many other Cities that hide from our view. The only force keeping us from looking is our own deeply ingrained unseeing. This, at least, offers a way out of writer’s block. The trick is to fight back against the contempt, and the lizard-brained desire for stimulation now. Pick something that is so small that your first impulse is to dismiss it as nothing—and then prove to yourself with your own eyes that it is not. Start with the brick in the upper left corner. Work your way right from there.

1

Each of these links connects to an article from Medium on the nature of details and how we connect with them. It is one of my longest-running themes, having appeared in the very first article that I wrote, many years ago. There are others, but I won’t belabor the point.

2

Needham is a suburb of Boston, and Chachra seems to have embraced her hometown, so How Infrastructure Works can also be thought of as a tour of Boston, and Boston, and Boston, and Boston, and Boston, in true Miéville fashion.

3

Unless it’s Wal-Mart.

4

Although in some cases, we’ve done even that! Pacemakers, hearing aids—there are several mechanical augmentations to the body that cross completely into cyborg-land.

5

The earliest known version of this effect used a rubber hand. Botvinick and Cohen (1998) subjected people to a clever illusion where they placed their left hand on the table in front of them. Their hand was hidden with a screen, and a rubber hand was placed in their field of view—experimenters stroked the rubber hand and the person’s left hand in sync with each other, and very quickly people reported feeling as if the rubber hand was suddenly their own. A later study by Ehrsson and colleagues (2007) reported the same, but as a coup de grâce they threatened the rubber hand with a sharp object—participants’ heart rates shot through the roof and they recoiled from the threat, even though they knew objectively that the rubber hand was not their own.

6

Research by Iriki and colleagues has demonstrated that teaching a monkey to use a tool changes their neurological representation of their body, incorporating the tool into their self-schema. There’s a lively body of psychological literature exploring this—for an early review, you can see Maravita & Iriki (2004). An accessible PDF can be found here.

7

Money. It’s money.

]]>
<![CDATA[You Can Seek Love Now, Not Later]]>https://jimhorton.substack.com/p/you-can-seek-love-now-not-laterhttps://jimhorton.substack.com/p/you-can-seek-love-now-not-laterTue, 09 Sep 2025 04:00:07 GMTNote: I’m busy building an online class from old notes. I’ve got two new articles on the way and a substantive amount of research for Season One, which I discussed in an earlier post. In the meantime, this is a piece I posted on Medium two years ago, and one that I hope speaks to those of you who spend all of your time feeling heinously overworked. Reviewing it now, it’s a bit maudlin, but I guess I’m okay with that; I was feeling maudlin at the time. Credit to — it was a comment of hers on the endless chasing of “meaning” that some people get up to, which reminded me of this. It’s not exactly the same thing. But it’s similar—sometimes you just need to stop the endless striving, and chase joy indirectly.

Image by Author, via Midjourney

There are a few hidden meta-patterns in the social sciences. I stumbled across one early in my graduate studies, due to luck. Bizarre as it may sound, the inciting element was a paper on pronouns; the authors used second-person pronouns (you and thou) as a window into exploring two types of human relationships — one coldly practical, marked by power differentials, and the other warm and equal.

That paper stuck with me. I began seeing the dual themes of cold practicality and warm equality occur across vastly different areas of research. I later found that other researchers — primarily Dr. Susan Fiske and her colleagues — had spotted the same pattern and researched it at length. Most people outside of the social sciences don’t learn about it.

Most of the people inside of the social sciences don’t learn about it, actually, except as a brief mention in a textbook. But you’ll find that the two themes explain a lot about relationships.

We’ll use Dr. Fiske’s names. The first theme, Fiske calls warmth. Warmth is pleasantness, trustworthiness, and sociability — the degree to which a person is desirable to other people as a relationship partner.

The second theme, Fiske calls competence. The “can-do” trait. Competence is the ability to make things happen. Competent people are ones you rely on.

The boundaries between the two are fuzzy. Likable people get things done by bringing others together, forming collective groups. Capable people are likable, in the sense that those who have power are desirable to be around. But the two are separable; you can be cold and competent, or warm and generally worthless, with no contradictions.

The pair pops up often in the social sciences, and especially in social psychology, which was my area of specialty. Consider the following:

  1. In the 1970s psychologists were interested in masculine and feminine traits. Eventually they realized that it was inappropriate to gender the traits, so they renamed them but noted that each trait was stereotypically associated with a gender by society. Communal traits, associated with women, clearly corresponded to Fiske’s warmth dimension. Agentic traits, commonly associated with men, were clearly the same as Fiske’s competence dimension.

    The idea of gendered traits is deeply flawed, but the flaws are informative; they suggest that warmth and competence are so fundamental to perceiving others that we use them as a basis for stereotypes.

  2. Another, more subtle one: when studying how people evaluate their self-worth, psychologists distinguish between two major constructs, self-esteem and self-efficacy. Scales that measure self-esteem often gauge how likable a person feels they are, and scales that measure self-efficacy often gauge how capable a person feels they are.

  3. There’s also good reason to believe that warmth and competence are embedded in the structure of relationships, themselves. Balanced, egalitarian relationships are built on warmth — two people appealing to each other, mutually. Asymmetrical power relationships are based on competence — people on both sides of the power divide care greatly about the capabilities of the people on the other side.

These patterns occur often enough that they probably represent something real about the psyche. They’re old. Old enough to show up in the work of Nietzsche. They represent fundamentally different orientations towards the world. And they may even be grounded in the way our brain is wired, reflecting the division between the sensory cortex (which perceives) and the motor cortex (which acts).

I may be getting too fanciful with that last part. What seems true, however, is that we have two modes of engagement with the world, and each is also a metric for evaluating ourselves and determining if we stack up; we feel good about ourselves when we know that we are likable and can form warm relationships with others. We also feel good about ourselves when we know we are competent and powerful. Ideally, as we grow and mature, we learn to feel good about both.

But… what happens when we don’t?


There is a winnowing period that occurs at a young age, around the time we arrive in school, bright-eyed and looking forward to making friends.

Starting at a young age children in school begin to stratify, with some being more popular and well-liked than others. Children who are liked by others have more opportunities to socialize and grow. They tend to have more fun, more confidence, and more experience in socializing, with its subtle codes of conduct. Other children are left out.

Even when children are socialized well, there are still other opportunities in childhood and adulthood to learn a horrible set of lessons — to learn that warmth is evanescent, and not something you can count on from the people close to you. That the people close to you might change unpredictably. Or that they might leave. Or that, like the child without friends on the playground, they might never arrive in the first place.

All of them equate to the same dark lesson: that, for any number of reasons, the warmth of human relationships is not reliable.

What happens, then? Because warmth is very important; as John Gorman noted in a poignant reflection on the topic, people who are well liked do better across all levels of business and personal interaction, period. In his words, “Rather than be the best, doesn’t it sound a whole lot easier to just be someone people want in the room?”

So what happens if you’re left feeling like you’re not wanted in the room?

I suspect that many of us turn to work. The second source of personal fulfillment. Capability. Why not? For some of us, the amount of effort we can put into excellence is much more controllable than the amount of joy we can produce in others.


How many of us work because we are trying to fill a void in our connection with others? It seems to me that most workaholics are operating from a place of lack. Some of them keep elaborate ledgers — I have accomplished X, and therefore I deserve Y from people.

Others are using work as a way of doping on self-made goals. Rather than acting as if their achievements entitle them to connection with the people around them, they act as if the achievements are a prerequisite to a love that remains endlessly on the horizon.

What is really happening is that we are delaying. We are working furiously in the hope that if we work hard enough, long enough, and achieve enough, one day some person we admire will look at us and say “wow!” and give us the love that we hope for.

But it’s a poor substitute. High accomplishment and hard work evoke many reactions, but very few of them are the ones that we want. For all that we want it to be, competence is not warmth. The emotions you get for being competent—faith, gratitude, admiration—are important, but they aren’t enough to fill the void when what you lack in life is warmth, love, and communion.

You have to do a different set of things to get those. And you can. It is possible that you are one of those people who were taught at a young age that likability was beyond you — that you didn’t have what it took to make people laugh, to respect you, or to approach you for the simple joy of being near you when you are at your best.

If that happens to be the case—if someone taught you those ugly lessons—they were wrong. You do have what it takes. You have the ability and the right to be likeable, to be lovable, to be loved. You also have an obligation to yourself to make it happen, and to stop procrastinating using work, striving endlessly for an excellence that you hope will make you brave enough to try.

You’re brave enough now. You’ve just spent so long working hard that you forgot. Go for it.


Image by Author. Components generated via MidJourney. Assembled by author via Paint3d.
]]>
<![CDATA[You Can be Ungrateful]]>https://jimhorton.substack.com/p/you-can-be-ungratefulhttps://jimhorton.substack.com/p/you-can-be-ungratefulTue, 17 Jun 2025 11:30:14 GMTAuthor’s Note: This also comes from the Medium archives, in 2023, where it was one of my most popular stories by a wide margin. However, this one is not a simple repost; instead, I’ve expanded on and updated the original substantially. I think this version is clearer, and better-developed. My hope is that its central message remains strong — J

Image by Author, via MidJourney

Natural gratitude

Gratitude is overplayed. I believe that, and not just in a vague sense; I can tell you how it is overplayed, who is responsible, and why it damages us spiritually and culturally. But we can get to all of that soon enough. Before the criticism, I’d like to start by talking about why gratitude is beautiful.

I’m a social scientist and I have colleagues who specialize in studying gratitude. I’ve watched over the past decade as it has become one of the primary pillars of an entire industry of self-help books, keynote speakers, and online articles, each praising the profound effects that gratitude has on well-being.

There are different ways to define gratitude. I’d like to start with the most basic form: Gratitude is, first and foremost, an emotion; it is shaped by biology and the social context in which that biology is expressed. It is also highly social—to be grateful is to be grateful for something, and to be grateful to someone. This type of gratitude deserves a name here, so I am simply going to call it natural gratitude.

One of the biggest gratitude studies in the past few decades is, in fact, about natural gratitude. It was published in 2008 by Dr. Sara Algoe and her colleagues, Dr. Jonathan Haidt (yes, him) and Dr. Shelly Gable. Their study leveraged a UVA sorority tradition called “Big Sister Week,” where new sorority members, or “little sisters,” were anonymously sponsored by one of their second-year “big sisters”.1

Over the course of Big Sister Week, the big sister would give anonymous gifts and help to her little sister as a way of welcoming her to the sorority house. At the end of the week there was a special ceremony where big sisters introduced themselves to their little sisters.

Algoe and her colleagues were interested in studying a previously unexamined aspect of gratitude; they believed that gratitude was communicative. Algoe’s intuition told her that something special happened when a person not only gave a gift, but gave a gift that communicated “I see you and care about who you are” to the receiver.

In the basic framework Algoe was exploring, gratitude is a signal that helps foster warm, loving relationships. When one person is generous to another, gratitude serves as an external signal to the giver that the person receiving the generosity is open to and capable of building a relationship based on reciprocity and mutual care. On the other end, for the person receiving the generosity, gratitude acts as a sort of internal signal that the giver was offering something meaningful, and it also acts as a prompt—a desire to repay the favor, to further the relationship by returning the generosity.

That might sound somewhat Machiavellian, but that’s largely because when it is described mechanically we tend to imagine it as a product of rational, conscious calculation. But emotions aren’t calculated—this is an intuitive, deeply felt process. In practice, natural gratitude serves as a sort of “sincerity amplifier.” It feels like being seen and appreciated, coupled with a desire to return the favor, and it therefore acts as a sort of emotional glue that helps compatible people quickly glom together into communities.

Algoe’s findings supported this theory: Big sisters whose generosity made their little sisters feel seen and appreciated formed stronger relationships. It seems that the process of giving, receiving, and giving back serves to help people build social bonds by kickstarting a virtuous cycle of mutual generosity, and gratitude is the felt emotional energy that powers the cycle.

Image by Author, via MidJourney

Abstract gratitude

Please keep in mind the idea of gratitude as a virtuous cycle of mutual generosity. Natural gratitude occurs in a cyclical framework; it requires a person who gives first to get the cycle going, a person who gives back to perpetuate it, and a sense that the gestures on both sides are meaningful.

Modern writing on gratitude ignores this dynamic. It is preoccupied with a different type of gratitude — a variation which I am going to call abstract gratitude.

Humans’ advanced cognition gives them a special capacity to “abstract” an emotion, making it possible to feel that emotion in contexts far beyond the ones it evolved for. For example, it is well accepted in the sciences that compassion in animals is felt primarily for family, and that, with modest exceptions, it tends to diminish as genetic distance between organisms increases.

Humans, on the other hand, can feel compassion for trees, which are most decidedly not kin, and which are not natural targets for emotional bonding at all. How do we do this? Well, simply put, we leverage our cognition to imagine trees as if they were family. This strategy pops up in how we talk: When people cultivate compassion for non-kin, we see them use the language of family to describe the ones they are extending compassion to. This linguistic shift is not just a byproduct of feeling compassion; it is an active component in creating and amplifying compassion outside of the context where it typically occurs.

Abstract gratitude is an extension of this ability. The trick, as you might have guessed, is to expand your understanding of gratitude, leveraging your imagination to think of life itself as a reciprocal, generous two-way interaction. Phrased slightly differently: you take the warm two-way relationship built in natural gratitude, and project it onto the universe.

Abstract gratitude differs from natural gratitude in two key ways.

  1. Natural gratitude requires a giver, while abstract gratitude works as long as a person can imagine a giver.

  2. Natural gratitude requires a gift, while abstract gratitude works as long as a person can imagine that ordinary things are gifts given to them.

Abstract gratitude, done right, is both profound and life-changing. It is a virtue, in all the truest and best senses of the word, and if you do it sincerely it will fundamentally transform the way you interact with life and with other people.

But it also comes with an intractable vulnerability. Eliciting abstract gratitude is a powerful strategy for manipulators, con-men, gaslighters, and narcissists, all of whom absolutely love its lopsided social effects. So, while abstract gratitude, practiced by an individual, is deeply life-affirming, we should be skeptical of abstract gratitude when it is promoted by people in power.

Image by Author, via MidJourney

The abuse of abstract gratitude

Here’s why abstract gratitude is such a tempting target for abuse: Since abstract gratitude is powered by an act of the imagination, it allows one party in a relationship to give nothing while demanding that the other party compensate for the imbalance using… their imagination!

In other words, there’s a certain context in which a person can pull an old-fashioned switcheroo in a social relationship by using words to trick their partner into feeling abstract gratitude instead of using actions to prompt them to feel natural gratitude. The easy signifier of this switcheroo is when someone says “you should.”

You should be grateful.

I mean, don’t get me wrong. Sometimes there are good intentions behind it. For example, there are a lot of kids who should be grateful to their parents. If you’re a beleaguered parent who just wishes their kids would see and appreciate all the things you do for them, that’s hardly a form of gaslighting or manipulation. That’s an understandable reaction to the fact that a relationship that should be built on natural gratitude is missing half of the cycle.

But I think that beyond that and a few other obvious counterexamples there are a lot of powerful people — such as corporate CEOs who want to abuse their employees, or narcissistic abusers who want to control their partner — who love the idea of abstract gratitude because it seems like a great way to get people to be satisfied with their lot, with minimal investment.

Imagine yourself as a CEO who wants his workers to be happy and grateful, but doesn’t want to put in the time and effort (and money) to let employees know that they are valuable, unique, and appreciated. Wouldn’t abstract gratitude sound absolutely amazing to you?

Just imagine it! A workplace culture that encourages all of your employees to feel grateful to life, God, and the universe…so that they might conveniently overlook the fact that you haven’t given them any reason to feel grateful to you specifically? What a deal!

I think we have to accept this uncomfortable fact: at least part of the reason that abstract gratitude is such a talked-about cultural phenomenon is that powerful people love to promote it so they can suck on it like parasites.

I’m not saying that the people who hire keynote speakers to talk about gratitude are deliberately using it to gaslight their employees in a cynical, dystopian way. What’s happening is probably more like this: They want employees to feel grateful and happy, and they see a high-cost path to getting it (i.e., be worthy) and a low-cost path to getting it (i.e., hire a keynote speaker) and gravitate to the low-cost option.

Either way, though, choosing the low-cost option cheapens the emotion and the relationship that it takes place in. Low-cost means low quality. (Surprise!)

So how do you deal with a world where demands for abstract gratitude are leveraged against you? Well, that brings me to my final and most important point: In a culture where gratitude is used as a form of manipulation, ingratitude is a powerful act of self-care. When someone offering you a raw deal says “You should be grateful,” ingratitude is having the spine to look at them and say “No. I want you to look at me, see me, and offer a better deal.”

Image by Author, via MidJourney

The primary debt

There is a problem with ingratitude, however. It can protect you, but it can also isolate you in a tiny bubble of self-interest. Abstract gratitude is still an amazing way to live, and even when you’re dealing with manipulators there’s a fine line to walk: you want to protect yourself while remaining engaged and grateful, without lapsing into defensive narcissism. So how do you integrate the necessity of being grateful—to life, to friends, to good leaders and good colleagues—with the necessity of being ungrateful? Can we reconcile the two?

I think we can. In fact, it is a deceptively simple principle. We can leverage the nature of abstract gratitude itself to form a “primary cycle” of gratitude within ourselves, which helps us defend against people trying to build unnatural secondary cycles. At the core of it is a simple idea:

Be grateful to your self.

Not your ego—that’s a sure path to narcissism. Be grateful, instead, to your physical self—your body, the biological organism that, on most days, you treat as a vehicle or an automaton, a tool to be manipulated on the way to your latest goal. Stop and consider your stomach, your legs, your heart, as if they were more than just objects that existed to carry you to the next promotion. Leverage your imagination a bit; teach yourself to view your body as family, as someone to love and to care for. Because it loves and cares for you.

Your body is a small, frail hero that fights daily against the forces of chaos for your own survival. It sacrifices itself daily, at great cost, so that the thinking, feeling part of you can laugh, live, and love. It is all you have. You can be grateful to it, and grateful for it. You can be generous to it, and accept its generosity in turn.

With that knowledge you can start your own self-contained virtuous cycle of gratitude, letting it spiral tightly within the confines of your own soul. And that makes an enormous difference.

When you are grateful to your body, you have someone to whom you are indebted first. You have a primary debt of gratitude that you owe to your own being. With that as your foundation, you are free to respond with grace to others. To those who come to you in good faith, who see you for who you are, who aren’t looking to steal from your body for their own ends, you can share the virtuous cycle of natural, organic gratitude with them.

But to those who are trying to use gratitude to manipulate you?

You can indulge them just a little. You can be grateful for what they have given in that all-embracing way that comes with a sense of abstract gratitude. But then you can remember your primary debt. And with that primary debt in mind, you can say “Thank you,” and follow it up with the much more powerful words: “…but I need you to look at me, see me, and offer a better deal.”

And then, if they’re really one of those sad sacks of shit who were trying to use your own virtue to manipulate you? Have fun listening to them howl.

Because you owe a debt to yourself and your body — that little fire of life in you that keeps on burning, which gives you everything, every day, at great cost, with great love. Be grateful for that first and it will give you the standing to be ungrateful where it matters. With grace.

If you enjoyed this post, the kindest thing you could do, by far, is share it with someone else who you think might enjoy it. And if some idea here inspired you, let’s talk! The share and comment buttons are below. Follow your muse.

Image by Author, via MidJourney
1

Algoe, S. B., Haidt, J., & Gable, S. L. (2008). Beyond reciprocity: Gratitude and relationships in everyday life. Emotion, 8(3), 425.

]]>
<![CDATA[Why Can't We Be Zeus?]]>https://jimhorton.substack.com/p/why-cant-we-be-zeushttps://jimhorton.substack.com/p/why-cant-we-be-zeusThu, 12 Jun 2025 01:11:42 GMT
Image
Image by Author, via MidJourney

The Annual Report of the New Hampshire Board of Agriculture

Periodically my research takes me in odd directions. These past few days have taken me back into the strange world of annual reports from old government departments. If you think that sounds boring, you would be absolutely right, but remember: Boring is human, because humans are boring, and for that reason, if you can keep an open mind, you can often find small, dazzling bits of humanity hidden inside boring things.

Every old dusty forgotten book in the archive? Each bears the imprint of its time and place and the people who wrote it. Sometimes that imprint is golden.

I’d like to share a snippet here from the 21st Annual Report of the New Hampshire Board of Agriculture—containing their official records spanning November 1st, 1891, to November 1st, 1892. This particular entry is from January 14th, 1892, when one Mr. J. Warren Smith, of Cambridge, Massachusetts, was invited to speak to the board about his job with the United States Weather Bureau.

I’m aware there’s a good chance I’ve one-shotted you with that last paragraph. I managed to work agriculture, bureaucracy, annual reports, public speaking, New Hampshire, and the weather all into a single paragraph; it’s entirely possible I have just produced the single most conceptually boring paragraph that ever existed.

If you’re still awake and reading this, though, here’s the small bit of life in the middle of all that: It’s from pages 66 and 67 of the report, from Mr. Smith’s speech, titled Advantages to Be Derived From the Weather Bureau.

During all the first year’s work of the weather bureau there was a certain mystery connected with it, which made the general public look upon the man who pretended to predict the weather, and even the observer, as being gifted with almost supernatural powers… [break] …during the early days of our own service the establishment of a station in a Western town was followed by unusually bad weather. After a time, the people began to think that the instruments set up by the observer had something to do with it; a meeting was held and a committee of the leading citizens of the place was appointed to wait upon the observer, and ask him to pack up and leave the town; and the consequence was that his life came near being taken, the feeling toward him and, particularly, toward his little instruments was so strong.

Even more piquant is the passage that follows:

Occasionally now, a man will come into the office and look carefully around, as if in search of something; and on being approached on the subject of his visit, will say that he is looking for the weather machine; and he will act as if he expected to find a large machine and a man with a crank grinding out the weather, or at least some instrument by which the weather is accurately foretold.

I think, without having to work too hard at it, you and I can both imagine the man being described in this old, old speech by Smith—a man with an older, weather-beaten, Jim Varney face; a thick moustache and a droopy hat; wearing a striped shirt and carrying himself with a bashful demeanor. I picture him looking cowed by the prospect of seeing the place where the weather was made, and a bit let down (and put off) to discover that it was just another office.

The quaintness of that image is belied by the earlier passage, where a group of (then modern) villagers ran a government employee out of town because they were afraid that his instruments had brought in a storm. I’ve been thinking about that for the past few days, courtesy of the research I’m doing for an upcoming article. Ostensibly, Moonshots exists so I can get my thoughts on these things out in the open to work with them. Here are a few I’ve had about the weather:

Image by Author, via MidJourney

1) How did the weather become boring?

There is a thought that I absorbed uncritically when I was young and I don’t even know where it came from. I absorbed it through implication and innuendo, and by the time I was about twelve I had internalized it as a simple truth of human interaction.

If I had to put the thought into words it would be this: The weather is a lame topic of conversation. You open with the weather if you’re a dullard, lacking either in cleverness or in pluck. You open with the weather if you don’t care enough about your conversation partner to start with something interesting to them. The boringness also makes weather a safe topic—you hide behind it when you don’t want to broach topics that are too real or vulnerable. Clouds are empty talk, filler talk, before getting to the real thing.

But why? I remember this one time back when I was 24 and living in South Dakota with my brother. I was fretting around in the kitchen and he yelled to me from outside our trailer: “James! Get out here! You gotta see this!”

I stepped outside and it turns out he wanted me to see the clouds. I looked skyward and the blood ran right the fuck out of my face into my legs where it would be more useful. The sky looked like it was boiling in slow motion, bubbles of clouds facing downwards. I had never seen clouds like that before but my stomach clenched and flattened and started braiding itself into a twist.

I told Cole to hop into the car and we drove to shelter at the local Wal-Mart. I found out later that the clouds were mammatus clouds, harbingers of a nasty storm, and possibly a tornado. We weren’t hit that day but it was widely agreed that it was a close call.

What’s boring about that?

Mammatus clouds in Austin, Texas. The sky looks like it’s boiling. Image from Wikimedia Commons.

Reading up on this speech given to the New Hampshire Board of Agriculture I’ve started to get a sense of an answer. I’m not sure if it’s right, but I’m going to take a shot at it.

The townspeople described by Smith in his speech—the ones who ran the weatherman out of town—thought a streak of bad weather was important enough to risk violence. The occasional country bumpkin who wound up in the bureau looking for the weather machine was never just a random occurrence. He was one of the daring ones—the ones brave enough to go see, while everyone else he knew preferred to just talk. But that means there were probably a lot of people talking about the weather bureau and its mysterious weather machine.

It occurs to me that the thing these talkers shared in common is that they were all likely to be profoundly affected by the weather. They were farmers and country folk—people whose economies and jobs could be shattered by a drought, who were profoundly grateful for a good growing season and horrified at the prospect of a bad one.

For these people, talking about the weather was the same as talking about their shared struggle. Their history. Their hopes for the future. What do you do in 1892 when the sky east of your home boils like an angry kettle and there’s no Wal-Mart to run to for safety? I have no clue, but whatever you do, you’ve got a hell of a story to tell afterwards.

My grandparents lived and died on the Minnesota prairie. Grandpa was kind but he was as boring as a left foot and never thought about much in his later years except baseball and beer. But he had installed a makeshift thermometer and barometer in the front yard of his farmhouse and kept a dutiful record of both for years, and would occasionally chat about it.

Here in the city that’s something a hobbyist would do. But grandpa grew crops on his land. Baseball was his hobby. Weather was his business, and that record book was a habit, and probably an important one.

So, to my point then—what if the gentle contempt we have for talking about the weather is the descendant of a thinly veiled contempt city people had for rural farmers? If you live in a city you’re less affected by all but the most serious weather. People with a comfortable home in the city wouldn’t fret as much about good or bad days because their livelihoods would have been unmoored from the land.

It’s not hard to imagine a time in, say, the 1950s, when a well-to-do office worker would consider it bizarre to see two farmers commiserating about a long run of dry days, or speculating about the meaning of a barometer reading. Weather, he might have thought, is for the dull ones.

I have no ability to prove this. But I’ve taken a few courses on geography, and the weather is anything but dull. Give it a shot: talk about the motions of God, and what they might mean for your future if you didn’t have a faucet and a grocery store.

I’m not sure who came up with that idea that the weather is boring but it’s a reminder of an old lesson I learned, which is that contempt is the mark of fools.


Image by Author, via MidJourney

2) The storm gods always rule

We’re going to wander a bit afield on this second point. It occurred to me that, given the importance of the weather to the daily life of most people, it might show up in their theology. If you imagine being a farmer at the utter mercy of a storm in the Mediterranean, and you believed there was a god up there causing that storm, it would make sense that you might consider that god to be a big deal, even compared to other gods.

Some of this checks out right away—Zeus and Jupiter are storm gods, and each is the head of his respective pantheon. But they’re also carbon copies of each other—to get a sense of whether the pattern is consistent you have to go further afield.

Odin is the chief god of the Norse pantheon, but if you look at the gods at the peak of that pantheon, Thor, the god of storms, is right up there with him. As it turns out, if you look at the pantheons of other northern peoples, you see a similar pattern. In the Celtic pantheon the chief god, The Dagda, controls storms and the weather (though notably he shares this ability with other gods, like Lugh). In the Finnish pantheon the chief god is Ukko—and you guessed it, Ukko is a storm god.

But these are European gods. What if we look further afield?

Well… in the Japanese pantheon the chief goddess is Amaterasu, goddess of the sun. But the god of storms? It’s her younger brother Susanoo—the god of the sea, storms, and fields. He’s one of the gods at the center of the Japanese pantheon, alongside Amaterasu.

In the Egyptian pantheon? The chief god is Ra, the sun god. But the god who controls rainstorms, the weather, and agriculture is Horus—and he is personified in the Pharaoh, who rules over Egypt.

In Mesoamerica? The Aztec god in control of the weather is Tláloc, god of rain, earthquakes, and earthly fertility. I wasn’t able to find out as much about his relative position in the Aztec pantheon, but in Tenochtitlan, the Aztec capital city, the Templo Mayor was devoted not just to one god, but two. The first god is Huītzilōpōchtli, god of the sun, war, and sacrifice, the patron deity of the Aztecs. The other god, who shared equal space in the temple? Tláloc.

Even in pantheons that break from this pattern the storm god often plays a surprisingly important role. In the Indian pantheon the three primary gods are Brahma, Vishnu, and Shiva—representing, respectively, the principles of creation, preservation, and destruction. These principles, however, are remarkably abstract—embodiments of transcendent law rather than aspects of nature. If you try to work your way back in history to earlier, nature-based deities? The Rig Veda is the oldest of the Hindu Vedas, and in it, the primary god is Indra, king of the gods, and the god of storms.

For that matter, what about monotheistic religions? As best as I can tell, there’s no physical description of the God of the Bible as specifically a storm deity—and yet the most common mental image people seem to have of him in popular culture is an old man riding a cloud, wielding lightning bolts. What gives?

Make what you will of that. There are a lot of caveats to what I’ve said here. It’s not a universal survey and I would need to be more thorough before I could make any definitive claims about storm gods. Also, I highly suspect I’m not the first person to have thought about this, or even anywhere close—certainly, if it’s true, there must be historians and scholars of ancient religions who noticed that storm gods often wind up at the top of pantheons. The heavens are high. We are small. And whatever god makes the clouds boil is probably important.

Image by Author, via MidJourney

3) No more gods for the weather

A final thought. There’s a second transition regarding the weather that I’m interested in, as well. It’s more subtle. I’m going to try to point it out to you in a way that will hopefully drive home just how weird it is.

Think about the opening of this article for a moment and be candid with yourself. The example at the beginning, about the man stumbling into the weather bureau, looking for the weather machine—it was kind of cringe, right? Clearly the man was very naive.

Right?

Okay. So, if it’s that obvious, certainly you can answer this question: Why is a weather machine naive?

You might answer that the reason is that we’re smarter now, but hold up for a second there. Why would being smarter make a weather machine ridiculous? Shouldn’t being smarter make a weather machine more likely? With the right technology, in theory, shouldn’t it be possible to control the weather? And therefore, as we develop better technology, shouldn’t we, in theory, be moving closer to finding the technology that would allow us to do so?

In spite of this, if I suggested to you that technology might one day allow us to, say, summon a rainstorm over Death Valley with enough regularity to cause it to bloom, you probably have a deeply intuitive sense that such a technology is very unlikely, not just now, but probably ever. It sounds like something out of fiction, right?

Okay, that’s what I’m getting at. Why do you have that intuition? Where did it come from? Who told it to you? Because I share the same intuition, and nobody ever told it to me!

Here’s the last thing I’ve been mulling over in this whole thought experiment. We used to believe that there were grand beings—gods—who could control the weather with remarkable specificity. They made storms wherever they wanted to, just because they wanted storms. Zeus could, theoretically, cause Death Valley to receive enough rain to eventually bloom, just as he could cause a fertile Greek landscape to go dry and perish, if it suited him. In the face of Zeus, all humans could do was pray, and speculate about what he was going to do next, with varying degrees of accuracy.

Today we have a similar posture towards the weather except there are no gods behind it. We view the weather as something that is slightly predictable (say, two weeks in advance) but largely uncontrollable except in the crudest sense—we can, for example, seed clouds to make them more likely to rain, but we can’t summon clouds where we want.

There was a long transition between these two orientations toward the world, however. And what I find most interesting is that there was this sweet spot between the two of them—from about 1850 to maybe 1980—where we believed that it was actually within our grasp to control the weather in a far more ambitious way than we do now. What happened to that? What happened to the belief that we might someday be able to play at being Zeus, ourselves?

Why does your intuition tell you that it can’t be done?

I’ve got answers to this already but they’ll have to wait for a different time. For now, I’d love to hear yours—slap that comment button below. Let’s talk.


Image by Author, via MidJourney
]]>
<![CDATA[What is Charisma?]]>https://jimhorton.substack.com/p/what-is-charismahttps://jimhorton.substack.com/p/what-is-charismaSun, 08 Jun 2025 07:27:53 GMTNote: From the archives, with love. This was posted on Medium in early 2023 and touches on one of my research specialties—the relationship between charisma and emotion. Reposting it here for you for a couple reasons—one is that it’s one of my favorite articles. A second is that it’s congruent with some other things I’m thinking through.

Photo of Marilyn Monroe taken for Photoplay magazine, 1959. In the public domain (copyright expired). Via Wikimedia Commons.

On an otherwise normal afternoon in 1955 Robert Stein, a young editor at Redbook magazine, sat at a table in the Gladstone Hotel in Manhattan, slamming rounds of scotch in rapid succession and fighting back a rising swell of panic. He had bitten off far more than he could chew. Newly promoted and anxious to make a name for himself, he had sold his editor on a sensational story — an intimate, behind-the-scenes portrait of Marilyn Monroe.

Monroe, by that point, had retreated from the public eye. She was pursuing more serious acting roles, and the 1950s press had reacted with the predictable contempt they heaped on any woman who tried to transcend the role they had assigned her.

In response, Monroe had tried to create space for herself, but her seclusion made her all the more tantalizing. Magazines had taken to publishing unfounded gossip. Her name was so powerful that the hint of her was enough to send ratings soaring.

Robert Stein knew all this and gambled anyway, praying that Monroe would be open to something different. An actual interview? A brief glimpse into the life of the icon, behind the stage? If Stein could pull it off, it would be gold. If not… well, Stein spent several days sweating profusely. His job was on the line. The friend he was meeting at the Gladstone would either make his career or destroy it with a single phone call to Marilyn Monroe.

Stein’s gamble paid off. Monroe was happy to let them profile her. It later came out that she agreed, in part, because Redbook had treated her kindly when other magazines had ridiculed her. And thus Stein’s gambit culminated in one of those brief, poignant moments that are iconic in the lives of Hollywood royalty; Stein and his best friend, a brilliant but tormented photographer named Ed Feingersh, accompanied Marilyn Monroe for a trip around New York — just the three of them, as she let the two in on her day-to-day life.

That day is significant for many reasons. For 1950’s American readers, that day was a rare look at the real Marilyn Monroe. For Stein, that day is a painful memory; he was the only one of the trio who would live to see his 37th birthday. For modern Americans, that day resulted in several of the most famous photographs ever taken of Marilyn Monroe.

But there is one other reason that remains largely unnoticed to those outside of the scientific community. Over fifty years later, as he wrote about that day, Stein unwittingly uncovered a mystery.¹

The mystery predates Stein and Monroe. Social scientists have been trying to solve it for close to a hundred years. It is a mystery that I happen to care a lot about; I devoted five years of my professional life as a research psychologist to trying to solve it. People feel, intuitively, that it must have an answer, but when it comes time to put it to words, the words elude them.

And Marilyn Monroe, in a brief, scintillating moment outside of a New York subway in 1955, embodied that mystery in its entirety.

Strike a pose

Here’s how it happened. For most of that day, Stein and Feingersh spent their time not in the company of Marilyn Monroe, but in the company of her mild-mannered alter-ego — the driven, insightful, melancholy Norma Jeane Baker.

The name Marilyn Monroe was created by Baker and studio executive Ben Lyon in 1946, as she signed her first contract with 20th Century Fox. It was a glamorous stage name for a future star, though neither Baker nor Lyon knew at the time just how famous Marilyn Monroe would ultimately become.

Looking back, it is also evident that at some point Marilyn Monroe had become something more than a simple stage name. Fifty years later, as Stein recalled the story, he was struck by the fact that Monroe seemed to be two people — one of them was the star that everyone knew and adored. The second was Norma Jeane Baker; private, shy, and often overwhelmed.

Baker did not feel as deeply connected to Marilyn Monroe as the rest of the world did. Throughout their trip, Baker alternated between being herself and being Monroe. One moment, in a bar alone, she was a solitary woman ordering a vodka, her beauty drawing the appreciative eye of the bartender but not his recognition.

The next moment, walking into a clothing store, she was the famous starlet being doted on by attendants and salespeople clamoring for her attention. If not two separate personalities, at the very least she had two separate personas.

As the three of them rode the subway, Baker sat in a corner of the train, largely inconspicuous to the subway crowd. Most of the New Yorkers on the subway had little time for strangers, engrossed as they were in the business of the day.

After disembarking and snapping a few photos in the subway station, Stein, Feingersh, and Baker emerged onto the New York City streets. Baker stopped for a moment, struck by a sudden inspiration. She turned playfully to the two men accompanying her.

“Hey, do you want to see her?” she asked. It was clear from her words that, like Stein, Baker also considered herself to be separate from her stage persona, as if Marilyn Monroe were a mantle that she could assume at will. Stein and Feingersh were intrigued and waited for her to continue. She smiled — there was a hint of mischief in her eyes.

Then she proceeded to flip New York on its head; the chaos she caused in the next sixty seconds is the stuff that Hollywood legends are made of.

In Stein’s words, all she did was take off her coat, fluff her hair, and strike a pose. It was an innocuous gesture. It was nothing at all. But at the same time, it was everything. A brief shift in her posture and tone, a playful shine in her eyes, and she was no longer Norma — she was Marilyn.

The effect ripped through the street like a bomb. Within minutes, Monroe, Feingersh, and Stein had to fight their way out of a crowd of New Yorkers who were ready to scrap the day’s business for the chance to be in the presence of a star.

If you made it this far, and you’re loving the story, why not subscribe? Just hit the button below!

Image by geralt via Pixabay [Available for use under Pixabay license]

The mystery of charisma

What is charisma? This seems like a simple question on the surface, but answering it is surprisingly difficult. It has stumped academics for close to a hundred years, ever since the term was first popularized by the sociologist Max Weber in his 1920 Theory of Social and Economic Organization.

Weber used the term to describe leaders who commanded followers by the force of their legendary presence. Today, we apply the term far more broadly than Weber ever intended. We use it to describe stars, grifters, presidents, and philanthropists. In all cases, however, it refers to something similar to Weber’s original meaning. Some people have a gravity about them — a quiet force, a little extra something — that draws others into their orbit.

That ‘little extra something’ is powerful. For that reason, there is an active community of organizational scientists who have spent decades trying to unpack the concept of charisma so that they can understand its secrets. It is, in fact, one of the most popular topics in the scientific study of leadership.

If this seems a bit odd to you, consider: throughout history we have examples of leaders who have commanded extraordinary devotion from their followers, compelling them to trust and obey, to go above and beyond, through the sheer power of their character. This is distinctly different from normal forms of leadership, which rely on things like money or fear to motivate people.

From a leadership perspective, the ability to command respect and loyalty must have a powerful appeal. It exists beyond the domain of what financial incentives can reach. If you are an employer, you can buy an employee’s time and compliance with a paycheck. Getting them to care, or inspiring them to grow beyond the parameters of their contract, is an entirely different prospect.

Since the 1970s, then, there has been an active market for science that seeks to take the mysteries of charisma and chart its mechanics in a way that can be easily understood and, hopefully, taught.

Unfortunately, the science is a mess. Scholars have only begun the process of disentangling it properly over the last ten years. At the core of this mess is a simple conundrum. It’s a trap that both scientists and laypeople run headlong into when they are trying to define charisma:

It is nearly impossible to precisely describe what it is that people do that makes them charismatic.

Most of us are easily able to recognize charisma, and that is enough for us to be able to use it in a meaningful way in conversation. For example, there is no question that Marilyn Monroe was charismatic, or that her brief stunt in the middle of New York City — outshining the sky on an overcast day in 1955 — was a prime example of her charisma.

The problem comes when a person is asked to move beyond recognizing charisma into giving a formal definition of it. What is it that Norma Jeane Baker did that day, exactly, that transformed her into Marilyn Monroe? There is no question that she did something. Was it the pose? The fluffed hair? The sudden confidence?

It could have been any or all of those things, of course. But it is equally clear that none of those actions by themselves is sufficient to contain the charisma of Marilyn Monroe. If someone else had done them on that specific day, at that specific time, in that specific place, the very best that they could have hoped for was puzzled glances from businessmen on their way to work. If anyone flocked to them at all it would have been pigeons — and then, only if they were posing with a loaf of bread.

The best scientific attempt at a formal definition of charisma is probably that of Dr. John Antonakis, an organizational researcher at the University of Lausanne, Switzerland. In a paper published with several colleagues in 2016, he defined charismatic leadership as a leadership style where the leader tries to attract followers by appealing to their emotions and values.²

It is a damn good definition, but it has a caveat: Antonakis had to acknowledge that, by his definition, a “charismatic” leader can fail to attract followers. That doesn’t sound charismatic at all, does it? Is it really charisma if it doesn’t work? What separates the true charismatic from the street corner prophet? Both appeal to emotions and values, after all; why do people devote their lives to one, but refuse to devote even a moment to the other?

The core of charisma

An easy way to start untangling this problem is to study how we use the word charisma in language, to get a sense of its underlying conceptual shape. For example, try comparing the following two sentences:

  1. Norma tried to be irritated but it didn’t work.

  2. Norma tried to be irritating but it didn’t work.

The two sentences are identical except for a single suffix, but that suffix gives each of them a unique shape, in terms of how we interpret them. In the first sentence, Norma tried to do something, but she failed to perform her intended action. In the second sentence, Norma also tried to do something, but whatever actions she may have taken, she failed to have her intended effect.

This dual pattern shows up throughout the English language. As it turns out, there is one class of descriptive words that we use to describe people’s characteristic patterns of action and behavior. Energetic, poised, brave, bothered — all of these are connected in some way to a person’s actions.

There is also a second class of descriptive words that we use to describe the effects that a person has on the people around them. Charming, irritating, inspiring, terrifying — none of these words has meaning separate from the reactions of others. Would a person be terrifying if nobody around them reacted with fear? Or charming if nobody was charmed? The life of these words dwells outside of the person they are used to describe.

So, which class of words does charismatic fall into?

It seems evident to me that it falls into the second class. A charismatic person has a certain effect on the people around them. The exact nature of the effect is also something of a mystery, but it is a much easier mystery to crack than the question of what behavior counts as charismatic.

For my money, I would say the effect is emotion. Not just any emotion, mind you; there is no charisma involved in a jump-scare on Halloween, or in provoking somebody to anger by intruding on their boundaries. Some emotions are so simple to elicit that anyone can do it, charismatic or not.

Other emotions, however, can be evoked by appealing to the higher functions of our brain. These emotions are intimately connected to the neurological processes responsible for making sense of and reacting to the social world around us. It is possible for other people to make us feel these emotions through the way they communicate with their body, and their words, and all of the subtle channels of meaning that exist in the quiet spaces between action and speech.

Hope, inspiration, admiration, and courage. Compassion and camaraderie. Complex forms of fear and anger. Shame, and pride, and serenity. And, above all — raw, unbridled excitement. People can be provoked into feeling all of these emotions by the words and actions of others. That’s the domain of charisma. A charismatic person can work their way rapidly past all of the defenses that we deploy to keep others from making us feel. Once they’re past those defenses, they take us on a wild ride.

Still, it would be nice to know how they do it.

The blonde bombshell

Image by allexsalon via Pixabay [available under Pixabay license]

There is a way that you and I could think about charisma that would allow us to describe it as a behavior. But it requires an imaginative leap from the world of simple actions in the physical world to the more nebulous and invisible realm of ideas, and needs, and beliefs, and dreams.

We could say that a person is charismatic when they have the ability to match a need or desire that exists in the mind of another person. If that’s a bit confusing, consider:

Needs, desires, and dreams are immense sources of latent energy. Most people desire to have meaning in their life. They desire to feel brave and attractive. They desire power, or success, or love and approval.

They want other things too. They crave clarity, and stability, and a meaningful vision for the future. They want a clear path forward. They want to stop feeling miserable. They crave the chance to feel fully alive, coveting those moments that fill their nerve endings with lightning and put a bit of thunder in their hearts. Most people don’t get to feel those things, with the exception of a few treasured moments that linger as precious memories.

How much enthusiasm would you feel if, one day, you bumped into someone who represented a path through which you could fulfill your deep desires for meaning, or belonging, or achievement? What wave of energy would be unleashed from behind those sealed doors deep in your soul, if only someone came along with a key that matched that lock?

Maybe a charismatic person is someone who matches the unmet needs of the people around them, either by chance or by design. Norma Jeane Baker had built up a persona — Marilyn Monroe — that meant something to the citizens of New York in 1955. Monroe meant glamor, excitement, fame, and beauty.

For most of her day out with her reporter friends, Stein and Feingersh, Baker didn’t match this persona. She was a beautiful but shy woman who didn’t represent anything at all in the minds of the people around her.

But for a brief, playful moment on the street outside of the subway, Norma Jeane Baker took off her coat, fluffed her hair, and struck a pose. In that moment, she matched an unspoken image that existed in the minds of the crowd around her.

She matched the Marilyn Monroe that they had built in their imaginations, and by matching, she presented herself as an opportunity to fulfill their dreams and desires, if only for a moment. The crowd that day was a tightly coiled mass of latent dreams, needs, and wants, just like us. They dreamed of being in the presence of someone famous. They needed a little bit of extra excitement to their day. They wanted to have an amazing story to bring home to the family that evening.

And suddenly she was there on the street outside the subway. She was the key that opened all of those locks. She was the brief atomic shock that caused all of those impossibly strong internal bonds to cascade into unrestrained fission, releasing all of that shackled energy at once.

There was a reason they called Marilyn Monroe a bombshell.

The path to charisma

What does that mean for you and me, however? Well, for one, it might mean that if you want to be charismatic, you ought to be asking yourself a different set of questions.

Most writers who try to peddle the secret to charisma offer their readers a framework that focuses on how to act. Be warm and confident. Be extraverted. Be a good listener. Ramp up your energy as much as you can.

These things can certainly have a charismatic effect, but they will never serve as a complete answer to the question of charisma because the peddlers are trying to answer a fundamentally flawed question: what can I do to be more charismatic?

As if charisma were generated by something other than the act of connection.

There’s a better approach. You could turn your attention to the needs and desires of the people around you, and you could start asking much more powerful questions about how to match the unspoken needs of other people.

You could ask yourself how to elevate the people that you meet, so that their day after meeting you is just a little bit better and more exciting than it was before. You could ask yourself how to inspire your friends and acquaintances to be the bravest version of themselves. You could ask yourself how to help other people feel powerful, or attractive, or interesting. You could ask yourself, what can I do to inspire the people around me to bloom?

Those questions are difficult, too. But they are also questions that you can answer — and if you are able to answer those questions, the charisma will probably follow naturally.

If there’s anything in this post you found useful, odds are good that the people you know might find it useful, too. Slap that share button!


References

¹ Stein, R. (2005). “Do You Want To See Her?” American Heritage, 56(6). Read the article at: https://www.americanheritage.com/do-you-want-see-her

² Antonakis, J., Bastardoz, N., Jacquart, P., & Shamir, B. (2016). Charisma: An ill-defined and ill-measured gift. Annual Review of Organizational Psychology and Organizational Behavior, 3, 293–319.

]]>
<![CDATA[You Don't Need a Master Plan]]>https://jimhorton.substack.com/p/you-dont-need-a-master-planhttps://jimhorton.substack.com/p/you-dont-need-a-master-planSun, 01 Jun 2025 08:27:10 GMT
Image by Author, via MidJourney

I knew this one asshole, back when I was starting college as an undergraduate in the midwest. I’ll call him Jeff.

First, a full disclosure: Jeff isn’t a person — Jeff is a pastiche of five or six people I knew up to that point who shared a set of common characteristics. So, instead of thinking of Jeff as an asshole, you can think of him as five or six assholes stitched together.

Actually, that sounds about right; six assholes stitched together. That was Jeff.

One of Jeff’s defining features was that he had a master plan. Back when I knew him, I envied him, because I am very much not the type of person who has a master plan. My brain is volatile; I’m the type of person who can form six impossible plans before breakfast, and follow through with none of them.

On those occasions when I have been able to play the long game, I credit it to placing myself inside a structure where progression was inevitable. That is why I went back to college. I arrived late, starting at age twenty-five because, after a series of failed stops and starts, I knew that I needed to walk a well-traveled path in order to move forward with my life. It was a good call.

Conversely, when I am left to my own devices I tend to react to what is in my environment and often get tangled up in it like a thorn bush. I am susceptible to life’s brambles; I can’t charge confidently through them the way that Jeff could. For that reason, I envied Jeff — all six assholes of him.

That’s an awkward line, but you get what I mean.


Here’s the thing: You probably sussed this out already, but I don’t think Jeff was a good person. On the outside he seemed unobjectionable — he got along well enough with others and was popular. He didn’t have problems finding a romantic relationship. He was productive.

But globally he appeared to make things worse for the people around him. He was the kind of guy who took a group of peers and carved them up into those who suited him and those who didn’t. Those who suited him got to be friends, while the rest became fodder.

If you were one of the outsiders, your habits, history, and personality were fair game to be picked apart as a way of promoting the solidarity of his group of acceptables. You were either holding a knife with him or you were one of the turkeys. He was divisive.

Even those who earned his approval seem to have been worse for it. He was the kind of person who would (and did) playfully smack a friend or partner upside the head — ostensibly as a joke, but also as a way of saying I can do this to you.

I am not sure what, exactly, he was to his friends. We have to make allowance for my dislike; it may be that I am reading him wrong. But he seems to have given them the type of friendship that was generic enough to be comfortable but not meaningful — a friendship designed more for one-way extraction than mutual benefit.

He knew what role he expected them to fill, and he gave them just enough to fasten them in place.

The worst-case scenario was to become one of those friends who Jeff defined himself against. He liked having a wreck close by, so he would pick someone vulnerable and inculcate in them a deep belief in their own brokenness so that he could be the one who had it all together.

I wonder often what the group would have been like with Jeff subtracted from them. I am reminded of an episode from the cult classic sitcom, Community. The episode was broken into multiple hypothetical timelines; in each parallel timeline a different character left the group briefly to go get a pizza, and viewers got to see how the remaining characters reacted in their absence.

The main character in Community was also named Jeff. When he left the group, everyone was happier. That, in my mind, sums up the Jeff I knew — everything was superficially fine when he was there. But he was a net extractor of happiness; I can’t shake the feeling that everything would have been better, for everyone, if he wasn’t.

Image by author, via MidJourney

There’s a point to this. Jeff was a bit narcissistic, and I believe that most of the people he knew were slightly worse off for his presence. Jeff also had a great capacity for playing the long game, and I envied him for his focus and his drive.

In retrospect, however, I am not sure that those two parts of his character were separate. In fact, as time goes on I find myself suspecting that the type of rigid, detailed long-term plans that I used to envy so much are, themselves, narcissistic.

I don’t think that everybody who has a concrete vision for the next five years of their life, broken down into twenty strategic steps, is a narcissist. Most are not. But I suspect that forcing such planful, top-down control on the joyous tangle of life is a strategy fraught with narcissistic potential.

Most of the Jeffs I have known lacked outward focus. They rarely saw the moment for what it was, or appreciated the people around them. Everything around them had to fit into a pattern they decided in advance.

In other words, the Jeffs lacked serendipity.

Merriam-Webster’s online dictionary defines serendipity as “the faculty… of finding valuable or agreeable things not sought for.” It is the capacity for planless joy, and it is a fundamental element of relationships; you have to have a bit of stop and smell the roses in you to turn your attention to the vibrant, beautiful details of the lives happening adjacent to your own.

Mental disorders, including personality disorders such as narcissism, sever people from serendipity. This severance takes the form of illusory beliefs — phantoms projected onto the world around us. Over time the phantoms become more real, given solidity by the force of our unquestioned beliefs. As they grow more solid, they filter our experience, assimilating the things that nourish them and discarding the things that don’t.

Long-term planning is a powerful phantom. Held with too much intensity, plans start to take on the qualities of a mental disorder. Fixate too much on a plan and it, too, starts to carve the world into digestible and discardable — the things that fit in its belly and the things that do not.

Religious traditions often warn against the folly of trying to exert too much control over the world (e.g. “If you want to make God laugh, tell him your plans”). But in classic literature, extreme cases of this are portrayed as madness; I think, for example, of the obsession of Captain Ahab, in Moby Dick, chasing after the white whale to his death and the death of his crew.

Perhaps Ahab is a principle — a reminder of what we become when we focus too much on the chase.

What an odd way to think of planning, no? Generally I think of plans and goals as a good thing, rather than imagining them as a way of filtering the serendipity out of life. I don’t think plans are bad, exactly. But is it appropriate for us to fetishize the most extreme versions of them? The personal visions that stretch years into the future?

What I do know is that most of the planless people that I have known — all of those non-Jeffs, myself included — are doing well for themselves. Some are more successful than Jeff. Most make the people around them happier, not sadder. And all they needed to do, to get ahead, was to look at the space around them and take the next reasonable step.

Maybe the true narcissism is believing that we need a laser-sharp focus on the future in order to succeed. Most of us just need a good direction to sail, and good friends to travel with.

Think plans are overblown? Spread the word. Share this post!


Image by Author, via MidJourney

]]>
<![CDATA[You Can (Re)Discover]]>https://jimhorton.substack.com/p/you-can-rediscoverhttps://jimhorton.substack.com/p/you-can-rediscoverMon, 26 May 2025 12:31:49 GMT
Image by Author, via MidJourney

The time for journeys would come and my soul
called me eagerly out, sent me over
the horizon, seeking foreigners’ homes.
But there isn’t a man on earth so proud,
so born to greatness, so bold with his youth,
grown so grave, or so graced by God,
that he feels no fear as the sails unfurl.

The Seafarer, Codex Exoniensis, 1072 A.D.

I’ve found that, if you brace yourself for it correctly, the ocean almost always comes as a shock to the senses.

Of course, you can work yourself into that numb, screen-bound space where the waves, and the musky scent of brine and kelp, and the cry of seabirds, all fade into a uniform blob of qualia, easily ignored in favor of your smartphone. I get it. I do.

But if you get yourself in the right headspace there is a moment when you crest the final hill where the things that have been crowding your brain — every tree, berm, and bush; every stray thought, and idle distraction — are suddenly ripped from you, replaced with a simple, endless vista of unmarred blue, as if God ripped the tablecloth off the whole universe and took every dish with it. The effect of this is electric. Your brain crackles. It puts a zap in your step.

For years I’ve contemplated this strange power the ocean has over me. It has led me to poems like The Seafarer, a thousand years old, and I’m struck by how different the poets’ experience of the sea was — as if there is an entire second half to it, full of adventure, and fear, and reverence, that we have lost. And in trade we now have Disney Princess cruises and beach towels.

That is what I wanted to address today — how did we lose that other half of the ocean? How do we get it back?

Can we?

Image by Author, via MidJourney

In The Seafarer there is a clear sense of the divine attached to the ocean. The writer spoke with reverence and trepidation about its beauties and its dangers, and often shifted to discourses about God, heaven, and the moral lessons they learned at sea.

This sense of danger and beauty has ebbed since the Enlightenment, and I’ve been trying to piece together the story. One thing seems clear: It’s not that we lost interest in exploration. Explorers are central figures in the western imagination. Indiana Jones, Captain Picard, Jack Sparrow, Dora and Boots. We learn to treasure adventure early.

Instead, the best explanation I can come up with is that between 1400 A.D. and 1900 A.D. the mysteries of the ocean — and really of the whole world: the forests, deserts, and jungles, and distant mountain peaks — began to shrink under the relentless cataloguing of endless expeditions.

By the end of the 1700s we were on the tail end of western civilization’s adventure phase. James Cook and his crew discovered Hawaii and Alaska in 1778. Lewis and Clark set out across the North American frontier in 1804. Amundsen and his crew reached the South Pole in 1911, having braved a landscape so hellish it could have been ripped from Dante’s nightmares.

Popular culture was replete with stories about the great out there that inflamed the imagination of generations of young men and women. And then, suddenly? It drew to a close, carrying with it a tangible sense that there was no mystery left. Something was lost: The world had shrunk. We’d mapped its edges.

In their thirst to keep the sense of adventure going, people began substituting achievement for discovery, celebrating feats like circumnavigating the globe, since the last of the world’s hidden corners were vanishing.

And then the last corners did vanish, and all that was left for explorers was society, space, and science.

Coveting adventure, we sought mysteries and magic elsewhere. I think, for example, of C.S. Lewis’ Chronicles of Narnia, or J.M. Barrie’s Peter Pan. At some point in the mid-20th century we ran out of actual frontiers to adventure through, so we started tucking away new ones behind stars, or in wardrobes, replacing boots with books.

I grew up in the 1990s, in a world that felt denuded of surprise: Earth had been explored, and if you wanted that old-fashioned sense of adventure you got it by discovering new things, not by going new places. There was no more unknown territory out there; instead, adventure lay in the lens of the microscope and telescope, the sleepy whorl of the double helix, the elegant spiral of galaxies colliding.

Or, that’s what they taught me.

Image by Author, via MidJourney

At this point you may have noticed that my narrative of the Age of Exploration is flawed, badly. In fact, it might as well be a caricature.

Which, in a way, it is. The flaw is that the narrative assumes a single viewpoint — the colonial West. For example: Hernan Cortes discovered Mexico in 1519, but of course there was an entire empire already there (which he slaughtered). Yet the Age of Exploration narrative treats Cortes as the discoverer and the Aztecs as the discovered.

There’s not a race in the world where you claim first place by shanking someone who arrived at the finish before you, but that sums up a surprisingly large portion of the Age of Exploration; to the sailors, the glory, to the natives, the sword. It’s revisionism, and up until the past twenty-five years or so it was the dominant narrative taught in schools.

It seems plausible to me that the only true discoveries during that time were the expeditions into Antarctica. Most other places had already been found. And yet, to the West? The whole age was the pinnacle of adventure.

Anyhow, let’s pause there. I think that contrast offers insight into the nature of adventure. So, let’s start with a simple idea: A sense of adventure comes from exploring the unknown, right?

But… whose unknown?

It’s not the personal unknown. I live in Alaska. I would not feel a particular sense of adventure if I were to travel to Albuquerque, even though I’ve never been there. There would be some adventure, of course, but nothing like the old elation and reverence that sailors once had for the sea.

On reflection, a better explanation is that the feeling of adventure comes from exploring the cultural unknown — the places that lie at the edge of our cultural map. The ragged edges of our shared schemata.

One implication of this is, of course, that we need to rethink the story of the Age of Exploration. During the Age of Exploration, European culture was having a grand adventure, certainly. But it wasn’t an adventure for everybody. Rather, it came at the expense of great tragedy to other cultures.

A second implication is that every culture had its own ages of exploration — just in different times, when their own cultural milieu expanded into the world around them. I imagine, for example, the Polynesian explorers who cast themselves onto the ocean, trusting in their canoes, paddling great distances until the flight of seabirds alerted them to the presence of an unfound island nearby. Certainly their hearts thrilled, too.

Adventure occurs at the edges. But importantly, for our discussion here, those edges are defined by the culture we belong to. So let’s discuss what that means.

Image by Author, via MidJourney

Back to the big question. How do we get the adventure back?

In 2003 two psychologists, Dacher Keltner and Jonathan Haidt, combed the literature of cultures across the globe to see if they could identify trends in the experience of an understudied emotion — Awe. They found that awe experiences shared two features:

The first feature was vastness. Feeling awe required an encounter with something mind-expanding; something that broke a person’s schemata for how the world works.

The second was a need to accommodate. We are surrounded by vastness — the volume of sensory detail at your computer desk would shatter your brain if you tried to take it all in. We rarely engage with this vastness because our minds filter it out. Mindfulness meditation involves dropping those filters just a little and engaging with that vastness. The result? A lot of meditators report small but reliable feelings of awe.

These aspects of awe are culture-bound. Our culture directs our attention to its own edges, saying “Look! Out there is a vast, unexplored world!” Our culture also tells us what parts of the world we can accommodate; many unexplored parts of the world are dismissed because we are taught that they are beyond our ability to fathom.

But most importantly for our personal sense of adventure, culture also tells us what is not at its edges, and what is not worth our time. Culture is the voice that says “we don’t need to explore that.” It’s the voice that speaks when you look out at the ocean and says “Oh, we already know what’s out there. You can look it up online.”

Image by Author, via MidJourney

It is an elegant explanation. Wonder is not a property of the big wide world outside of us. It is the big wide world seen in a specific way — blinders removed, so that we can witness and engage with its magnitude and its richness of detail, making them part of ourselves.

That explains why two people can see the same thing, and one can feel wonder while the other feels nothing. The wonderstruck soul sees a vast world of detail to be wrestled with — an opportunity to learn and grow. The wonderless soul sees a boring gestalt with no properties worth noting.

It also explains why we can look at the sea today, absent the reverence that animated the work of writers like Borges or Swift. There is nothing left of its breadth to grapple with. All that’s left is its depths. We’ve assimilated it.

Or have we? Because the mechanisms that explain why we do not feel awe today also suggest how we might reclaim it.

All it requires is a creative act of disavowal. Shed your culture. Find a part of the world that is grand and mysterious to you and wrestle with it yourself. Set aside your phone. Turn off the internet. Treat it the way you would treat a movie you don’t want anybody to spoil for you: jealously guard your naivete. Wonder happens when your naivete shatters against the thing itself, not against reports of the thing.

Please do not be an idiot about it. You will want to know if your culture says that there are bears nearby. That seems important. But beyond that, approach your sliver of the world as if it were new. Adventure is a product of primary experience. Go get yours.

Go as a biologist. Catalogue the strange flora, fauna, and fungi. Make up names for them. Draw them, too.

Go as a mapmaker: Count the trees. Plot their locations. Look for patterns. Where do the healthy ones grow? What does that say about the land?

Go as a writer, with a pen and paper and a sense of urgency — a need to tell the world about this new place, so that they can know it as you do.

But don’t go with the world whispering into your ear that we’ve seen it all before. That voice killed the wonder. Fight it.

My soul roams with the sea, the whales’
home, wandering to the widest corners
of the world, returning ravenous with desire,
flying solitary, screaming, exciting me
to the open ocean, breaking oaths
on the curve of a wave.
The Seafarer, Codex Exoniensis, 1072 A.D.

And against that endless blue, shall I dare to stop, and to dream? — Image by Author, via MidJourney
]]>
<![CDATA[Two Months Without a Cellphone (Pt. 2)]]>https://jimhorton.substack.com/p/midnight-streets-are-darker-withouthttps://jimhorton.substack.com/p/midnight-streets-are-darker-withoutWed, 21 May 2025 09:08:55 GMT

Note: I’d like to extend a special thanks to WBE for suggesting this article. The truth is that it hadn’t even occurred to me that my experience of losing a cell phone was worth writing about until WBE suggested that it was unique, and the kind of thing that he (and others) would be interested in learning about. His sense was absolutely correct—the first installment of this series performed much better than many of the other articles I’ve posted recently. Something about it resonates with people. I think that many of us have a deep ambivalence and anxiety about losing our ties to the digital world. This second article is about the darker side of that.

Image by Author, via MidJourney

Introduction

In my last article I talked about going without a smartphone. To recap, I think the benefits are overstated, and they only show up for people willing to commit to guarding the space created when the smartphone is gone.

In spite of that, I favor reducing smartphone use. I think removing phones from schools will improve education. I think that “smartphone detox” fails mainly because we don’t take it far enough; if you want your brain back you need a screen detox, or even a tech detox. The idea that you’ll fix your brain by removing only one-fifth of the screens you own is laughable.

But if you (like me) want less tech in your life you need to consider the implications. And cutting out a smartphone is becoming increasingly problematic; the world is slowly structuring itself to accommodate smartphones and to exclude those without one.

I just learned this the hard way so I figure I might as well talk about it.

So, I want to talk about what you lose alongside your smartphone. When I’m done, I don’t want you to take away the wrong message: I am not suggesting that you keep your phone and charge bravely into the future with it smooshed snugly into your back pocket so it tingles your butt every time someone dials you. I wrote this article (and the last one) because I want you to see the hole we’ve dug for ourselves with this smartphone thing.

Once you’ve got that in your head, we can start brainstorming escape plans.

Reacquiring a smartphone

Let’s start here: After spending two months sans smartphone, I reactivated my old Galaxy Note 9 two days after returning to Alaska.

I had briefly considered just… not. In my last article I described a wifi island effect that happens when you lose a cell phone; your addictions wither in the no-man’s land because you lack the signal to indulge them. I wanted to keep that going, so after arriving I activated a LightPhone II that my brother bought me for my birthday.

The LightPhone II is a dream: It’s a simple interface built on old Kindle tech. It takes calls and sets alarms and can summon a wifi hotspot in a pinch, but it doesn’t do much else. I can text, but the letters are so small that it makes me feel like I have hot-dog thumbs. So, it’s perfect.

I have owned five smartphones over the past seven years and the simple, simplistic, simpleton LightPhone II outshines them all. It is simple, tacky, and perfect. It is a dumbphone; its IQ is nestled somewhere in the fifties. So is its ringtone, for that matter—a jazzy little number, evoking old TV soundtracks like I Dream of Jeannie.

I adore it. And after two days I reactivated my Galaxy Note 9 anyway.

Why? Because I needed a smartphone, dammit. Not in an addiction sense, but a practical one; I was getting cut out of basic life functions. That brings me to today’s topic: Disenfranchisement. I couldn’t cover it in my first article because it merits its own treatment.

Let me explain.

Image by Author, via MidJourney

If you like what you’re reading here, consider becoming a subscriber! Most of my articles are free to read.

Cut off from polite society

So here’s how this happened. I had settled back into my apartment after four months away. The LightPhone my brother had bought me was adorkable and I was committed to it. But then? I had to do my laundry, so I brought it to the laundry room and was promptly reminded that I needed an app to access the machines.

Technically I already knew that, but I had mostly forgotten; I had never tried to get around it before. As I inspected the machine I realized that there was no way to pay with cash. There was also no web application. There is a laundromat near my house, but it is uncomfortably expensive and, since it was late at night, it was closed.

So I had two choices: Use the app, or smell like a gym sock.

I tracked down my old Galaxy Note and downloaded the app.

This happened to me three times in two weeks. Here are the other two:

  1. I got locked out of my old college email and needed a two-factor code to get back in. Usually when my phone is dead I use a workaround, but the authentication program had changed and my usual method had been discontinued.

    To log in? I needed an authentication app. There was no desktop alternative. Without a smartphone I needed to contact the help desk and wait for them to call me—and I would need to do that every time I wanted to log in.

  2. I booked a flight on a budget airline to visit friends after a work conference. Since it was a budget company, it relied on sleaze tactics to make a profit, and it tried to bully me into downloading its app to display my boarding pass.

    In fact, they hid the option to download a boarding pass online, burying it multiple menus deep with no indication of where it was. I had to google a complaint forum before I found someone who had gone to the trouble of locating it. It took about half an hour just to download my boarding pass.

These are both examples of the ways you find yourself disenfranchised when you lose a smartphone. Larger companies are good about attending to the needs of non-smartphone users. But smaller companies that offer apps often wind up deprecating their non-smartphone options, or neglecting them. Or just hiding them.

And why not? People overwhelmingly use smartphones. It doesn’t hurt these smaller companies much to neglect non-smartphone users.

But each time you hit a logistical problem because someone left out a non-smartphone option, it sends a message. This washing machine? Not made for you. This login portal? Not made for you. This airline? Not made for you.

Of course there are workarounds. There are always workarounds. But still, it only takes a few times before you start hearing the world say ‘This town? Not made for you.’

Image by Author, via MidJourney

Outer spaces

American industry has a history of turning expensive machines into necessities for participating in society. The car. The personal computer. The internet. The smartphone is the latest instantiation of this trend; if you don’t have one, you are left feeling perpetually disenfranchised.

Personally, I don’t mind this feeling so much. The pragmatics of it are irritating enough that I am willing to reactivate my Galaxy Note 9, but emotionally speaking I am quite comfortable being on the outside, and I don’t mind becoming a creature of the edges for a couple of months in order to avoid tech that I’m ambivalent about.

But it gets tiring. When I go through security with my laptop open to display my boarding pass because my cellphone has been stolen, I know the right things to say to turn it into a fun interaction. Display the right combination of earnestness and cluelessness (it’s very easy; just act like a golden retriever) and the TSA, instead of getting irritated, will be like, “Awww, wookit him! He’s just a big goofy boy on an adventure, isn’t he? Isn’t he? Wookit da goofy boy!”

It makes the trek through security less awkward. But also, being the dog gets tiring after a few repetitions.

But there are also other, darker situations that can happen when you are deprived of a phone signal. In Buenos Aires I stayed in the Retiro, a beautiful waterfront district about forty kilometers away from the airport. On my last day there the wifi at my AirBnB shut off. My flight was leaving the next morning and I needed to be at the airport at 6am.

I had no guarantee that the wifi would be repaired and I had no smartphone with signal to use as a backup. The Retiro district closed down in the evening, so coffee shops weren’t an option. I couldn’t get hold of the AirBnB owner. Suddenly I had no easy way to get to the airport. I had to consider contingency plans, some of them very uncomfortable.

Cafes closed at 9pm. The nearby hotels had wifi, but there was no guarantee that they would be receptive to letting me use their network as a non-guest.

There was a nuclear option but it made me very uncomfortable: four kilometers across the downtown there was an all-night café that had wifi. I could pull an all-nighter and walk there after midnight, with all of my belongings in my backpack, and wait to call an Uber at around 4am. Economic conditions had left a lot of disenfranchised people sleeping on street corners. Some were not the type of people I was comfortable bumping into late at night, but… it might be necessary?

If it came down to it I knew I would most likely be fine. I’m large; people give me space because they know that if they tangle with me, they might win—but it’ll cost them something they don’t want to pay. So I’m usually safe.

But that’s some awkward calculus to have to do the night before a flight out of Buenos Aires. And not everyone is large.

Image by Author, via MidJourney

Lonely islands

I talked before about that “wifi island” effect. When you don’t have a personal smartphone signal the world becomes small islands of wifi with large gaps between them. And most of those islands disappear in the evening.

If you’re a traveler trapped in the empty space between two of them, you feel the weight of midnight acutely. The Buenos Aires incident resolved itself—the owner of the AirBnB got in contact and had a personal network he let me use. But I’m uncomfortable at the thought of what might otherwise have happened.

That, for me, illustrates the worst possible outcome of losing your smartphone: a long trek in the dark with no signal. If you are not a traveler—if you are at home, working, living, operating on normal hours, with no early-morning flight that turns a broken wifi signal into a logistical emergency—you don’t need to worry about this. You can give your smartphone up safely, with barely a hiccup.

But even then it puts you a step closer to the edge, and you should think about how to compensate accordingly in case something goes very wrong.

In spite of this I still plan to cut the smartphone out of my life. I’ll talk more about that later because—and again, this is especially true if you’re not traveling—there are ways to have it all. I like the idea of a smoother life, a clearer mind, and an insulated space where I can interact with Silicon Valley on terms that I like rather than ones that they like.

So I’ll continue pursuing that, and writing about it. For now, I think it’s worth understanding that if you’re the kind of person who seriously wants to renegotiate your relationship with tech, you need to think about that midnight space at the periphery of our current cultural milieu, where you can find yourself walking alone at night in the no-man’s land between wifi hotspots.

That’s almost a ridiculous image. I’d have laughed at it just ten years ago. But we’ve created a world where so much of our personal power is connected to our wifi signal that to be stranded without it at the wrong hour in the wrong place can plunge us back into an old, old nightmare—walking alone in the dark, footsteps timid, hoping for an island of light somewhere ahead.

If you appreciated this post, or consider the information in it to be useful, consider sharing it with a friend!


Image by Author, via MidJourney

]]>
<![CDATA[Things You Learn from Skimming 1350 Academic Journal Articles]]>https://jimhorton.substack.com/p/things-you-learn-from-skimming-1350https://jimhorton.substack.com/p/things-you-learn-from-skimming-1350Fri, 16 May 2025 08:21:24 GMT
Image by Author, via MidJourney

I’ve been working on a meta-analysis for the past several months. It’s been a doozy. It’s taken up an enormous amount of my time. And it will probably take me another year to complete, at least.

A meta-analysis is the workhorse of the psychological sciences — and really, just about any science, now that you have hundreds of thousands of researchers all doing their own thing. A meta-analysis is like a review, except it’s packed full of nutritious numbers to the point where it is ready to burst. The goal is to comb through other people’s research, record their numbers, and then compare and combine numbers across studies to see how they all fit into the big picture.
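For the curious, here is roughly what “combining numbers” means in practice: a minimal sketch, in Python, of the simplest pooling recipe (a precision-weighted average of correlations on Fisher’s z scale). The effect sizes below are invented for illustration; this is not my actual analysis, just the cartoon version.

```python
import math

# Hypothetical studies: (correlation r, sample size n). All numbers invented.
studies = [(0.28, 120), (0.35, 90), (0.22, 400), (0.31, 60)]

def fisher_z(r):
    """Fisher's z-transform; stabilizes the variance of a correlation."""
    return math.atanh(r)

# Weight each study by its precision: the variance of z is 1 / (n - 3).
weights = [n - 3 for _, n in studies]
pooled_z = sum(w * fisher_z(r) for (r, _), w in zip(studies, weights)) / sum(weights)

# Transform back from z to a correlation for reporting.
pooled_r = math.tanh(pooled_z)
print(f"Pooled correlation across {len(studies)} studies: {pooled_r:.3f}")
```

Real meta-analyses layer random effects, bias corrections, and moderator tests on top of this, but a weighted average like that sits at the heart of them.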

So, with that being said, I’m not going to throw numbers at you. I know. I know. Please breathe — you’re safe here. We all have deep trauma associated with math. I won’t trigger yours today. I’m not going to talk statistics; instead, I just want to talk about the experience of going through so many papers.

There aren’t many places where you get to see such a broad crush of humanity on display. Airports. Amusement parks. Crowded streets in Tokyo. Good parks, if you’re the kind of person who can sit patiently on a bench and observe others without looking creepy.

Meta-analyses are one of those places where you get a chance to see humanity on display. They’re like a private viewing available only to scholars. When you conduct a meta-analysis you look through dozens — sometimes hundreds, sometimes thousands — of papers on a topic. To give you an idea, I have a list of about 1350 that I have to go through.

There is no better time than a meta-analysis to get a sense of the broad range of scientists working on a topic, or what they’ve done, or what they hope to do, or the stories they tell themselves about the work they’re doing.

Plus, the quirks: when you read enough papers you stumble across gems like an entire line of papers on marriage and children by two authors with the last names Musick and Bumpass. Or the man with the most scholarly name I have ever seen in print — the inimitable U.R. Dumdum.

The problem is that, small joys aside, you can only make it a short distance into a meta-analysis before you start to feel that most of what science has done is a pile of hot garbage. There is a reason that most papers in the psychological sciences are hardly ever cited by other scientists: most of them add nothing, or at least nothing noteworthy.

Image by Author, via MidJourney

There are reasons for this sense of wastefulness. One reason is redundancy: Scientists unwittingly trace the same contours endlessly, repeating themselves with small variations on the same theme. In the psychological sciences there aren’t many researchers who are daring enough to do something new — not when their tenure depends on producing a steady stream of acceptable papers.

Another is the smallness of any given study. Say that a researcher does want to do something new. Often that “new” thing takes the form of a research program; the individual studies don’t add much except to advance the argument the researcher is trying to make, one lemma at a time. It’s only upon synthesis that they produce a useful meta-level insight that might add something of value to science.

Another reason is bad practices. It’s well known in the social sciences that you can take any two measures of negative concepts — say, anxiety and procrastination (which happens to be the topic of my meta-analysis) — and they will be related to each other by default. Negativity is transferable; a person who checks a bubble on a survey saying that they believe they’re a hot pile of garbage is unlikely to have good things to say about politics, and they are unlikely to be optimistic about the future.

So when you start a meta-analysis on the relationship between two negative variables? It’s going to get redundant quickly.
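If you want to see how little it takes for that redundancy to appear, here is a toy simulation (everything in it is invented): give each imaginary respondent a latent level of general negativity, let two nominally different scales each reflect it, and the two scales correlate right out of the box.

```python
import random
import statistics

random.seed(42)  # reproducible toy data

N = 10_000
# Each simulated respondent gets a latent level of "general negativity."
negativity = [random.gauss(0, 1) for _ in range(N)]

# Two nominally different scales, each part latent negativity, part noise.
anxiety = [g + random.gauss(0, 1) for g in negativity]
procrastination = [g + random.gauss(0, 1) for g in negativity]

r = statistics.correlation(anxiety, procrastination)
print(f"Correlation between the two scales: {r:.2f}")  # lands near 0.5
```

Neither scale was designed to measure the other; the shared negativity alone produces the correlation, which is part of why the tenth paper on such a topic reads like the first.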

With all the waste, you might think that the social sciences would be better served if we just got rid of all those junk papers and became efficient. There are problems with this type of thinking, though. Three, really, which are all interrelated, and which we could spend a long time unpacking, but I will try to state them succinctly here.

If you enjoy what you’re reading here, consider subscribing to my newsletter!

Image by Author, via MidJourney

I. What is waste anyway?

Who decides what is waste and what is not? And what criteria do you use? You might suggest that waste is obvious and easy to spot, and that all you need to do is put a judicious regulator in charge. But that implies a regulator capable of judging not only the present worth of a line of inquiry but also its future worth.

My favorite example of this is from the biological sciences: The polymerase chain reaction is a necessary part of genetic research; it is used to rapidly create millions to billions of copies of a segment of DNA. The technology depends on thermostable polymerases, which are derived from work on extremophiles — organisms that can survive in extreme conditions most life could not.

But that research began with one fellow who grew particularly interested in a green sludge he spotted in a hot spring in Yellowstone National Park. Nobody cared much about it then — it wasn’t until decades later that it blossomed into something with undeniable application.

II. Who is the research for?

Every research paper comes with a sense of who-ness, and if you’re not attuned to it you can completely miss why the paper was written.

Researchers who evaluate scientific literature after it has been published each have their own purpose for it in mind, and that purpose shapes their judgments of its worth.

Say, for example, that your goal is to provide broad insight into the relationship between anxiety and procrastination. It doesn’t take more than a few papers before everything else on the topic becomes redundant to you. After the tenth paper showing that anxiety correlates positively with procrastination, you can almost certainly guess the eleventh accurately, and the fiftieth. Really, the reason you add the remaining four hundred papers to your database is to be thorough, not because you lack an answer.

But the people who wrote the papers had other aims in mind. Say that one researcher wrote a paper on the relationship between anxiety and procrastination in schizophrenic patients, and another researcher wrote one on the relationship between anxiety and bedtime procrastination. To an outside reviewer these might seem redundant (and, having reviewed several of each, I can say that the results are nearly identical). But to the ones who wrote them? They may have important meaning.

That paper on anxiety and procrastination in schizophrenics may have been written by a researcher who works with schizophrenics and wanted to address several patterns of self-sabotaging delay that their clients exhibited. In the absence of a meaningful paper dealing with the schizophrenic population, the researcher can only draw on broad principles to make their argument. With a published paper under their belt they can say “no, we have evidence that this is a pervasive problem affecting people exactly like the ones we work with every day.”

Ditto with the researcher who decided to write a paper on bedtime procrastination. I am certain that it matters to many people receiving treatment for sleep disorders — because I have a sleep disorder as well, and it seems largely driven by a tendency to put off sleep in favor of work. Was there even a language for this until somebody decided to call it “bedtime procrastination” and conduct research into it? No matter how derivative it is of the original literature, it still has value in sparking a conversation among the professionals who work with it and the people who do it.

This is one of the overlooked functions of research papers: often they allow a person to open up a dialogue on a new topic, not with the world, but with other professionals in their immediate vicinity, and to discuss it with some authority. Is it right, then, to judge the utility of a paper solely on whether it advances some broad theoretical agenda?

III. Is waste actually wasteful?

A third consideration: who says waste is wasteful? Back in the realm of DNA, researchers have known for many decades that the majority of DNA is non-productive, in the sense that it does not produce the proteins used to construct life. But that does not mean it is useless. Researchers have identified many potential (and actual) uses of non-productive DNA over the decades, but my favorite comes from a study conducted in 2019.

The authors of the study found that the non-productive DNA was a site for the spontaneous assembly of new proteins that were not currently in circulation. Or (and I may be taking liberties here) it appeared to be one way in which evolution occurs: the random matching and mismatching of all that non-productive DNA spontaneously produces “options” that may become fodder for natural selection.

Image by Author, via MidJourney

Okay. So, how does this apply to you?

So far this has been a purely abstract set of considerations, grounded in the type of science talk that a lot of people may find incurably boring. But I’m going to take a slightly strange direction with it. I think that we can take the principles above and apply them to life, and specifically your own life. There’s a good lesson in here about reserving judgment.

To do that we just take the principles above about judging the wastefulness (or productivity) of an academic journal article and apply them more broadly to personal projects and life goals. So, here goes:

  1. What appears wasteful to you in the present may be of great value to you in the future if it is given the time to develop to its logical end. Don’t judge the usefulness of your actions too quickly.

  2. What appears wasteful to you may be of immense value to somebody else — don’t judge the usefulness of other people’s actions based only on how they affect you, and don’t judge the usefulness of your own actions that way, either.

  3. What appears wasteful to you may be the raw stuff that new ideas and new directions are born from. Don’t judge the usefulness of your actions until you have given them enough time to surprise you.

All of that is to say, don’t be too hard on yourself. Productivity-heads are prone to believing that life must be optimized and that waste must be removed in favor of laser-sharp focus on a few essential things (chosen ahead of time) that truly matter.

I am not sure if that is an appropriate or healthy attitude to take towards life. It seems to me like it filters out the serendipity.

Close friends are produced from thousands of interpersonal connections, most of which never bear fruit. Think of your best friend — and now imagine that if you hadn’t bumped into a thousand random people during your childhood, most of whom didn’t matter, you might never have met them. Were the other thousand encounters really junk? Or were they all worth it for that one golden connection?

Another way to think of this is that you can’t optimize until the mess of life has given you something to work with. You can’t produce something optimal by cutting out mess — you can only focus on something optimal that has already been produced. Those are different things. The mess itself, with all its serendipity and false paths and boring and exciting and distracting moments — that’s the thing that dredges up the gold for you. So don’t shy away from it too quickly.

J

If you appreciated this article, consider sharing!


TFW you finish the 1000th abstract and the sun rises and you attain sagehood. (Image by Author, via MidJourney)
]]>