<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>sgiath.dev</title>
    <link>https://sgiath.dev</link>
    <description>Writing from sgiath on transhumanism, philosophy, software, cryptography, and technology.</description>
    <language>en-us</language>
    <lastBuildDate>Sun, 22 Mar 2026 19:08:34 GMT</lastBuildDate>
    <atom:link xmlns:atom="http://www.w3.org/2005/Atom" href="https://sgiath.dev/rss.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Impossible Middle: A Critique of Post-Labor Economics</title>
      <link>https://sgiath.dev/blog/2026/03-21-the-impossible-middle.html</link>
      <guid isPermaLink="true">https://sgiath.dev/blog/2026/03-21-the-impossible-middle.html</guid>
      <description>Post-labor economics panics about a future that can&apos;t logically exist. There are only two coherent endpoints for AI - and neither requires new monetary policy.</description>
      <pubDate>Sat, 21 Mar 2026 09:00:00 GMT</pubDate>
      <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>
There are only two futures for AI and the economy that actually make sense. Both are fine. Most economists are panicking about a third one that can’t exist.</p>
<p>
<strong>Future A:</strong> AI does literally everything. Every need met, every problem solved, total abundance. Paradise. Nothing to worry about, nothing to redistribute.</p>
<p>
<strong>Future B:</strong> AI is extremely capable but doesn’t cover everything. Humans still contribute, exchange continues, work transforms like it always has. The economy adapts. Economics still applies, just with different inputs.</p>
<p>
There is no stable state between these two. But that impossible middle is exactly where most post-labor economic thinking lives.</p>
<h2>
The Impossible Middle</h2>
<p>
<a href="https://daveshap.substack.com/">David Shapiro</a> is finishing his Post-Labor Economics Treatise right now. I follow his work and find him genuinely interesting - fairly intelligent, willing to go against mainstream narratives, unafraid of big ideas. I also think his central framework has a hole in it you could drive a truck through.</p>
<p>
Shapiro talks (correctly, in my opinion) about the AI revolution as the most disruptive economic transformation we’ve ever seen. More disruptive than industrialization, more disruptive than the internet. I agree with this. Where he loses me is what he does next: he accepts our current monetary and economic system as a given and tries to figure out how to fit AI into it.</p>
<p>
He doesn’t question the US dollar. He doesn’t question central banking. He doesn’t question whether the concept of money itself survives the scenario he’s describing. He takes the most radical premise imaginable - that human labor becomes worthless - and then writes chapters about monetary policy (metaphorically speaking - the book is not out yet).</p>
<p>
If the AI revolution is more disruptive than the industrial revolution, shouldn’t we analyze the current monetary and economic systems too? Shouldn’t we ask whether they survive at all?</p>
<p>
Let’s take his premise seriously and follow it where it goes.</p>
<h2>
Why People Work</h2>
<p>
Start simple. Do chimpanzees work? They spend most of their waking time trying to obtain food to survive; if they stop, they starve. That’s work. Did cavemen work? Of course - they hunted, gathered, made tools, built shelters. They worked most of their waking hours.</p>
<p>
Why? Because they had needs and work was the way to fulfill those needs. That’s it. There is no nobility in hard work, no deeper meaning, nothing moral or amoral about it. Work is a process to obtain things that fulfill your needs.</p>
<p>
If all of your needs were met, you would not work. And there would be nothing wrong about that.</p>
<p>
But here’s the catch: have you ever met anyone with all their needs met? No. Can you imagine having all <em>your</em> needs met? Probably yes, but ask yourself: what would an average person in 1880 consider “all needs met”? Do you think more TikTok followers would be on that list? What were the needs of a caveman?</p>
<p>
Our needs are infinite. Once some are met, new ones appear that we didn’t even know we had. This isn’t “consumerism” in the pejorative sense. This isn’t a moral failing. That is just how people are and it is fine.</p>
<p>
This also explains something that puzzles a lot of people: why we work roughly the same number of hours as people did a century ago, despite our productivity being orders of magnitude higher. We have more needs. If you were content living in the same conditions as an average person in 1880, you would only need to work a few hours a month.</p>
<h2>
What Even Is Money</h2>
<p>
Money is not wealth. Money is not value. Money is a coordination mechanism for exchanging goods and services. That’s it.</p>
<p>
Before money, there was barter. Barter has an obvious problem: the double coincidence of wants. I have fish, you have wood. I want wood, you want fish. Great, we trade. But what if you don’t want fish? What if you want leather? Now I need to find someone with leather who wants fish, trade with them, then come back to you. This is insanely inefficient.</p>
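<p>
The matching problem can be sketched in a few lines of code (the names and goods here are invented purely for illustration). Barter requires a <em>pair</em> of agents who each hold exactly what the other wants; with three agents whose wants form a cycle, no direct trade is possible even though everyone could be satisfied:</p>

```python
# Toy illustration of the double coincidence of wants.
# Each agent has one good and wants one good (invented example data).
agents = [
    ("Ann",  "fish",    "wood"),
    ("Bob",  "wood",    "leather"),
    ("Cara", "leather", "fish"),
]

def direct_trades(agents):
    """Barter: a trade happens only when two agents each hold exactly
    what the other wants - the double coincidence of wants."""
    return [
        (name_a, name_b)
        for i, (name_a, have_a, want_a) in enumerate(agents)
        for name_b, have_b, want_b in agents[i + 1:]
        if have_a == want_b and have_b == want_a
    ]

print(direct_trades(agents))  # [] - wants form a cycle, so no pair matches
```

<p>
With money, each agent simply sells what they have and buys what they want; the cycle resolves in three independent transactions instead of one lucky pairwise match.</p>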
<p>
Money solves this. It’s a medium that lets us coordinate exchanges without requiring both parties to have exactly what the other wants at exactly the right time. That’s the entire reason money exists.</p>
<p>
Not because governments declared it. Not because banks invented it. Money emerged spontaneously across dozens of unconnected civilizations because the coordination problem is universal. The form changes - shells, salt, gold, paper, digits on a screen - but the function is always the same: facilitate bilateral exchange.</p>
<p>
Hold that thought.</p>
<h2>
Following the Premise</h2>
<p>
Now let’s go back to Shapiro’s premise: in a post-labor world, humans have nothing to contribute. AI and robots do all the work. Human labor has zero economic value.</p>
<p>
If we accept this premise, what happens to money?</p>
<p>
Money exists to coordinate bilateral exchange. Bilateral exchange requires both parties to have something the other wants. If humans have nothing to contribute - no labor, no skills, no services that AI can’t do better - then there is no bilateral exchange. One side of the transaction is empty.</p>
<p>
No bilateral exchange → no need for a coordination mechanism → no money.</p>
<p>
This isn’t a prediction. It’s a logical consequence of <em>Shapiro’s own premise</em>. If you truly believe human labor becomes worthless, you must also believe money becomes meaningless. You can’t have one without the other.</p>
<p>
Yet Shapiro spends his treatise discussing how to redistribute money. Which money? The money that has no reason to exist in the world he’s describing? He’s writing monetary policy for an economy where money has no function.</p>
<p>
That’s the shallow analysis I’m critiquing. He accepts the most radical technological premise imaginable and then refuses to follow it through to the most basic economic conclusions.</p>
<h2>
The Fork</h2>
<p>
So which future do I actually think we’re headed for? Honestly - I don’t know. And neither does anyone else.</p>
<p>
I can imagine a world where AI does literally everything. Every human need met before you even know you have it. “Work” as foreign a concept as “hunting mammoths for dinner.” That would be paradise. Nothing to redistribute because nothing is scarce. No economic problem to solve.</p>
<p>
I don’t dismiss this possibility. But I don’t see it coming any time soon, not even on the most optimistic AI timelines.</p>
<p>
What I think is far more likely - at least for any future worth planning for - is that human needs will keep expanding faster than AI’s ability to satisfy all of them. This is what has happened through every previous technological revolution. The industrial revolution didn’t end work. It ended <em>certain kinds</em> of work and created entirely new kinds that would have been incomprehensible a generation earlier. A medieval farmer couldn’t have imagined “software engineer” as a job. We can’t imagine what jobs will exist when AI handles everything we currently call work. But they’ll exist, because needs expand to fill available capability.</p>
<p>
In that world, humans still contribute. Exchange continues. Money persists. The economy transforms and economic principles still apply, just with different inputs.</p>
<h2>
The Point</h2>
<p>
Two endpoints. In one, AI solves everything and we live in paradise - no economic problem to discuss. In the other, humans still contribute and the economy adapts like it always has.</p>
<p>
The scenario post-labor economists worry about - where AI is powerful enough to make human labor worthless but <em>not</em> powerful enough to satisfy all human needs - doesn’t hold together. It requires AI to be simultaneously all-capable and insufficient.</p>
<p>
If AI can replace every human capability, it can meet every human need. If it can’t meet every human need, there are still things humans can contribute. Pick one.</p>
<p>
Shapiro is a smart guy writing about an important topic. But his framework starts from a premise and doesn’t follow it where it leads. He bolts 1971-era monetary policy onto a world that, by his own description, has moved beyond it.</p>
<p>
If we want to think about AI and economics clearly, we need to start from first principles - why people work, what money actually is, what exchange requires. Not from the assumption that the current system is permanent and just needs adjustments.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I have a fear of death and I am not ashamed of it</title>
      <link>https://sgiath.dev/blog/2026/03-15-fear-of-death.html</link>
      <guid isPermaLink="true">https://sgiath.dev/blog/2026/03-15-fear-of-death.html</guid>
      <description>Most conversations about death are wrapped in comfort. I&apos;d rather be honest about the terror and do something about it.</description>
      <pubDate>Sun, 15 Mar 2026 09:00:00 GMT</pubDate>
      <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>
I love life. I don’t want to die. I don’t want to stop existing. I don’t want to be stripped of all choices. And I am not planning to die.</p>
<p>
Most people treat this as something to get over. “Everyone dies.” “It’s natural.” “You should make peace with it.” “The end gives meaning to life.” I’ve heard all of these, and I reject them. Making peace with death is giving up on the most important problem you’ll ever face.</p>
<h2>
The copium pipeline</h2>
<p>
Religion is terror management. Terror management theory in psychology describes exactly this: we’re organisms wired for survival, but cursed with the knowledge that we’ll die. Every other animal gets to ignore this. A deer doesn’t lie awake at 3am thinking about the heat death of the universe. We do. And religion evolved as the painkiller - a way to reconcile “I must survive” with “I definitely won’t.”</p>
<p>
It’s copium. Effective copium. Copium that built cathedrals and wrote symphonies and gave billions of people a reason to get up in the morning. I’m not dismissing it. But once you see the mechanism - once you understand <em>why</em> you believe rather than <em>what</em> you believe - you can’t unsee it. The comfort stops working when you recognize it as comfort.</p>
<h2>
It’s not a trick question</h2>
<p>
Sometime in my twenties I read Eliezer Yudkowsky’s <a href="https://www.lesswrong.com/posts/Aud7CL7uhz55KL8jG/transhumanism-as-simplified-humanism">Transhumanism as Simplified Humanism</a> and it crystallized something I’d been feeling but couldn’t articulate.</p>
<p>
The argument is almost embarrassingly simple. If a six-year-old is lying on train tracks, you save her. If a 45-year-old has a curable disease, you cure him. These aren’t controversial statements. Now: if a 95-year-old is dying of old age, and you could prevent it - would you? What about a 150-year-old?</p>
<p>
If life is good and death is bad, that doesn’t change at some arbitrary age. There’s no birthday where the moral equation flips and suddenly it’s fine to let someone die. Transhumanism isn’t a radical philosophy. It’s just “life is good, death is bad” applied consistently, without special cases, without an upper bound.</p>
<p>
This hit me hard because I’d spent years feeling like my fear of death was somehow immature or excessive. Like wanting to live forever was a childish fantasy I should outgrow. Yudkowsky made me realize I had it backwards: the people who accept death are the ones adding complexity. They’re the ones who need to explain at what exact age life stops being worth preserving. I just need to say “keep going.”</p>
<h2>
Choosing a different drug</h2>
<p>
Here’s the thing about rejecting religious copium: you don’t stop needing to cope. The fear doesn’t go away. You just need a better strategy.</p>
<p>
My strategy is to not die.</p>
<p>
Not metaphorically. Not spiritually. I mean: I intend to be alive in 200, heck even 2000, years. Whether that happens through medical breakthroughs that stop aging, replacing failing organs with engineered ones, gradually swapping biological parts for mechanical ones, or eventually uploading whatever “me” is into something more durable than meat - I don’t care about the method.</p>
<p>
Is this also copium? Absolutely. I’m not deluded about that. The difference is that my copium has a nonzero probability of being true, and that probability is increasing every year. Religious copium asks you to believe without evidence and wait for something unfalsifiable. My copium asks me to look at the trajectory of AI, medicine, and bioengineering.</p>
<p>
One of these you can work toward. The other you can only pray for. I know which one I’m picking.</p>
<h2>
The fear itself</h2>
<p>
I want to be clear about what I’m afraid of. It’s not pain - pain ends. It’s not the process of dying - that’s a medical problem. It’s <em>non-existence</em>. The idea that there will be a last thought, a last moment of experience, and then nothing. Forever. Not darkness, because darkness is something you experience. Just… nothing.</p>
<p>
People handle this in different ways. Some find comfort in legacy - “I’ll live on through my children, my work.” That’s a weaker version of the same copium. You won’t live on. Your <em>influence</em> might persist for a while, but you - the thing having this experience right now - will be gone.</p>
<p>
Others go the Buddhist route: the self is an illusion anyway, so there’s nothing to lose. Clever, but try telling yourself that at 3am when the existential dread hits. The self might be an illusion, but it’s an illusion that really doesn’t want to stop existing.</p>
<p>
I respect people who genuinely don’t fear death but I’m not one of them, and pretending otherwise would be dishonest.</p>
<h2>
The bet</h2>
<p>
Longevity escape velocity is a concept from Aubrey de Grey: the point where medical advances extend life expectancy faster than time passes. You age one year, but medicine gives you back more than one year of healthy life. Once you cross that threshold, you’re no longer on a countdown.</p>
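<p>
The threshold is easy to make concrete with a toy simulation (all numbers below are illustrative assumptions, not forecasts). Each simulated year you age by one year while medical progress hands back a growing fraction of a year; escape velocity is the point where the handback exceeds the aging:</p>

```python
def simulate(age=33, remaining=50.0, gain=0.2, growth=1.07, horizon=200):
    """Toy model of longevity escape velocity (illustrative numbers only).
    Each year: age by 1, medicine adds `gain` years of remaining life
    expectancy, and `gain` itself compounds by `growth`. Returns the age
    at death, or None if still alive after `horizon` years (LEV crossed)."""
    for _ in range(horizon):
        remaining += gain - 1.0   # age one year, medicine gives some back
        gain *= growth            # medical progress compounds
        age += 1
        if remaining <= 0:
            return age            # died before reaching escape velocity
    return None

print(simulate(gain=0.0, growth=1.0))  # 83 - no progress: plain countdown
print(simulate())                      # None - compounding progress wins
```

<p>
The exact numbers are meaningless; the point is structural - once the per-year gain crosses 1.0, the countdown stops being a countdown.</p>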
<p>
Are we close? I don’t know. Could be 20 years, could be 50. Could be never. But “could be never” is better odds than “definitely never,” which is what you get without trying.</p>
<p>
Here’s what I do know: AI is accelerating biological research at a pace that would’ve been unthinkable ten years ago. Protein structure prediction is essentially solved. Gene editing is becoming routine. Organ printing is moving from lab to clinic. The people who say “that’s science fiction” are the same people who said the internet was a fad and smartphones were toys.</p>
<p>
I’m 33. If I can stay healthy for another 30 years, I think the odds are decent. Not certain. But decent. And decent odds on immortality beat certain death every time.</p>
<h2>
Plan B</h2>
<p>
My plan is to live forever, or at least until I get tired of it and then a bit more.</p>
<p>
If that doesn’t work - if the science doesn’t arrive in time, if aging catches me first - I have a plan B. I don’t intend to spend my final years with a degraded mind in a hospital bed, slowly becoming less of myself until the machines are turned off. That’s not dying. That’s being slowly erased.</p>
<p>
Plan B is to go hunt a bear and not come back.</p>
<p>
I mean that somewhat literally and somewhat metaphorically. The point is: agency. Even in dying, I want the choice to be mine. The fear of death is really a fear of losing all agency - the ultimate loss of control. If I can’t beat death, I can at least refuse to let it happen <em>to</em> me.</p>
<h2>
Why I’m writing this</h2>
<p>
Most conversations about death are wrapped in comfort. People want reassurance. They want to hear that it’s okay, that it’s natural, that there’s something beautiful about mortality giving life meaning.</p>
<p>
I think that’s backwards. Mortality doesn’t give life meaning. Mortality is the thing threatening to take all meaning away. The urgency you feel isn’t beauty - it’s terror, dressed up in philosophy.</p>
<p>
I’d rather be honest about the terror and do something about it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Life Like (2019) - Review</title>
      <link>https://sgiath.dev/blog/2026/03-13-life-like-review.html</link>
      <guid isPermaLink="true">https://sgiath.dev/blog/2026/03-13-life-like-review.html</guid>
      <description>My review of the movie Life Like (2019)</description>
      <pubDate>Fri, 13 Mar 2026 09:00:00 GMT</pubDate>
      <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>
<a href="https://www.imdb.com/video/vi2515909913/">IMDB</a>
|
<a href="https://www.rottentomatoes.com/m/life_like">Rotten Tomatoes</a></p>
<h2>
Plot</h2>
<p>
A young couple, James and Sophie, inherit a large estate and acquire Henry - a cutting-edge humanoid robot designed to manage the household. Henry is eerily human: articulate, attentive, emotionally perceptive. Sophie grows increasingly drawn to him - intellectually, emotionally, and eventually physically. James, threatened by the connection, spirals into jealousy. The film builds toward a reveal: Henry was never a machine. The company uses conditioned humans, not robots. The “android” was a person the entire time.</p>
<h2>
What This Film Is Actually About</h2>
<p>
Beneath the sci-fi surface, Life Like is an examination of how our behavior changes based on what we believe we’re interacting with. Sophie is reluctant to order around human servants but finds it easier with Henry because she believes he’s a machine. Eventually she permits herself a level of openness with Henry that she doesn’t allow even with James, simply because she thinks Henry is a machine. There’s no ego to bruise, no judgment to fear, no social contract to violate. The “machine” label gives her permission to be vulnerable.</p>
<p>
James’s jealousy is equally revealing. He doesn’t feel threatened by a household appliance - he feels threatened once the interaction starts resembling a human relationship. He intellectually insists that Henry is a machine, yet he can’t act on that belief either.</p>
<p>
The film, despite being made before ChatGPT arrived on our phones, captures something true about how we interact with AI today: we say things to machines we wouldn’t say to people. We’re more honest, more open, less guarded. The question Life Like raises - perhaps unintentionally - is whether that openness is a bug or a feature. Should we fear emotional connections to machines, or embrace them?</p>
<h2>
What I Wish This Film Was About</h2>
<p>
In my opinion, the twist undermines the most interesting version of this story. By revealing Henry as human, the film retreats to safe territory: it was infidelity all along, not a genuine human-machine connection. The audience is let off the hook.</p>
<p>
The braver film would have kept Henry as a machine and forced the characters - and the audience - to sit with the discomfort. What if Sophie genuinely preferred interacting with an AI? What if that preference wasn’t a pathology but a rational response to what the AI could provide that James couldn’t?</p>
<p>
Even more interesting: could the two relationships coexist? Could James accept that the machine fills gaps he can’t, without it threatening the marriage? Could Sophie accept the same about herself without feeling guilty? Would a female robot body have made James more comfortable with Sophie’s attachment to Henry? What would the marriage look like if both partners had a machine companion of the opposite sex? Could it become stronger because neither person carried the impossible expectation of being everything to the other? And what would be the role of human-to-human connection in a world where machines are better companions?</p>
<p>
I don’t have clean answers to most of these. I’m not sure anyone does yet. But the film doesn’t even try to ask them - it chose a plot twist over a thesis. The result is a movie that’s interesting to think about despite itself - the conversations it provokes are better than the conclusion it reaches.</p>
<p>
<em>Rating: 6/10 - compelling premise, cowardly ending</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Dangers of AI</title>
      <link>https://sgiath.dev/blog/2025/01-25-dangers-of-ai.html</link>
      <guid isPermaLink="true">https://sgiath.dev/blog/2025/01-25-dangers-of-ai.html</guid>
      <description>I&apos;m not worried that AGI will kill humanity - but there are other dangers we should pay attention to.</description>
      <pubDate>Sat, 25 Jan 2025 09:00:00 GMT</pubDate>
      <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>
I’ve followed <a href="https://x.com/ESYudkowsky">Eliezer Yudkowsky</a> since the <a href="https://hpmor.com/">HPMOR</a> days and have often appreciated his insights. His <a href="https://www.lesswrong.com/posts/Aud7CL7uhz55KL8jG/transhumanism-as-simplified-humanism">essay on transhumanism</a> deeply shaped my worldview, helping me articulate ideas I had only sensed intuitively.</p>
<p>
Lately, though, his warnings of inevitable AI doom leave me unconvinced. I struggle to see why he believes AI would inherently seek to destroy us - or how halting technological progress could ever be realistic, even with the best intentions. Still, I share his concern that AI poses dangers. Mine, however, lie elsewhere: in the way we <strong>anthropomorphize AI</strong> and the unintended consequences of treating it as something it is not.</p>
<h3>
The Mirror, Not the Mind</h3>
<p>
Today’s large language models are trained on vast troves of human-generated text, optimized to predict the next plausible token. They excel at mimicking conversation, but their outputs are ultimately <strong>statistical averages</strong> - reflections of the biases and preoccupations embedded in their training data. They have no lived experience, no original thought, no ethical compass.</p>
<p>
As someone who has built simple models, read the research, and run experiments on my own hardware, I know their limitations. What worries me is that many others don’t. Increasingly, people treat AI systems as if they were genuine intelligences, imbuing their outputs with unwarranted authority. That mistake will only grow more dangerous as these systems become more convincing.</p>
<h3>
A Simple Test</h3>
<p>
Try this: ask your favorite chatbot to help brainstorm a fiction story. In my own trials, no matter the premise, one of its top three suggestions was always some variation of:</p>
<blockquote>
  <p>
“This would increase inequality; explore how this negatively affects society.”  </p>
</blockquote>
<p>
If you believe the AI is “reasoning,” you might conclude that inequality is humanity’s ultimate moral challenge - one that every technological development must be judged against.</p>
<p>
But that’s not reasoning. It’s pattern replication. Inequality is a theme disproportionately represented in the academic, media, and creative writing that feeds these models. Writers often skew toward openness, a trait correlated with left-leaning views, so those ideas appear more often. The result isn’t malicious - it’s statistical. Yet it risks being misread as universal truth.</p>
<h3>
The Real Danger</h3>
<p>
This is the problem: by confusing mimicry with intelligence, we risk <strong>smuggling in biases</strong> as if they were objective wisdom. Trained on decades of politicized discourse, models will inevitably echo dominant narratives - progressive or otherwise.</p>
<p>
The threat isn’t that AI has an agenda. It’s that its outputs can be weaponized as faux-neutral validation:  <br>
<em>“Even the AI agrees with me!”</em></p>
<p>
Over time, such feedback loops could calcify worldviews while sidelining others - not through conspiracy, but through probability.</p>
<h3>
How We Should Respond</h3>
<p>
The real challenge isn’t to stop progress. It’s to ensure we - and our tools - retain the ability to <strong>critique progress</strong>. We should treat AI as we would a skilled impersonator: impressive, entertaining, sometimes useful, but never authoritative.</p>
<p>
Until models transcend their statistical foundations, their outputs deserve a warning label:</p>
<blockquote>
  <p>
<em>“This is not wisdom. It is a mirror, polished to reflect the loudest voices of the past.”</em>  </p>
</blockquote>
<p>
Learning to see that difference may be the most important safeguard we have - before the reflection becomes a cage.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>