<![CDATA[Monomythical]]>https://nayafia.substack.comhttps://substackcdn.com/image/fetch/$s_!IBX_!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33d4bd6-0c4b-4741-bc78-13b0d8ff1ba3_425x425.pngMonomythicalhttps://nayafia.substack.comSubstackMon, 16 Mar 2026 16:18:11 GMT
<![CDATA[What embryo selection can’t do (yet)]]>https://nayafia.substack.com/p/what-embryo-selection-cant-do-yethttps://nayafia.substack.com/p/what-embryo-selection-cant-do-yetTue, 26 Aug 2025 17:18:03 GMT
Embryo selection has been in the news lately. The phrase tends to conjure sci-fi images of wealthy parents designing immaculate children, but I find that media discourse often glosses over the practical limitations of these technologies, which make them much less dramatic than they appear. I wanted to unpack what we actually can and can’t do with embryo selection today, which I hope will provide a more grounded foundation for discussing its ethics.

Firstly, a few definitions. Embryo selection is, broadly, the practice of screening and implanting an embryo with favorable traits. There are a few ways to determine whether it has such traits:

  • Monogenic screening (or PGT-M) checks your embryos for a single, clearly defined genetic mutation. This type of testing has been used since the 1990s to screen for serious, life-altering genetic diseases, such as cystic fibrosis or Huntington's disease.

  • Polygenic screening (or PGT-P) is a newer technology that only became commercially available in 2019. It can be used to assess an embryo's risk for conditions influenced by multiple genes – such as diabetes or schizophrenia – rather than just one gene. But, because we don't always know which genes to look at, researchers use statistical models, based on population studies, to estimate which genetic variants are likely to influence a certain trait. These variants are then compiled into a polygenic risk score (or PRS).

For example, we know schizophrenia has a strong genetic component, but there isn't a single "schizophrenia gene" that's responsible for it. Unlike with Huntington's disease, which is caused by a mutation of the HTT gene, researchers have to approximate which combination of genetic variants might influence someone's propensity for schizophrenia, and assign a weight to each one.
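Mechanically, a polygenic risk score is just a weighted sum: each variant's dosage (how many copies of the risk allele someone carries) multiplied by an effect weight estimated from population studies. A minimal sketch, where the variant IDs and weights are invented for illustration, not real schizophrenia-associated values:

```python
# Toy polygenic risk score: a weighted sum of variant dosages.
# The variant IDs and effect weights below are invented for
# illustration; real scores use thousands to millions of variants.
weights = {"rsA": 0.12, "rsB": -0.05, "rsC": 0.30}

def polygenic_score(genotype):
    """genotype maps variant ID -> dosage (0, 1, or 2 copies of the risk allele)."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

print(polygenic_score({"rsA": 2, "rsB": 1, "rsC": 0}))  # 0.12*2 - 0.05*1, ≈ 0.19
```

Real scores differ mainly in scale and in how the weights are estimated; the arithmetic is this simple, which is also why the weights themselves carry all the uncertainty.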

Polygenic screening – and the underlying risk scores, which researchers have developed since the late 2000s – has attracted controversy because it could be applied to a wider set of traits like IQ.1 But, on a technical level, monogenic and polygenic screening are just two different methods for predicting an embryo's traits. Both methods can be used to screen for genetic diseases that could severely impact a child's life. Most parents probably wouldn't want their child to have Huntington's disease (monogenic) or schizophrenia (polygenic).

Both methods also have practical constraints that aren’t often acknowledged, which is what I want to discuss here.

Problem #1: Parents can’t make enough embryos to screen beyond the essentials

The first issue is that parents can't currently make enough embryos for elective trait selection to matter very much. A single embryo retrieval – in which eggs are retrieved, fertilized, and matured to the blastocyst stage – plus genetic testing can cost around $30,000 and might yield 3-5 embryos. For each embryo transfer, the live birth (i.e. “success”) rate is perhaps in the 50-60% range, so multiple transfers are often needed. And these numbers are on the higher end of successful outcomes.

Fertility clinics also grade each embryo on its quality, which is correlated with the likelihood of a successful implantation. Most use the Gardner grading system, which looks at a blastocyst's size and cell quality. A 6BC, for example, is less likely to result in a successful transfer than a 4AA.

Parents may also want to consider the sex of the embryo, not just for elective reasons (wanting to have a boy or a girl), but for certain genetic diseases. A male with a BRCA1 mutation, which confers a high predisposition to breast cancer, still has a much lower lifetime risk (1-5%) than a female (60-80%).

These two filters alone shrink the pool considerably. Now, try adding just a single genetic trait to screen for, and the list of viable embryos gets even shorter. Imagine you are screening for Huntington's disease, which an affected parent passes on to roughly 50% of embryos:

  • EMBRYO #1: M, 4AA, mutation yes

  • EMBRYO #2: M, 5BB, mutation no

  • EMBRYO #3: F, 3AA, mutation yes

  • EMBRYO #4: F, 6BC, mutation yes

  • EMBRYO #5: F, 4AA, mutation no

Half your embryos have the mutation, so you wouldn't want to implant them. But of the embryos that don't have the mutation, only one (#5) is considered high grade, which a fertility clinic will prioritize because it’s more likely to result in a live birth. In this example, we didn't even have the luxury of considering the embryo's sex.
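For the curious, this winnowing is simple to express in code. A toy sketch over the hypothetical embryo list above, with a deliberately crude grade ordering (real clinics weigh expansion stage and both letter grades together):

```python
# Filter the hypothetical embryo list above: drop mutation carriers,
# then rank survivors by their Gardner letter grades (A is best).
embryos = [
    {"id": 1, "sex": "M", "grade": "4AA", "mutation": True},
    {"id": 2, "sex": "M", "grade": "5BB", "mutation": False},
    {"id": 3, "sex": "F", "grade": "3AA", "mutation": True},
    {"id": 4, "sex": "F", "grade": "6BC", "mutation": True},
    {"id": 5, "sex": "F", "grade": "4AA", "mutation": False},
]

# Crude ordering for illustration: compare the two letter grades only.
viable = sorted((e for e in embryos if not e["mutation"]),
                key=lambda e: (e["grade"][1], e["grade"][2]))

print([e["id"] for e in viable])  # [5, 2] — only #5 is high grade
```

Two filters and the pool of five drops to two, with a single clear front-runner; each additional trait you screen for multiplies this attrition.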

A world in which parents are "shopping" among lots of different traits in their embryos simply doesn't exist today. You would need to create dozens or hundreds of embryos to make this possible, which current IVF methods don't produce. Researchers are exploring new possibilities in this realm, such as generating embryos from stem cells, but these are still highly experimental, and inevitably raise a different set of ethical questions. For now, embryo selection is tightly constrained by biology and numbers.

Problem #2: With polygenic screening, you're unlikely to find a genetic outlier

The “not enough embryos” problem also impacts the usefulness of polygenic screening. Polygenic risk scores are composed of many genetic variants, which interact with each other in unpredictable ways. Statistically, on a bell curve of all possible genetic combinations between two parents, most of your embryos will end up somewhere near the middle (meaning, the middle of what you would have produced anyway through natural conception) for a given trait, with only small variations.

When you only have, say, 5 or 10 embryos to choose from, you only get a few “shots” along this bell curve, and it’s unlikely you’ll uncover anything interesting. Finding a genetic outlier is like sticking your hand in a jar of green jelly beans and rummaging blindly around, hoping to grab the one red jelly bean: it’s mostly wishful thinking.
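A quick Monte Carlo sketch makes this concrete. Modeling each embryo's score as a draw from the parents' natural distribution (a standard normal here, purely as an illustrative assumption), the best of a handful of draws rarely lands far from the middle:

```python
# Monte Carlo sketch: if each embryo's polygenic score is a draw from
# the parents' natural distribution (modeled as a standard normal),
# how far above average does the *best* of n embryos typically land?
import random
import statistics

def expected_best(n, trials=20_000, seed=0):
    rng = random.Random(seed)
    return statistics.mean(
        max(rng.gauss(0, 1) for _ in range(n)) for _ in range(trials)
    )

for n in (5, 10, 100):
    print(n, round(expected_best(n), 2))
# The best of 5 embryos averages only ~1.2 standard deviations above
# the parental mean; even 100 embryos only reaches ~2.5.
```

So with realistic IVF yields, selection buys you roughly one standard deviation over picking at random, which is why dramatic "optimization" claims should be treated skeptically.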

Problem #3: Polygenic risk scores are not set in stone

We tend to think of our DNA as something that is hard-coded into our bodies. Therefore, any information we receive about our genes must be irrefutably true, right? Wrong. Some genetic information is clear, especially once we know what to look for. But for the most part, our understanding of how genetic variants map to traits is complex and incomplete, and these insights are always changing. Genomics researchers can’t just look at our DNA, like some soothsayer blowing the dust off an ancient text, and tell us what they see. Instead, they use statistics to make educated guesses about which genes seem likely to influence a certain trait, and to what degree.

You can think of this as similar to how people use the English alphabet. Sure, it only has 26 letters, and everyone knows what these letters are, but people still find new ways of rearranging them – words, sentences, essays, books – every day, and we’re not going to run out of new combinations anytime soon. Developing a polygenic score is kind of like that, too.

Warren Weaver, a director at the Rockefeller Foundation in the 1950s, once described the difference between problems of simplicity, disorganized complexity, and organized complexity:

  • Problems of simplicity involve only two variables. Monogenic screening falls under this category: “Does this embryo have X mutation?” has a direct and straightforward answer.

  • Problems of disorganized complexity deal with a large number of variables that act in predictable ways, independent of one another. Population studies used in genomics research are an example of this: “What percent of this population carries Y variant?” is a complex question, but can be answered with statistical analysis.

  • Problems of organized complexity involve multiple variables, each of whose significance can change depending on its relationship to other variables. Polygenic screening is this type of problem. A genetic variant might be more or less important depending on which other variants are present (which could amplify or mitigate its impact), as well as other factors like a person’s sex, lifestyle, or environment. And we don’t always know what these dependencies even are.

A polygenic risk score is just a model that captures our best-guess thinking at the time. Like any model, it can have wide error bars. With time, we can probably reduce these errors quite a bit, to the point that we're comfortable making consequential decisions. But we’re nowhere close to that world for every polygenic trait in question.

Many polygenic scores are trained mostly on data from people of European ancestry and don’t always generalize to other populations. Standards are still emerging in this field: researchers don’t all use the same data sets or methods to create these scores. And companies don’t have to disclose how their proprietary scores were developed, nor how accurate or predictive they are.

High uncertainty might be fine if we were dealing with a pure research problem. But when genetic testing companies offer their customers a neatly-wrapped polygenic risk score, it can seem more certain than it actually is. I've had several services give me conflicting insights about my risk for various polygenic traits, based on limited or differing interpretations of the data. If one service tells me I have a high risk for, say, Type 2 diabetes, and another tells me I have a low risk, which one should I believe? How should I change my behavior based on this information? If these services can't help customers understand what they should actually do differently, having access to these data insights might be worse than not having them at all.

For technologies like embryo selection, which utilize this research, I think we should hold ourselves to a high degree of confidence before making decisions about elective traits. We are, of course, always updating our collective scientific body of knowledge2, and sometimes we have to act even when we're not very confident about what we know. I can understand why someone might pursue a treatment with 55% odds of success, for example, if they’re facing a rare and aggressive type of cancer and have no alternatives. But it’s another story to make life-altering choices for ourselves, or our children, based on highly speculative data, when the default choice might have been just fine. At the very least, companies should be transparent about the uncertainty we have around these scores today, so consumers can make informed decisions.

Do these issues mean that none of this technology matters?

Not at all. Firstly, I do think it is useful for parents to get genetic testing themselves before having kids, which can surface any major genetic issues. This is a simple saliva test that you can do at your doctor's office, costs a couple hundred dollars, and is often covered by insurance (especially if you have a family history of disease) or even subsidized by test providers.

Embryo selection, too, has clear value when families have known genetic risks that could significantly impact the quality of a child's life, such as hemophilia or hereditary cancer. In these cases, monogenic screening can mean the difference between having children and not. Despite the panic about designer babies, most embryo screening today is being used for exactly this: avoiding passing severe, life-altering diseases on to one’s children.

Genomics researchers are now trying to extend this research to more common, but still consequential, conditions like diabetes and heart disease. This pursuit also seems valuable to me – such conditions are among the leading causes of death in the United States – even if I'm not yet convinced they’re ready for consumer primetime.

Finally, we shouldn't get too hung up on embryo selection itself, because it represents just one way of solving for a bigger ambition. Gene editing, for example, could someday enable us to correct harmful mutations directly, which would bypass the need to create and choose from a handful of embryos. However we get there, though, I think the underlying imperative is still worth pursuing: finding ways to prevent genetic disease and help more people live healthier lives.

1

If anyone is interested in diving further into this topic, I enjoyed The Genetic Lottery: Why DNA Matters for Social Equality, by Kathryn Paige Harden, which I found both balanced and informative.

2

I'm reminded of the recommendation to avoid giving babies peanut-based foods, which was common advice for my parents’ generation when they were raising me. Years later, researchers realized that avoiding early exposure worsened, rather than reduced, peanut allergies. Now, parents like me are explicitly told to give their babies peanut-based foods as early as they can: the exact opposite of the recommendation my parents received.

]]>
<![CDATA[Playing tai chi with memetic warfare]]>https://nayafia.substack.com/p/playing-tai-chi-with-memetic-warfarehttps://nayafia.substack.com/p/playing-tai-chi-with-memetic-warfareThu, 07 Aug 2025 14:37:26 GMT
I was motivated to write Antimemetics in part because I felt dissatisfied with how we’ve unthinkingly glommed onto memetic warfare as a terminal explanation for why people do what they do. Memes and mimicry might explain some of our behavior today, but they aren’t an excuse to roll over and accept things exactly as they are. This seems like a profoundly defeatist way for us to justify our worst behavior. Aren't we humans creative creatures, blessed with the ability to design our way out of even the most hopeless situations?

I recently participated in a panel in San Francisco about how new technologies gain public legitimacy. My co-panelists were three founders who are doing this quite well: Laura Deming (Longevity Fund, age1, Until), Celine Halioua (Loyal), and Noor Siddiqui (Orchid), whose respective fields – cryopreservation, longevity, and embryo selection – push up against some of the most sensitive frontiers in biotech.

These are not easy topics to make legible, let alone popular, and they’ve historically been met with distrust from the public and media. If one were to follow the conventional internet playbook today, the best way to address such concerns would be to own them. Take a stance. Don't be afraid to step on people's toes. Use polarizing, coded language to strengthen your support base. You get the idea. I am sure that each of these founders was given this advice at some point.

When I reflected upon what I admired about them, however, I realized that they've all succeeded by explicitly not playing into the memetic wars. Instead, they’ve taken care to present their ideas in a way that feels credible and trustworthy. They focus on the science. They highlight real stories and use cases that broaden their appeal. And they have even, subtly, helped rewrite the language used to describe these topics.

Longevity, for example, was commonly called "anti-aging" until the mid-2010s. It was associated with pseudoscientific products, like supplements or experimental therapies, that promised to make people younger. As a scientific pursuit, it was regarded as a vain billionaires’ quest for immortality that was largely out of touch with the needs of average people today. Thanks to the efforts of Laura and others, longevity founders and researchers now talk about "extending healthy lifespan" and targeting age-related diseases, which is much more relatable and actionable. Loyal, too, started with an approachable use case – helping dogs live longer and healthier lives – that could pave the path towards human longevity. They are currently running the first-ever clinical study accepted by the FDA to explicitly study lifespan extension, in dogs or any other species.

Embryo selection has been around since the 1990s, but even just a few years ago, expressing interest in the topic was a career risk. Stephen Hsu, a vice president at Michigan State University, was forced to resign in 2020 due to his interest in genomic predictors of complex human traits (i.e. polygenic screening – a composite score of multiple genes thought to influence a trait – as opposed to monogenic screening, which only looks for a specific mutation). Hsu cofounded the service Genomic Prediction, one of the first to offer polygenic embryo screening. The first baby to be born with their service was featured in a Bloomberg article as recently as 2021. Embryo selection still draws inevitable comparisons to Gattaca, but the conversation has become much easier to have thanks to Orchid, whose tagline is simply "Have healthy babies." Orchid sidesteps the “designer babies” conversation, instead focusing on screening for genetic traits that could significantly impact a child's health, such as heart defects, vision loss, or certain types of cancer.

Cryopreservation is still comparatively new in terms of its trajectory to social acceptability, but Laura was drawn to the field precisely because she noticed she had a reflexive aversion to it, and decided this meant it warranted further investigation. I have some idea of what she was grappling with: in 2015, intrigued by cryopreservation, I asked some friends about the process. A few emails later, I found myself in a video call with a jolly man who reminded me of Saul Goodman, dressed in a brightly-colored buttondown and clashing 90s-style power tie, trying to sell me life insurance. The experience was memorable, but not exactly something that would appeal to a wide customer base. Instead of catering to weird transhumanists who want to freeze themselves in hopes of being revived in the future,1 however, Until (formerly Cradle) created a new conversation that’s centered around specific uses for cryopreservation, such as preserving organ tissue for transplants, or neural tissue for research.


Life science technologies are an interesting counterexample to memetic playbooks, because they’re a place where we must face the dizzying, and often frightening, consequences of humans toiling on the frontier of progress. Our first reaction is often to flinch and look away: this is classically antimemetic behavior, where an idea feels too consequential to acknowledge. Many people don't want to think about the implications of a world where we can live forever, choose our babies' genetics, or be resurrected after death – and understandably so. Better to suppress the thought and push it away.

When trying to spread an idea that touches people's lives, families, and personal values in an intimate way, leaning into memetic contagion can make them feel more alienated and afraid, as if the idea is running away faster than they can safely process it. A different approach is to focus on being relatable, concrete, and credible:

  • Relatable: Regular people can see themselves in the future you propose. Orchid and Loyal both have websites that one could imagine sending to a friend or family member, featuring soft colors, approachable logos and taglines, and photos of real people – whether that’s couples who’ve had babies with Orchid, or dog owners with their beloved pets.

  • Concrete: Rather than trying to evoke a high-minded, Elysian future, identify use cases that would solve specific problems today. While Until doesn’t shy away from their long-term vision of human whole-body reversible cryopreservation, they are starting with organ transplants, and they point out that cryopreservation is already used in IVF and stem cell therapies.

  • Credible: Science – and more importantly, the processes driving scientific discovery and consensus – is treated seriously. In its clinical study announcement, Loyal’s team states that they “made the decision from the start that we would seek FDA approval for our aging drugs. It’s a rigorous, expensive, years-long process that sets a very high bar for both safety and efficacy – exactly as it should be.”


Finally, I've been thinking about how one might apply these learnings to what's happening at the intersection of neuroscience and contemplative science – such as advanced meditation, transcranial focused ultrasound, and breathwork – an area that I'm currently preoccupied with, and which suffers from similar public perception challenges. Many people are skeptical or dismissive, and rightly so, of anyone purporting to offer blissed-out happiness for the one-time price of $999. But that doesn't mean there isn't real science and insight buried amidst the noise, and figuring out how to tease that out properly, and communicate it to others, is a big reason why I'm so fascinated by this space.

In writing about these topics so far, I've noticed that people seem to appreciate a narrative that is relatable, concrete, and credible. A number of people, for example, told me that a magazine piece I wrote about the jhanas was something they could send to a friend or family member and not feel embarrassed by. The report we published last fall about how such practices impact people's lives was written as a research piece, rather than trying to sensationalize the upside. By explicitly taking on the role of the skeptic, one can build a rapport with readers who might have otherwise tuned out the message. (Michael Pollan did this well, I think, with his 2018 book, How to Change Your Mind, which helped psychedelics cross the chasm from taboo to socially acceptable dinner topic.)

There are two failure modes that I've noticed in this world. The first is when the vision is too vague, promising non-specific outcomes like "happiness" or "wellbeing." Its proponents think they are being relatable (who doesn't want to be happy?), but these outcomes aren't concrete, and that makes people suspicious for the same reason that "immortality" doesn't resonate as well as "extending healthy lifespan."

The other, and perhaps less obvious, failure mode I've seen is overcorrecting for credibility and concreteness at the expense of relatability. These narratives are overly scientific: fMRI scans are prized over personal experience, with scientists offering detailed descriptions of neural processes that aren't contextualized and don't have clear practical implications. Use cases are narrow – treating a disorder or neurological disease – but feel disconnected from a bigger vision, and don't inspire the public with what's possible.

I love how Loyal decided to develop products aimed at "longevity for dogs," a premise that might initially catch people off guard, but ultimately attracts their curiosity. Inspired by this prompt, I’ve wondered: what would a "killer app" be for contemplative science?

I was at a dinner recently where the topic turned to sleep issues. It turned out that everyone at the table struggled with sleep, and each person had their own way of dealing with it. There were, of course, the typical melatonin and sleeping pill-type solutions, but also more unusual adaptive strategies. One person admitted that they hadn't slept through the night in decades, so they'd parlayed it into a finance trading habit. Another said they watched YouTube videos of coin pusher machines to fall back asleep (I looked these up, and they are indeed soporific).

I was amazed by how everyone at the table had some private, tortured relationship to sleep. Sleep quality is very much connected to one’s psyche, but telling people to “work on themselves” in order to sleep better is too vague to be useful. Instead, we peddle a bunch of Band-Aid solutions and placebos, as well as the ever-infuriating lifestyle suggestions (“don’t look at screens,” “read a book,” “only use your bed for sleep”). While some sleep problems certainly have physiological causes, such as hormonal imbalances or airway issues, if you're sleeping well, your mind and body are also likely at ease.

Figuring out how to fix sleep in a novel and deterministic way, then, could be a proxy for loftier goals like “happiness” and “wellbeing,” rather than chasing those states in the abstract. Just as GLP-1s are marketed as weight loss drugs, but also show promise in treating other reward system disorders (such as substance abuse and addiction), perhaps sleep could be an entry point that would concretize the potential of contemplative science more broadly.

Beyond life sciences or contemplative science, I’ve noticed that when the public expresses fear of technologies like social media or AI, tech often reacts by doubling down on its ingroup, instead of trying to genuinely engage with these concerns. I think it’s useful to consider why people might fear these things, and try to speak to those fears on an emotional level, instead of using them as an opportunity to engage in memetic warfare. Doing this well seems to have had a counterfactual impact in longevity, embryo selection, and cryopreservation so far. And if the goal is to make exciting new technologies socially legible, rather than scoring cheap points for one’s team, it’s worth trying a playbook that communicates from a place of trust and reassurance to win public approval.

1

This is a self-own: my uncle was the first person to be vitrified by Alcor, the leading cryonics service, in the early 2000s. I guess it runs in the family.

]]>
<![CDATA[All the things I've been doing lately]]>https://nayafia.substack.com/p/all-the-things-ive-been-doing-latelyhttps://nayafia.substack.com/p/all-the-things-ive-been-doing-latelyFri, 30 May 2025 15:02:26 GMT
When people ask me “what I’ve been thinking about lately,” I feel like I’m playing a game of Codenames:

It’s a popular party game where you lay out a grid of random words and have to come up with clues that tie many different concepts together, in hopes of nudging your team to choose the correct words and score points.

If my life were a Codenames grid right now, it would include the following:1

  • Antimemetics

  • Advanced meditation

  • Silicon Valley ideology

Help, codemaster! Make that into a clue, and you’ve got this newsletter update.

I don’t usually like sending link roundups, but I’ve been publishing too many things recently, on too many seemingly-disparate fronts (except to my subconscious, who is somehow weaving these themes together into a single individual called “Nadia”), to make a story out of it.

So…here’s what I’ve been up to lately. You can treat it like a board of charcuterie, and only pick out the bits you like (cornichons for me – hold the mortadella, please). Or, for that matter, like mingling with a bunch of Nadia-shaped guests at a house party, and politely excuse yourself to get a drink whenever you get bored (I won’t be offended).

Antimemetics (the book)

The New Yorker published a lovely book review of Antimemetics this week, written by Gideon Lewis-Kraus. Beyond his coverage of the book, I think it’s a great explanatory piece of the internet’s recent vibe shift, and why neither the left (with its emphasis on gatekeeping) nor the right (with its emphasis on mimetic desire) has offered a satisfying narrative that gets us to where we’re going next.

My internet politics, in a nutshell:

Asparouhova’s basic intuition is that both of the prevailing theories of information on the internet (either that it had to be sanitized and controlled or that it was simply natural for it to remain perennially downstream of charisma) have been wrong….[but] she does not think that all communication can be reduced to a power struggle, she is not ready to give up on democratic values or civilization tout court, and she considers herself one of many “refugees fleeing memetic contagion.”

Ben Davis of Artnet News also wrote a shorter review of Antimemetics, which we discussed at length on his podcast. (As an aside, I’ve enjoyed getting to talk about this book with a wide range of audiences, from our dear technology brothers at TBPN, to the art critics at Artnet, to my fellow illegible internet blogger Applied Divinity Studies.)

BOOK RELEASE! We released the digital version of Antimemetics this week, so if you bought your copy already, it’s now available (you should have received an email about it). If you haven’t bought a copy yet, you can get the Kindle version now on Metalabel or on Amazon.

The physical copies are still heading to our distributors, but they should likely ship to everyone next week. Here are a few sneak preview photos from our designer (see the shiny silver print?), aka “proof of physical copy”:

Advanced meditation

Meanwhile, I’ve been plugging away in relative peace and solitude on my latest topic of fascination: advanced meditation and the remarkable, intense states we can engender using only our minds.

I started working with the Meditation Research Program at Harvard University/Mass General, led by Matthew Sacchet, a neuroscientist who’s doing, in my view, some of the most critical research in this space today. We are currently developing an education initiative that I’m excited to share more about in the coming months.

Matthew is an absolute beast at publishing papers2, but what primarily attracted me to his work is that he, too, cares about translating scientific and contemplative knowledge into meaningful and actionable insights for the average person. As you might imagine, it’s not easy to find people like this in meditation land, where many conversations quickly unravel into Buddhist poetry and start to feel more like a hazing ritual.

I also recently published a few bits of research from my work last year with Jhourney, a meditation retreat company. We spent some time trying to identify factors that are correlated with meditation success. Beyond the more obvious, demographic-type factors (like social media or prior psychedelic use), we uncovered a few interesting behaviors. One is about how people who overestimate their abilities are less likely to access deeper states, and the other is about how challenges with feeling pleasant emotions in the body (such as alexithymia) are especially correlated with difficulties.

I’ve had some good conversations about framing meditation as a trainable skill – if so, what would the underlying pedagogy look like? It may seem obvious, but many meditators push back on the idea that their skills can be measured or assessed in any rigorous way. And yet, meditation experience (as defined by e.g. number of hours meditated) doesn’t seem to correlate with ability, so I think there’s room for more experimentation here – which is why I’m excited about my work with Matthew’s lab.

Silicon Valley’s political ideology

Last but not least, I published a piece about Silicon Valley’s political ideology in American Affairs. The impetus was a seminar I co-hosted with Tim Hwang last fall – days before the presidential election – where we brought some people together to discuss where the “Silicon Valley ideology” is going and where it’s been.

In the American Affairs piece, I revisit Richard Barbrook and Andy Cameron’s iconic “Californian ideology” essay from 1995 to argue that Silicon Valley is not libertarian. Nor was it ever classically liberal. Nor is it currently undergoing a “rightward shift.” Tech is its own, strange, undefinable thing (apropos to this newsletter?) that chronically suffers from a lack of allies, as evinced by its getting dumped by the Democratic Party in the 2010s, and its current, growing dissatisfaction with the Republican Party in the 2020s.

It’s hard to write timeless essays about rapidly changing topics for a quarterly magazine, and there’s a lot I wish I could have expanded on in this piece, given how DOGE (and tech’s increasingly skeptical relationship to it) has unfolded, but I’m glad for the opportunity to situate our current moment in historical tech commentary. I hope it helps address some of the outdated notions that tech’s behavior can be attributed to merely being libertarian, obsessively capitalist, or autistically engineer-brained.

1

This Codenames grid does not include the other thoughts that are presently crowding my headspace, such as “My toddler loves to brush his teeth, but why does he only brush one side of his mouth, and how can I convince him to brush the other side without causing him to scream and throw himself on the ground in blatant protest of the mommy dictatorship?” and “How do I get the furniture store in Italy to fill out a U.S. customs declaration of the plant materials contained in the armchairs I ordered three months ago, along with the scientific names of each plant, so that FedEx stops calling me every day and doesn’t send my chairs back to Europe?”

2

As opposed to me, the wretched refuse of the uncredentialed, who briefly entertained the idea of submitting my research to a scientific journal earlier this year, and ended up coated, sticky, and shivering in the horrendous oil spill of its sprawling and indecipherable requirements. I ran, like an Exxon penguin victim in need of a sweater, to the nearest iceberg to clean off my feathers, vowing never to venture within 100 yards of journal publishing again.

]]>
<![CDATA[Introducing: Antimemetics (my new book!)]]>https://nayafia.substack.com/p/introducing-antimemetics-my-new-bookhttps://nayafia.substack.com/p/introducing-antimemetics-my-new-bookWed, 26 Mar 2025 14:54:49 GMTIt's easier than ever to share ideas. But in recent years, some of the most interesting ideas are burrowing deeper underground, circulating quietly among group chats and texts instead of public social channels.

I've watched these changes unfold while thinking about a book I read a few years back, called There Is No Antimemetics Division (or TINAD), a horror science fiction book by the pseudonymous author qntm. In TINAD, the world is overrun by antimemes, or ideas that actively resist being remembered or shared – despite their importance.

Antimemes exist in real life. They are a broad category of self-censoring ideas that includes taboos (things that everyone knows, but can't be said out loud), cognitive biases (things that we can't see about ourselves), and memory lapses (things we know we ought to be doing, but keep forgetting to prioritize). Antimemes grab our attention in the moment, but we struggle to keep them top of mind. This sort of forgetting occurs both collectively (e.g. taboos) and individually (e.g. cognitive biases).

I read TINAD shortly after its release in 2021. Afterwards, I started collecting scraps of thoughts about antimemes in a text file that I kept on my desktop, which I named antimemetics-notes.md. At some point, this file grew large enough that I thought I might try to write a blog post about it. I tried and failed at this task many times, partly because it felt like the world I wanted to write about was still evolving, and partly because – true to their name – antimemes are hard to think about directly for too long.

Since TINAD was first published, more and more people I know seem to have found their way to its pages. The book (and its author, qntm) is one of those wildly successful self-publishing stories: while it wasn't published or promoted through conventional channels, enough people fell in love with its premise that qntm recently signed with Penguin Random House to re-release it for a wider commercial audience.

I think qntm is one of the best science fiction authors of our generation. Still, I longed for a proper nonfiction treatment of antimemes. It seemed that such a consequential idea needed an analysis that was grounded in the real world, not just a fictional one. While qntm first coined the term "antimeme" back in the late 2000s, the concept has found fresh relevance in recent years, I think because it helps us understand the next phase of the social web: one that’s driven by private and semi-private online spaces, not just highly public ones.


Memes may dominate how we communicate online, but in the last few years, their rapid spread has also necessitated the birth of a parallel, shadow ecosystem, where the best ideas are paradoxically harder to find. Not everyone wants their ideas to go viral anymore. One friend writes a newsletter that he asks people not to share around. Another publishes their writing in a Google Doc, which is sent only to a small group of trusted people. My new favorite social app is a private, Twitter-like app that a friend created just for his friends.

Chasing virality is no longer a sufficient motive to explain why we do (or don't) share what we do online. But we still don't have the vocabulary to describe what we’re doing instead. Group chats have become a major platform for social communication in the last five to ten years, but hardly anyone has written about them, in large part – I think – because they can't. Group chats are self-censoring: unlike the public social web, we can only know what we've experienced, or what others are occasionally willing to share with us. The juiciest gossip is reserved only for its members (and those who accidentally stumble into a highly classified war chat).

I also think a lot of people, especially those who grew up in the Web 2.0 era, are weighing conflicting emotions: wanting to withdraw from the chaos of the public social web, while also feeling vaguely guilty about checking out. It's much more comfortable to live in our group chats and DMs, but doesn't it feel a little strange to watch the world burn from the safety of our miniaturized abodes? Shouldn't we....do something? But what?

I've found myself wishing for a narrative that didn't just feel like complaining about social media today, nor pining for a simpler time when we didn't have to beg for likes and shares. The past is the past: we need to let it go. Instead, I wanted to take a hard, unflinching look at where we are today, and try to figure out how to make meaning from our lives with the constraints we currently have.


I wasn't the only person thinking about the future of the social web. In 2019, Kickstarter and Metalabel co-founder Yancey Strickler published an essay, titled "The Dark Forest Theory of the Internet," about why he felt increasingly uncomfortable interacting in public spaces online. The essay attracted a lot of attention, and eventually Yancey formed The Dark Forest Collective with a few likeminded collaborators, including Venkatesh Rao and Maggie Appleton. They published an anthology of their writing early last year. The first print run sold out within 72 hours. A few months later, they announced that they were looking for more ideas to publish under the same label.

I hadn't properly met Yancey before, but it turned out we were admirers of each others' work – making slow, lazy circles around the same cluster of ideas. And that's how my half-abandoned Markdown file of notes began preening itself into the book I'm sharing with you today. It’s called Antimemetics: Why Some Ideas Resist Spreading.

I didn't really mean for this to turn into a book. I wasn't sure what its final form would be when I sat down and started writing in May of last year, from the warm and sun-filled loft bedroom of a rented summer home in Manhattan Beach. Mostly, I wanted an excuse to think about antimemetics. But the draft grew and grew, and it sprouted legs, and it began to wobble and toddle around, and a few weeks into dissecting and organizing my notes, I began to develop a fondness for the alien creature that had taken up residence in my head and heart. I coaxed this manuscript – an improbable mess of ideas that had so many false starts over the last three years – to grow at the same time that I watched my infant son grow: rolling onto his stomach, rocking on his hands and knees, crawling and climbing up the stairs, pulling himself up and cruising from couch to coffee table.

In December, my son finally stood up and walked seven steps on his own, before he tripped and fell. But – realizing what he had done, and the possibilities he had just unlocked – he stood up and did it again, and again, and again, shrieking and rolling and laughing with joy and he stood, and walked, and fell, and stood, and walked, and fell, and stood, and walked again. A week later, I turned in the final draft of my manuscript for Antimemetics.


Publishing with the Dark Forest Collective felt like the right way to release this book. Antimemes are delicate and messy and hard to look at; they would be swiftly crushed in the hands of a publisher trying to juice it up and make it "go viral," like a flower ground under the heel of a shoe.

I had thought of this book as a "fun summer project," but as my writing wore on into the winter, I realized that antimemes were an overarching meta-theme of much of my life and interests: whether that's the unseen costs of infrastructure, the need for privacy in our online lives, how talent ecosystems are forming from the memetic spread of apocalyptic thinking, how bureaucracies keep ideas safely hidden, or how our attention is being hijacked by doomscrolling. All of these ideas make an appearance in Antimemetics.

And, of course, this book is in some sense a sequel to my first book, Working in Public: The Making and Maintenance of Open Source Software. (While writing Antimemetics, I joked to myself that I would call it Working in Private.) I wrote that book partly because I felt that what was happening to open source developers – who, in the face of overwhelming demand for their attention, were withdrawing from public online spaces – was going to happen to all of us. I even referenced the Dark Forest thesis in my initial book announcement.

I barely told anyone I was working on this book. This makes Antimemetics itself a sort of antimemetic experiment. I am putting my faith – as always – in my writing speaking for itself, and earning its right to spread from person to person. So, if I've written something here that resonates with you, please share it with your friends – and buy the book!

Antimemetics is now available for pre-order:

Pre-order now

The first edition will be sold exclusively on Metalabel; we’ll make it more widely available soon after. We plan to release Antimemetics in late May of this year.

Thanks, as always, for reading and supporting my work.

]]>
<![CDATA[New report: How altered states shape our minds]]>https://nayafia.substack.com/p/new-report-how-altered-states-shapehttps://nayafia.substack.com/p/new-report-how-altered-states-shapeWed, 20 Nov 2024 16:54:49 GMT

When I first pitched a magazine piece about a meditation subculture surrounding the jhanas1 at the beginning of this year, there hadn’t been any media coverage of the practice yet. The Twitter discourse mostly centered around two questions: 1) Are the jhanas even real? and 2) If you can self-induce extreme bliss, why would you do anything but “press the pleasure button” all day?

Fast forward to the end of the year, and the jhanas have now been covered in The Atlantic, Vox, Time, and Men’s and Women’s Health. Instead of debating whether they are real, the question I hear now is: “How can I try it?” Or, “I went on a retreat / tried the instructions in your blog post, and some crazy things happened!” I’ve been especially surprised to hear from people I would’ve never expected to hear from (read: not meditators or “spiritual” types) – it’s been a cool way to get to know some of my acquaintances on a deeper level.

Now that we’ve established that jhanas exist, my focus has turned to understanding their impact. Jhanas feel like self-induced psychedelic states, so – do they offer similar benefits?

Answering it, however, turns out to be a challenging research question. Firstly, there aren’t that many researchers studying the jhanas yet, and frankly, academia doesn’t move as fast as my curiosity. Secondly, in the few published studies I’ve read about the jhanas, recruiting participants is a major barrier, because jhanas are a skill that must be taught, and there just aren’t that many people who’ve tried them yet. And thirdly, because jhanas occupy a weird space between “meditation” and “psychedelics,” it’s not even clear which type of impact to look for, much less how to measure it. (Mindfulness and MDMA might both change your outlook, but in very different ways.)

A big, open research question with little prior art to draw from? Sounds fun!

I decided to team up with the good people at Jhourney, a retreat company that teaches the jhanas to beginners – and therefore has unique access to practitioners – to tackle these questions. We asked their alumni how their lives have changed since going on a retreat. You can read our findings in a report that we just published today:

Read the report

I’ve also summarized our key insights and takeaways in this Twitter thread.

As far as I know, this is the most extensive report yet on the impact of the jhanas on meditators. There’s a lot here that needs to be validated by research, but my initial goal was to help us figure out the right questions to even ask. (To that end, this report has already helped inform another project we’re working on with an academic lab – more on that soon.)

1

If you need me to jog your memory: the jhanas are a series of intensely altered, psychedelic-like states that are induced solely through meditation, which I first wrote about for Asterisk magazine.

]]>
<![CDATA[Protecting our attention]]>https://nayafia.substack.com/p/protecting-our-attentionhttps://nayafia.substack.com/p/protecting-our-attentionTue, 13 Aug 2024 16:03:33 GMTI was rather confused when sociologist Jonathan Haidt’s book, The Anxious Generation, came out earlier this year, which examines the harmful effects of phone and social media use among children and teenagers. Haidt posits that mindless screen time is perhaps not-great for us, and we should take the issue more seriously.

It wasn’t Haidt’s thesis that confused me, but the defensive response from many of my peers, who alleged that he was fearmongering – much as critics once wrung their hands about books, television, video games, or any other form of new media.

This was cognitive dissonance for me. While there are many benefits to social media – I owe my career, many close friendships, and even my marriage to Twitter – doomscrolling is obviously bad. Nobody feels good after scrolling on their phones for hours, and this simple observation alone seems to reasonably justify a deeper examination of social media’s effects on our psychology. So, why were so many people in tech either reluctant to engage with Haidt’s thesis, or actively deriding it in public?

We need to look beyond the words being said, and instead understand the motivations – and scars – that make it difficult for many people in tech to acknowledge a thing that everyone else seems to find plainly obvious. It feels like saying the quiet part out loud, but I think it’s important. Tech has so much power and influence to wield here, if only we can bring ourselves to look a very difficult truth in the eye.

I’ve increasingly come to feel that the destruction of our attention – thanks, in large part, to the combination of smartphones and social media – is one of the biggest threats we face as a civilization, as it compromises our ability to make sustained progress on anything worthwhile. Most people are so immersed in it that they can’t see how bad it is; you can’t ask someone to think clearly when their core operating system is compromised.

So, I wrote a piece for the newly-launched Arena1 magazine about why tech is so avoidant of the social media debate, from a historical and psychological lens, which is also available as an audio episode (Spotify) that I narrated myself. (It’s kind of a fun production thing, complete with sound clips and spooky music.)

Read it now

Examining the effects of social media has become the domain of regulators and staunch critics, and joining the fray risks being painted as an anti-tech Luddite. But I care about this topic because I love tech and all that it hopes to promise the world: a better future for everyone. I want us to be awesome, and we can’t do that when we’re trapped in a loop of perpetual distraction. Running on a hamster wheel is not the same as running a marathon.

Despite being 20ish years into social media, I think this conversation is still so early. (We’re apparently still debating whether it’s even a problem!) There’s room yet for tech to demonstrate leadership and show people how to live peaceably with powerful technology at their fingertips. If we choose to take it seriously.

1

Arena is a new effort from Max Meyer and Ginevra Davis: a print and online magazine about technology, capitalism, and the USA. You should subscribe!

]]>
<![CDATA[What is meditation for?]]>https://nayafia.substack.com/p/what-is-meditation-forhttps://nayafia.substack.com/p/what-is-meditation-forThu, 27 Jun 2024 18:27:59 GMTHello! You may recall that a few months ago, I went on a meditation retreat as part of research for an article I was writing about the jhanas, and came out of it with my mind blown. (Jhanas, to jog your memory, are altered states of consciousness that can be accessed through concentration alone.)

Well, I’m still doggedly on the jhanas train. I went back to try it again, and accidentally discovered how to voluntarily turn my consciousness on and off, which occasioned an experience so profound that I decided not to write about it.

Because everything I could find online about the jhanas is heavily shrouded in spiritual language, I wrote up a set of instructions on how to get into jhana that is pragmatic and straightforward. These instructions were well-received, and to my delight, several people on Twitter reported that it worked for them on their first try. (I’m not sure why I didn’t send that post to this list. I guess I was sort of shy about it, and I felt more comfortable lobbing it into the internet void vs. sending it to a group I’m more acquainted with.)

There were, however, some seasoned meditators who raised an eyebrow at how quickly I progressed through all eight (or nine) jhanic states, as these experiences are expected to take much longer to attain: months, years, a lifetime. My self-reported tally was around 20 cumulative hours.

Is this a red flag? Should they be suspicious of my claims, based on the number of hours that I practiced? Anecdotally, many teachers say that it’s not unusual for inexperienced practitioners to find success with the jhanas, nor for experienced meditators to struggle with them.

I decided to team up with Jhourney – the retreat company that taught me the jhanas – to figure out whether there is any relationship between meditation experience and success with the jhanas. We looked at an anonymized sample of meditators to find out.

Read it now

TLDR: We did not find any correlation between meditation experience and success with the jhanas. This finding calls into question why jhanas are described as a “meditation practice” at all. And, doing this research raised some interesting questions for me regarding “What does it even mean to be an experienced meditator?” – and then, “Is meditation a desirable goal in itself?”

My friend recently shared the phrase “fingers pointing at the moon” with me, which means that sometimes we get so caught up looking at the fingers that we ignore what they’re pointing towards. I think meditation is a finger, and we don’t know what the moon is yet. Meditation is one method for, say, cultivating attention, but someone who meditates every day is not necessarily adept at that skill. They’re just someone who meditates often. We shouldn’t conflate “meditation experience” with the underlying skills it develops.

A lot of people have asked me why I think I progressed through these states so quickly, and my answer has been: I don’t know!1 But I think searching for the answer could uncover some interesting theories about not just how the jhanas work, but the mind itself. I’ve shared some early ruminations on my evolving framework towards the end of the post, which I’ll be working with Jhourney to validate with more research. I’m excited to share what I learn.

1

My best theory so far is that, even though I don’t formally meditate, I am very lucky to spend pretty much all day in a focused, creative state, thanks to the nature of my work (writing and research). And, as a new parent, I have a cheat code for sparking joy: thinking about my baby!

]]>
<![CDATA[Dangerous protocols]]>https://nayafia.substack.com/p/dangerous-protocolshttps://nayafia.substack.com/p/dangerous-protocolsThu, 09 May 2024 18:15:24 GMTAfter the early crypto boom of 2017, I spent a few years dabbling at its edges. I was deep in the open source world then, and curious about crypto, which seemed to be open source culture embodied in an economy.

It was fun to be surrounded by people who shared my intellectual interests, and I had a lot of questions for everyone I met. One of the earliest questions I remember asking someone was about the governance of distributed file hosting. Imagine a file that’s not held in one single place, but split into a thousand pieces, which are held across many different providers. No one provider can take down the file. This is great for many reasons – security, reliability – but immediately raised other questions for me.

“So, say a vindictive person uses this protocol to publish compromising photos of their ex. Who does the ex contact to take down the photos?”

“Well,” the person I spoke to paused thoughtfully. “I guess there would be a governance council of all the hosting providers, and they’d have to establish a policy on how to proceed.”

I made a face. “You’re asking all the right questions,” he added. “These are the things we’ll have to figure out.” But the years passed, and I never got a satisfying answer from anyone.

For advocates of the decentralized web, protocols are often viewed as a good thing – a liberating alternative to, say, platforms like Meta and Twitter. But decentralization seems to be at odds with the historical purpose of protocols, which is to simplify communication. Most people would rather have one identifiable adversary to grapple with, instead of a thousand faceless ones with middling levels of motivation: or worse, not know who their adversary is at all.

As the culture wars ballooned, I realized that protocolization, and its relentless tendency towards simplification, had much bigger implications. It seemed that everything – how we thought, how we acted, who we dated, the opinions we held, even our intellectual pursuits – had become protocolized, and always with a tinge of urgency. Understanding protocols, then, was a way to understand our present cultural moment.

In the 1930s, the rise of bureaucracy helped us simplify the information boom that came out of the Industrial Age (Protocolization 1.0). Similarly, today’s crisis mindset is an attempt to simplify a boom of opinions that have proliferated in the Digital Age (Protocolization 2.0). LLMs are the expected output of this era: an attempt to compress human creativity into something predictable.

For the Summer of Protocols research group1 last summer, funded by the Ethereum Foundation, I decided to explore protocols as systems of control. The result was an essay on dangerous protocols: how protocols usurp our agency, and how we can subvert them.

Read it now

Just as bureaucracy – despite its downsides – propelled humanity forward by making it easier to organize at scale, I think the crisis mindset emerged as a way to help people prioritize their attention. But if everything is running on autopilot now, it’s especially critical for us to swim into the machine and identify the kernel of that-which-makes-us-human that enables us to steer the ship.

I’m still trying to figure out how to navigate, and make the most of, this newly protocolized world as a lone brain floating through it. If you feel similarly, you might enjoy this piece. Let me know what you think!

1

For those who like to peek behind the curtain, you can also read my working notes on the researching and writing of this piece here.

]]>
<![CDATA[Exploring altered states of consciousness]]>https://nayafia.substack.com/p/exploring-altered-states-of-consciousnesshttps://nayafia.substack.com/p/exploring-altered-states-of-consciousnessMon, 15 Apr 2024 20:28:51 GMTWhen researchers were trying to understand the effects of LSD in the early 1960s, a team in Toronto strapped and blindfolded their subjects in sterile rooms before administering a dose and walking away. (You can imagine how that went.) Years later, we learned the importance of set and setting – one’s mindset, and one’s environment – and how they influence the nature of highly subjective experiences.

Last year, as I prepared for my own momentous event – unmedicated childbirth – I was surprised to learn that giving birth operates similarly, even though we don’t talk about it that way. Psychology strongly influences physicality: when we are afraid, our bodies seize up, and birth slows down. When we are calm, our bodies relax, and birth can progress smoothly.

These observations seem obvious now that I’ve been through it, but at the time, this was news to me. Because birth manifests so clearly in the body, I had assumed it was something that happened to me – like getting the flu, or arthritis – rather than something that could be influenced, to some degree, by my mind. (By contrast, positive thinking has only a marginal impact on illness or degenerative disease.)

In researching birth, I couldn’t help but notice the parallels with psychedelics. Both these “exotic states of consciousness,” as the Qualia Research Institute might call them, suggest that there are still vast oceans to explore inside our brains, which influence not just fuzzy, hard-to-measure things like mood or motivation, but in some cases – as with birth – vastly different medical outcomes.

Around the same time, I took fresh notice of growing Twitter chatter about the jhanas, a series of eight altered states of consciousness that are accessed via a special form of meditation. Practitioners claim that they can enter blissful, euphoric states that are comparable to MDMA or psychedelics – without the use of substances.

Like many others, I thought these claims were interesting, but likely exaggerated. But my recent learnings tugged at me. It took decades for researchers to understand how psychedelics worked: to differentiate them from chemically-induced psychosis, to identify that they had therapeutic benefits, and to be able to discuss them openly. And, despite its many advancements, the United States medical establishment still treats childbirth – a phenomenon that occurs thousands of times per day – as a purely physical, rather than psychological, experience. So…why not the jhanas?

When Asterisk magazine approached me about contributing a piece to their upcoming issue, I decided to use the opportunity to understand what the jhanas were about. I expected to write a piece that was a more anthropological, observed account of a strange Twitter subculture. I did, in fact, write that piece, and initially submitted it to my editors. But in the course of my research, I met Stephen Zerfas, who co-founded a company, Jhourney, that aims to teach the jhanas to novices. Stephen invited me to one of their upcoming retreats, which, after much deliberation, I agreed to attend.

I am not a meditator. (Even after experiencing the jhanas, I still have no desire to develop a meditation practice.) Nor am I a “spiritual seeker” of the sort you might find at Burning Man or a Vipassana retreat. Despite my lack of experience, within my first hour of practice, I found myself in the equivalent of an MDMA roll. With ten hours of practice, I had an out-of-body experience. Within fifteen hours, a psychedelic experience that rivaled what might be had on LSD. All of this took place using only concentration.

If you’re raising an eyebrow right now, I must once again stress that I, too, did not believe this was a thing. I arrived at the retreat feeling rather silly for being there. I left astonished, and perplexed, as to why barely anyone has studied the jhanas at all. Public accounts of jhanic experiences are widely dismissed or snickered at by skeptics. I’ve found only three notable academic studies about the jhanas that were published in the last ten years. These studies used fMRI and EEG data to examine whether the phenomenon was real – and they did, in fact, find that their subjects displayed unusual brain activity, comparable to being asleep or in a coma while conscious – but conspicuously missing still is the secular language needed to describe how the jhanas work, how they are accessed deterministically, and their impact on practitioners’ minds. Strangest to me, as an outsider, is that the jhanas are still firmly lodged in meditation circles and norms, rather than studied as a form of behavioral therapy.

I am less interested in making the argument that everyone should try the jhanas. But it seems to me that if people can access these experiences with relatively little mental effort – and to do so legally, for free – more ought to know that such a thing exists. At the very least, shouldn’t there be more than three published studies about it?

If anything I’ve said so far has caught your attention, then I’d love for you to check out the piece I published in Asterisk today. I think it’s a good primer on the jhanas and their history, as well as a narrative account of my experience at the retreat.

Read it now

Finally, if you’ve followed my work for a while, you know I love a good unexplored landscape, and – against all odds – the jhanas have grabbed my attention like a dog with a bone. If they’re interesting to you as well, I’d love to talk about it, particularly from a qualitative research and storytelling perspective. Drop me a line!

]]>
<![CDATA[Why effective accelerationism matters]]>https://nayafia.substack.com/p/why-effective-accelerationism-mattershttps://nayafia.substack.com/p/why-effective-accelerationism-mattersMon, 12 Feb 2024 15:48:28 GMTNearly a year ago, an editor reached out to me about writing a piece on effective accelerationism, or e/acc. Here's what I said in response:

> I've casually tracked their existence but tbh find them to be not dissimilar from other techno-optimist movements, beyond aesthetics and rhetoric

> I think my primary q with e/acc is "what cultural need are they filling that doesn't already exist"

At the time, I didn't have the bandwidth to explore this question, but I picked it up again sometime in the early fall, as I saw all the predictably bad takes come out. E/acc as a cringey tech bro fantasy. E/acc as a "problematic" Silicon Valley billionaire ideology. E/acc as techno-optimism-and-here's-why-that's-dangerous. Even the slightly better pieces seemed to only consider e/acc in the context of the AI ethics debate, though when I read through the tweets of the founders, it seemed there was more to it than that.

I also noticed an unresolved tension between people trying to interpret e/acc as a novel philosophy, which I sensed it was not (but that doesn't make it uninteresting!), and e/acc spreading primarily through memes and shortform. (E/acc is the exact inversion of the rationalist community in this way.) Journalists frequently cited an official e/acc Substack with a few posts, mostly abandoned. It was an easy source to point to, given that persistent artifacts were hard to come by. But given e/acc’s clear penchant for shortform, this didn't seem like a reliable source of truth.

The one longform format that e/acc's founders did seem to enjoy were Twitter Spaces: in other words, talking. So I reached out to the founders to learn what e/acc was all about. This led to a series of phone conversations with Bayeslord, one of the co-founders, from which I tried to deconstruct what had spurred them to start the movement in the first place.

I was pleasantly surprised to find that Bayeslord was humble, thoughtful, and intensely principled. From our conversations emerged a clearer picture of e/acc: not as a deep philosophy, not as a stupid meme, but as a reaction to ten years of doom and gloom in Silicon Valley. E/acc, it seemed, was an attempt to rediscover one's culture and preserve the tradition of technology.

The outcome of these conversations was a piece that touches on the recent cultural history of Silicon Valley, and how it's playing out in the AI ethics debates, which came out in The New Atlantis today.

Read it here

I hope you enjoy!

]]>
<![CDATA[The Hypnotized Society]]>https://nayafia.substack.com/p/the-hypnotized-societyhttps://nayafia.substack.com/p/the-hypnotized-societyThu, 16 Nov 2023 18:50:56 GMTI. Hypnosis

If you're preparing for an unmedicated childbirth, chances are you'll encounter a method called hypnobirthing, meant to help women better manage their psychology during the experience. Hypnobirthing is exactly as it sounds: using guided self-hypnosis to rewire one's brain to perceive pain as pressure, fear as joy, anxiety as excitement, and thus facilitate an "easy, comfortable childbirth."

I'd never been hypnotized before. My only prior exposure was sitting in the audience of those flashy stage performances. Though I've had the opportunity to be hypnotized myself, the thought of being mesmerized into doing silly things before a crowd has never quite appealed to my subconscious self.

The prospect of giving birth without medication, however, was motivation enough to embrace the journey of letting go, in the comfort of my home. So, for six weeks, I read a chapter of my hypnobirthing coursebook and listened to the accompanying audio tracks. I didn’t know what hypnosis would feel like, but I imagined being completely zonked out, my mind vacated, my body jerking around, zombielike, under the dictation of some sober mastermind.

My hypnobirthing class disabused me of these notions in my first week. Its authors claim that the hypnotic state is much more common than we think – in fact, we enter hypnosis multiple times a day:

Have you ever had the experience of driving along the highway and suddenly realizing that you passed your exit several miles back? Or been so caught up in a book or movie or video game that you don't even realize that someone has been speaking to you for the past several minutes? THAT is hypnosis...So you see, when a hypnotist guides you into hypnosis, you are not being asked to experience anything strange or that you haven't experienced before.

II. Bicameral Mind

Julian Jaynes, who introduced the concept of bicameral mentality in his 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, proposed that humans only became conscious 3,000 years ago, and that this was not a biological evolution, but a learned behavior. Before we became conscious, the right hemisphere of our brains (interpreted as hallucinations, or a god) "spoke" to us, while the left hemisphere listened passively to its commands.

During this time, humans had no notion of themselves as individuals: we were simply servants to the voices inside our heads. However, this was not a primitive state by any means. Jaynes posits that entire civilizations were built with the bicameral mind, including ancient Greece, parts of Mesopotamia, and Egypt. "These gods...pushed men about like robots and sang epics through their lips...They were noble automatons who knew not what they did."

Jaynes devotes an entire chapter to hypnosis, which he files under "vestiges of the bicameral mind in the modern world." During hypnosis, the subject is lulled into an unconscious state, while simultaneously listening to an external guiding voice – in this case, the hypnotist. "He does not introspect as we do, does not know he is hypnotized, and is not constantly monitoring himself as in an unhypnotized state."

Jaynes' theory didn't make much sense to me – how can one be asleep, but awake, in all their daily activities? – until I finally experienced hypnosis myself. In my very first session, I slumped into a state that felt like sleeping, while listening attentively to a woman's trancelike voice. My thoughts would occasionally drift to other places, but her voice would always bring me back. I was there, but I was happy to be a guest in my own mind. At the end of each session, I'd re-awaken at her count of three, feeling energetic and refreshed.

There is something strangely addictive about hypnosis, like doing a drug. Hypnosis feels like taking a vacation from my conscious brain. It feels good to give myself up to this disembodied voice, who has only positive suggestions for my life. It feels good to follow and not always have to lead, to be spared the million little micro-decisions I make at every other point in my life.

Like the authors of my hypnobirthing class, Jaynes uses the car metaphor to describe what it feels like to be in a trancelike, bicameral state:

In driving a car, I am not sitting like a back-seat driver directing myself, but rather find myself committed and engaged with little consciousness. In fact, my consciousness will usually be involved in something else, [such as] a conversation with you...My hand, foot, and head behavior, however, are almost in a different world....Now simply subtract that consciousness and you have what a bicameral man would be like.

Both Jaynes's book and my hypnobirthing class were published well before the dawn of social media. If their authors were writing today, I wonder whether doomscrolling would've replaced driving as the canonical example.

III. Doomscrolling

I can recognize, now, when I'm entering hypnosis at various points throughout my day. It's that "zoned out" feeling, where my mind is thinking one thing and my body is doing another.

The positive version of this sensation is what’s been trendily referred to as flow state. Writing often puts me into that state; so does working out, cooking, or listening to certain types of music. I'm not quite operating consciously, but my body knows what to do, and the sensation is peaceful and relaxing. In a deep flow state, I can operate for twelve hours straight without looking at the clock – far longer, and feeling far more refreshed, than when I've consciously toiled at the same task for, say, two or three hours.

Then there are times when I'm zoned out in a neutral sense, such as watching TV, playing video games, or doing some other mindless activity at the end of a long day. I'm already mentally drained, I don't want to process any more information, so I let my body take over, while my mind takes a break. I don't usually come out of this state feeling refreshed, but I don't necessarily feel bad about it, either.

Finally, there are times when I'm zoned out and feel negatively afterwards, and that's when I'm trapped in a doomscrolling cycle. My body is doing something I don't want to be doing. I didn't choose to enter this state, the way I chose to write or listen to music. I kinda just found myself there when I picked up my phone to do…something else, now long forgotten. I want to stop, but I can't. So I keep scrolling, until finally my active brain grabs ahold of the controls again and forces me to close the app, breaking the hypnotic loop.

Doomscrolling is often likened to addiction, but I think addiction only describes the allure of the activity. Addiction is something that happens to individuals; it destroys lives. Doomscrolling as hypnosis, by contrast, reveals something more about its dangers – not just at the individual level, but at the collective one.

IV. The Hypnotized Society

As far as I can tell, there are three stages to being hypnotized. The first is simply breathing, relaxing, and clearing one's mind, to begin to detach oneself from the outside world.

The second is entering a deep, trancelike state, which is achieved by a "trigger" that you learn to create, repetition and countdown exercises, and "testing" (ex. trying to open your eyes and being unable to), which sends you deeper into hypnosis. I don't know what portion of my hypnosis sessions is devoted to these first two stages, but I would subjectively guess that half or two-thirds is just priming my brain for suggestion.

The third part of hypnosis – once your brain is lulled into a pliable, obedient state – is actually receiving external suggestions. If parts one and two are like prepping a patient for surgery and giving them anesthesia, the third part is where the surgeon (i.e. hypnotist) actually gets to work, rewiring her patient’s body and mind, telling them how to think and feel, before closing everything up again.

If doomscrolling is a form of hypnosis, it's not just the fact of being mesmerized, or “addicted,” that's concerning. It's that we're putting ourselves into an unconscious state that makes us highly receptive to the ideas and information we're ingesting, and that we're ingesting it from rather...unclean sources.

This is, of course, not a new concern, but it's worth considering how our prescribed solutions might change if we understand ourselves as not truly conscious in this state. Encouraging "responsible" social media use, for example, means nothing to someone while hypnotized, who has no perception of self and cannot introspect upon his or her supposed "responsibilities."

The other, bigger question is: if everyone is constantly tethered to their phones and feeds, what percent of our time these days is being spent in a prolonged unconscious state? And how does that affect our resilience as a society?

Returning to the analogy of our bicameral man in the driver's seat, Jaynes imagines what would happen if such a person suddenly encountered an unfamiliar situation on the road (emphasis mine):

The world would happen to him and his action would be an inextricable part of that happening with no consciousness whatever. [If] some brand-new situation [were to] occur, an accident up ahead, a blocked road…behold, our bicameral man would not do what you and I would do, that is, quickly and efficiently swivel our consciousness over to the matter and narratize out what to do. He would have to wait for his bicameral voice which with the stored-up admonitory wisdom of his life would tell him nonconsciously what to do.

Bicamerality can be more powerful than consciousness – as hypnosis demonstrates – but without clear dictation, it is also brittle, limited to operating within a certain scope of familiar scenarios: after all, the "god" we’re listening to is not really a god at all. Jaynes believes that these limitations are what led to the breakdown of ancient bicameral civilizations, whose members were unexpectedly thrust into new situations that they didn’t know how to deal with. Their gods failed them, and rather than adapt as a conscious person would, they simply let their lives waste away, like helpless children.

Jaynes' depiction of the ancient Greeks as "noble automatons" comes with a caveat that, of course, "the soldiers who were so directed were not at all like us." Over and over, Jaynes describes these bicameral civilizations – not without a touch of pride – as distinct from how we operate today: we, who are now conscious, active individuals with a distinct sense of self.

But the world does not look like it did half a century ago. And when I read Jaynes' description of bicameral societies, and I think about how frequently and easily we slip into hypnosis every day, I wonder whether we are, once again, reverting to a bicameral civilization: one in which we recursively take dictation from the internet's hive mind, our thumbs flicking in an ever-upward direction to maintain a trancelike state.

Mimesis is not conscious behavior. Quote-tweeting is not conscious behavior. And, just as entire civilizations were still built by bicameral minds, today we are constructing entire industries centered around ever-changing crises – fake news, totalitarianism, having too many children, not having enough children, the fiery destruction of our planet, death by artificial intelligence.1 We are the Mesopotamian slaves again, building ziggurats to the quivering annihilations that our gods command us to recognize and worship.

And this eternal crisis mindset is no longer really a crisis anymore, but a comforting place to be. It distracts us from seeing the future for what it really is – a blank page, not an inevitable disaster – and from figuring out what we’d actually want to build. Today, we are stronger and more connected by technology than ever, but perhaps we are also as brittle and docile as the ancient Greeks, incapable of acting nimbly in the face of novelty and uncertainty.

If we are now re-entering an age of bicamerality, then it is more important and powerful than ever to remain agentic, to retain our individual sense of self. To be agentic is to be godlike. Per Jaynes: "It is the self that is responsible and can debate within itself, can order and direct, and...the creation of such a self is the product of culture. In a sense, we have become our own gods."

1

Even as I type this, I wonder if my screed against the crisis mindset is, itself, the articulation of yet another crisis.

]]>
<![CDATA[Glimpsing God]]>https://nayafia.substack.com/p/glimpsing-godhttps://nayafia.substack.com/p/glimpsing-godWed, 27 Sep 2023 20:37:47 GMTI'm just coming out of the black hole of this summer, which was very writing- and research-intensive, between participating in the Summer of Protocols program, some private research work, and a couple of other projects that I hope to be able to share with you soon. Over those months, I accumulated a backlog of things I wanted to write about. Now that I've found myself with more time again, I've returned to that log and felt sort of conflicted about how to approach it. Here are some thoughts on why that is.


This summer, I read C.S. Lewis's spiritual memoir Surprised by Joy, which chronicles his journey from being raised Christian, to becoming atheist, to rediscovering Christianity on his own terms as an adult. As the title suggests, Lewis repeatedly returns to the theme of Joy, which he describes as "an unsatisfied desire which is itself more desirable than any other satisfaction." He is careful to distinguish it from both Happiness and Pleasure, likening Joy more to "a particular kind of unhappiness or grief"..."but then it is a kind we want."

Joy is the first signpost that helped Lewis understand what God truly is. While he initially chases the irresistible sensation of Joy itself, eventually he realizes that it's the fact that there is something outside oneself to seek at all – rather than his ravenous pursuit of it – that finally becomes his proof of the divine.

I've often thought of writing, and creative work more generally, as a way of glimpsing God – of touching the ineffable. But until reading Lewis's account, I wouldn't have described that sensation as “Joy” so much as an abyss, which Venkatesh Rao echoed at the Summer of Protocols research retreat in Seattle this summer. Venkatesh described the researcher's pursuit of truth as "staring into the abyss," which he likened to his experience of witnessing a total solar eclipse. Staring into the abyss induces a revelation that there is much more out there than we previously realized, even if the abyss can't yet be described.

Staring into the abyss sounds a lot like Lewis's Joy, but it doesn't usually carry the same connotation. The abyss is a recurring theme in Lovecraftian horror, where there is a dread and uneasiness, rather than euphoria, that comes from encountering something that resists definition. Nietzsche famously wrote that gazing into the abyss will cause it to gaze back at you – a warning of the dangers of spending too much time chasing the secrets of the void.

Divine and demonic temptation, then, look very similar. And so while I've thought of writing as a form of spiritual practice, when I consider what that experience has been like, I can't unsee that it looks much closer to the demonic than the divine. I've indulged a lot of vices to get there, numbing myself to reach a certain emotional state. When I'm not writing, I get antsy, anxious, and irritable. I have to fight not to see everything else in my life – time spent with loved ones, travel, even wandering and contemplation – as an annoyance or inconvenience, because it’s time spent away from whatever I’m chasing around in my head.

It's hard to describe my creative process without concluding that writing just isn't very good for me. People assume I'm joking or being hyperbolic when I describe writing as an addiction, but I mean it seriously. I might be a high-functioning addict, but I've never really had a positive relationship with writing. Over time, writing has managed to crowd out other parts of my life that might have been better for me. And yet, as Lewis said, “anyone who has experienced it will want it again.”

But here's the plot twist of Lewis's spiritual journey. The endless pursuit of Joy (or the abyss) mistakes the signpost pointing to the "outer" for the outer itself, like those who do too many psychedelics and call it divine worship. Joy itself is not God; it merely suggests that there is a god. In the final pages of Surprised by Joy, Lewis remarks that after converting to Theism, he didn't think very much about Joy at all anymore. He’d finally found relief from that relentless ache and pang, whose symptoms resembled a death drive more than a life of harmony.

In recent years, I've begun to suspect that a life consumed by ideas will not bring me closer to the divine. The freedom I seek, it seems, doesn't lie in my laying about, steeped in my own brain, but rather in the stillness I've found in the more mundane moments of my life. In these moments, there is no euphoria, nor even any active reflection on gratitude or happiness. Rather, it's the sense of just being – the absence of introspection – that brings me peace.

Lewis, too, came to realize that his pursuit of Joy had in fact been "a futile attempt to contemplate the enjoyed," rather than simply enjoying it. And I worry that being immersed in a world of hungry, gaping maws, grasping for ideas – any idea – to gobble up; to compulsively transact in ideas for status, money, and friendship; pulls me further away from the divine somehow. Chasing ideas can be just as materialistic as chasing money or power.

This, of course, puts me at an odd crossroads when it comes to what I do for a living. Writing requires that one dabble in ideas, which is felt all the more if one does it professionally: a tension I've become acutely aware of as I've gone fully independent in the last couple years. I’m haunted by the number of bloggers I know who stopped writing because they started being happy.

I, too, have considered not writing anymore, but this seems as nonsensical to me as no longer speaking or breathing. Writing is a sensory appendage, like having a nose or set of hands, that I use to experience the world. So in recent months, I've thought about how to reconnect with writing in a different way. I’m still a curious person who likes to sort out the world and tell stories about what I see. I just don't want to chase God anymore, escalating my discomforts in a vain attempt to manufacture the circumstances that produce beauty. Nor do I want to be chased by the demonic, allowing myself to be passively steered into chaos like a wide-eyed passenger on Charon's boat to the underworld.

Instead, I think of my relationship with God more as a stable foundation that enables me to explore the world at my own pace. Life is still filled with novelty, but it doesn’t live in front of or behind me. Rather, novelty is what buoys me, like bobbing on the surface of a glass filled with champagne, and provides me with endless surprising moments to draw from.

I'm not exactly sure how this changes how I work – though I’ve started to see it influence my writing style and output – as I still feel at the edge of it. But I felt I needed to chronicle it here, since it also explains some of my hesitation around writing these days. I want to be more present in my work, just as I would be fiddling with the grass on a warm sunny day, instead of chafing to be somewhere else. There's a certain timelessness about this feeling that I really enjoy, and I'd like to see where it takes me.

]]>
<![CDATA[Remembering GitHub's office]]>https://nayafia.substack.com/p/remembering-githubs-officehttps://nayafia.substack.com/p/remembering-githubs-officeSun, 28 May 2023 15:03:12 GMTI was bummed when I read GitHub’s February announcement that it would permanently close its San Francisco headquarters. I worked there for just a couple years, but I remembered it as a whimsical testament to everything I loved about tech, made all the more intriguing by its complex history.

I associate that office with a particularly fond era of working in startups, when offices were more than just a place to work: they were cultural centers in their own right. My “real” home was a tiny studio apartment, where I worked off an ironing board in my kitchen. But my “office” home was a lavish mansion, filled with feasts and laughter and music and activities and meetups, where friends would freely host each other for lunch or drinks or dinner.1 There was a uniquely San Franciscan proxy urban neighborhood culture that formed not around people’s homes, but around their offices – and yet these cultural monuments were swiftly abandoned in the post-pandemic reorganization of work.2

Reading about GitHub’s office closure made me think about how the 2020s era of hybrid and remote work must feel so different to someone working in startups today. Even if people do go into the office these days, that neighborhood-y feel is just not what it used to be.3 Without offices as cultural gathering points, where does tech derive that same sense of pride and solidarity?

I briefly entertained the idea of writing an elegy to the GitHub office, which I then shelved as maybe a little silly or frivolous, until Angela Chen from Wired reached out a couple months later about contributing a piece.

It’s rare, and satisfying, when you get a chance to dust off those “maybe-one-day” ideas and give them life. I tried to capture not just the GitHub office itself, but its significance in tech’s cultural history as one of its first disputed territories. You can read it here today:

Read this essay

In other news, I've been emailing a bit more frequently than usual, but I'm probably gonna go back into a hole for the bulk of this summer. I've started publishing my working notes for my Summer of Protocols project, which has evolved somewhat to focus on protocols as systems of social control. I'll update that page every couple weeks with the themes and snippets of things I'm thinking about, so if you're wondering where I'm at, you can always take a peek in there.

See you on the other side!

1

Side project idea that I’ll never get to: a coffee table book chronicling all the iconic startup offices in San Francisco from the 2010s.

2

I’ll never forget coming back to the Substack office several months into the pandemic to retrieve my gear. The building was dark and empty, and I was masked and terrified as I crept around the shadowy desks. My desk had a mug with a dried-up tea bag in it: I had just assumed I’d be back in the next morning to clean it up. I had no idea it would be my last day in the office.

3

I was particularly struck by a comment from Scott Chacon, one of GitHub’s co-founders, who led the office design and whom I spoke to for this piece. He said that because working from the office was optional, the founders wanted to design an office that was better than working from home. GitHub is the OG remote work company: over half their employees were remote at the time – and that was by design. But their office was still a first-class objet d’art that drew employees in, not a sad afterthought whose use is enforced with mandatory in-person days. Just because employees work remote or hybrid doesn’t mean offices have to suck; I’m reminded of online-first consumer brands that have especially gorgeous physical showrooms. The showrooms aren’t meant to churn out high order volume, like a typical retail store – they’re designed to evoke the brand itself. Chacon intuitively understood this about GitHub a decade ago. What if more companies did this with their offices today?

]]>
<![CDATA[The battle over tech's values]]>https://nayafia.substack.com/p/the-battle-over-techs-valueshttps://nayafia.substack.com/p/the-battle-over-techs-valuesMon, 15 May 2023 15:45:22 GMTIf you’ve paid attention to the news in the last decade, you’ve probably noticed a negative turn in sentiment towards the tech industry, sometimes called the tech “backlash.”

Ask people what caused this backlash, and you’ll likely hear about how the unchecked spread of misinformation on tech platforms caused regulators to step in, triggering a public reckoning with tech’s impact on the world and its moral legitimacy as an industry. I’ve personally never resonated with this story; it simply doesn’t match my experience working in tech.

So I tried to tell the story I saw, where the tech backlash is explained not by surface-level media events, but a deeper clash between two generations of power in America, who each built their wealth in a different way, and accordingly have different views on how to shape the world and its future. That piece, published with Tablet Magazine, is out today.

I don’t think any essay has ever caused me so much grief. I started writing the first draft last summer, and since then it’s been through at least six major revisions (or perhaps that’s just when I lost count). It took just as long to find great editors to work with, who understood what I was trying to say and could help bring it to fruition. (Many thanks to Alana Newhouse and David Sugarman for their patience and hard work!)

I hope you’ll give it a read and let me know what you think. You can check it out below.

Read this essay

]]>
<![CDATA[Explaining tech's notion of talent scarcity]]>https://nayafia.substack.com/p/explaining-techs-notion-of-talenthttps://nayafia.substack.com/p/explaining-techs-notion-of-talentTue, 25 Apr 2023 17:16:22 GMTI was recently asked to explain what it means when people say that, for example, “There are only [10-200] people in the world who can do what [highly-paid AI researcher] does.” Why can’t more people be trained to do these jobs?

The notion that only some engineers or researchers in the world can do certain types of work – i.e., nobody can learn how to be Linus Torvalds or Andrej Karpathy at a coding bootcamp – feels very intuitive to me, but apparently this is not necessarily intuitive, or even valued, in other industries. That made me wonder how much this implicit belief drives tech culture.

Why does software have this phenomenon, while other industries don’t? Are companies that hire for exceptional talent organized differently from those that don’t? And what do we mean by “exceptional talent,” anyway?

I ended up with a framework for talking about different types of talent distribution (normal, Pareto, and bimodal) and how they influence corporate cultures, which helped me answer these questions. Enjoy!

P.S. In other news, I’m spending time this summer at Summer of Protocols, a program funded by the Ethereum Foundation (and run by the one and only Venkatesh Rao) to explore deeper research questions around the sociological theory of protocols. I’ll be looking at the spread, transmission, and defection (?!) of social protocols.

In addition to the topic itself, I’m excited to participate in a para-academic experiment in bringing a bunch of independent researchers together around the same topic. I’m planning to write about the experience from a field-building perspective, and particularly how it compares to similar efforts to catalyze research fields in academia. Stay tuned! And in the meantime, send me all your wild musings and unanswered questions about protocols.


Explaining tech's notion of talent scarcity

TLDR: Most conversations about “top talent” assume a Pareto distribution; however, a closer examination suggests that different corporate cultures benefit from different types of talent distribution (normal, Pareto, and a third option – bimodal) according to the problem they’re trying to solve. Bimodal talent distribution is rare but more frequently observed in creative industries, including some types of software companies.

While Pareto companies compete for A-players (“high-IQ generalists”), bimodal companies compete for linchpins (those who are uniquely gifted at a task that few others can do). These differences account for variations in management style and corporate cultures.


It was a group of consultants at McKinsey & Company who coined the phrase “war for talent” in their 1998 report and subsequent book of the same name, propelling the term “top talent” into the corporate executive hive-mind for the next two decades. While McKinsey refrained from offering a precise definition of talent, they thought that a shortage of “smart, energetic, ambitious individuals” was coming, and that it would lead companies to fight to attract and retain the very best.

In software, there is a related but distinct notion of the “10x developer,” which dates at least as far back as a 1968 study that accidentally uncovered individual differences in programmer performance, and was further popularized by Fred Brooks’ 1975 book, The Mythical Man-Month. The definition of a 10x developer is similarly vague, and its existence is frequently contested. Depending on who you ask, a 10x developer might be someone who can write code 10x faster; is 10x better at understanding product needs; makes their team 10x more effective; or is 10x as good at finding and resolving issues in their code.

Despite the similarity between these two concepts, McKinsey’s notion of top talent and software’s 10x developer reveal subtle cultural differences. Both are concerned with identifying the best people to work with, but the McKinsey version defines the best as the top percentile in their field, whereas the 10x developer is often a singular, talented individual whose magic is difficult to explain or replicate.

For example, in conversations about hiring AI researchers, many people have said something to the effect of “There are only [10-200] people in the world who can do what [highly-paid AI researcher] does.” This is a very different statement from, say, “We are trying to hire top AI researchers.” In the latter case, “top” means the highest-performing slice of all AI researchers, but in the former, the assumption is that there are only a handful of people who can perform the job at all. While this idea is intuitive among software engineers, it is rarely seen in other industries.

Why can’t more people be trained to do certain tasks in software? Why aren’t there more Linus Torvaldses or John Carmacks? Will there only be 100 people, ever, who can do what some AI researchers do?

After exploring these questions, I identified three distinct models of talent distribution, which correlate strongly to industry, but vary even within industries, depending on what the company does and how mature it is:

  • Normal distribution: Talent follows a normal distribution. Companies succeed not by attracting and retaining “top talent,” but by the strength of their processes, to which all employees are expected to conform. Frequently seen among manufacturing, construction, and logistics companies.

  • Pareto distribution: Talent follows a Pareto distribution, skewed towards the top nth percentile. Companies benefit from attracting, retaining, and cultivating “A-players,” who are expected to demonstrate exceptional individual performance. Frequently seen among knowledge work and sales-centric companies.

  • Bimodal distribution: Talent follows a bimodal distribution, where companies benefit from identifying, hiring, and retaining “linchpins,” who make up a fraction of headcount, but drive most of the company’s success. Frequently seen in creative industries (ex. entertainment, fashion, design), as well as software companies solving difficult technical problems (ex. infrastructure).

A company’s distribution type also shapes its organizational culture, which lives downstream of the types of talent it is most incentivized to seek out and hire. Most notably, we can understand the difference between what I’ll call McKinsey and Silicon Valley mindsets by comparing their respective definitions of “top talent.”

Read full post

]]>
<![CDATA[A Bank of One's Own]]>https://nayafia.substack.com/p/a-bank-of-ones-ownhttps://nayafia.substack.com/p/a-bank-of-ones-ownSat, 11 Mar 2023 18:33:57 GMTA fun fact about startups, if you've never worked for one, is that employees don't typically get equity in the company. They're granted options, meaning they have the option to buy equity when they leave, usually within 90 days. This can cost tens or hundreds of thousands of dollars, paid out of pocket – even if the chance of an exit is still uncertain or years away – if they want to participate in the startup’s potential financial upside.

The first time I had the option to buy equity, I did the math and realized it came out to half of my modest savings. I had earned very little in my career up until then, and I didn't have family or friends who could help me with that kind of money. I kept my savings in a traditional Big Bank, had no personal banking relationship, and had no idea how I'd even go about getting a loan, though I was sure I wouldn't qualify for one.

My only other option, I learned, was to work with employee equity "funds" – essentially, loan sharks – that lend money to startup employees who don't have the cash to buy their options. In exchange, they would take more than 50% of my payout, if it ever came to pass. After weighing these options, I decided to choke down my fears, cash out my savings, and buy myself one big chip, which I placed on the table before the roulette wheel, holding my breath as it spun.

Not everyone makes the choice I did, nor does everyone even have the option to do so. Some employees can't afford to buy their equity at all, so when their startup is acquired or goes public, they earn nothing from the outcome, looking on in silence while their colleagues become millionaires. The people who find themselves in this situation are, of course, disproportionately those who work in lower-paying roles, and who don't have family or friends to borrow from.

Tech has gone mainstream in the sense that its end products are ubiquitous, its most famous (and infamous) founders are canonized in movies and TV shows, and working at a startup feels just a little bit passé now. But that's just the shiny side of tech, the part that everyone else gets to see. When it comes to the mechanics of making all this work, behind the scenes, tech is still a cottage industry, with surprisingly little connection to the outside world.

Nearly everyone who participates in startups – founders, employees, and investors – has at some point had to take a good hard look at their personal bank account and figure out how to reconcile their iridescent dreams with the concrete dust of reality. While the somewhat controversial practice of secondaries (selling part of one's equity to an outside buyer for cash, before the company has exited) became more popular in tech's recent, fatter years, most founders spend years toiling away at a company with no knowledge of whether it will ever all be financially worth it. Meanwhile, founders must still progress through the humble stages of a human life: they get married, buy houses, have babies, send their kids to school, become caregivers to aging parents. When a founder goes to the bank to get a mortgage, it's difficult to explain to someone at, say, Bank of America that yes, their savings and income don't look very impressive, but they do have a lot of equity in a promising company – which, by the way, isn't yet profitable, but it will be! (Maybe.) A Big Banker doesn't look at that story and see a potential high earner. They just see someone who is incredibly cash poor and risky.

Investors aren't exempt from the pains of the financial system, either. Several years into my time in San Francisco, someone explained to me how, as a partner at a venture capital firm, they were expected to commit personal capital to the fund, which is often several percentage points' worth of the total fund size. This could amount to hundreds of thousands, even millions, of dollars.

"Boo hoo," you might think. "Pity the poor venture capitalist who can't afford to buy into their fund." But this person did not come from a wealthy or privileged background. They were relatively young. They didn't have family who could advance them the cash. They were exactly the type of person that we plaster on conference panels and brochures as an example of the kinds of people we all wish there were more of in venture capital. But the only way they were able to participate in that world, and serve as a role model for others, was by quietly borrowing from a bank that understood what it meant to be a young venture capitalist with carry, but not enough savings.

Silicon Valley Bank is not a bank for rich people. It's a bank that served the needs of a very particular ecosystem with a very particular set of financial circumstances, which enabled decades of creativity and entrepreneurialism to thrive. It was an integral part of the less glamorous side of tech that people don't like to think about and certainly don't want to talk about, even amongst themselves – truthfully, writing this piece makes me nervous and uncomfortable.

I've seen a number of people ask why so many startups banked with SVB, as if this should have been a red flag somehow. The answer is not that tech doesn't want to work with traditional banks, or that startups were greedy and looking for better rates. They banked with SVB because it was one of only a handful of banks willing to align its financial products with the holistic levels of risk that working in startups requires. And the types of people who need access to these services aren't necessarily the rich kids with wealthy parents; they're precisely the people who don't have access to capital anywhere else. The real world is not kind to people without money or pedigrees who want to start, fund, or work at early-stage companies. The real world wants those people to work a steady desk job and not ask too many questions. That's why tech needs a bank of its own.

I am not, by the way, a customer of Silicon Valley Bank, nor have I ever been. I also have no interest in defending SVB as a firm; from public information so far, it seems a series of poor judgment calls on their part led to the events that unfolded, which makes their demise all the more tragic. But I am still deeply saddened by the news this week, because I know SVB as a bank that helped remove the friction that entrepreneurial people feel in every other part of their lives whenever they try to interact with the "default world."

I'm perplexed, then, to see so many people gleefully celebrate the collapse of an institution that helped level the playing field for people from all backgrounds. It seems that people simply saw the words "tech" and "bank failure" and decided they knew how this story should end. But what triggered this chain of events had, ironically – and I cannot stress this enough – little to do with the inherent risks or impracticality of startups, and everything to do with SVB's bad decisions combined with rising federal interest rates: the brutal constraints of the "real world" encroaching once again upon tech's constructed version, like flames creeping at the edges of a newspaper.

There ought to be more, not less, institutional support for entrepreneurs, creatives, and dreamers to get the help they need. More importantly, we ought to celebrate, not vilify, the idea that people ought to be able to do whatever they put their minds to. I don't want to live in a world where people can't do interesting things because they're told they're not rich enough to qualify for those dreams, or that their financial situations are too weird to fit the traditional banking system. Whatever happens to Silicon Valley Bank, or to the startup ecosystem more generally, I hope that dream continues to survive somewhere.

]]>
<![CDATA[America's future and the network state]]>https://nayafia.substack.com/p/americas-future-and-the-network-statehttps://nayafia.substack.com/p/americas-future-and-the-network-stateWed, 22 Feb 2023 17:14:40 GMTThe geopolitical concept of hegemony is implicitly a game of keep-away: a smattering of nation states who've divided their mountains and oceans among themselves, battling for dominance in a constrained landscape. After the fall of the Soviet Union, America was left holding the ball, but – as the narrative goes – if we're not careful, Russia or China or India might sneak up and steal it away.

But what if hegemony were determined not by geographic borders or resources, but instead by who can attract the best people? In this version of the world, America is a proposition nation, held together by shared ideas rather than a shared origin (like ethnicity or geographic proximity). If America, the nation state, were to become less attractive to outsiders, the hegemonic tribe would feel no true allegiance to its shining seas and waves of grain: its members would simply migrate to wherever the opportunities are.

It's these two competing theories – hegemony as physical place to be defended, versus hegemony as a self-selected tribe in search of a home – that are playing out in tech today, through the widening division between Atoms and Bits. While the Atoms tribe is playing geopolitics, it’s the Bits position that Balaji Srinivasan attempts to unpack in his manifesto The Network State. Twenty years from now, what we think of as America may no longer refer to that country bordering Canada to the north and Mexico to the south. America might instead be summoned up to the skies, its spirit and consciousness expanded to a digital nation.

I explored the Atoms/Bits division in an essay for The Point, which just came out today. It’s also an attempt to explain what Balaji’s network state thesis means to me, and to reconcile it with my love of America.

James Pogue recently published a thorough profile of prepper types who are gravitating towards the American West as a way to “exit” society, which heavily references Balaji’s vision of the network state. But something about the scene he paints feels more ominous, resigned, and doomer-ish to me than how I personally connect to that vision: a vote of no-confidence rather than a new reimagining. I view the tension between nation state and network state as fundamental groundwork for understanding why tech matters, where it derives meaning, and what it can contribute to society. To me, that is an optimistic story that represents a modern kindling of the American spirit.

So that’s what I decided to write about. You can check it out here:

Read this essay

In other news…

  • I’m giving a (virtual) talk at The Stoa next week about mapping digital tribes. We’ll discuss how cartography can both legitimize and threaten digital spaces, methodologies for tribe mapping, and look at a bunch of different maps together! Would love to see you there, next Mon, Feb 27, 6pm ET. RSVP here.

  • Early stage funding markets for science: I worked on a paper last summer about the growth and impact of “early stage” science funders, with support from Schmidt Futures. In particular, I looked at several emerging funding mechanisms – rapid grants, scout programs, and focused research organizations (FROs) – and how they serve the needs of science funders. Might be useful to anyone who’s interested in field building, especially grantmakers. You can check that out here.

]]>
<![CDATA[Understanding climate as tribes, not a social cause]]>https://nayafia.substack.com/p/understanding-climate-as-tribes-nothttps://nayafia.substack.com/p/understanding-climate-as-tribes-notWed, 30 Nov 2022 19:20:21 GMTOnce upon a time, I thought I wanted to work in climate. I studied environmental science in college and spent my first year out of school working with endowments that were investing in climate opportunities. After moving to San Francisco, though, I stumbled sideways into the tech rabbit hole and never climbed back out.

So that was the end of my interest in climate. But in the last couple of years, climate came back for me. I entered the working world in the ashes of the 2000s cleantech bubble; today, climate accounts for 14 cents of every venture capital dollar. Climate provides meaning to many people in tech, so I decided to try to understand its allure better.

I started with an assumption that working in climate appeals to “doomer types,” but after immersing myself in all things climate for the last two months, I’ve come to appreciate its nuances. (Turns out, most people who work in climate aren’t doomers.) It’s strange to me that the media still portrays climate as a question of beliefs (where “deniers” are the villains), and academia focuses on climate activism and policymaking, because there’s so much more going on under the surface that isn’t being adequately reported on. Hopefully this post provides a different narrative.

In this piece, I dive into: 1) how climate has decomposed from a monolithic social cause to a landscape of tribes, 2) what those different tribes are, and 3) whether climate represents a broader trend of “doomer industry” speciation – AI safety, population decline, and the like.

Excerpt below. Enjoy!


Mapping out the tribes of climate

Climate is a gravity well for talent, but why don’t other, equally impactful topics attract talent in the same way? Why isn’t everyone dropping everything to work on homelessness, or global poverty, or curing cancer? With many peers in tech now working on climate issues, I tried to understand why this topic holds such purchase for so many people – and why it has such incredible staying power over the decades.

Initially, I started with the idea that climate was an attractive industry for “doomer” types, and I painted their motivations monolithically. I was searching for the one weird reason that was causing hordes of people to drop what they were doing and march, hypnotically, towards the same problem space.

What I found instead is that while the media still portrays climate as a simple question of beliefs, the climate field has long moved on to diversified solutions. Whether one believes in climate change is no longer the interesting question; now it’s “What do you think is the right approach?”

Pass through the asteroid belt of climate doomerism, and the universe expands into a rich panoply of different climate tribes. People who work in and around climate don’t all believe the same things. Instead, they inhabit a parallel, mirror world that looks a lot like the non-climate world. Just like in the regular world, there are factions, politics, and competing belief systems.

For example, I did not find that people who are interested in climate fall cleanly along a certain political line of thinking, or even a shared set of values or goals. Climate is frequently coded as a left-leaning issue, but there are also centrist and right-leaning people who operate in different factions.

Nor do climate people all agree on the right solutions to pursue. In some cases, they believe other tribes are actively harmful to their cause. The enemies, in their minds, aren’t climate deniers, as they might have been a decade or two ago – they’re other people working in climate.

For someone who doesn’t work in climate, trying to figure out which opportunities to pursue – carbon removal, renewables, energy storage and transmission – means confronting a dizzying array of options, with no way to sort or rank their importance. But it seems to me that climate is better understood not as a singular list of technology and policy action items, but as an assortment of climate tribes. Tribes tell us why these opportunities are interesting and help us make better predictions about how they will unfold.

To understand climate better, I slurped up hundreds of thousands of words’ worth of blog posts, podcasts, interviews, articles, and tweets (my notes alone are over 80,000 words) – paying less attention to object-level discussions, and more to the rhetoric being used to describe one’s goals and motivations. I looked for cleavages between values, language and narratives. I then followed up this research with a handful of conversations with those who work in climate, across different tribes, to further refine and “stress test” my characterizations.

Ultimately, I landed upon seven climate tribes, which I’ll expand on in a bit:

Images generated with DALL-E.

Climate is a huge topic, and there are, of course, many more subcultures that are not fully captured above. (I’ll also add a requisite note that this analysis is heavily centered on American climate trends.) But if you’re a stranger in a strange land who’s trying to figure out what’s going on in climate, I found that grokking these seven groups gave me the conversational fluency to understand most of the climate discourse.

If you just want to read about these climate tribes, you can skip ahead to that section. But if you want to suffer through my process with me, I’ll unpack how I got from an outsider’s view of perceiving climate as a doomer topic, to instead understanding it as a pluralistic landscape of tribes that largely mirrors the non-climate universe in its richness and diversity. We’ll start with the outside layers and work our way in. Licking the Tootsie Pop, so to speak. Here we go.

Read full post

]]>
<![CDATA[Antinatalism, agency, and social movements]]>https://nayafia.substack.com/p/antinatalism-agency-and-social-movementshttps://nayafia.substack.com/p/antinatalism-agency-and-social-movementsTue, 23 Aug 2022 16:41:36 GMTFertility rates are declining in developed countries, dipping below replacement rates. 1 in 4 childless American adults cite climate change as a major or minor reason for not wanting to have children. This seems bad, but why?

For all the public conversation that’s been had about having, or not having, children, I find that pronatalists and antinatalists seem to talk past each other. Pronatalists assume that it should be obvious why we ought to have children, and yet…there are still antinatalists. I’ve also noticed that “antinatalism” encompasses a wide range of motives, from personal to societal, and wanted to get clarity on what exactly we’re arguing about when we talk about the importance of having children. So I wrote a blog post about it; excerpt below.


Cultivating agency

I didn’t always know that I wanted to have kids. I wasn’t against it, necessarily – for a while, there were just more reasons in the “why not” column than the “why”: uncertainty about whether I’d be a good parent, fear of losing my identity, a lack of maternal instinct. Those reasons gradually faded away as I grew older and got to know myself better.

I imagine this is not an unusual experience. Some people knew they wanted to have kids their entire lives; they were raised with big families, or traditionalist values, or otherwise found it to be perfectly natural and obvious. For others, it takes a little more time to conquer your messes and realize that if you can figure out how to get yourself together, you can probably figure out how to be a parent, too.

All that is to say: as excited as I am to have kids now, I still understand and respect others’ decisions to not have children. I’m intrigued by the philosophical arguments for antinatalism, such as those made by Sarah Perry in Every Cradle is a Grave. As far as I can tell, these arguments are a personal exercise in morality: for example, the idea that it is unethical to bring a human into the world without their consent, or that a child might experience extreme suffering in their lifetime, or cause extreme suffering to others. These questions have been asked for literally thousands of years, and are a useful inquiry into the purpose of man and civilization, if only to reaffirm one’s faith in procreation.

But today, there is a newer strain of antinatalism weaving its way into the conversation. Unlike these deliberate ethical inquiries, this newer version of antinatalism appears to be a byproduct of social movements, a deeply encoded worldview that perhaps children are not worth having. It is not a decision being weighed against one’s personal moral code, but passively transmitted through a widely-held set of social beliefs.

Antinatalism as a byproduct of social movements

The climate crisis is probably the most prominent example of a social movement whose natural conclusions have led people to not want to have children. One survey of roughly 600 American adults between 27 to 45 found that while 60% of respondents were “very” or “extremely” concerned about the carbon footprint of having children, their bigger concern (cited by 96.5% of respondents) was their children’s well-being in a “climate-changed world.” [1] In the words of one 31-year old respondent: “I dearly want to be a mother, but climate change is accelerating so quickly, and creating such horror already, that bringing a child into this mess is something I can’t do.”

But the climate crisis isn’t the only social movement with antinatalist externalities. Effective altruism (EA) and AGI (artificial general intelligence)/x-risk – social movements which attract overlapping groups of people, but are distinct – also have implications for society that lead to antinatalism.

None of these movements are explicitly antinatalist. Some parts of EA, for example, are even pronatalist. Will MacAskill, a founder of effective altruism, believes that children have the potential to “innovate” and be “moral changemakers” (though he personally does not plan to have children). The longtermism branch of EA, which is focused on improving our long-term future, can be understood as pronatalist, though it is not explicitly, nor uniformly, so. MacAskill affirms this position in his most recent book about longtermism, What We Owe the Future.

On the other hand, among adherents to what we might call “classical EA,” the value of having children has been frequently debated. EA derives its philosophy from utilitarianism, and some argue that children are not “cost-effective”: that the time and money spent on raising children could be better spent on reducing suffering in the world. In “The Cost of Kids”, Brian Tomasik states that while “there might be utilitarian benefits from having a kid…I wouldn’t count on it,” suggesting that one could become a sperm or egg donor, or spend their time “inspir[ing] some of the billions of other young people in the world” instead of raising children. Liz Kaye notes that some EAs “point out the very low likelihood that any given potential child…would do more good than that same amount [of money] going towards the Against Malaria Foundation to save dozens, perhaps even hundreds, of lives.”

Among those who are preoccupied by the risks presented by AGI or other global catastrophes, there is a belief that because humanity will be seriously threatened in the next few decades, we need to be primarily concerned with saving ourselves now, instead of having children, who will suffer immensely if they are brought into this world. For example, one anonymous poster explains that “as a 23 year old man living in the UK…the probability that I die [in the next 30 years] due to AI x-risk is 41%,” and that AGI is strongly incompatible with longtermism. With those odds, it’s understandable why one would not plan to have children.

Read this post

]]>
<![CDATA[Why aren't there more effective altruisms?]]>https://nayafia.substack.com/p/why-arent-there-more-effective-altruismshttps://nayafia.substack.com/p/why-arent-there-more-effective-altruismsThu, 12 May 2022 16:18:07 GMTIn my last post, I wrote that "effective altruism, despite its popularity, cannot singlehandedly meet the civil purpose of philanthropy.” I thought I was being coy, but a bunch of people have asked me what this meant, so I will be less coy and explain why I think effective altruism has limited impact.

The more interesting question, to me, however, is: “Why aren’t there more effective altruisms?” Why is effective altruism so strongly associated with philanthropy in tech, and what are other examples of initiatives that we perhaps don’t take seriously enough?

I’m not an EA, but I still think effective altruism helps us understand an increasing number of what I call “idea machines”: decentralized networks for turning ideas into outcomes (progress studies, It’s Time To Build, and crypto’s public goods funding are all examples). I just published a blog post that breaks down what idea machines are and how they work, using EA as a blueprint. Excerpt below.

(P.S. I have a new last name! Still transitioning everything over, but I’m now Nadia Asparouhova.)


Idea machines

Tech as a system of values, and not just an industry, is heavily driven by its subcultures and their ideologies. Where do these ideologies come from, and how do they influence what’s accomplished?

One of the most visible ideologies in tech is effective altruism (or EA), a philanthropic school of thought that advocates for “us[ing] high-quality evidence and careful reasoning to work out how to help others as much as possible.” If you don’t buy into its philosophy, it’s easy to write off effective altruism as yet another eccentric subculture. But effective altruism is both less and more interesting than it seems.

Although I’m not an EA, I think effective altruism is a useful blueprint for understanding a growing number of influential subcultures in tech right now, from progress studies to It’s Time to Build to crypto public goods funding. EA is the strongest example of what I think of as an Idea Machine: a network of operators, thinkers, and funders, centered around an ideology, that’s designed to turn ideas into outcomes.

Effective altruism’s strength lies in its infrastructure, and by understanding how it works, we can better understand how other idea machines will develop, what their impact will be, and what’s needed to make them more effective.

The limitations of effective altruism

Firstly, I want to address why effective altruism, as I’ve stated elsewhere, “cannot singlehandedly meet the civil purpose of philanthropy.” In other words, if effective altruism is so good already, why do we need other idea machines at all?

I think of philanthropy as a type of idea marketplace for public goods, funded by private capital. Like all idea marketplaces – startups, media, philosophy – it’s inherently pluralistic. We don’t have a single government-funded media channel, for example, but instead get our news, entertainment, and ideas from a multitude of sources.

There are certainly better and worse ways of executing a philanthropic initiative, just as there are better and worse ways of building a startup. But once we look beyond best practices, there’s way more variance in approaches than, say, effective altruism might advocate for.

We seem to understand that entrepreneurship operates in a free market of ideas, so I’m not sure where the idea comes from that there is, or could be, One True Approach to philanthropy. I’d guess it’s because there are so many egregious examples of mismanaged funds and middling outcomes, which have led people to feel understandably suspicious about its effectiveness. [1]

If we were to take EA literally, however, we’d be saying that there is an objectively best way to accomplish these outcomes, and that that way is discoverable: that complex social problems are a finite, solvable game.

If philanthropy is pluralistic – and, like any idea marketplace, that is one of its virtues – then there is no single school of thought that can “solve” complex social questions, because everyone has a different vision for the world. If you’re pro-pluralism in startups, you should also be pro-pluralism in philanthropy.

The scholar Peter Frumkin describes philanthropy as having both instrumental and expressive value. Effective altruism can be understood as a movement that heavily prioritizes instrumental value (which, ironically, is its own form of self-expression). As a private citizen, renouncing my right to expressive value, in favor of donating to wherever GiveWell tells me to, feels like I might as well just pay more taxes to the government. Why have a market of choice if we can’t exercise it?

I expect that effective altruism will always be an example of what I’ve called “club” communities elsewhere: high retention of existing members, but limited acquisition of new members, like a hobbyist club. EA will continue to grow, but it will never become the dominant narrative because it’s so morally opinionated. I don’t think that’s a problem, though, because ideally we want lots of people conducting lots of public experiments.

Why aren’t there more effective altruisms?

The more interesting question, then, is: why aren’t there more effective altruisms? It’d be like if there were just one startup, or one blogger, or one news channel. When it comes to deploying private capital towards public outcomes, the idea marketplace is woefully barren.

Although I don’t personally identify with the ethos of effective altruism, I also think they’ve done a lot of things well. EA has a remarkably good infrastructure for attracting and retaining members, identifying cause areas, and directing time and dollars towards those efforts. A common critique of EA is that it fails to attract operational talent, but despite its weaknesses, it’s still the best example of what I’ve been calling an “Idea Machine” in my head – maybe not the best term in the world, but let’s roll with it because I’m bad at naming.

Read this post

]]>