The Psychiatric Multiverse
https://psychiatricmultiverse.substack.com

What if a horse was mistakenly sent to space on purpose?
https://psychiatricmultiverse.substack.com/p/what-if-a-horse-was-mistakenly-sent
Thu, 16 Apr 2026
That’s one small trot for horse, one giant gallop for horsekind. Source: NightCafe

To explore properly, you sometimes have to risk being ridiculous. New links between distant fields rarely arrive in a tie and sensible shoes. So today, we are going to build the lore of Psyverse #2 by following its logic into absurd territory.

Be prepared for interesting facts, historical oddities, and scientific studies you probably haven’t heard about.

(New to Psyverse #2? The compendium is here)


In 2017, the president of the United States in Psyverse #2 was thinking of establishing a “Space Force”. It was to be a new military branch that would deal with national defence matters in space. The president ordered a draft proposal to be sent to the head of NASA for revision.

Unfortunately, the official put in charge of writing said proposal had recently had a child with their partner. The previous night had been spent attending to their cute, but extraordinarily loud, baby. In the official’s slightly delirious state, they misheard the orders of the president. They wrote a proposal about a new project codenamed “Space Horse” - a mission to send the first horse into space.1

At the time, the United States was competing with the British to achieve various space-related firsts. The head of NASA started asking colleagues about the feasibility of equine astronauts for a space horse race with the British. They quickly suggested he should probably see a psychiatrist.

So, when a man entered a relatively inexperienced psychiatrist’s office talking about the advantages of designing a space suit for a Mongolian horse instead of a Shetland pony, while also claiming to be the head of NASA, eyebrows were immediately raised.

Through quite a remarkable feat of pure bad luck, both parties managed to completely misunderstand what the other was saying. For instance, when the psychiatrist asked about any past feelings of stress, being wide-eyed or perhaps times he felt exposed, the head of NASA described the first time he tried to take a photo of the stars on the International Space Station: “All I could see in front of me was darkness - a vast emptiness that extended beyond my imagination”.

The psychiatrist thought this was a sign of depression. In reality, the head of NASA was trying to communicate his frustration at being unable to correctly set the exposure and aperture settings on his camera.2

The head of NASA was prescribed a Selective Serotonin Reuptake Inhibitor (SSRI).3 The response to SSRIs (and antidepressants in general) is assumed to be highly variable.4 For instance, some people may report complete remission of their depressive symptoms, while others report an “emotional blunting”. Some may receive no benefit at all. There is a great deal of uncertainty surrounding treatment response. What we can be certain of, however, is that a person without depression5 isn’t going to benefit from the antidepressant effects of an antidepressant.

This does not mean that for the undepressed,6 taking an antidepressant is risk-free. SSRIs, particularly when it comes to various bedroom activities,7 can produce noticeable side effects. The head of NASA and a firefighter, who also happened to be his wife, both noticed these noticeable side effects. After six weeks, due to not noticing any noticeable improvement in his actually notably fine brain, the head of NASA booked another psychiatric appointment to discuss the matter, then made a note of the date in his calendar.

As the appointment was during work hours, the head of NASA arrived in his official jacket.8 It did not take long for the psychiatrist to realise that yes, this was in fact the head of NASA and yes, NASA was indeed trying to send a horse into space. It was agreed that it would probably be best if the head of NASA weaned off the SSRIs.9



With that mistake dealt with, all that was left was this silly old horse thing. The head of NASA booked a meeting with the commander-in-chief. Now, if the United States had a president who could admit mistakes, everything would have gone back to normal. But the U.S. didn’t. And thus, the USA became the first country on Earth #2 to announce it was going to put a horse into space.

The British, rather predictably with their storied history of drawing galloping horses incorrectly,10 couldn’t let this stand. They announced that the U.S. was sadly mistaken. It would be the Brits that would be the first to put a horse into space! Australia, who couldn’t let some Poms get away with any horsey business, entered the race too – but with a kangaroo. It didn’t go well. Kangaroos like to kick.11

Close enough… Source: Baronet by George Stubbs

As news spread around the world, more nations with proud associations with horses entered the space race.12 Even Mongolia, despite their less-than-ideal technological situation, sprang into action. Hey, if Genghis Khan could conquer a quarter of the world in a quarter of a century on the back of a Mongolian horse,13 surely the great country of Mongolia could send one into space.

It became clear to the countries involved that the task of sending a horse into space was not going to be easy. Starting, of all places, with the horse’s complicated digestive system. Annoyingly, horses need to feed almost all the time. Besides leaving little room for the many administrative tasks required of a horsetronaut, a lot of hay, grass, grains and fruits would be needed. If the horse were to stay in space for any protracted period, produce would need to be grown.

Fortunately, there exists an entire field of Astrobotany14 you probably didn’t know about until this sentence. Despite the problems of growing plants in space,15 plenty have already been grown on space stations like the ISS, and even a cotton plant briefly grew on the far side of the moon. So interestingly, growing the steroid-infused grasses for a horsetronaut was the least of the Psyverse #2 countries’ concerns. What happened when the space horse swallowed the grass was the bigger issue.

Horses may have a problem digesting food in space. They rely significantly on microbial fermentation. The massive caecum in a horse – around 30 litres – acts as a kind of fermentation vat, where bacteria and enzymes break down food. The entry and exit of the caecum are at the “top” of the horse. Gravity is required to keep food in the caecum to ferment, and the caecum uses peristalsis to move food up and out into the large intestine. There might be a greater chance of impaction in microgravity.16 Which would not be good for the short-term health of the horse.

On the other hand, bacteria thrive in space.17 Some strains seem to grow much more quickly, though it is unclear why.18 This might offset the gravity problem in the caecum.19 Nevertheless, the horse digestive system is incredibly fragile. One of the most common causes of death for a horse is colic,20 which is a catch-all term for the symptoms caused by over 70 different types of intestinal problems.21

For our intrepid nations of Psyverse #2, digestion is not the end of the story. If a horse is to spend any reasonable length of time in space (months), their musculoskeletal22 system will likely weaken significantly. For humans in our universe, we have found that microgravity reduces bone mineral density, decreases muscle mass, and loosens tendons.23 A horse in space would have a greater susceptibility to bone fractures24 and would be significantly weaker on return to Earth.

One potential solution would be to breed “Mighty horses”. By inhibiting myostatin and activin A – proteins involved in muscle and bone loss – researchers Emily Germain-Lee and Se-Jin Lee showed it was possible to breed jacked mice.

A view of a wild-type mouse (left) compared with a mouse treated with the myostatin inhibitor (right) from a prior ground-based experiment. The treated mouse displays increased muscle mass. Caption Source: ISS National Laboratory, Image Source: Quadrupling muscle mass in mice by targeting TGF-ß signaling pathways

They then sent these mouse adonises, along with their normal counterparts, to the International Space Station (ISS). The swole mice were protected against both muscle and bone loss in microgravity.25

Perhaps the most important factor that would need to be considered is the behaviour of the horse in microgravity. Weightlessness could be very disorientating for the equine astronaut. Fortunately, there is a 1995 paper by a Japanese researcher called S. Mori,26 which definitely helps illustrate the postures of various mammals in microgravity:

According to this figure, I have scientifically deduced the most likely posture a horse would have in microgravity:

Fig 2. Realistic posture of a horse in space. Source: Me

Interesting. Looks like we have horse drawing history all wrong. Artists weren’t drawing horses galloping; they were drawing horses in microgravity!

Despite initial panicked and disoriented behaviour, there is some reason to believe that a horse would become acclimatised to microgravity. Mice are frequently sent up to the ISS, and after hanging around for a couple of days, they eventually learn to whizz around their cage as if it is one giant stationary wheel.27

The choice of breed is crucial for any space-faring horse. Thoroughbreds would be far too agitated to deal with the stressors of space, increasing the risk of conditions like colic. Ironically, one of the calmest breeds, the Clydesdale, is also one of the largest. In rocketry, the larger and heavier the capsule, the bigger the rocket needs to be, and the higher the cost.
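That cost scaling is brutal because burnout mass enters the Tsiolkovsky rocket equation inside an exponential. Here is a minimal sketch of the effect in Python – all the payload, structure and performance numbers below are made up for illustration, not real mission figures:

```python
import math

def propellant_mass(payload_kg, structure_kg, delta_v_ms, isp_s, g0=9.81):
    """Tsiolkovsky rocket equation, delta_v = Isp * g0 * ln(m0 / mf),
    rearranged to give the propellant needed to reach a target delta-v."""
    mf = payload_kg + structure_kg                    # burnout (final) mass
    mass_ratio = math.exp(delta_v_ms / (isp_s * g0))  # m0 / mf
    return mf * (mass_ratio - 1)                      # propellant = m0 - mf

# Hypothetical single-stage numbers: ~9.4 km/s to orbit, Isp of 350 s,
# a 2-tonne capsule, and pony-sized vs Clydesdale-sized passengers.
pony = propellant_mass(payload_kg=200, structure_kg=2000, delta_v_ms=9400, isp_s=350)
clyde = propellant_mass(payload_kg=900, structure_kg=2000, delta_v_ms=9400, isp_s=350)
print(f"Pony: {pony/1000:.1f} t of propellant, Clydesdale: {clyde/1000:.1f} t")
```

Every extra kilogram of horse gets multiplied by the (exponential) mass ratio, which is why a heavier breed drives up the size of the whole rocket.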

We are just scratching the surface of the potential problems that would need to be solved to send a horse into space. There is potential for retinal damage, or cardiovascular insufficiency. If it is a long-term mission, you would have to think about horse breeding28 and radiation effects. How would a horse use the controls?29 What would you do with horse waste? If you wanted to do a space-trot, how would you get the horse into the spacesuit?


It took until the end of the 2020s for the United States to be finally ready to send a horse into space. The head of NASA had settled on an American Quarter Horse due to their calm temperament and highly trainable nature. The rocket was on the launchpad, the Quarter Horsetronaut had a calm and determined look in her eyes.

Then the head of NASA turned on the television. A Mongolian flag was painted on a spaceship orbiting Earth #2. Inspired by the wooden satellite “LignoSat” launched in late 2024, and the wooden heat shields on the Fanhui Shi Weixing satellites developed by China in the 1960s,30 the Mongolians had built a wooden spaceship.

The head of NASA tilted his head to the right. Right there, proudly floating in the cockpit, legs splayed out, was a Mongolian horse.

Godspeed Mongolian Horse, godspeed... Source: NightCafe

After more than a decade of effort, the United States had lost the space horse race to Mongolia.31 In a wooden spaceship no less! Dejected, the head of NASA wondered why on earth (#2) they started this godforsaken project in the first place. Oh, wait, that’s right. A mistake. This whole project snowballed from a single mistake. Otherwise, the whole thing went pretty smoothly.

Not like that incident with the psychiatrist, the head of NASA thought. He seemed to make plenty of mistakes. In need of a distraction, the head of NASA revisited the psychiatrist who mistakenly gave him medication for a condition he didn’t have.

“Where did you anonymously report the mistake to?” the head of NASA asked out of curiosity. The psychiatrist gave him a confused look. “What do you mean ‘anonymously report the mistake’?”

“You know, you fill out a little form, send it off to an independent national organisation, they do an investigation. That sorta thing,” the head of NASA replied.

The psychiatrist paused in thought for a few seconds. “Hmm. There isn’t something like that here… but I do recall something like that in a country in Asia. Ahh, it is on the tip of my tongue! Like, had that guy that conquered a whole bunch of places… Gengu… Gungis…”

“Oh god. Please no…” the head of NASA interjected, “…is the country Mongolia?”

The psychiatrist’s face lit up. “Mongolia! That’s it! The Mongolian patient safety reporting system. I’ve heard it’s been quite successful. Even helped send that horse into space. Crazy world we live in!”

The head of NASA made a quiet whimper as his head sank into his neck. Without saying a word, he turned on his heel and slumped his way out the door. He knew that if he wanted to build a national patient safety reporting system in the United States, he would need to learn from the best.32 In a tone resembling Eeyore from Winnie the Pooh, he muttered, “Why is it always Mongolia?”

The next day, he booked a flight to the country that he now disliked with all his heart.


I hope this little adventure into the science of space-faring horses helped to lift your day.

If you enjoyed the article, please do launch a horse🐎 🚀 … ah.. no… I mean drop a heart❤️. If you think someone else would be interested in the article, please consider restacking 🔄 or sharing 🔗. It helps others find my ridiculous escapades!




1

While not intended for any spacefaring duty, the Space Force in our universe has a Space Force Horse, of course.

Believe it or not, there is also a website in our universe where Judith Tarr, a person who likes horses, has written an article about what it would take to get a horse into space.

So, I guess I am expanding on an already existing idea? It is nice to know that there are others out there who’ve had similar shower thoughts.

2

If they are not correctly set with a bright object in the foreground (such as the sun’s reflected light from the earth or moon), space will look mostly pitch black in a photo.

3

SSRIs stop serotonin released by a neuron (serotonin released by the pre-synaptic terminal into the gap between neurons, called the synaptic cleft) from being taken back in by the neuron that released it. This increases the amount of serotonin in the synaptic cleft, which can influence when the next neuron fires (by serotonin binding to receptors on the postsynaptic terminal).

4

Recently, the variability of response to antidepressants due to individual differences has been questioned. It is posited that the variability in response to antidepressants could be due to other factors like placebo effects or regression to the mean.

In my (pretty ignorant) opinion, before such conclusions can be made, the problem of measurement needs to be addressed. Studies of antidepressant response variability assume we have the tools to pick up on it. I would strongly argue we do not.

Back when I was getting rTMS treatment, I was part of a support group. My goodness, we disagreed on a lot of things related to mental health. What was striking was the one thing we all overwhelmingly agreed on: none of us had any idea how to fill in the PHQ-9 or GAD-7, or how to answer the HAM-D. We all “fudged” it, for lack of a better word.

For those interested, I wrote in a blog post a while back about the problems I had with rating scales. In short: conglomerating a whole bunch of constantly changing, fluid symptoms into one subjective measure of intensity for a study on variability is just asking for trouble.

Okay cool, back to horses in space.

5

Like the head of NASA in Psyverse #2.

6

or anyone really

7

Like sleeping for instance… or not sleeping…

I meant insomnia….

Okay, I’m just going to let you look up the rest of the side effects yourself.

8

According to Reddit (usually not a good start to a sentence), there is no official NASA uniform or jacket per se. The US Space Force does have an official uniform, including camo. I am aware that the Space Force doesn’t conduct their missions in space, and the standard camo is probably for cost-cutting purposes, but I still found it funny to picture Space Force personnel trying to hide amongst the lack of trees, dirt and shrubbery in space.

9

Until recently, there was not much information on how to come off psychotropic medications. If not done carefully, antidepressant withdrawal syndrome (or antidepressant discontinuation syndrome) can occur. The general advice was to taper down over a couple of weeks. For some patients, this was inadequate - the story is much more complicated. Mark Horowitz and David Taylor have released deprescribing guidelines that provide a more nuanced picture.

It is important to note that some advocates for deprescribing have sometimes worryingly insinuated that all antidepressants are harmful and don’t provide any benefit for a patient. Or have made overarching, oversimplified claims about the nature of mental health problems and mental illness based on their negative experience with antidepressants.

As with most medications, there are potential benefits and potential risks. In many cases, the benefits will outweigh the risks. As a patient, I wanted a conversation about the benefits and risks, but this rarely, if ever, happened.

10

Before videos, people could not figure out how the legs moved on a horse in full gallop. Horses’ legs moved far too quickly. Therefore, there are a whole host of paintings where horses are seemingly levitating with both sets of legs outstretched.

Eadweard Muybridge, photographer and part-time alleged murderer, designed the first-ever cameras that could take images quickly enough to film a horse in a gallop without being blurred.

On June 15, 1878, he invited the press to see his process. He showed how he captured the images using trip wires and 12 quick-exposure cameras. This was the first ever moving picture.

Animated version of Eadweard Muybridge’s Horse in Motion. Source: WikiCommons

It was not a good day for horse artists.

Historical Note: Before perspective was figured out, some medieval artists had an especially bad time drawing horses.

11

While generally docile and, according to a 2020 study in Biology Letters, perhaps more intelligent than previously thought, kangaroos are “‘vegetarian gladiators’ with kicks that can kill”. It may have been a better bet for the Australians of Psyverse #2 to go with one of the many Australian horse breeds.

Though kangaroos are exceptionally interesting creatures. For instance, if a kangaroo mother already has one baby (or joey) and the environmental conditions are not conducive for a second - like a drought - the mother can freeze the development of the second embryo. It is called Embryonic Diapause.

12

Like the Tajikistan Lokai, or the Danish Sport Pony. There are a lot more national breeds than you might expect. See the full list here.

13

The rate of expansion of the Mongolian empire is truly astonishing.

14

There is a really interesting website all about Astrobotany run by two cats.

15

Plants exhibit Gravitropism. This means that a plant’s growth, particularly the roots, is partly guided by gravity. In microgravity, roots are shorter, weaker and skew more. To solve this problem, different colours of light can be used to direct root growth.

One ongoing issue is how to effectively water plants. In microgravity, water's surface tension dominates, making it “sticky.”

There are some advantages to growing plants in space. As Astrobotanist Simon Gilroy explains, in microgravity the height of Redwood trees, for instance, would not be limited. So if a space-interested billionaire wanted to bloat their ego by growing the largest ever space forest, instead of say, growing the largest hate-filled public square - the proverbial net is wide open.

16

Though, as no one has sent a horse into space in our universe, it is tough to actually know.

17

In more bad news, it seems that microgravity may suppress the immune system and age it more rapidly. In, I guess, the only bit of potentially good news, according to a 2017 paper, macrophages (the white blood cells which engulf pathogens) seem to immediately adapt to microgravity. Hooray?

18

Bacteria also exhibit increased virulence in space. One hypothesis states that the reason some bacteria thrive has nothing to do with microgravity directly, but with the indirect effects of gravity on the environment the bacteria live in.

In a rather interesting article from 2017, Luis Zea - now an Adjunct Professor at the University of Colorado Boulder - said:

On Earth, there are different flows and forces, such as sedimentation, buoyancy, and convection, that don’t exist in space because they are gravity dependent…The model states that it is the lack of these forces and flows that creates a different environment around the bacteria.

Due to the absence of these forces and flows in space, the transfer of nutrients to bacterial cells can only happen through diffusion. The bacteria have reduced availability of nutrients and live in higher concentrations of toxic compounds. So in essence the bacteria are in starvation conditions.

Counterintuitively, this may possibly lead to more growth due to the cells entering a fed-batch process. The bacteria switch their metabolisms and genes to adapt to the incremental absorption of nutrients, prolonging the growth phase of the bacterial life cycle and resulting in more bacteria.

Note there is evidence of correlation that fits this hypothesis but not causation.

19

I’ve been unable to find out if enzymes thrive as much as bacteria do in microgravity.

What I do know is that enzymes are being used to convert waste plastics to upcycled material in space.

Enzymes are also being used to grow very large crystals in space for neutron diffraction experiments. Microgravity allows substrates (the surfaces from which crystals start to grow) to float evenly throughout the medium - on Earth, substrates would all sink to the bottom due to gravity. Microgravity also lacks buoyancy and convection. This means fewer defects form and larger crystals are possible.

So there may be “crystal growth factories” in low Earth orbit in the future. Imagine that, manufacturing in space!

For more on crystal growth in space, see this very good article by Debbie Senesky.

20

Interestingly, if you took a horse up on the vomit comet (the plane that dips and dives along a snake-like path in the sky to simulate zero-g), the horse would not vomit. Because a horse can’t. Horses have a one-way digestive system. This is generally not a good thing: if fluid builds up in the stomach, usually due to colic, the stomach will rupture. A tube has to be passed into the stomach to remove the fluid.

Horses do seem to be able to stop themselves from choking by expelling food through the nostrils, but this is not vomiting (food is coming from the windpipe, not the oesophagus).

21

A claim made by the University of Liverpool, though it is difficult to confirm. If you wish to traumatise yourself for an afternoon, some of the causes are listed here.

22

muscles and bones

23

R. Lee Satcher, B. Fiedler, A. Ghali, and D. R. Dirschl, ‘Effect of Spaceflight and Microgravity on the Musculoskeletal System: A Review’, J Am Acad Orthop Surg, vol. 32, no. 12, pp. 535–541, Jun. 2024, doi: 10.5435/JAAOS-D-23-00954.

24

Leg fractures can be lethal for a horse, though not as lethal nowadays as they once were.

25

The ultimate aim of the research is to help older adults with conditions rendering them bedridden long-term, or for people who are permanently wheelchair-bound. These populations see increased muscle/bone loss.

However, the cynical side of me can’t help but think it will be used by Gym Bros as an alternative to steroids.

26

I am sure the author is a fantastic scientist - artist they are not.

27

If you would allow me to be personal for a second, I feel uneasy about how animals are treated in science, to say the least. We seem to understand a tremendous amount about rats and mice, but very little about humans. One of the most common phrases I read in psychiatric literature is: “animal models do not apply to humans”. A future Psyverse will explore alternative methods to animal testing. For now, here is a video of mice having a ball in space:

28

Breeding in space is potentially possible for mammals according to these two studies.

29

The U.S. in Psyverse #2 developed H.O.R.S.E., or Human Oriented Readable Space Engine. Basically, it is just a GoPro attached to a horse’s head to see what the horse is looking at.

30

On reentry to the atmosphere, spacecraft are travelling at very high velocities. At these speeds, the compression and friction of the air against the outside of the spacecraft cause it to heat up. Spacecraft need to withstand temperatures up to 7,000 degrees Fahrenheit (~3,900 degrees Celsius). A heat shield is required to protect the spacecraft and its occupants from “burning up” in the atmosphere.
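For the curious, that temperature conversion checks out. This is plain arithmetic, nothing assumed:

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

print(round(f_to_c(7000)))  # → 3871, i.e. roughly 3,900 °C
```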

Heat shields are usually made of an ablative material. As the material heats up, it is continuously stripped away by the fast-moving air, taking the heat with it. But in the 1960s, the Chinese did not have the technology to develop these sophisticated ablative heat shields, so they used wood instead.

A 5.9-inch-thick oak heat shield was strapped to a satellite. Upon reentry, the heat charred the wood, producing a layer of charcoal. This was then blown away, exposing more oak.

In other words: a very simple, very cheap, ablative heat shield.

Supposedly anyway. The source I’ve got this from, an excellent article by Amy Shira Teitel, says that there is contradictory information regarding whether this is a true story or not.

31

In our universe, Mongolia does have a space agency - The National Remote Sensing Center (NRSC). I think anyway. A lot of the websites describing the agency are old or rather brief. Their main website seems to use data from satellites, and they do have a Facebook page. So I’m saying it is a space agency.

Mongolia has recently built its first two satellites. Ondosat-Owl-1 and Ondosat-Owl-2 were launched on a SpaceX Falcon 9 rocket in March 2024. Apparently, the Mongolian Aerospace Research and Science Association (which also has a Facebook page) started the Mars V project. The plan is to simulate Martian conditions in the Gobi desert.

(In 2017 it was reported that a CubeSat called MongolSat-1 was Mongolia’s first-ever satellite. However, the satellite was actually launched by a Bermuda-based company.)

32

I didn’t include the Psyverse #2 Mongolia story as I felt this was a natural ending to the article. I was going to say that Mongolia, due to their inexperience in building rockets, had a lot of injuries. This would bring the aviation engineers, pilots, etc., into hospitals. Thus, there would naturally be a conversation about the lack of incident reporting systems in healthcare.

Much of the population of Mongolia is highly concentrated in small areas (the capital Ulaanbaatar contains almost half the population of the country), with large geographical separation between these areas. I would guess this makes the logistics of setting up a reporting system much easier - procedures could be tested in towns with similar populations, then applied to the large hospitals in the few big cities.

Population Density of Mongolia. Source: Bat-Orgil (2021)

Subsequently, I was going to say the incident reporting system helped to improve safety, as well as improve the medical system generally. Psychiatry was a huge beneficiary. The significant problems with mental health that Mongolia faces started to be tackled. Combined with the technological scarcity, innovation increased dramatically, leading to the wooden spaceship.

All of which is ridiculously flawed, wishful thinking, to the extreme. But you have now learned more about Mongolia, a country not often talked about.

We debate. We don't build.
https://psychiatricmultiverse.substack.com/p/we-debate-we-dont-build
Thu, 09 Apr 2026

This is the final article in the Psyverse #2 series. For previous posts, see the compendium.


I try as much as humanly possible to find the nuance in every field I explore. I try as much as possible to see the bigger picture. When it comes to psychiatry’s own engagement with human error, however, there is no bigger picture. There is barely even a drop of paint on the patient safety canvas.

Yes, there are important, emotionally vulnerable stories about iatrogenic harm. There are bold debates about coercion and autonomy. There are discussions about involuntary commitment from both patient and clinician.

But, put bluntly, none of this translates into meaningful structural change. Patients continue to be harmed; operational safety still isn’t central to the field.

If it were, we’d see sustained, mainstream engagement with human factors, human error, and learning systems across flagship psychiatric outlets. Psychiatry would be heavily involved and integrated within the safety science field. Structures such as safety management systems, incident reporting systems and investigation bodies would be prevalent. Just as they are in aviation and, to a lesser extent, other fields of medicine.

Be honest, when was the last time you read a flagship psychiatric editorial centred on human error? That this still feels marginal in 2026 — more than twenty-five years after the patient safety revolution — is a serious failure of imagination.

When the subject of iatrogenic harm is discussed, we quite rightfully highlight the pain, the injustice and discuss the philosophical implications to the ends of the earth. But in the final analysis, I cannot help but conclude that few are willing to roll up their sleeves and venture into the subject of human error.

I believe this is psychiatry’s crux. In physics, when we need tools from another discipline, we go and learn them. Psychiatry appears more inclined to stay in its lane. While I understand this is beneficial and often necessary for everyday clinical practice, the result is psychiatric discourse has all the epistemic variety of a greyscale Rubik’s cube.

Where are the psychiatric innovators? Who speaks for systemic safety change in psychiatry?

Looking back through the history of aviation reporting systems, I could not help but notice the social push for better safety. Through their associations, pilots, air traffic controllers, and other aviation staff relentlessly advocated for better aviation safety systems. I do not see this kind of advocacy in psychiatry.

I have been impressed by the insight of authors tackling the complexity of psychiatry. I have been engaged learning about the hope, or indeed false hope, of new treatments. I have been moved by deeply personal stories of patients and practitioners in their struggles navigating the psychiatric system. I have had my mind changed through the continuous dialogue about what diagnosis is and isn’t.

And none of it matters.

Even if you came up with miracle treatments or technologies tomorrow that cured every single mental illness, even if you proved they were effective, it would not matter, because you would not be able to convince patients to take them.

Someone has to design the treatments. Someone has to diagnose the patients. Someone has to administer the interventions. And with people comes human error. Patients would continue to be harmed, some would die.

When new technologies are introduced, people do not demand “better than before”. They demand near perfection. Trust is not built on promise. It is built on reliability. Without a rock-solid foundation of patient safety systems in psychiatry, none of the mountains of quality mental health commentary matters. And I wouldn’t even call the current foundation flimsy; it has yet to exist.

Building these systems requires a culture that treats error as data, not disgrace. In 2005, Justin Waring interviewed 28 specialist physicians about their attitudes to human error. Many described mistakes as inevitable – “Human error is always going to occur,” one said. Error reporting was widely regarded as pointless. If error cannot be eliminated, why report it?

How much have attitudes changed since? Does psychiatry, and perhaps healthcare in general, still focus on individual performance over system safety?

I can’t know. What I do know is that, having been a patient in the medical system for over a decade, I almost expect error to occur. I have to fight complacency. My psychiatrist, whom I respect, asks at every appointment whether I am still receiving rTMS treatment. I stopped five years ago. At some point in almost every year since my sister was born, my mum has had to phone healthcare professionals to ensure my sister’s tube feed arrives on time. The very food my sister needs to live is not exempt from human error.

It doesn’t have to continue like this. Change is possible. The knowledge already exists. We do not need a new philosophy. We need to engage with the safety science already built.1

Psychiatry does not lack intelligence. It does not lack compassion. It lacks operational imagination.

And until that changes, patients will continue to be harmed – not because we failed to debate, but because we failed to build.


Do you think psychiatry needs to do more building? Or is there something I have not considered? Would love to hear people’s opinions and stories – please do comment below or send me a DM 💬.

If you liked the article, please consider gifting a heart ❤️, restacking 🔄 or sharing 🔗. It helps others find my work!



1. This series contains almost everything you could possibly want to know about incident reporting systems in healthcare (see compendium). I recommend Lucian Leape’s book Making Healthcare Safe as an introduction to patient safety (it is free). For deeper reading into the subject, see books and academic papers written by Charles Vincent or Carl Macrae.

If you are a psychiatrist, or a healthcare professional in general, please consider attending a Human Factors training short course (see here for those based in the UK).

]]>
<![CDATA[What if Psychiatrists reported all their mistakes to NASA?]]>https://psychiatricmultiverse.substack.com/p/what-if-psychiatrists-reported-allhttps://psychiatricmultiverse.substack.com/p/what-if-psychiatrists-reported-allThu, 02 Apr 2026 17:01:09 GMT
Source: NightCafe

Welcome to the Psychiatric Multiverse – where we take an idea from an area of life, and postulate if it could be useful in the psychiatric system.

Today, we are postulating how an effective reporting system could be constructed in a hypothetical alternate reality. To explore previous articles in the series, visit the compendium.

To help with the split between our universe and Psyverse #2:

Explanations will be in normal formatting.

While the story of Psyverse #2 will be in italics.


Late one evening, I had an absurd idea.

While tucking into a good book, in the form of Don Norman’s The Design of Everyday Things, I read this paragraph about NASA’s role as the independent administrator of the Aviation Safety Reporting System (ASRS):

NASA is a neutral body, charged with enhancing aviation safety, but has no oversight authority, which helped gain the trust of pilots. There is no comparable institution in medicine (emphasis added by me)

I felt an excitement start to build – the kind that always accompanies me before a quest into the academic literature. “What if healthcare professionals just reported their mistakes to NASA?” I thought, as I leapt into my office chair and turned on my computer.

To my dismay, I was three decades late to the party. The U.S. Department of Veterans Affairs (VA) had the exact same idea while convening several expert advisory panels in 1997. After a talk by Linda Connell, the director of the ASRS, the VA enquired about the feasibility of applying an ASRS-like system to medicine. Both organisations worked together to set up what became known as the Patient Safety Reporting System (PSRS) in 2000.

The PSRS was run by NASA in a very similar way to the ASRS – it was an external, voluntary, confidential reporting system. There was, however, one major difference: it ran as a complement to the VA’s own internal system, which had the same coverage. The PSRS was intended as a “safety valve” for reporters who wanted confidentiality. Unfortunately, unlike the ASRS, it was not successful. Over a 10-year period, only 500 reports were submitted to the PSRS, while the internal VA system received approximately 700,000.

Due to cost-cutting measures, the partnership between NASA and the VA ultimately ended,1 leaving only an abandoned website as a window into the late-2000s internet.

Ever the optimist, and lover of left-field ideas, I was not discouraged by this failure. I read more about incident reporting, going far deeper into the subject than I had anticipated.2 I think there is a genuine need for improved patient safety incident reporting systems. But how could it be done?

Fortunately, here at the Psychiatric Multiverse, we don’t have to worry about the frustrating roadblocks of societal and political forces. All we have to do is imagine our universe splitting in two at this very moment. One continues as our own universe, the other (Psyverse #2) builds a national patient safety incident reporting system in the United States.

Using what we have learned from the history of aviation reporting systems, and the current literature on patient safety reporting systems, let us try to figure out how an incident reporting system could be built.


Patients, Patients, Patients!

Firstly, I believe a truly effective patient safety reporting system has patients reporting incidents. Patients are the ones with the most skin in the game. They are the ones harmed by mistakes. They are front and centre in the discussion about their care, and if they have the mental capacity, they have a veto over any treatment proposed.

In aviation reporting systems, both air traffic controllers (ATC) and pilots report incidents.3 Drawing an analogy to medicine, doctors are like air traffic controllers, and patients are passengers who are suddenly thrust into becoming pilots. While patients generally know little about their bodies, ultimately, they are the ones who have to “fly the plane”, so to speak. Therefore, patients see a very different picture compared to the “ATC” doctors, who can only advise and provide treatment.

Logically, it follows that a patient’s perspective is essential to fully understanding an incident. Information from patients can be useful in isolation, but the primary value comes from the added incident context through doctor–patient report comparison. Yet many of our current patient safety reporting systems do not allow patients to report at all, and in those that do, patient reports make up a tiny proportion of submissions. Which raises the question: how can we expect patient safety reporting systems to indicate where system-wide learning can take place if we are missing information from half the system?

Including patients in an incident reporting system would not be as straightforward as simply allowing them to fill in forms. Factors like the design of the report form,4 legal protections,5 promotion of the reporting system to patients6 and feedback mechanisms7 would need to be addressed.


Starting our story of Psyverse #2:

Due to wildly improbable circumstances,8 a sudden social and political will to build a national patient safety incident reporting system within the United States #2 arises. Miraculously, the U.S. #2 government in this alternative world takes this public pressure as a sign they need to do something.

To figure out how to design an inclusive reporting system, they create the Never Again Unit (or NAU – pronounced “now”) to consult with patients and lived experience experts.9

They help design a form with plain English language, simplified to the digital equivalent of one side of A4 paper. The form has only very basic fixed-choice questions to ascertain where the event took place, with the free-text portion dominating (prompted by a simple question: what happened?). They find in testing that the “callback”, where patients are called back for clarifications, is essential with patient reporting. It not only ensures that the lived experience analyst correctly understands what occurred but also provides an excellent opportunity to indirectly promote the system.

Every stage of the reporting process has patient and public involvement. Working with their clinical counterparts, lived experience analysts help to decipher patient narratives, while lived experience investigators help to spot areas of improvement (that may otherwise be missed from a clinical perspective).10

The government drafts specific legislation to protect the patient incident reports from being used as evidence in legal proceedings (reports are privileged). Patients are asked to be involved in investigations through the confidential communications software. Later, promotion of the program is integrated into the wider strategy, when NAU teams up with the promotions team at the yet-to-be-announced independent third party.11

A trusted third party

In a previous article, we argued that there is a need for national patient safety reporting systems. This is especially relevant for the United States, which currently doesn’t have one - instead relying on a complex fragmented web of voluntary institutional reporting systems conglomerated into one national database. Briefly, health care providers can report to a Patient Safety Organisation (PSO), which can then send a standardised report to the Network of Patient Safety Databases (NPSD).12

While healthcare providers have found some benefit in reporting to PSOs, significant difficulties remain, including low participation, legal protection confusion, and poor patient involvement.13

It may be best, therefore, to start afresh. But given the importance of an independent third party, which one would be best? The architects of the ASRS emphasised that “not any third party” will do. From the NASA Reference Publication on the ASRS:

Any organization called upon to manage and direct a program as sensitive as ASRS has proved to be must meet several criteria, among which are the following: (1) credibility with users, the relevant community, and the bureaucracy; (2) unquestioned integrity; (3) experience in gathering and handling information; (4) technical proficiency; and (5) an understanding of the community, the operational context and an appreciation of the relevant political issues.

Currently, the Agency for Healthcare Research and Quality (AHRQ) runs the Network of Patient Safety Databases. Given the institution’s non-punitive function, combined with previous experience handling technical information and a deep understanding of healthcare, the AHRQ would seem like the perfect third-party.

There is, however, one major problem. Very few people outside of the U.S. healthcare professional community (i.e. patients) know about the AHRQ.14 Any third party must at least be known by patients, never mind have credibility with them, if we want patients to report. Ten years ago, the Centers for Disease Control and Prevention (CDC) may have been a better bet, given it is more widely known and has experience in dealing with large healthcare datasets. But the CDC doesn’t, erm, have the best reputation with the public at the moment.

Thinking about it, not many U.S. institutions seem to be seen in the greatest of lights right now – from both sides of the political spectrum. I think NASA is probably one of the few institutions that has the name recognition, is still widely thought of as credible, capable and independent, while also having the relevant technical proficiency.15 It has run the successful ASRS and must have learned lessons from the failed VA-PSRS project.

But, for any independent third party chosen to run a hypothetical United States national patient safety incident reporting system, the historical roadblock of healthcare staff not reporting to the old VA-PSRS system needs to be addressed. Perhaps there is an out-of-the-box solution.


Continuing the Psyverse #2 story:

Popularised in the media as “The Decision”, the actual selection of an independent third party was relatively straightforward. Many stakeholders argued for over a month about why they were best placed to run a national reporting system, then it was awarded to NASA.16 Ultimately, there was no way of competing with the reputational capital that comes with successfully launching a horse into space.17

A Prototype in Psychiatry?

Both Charles Billings and Lucian Leape expressed concerns that the complexity of the medical system would prevent a national reporting system from functioning usefully in the United States. However, Billings, in his 1998 talk to the National Patient Safety Foundation, proposed a possible way forward18:

I would offer you some hope that it may be possible to define areas within medicine in which there is a somewhat smaller constituency, a somewhat smaller group of stakeholders and within which, therefore, the problem may be slightly more tractable than it will be if you decide that your purview is all of medicine. I am not sure I would know at this point, even after 20-plus years of experience with this business, how to design a system for medicine. I think I could perhaps design a system for some subsegments of medicine, that are more tightly circumscribed, but I hasten to add that I am not even sure of that.

I would put forward that the psychiatric system is one of the better places to begin building a national reporting system.

Historically, psychiatry was somewhat separated from general medicine – both conceptually and geographically (in the form of asylums). While there has been a growing effort to integrate mental and physical healthcare systems, divisions remain.19 Oddly, while this separation likely hinders the current quality of care of patients, it arguably reduces the complexity of implementing a reporting system.

In an inpatient setting, the reduced interaction with general medicine means fewer types of health practitioners involved in incidents. Potential investigations may initially stay within the confines of a psychiatric ward (or within a separate hospital entirely). In an outpatient setting, most interactions between a psychiatrist and patient involve only a conversation.

The technological simplicity of psychiatry also lends itself to easier reporting system implementation. It is impossible to report an error with a complex diagnostic test if no such test exists. In many cases, the most advanced diagnostic technology used in a psychiatric clinical setting is a piece of paper.20 Treatment of mental health conditions remains mostly confined to therapeutic methods or medication, though there is a growing use of brain stimulation methods.

The relational complexity of psychiatry, spanning issues like coercion, capacity, stigma and broader ethical issues, makes the narrative-rich free-text section of incident reports particularly valuable. A lot could be learned and improved upon using the narratives of psychiatric patients and staff.

While many types of patient safety factors are shared with general medicine,21 some are relatively unique. On the patient side of things, there is likely little that indicates a need for improved psychiatric patient safety (and care) more than an entire anti-psychiatry movement. The risks of self-harm and suicide are also elevated. On the healthcare worker side of things, the potential for aggressive behaviour (in certain settings and among some sub-groups of patients) means that not only are physical injuries more likely, but healthcare professionals can also suffer mental harm through second victim syndrome.22 After investigation, some system-wide changes may also be unique. For example, changing décor and introducing physical barriers may lead to safer environments.

Perhaps the best argument for implementing a national incident reporting system in psychiatry is that the field has the most to gain from a focus on patient safety. Up until now, the field has mostly been “in a world of its own”. Psychiatric patients were either excluded or not mentioned in landmark patient safety studies and reports, while the psychiatric patient safety literature runs in parallel with, rather than as part of, the general patient safety literature. Psychiatric safety remains an under-researched field.23


A team at NASA #2 assembled to plan and implement the new national reporting system. They chose psychiatry as the field to test a prototype: the Psychiatric Safety Reporting System. The NASA team batted away suggestions that the subsequent acronym, PSRS, would be “confusing” compared to the previous PSRS,24 since that PSRS was too old to be of relevance to this PSRS. The name stuck regardless.

Rather than relitigate a very complex US #2 incident reporting landscape, NASA #2 sets up a new Patient Safety Organisation that covers all of the psychiatric system. They have their own database and run their own analysis. Over the 5-year testing period, they iterate the design of the report form, developing their own standardised taxonomy of incidents. However, throughout this period, the analysts still convert a copy of the incident submissions to common formats and send it to AHRQ and the Network of Patient Safety Databases.

Lastly, the NASA #2 team sets up a national investigation unit called INSIGHT (Independent National Safety Investigation Group for Healthcare Transformation). It is modelled after HSSIB in the UK.25

If successful, the PSRS program will be slowly applied to other medical fields, replacing the AHRQ program. It is proposed the name would then be changed to the Patient Safety Reporting System, or PSRS for short.

Open databases

Right now, you can go to the Aviation Safety Reporting System website and look through any report that has ever been submitted to the ASRS. It is completely open to the public. No need to sign up. No special clearance required.

The open ASRS reports include both the fixed choice and narrative sections, with a very short synopsis at the end. There are report sets grouping reports together by type of incident. They contain synopses and narratives. There is not a statistic in sight.

Compare this with the UK’s Learning From Patient Safety Events database, and you get a very different story. Only health and social care staff have access to the patient safety incident reports,26 seemingly restricted to reports from their own local organisation. The only public data available consists of purely summary statistics.

It is a similar story in the United States. For instance, the FDA Adverse Event Reporting System (FAERS) also doesn’t include the narrative portion of the report in the quarterly data files or public dashboard (which again shows only summary statistics).27 The AHRQ Network of Patient Safety Databases page has chart books, dashboards and spotlights – all showing summary statistics, with no hint of a narrative section or synopsis in sight.

The medical system’s obsession with statistics in patient safety reporting is confusing, to say the least. The predominant value of incident reports resides in the narrative (qualitative) data, not the fixed-choice quantitative part. Therefore, the patient safety database statistics are of little practical use.28 They also do not seem to change much from quarter to quarter, providing little to no new information.29

Comparison between LFPSE data (number of incidents sorted by type) for adjacent quarters. Source: NHS England

In order to have an open database, the incident data would need to be deidentified. It has always confused me why so little anonymised medical data is accessible in open databases. I would argue that the benefits of allowing access to anonymous medical data far outweigh the very small risk of reidentification.30
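To make the idea concrete, here is a toy sketch of rule-based free-text deidentification. This is my own illustration, not any reporting system's actual pipeline: the patterns and placeholder labels are assumptions, and real systems combine rules like these with NLP models and human review before anything is published.

```python
import re

# Illustrative redaction rules (hypothetical, far from exhaustive):
# each pattern maps an identifier type to a neutral placeholder.
PATTERNS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
    (re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I), "[EMAIL]"),
    (re.compile(r"\bDr\.?\s+[A-Z][a-z]+\b"), "[CLINICIAN]"),
]


def deidentify(narrative: str) -> str:
    """Replace obvious identifiers in a report narrative with placeholders."""
    for pattern, placeholder in PATTERNS:
        narrative = pattern.sub(placeholder, narrative)
    return narrative


print(deidentify("Dr Smith emailed jo@example.com on 01/02/2023 about the dose."))
# → "[CLINICIAN] emailed [EMAIL] on [DATE] about the dose."
```

The point of the sketch is the working group's real job: deciding which categories of free text get a placeholder, which get modified, and which make a report unpublishable altogether.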

With narrative sections in patient safety reporting systems closed off from the public (and possibly other healthcare providers where the incidents didn’t occur), local healthcare providers who conduct their own root cause investigations lose opportunities to achieve a wider “window into the system”. Access to multiple perspectives of slightly different incident reports from other providers offers helpful context for investigation. An open database would also increase the likelihood of communication and collaboration between providers - duplication of root cause analysis investigations would become less likely.31

Access to incident reports across a country could also help enormously in redressing the power imbalance between patient and practitioner (and aid subsequent practitioner learning). For example, would the historically widespread dismissal of patients with Chronic Fatigue Syndrome (CFS) have been so widespread if CFS patients could show their doctor incident reports (or even better: root cause investigations) of similar adverse outcomes?


The PSRS team at NASA #2 sets up both a private and a public database. They add an “opt-in” tick-box asking the reporter if they would like to include their deidentified report in the public open database. The private database contains all reports, and the public one has opt-in reports (including full narrative) with carefully worded synopses for opt-out reports.

A working group of patients, clinicians, data engineers, legal scholars and other experts is set up with the aim of determining the kinds of free-text data that should be modified or completely excluded when deidentifying report data. It is an ongoing process. Legislation is also written to protect patients and practitioners in any case of reidentification.

Promotion and Feedback

When my (then) psychiatrist produced a report with factual inaccuracies based on a 15-minute conversation about a simple dose increase of lithium, I contacted the Patient Advice and Liaison Service (PALS) asking for advice. During the two or three conversations I had with the PALS team, not once was it mentioned that it was possible for me to submit an incident report to the National Reporting and Learning System (NRLS), operational at the time (the NRLS was the predecessor to LFPSE).

In fact, the only reason I know about incident reporting systems at all is because I read about them in the aforementioned book by Don Norman. I would not be surprised if I were one of maybe five patients in the UK who know about the LFPSE incident reporting system.32 Gosh, by reading my series of articles on the subject, this may be the first time you, the reader, have even heard about the existence of patient safety reporting systems!

This is a huge publicity problem – especially because criticism of patient safety reporting systems is growing. As we discovered in the fourth Thin Line article about the ASRS, persistent promotion was critical for the program’s success. It is difficult to persuade diverse groups of stakeholders that a reporting program is useful if too few of them know about it, never mind report to it.33

When Charles Billings and Rex Hardy (of the ASRS team) introduced the first edition of CALLBACK magazine back in the 1970s, some of their NASA colleagues thought it was “shockingly casual for what was unquestionably a government publication”. But it was a successful strategy for reaching a wider, more diverse aviation audience.

In comparison, the UK’s LFPSE promotional material seems to consist of a webpage with a YouTube video and five unconnected podcast episodes, which makes for, erm, not exactly thrilling listening. The situation seems to be similar for the U.S. and the Network of Patient Safety Databases, with only a webpage (which includes a sign-up for email updates).34

Perhaps future promotion of patient safety reporting systems can learn from the casualness of CALLBACK. In the modern media age, the public and medical professional community are more accessible than ever. Why not produce short TikTok explainers, YouTube interviews or a Substack case study publication?35

This kind of promotion also acts as a feedback mechanism – a general audience is learning about the system. Reciprocally, feedback in the form of reporter follow-ups (i.e. callbacks) helps to breed a sense of familiarity and trust with the people running the reporting system.36 Given many reporters in medicine currently feel like their reports disappear “into the ether”, the introduction of more rigorous feedback mechanisms appears greatly needed.37


The Psychiatric Safety Reporting System team at NASA #2 begins a targeted promotion strategy. Posters (both paper and digital) are put up wherever psychiatrists and mental health patients congregate most: community mental health centres, psychiatric clinics, inpatient hospitals, pharmacies, etc. Automated messages are recorded for telephone queues. An “opt-in” tick-box is added to patient complaint forms all over the country, asking if patients would like to include their complaint in the new learning system (an analyst converts the various complaint formats into the standardised form).

A publication and podcast titled “The Near Miss Clinic” launches for a general audience (including healthcare staff). Every month, a case study of a near miss, the subsequent report, investigation and improvement is followed. This is made possible through the open database. A YouTube and TikTok educational channel called “Psychiatry Safety Lab” is created. Every week, a different concept about patient safety in psychiatry is explained. There is an emphasis on an engaging, storytelling style to ground the message in everyday experience.

To engage a clinical audience, guest posts are regularly written on psychiatric-centred publications (like Psychiatry at the Margins, Psychiatric Times, RC Psych Insight, etc.), and interviews on podcasts and YouTube channels are booked. In the Psychiatric Times, a column about learnings from incident reporting in psychiatry is introduced.

After submission of a report, regular feedback on the course of the investigative process is provided through the confidential communication platform.38

The vital provision of transactional immunity

In the summer of 2020, I had to wait 42 days for lithium to be prescribed. There was nothing wrong with any of my blood or ECG tests. The psychiatrist who suggested lithium was experienced, a professor, even.

With every other medication I trialled, the time between agreement with my psychiatrist and prescription only took a few days (at most). So why, in the case of lithium, was there such a long delay?

Well, my psychiatrist professor was in charge of an rTMS centre, and I was receiving private treatment specifically for rTMS. He wasn’t allowed to prescribe me lithium, so I had to go to my local NHS centre. Long story short, the whole matter was eventually settled around day 40 of the 42 days, through communication between my local NHS psychiatrist and my private rTMS psychiatrist.

The first 40 days were filled with an unnecessary assessment, countless clinician group meetings, administrative mishaps and fruitless communication with an in-house rTMS psychiatrist (not the professor). This, it could be argued, was an example of “active defensive medicine” – where more procedures were undertaken than were needed for my care.39

The practice of defensive medicine, both passive and active, is highly prevalent throughout the world.40 While there are many good reasons to define defensive medicine broadly, the term usually refers to physicians deviating from sound medical practice out of fear of liability claims and lawsuits.

Global comparison of defensive medicine prevalence. Source: Zheng et al. (2023)

Within this culture of defensive medicine, it may be surprising to learn that the vast majority of patient safety incident reporting systems do not utilise limited transactional immunity (protection of the reporter from disciplinary action)41 – the vital provision that gave aviation professionals the confidence to report to the ASRS. Even the Department of Veterans Affairs PSRS, based on the ASRS, didn’t have transactional immunity.

Instead, the focus has been on providing privilege (protection of reports from being used as evidence in legal or disciplinary proceedings) and anonymity. Yet, one of the most significant barriers to reporting in medicine remains fear of reprisal. Why haven’t the privilege and anonymity protections removed this fear? And why hasn’t transactional immunity been introduced?

A potential explanation for the first question is provided by Huynh et al. (2017). The authors point out that healthcare professionals have a duty of candour – an ethical duty to be open and honest when an error occurs – codified by professional regulatory bodies.42 Given that the privilege and anonymity protections usually apply only to the incident report itself, a healthcare professional who honours the duty of candour cannot remain truly anonymous: the patient already knows who was involved. In fact, these types of protections incentivise not telling the patient about errors and then submitting a report – thus promoting unethical practice, an unjust culture and a continued fear of reprisal.

As for the answer to why transactional immunity is so rare in patient safety reporting systems, it appears partly due to the complexity of the healthcare regulatory system, and, at least in my opinion, partly due to a lack of will.

Unlike aviation, which typically has one national regulator, in medicine, there are multiple bodies for collections of professions within the healthcare system. In the United States, regulatory power is split even further into individual state regulators.43 Trying to produce legislation for immunity that all regulators will agree on is a tough task, to say the least. However, there is evidence to suggest the task isn’t impossible. Many states have enacted laws that provide immunity from civil malpractice claims for reports by physicians to the Department of Motor Vehicles (DMV) about patients who decide to drive despite being impaired by a medical condition. A similar kind of malpractice protection was advocated for at the beginning of the COVID-19 pandemic, when healthcare services were stretched thin.44

In terms of incident reporting systems, there have been a very small number of academic papers that have advocated for transactional immunity.45 But discourse remains thin on the ground. I find this lack of will confusing.

I can appreciate that the medical system is much more complicated than the aviation system;46 nevertheless, I am struggling to understand why transactional immunity cannot be offered for incidents which clearly do not cause harm to a patient.47 Malpractice claims cannot be filed for these incidents, and potential disciplinary measures would most likely be on the milder side.


With the backing of all state boards, U.S. Government #2 introduces legislation that provides transactional immunity for psychiatrists in possession of a receipt related to a no-harm incident submitted to the PSRS (within 10 days of said incident). Transactional immunity is limited to non-criminal, unintentional, and/or inadvertent actions. Immunity is also limited to one “usage” per type of no-harm incident, up to a total of ten types of incidents over a five-year period. Investigations can still be carried out. The immunity only waives sanctions. Funding is allocated to each state board to set up committees to settle any disputes.

A wider transactional immunity that includes harmed patients is planned if the PSRS is successful. It is recognised that implementation requires an overhaul of federal & state regulations as well as tort law.48

Harnessing technology

It is often mentioned that a major limitation of anonymous reporting systems is that two-way communication with the reporter is not possible.49 But this is no longer true. Two-way anonymous communication technologies exist and have existed for at least a couple of decades.

While there are different methods used to anonymise communication, generally, anonymity is preserved through the generation of a unique “key” (instead of any personal information) after the initial message by a reporter. This key can then be used to log in to a secure online platform.50
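To make the mechanics concrete, here is a minimal sketch of how such a key scheme might work. Everything here is hypothetical – the function names, storage layout and key format are my own invention, not those of any real platform – but it illustrates the core idea: the server stores no personal information at all, only a hash of a randomly generated key, which the reporter alone holds.

```python
import hashlib
import secrets

# Hypothetical in-memory store: key_hash -> report data.
# A real platform would use a hardened database; the point is that
# no author information exists anywhere to be leaked.
_reports = {}

def submit_report(narrative: str) -> str:
    """Store a report and return the one-time anonymous login key."""
    key = secrets.token_urlsafe(16)  # random; contains no personal data
    key_hash = hashlib.sha256(key.encode()).hexdigest()
    _reports[key_hash] = {"narrative": narrative, "messages": []}
    return key  # shown to the reporter once; never stored in plain form

def post_message(key: str, text: str) -> bool:
    """Two-way follow-up: either party appends messages under the key."""
    key_hash = hashlib.sha256(key.encode()).hexdigest()
    if key_hash not in _reports:
        return False
    _reports[key_hash]["messages"].append(text)
    return True
```

Because only the hash is stored, even a full database leak reveals nothing about who submitted a report – and the analyst can still leave follow-up questions that the reporter picks up on their next anonymous login.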

Most two-way anonymous systems have been used for whistleblowing or behaviour-related reporting (like bullying and harassment). Examples include: The Guardian newspaper’s SecureDrop whistleblowing service (launched in 2014), tip411 for anonymous tipping, and Whispli.

Anonymous communication is not restricted to text. There are ways to protect the identity of an individual in audio and video communication too. Both the metadata and speech data (i.e. voice) of calls can be anonymised.51 Currently, there is a wide range of use cases.

Despite anonymisation technology’s clear use case for follow-up and feedback in anonymous patient safety reporting systems, I could only find one paper mentioning two-way anonymous communication – a single sentence in a 2025 paper by Beecham et al.52 So far, all the anonymous patient safety reporting systems I have come across have been one-way.

Oddly, two-way anonymous communication systems are used within the UK healthcare system, just not for patient safety incident reporting. Perhaps the closest example to a patient safety reporting system is the anonymous two-way system set up by Moorfields Eye Hospital in response to the Lucy Letby case. The system appears to be primarily focused on whistleblowing, but patient safety incident reporting is mentioned in the response.53

Anonymous communication is not the only software from other fields which could be used to improve the functioning of incident reporting systems.

Sometimes, when I’m looking through a forum trying to find out how to fix an open-source software problem (otherwise known as a “bug”), I will see a “duplicate” tag (e.g. see this thread on GitHub). This tag links to a thread posted at an earlier date where essentially the same problem is discussed and a solution found. (Note: by “duplicate” I don’t mean identical – I am referring to a similar report by another person about the same bug.)

In the open-source community, duplication detection is usually carried out manually (by the owner of the software). But in the commercial software industry, with large userbases and mammoth numbers of issues, automatic (or semi-automatic) triage and deduplication are often used.54

Duplicate detection algorithms automatically analyse the narratives and structured fields of bug reports, and flag ones which seem to describe the same bug.55
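A toy version of the narrative-matching part of such an algorithm can be written in a few lines. This is a deliberately simplified sketch – real systems use TF-IDF weighting, trained models and structured fields as well – but a plain bag-of-words cosine similarity over the free-text narrative already captures the basic mechanism of flagging likely duplicates. All names and the threshold value here are illustrative choices of mine, not taken from any deployed system.

```python
import math
import re
from collections import Counter

def _vec(text: str) -> Counter:
    """Bag-of-words vector: word -> count (lowercased)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two narratives (0.0 to 1.0)."""
    va, vb = _vec(a), _vec(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def flag_duplicates(new_report: str, existing: list, threshold: float = 0.6) -> list:
    """Return existing reports similar enough to be candidate duplicates."""
    return [r for r in existing if cosine(new_report, r) >= threshold]
```

In practice the flagged candidates would go to a human analyst for confirmation – semi-automatic triage – rather than being merged blindly.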

In incident reporting systems, duplications – reports about very similar incidents – have been shown to exist.56 Yet research on deduplication techniques for reporting systems seems to have focused on only one: the FDA’s Adverse Event Reporting System (FAERS).57

In both the bug and FAERS deduplication discourse, duplication is generally viewed as a problem – duplicates are something to remove. However, there is evidence to suggest that duplication could actually be helpful. A survey conducted by Bettenburg et al. (2008) found that bug duplicates provided software developers with extra contextual information, helping them fix bugs more quickly. The authors advocate for merging duplicates rather than discarding them. There are parallels with the change to the ASRS transactional immunity policy during the battle for immunity in the late 1970s: the extra context provided by each aviation professional submitting their own report on the same incident (in order to gain immunity) helped with investigations.58

Instead of seeing overreporting as a problem, deduplication software could add rich contextual information about very similar patient safety incidents, while at the same time reducing the workload of analysts.

Any deduplication software would need to utilise Natural Language Processing (NLP) of the free text portion of the reports. Here (and with machine learning in general), the literature on incident reporting is much denser, focusing primarily on automatic incident classification and event detection.59
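As a flavour of what “automatic incident classification” means at its simplest, here is a toy keyword-based classifier. Deployed systems use trained machine learning models on large labelled datasets; the categories, keyword sets and function below are entirely my own invented illustration of the task, not a real taxonomy.

```python
# Toy sketch: assign a free-text narrative to whichever invented
# category shares the most keywords with it.
CATEGORIES = {
    "medication": {"dose", "drug", "medication", "prescription", "lithium"},
    "falls": {"fall", "fell", "slipped", "stairs"},
    "communication": {"handover", "referral", "miscommunication", "letter"},
}

def classify(narrative: str) -> str:
    """Return the best-matching category, or 'uncategorised'."""
    words = set(narrative.lower().split())
    scores = {cat: len(words & keywords) for cat, keywords in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorised"
```

Even this crude approach shows why free-text narratives matter: the structured tick-box fields of a form often miss what the reporter’s own words make obvious.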


By harnessing technology, the PSRS distinguishes itself from all its predecessors. Because reporters are given transactional immunity for no-harm incidents only, the team decides to build an anonymous communication feedback system called TRUST (Two-way Reporting for Understanding Systems & Trends). Reporters can anonymously submit reports via text or through an interactive voice response system.60

At the same time, NASA #2 partners with a large software company to build an incident report clustering system that will recognise very similar incident reports through free text NLP. Once finished and tested, it is integrated into TRUST.

The process of reporting goes as follows:

Upon submission of an incident report to TRUST, a psychiatric healthcare worker or patient is provided with a unique anonymous login key and confirmation that the report has been received. Before an analyst at PSRS #2 receives the submission, the clustering software searches for a similar case file in the database.61 Depending on what the clustering software finds, the process splits into three potential courses of action, described in this footnote.62

In all three courses of action, the original reporter is part of the process.

Integrated into a wider safety structure (both nationally and globally)

In the 2004 paper, When will healthcare pass the orange-wire test, Sir Liam Donaldson wrote:

Imagine that an airline engineer doing a preflight inspection [of a Boeing 757] spotted [an orange wire] frayed in a way that suggested a systematic fault rather than routine wear and tear. Imagine what would happen next. It is likely that most 757 engines in the world would be inspected—probably within days—and the orange wire, if faulty, renewed.

Like airlines, hospitals take charge of people’s lives many times a day. Yet, health care has lagged behind other industries in putting safety first in dealing with its consumers. A systematic fault that put patients’ lives at risk discovered in one country would not surely be rapidly and simultaneously corrected by health services across the world.

Over 20 years later, I think it would be fair to say we are no nearer to passing the orange wire test in healthcare. Why? Why has there been so little progress?

Part of the answer is explained by Carl Macrae, the researcher whose work I was most influenced by when writing this series of articles. In his 2025 paper Systematizing Safety, he writes:

One of the most striking features of the past few decades of effort to improve patient safety is the isolated and individuated character of much of this work: the field has blossomed with separate interventions, stand-alone programmes, discrete processes, individual roles, and distinct policies. This blooming of a thousand flowers across the field is indicative of the creativity and experimentalism that has been brought to bear on the problem of patient safety. But it is also indicative of something much more troublesome: fragmentation […].

At best, the management and organization of patient safety tends to consist of loose coalitions of projects and activities brought together under broad thematic priorities coordinated by a handful of highly-motivated (but often under-resourced) enthusiasts. At worst, it is a tragic mess.

Aviation incident reporting systems had the benefit of growing out of an already developed safety system, strongly connected throughout the entire world. This is simply not the case in medicine.

How do you build an effective national incident reporting system? The answer is you can’t. Not on its own. You need to build the entire worldwide patient safety infrastructure along with it.

Perhaps a start would be the setting up of an organisation like the International Confidential Aviation Safety Systems (ICASS) Group, which provides advice, facilitates the exchange of safety information and identifies solutions to common problems between different countries. Given the World Health Organisation is an independent worldwide body, I think it would be one of the prime candidates to advise countries on how to develop safety management systems, and then to help connect these systems.63

Forget landing on the moon, or sending a horse into space, the greatest human achievement of our age may very well be building a worldwide patient safety infrastructure that passes the orange wire test.


After a decade-long development within the US #2 psychiatric system, Congress agreed that the PSRS should be expanded to every field in medicine, as part of a broader plan to implement a national patient safety management system. Sitting around the table in the main meeting room, clothes untucked, bags under eyes and hair slightly dishevelled, the staff at NASA #2 took stock of what they had achieved, and what was to come. “This is only just the beginning, isn’t it…” one member groaned. “Yes. Unfortunately, yes,” another half-heartedly responded.

The head of NASA #2, twiddling his thumbs, commented through half a grin “To think… all this happened because every country in the world decided to send a horse into space.”

Epilogue: the invisible boundaries between worlds

In 1989, Lucian Leape, fresh from learning that there were potentially 120,000 preventable patient deaths in the United States, decided he was going to learn as much as he could about error management in medicine. He strode into the Countway Library at Harvard Medical School and searched the archive for medical literature on mistakes. He found nothing.

Confused and slightly downbeat, he asked the librarian for help in his search. She thought the search strategy was fine, but recommended looking in the social sciences or engineering literature. When they searched this archive, hundreds of references came up. Many people in these disciplines already knew a lot about mistakes and how to prevent them. Lucian suddenly had a lot of reading to do.64

You and I live in a world of invisible boundaries. Problems that have plagued an industry for decades will have solutions sitting in an adjacent one. We frequently pass people from these parallel fields on the street, we chat to some of them, and a few will even be friends. But the gaps between subcultures mean that some of these people exist in industries with solutions lightyears ahead of others, in one very specific area. And no one notices.

What would happen if we merged these adjacent fields? What if human factors culture became embedded throughout the medical system? What if all the lessons from aviation reporting systems were successfully adapted to the psychiatric system?

What if psychiatrists reported all their mistakes to NASA?


And Breathe. Congratulations gang, we’ve made it to the end! Gosh, that was a long article, even by my standards… Thank you ever so much for reading this series - whether you stuck around for the whole shebang, or just popped by, I am grateful you’ve decided to read my work.

As ever, if you Alex is go for launch enjoyed the… T minus 10 seconds and counting …wait a minute… 9 did she just say I was “go for launch?” 8 Am I in a rocket launch? 7 that would explain why my desk is facing directly upward… 6 …and my desk is actually a control panel… 5 …and I’m in a space suit… 4, main engine start Main engine unstart! Main engine unstart! 3 Can someone please blow on the exhaust flames? 2 Ah well, might as well accept my fate. 1 Where am I going anyway? And we have liftoff! For Alex on his one-way mission to Mars! Mars? Awesome! Ha ha suck it Elon! … … … Did she say “one-way mission”?

If you enjoyed this article, please do drop a heart❤️, restack 🔄 or share 🔗. It helps others find my work!


Thanks for reading The Psychiatric Multiverse! Subscribe for free to receive new posts and support my work.

Share

1

The VA now jointly uses the Joint Patient Safety Reporting system with the Defense Health Agency.

2

This, quite clearly, is an understatement.

3

Looking back to the crash of Flight 514 and the United Airlines near miss that occurred six weeks prior, we see that for United’s program to pick up the problem on the flight chart, both the pilot’s and the ATC’s versions of events are important.

If an investigation is based solely on the ATC’s version, events like terrain near misses may never get picked up.

4

One of the barriers to reporting for clinicians is the amount of time and effort it takes to fill out a form.

With patients, this barrier may be even more of an issue given the potential confusion around alien terminology. Based on the limited literature available, patients also report differently to clinicians - their narratives are often more emotive and personal.

One of the few systems that currently offers patient reporting, the LFPSE, was a pretty jarring experience to use. It is seven pages long with multiple textboxes, and because you cannot look ahead on your first go, you don’t know how long it will take to complete.

The ASRS managed to keep their report to a double-sided A4 page. We can see from the 2009 PSRS form that the same can be achieved in medicine.

5

While patients don’t have to worry about professional disciplinary or punitive measures, this does not mean patients will report free of fear.

Clinicians have considerable power over a patient. A patient has a veto over treatment, but so does a clinician. I remember several times during my illness when fear of withheld treatment, or at least resistance to talking about treatment options, was a significant factor in my care.

Ideally, legal protections and clinical protections should be put in place to ensure patients feel safe and even encouraged to report. Rather than the medical system assuming patients want to blame, assuming they want to help clinicians learn may go a long way to helping patients report.

The language and story need to change.

6

Covered later in this article

7

I feel bad for kicking the LFPSE while it is down - it is a reflection of how few reporting systems allow patients to report, and it is admirable that the LFPSE has become one of the few.

However, when I filled in one of the reports, I had no option but to be anonymous. As soon as I submitted the form, I had no way of telling if anything would happen because of my report. And it left me feeling pretty empty.

8

One of the benefits of imagining an alternative Earth in an alternative universe is that we can make things happen there, however unlikely they are to happen here.

9

Patients commonly express not wanting the suffering that happened to them to happen to other people - hence the Never Again.

One of the major frustrations, at least from personal experience, is a system that drags and takes ages – in the form of waiting lists, admin errors, and not being listened to. Hence, giving the pronunciation a form of urgency.

10

Each lived experience investigator will need appropriate training, of course. Given the system will require a fair number of lived experience advisors, we are fortunate to be living in a time of increasing Patient and Public Involvement.

11

These topics are all covered later in this article.

12

Less briefly, as a Brit confused by the overwhelming complexity of the U.S. healthcare system, I hope what follows is accurate:

The Patient Safety and Quality Improvement Act (PSQIA) of 2005 allowed the formation of Patient Safety Organisations (PSOs). To my understanding, these are entities set up specifically to deal with patient safety matters. They can be non-profits, academic institutions, private companies, etc., operating with a broad range of health care providers on a national, state or local level (that may also specialise in particular areas of patient safety like medication errors). Examples include ECRI and ISMP; Nebraska Coalition for Patient Safety; and Child Health.

The PSQIA gave PSOs legal protections, meaning any information supplied to them (called a “Patient Safety Work Product”) by healthcare providers has legal protections (federal privilege and confidentiality), as long as the PSO is listed on the Agency for Healthcare Research and Quality (AHRQ) registry.

PSOs can then choose to pass this information on to the PSO Privacy Protection Center (PSOPPC), which deidentifies the data.

Information can include incident reports, near misses and root cause analyses, but these all have to abide by the “common formats” standardisation. Which, in many cases, isn’t used when healthcare providers report to PSOs.

Once deidentified, the PSOPPC gives the information to the Network of Patient Safety Databases (NPSD), which analyses it and aggregates it into a national database containing all the other reports and analyses submitted by the PSOs.

13

See the U.S. Department of Health and Human Services Office of Inspector General 2025 report into the Patient Safety Organization program (pdf here).

14

I couldn’t find any specific survey showing how aware the public was of the AHRQ. However, as newspaper articles often describe the AHRQ in a supporting context (e.g. “the HHS agency that prepared Tuesday's report”) or explicitly say it isn’t widely known (e.g. “Little-Known Health Agency”), we can infer that the AHRQ isn’t well known by the public.

15

And also this article is a lot more interesting/quirky if I choose NASA as the third party for Psyverse #2

16

A similar sentiment was expressed by Gary Lineker, England footballer (soccer player), when he said after England’s exit at the hands of Germany in the 1990 World Cup:

Football is a simple game. Twenty-two men chase a ball for 90 minutes and at the end, the Germans always win.

17

This will make sense in a future article.

18

Lucian Leape also expressed a similar way forward in his 2002 article Reporting of Adverse Events:

A more realistic alternative would be an expansion of systemwide programs, such as the Veterans Affairs program and specialty-based, focused reporting programs, such as those for neonatal and adult intensive care units. These programs have the advantages of the commitment of those who run them, the allegiance of reporters who trust fellow experts, and the ability to be tailored to practice needs. Similar programs could be developed by other specialties.

19

See Coates et al. (2020) & Nasrallah (2010)

20

See NIH curriculum supplement and paper by Dargham et al. (2023)

To the best of my knowledge, there isn’t a diagnostic technology used in the clinical diagnosis of a mental disorder more advanced than psychometric measures (which can sometimes support diagnosis through screening/measurement - clinical judgement is the norm).

Blood tests and brain scans can be used to rule out potential physical causes of mental symptoms (e.g. thyroid disorder causing depressive symptoms or an MRI to rule out cerebral pathology).

Blood tests can also be used to therapeutically monitor various organ functioning while taking medications that have the potential to harm them (e.g. lithium monitoring for potential long term damage of kidneys or thyroid)

21

There is evidence that some of these shared incidents aren’t looked at enough, with papers in the academic literature seemingly focused on suicide prevention and aggression.

According to Waddell and Gratzer (2020), there were fewer than 15 papers which looked at fall prevention within inpatient psychiatric units worldwide.

According to Thibaut et al. (2018), just 5% (17 studies out of 364 found between 1999 and 2019) covered medication errors.

22

While this sentence is focused on healthcare professionals, I want to make it clear that this goes both ways - patients can suffer physical & mental injuries too.

23

While I’ve enthusiastically set out the case for psychiatry as the testbed medical field for a national incident reporting system, I have to admit that, given this Substack is currently focused on psychiatry, I kinda assumed psychiatry would be good, then found evidence of why it would be good. In other words, I leapt wholeheartedly into the arms of confirmation bias.

The reality is there may be significant challenges to be overcome when trying to implement a psychiatric reporting system. The harms in mental healthcare are often diffuse and non-specific - they can unfold over many years. The subjectiveness of the field could lead to difficulties in figuring out discrete causal chains of events.

In other words, psychiatry could be pictured as one massive grey area where the word “incident” could mean very different things to different people.

24

As a reminder, the “Patient Safety Reporting System”

25

In our universe, the United States does not have a national investigation organisation. Though there is a push for one in the form of the proposed National Patient Safety Board (NPSB).

26

I’m not entirely sure how much of the report is available to healthcare professionals (since as a patient I cannot access it) - and if it includes the narrative free-text section.

27

You can request a narrative from a single report - but you need to know the case file number. Which arguably defeats the point of an open database.

28

It has been consistently shown that the number of reports does not reflect the actual incidence of adverse events.

Indeed, in the Data quality information section of the UK’s NHS Patient Safety Event Data Quarterly Publication, it warns as much:

LFPSE data does not, and cannot, provide the definitive number of patient safety events occurring in the NHS; it measures the number of safety events recorded. The number of recorded safety events has increased year on year, which likely reflects improved recording culture and cannot necessarily be interpreted as the NHS becoming less safe.

The primary purpose of incident reports is to identify underlying risks in the healthcare system, which will then trigger an inquiry (for further investigation into the causes).

29

The situation is eerily similar to the aviation incident reporting data collection of the late 1950s and early 1960s, which was cancelled because of unchanging statistics. The following passage, written by Bobby Allen in a 1968 letter to Harold Caplan, could very well sum up the situation in medicine now, nearly 60 years later:

Consider for a moment that the pie chart showing the distribution of causal factors in terms of percentages has not changed (in the USA) appreciably in the past ten years. Nor have we found it necessary to add many new areas of causation.

30

Within the context of health research free-text data, a 2025 paper by Ford et al. provides an in-depth summary of the subject.

The paper also covers the deidentification process for free-text data and the concern from the public that data is not linked up enough to improve care and services, then argues that as long as data is stored in secure data environments, the risk of reidentification is very low.

Note: I’m talking about fully deidentified data (as opposed to pseudonymised data, where the original identifiable data is still privately stored on a computer with a key linking it to the public deidentified data - fully “anonymised” means the original data is deleted)

31

It has been argued that the lack of openly accessible scientific reports in the health research literature has contributed to a lot of “waste”.

See Glasziou (2014) and Rosengaard et al. (2024)

32

An exaggeration, but likely not too far off

33

As the developers of the ASRS say:

The safety data system, and the participants, must operate in a political environment, for safety has high political visibility. A clear recognition of this indisputable fact before the system is implemented can likewise do much to defuse potential problems downstream.

Also, remember from Thin Line 4 that the lack of promotion to FAA officials nearly resulted in the collapse of the entire ASRS program.

34

The AHRQ does, however, also have a website called the “Patient Safety Network” or PSNet. With articles, training/education material and other resources dedicated to patient safety.

However, it is directed at patient safety more generally (rather than incident reporting systems) and appears to be aimed at healthcare professionals rather than a general audience.

Note: a worrying amount of material (studies, perspectives, weekly updates) just seems to have stopped in early 2025. This cannot be a good thing.

35

Note: promotion doesn’t have to come in the form of effort-heavy newsletter publications or videos. Automated messages or website information placed in optimal places could be a cheap and easy way to broaden awareness of reporting systems.

If it just so happens that an NHS administrator or clinician connected with the LFPSE (or patient safety system in general) in the UK is reading, here would be my suggestions:

  1. Targeted messages: put a message like “The NHS is always looking to learn. If you have experienced an incident, error or mistake in your care, we would love to hear from you. Please fill in the following form: https://www.patient-public-reporting.nhs.uk/ by the NHS Learning From Patient Safety Events (LFPSE) service”, in the following places:

    1. 111 service webpage

    2. PALS webpage

    3. In an automated email reply, when a PALS is contacted via email

    4. An automated message while a patient is on hold to PALS, the 111 service, or calling up a local GP

  2. Posters in waiting rooms of hospitals or GP surgeries (nowadays, there are large T.V. screens which show this type of informative material)

36

This also improves patient safety culture - it gets people from all parts of the system involved in patient safety

37

For evidence of the lack of feedback in healthcare incident reporting, see Beecham et al. (2025) & Carlfjord et al. (2018)

For a substantive summary of feedback mechanisms, see Benn et al. (2009)

38

See the Harnessing Technology section later on

39

I kept getting told that they were doing all these extra checks “for my safety”, and my local NHS psychiatrist even had the gall to say they were working at 100% efficiency.

I would argue that keeping me in limbo for a month and a half, while I was in extreme suffering, when a phone call to the psychiatry professor would have provided the necessary information, made things less safe for me.

One cannot help but wonder whether these checks were for the local NHS centre’s benefit, not mine.

40

Passive defensive medicine describes avoidance behaviours (e.g. avoiding high risk surgeries).

A 2023 systematic review by Zheng et al. (2023) put the worldwide average percentage of physicians who engaged in defensive behaviours (both passive and active) at 75.8%.

[Figure: subgroup analysis of the prevalence of defensive medicine by region and physician specialty. Source: Zheng et al. (2023)]

For an introduction to defensive medicine, personally, I found the 2017 paper by Leonard Berlin very readable.

The reasons physicians practice defensive medicine are summarised in Kakemam et al. (2017), while psychiatry-specific reasons are explored in Scognamiglio and Morena (2025).

41

The only reporting system I could find which had transactional disciplinary immunity was one implemented by a Swiss hospital group called “The HUG”.

42

In the UK, an example is the General Medical Council’s professional standards:

All medical professionals have a duty of candour – a professional responsibility to be honest with patients when things go wrong. As part of this duty, they must tell the patient when something has gone wrong, and explain the short- and long-term effects of what has happened

In the U.S., an example would be the American Medical Association’s code of medical ethics:

Patients have a right to know their past and present medical status, including conditions that may have resulted from medical error. Open communication is fundamental to the trust that underlies the patient-physician relationship, and physicians have an obligation to deal honestly with patients at all times

43

Aviation:

The FAA is the sole federal regulator in the USA & the CAA is the sole regulator in the UK.

Medicine:

In the UK, there is the GMC for doctor regulation; NMC for nurses, midwives, etc.; HCPC for 15 healthcare professions from art therapists to paramedics; the list goes on.

In the US, regulation is split by state and, to an extent, profession. For example, California has the Medical Board of California to regulate physicians and surgeons, the California Board of Registered Nursing to regulate nurses, etc.

44

I have mentioned the DMV example to show that it is possible to implement transactional immunity in the United States at the state level. The jury is out on whether physician reporting to the DMV actually reduces the number of traffic collisions (See Koppel et al. (2019)).

See Al-Azri (2020) for immunity argument during Covid pandemic.

45

There was only one study I could find which trialled transactional immunity when a reporting system was introduced - Wilf-Miron et al. (2003)

Many of the transactional immunity advocate papers, referenced below, cite the Wilf-Miron study.

  1. O’Beirne et al. (2010)

  2. Mikkelson et al. (2009)

  3. Schwappach & Boularte (2008)

  4. Kapur et al. (2015)

  5. The Learning from Bristol Public Enquiry page 368

46

Many voluntary patient safety incident reporting systems, for instance, allow for reporting of incidents which cause harm to a patient, blurring the regulatory lines between malpractice and learning. Aviation reporting systems, on the other hand, have a clear line between occurrences which cause harm/damage (accidents - no immunity offered) and ones which don’t (near misses/incidents – immunity offered).

47

An example of a real-life near miss which may apply is here.

48

I was initially thinking about perhaps proposing a variation of “Health Courts” (in my head it would be split between “clinician courts” and “patient remuneration courts”), which would work with assumed immunity as long as the medical professional apologised and engaged in incident reporting, eager to learn from the mistake.

But I was overwhelmed by tort law, apology laws and the various repercussions of policies. Incident reporting systems link up to almost all aspects of the medical system - it is really difficult to express just how interconnected and complex the subject of incident reporting systems actually is!

49

For example, one of the most explicit comments comes from a 2004 paper by Clarke:

Some reports will require follow-up information and/or further investigation as to why the event really occurred. For that reason, confidential reporting is preferred to anonymous reporting

Another example can be found in a 2008 Canadian Background document by Gregory:

Importantly, anonymous reporting does not allow the opportunity for follow-up if questions arise during the course of an investigation, sometimes making it difficult to get at the root cause(s) of the adverse event

A more recent implicit mention can be found in Mahmoud et al. (2023):

Furthermore, anonymity was cited as making information about the incident more difficult to obtain

50

More info can be found in this explainer by SafeCall, with an example walkthrough of a system by the University of London

51

For more on speech data anonymisation, see this explainer.

See this study by Diaz-Asper et al. (2025) for more on the clinical research utility of speech anonymisation.

52

Anonymous two-way messaging features would enable reporters to receive updates or clarifications without compromising confidentiality.

53

Other examples:

Cardiff & Vale University Health Board uses the WorkInConfidence software to allow employees to “speak up safely”, while Nottinghamshire NHS trust uses another software for a similar purpose.

54

I am not a software engineer and have never worked for a software company, so despite finding a few mentions of automated bug duplication detection (e.g. Su and Joshi (2018), Houpes (2016) & Amoui et al. (2013)), I’m not entirely sure how useful the tools currently are. See Rodrigues (2022) and Götharsson et al. (2024).

55

To my understanding, these detection algorithms tend to fall into either information-retrieval or machine learning methods. But there are quite a variety of methods. For a comprehensive summary, see this 2023 paper by Qian et al.
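To make the information-retrieval flavour concrete, here is a toy sketch in Python: each report becomes a TF-IDF vector, and pairs whose cosine similarity crosses a threshold are flagged as potential duplicates. This is a minimal illustration of the general idea, not any specific tool from the papers above; the function names and threshold are my own.

```python
import math
import re
from collections import Counter

def tokenise(text):
    # Lowercase word tokens; a real system would also stem and drop stopwords.
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(docs):
    # Build simple TF-IDF vectors for a small corpus of incident reports.
    tokenised = [Counter(tokenise(d)) for d in docs]
    n = len(docs)
    df = Counter()  # document frequency of each term
    for counts in tokenised:
        df.update(counts.keys())
    return [
        {t: c * math.log((1 + n) / (1 + df[t])) for t, c in counts.items()}
        for counts in tokenised
    ]

def cosine(a, b):
    # Cosine similarity between two sparse vectors (dicts).
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_duplicates(reports, threshold=0.4):
    # Flag pairs of reports whose similarity crosses the threshold.
    vecs = tfidf_vectors(reports)
    return [
        (i, j)
        for i in range(len(reports))
        for j in range(i + 1, len(reports))
        if cosine(vecs[i], vecs[j]) >= threshold
    ]
```

Given three reports where the first two describe the same insulin error in different words, `find_duplicates` flags only that pair. Machine-learning methods replace the hand-crafted similarity with a learned one, but the candidate-pair structure is much the same.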

56

The 2009 NRLS (precursor to LFPSE) report notes the exclusion of 11 duplications in a sample of 5,000. Though it is difficult to ascertain whether these are identical “copies” or duplicates.

A paper on more recent NRLS data by Alrowily et al. (2024) notes that:

Some incidents were repeated, referred to as duplicate incidents, and were therefore excluded from the analysis

The Sheffield Health & Social Care NHS trust 2024 Patient Safety Response Framework notes that duplicate investigations are a problem, with one of their aims to:

Reduce the number of duplicate investigations into the same type of incident, to enable more resources to be focused on effective learning and enable more rigorous investigations that identify systemic contributory factors.

57

Duplication is a big problem with FAERS. See Hauben et al. (2012).

One of the more recent proposals for a method to deduplicate FAERS can be found in Kreimeyer et al. (2025).

For reporting systems in general, there are papers on the classification of incidents through Natural Language Processing. But this is not really the same thing as deduplication. The closest I could find was this 2011 paper by Cure, Zayas-Castro & Fabri on clustering near-miss reports. Otherwise, the literature is thin on the ground.

58

As mentioned in the “Open Databases” section, reports on the same incident are still merged today.

59

For a review of the subject, see De Micco et al. (2025)

60

A paper by McNiven et al. (2021) explored the use of such a voice response system for incident reporting. They found that filing a written report took 10-15 minutes, with the voice response system reducing this to 97 seconds.

61

The database is organised towards investigation rather than the accumulation of reports. “Case files” are therefore collections of reports related to the same investigation.

62

The clustering software presents the analyst with potential duplicates.

Course of action #1: Novel incident

If the analyst judges the incident to be novel, a completely new case file is opened for a future investigation. A notification about the case file is sent to the reporter through TRUST, asking whether they would like to take part in the investigation. In the meantime, the analyst goes through the report and, if any information needs clarifying, “calls back” the reporter through a TRUST message. Both analyst and reporter (if they agree) stay part of the investigation through to completion.

Course of action #2: Non-novel incident, investigation incomplete

If the analyst determines that the incident report belongs to a previous case file, it is appended to that file. If the investigation has not yet started (or is not yet complete), a message is sent to the reporter through TRUST notifying them that their reported incident has happened before. The reporter is also sent the case file with all of its incident reports and asked whether there was anything novel about their experience, along with an invitation to become part of the investigation.

Course of action #3: Non-novel incident, investigation complete

This is perhaps the most important course of action. If the analyst determines the report belongs to a previous case file where the investigation has already been completed, the analyst sends a message through TRUST asking the reporter whether the changes recommended by the investigation have been implemented at the healthcare facility where they work, or where they were (or are) a patient (if a patient doesn’t know, staff at the facility are contacted). If the changes have not been made, the analyst follows up with management staff, asking why not. If the changes have been implemented, the analyst discusses with the reporter and management why the solutions have not worked. The process carries on iteratively from there.

In essence, the anonymous communication system, combined with finding duplicates, can be used to enhance the feedback process and ensure solutions have been implemented on a wide scale.
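Once the clustering software has (or hasn’t) matched a report to an existing case file, the three courses of action above amount to a simple dispatch rule. A minimal Python sketch of that triage logic - all class, field and action names are hypothetical, since TRUST is an imagined system:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CaseFile:
    # Hypothetical stand-in for a TRUST case file: the collection of
    # reports tied to one investigation.
    incident_type: str
    reports: List[str] = field(default_factory=list)
    investigation_complete: bool = False

def triage(report: str, matched_case: Optional[CaseFile]) -> str:
    """Pick the analyst's course of action for an incoming report."""
    if matched_case is None:
        # Course of action 1: novel incident - open a new case file
        # and invite the reporter to join the investigation.
        return "open new case file and invite reporter"
    matched_case.reports.append(report)
    if not matched_case.investigation_complete:
        # Course of action 2: known incident, investigation still open -
        # notify the reporter and ask what (if anything) was novel.
        return "notify reporter and ask for novel details"
    # Course of action 3: investigation already complete - check whether
    # the recommended changes were actually implemented.
    return "follow up on implementation of recommendations"
```

The point of the sketch is that the hard part (matching a report to a case file) happens upstream in the clustering step; the feedback loop itself is a simple three-way branch.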

63

Note: there is the International Patient Safety Organisations Network (IPSON) run by HSSIB, which seems to have similar goals.

64

Story from Chapter 2 (page 17) of Lucian’s book: Making Healthcare Safe. Note: Lucian made this book open source. It is free to read (and I highly recommend it).

]]>
<![CDATA[The surprising history of patient safety reporting systems]]>https://psychiatricmultiverse.substack.com/p/surprising-history-patient-safety-reportinghttps://psychiatricmultiverse.substack.com/p/surprising-history-patient-safety-reportingThu, 26 Mar 2026 18:00:57 GMT

This is article 9 in the Psyverse #2 series (and article 5 in the Thin line series). For previous articles, see the compendium.

Paradoxically, perhaps the most thrilling aspect of literature review is also the most infuriating. I had to rewrite this post twice after discovering articles and books that retold the history of patient safety incident reporting systems. The frustration of rewriting an article, however, pales in comparison to the joy of being able to tell a story that is not often told.

So, I hope you will enjoy this story about trailblazing nurses, an accidental anaesthesiologist, the malpractice insurance crisis and a Leape into the unknown.


A genesis in medication errors

Nurses are the unsung heroes of patient safety history, with significant contributions extending back to at least the early 1900s. The history of incident reporting systems is no different. As early as 1939, a professor of medical nursing named Margene Faddis proposed the recording of medication errors to prevent future mistakes from occurring:

[A] record, written by the nurse, is desirable. This is not a means of punishment but assures the appreciation of the importance of mistakes and also serves as future reference; it should include a statement of the notification of the doctor and full details concerning the error. Finally, let us remember that the ordinarily careful and conscientious nurse who makes the mistake has had, in the realization of her act, all the punishment and discipline and suffering which are desirable. What is done beyond that must be of a constructive nature or it had better be left undone.1

In 1953, Anne Byrne2 conducted a study that used a form to catalogue medication errors by nursing students. Through analysis of the data, Byrne found “wrong medication given” was the most frequent error and determined:

…that the nurse in charge was at fault in approximately 50% (11 in 20) of the instances, either because she did not transcribe the doctor’s orders correctly or did not make out the medication card properly.

The data was reviewed by a curriculum committee, and recommendations were made.3 A second study was then carried out shortly after, which found that the number of errors had been cut in half.

Two years after Anne Byrne’s paper, Catherine Corcoran published a master’s thesis with one of the first examples of an incident report for medicine: the “Unusual Occurrence Report”. The very simple open-ended form was used to collect evidence of medication errors by nursing students over the course of two years.4

Unusual occurrence report. Source: Corcoran (1954)


OOPSie-daisy

Possibly because of this early work on medication error incident reporting by nurses, the Board of Trustees at the American Hospital Association pushed for the establishment of full incident reporting systems back in 1958.5 The resulting form may have been one of the first to cover incidents beyond medication errors. Quite what came of this attempt, I’m not sure - I could not find a record of what happened next.6

In 1962, Kenneth Barker and Warren McConnell published a paper looking at the problem of detecting medication errors. Previous publications had established factors which discouraged the reporting of errors, such as fear of reprisal.7 But the best method to overcome these detection-inhibiting factors had not yet been established.8

Barker and McConnell tested three methods of collecting information: disguised observation, self-report and the study of existing records. They showed through comparison between the hospital’s records and observation that there was a severe underreporting problem.9 It seemed like the authors were hoping that the anonymous self-report “OOPS” form (One Object – Patient Safety) would be a way of addressing this problem.

The OOPS report was the earliest anonymous reporting form I was able to find in medicine. The form was very simple, providing only a section to write a story. Unfortunately, the authors received just six OOPS reports in seven months. In Barker and McConnell’s opinion, this was because of several methodological flaws.10 Despite the issues, research into the detection of medication errors pivoted away from voluntary incident reporting to observation.11 The story of healthcare incident reporting subsequently took a sideways leap into anaesthesiology, and a man dressed as a birthday present.

Source: Barker and McConnell (1962)

Making Sleep Safer

Just like in physics, the early years of anaesthesiology had a problem with infinities: the aim was to put patients to sleep for a finite amount of time, but on rare occasions this would inadvertently extend to infinity. Before 1954, quite how often deaths occurred from anaesthesia is difficult to say, as mortality reporting was anecdotal.12 The first large observational study of anaesthesia deaths was conducted by Beecher and Todd, who looked at 599,548 surgical patients and put the death rate due to anaesthesia at 1 in 1561.13

Today, general anaesthesia is almost as safe as commercial airline travel (mortality rate is approximately 1 in 250,000 for healthy patients and 1 in 10,000-15,000 for unhealthy patients14). So, what happened? How did anaesthesiology become so safe?

Source: Botney (2008)

Believe it or not, a researcher named Jeffrey Cooper (and the research team he was part of) accidentally made anaesthesiology safer. In 1972, Cooper thought he had been recruited to the bioengineering unit of the department of anaesthesiology at Massachusetts General Hospital to be an engineer. In an interview with the Center for Medical Simulation,15 he said the recruiter had a different idea in mind:

The person who brought me here, brought me here actually to work privately for his company on the side (which I hadn’t appreciated) because I had been doing this work on biogalvanic pacemakers. So, I was in this group not knowing exactly what I was supposed to do.

Fortunately for Cooper, Randy Meyer - an anaesthesiologist in the group - frequently took him to the operating room. Meyer helped Cooper see the issues with anaesthesiology, particularly with how people made mistakes while working with technology. To prevent these kinds of mistakes from occurring, the group decided to design and build a computer-driven anaesthesia machine.16

While working on the machine, Cooper was invited as a plus-one to a Halloween party.17 Dressed as a birthday present, he just so happened to sit next to Ron Pickett while carving pumpkins.18 Pickett was organising a NATO sponsored conference on human factors in healthcare. After realising Cooper was working on trying to prevent errors using anaesthesia machines, he invited him to present at the conference.

While Cooper was giving his talk, a man called Mel Rudolf was sitting in the audience. Rudolf worked for the American Institutes for Research, a company which used the critical incident technique to study human factors. From Jeffrey Cooper’s interview again:

Mel came up to me after the talk and said: “boy you have a great laboratory to study [anaesthesia mistakes]. Did you ever hear of the critical incident technique?” I said “no”. And one thing led to another and… we started to study errors in anaesthesia. We thought it was about errors with technology and equipment, but when [people told us their stories]… they weren’t mostly about equipment. They were about everything.

Between January 1975 and April 1977, Cooper and his research colleagues interviewed anaesthesiologists, staff members and residents at Massachusetts General Hospital.19 Using the critical incident technique, they managed to codify the types of preventable incidents involving anaesthesia and published their findings in a 1978 paper.

The critical incident technique was developed by John Flanagan (and collaborators) to select and classify aircrews during World War II as part of the Aviation Psychology Program within the U.S. Army Air Forces. It was a five-step generalised qualitative research procedure for “gathering certain important facts concerning behavior in defined situations”.20 While it had a broad application, it was particularly useful in studying mistakes.

Within medicine, dentistry was the first to use the technique, in 1953,21 to evaluate student performance.22 In 1970, it was used to classify whether actions of physicians during a critical incident had beneficial or detrimental effects on a patient. Before the paper authored by Cooper and his group in 1978, the only time the critical incident technique was specifically used to study errors was a very brief 1960 study of errors in drug administration by nursing staff.

Through the broad nature of the critical incident technique, Jeffrey Cooper uncovered a wide range of human factor mistakes within anaesthesiology. Most importantly, the study by Cooper and colleagues showed the importance of recording and analysing incidents that didn’t end in mortality.

While the work of Cooper and colleagues provided a template for the development of incident reporting systems and their analysis in anaesthesiology,23 the study on its own wasn’t enough to spark a wider patient safety movement. If the procedure for incident reporting came from aviation, the motivation to implement it came from another familiar source of medical change: a health insurance crisis.

The Malpractice problem

While a plane crash into a mountain was triggering a revolution in aviation reporting systems in the mid-1970s, doctors in the United States were struggling to find insurance coverage. The number of malpractice insurance claims, as well as the amounts awarded, was increasing rapidly.24 The crisis was particularly pronounced in California. As a result of multi-million-dollar settlements, malpractice insurance premiums were rising by up to 400%. More problematic, however, was the exodus of many insurance firms from the market due to the losses incurred from the increased number of claims.

The need to explore alternative malpractice insurance systems resulted in the California Medical Insurance Feasibility Study. From the 1978 paper by Mills:

What is needed, in contrast to the existing malpractice trials under tort law,25 is a system that will cost less to administer, and will stabilize compensation through scheduled, standardized benefits. In this way, patients who have disabilities as a result of health care management could be fairly compensated automatically, without having to prove fault. However, reasonable cost estimates for such alternative plans cannot be made without valid information concerning the type, frequency and severity of those disabilities for which compensation might be paid.

In other words, there was no data on how big a problem malpractice was. The California Study looked at a random sample of 20,864 inpatient charts from 23 California hospitals. They found that approximately four percent of patients who were admitted to hospital suffered a potential compensable event, with 17% of these events due to negligence.

While the Californian government implemented several solutions to reduce the number of malpractice claims (and the monetary amounts awarded for each), the underlying factors remained unchanged. Claims, as well as monetary damages, continued to rise at a constant rate, subsequently leading to another mini-crisis in the mid-1980s.

The now decade-old California study remained the only source of information on the incidence of iatrogenic harm and malpractice. Howard Hiatt, dean of the Harvard School of Public Health, and James Vorenberg, dean of the Harvard Law School, wanted to rectify the situation.26

They, along with other colleagues, conceived of a study like the California one, but larger and, importantly, sampling patients that represented the population. All they needed was funding - a task easier said than done. Various institutions in Massachusetts thought the study was a terrible idea.27 Fortunately, or unfortunately, depending on one’s viewpoint, New York State was struggling with spending on medical liability claims. The Governor, Mario Cuomo, was happy to fund the project.

The Harvard Medical Practice Study (HMPS) was carried out in the late 1980s.28 A random sample of 30,121 hospital records from 51 hospitals (within New York state) over the 1984 calendar year was reviewed. The researchers estimated that 98,609 adverse events occurred, with 13,451 (13.6%) leading to death.

If these rates were representative of the population, the study implied that 1.3 million patients were injured by medical care, with 180,000 dying from these injuries. This was much higher than the researchers had anticipated.

An uncomfortable truth

Lucian Leape was encouraged to interview for a researcher position on the Harvard Medical Practice Study six months after it began, and immediately wanted nothing to do with it.29 He had spent the previous year studying epidemiology and statistics in the hope of making a career change from academic paediatric surgery to health policy, and didn’t want to waste time on malpractice, an issue in which he saw no reasonable prospect for change.

Fortunately, Howard Hiatt managed to convince Leape of the underlying aims of the project: to expose the frequency of medical errors. The Harvard study ended up revealing something much more striking than the 98,609 adverse events. Two thirds of the events were caused by system errors that were detectable in the medical records. In other words, they were preventable.

This fact spurred Leape’s interest in patient safety. After studying the human factors literature from fields outside medicine, he wrote a paper titled Error in medicine where he laid out how the healthcare system could be changed to prevent errors.

The Harvard Study and the Error in medicine paper helped to inspire further research into medical errors.30 In combination with high profile medical mistakes covered in the media, a patient safety movement started to build through the 1990s.

Patient safety reporting systems started to be set up. In the United States, the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) initiated a sentinel event reporting system for hospitals in 1996. The introduction of the MedMARx program, an internet-based anonymous system for hospital medication errors, occurred in 1997. Many states introduced or improved their mandatory reporting systems. Australia introduced the Australian Incident Monitoring System (AIMS), and the University of Basle in Switzerland set up a Critical Incident Reporting System (CIRS) for anaesthetists.31

Generally, however, patient safety reporting systems tended to be very specific or very local.32 It would take a patient safety revolution for the proliferation of reporting systems to truly ramp up.

The Patient Safety Revolution

Towards the end of the 1990s, the patient safety movement was slowly growing in the form of committees, conferences and reports. This culminated in the Institute of Medicine’s (IOM’s)33 Quality of Care Committee. Throughout 1999, they met with various experts and discussed ways to improve patient safety and quality of care. Two reports would be written. The first, titled To Err is Human: Building a Safer Health System, focused on patient safety.34

Amongst the many recommendations, the IOM report made two specifically about reporting systems. For accountability of medical professionals, the Quality of Care Committee recommended that a national mandatory reporting system be established for adverse events that resulted in death or serious harm. For learning and improvement, the committee encouraged the development of voluntary reporting systems.

The response to the report was beyond what any of the committee members imagined.35 Quite why the report cut through the noise to influence government officials and medical professionals, while also increasing public awareness, has been debated. Whatever the reason(s), there is no doubt that it did.36

The U.S. federal government sprang into action (which might be a surprise to many). The Quality Interagency Coordination task force (QuIC) responded to the report with extensive recommendations on actions to be taken. This included a detailed pathway to a national mandatory reporting system and a thorough plan on review and implementation of voluntary systems.37

It is difficult to express how significant To Err is Human was.38 I could tell you it has been cited over 27,000 times in a quarter of a century. I could show you a graph of how the research literature on patient safety tripled shortly after the report was published. Or I could note that the rate of biomedical US federally funded patient safety research awards increased by over 2800%. But it still wouldn’t convey how the conversation about patient safety changed at the turn of the century.

Source: Stelfox et al. (2006)

Starting in the early 2000s, high-income countries began to create patient safety organisations and national reporting systems. Some countries started with sentinel events (e.g. Sweden & Norway) or focused on specific clinical domains (e.g. France – nosocomial infections), while others developed systems independently of government (e.g. Australia). The UK, Denmark and Ireland developed comprehensive wide-scale reporting systems.39

After the World Health Organisation (WHO) wrote draft guidelines for the development of adverse event reporting and learning systems in 2005, middle to lower income countries began the process of developing their own systems. The WHO, collaborating with many countries, subsequently developed the Minimum Information Model in 2016, with the intention of standardising incident reporting forms worldwide.40

Where are we today?

Despite the explosion of activity in the early 2000s, progress has generally been slow worldwide. While 94% of the 108 surveyed countries in WHO’s 2024 Global patient safety report have adverse medical reaction reporting systems, only a third of them have specific national patient safety programmes.

This is perhaps an indicator of where reporting systems in the world stand today. Trying to write about what happened after the year 2000 has given me a headache. Not because the fundamental logic of modern patient safety incident reporting systems is hard to understand, but rather due to the sheer number and diversity of systems. There are so many, of so many different types, in different configurations, either interacting with each other, or kept separate. Trying to map it in my head has been a terribly frustrating experience.

So instead, let’s try to build our own. In the next article, we will imagine an alternate universe where we have free rein to plan and develop a national patient safety incident reporting system.

What could possibly go wrong?


This concludes the thin line series. Just two more articles to go in the Psyverse #2 series! Plus a bonus fun article as a reward for sticking through our journey into reporting systems.

If you enjoyed today’s article… oops… dropped my phone… hold on… If you enjoyed today’s article about the history of pa… ooohhh noo! not good! I accidentally created a mini nuclear explosion in my mortar bowl! Gosh, how clumsy of me! Allow me to decontaminate and fill in one of those OOPS forms… Hey! Would you look at that! There is a picture of exactly what happened to me on the form! Anyway, where was I? Ah yes…

If you enjoyed today’s article about the history of patient safety incident reporting systems, please do consider dropping a heart❤️, restacking 🔄 or sharing 🔗. It is much appreciated!



1

While not related to incident reporting, Margene Faddis includes a passage in her 1939 paper that I couldn’t help but share with you:

A Plea to the Doctor

Nurses are in a peculiar and often difficult situation in carrying out the orders of physicians. Friction may arise where it could be avoided if both nurses and doctors would remember that the only thing which really matters is the welfare of the patient and not that one group gives and the other carries out the orders. Nurses and doctors are not always so tactful and discreet in their daily relations as they ought to be.

If, by some miracle, I could address a plea to all members of the medical profession who give orders for medications, I should say something like this, “We are neither sentimental about our functions as nurses nor under false impressions concerning our relative responsibilities. We want to carry out your orders because we, like you, desire to help our patients. Sometimes, however, you do make it unnecessarily difficult for us. You insist upon our taking verbal orders when we are taught not to do so. You often write orders we can interpret only with great difficulty or not at all, either because you do not write legibly or because your directions are not clear. Sometimes, we know, we give the impression of being ‘sticklers’ but on sober thought it does seem that orders involving medications should be perfectly clear to all who are concerned with their administration.

“Again you write a dose which is much larger than the usual one and then make an inquiring nurse feel that you do not wish your orders to be questioned. Of course that is what you meant! But sometimes, when you do slip, if the nurse really does give your order you feel like growling (and perhaps you even do), ‘Anyone would know better than that.’ And if a nurse who has never known as large a dose as you have ordered to be given, gives only a portion of it because she cannot reach you, you may lose control of your temper and say unkind things to her and-yes!-even in public places. All this is because you think she is tampering with your orders. It need never have happened, however, had you explained to the nurse or added to your order, ‘I mean this dose,’ or some similar statement.

“Sometimes, too, you use the metric system and again the apothecaries. There are even occasions when you combine both in the same order. Some of you use official names entirely, some proprietary, and some a curious conglomeration. New preparations are introduced and you do not realize how difficult it is to acquaint large groups of nurses with the fact that you all mean the same thing when Dr. X writes ‘Vitamin C,’ Dr. J. writes ‘Cebione,’ Dr. A writes ‘Ascorbic Acid,’ and Dr. L. writes ‘Cevitamic Acid.’

“And sometimes you aren’t too helpful when we seek information as to how you wish a drug administered. So often you reply, ‘How do you usually give it?’ On the contrary, one of you criticizes us harshly for giving a medication by a certain method when just the day before Dr. A has assured us it is correct. Pray, what does a nurse do in that case?”

2

I am hopeful this obituary is of the Anne K. Byrne who wrote the paper. The article did not mention an affiliation or the hospital where the study was conducted. Further, I think it might be the only paper Byrne published. Apologies if I have the wrong Anne Byrne.

3

Recommendations were as follows:

  1. Teaching the students the proper way to identify patients and to make out medication cards in the course on nursing principles and practice: emphasizing the need for exercising great care in checking medication cards, labels, and doctors’ orders.

  2. Teaching the action of drugs and legal responsibility in accepting verbal orders, in the course in pharmacology.

  3. Discussing, in all courses, how to minimize carelessness and forgetfulness.

  4. Discussing organization of work in nursing principles and practice and helping students organize their work on the clinical services.

4

A couple of points here. Firstly, the thesis mentions a “Hospital X” where the study took place. I don’t know if it is supposed to be a placeholder, or censorship, or simply that there is an actual Hospital X.

Secondly, when reading the thesis, I found it striking that nurses were already building the type of “supporting culture” that medicine is trying to implement today - simply through intuition:

The data collected showed that the faculty had been concerned with the problem for some time. They were acutely aware of the possible psychological trauma for students responsible for errors and felt that everything possible should be done to remove any punishment implication from errors. This attitude was reflected in the development and use of the “Unusual Occurrence Form” for reporting errors, the approach in teaching, and the ultimate distruction (sic) of the report when the student graduated.

5

This may have been a development from an earlier accident report form.

Writing in a 1953 edition of the “Hospitals” journal (Vol 27, Issue 3, pg 90-92), Kent Francis exclaims that:

The American Hospital Association and the National Safety Council have achieved a reasonable facsimile of the impossible—an accident report form on which can be reported injuries to patients, staff and the visiting public.

Francis also attaches the form:

6

When I contacted the American Hospital Association, they said they did not have any surrounding documentation about the May 13th meeting in 1958, when the Board proposed the incident reporting systems.

Finding the “Hospitals” journal source took me by complete surprise – evidence was limited for medication error incident reporting, never mind any hint of the establishment of a full-blown incident reporting system.

It is tough to gauge whether hospitals actually did implement incident reporting systems - reviewers at the time, up to around the 1970s, commented on the lack of literature on incident reporting.

7

See Schlossberg (1958, Hospitals, Vol 32, Iss 19) & Concoran (1952).

A paragraph in the Barker and McConnell paper, citing a rather forward thinking paper on homosexuality by Kinsey et al. (1948), sums it up nicely:

Dr. A. Kinsey stated that there were, after all, only two reasons why anyone should hesitate to contribute his sex history to a scientific project. He may hesitate because he fears that the interviewer will object to something in his history, and he may fear a loss of social prestige, or legal penalties, if his history were to become a matter of public knowledge. Certainly the basic objections of Dr. Kinsey’s subjects were essentially those which are confronted in the detection of medication errors by self-report methods.

8

Arguably, Concoran’s 1952 study suggested one such method (destruction of reports) but did not test whether the method worked.

9

Caveats here:

  1. A very small total observation time - only two eight-hour shifts.

  2. Very small sample (9 nurses)

  3. Little detail given as to the process of the internal, non-anonymous, accident reporting system that made up the hospital records.

  4. The extrapolation to get the number of incidents in a year was a simple multiplication by a factor of 603 (fluctuations in reporting were not accounted for - it could, for example, have been a particularly stressful week).

  5. The disguised observation method (how observers remained disguised) wasn’t mentioned.

10
  1. On the busiest floor, the forms were required to be kept in the office of the supervising nurse. To get a form, a nurse had to request one, thus defeating the point of anonymity.

  2. Five of the forms were from obstetrics, psychiatry and pediatrics - fields which at the time handled fewer drugs than others (medication in psychiatry is a relatively recent phenomenon) - and these forms were freely available to collect anonymously.

  3. There was a high turnover - continual orientation was therefore difficult

  4. In the questionnaire, 40% of the nurses objected to any kind of anonymous reporting. They felt their integrity was being questioned.

11

For a summary of medication error research using direct observation, see both Keers et al. (2013) & Keers et al. (2013)

12

The first reported death attributed to anaesthesia was in 1848, when a 15-year-old girl called Hannah Greener died due to complications during a toenail removal. However, it is unclear whether anaesthesia was in fact the cause. Paul Knight and Douglas Bacon tell the story in this highly interesting article.

A 1948 article by Macintosh, aptly titled Deaths Under Anaesthesia, tells the story of all the different kinds of mistakes that led to preventable anaesthetic deaths back then.

Though it is important to note the situation was likely significantly better than in surgery before anaesthesia, when the chance of dying was around 50% (and even if you survived, there was a decent chance you would be traumatised for life - one could argue anaesthesia is the best preventative trauma treatment we have).

For more on the history of anaesthetic mortality, see: Jones (2001), Aitkenhead and Irwin (2021), Fenwick (2007) & Braz et al. (2009)

13

This is the rate for all anaesthesia deaths, both where it was the primary cause and where it was an important contributory cause.

For only primary anaesthesia deaths, this rate falls to 1 in 2680.

14

See Botney (2008). Keep in mind that, according to Schiff & Wagner (2016), estimates have differed widely between studies.

16

Cooper mentions that the prototype machine was probably one of the first microprocessor-based medical instruments.

17

Cooper mentions his wife was a medical technician working in a different group. This was the group that organised the Halloween party.

18

Spoiling the funny mental picture, Cooper says he must have taken off the birthday present costume at this point.

19

The 1978 paper mentioned in the next sentence only mentions the study took place at “one urban teaching institution”. Considering all the authors are from Massachusetts General Hospital, I’ve put two and two together.

20

As outlined in a paper by Flanagan (1954).

The five steps are:

  1. Determination of the general aim of the activity

  2. Development of plans and specifications for collecting factual incidents regarding the activity.

  3. Collection of the data

  4. Analysis of the data

  5. Interpretation and reporting of the statement of the requirements of the activity

The method was an attempt to study, as objectively as possible, events that occurred in pursuit of a general aim – and how the behaviours of individuals both helped and hindered achieving that aim.

See Flanagan’s paper for more on the method (pdf here).

21

O'Donnell, Robert J. The development and evaluation of a test for predicting dental student performance. 1953. (no link available)

22

The technique was used in a similar way to evaluate postgraduate training in psychiatry in 1972.

23

It also sparked a patient safety movement in anaesthesiology.

24

According to Cooper and Stephens (1977), the number of claims increased in part because more people were able to afford healthcare, naturally resulting in more litigation. But this didn’t fully explain the increased rate of litigation.

The introduction of new technologies (e.g. the CT scan) was cited as another potential factor. New technologies bring new risks, along with higher expectations of care - media coverage fostered expectations of universal cures that were subsequently not met. New technologies also drove greater specialisation, meaning patients weren’t solely seeing their GP or family doctor for all their care, but larger teams of diverse practitioners. The resulting deterioration of the doctor-patient relationship was seen as a major contributor to increased malpractice claims.

The factors cited as leading to greater award amounts were: new technologies having the potential to produce more severe injuries (excessively high radiation doses, for example); greater public awareness that doctors had malpractice insurance covered by a big company, meaning a claim could be seen as damaging a big company rather than an individual doctor; and general inflation.

Posner (1986) (pdf here) adds the decline in the US stock market as another potential reason for the crisis.

25

Tort is the branch of civil law dealing with “wrongs”. In the context of medicine, then, it is essentially the branch of law that deals with malpractice and other harms to patients in the healthcare sector.

26

Hiatt writes in a special report about the Harvard paper (mentioned later):

The debate on tort reform has been long on rhetoric and short on facts. Thus, we have few data to answer these basic questions: What is the actual extent of injuries suffered by patients as the result of medical intervention? What proportion is due to substandard care? How severe are the economic losses inflicted on patients? To repair those losses, what are the sources of financial redress — not just tort law, but the array of other public and private programs? How accurately and how effectively does the tort system respond to the victims of negligent treatment? What fraction of hospitalizations leads to malpractice claims? What fraction of cases in litigation arises from hospitalizations with no record of patient injury?

27

According to Lucian Leape’s book Making Healthcare Safe, anything that made doctors look bad was not seen as acceptable.

28

See Brennan et al. (1991) & Leape et al. (1991)

29

Much of this section is taken from Leape’s open source book Making Healthcare Safe. It is therefore written from his perspective - others may recall events differently.

30

e.g. Wilson et al. (1995), Thomas et al. (2000)

31

The sources from this paragraph come from:

Shojania et al. (2001) (JCAHO, AIMS), Shaw and Coles (2001) (MedMARx, JCAHO, AIMS, Basle, state programs), and Chapter 6, page 88 of The Patient Safety Handbook by Youngberg and Hatlie (2004) (JCAHO)

32

MedMARx, for instance, was only for medication-related incidents. The Swiss system was only in one department at one university.

34

For claims made in this paragraph, see chapters 4-9 in Leape’s open source book Making Healthcare Safe

35

From Leape (2002):

The speed and intensity with which the IOM report captured media, public, political, and professional attention surprised everyone. Neither the shocking statistics nor its central message, that errors are caused by faulty systems, was new, but the report forcefully brought them to public awareness.

36

See page 155 of Leape’s book Making Healthcare Safe & the answer to the first question of this 2019 Q&A for suggested reasons.

37

For more info on what QuIC recommended, see pages 50-58 of this report from 2000 by the task force.

38

It is important to note that To Err is Human was not the only influential report - it wasn’t even the first. In 1995, the Quality in Australian Health Care Study was released, proving highly influential. The UK released its An Organisation with a Memory report just a couple of months after To Err in 2000, while New Zealand’s Review of Processes Concerning Adverse Medical Events report was released in 2001.

See Shaw and Coles (2001) for more on these landmark studies.

39

For more examples, see Chapter 5 of the 2005 WHO draft guidelines & Table 2 of the 2009 review of national reporting systems in Europe

40

The success of this has been limited: according to the 2024 WHO Global Patient Safety Report, only a third of countries have aligned their reporting format to the WHO minimum information model.

]]>
<![CDATA[Development of the ASRS and the battle for immunity]]>https://psychiatricmultiverse.substack.com/p/development-of-asrs-and-immunity-battlehttps://psychiatricmultiverse.substack.com/p/development-of-asrs-and-immunity-battleThu, 19 Mar 2026 18:00:54 GMT
Source: NASA Reference Publication 1114

This is article 8 in the Psyverse #2 series (and article 4 in the Thin line series). Find previous articles in the compendium.


James Dow didn’t have time. The tragedy of Flight 514, alongside other crashes and safety failures publicised in late 1974, resulted in rapidly decreasing confidence in the FAA’s ability to function.1 This perceived lethargy had likely cost the then-current FAA Administrator, Alexander Butterfield, his job.2 When James Dow took over as Acting Administrator in April 1975, he understood his term would likely be a matter of months, not years. But he also knew his authority would be limited if FAA staff saw him merely as a transitory figure. So, he acted, and made decisions, as if he were a permanent appointee.

Source: Troubled Passage (1978)

Dow had over 30 years of experience within the FAA, working on some of the most significant development projects.3 This deep understanding of the bureaucratic machinery helped him to push through a whole heap of initiatives in a matter of months. One of these initiatives was the Aviation Safety Reporting Program (ASRP), a non-punitive incident reporting system, announced in May 1975.

Dow’s inspiration for the ASRP came from the FAA’s Near Midair Collision reporting program (NMAC), which ran from 1968 to 1972. The NMAC was essentially the same as all the other near miss reporting systems mentioned in this series, apart from one crucial difference: reporters had extensive transactional immunity.4 Even if the reporter had been reckless, or evidence of a misdemeanour had been found from sources outside the report, the FAA could not apply punitive action.5 By submitting a report, the reporter received immunity in return from the FAA – hence “transactional”.

Just like a previous FAA near miss program under FAA administrator Quesada, this immunity provision was dropped with little warning in 1971, before the entire program was ultimately cancelled in 1972 due to the subsequent dramatic drop off in reporting.6

Dow wanted to both reignite and extend the cancelled NMAC through the ASRP. Reignite by reintroducing transactional immunity (with one caveat7) and extend through full incident reporting. Essentially, the Aviation Safety Reporting Program was almost identical to the incident reporting program Oscar Bakke proposed two decades earlier – but with enhanced immunity provisions.

And, in similar fashion to all the previous reporting systems devised by the FAA, the ASRP ultimately failed – in record time. The number of reports that came in was so low that Dow changed course after just 3 months. Reporting to the same organisation that handed out punitive measures – and had previously taken away immunity without any notice – likely deterred reporters.

With an air of familiarity to the events that led up to Project SCAN 15 years prior, Dow turned to a third party to ensure anonymity from the FAA. But this time it wasn’t a non-profit organisation. It was a governmental one – NASA.

Building the ASRS

For an agency that had just put men on the moon, you might think overseeing an aviation reporting system for ordinary mortal folk wouldn’t be the type of small step NASA liked to take. But this would be forgetting the first “A” in the National Aeronautics and Space Administration. A deep knowledge of aeronautical principles,8 combined with a sky-high reputation and a lack of regulatory function, made NASA one of the most viable, trustworthy and impartial third parties. All the FAA needed to do was knock on the right door. Fortunately, they did.

Charles Billings had a curious career path. He trained as a medical doctor, then served in the US Air Force as a squadron flight surgeon. After his military service, he worked as an aviation medicine researcher at Ohio State University before joining the human factors group at NASA Ames Laboratory. A couple of years into his role as a medical research officer, he received a knock on his door.

Charles Billings. Source: NASA Astrogram Newsletter (Sept 2010). Photo by J.T. Heineck

Billings was tasked by the FAA to lead the construction and implementation of a new incident reporting system – the Aviation Safety Reporting System (ASRS). The FAA would fund the program, but it would be run entirely by NASA. With the help of aviation psychologist George Cooper and research pilot John Lauber, Billings developed a rough protocol for the ASRS. After three weeks of refinement through NASA management, the protocol was presented to the FAA and quickly accepted.9

A Memorandum of Agreement was signed on the 15th of August 1975. The ASRS would be designed to perform four primary functions:

(1) receipt, de-identification and initial processing; (2) analysis and interpretation; (3) dissemination of reports and other data; and (4) system evaluation and review.10

with NASA developing procedures for receiving, de-identifying and processing ASRS reports, ready for operation on April 15th, 1976. This meant Billings and the NASA Ames team had just six months to build an incident reporting system to accommodate the entire United States.11

Within two months, a procedure for day-to-day operation of the system had been written, and a formal request for proposals was issued to find a contractor to help process the ASRS reports. The contract was awarded to Battelle a week before the deadline. On April 15th, Advisory Circular 00-46A announcing the ASRS came into effect, and reports soon came flooding in – initially 250 per week, before eventually settling at 150. This was more than expected.

I’m not sure what it would’ve been like to have had piles of reports turn up every day, with procedures being updated on the fly and no working data-entry computer system. The word “stressful” comes to mind.12 In this situation, the team followed a “first-thing-first” policy, focusing on establishing satisfactory procedures, administrative control over the growing number of reports, and the first stages of the ASRS reporting process, which went as follows13:

After an incident, an air traffic controller or pilot would fill out a report form, the entirety of which fit on a double-sided piece of paper.14 Once the form reached NASA, it was reviewed for appropriateness and whether it contained time-critical information.15 The report would then be sent off to an analyst – an aviation expert, such as an ex-pilot – to identify and categorise potential causal factors, among other actions.16 Sometimes, the analyst would determine that more information from the reporter was required. In this case, the reporter would be called up for clarifications – termed a “callback”.17

The first page of a filled-in ASRS form. Source: NASA Reference Publication 1114

After analysis, the slip containing the contact information of the reporter was cut off and returned to the reporter as proof the report had been received and analysed. The report had therefore been preliminarily “deidentified” and was subsequently sent off to be encoded.

In the initial absence of a database, this involved transcribing pertinent information and analysis on to a separate record sheet (to varying degrees of completion) which was subsequently filed away until the ASRS processing procedure had been fully developed. The original reports were then destroyed after an arbitrary waiting period.

By the 6th of July 1976, Battelle had demonstrated they were ready to process, analyse and store individual reports, and began doing so the next day. Nevertheless, by Christmas 1976, the “ingestion of data had so far exceeded digestive capacity that strong measures were required”.

A team of Battelle researchers started tackling the fixed-field and free-text analysis, as well as the encoding procedure.18 Then, they introduced a new step: diagnostics. Each encoded analysis would have an additional diagnostics sheet where standardised keywords were applied to classify the types of causal factors in each incident – useful for future research and trend analysis. Quite incredibly, in the early stages of the prototype ASRS, a single person developed the diagnostic lexicon and applied it. Each and every one of the hundreds of reports per week had to go through this single diagnostician.19

The ASRS project was overseen by an advisory subcommittee.20 Over the next few years, they helped to direct the further development and refinement of the ASRS as it went from prototype to fully functioning system. Eventually, the backlog of paper encoded records became fully digitised on Battelle’s database, realising Bobby Allen’s vision of computer utilisation. There was, however, one significant problem with the rapid change and development of the ASRS procedures: while the ASRS team understood how the system worked, almost no one else did.

Within the first couple of years, Advisory Circulars, brochures and posters publicising the ASRS were widely distributed to facilities by NASA and the FAA. But this wasn’t enough. In surveys, it was evident that “a large proportion of the flying population lacked knowledge of the ASRS and its immunity features.”21 The Advisory subcommittee therefore recommended that NASA develop an accessible way to communicate with the wide range of professionals that worked in the aviation industry.

The task fell onto the shoulders of Rex Hardy, a decorated WWII aviator and corporate test pilot,22 who, along with Billings, created CALLBACK – a short, easy-to-read monthly newsletter addressing serious subjects in aviation safety. Rex, however, had a “distaste for well-intentioned but dull exhortations on safety”, so he endeavoured to make the publication interesting, instructive, entertaining, and, god forbid, funny. It seemed to work. CALLBACK is still in production today.

CALLBACK From NASA's Aviation Safety Reporting System
Source: CALLBACK Issue Number 187

Promotion was crucial to the success of the ASRS. The program existed in a political environment: having the aviation community not only onside but actively advocating for the system helped provide the political pressure needed to keep the program funded and running. As the writers of the NASA ASRS history put it:

Even if all the other motivators are present in the reporter community, a lack of appreciation for the purpose and scope of the program or difficulty in obtaining the means to communicate will eventually result in the termination of an effective and valuable safety system.

One of the greatest strengths of the ASRS, that promoters could point to, was the immunity policy. The transactional immunity policy – submission of a report led to a waiver of disciplinary action from the FAA for everyone involved in an incident – was strengthened relative to the FAA’s failed Aviation Safety Reporting Program that the ASRS replaced. Reckless operation was partially protected. With the addition of NASA as a third party, reporters also had use immunity – NASA could not share any information from the reports which might reveal the identity of the reporter to the FAA. Given the lack of trust between the aviation community and the FAA, immunity became a vital provision to incentivise incident reporting.23

As the 1970s drew to a close, the value of the ASRS was felt by both its developers and the reporters using it.24 Everything was going swimmingly until the 31st of January 1979, when John Winant,25 who had recently compiled a report evaluating the ASRS, received a letter from the FAA:

The FAA letter… largely addressed the immunity issue, a topic which had not been mentioned in my report to him. [T]he text said, “We are definitely considering a program of modification” which would “allow enforcement proceedings against any person who has allegedly violated an [FAA regulation]….” This meant the limited immunity provision would be canceled…. The pertinent paragraph concluded with this sentence: “Before taking such action, further discussion with you and appropriate NASA officials is contemplated.” To me, and particularly in the context of prior concerns over the immunity issue, the word “discussion” had a distinct meaning. I believe, in reading that sentence, that a process of consultation would begin shortly, and that full opportunity to modify the FAA intention would exist. That belief was shattered within 24 hours.

On February 2nd, the FAA told Winant that they were going to go through with the cancellation of the immunity provision.

The mistake that almost cost immunity

When Langhorne Bond, the newly appointed FAA administrator, sat before a House of Representatives subcommittee in November 1977, he only had good words to say about the ASRS. Bond classified “NASA analysis of the aviation safety reporting system as a very healthy, constructive and professional effort”, saying later on the FAA “found the reporting system to be a helpful mechanism for assimilating as much information as possible…”. In a March 1978 letter to NASA Administrator Robert Frosch, Bond said the ASRS was providing “needed and valuable insight” confirming the FAA “was not contemplating a change of the [immunity] provision”.26

Langhorne Bond. Source: WikiCommons

Just one year later, Bond’s opinion appeared to have changed drastically. In a speech to the National Aviation Club on March 16th 1979, he announced a U-turn on ASRS immunity, giving his main argument:

[O]ne part of the program causes me grave concern. This is the cloak of immunity that a violator can draw around himself simply by turning himself in to NASA […]. The guarantee of immunity, I’m afraid, can be too easily corrupted into a license to endanger hundreds of lives with no fear of punishment. I want to continue the immunity program - but I want it modified to end these abuses. Under these modifications, airmen will no longer be able to claim immunity for violations – witnessed by others – of safety regulations

So, what changed Bond’s mind?

This, as Charles Billings explained in a 1998 talk to the National Patient Safety Foundation, was due to a political oversight:

We (or, rather, I), in my naiveté made an assumption. I assumed that since the chief of the FAA was asking for [the ASRS], the FAA wanted it. This was a bad mistake.

Over the past 21 years, the NASA Aviation Safety Reporting System has had the support of virtually everybody in the United States aviation community to a greater or lesser extent with one notable exception. The exception is the people under the FAA Administrator—and there are roughly 22,000 of them—whom he did not consult before he came to NASA and asked for an incident reporting system. We do well to remember that a primary issue is who may be hurt by reporting. This is especially of concern where use of immunity (and, originally in the ASRS, transactional immunity as well) is a prominent feature.

What was the largest segment of the FAA aside from the air traffic control system? Divisions involving regulations and enforcement. Whose enmity did we earn the day this thing was announced? Those who had to make it work in the community. That is the worst mistake I made in 40 years.

Due to the pace at which James Dow set up the ASRP and subsequent ASRS in 1975, regional FAA officials were not consulted. After implementation of the program, the morale of regional FAA personnel was negatively affected by the waiver of disciplinary action because more of their efforts to conduct investigations ended up fruitless. Efforts to explain why the immunity provision was important were few and far between and largely ineffective. It seems as if discontent slowly built until the autumn of 1978, when regional FAA directors expressed to Bond that the “Aviation Safety Reporting Program had not provided any significant, useful data not already known to us through FAA programs”.27

Political regional pressure, as well as pressure to show the FAA had safety under control after the 1978 deregulation act, likely led to the reversal of the opinion of Langhorne Bond.

In Bond’s proposed modification, the anonymity provision provided to reporters would be strengthened, while the blanket transactional immunity – the waiver of disciplinary action – was to be removed entirely. If the FAA found incriminating evidence of a disciplinary infraction from sources other than an ASRS report, the fact that a report had been submitted would not mean the investigation was cancelled – it would carry on.28

The response from the aviation community was overwhelmingly negative. In short, they had been burned by the FAA too many times before. The fact the ASRS could be abused was acknowledged by commentators, but the effect of the potential loss of reports would be far more detrimental to safety than any current abuse. Their view can be summed up by Frank Munley29:

Unlike Mr. Bond, I’m firmly convinced that the problem of human error is better tackled by a scientific investigation of the true causes of hazardous situations, as opposed to a punitive approach which ignores the intent of the violator.

Rather than a win or a loss for either the FAA or the aviation community, the battle for immunity ended in a draw. After an arguably unimpressive performance at the April 1979 committee hearing,30 Bond decided against returning for a follow-up hearing and instead worked with the ASRS subcommittee to reach a compromise. Instead of a full loss of transactional immunity, some would be maintained, while the strengthened anonymity provisions remained.31

Ironically, the weaker transactional immunity provisions seemed to improve the quality of reporting. Before the reduced immunity, only one reporter needed to send in a report for everyone involved in an incident to have immunity. With the new provisions, every person had to send in a report to gain individual immunity. The ASRS therefore received different perspectives from the same incident, enhancing the potential for analysis.

Since the late 1970s, the procedure of the Aviation Safety Reporting System has pretty much remained the same. It has become a mainstay of safety prevention in US Aviation, with over 2 million reports submitted since its inception. In recent years, additional voluntary reporting programs have been introduced within the U.S. to cover more specific and nuanced areas of aviation.32

The success of the ASRS prompted other nations to set up their own voluntary reporting systems, such as the UK’s CHIRP in 1982, Brazil’s RCSV in 1997, and Korea’s KAIRS in 2000. The proliferation of reporting systems led to the International Confidential Aviation Safety Systems (ICASS) Group, which facilitates the sharing of information between nations, among other roles.33

Today, through voluntary reporting systems, the aviation industry has managed to reliably take advantage of the thin line between normality and catastrophe. In medicine, I don’t think we can make the same claim.

Though the history of patient safety reporting systems goes back much further than you might believe. In the age of Hawker Hurricanes and Supermarine Spitfires, just before they defended the British Isles from the Luftwaffe, one of medicine’s unsung heroes was already advocating for the use of reporting systems…


Come back next week for the final instalment of the thin line series: the surprising history of patient safety reporting systems.

If you have enjoyed the story of the not-so-smooth Mr. Bond and his unintentional mission to alienate the entire aviation community, please do drop a ❤️, restack 🔄 or share 🔗. It would be much appreciated!



1

The proximity of Flight 514 to Washington DC likely magnified its news appeal and political impact. Victims in the crash also included government staff (congressional aides, civil servants, etc.).

The handling of this crash by the FAA (alongside a DC-10 Turkish Airlines crash six months earlier & an Eastern Airlines DC-9 crash in September) was criticised in a House subcommittee report (the Staggers Report), various newspaper articles, and a T.V. documentary called “Crashes: The Illusion of Safety”.

See Chapters 6 & 7 of Troubled Passage by Edmund Preston for more details (free to read on Google Books)

2

There was some speculation that Butterfield’s dismissal was also due to his testimony against Richard Nixon in the Watergate Scandal. Supposedly, Ford was waiting for the right opportunity to fire him as retribution. (Page 159 - Troubled Passage)

Arguably, history has treated Butterfield unfairly when it comes to his time as FAA Administrator. He can be credited with making the FAA far more open to criticism, thus allowing James Dow to introduce the Aviation Safety Reporting Program with much greater ease than would otherwise have been the case.

3

A biography can be found on page 2 of the House of Representatives subcommittee hearing on Department of Transportation and related agencies appropriations for 1976 (source is here - click pdf download).

I’ve read quite a few congressional hearings now. Based on the transcript, I think Dow’s appearance was one of the most impressive, combining clear messaging with detailed explanations. Something committee member Silvio Conte agreed with:

I have been on this committee for a good many years and I must say that it was one of the finest and most comprehensive statements that we have ever had by any administrator

James Dow’s term ultimately lasted only 7 months, but his achievements were significant. The FAA was in a much better place by the end of his term than it was at the beginning (pg 199, Troubled Passage)

4

According to the policy document attached to the advisory circular (00-23) announcing the NMAC, reporters also had “use immunity”, though it is unclear to what extent. Use immunity means information in the submitted report cannot be used for any future prosecution of the reporter.

Within the FAA disciplinary process, use immunity was effectively covered by transactional immunity. The FAA would also withhold the identity of the people involved in the reported near miss from public disclosure, but only upon written request.

It is unclear what would happen if the report evidenced a criminal offence committed by the reporter and the reporter did not request withholding of identity - I’m not sure whether the report could then be used as evidence in a future prosecution.

What is certain is that reporters didn’t have full anonymity: the FAA did know the identity of each reporter.

5

I’m not entirely sure whether outside sources could be used to initiate punitive action. In previous program announcements, such as the CAB voluntary near miss program mentioned in Thin Line 1, the outside-evidence exceptions are made clear:

That where information of such violation of a Civil Air Regulation is obtained by other means, the fact that the violation was voluntarily reported will not preclude enforcement, remedial, or other disciplinary proceedings that are initiated on the basis of such other information.

Whereas in the NMAC announcement, there doesn’t appear to be a mention of any such exception:

To encourage persons involved in near midair collisions to make such reports, the Administrator will take no enforcement or other adverse action, remedial or disciplinary, against any person involved in a near midair collision that is reported to the FAA during the period of this program.

6

There is significant ambiguity as to whether the immunity was dropped then quickly reestablished, and what actually happened to the NMAC when immunity was revoked.

In the 1979 Congress hearings, Representative John Burton, chairman of the Subcommittee of the Committee on Government Operations, said:

They had immunity from 1968 to 1971... the statistics show that reporting went down. Then immunity was reestablished.

Officially, Advisory Circular 00-23D cancelled the program in 1975, but in the aforementioned 1979 Congress hearings, there was a suggestion the program was cancelled in 1972. John Winant, chairman of the NASA Advisory Subcommittee on the ASRS:

From 1968 to 1971, FAA [ran the NMAC]. After about a year of operation, the immunity provision was revoked. Report volume fell dramatically. The system was abandoned because input was so insignificant no useful results could be achieved.

According to John Baker of the Air Line Pilots Association (who had previously worked at the FAA) at the 1975 congressional hearings:

…[I]n 1967-68, there were fewer than 200 incidents reported. That was before the prior immunity program was set up. In 1968, there were 2,600, which is quite a dramatic increase in numbers, if that is the way you choose to construe those. Through 1971, it never got below 1,000. That was through the term of the incident reporting or the immunity program before. In 1972, there were fewer than 200, so we went from a constant run of an average of, say, 1,500 a year to 200 or so the minute the immunity program was removed. A decision was made when we took the immunity program off—and I was participating in that decision, although Mr. Dow and Mr. Flener weren’t—it wasn’t because we weren’t generating data. It was because we were getting so much data, we didn’t know what to do with it, and we couldn’t process it, and we were getting the same kind of thing over and over.

Baker later says:

….The numbers dropped from something in the magnitude of 2,000 or 3,000 a year to 200 a year after the immunity program, which had existed up until 1972, I believe, was rescinded.

James Dow, in the same hearing while in discussion with Senator Bayh:

Mr. Dow. It could be. We had at one time an immunity program that applied to only near misses. This was nationwide.
Senator Bayh. What year was this?
Mr. Dow: 1968.
Senator Bayh. 1968? To what year?
Mr. Dow. 1972. Like many programs in the early months of the program a lot of reports came in. Then it died down and went along at a low level. As I understand it, even though I wasn’t involved, when the word got out about the termination, the number of reports rose again. In any event, it terminated in 1972.

7

Reckless action wouldn’t have immunity, but careless would. Dow explains his interpretation of “careless” and “reckless” on pages 1025, 1026 & 1027 of the 1975 hearings. This was codified in Advisory Circular 00-46:

…if any person involved in a violation of Federal Aviation Regulations or FAA directives covered by this program files a timely written report of that violation to the FAA, the Administrator will waive the taking of disciplinary action against any person involved in that violation except with respect to reckless operations, criminal offenses, gross negligence, willful misconduct and accidents.

8

NASA was built on the earlier National Advisory Committee for Aeronautics (NACA), which was formed in 1915 and solely undertook aeronautical research.

9

From the 2009 RAeS Stewart Memorial Lecture given by Billings:

In 1975, NASA was asked to assist the FAA in securing input from the pilot community regarding safety issues in civil aviation […]. NASA asked Ames what we might do to improve the flow of such safety-critical information and wanted the answer to that as soon as possible…. So with help from Cooper and Lauber, I devised this protocol over a weekend to respond to the FAA’s request. After a rapid trip through our management, we took the scheme to our headquarters liaison in Washington. We then presented it to every industry representative group we could find, and to the FAA, where it was very quickly accepted. So over a period of three weeks came into being what is still known as the NASA Aviation Safety Reporting System.

10

From the Memorandum of Agreement. A copy of this agreement can be found in the 1979 Aviation Safety Reporting System Congress hearings, page 196.

11

As I’ve read the literature about the setting up of the ASRS, I get the impression this was the most intense, rushed period in the lives of most of those involved. According to the 1986 NASA Technical Document about the ASRS, the early project operating circumstances forced “an uncomfortably rapid pace”.

The team at Ames was, however, able to rely heavily on the knowledge and ideas of the previous attempted programs mentioned in this series, including the system it was replacing (the ASRP), as well as personal involvement in the development of the United Airlines reporting system (NASA and United worked together to develop that program - if confused about what the United program was, see the previous Thin Line article about the United near miss before the Flight 514 crash).

Nevertheless, six months was not a lot of time to build the infrastructure for a national reporting system.

12

As noted in the NASA Reference Publication about the ASRS:

Although the system’s operational plan had been thought out in strategic terms (flow diagrams, etc.), no detailed work had been done on what, exactly, one did with an ASRS report that had been received, opened and was out on one’s desk

13

NASA actually asked a contractor to handle the data processing, as allowed by the Memorandum. This flow chart was subsequently replaced by the following one, by Battelle. Given its complexity and the poor quality of the scan, I thought I would stick to the preliminary program definition.

14

The developers reviewed the layouts of reporting forms from predecessor programs and had psychologists help design the form. They opted for a simple approach so the form would be as easy as possible to fill in, while still capturing the information required for analysis.

Comparing the old ASRP form with the new ASRS one:

15

The reports were reviewed by attorneys. Criminal actions would be referred to the Department of Justice, and accidents to the NTSB.

Sometimes the report would contain information about unsafe conditions that were time-critical (high probability of accident or injury in the near future). This report would be flagged and the FAA immediately alerted. The FAA would then send out an alert bulletin to notify the aviation community as soon as possible.

16

Further, analysts removed secondary information that could lead to identification. They also added information from manuals, charts, etc., to improve the quality of the data in the reports, standardise certain attributional information, and append descriptive and diagnostic terms.

17

From the NASA Reference Publication:

The information callback is of crucial importance in the ASRS program for two reasons: first, it is a powerful means for enhancing the quality of the information in the database; second, it provides direct communication between the program and its reporters, and is therefore an excellent means of building program awareness and appreciation among the members of the reporting community.

18

Fixed fields are highly structured, limited-option text entries (usually multiple choice or one- to two-word options), whereas free text is a box where the analyst can write in paragraphs.

See Appendix C of the NASA Technical Report for examples.

19

Given the consequences to the functioning of the ASRS if anything were to happen to this one diagnostician, more diagnosticians were eventually brought in and trained.

20

Formed of representatives from the aviation industry (e.g. pilot associations), consumers, and government agencies (e.g. FAA & NASA).

See Appendix K, Table K2 of the Reference publication for subcommittee members

21

From the CALLBACK 20th anniversary special

22

As well as being a LIFE Magazine photographer in a past life.

23

A waiver of disciplinary action (transactional immunity) was granted to all participants of an incident under the original memorandum between NASA and the FAA if:

  1. A written report had been completed and delivered to NASA within 5 days of the incident (or a notification provided within 5 days, then a complete report within 15)

  2. The incident was not an accident and did not involve a criminal offence

  3. Unlike under the ASRP, reports involving reckless operation, gross negligence and/or willful misconduct were somewhat protected (due to “use immunity”), unless the FAA received this information from other sources, in which case disciplinary action could be taken against the reporter.

Even if a report was not timely filed, a waiver of disciplinary action would still apply if the FAA did not ask NASA whether a report had been filed* within 45 days.

*Note: not the contents of the report, just whether it had been filed.

24

Even if nothing came of the report, many reporters commented that the act of filling out the form became a learning experience. It helped them reflect on what went wrong and why it went wrong.

25

Chairman of the ASRS advisory subcommittee.

26

Sources and Full Quotes:

A) November 1977 committee: Federal Aviation Administration Operations Related to Safety and Procurement Management: Hearings Before a Subcommittee of the Committee on Government Operations, House of Representatives, Ninety-fifth Congress, First Session, November 28 and 29, 1977

  1. Bond: “We welcome and encourage diversity of opinion since it provides us with vital input with which to develop the balanced safety programs so necessary to the maintenance of a healthy air transportation industry. As I will discuss later on, I classify the NASA analysis of the aviation safety reporting system as a very healthy, constructive and professional effort to define problems so that we can fine tune the system.” pg 248

  2. Bond: “Let me conclude, Mr. Chairman, by saying that the joint NASA-FAA aviation safety reporting system is one of the management tools we use in taking a broad look at our system. Although the data is somewhat coarse, and I think it must always be since it is confidential, and a number of assumptions have to be made to put the data into context, we still have found the reporting system to be a helpful mechanism for assimilating as much information as possible about how our air traffic system is working.” pg 252

  3. Belanger (Director of Air Traffic Service for the FAA): “I think you are on track there. It really was apples and oranges when you have an immunity program and a nonimmunity program so that-however, when you take the quarterly report of NASA on the same airspace, if my computation was correct, there were 22 near misses reported in TCA’s and for the same period in 1968 the report was 185. So it is relatively close. I am encouraged to see with an immunity reporting program at NASA that it appears we are getting free reporting.” pg 252

B) Letter to Robert Frosch: Original Letter not found, taken from extracts in NASA Reference Publication 1114, The Development of the NASA Aviation Safety Reporting System, November 1986:

  1. “In a reply dated March 8, 1978, to a letter from NASA Administrator Robert Frosch, Langhorne Bond himself confirmed his earlier approval. ‘ASRS,’ wrote Bond, ‘is providing needed and valuable insight. I am confident that an increased FAA familiarity with the ASRS database will further substantiate the merit of the program....’ In the same letter Bond stated, ‘The FAA is not contemplating a change of the provision for waiver of disciplinary action as set forth in AC 00-46A.’” pg 11

27

Some regional offices went as far as to advocate for the dismissal of the program in its entirety.

William E. Morgan of the Eastern FAA regional office, in a letter to the central office, stated:

The need for safety analysis in respect to accidents, incidents and system deficiencies is quite clear. NASA’s function, as separated from those of the FAA and NTSB in the field of safety analysis, is less apparent. It is our recommendation that the program be terminated at the earliest possible date.

28

Technically, this isn’t entirely true. Bond proposed a 90-day rule: if the FAA had not queried the ASRS within 90 days as to whether it had received a report, the reporter would then have immunity (as long as the reporter filed their report within 5 days of the incident occurring). Bond likened this to a “statute of limitations”.

This procedure was quite confusing (I’m still not sure I’ve got my head around it!), which was one of the worries. Another was that NASA would no longer be able to deidentify a report within 24-48 hours; they would have to wait the full 90 days.

29

A safety consultant for the Aviation Consumer Action Project. Quote taken from Eisenbraun (1980).

30

These two exchanges between Representatives David Evans, John Burton and FAA Administrator Bond sum up his performance:

Mr. Evans. Without having the limited immunity provision, why wasn’t the TWA crash in 1974 prevented? Why did it occur then, I wonder?
Evidently, similar situations, such as near-misses of that particular mountain out there on the approach to Dulles, had occurred prior to the TWA crash.
Why wasn’t that system deficiency, or whatever you want to call it, cleared up prior to that December 1974 crash then?
Mr. Bond. I am no expert on the TWA crash. The Safety Board’s report is a matter of record. I am at a loss to answer your question. I don’t know that the NASA reporting program would have cured it.
Mr. Burton. It was instituted for that reason.

And

Mr. Bond. I remember last night reading through one of the reports that NASA itself wrote on this program. They were so enthusiastic the reporter was on the details provided and the helpful hints that were put in through the reporting system, that it was NASA’s conclusion that it seemed to them that most of the people who reported were doing so for safety reasons and not to get a free ticket. If that is true, then that will clearly continue and anonymity will be the same thing.
Mr. Burton. No, it won’t. You are a heck of a poor judge of human nature, Mr. Bond. They reported to improve safety and not for a free ticket, which destroys your belief that they are walking around waving white envelopes at your inspectors; that they know they can do it without losing their livelihood. You just destroyed your own case. They are doing it for safety and not for a free ticket.
Mr. Bond. Then they will continue to do so.
Mr. Burton. Are you a betting man? [Laughter.]

31

If a violation was found, no disciplinary action would be taken if:

  1. It was inadvertent

  2. Not a criminal offence

  3. No previous conviction from the FAA

  4. A report was filed within 10 days of an incident

The differences between the old and new transactional immunity provisions were:

  1. Old version protected all participants of an incident as long as at least one of them filed a report. New version protected only the reporter. Each participant needed to file a report in order to gain immunity

  2. 10 days to file a report after incident (to keep immunity), increased from 5 days

  3. In the old version, acts of reckless operation or gross negligence were protected by use immunity (i.e. confidentiality), but not by transactional immunity (i.e. waiving punitive action). In the new version, as well as use immunity, transactional immunity applied so long as the action was inadvertent and not deliberate.

  4. In the old version, you could be protected by transactional immunity for any number of incidents. In the new, reporters were only allowed to use transactional immunity for one FAA infraction.

32

Such as the ASAP and ATSAP.

33

From the ASRS webpage, the main objectives of ICASS are:

  • To provide advice and assistance in the start up and operation of a confidential reporting system.

  • To facilitate the exchange of safety related information between independent confidential aviation reporting systems.

  • To identify solutions to common problems in the operation of such systems.

]]>
The thin line arrives at Washington
https://psychiatricmultiverse.substack.com/p/thin-line-arrives-washington
Thu, 12 Mar 2026 18:02:08 GMT
Trees shorn by TWA Flight 514 just before it crashed into Mount Weather, Virginia; view looking west from the road. Photo taken one year after the crash (December 1975). Source: C. Brown, Wikicommons

This is article 7 in the Psyverse #2 series (and article 3 in the Thin line series). For previous articles, see the compendium.

This piece will draw heavily from an excellent article, A Different Approach: The crash of TWA flight 514 by Admiral Cloudberg, and Chapter 7 of Delivering the Right Stuff by Andrew J. Dingee. I recommend reading both for more information.


On the 1st of December 1974, a plane crashed into a mountain. It was not the first one in history to do so, nor the last. In isolation, the crash of Trans World Airlines (TWA) flight 514, tragically killing all 92 people on board, holds no more significance than any other deadly plane crash. Why flight 514 is considered one of the most influential accidents in aviation safety history has little to do with the crash itself, but with a near miss six weeks earlier.

In October 1974,1 a plane nearly crashed into a mountain. Flight controllers cleared a United Airlines flight, 40 miles out at an altitude of 7,000 feet, for an instrument approach to Washington Dulles Airport without a minimum altitude.2

On the plan view of the Dulles “approach plate” (see Fig 1 below) three official flight paths - each with associated distance (indicated above path) and minimum altitude (below path) - converged to the Round Hill beacon point, or “initial approach fix”, 11 nautical miles (NM) out from the “final approach fix”. From Round Hill, a single flight path led to this “final approach fix” 6 NM out from Runway 12.3

On the profile view of the approach plate, on the other hand, only the final approach fix and the final 6 NM are displayed with a minimum altitude of 1800 feet. The flight path from Round Hill, with accompanying minimum altitude,4 to the final approach fix is not displayed. This discrepancy is what likely led to the United Airlines flight near miss.

Fig 1. Washington Dulles Approach plate for runway 12 taken from NTSB flight 514 accident report. Altitudes of mountains (dots/“stars”) in feet. Annotations by me in purple, green and blue. Based on similar annotations by Admiral Cloudberg
Fig 2. Simplified 1974 Flight paths into Washington Dulles showing the terrain. Taken from Open Street Maps. Black and yellow annotations by me.

When Air Traffic Control (ATC) cleared the United Airlines flight for an approach to land without a minimum altitude clearance, the pilot assumed they could descend to the minimum altitude of 1800 ft, clearly displayed on the profile view approach plate.5

Fortunately for the United crew, the weather was good and they could see that they were about to fly dangerously close to the peak of Mount Weather, so they adjusted accordingly.6 After landing, the pilots filled out an incident report and sent it to the United Airlines Event Review Committee (ERC). This procedure was part of a program United had put in place in January 1974, called the Flight Safety Awareness Program. Once the ERC had reviewed the report, a safety alert was sent out to all United pilots about the Dulles Runway 12 approach discrepancy. But only United pilots. Pilots employed by other airlines never saw this incident report – no federal system was in place to facilitate distribution.


The crash of Flight 514

One hour before Flight 514 crashed into a mountain, the pilots of an American Airlines flight made the same mistake that United’s pilots had made six weeks earlier. Seven miles out from Mount Weather, at 1800 ft, the American Airlines plane was heading straight towards the mountain. This time, the weather conditions were poor. The pilot decided to radio ATC at Dulles to ask what altitude they should be at. ATC told them to climb to 3,400 feet immediately. A crash was averted, for an hour.7

TWA Flight 514 took off from Port Columbus International Airport8 at 10:24 am, heading for Washington National Airport. But twelve minutes after take-off, the pilots were ordered to divert to Dulles airport - severe weather (high winds, snow and rain) had resulted in the closure of Washington National.9

Due to the diversion, the plane did not initially follow the official published flight path - instead intercepting the 300 degree (120 degree on the approach plate) radial 13km earlier than the plate’s initial approach fix at Round Hill.

Fig 3. Annotated version of Fig 1 by Admiral Cloudberg. Source: A Different Approach

At 11:04, 44 NM from Dulles Airport, ATC cleared flight 514 to approach runway 12, again without a minimum altitude. Just like the United and American Airlines flights, the pilot descended to 1800 ft, thinking the ATC had implicitly given him clearance and would point out if he was at the wrong altitude.10

While they were descending, the pilot had another look at the approach plate and noticed that “this dumb sheet says it’s thirty-four hundred [feet] to Round Hill — is our minimum altitude”. Due to confirmation bias, a discussion of this discrepancy was cut short before it could even begin.11 The entire crew also managed to miss the Mount Weather peak elevations of 1764 and 1930 feet printed on the plate - terrain they were flying directly over.12

Once they reached 1,800 feet, they encountered significant turbulence and downdrafts, causing frequent altitude oscillations. They received only two altitude warnings: the first, seven seconds before impact, and the second, two seconds before impact. It was too late for the pilots to take any corrective action, and the plane crashed into Mount Weather, killing everyone on board.13

Fig 4. Top: video of the flight path of Flight 514 during the final few minutes. Taken from the FAA page about the crash. Bottom: Google Maps first-person view of the direction Flight 514 was travelling, with a marker displaying the crash site. For position, note the Shenandoah River bottom right with small island seen in the video thumbnail (Top)

After the NTSB released their report of the Flight 514 crash, the FAA sprang into action and introduced what would become the Aviation Safety Reporting System or ASRS. The story of Flight 514 is often told as the turning point in the development of the ASRS, with the unique set of circumstances surrounding the crash being the key motivator. It is therefore interesting to note the nearly identical crash that occurred six years earlier.


Groundhog day

On the 25th of October 1968, a plane crashed into a mountain. It was not the first one in history to do so, nor the last. In isolation, the crash of Northeast Airlines Flight 946, tragically killing 32 of its 42 occupants, holds no more significance than any other deadly plane crash. Out of isolation, this remains the case. Why flight 946 is considered just another accident in aviation safety history has little to do with the crash itself (or the previous near miss), but with the continuing inaction of the FAA.

The runway 25 approach plate of Lebanon Municipal Airport, New Hampshire, from 1968 looks confusing, to say the least. Since the area had only one radio navigation station, without distance measuring equipment (i.e. just VOR, no DME),14 and the airport was nestled in amongst a very hilly region, the approach plate’s flight paths were all restricted to one single radial.

Fig 5. A very confusing approach plate for Lebanon Municipal Airport in 1968. Taken from NTSB report about crash of flight 946.

A flight at cruising altitude coming in from any direction was supposed to first head to the VOR (radar) station location, then head northeast (radial 066 degrees) for around 10 Nautical Miles. During this phase, flights could not descend below 4200 ft. This point – 10 NM from the VOR – can be thought of as the start of the final approach (see Fig 6, indicated by purple circle), but is not an official “fix”.

Flying north, a pilot was supposed to circle around in a clockwise direction – maintaining a minimum altitude of 4200 feet throughout – eventually ending up inbound, travelling towards the VOR station and Lebanon runway 25 (the 246 degree radial). The flight was then supposed to descend to a minimum altitude of 2800 feet. Only once past the VOR station could the flight descend further, to a minimum of 1800 feet, until the pilots could fly under Visual Flight Rules or “the time to fly to the airport had been flown”.15

Fig 6. Open Street Map of terrain to the northeast of Lebanon Municipal Airport. Simplified Annotation of the final approach (black arrows) added by me. Mountains are yellow triangles. VOR radar station location is red circle. All numbers are altitudes in feet. North points to the top of image.

The problem with this approach centres on the navigation system. A VOR station can only tell a pilot which radial the plane is on in relation to the station, and whether flying along that radial would take them towards (TO) or away from (FROM) the station. So, when coming in to land at runway 25, a pilot’s Course Deviation Indicator (CDI) would have a radial of 246 degrees selected, with “TO” on the display. As the plane passed over the VOR station, this “TO” would switch to “FROM”. There is a nice explanation of how this works from Clearflight.

Fig 7. TO-FROM indicator switch explanation as a plane flies over a radar station. Source: Clearflight
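The TO/FROM logic just described can be sketched in a few lines of code. This is a simplified model of my own: real instruments also show an ambiguous OFF indication in a narrow zone abeam the station, which is ignored here.

```python
def to_from_indication(aircraft_radial_deg: float, selected_course_deg: float) -> str:
    """Simplified VOR TO/FROM logic.

    aircraft_radial_deg: the radial the aircraft is on, i.e. the bearing
    from the station out to the aircraft.
    selected_course_deg: the course dialled into the instrument.
    """
    # Smallest angular difference between the aircraft's radial and the
    # selected course, folded into the range [0, 180].
    diff = abs(aircraft_radial_deg - selected_course_deg) % 360
    if diff > 180:
        diff = 360 - diff
    # Within 90 degrees of the selected course, the aircraft has already
    # passed the station relative to that course, so the course leads away
    # FROM it; otherwise the course still leads TO the station.
    return "FROM" if diff < 90 else "TO"

# Flight 946's approach: inbound on the 066 radial with 246 selected.
print(to_from_indication(66, 246))   # before passing the station: "TO"
print(to_from_indication(246, 246))  # after passing the station: "FROM"
```

Seen this way, any instrument glitch that momentarily flipped the indication to “FROM” would look exactly like passing the station - which is the trap the NTSB investigated.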

At some point before Flight 946 crashed, a Northeast Airlines flight was coming in to land at runway 25. It was overcast, with occasional breaks in cloud. The pilots saw the needle on their display jump and the CDI switch from “TO” to “FROM”. Thinking they had just passed the VOR station, they started to descend. Once they reached 2000 feet, the pilots noticed the CDI had switched back to “TO”. The captain then saw how close they were to the ground through one of the breaks in the cloud and immediately climbed back up to a safe altitude.

Once they had safely landed, they reported this incident to the local FAA maintenance technician, who checked the VOR radar station and found no irregularities. The pilot “did not, nor was he required to by [the FAA’s] current procedures … [to] report this occurrence to any central unit within [the FAA].”16

Flight 946 took off 47 minutes late from Boston. It was late in the evening, daylight was fading, and conditions were overcast with occasional breaks in cloud. Arriving in the vicinity of Lebanon, the crew told ATC that they were going to perform a “standard instrument approach”. For some reason, the crew decided not to perform the published instrument approach procedure.17

Flying due north-west, flight 946 attempted to take a shortcut, avoiding the initial VOR station approach starting point entirely. They tried to tack on to the very end of the approach procedure, turning anticlockwise onto the final approach path.

Fig 8. Flight path of flight 946 (purple) overlaid on simplified final approach procedure (black). Terrain map from Open Street Maps.

The flight then continuously descended, not stopping at the minimum altitude of 2800 feet, until they eventually crashed into the top of a rocky, heavily wooded mountain, 2237 feet above sea level.

In the subsequent investigation, the NTSB tested for possible interference that could affect a pilot’s CDI display. They found areas along the approach where “course roughness manifested itself on the Course Deviation Indicator” but could not recreate the switch from “TO” to “FROM”. The NTSB concluded that the probable cause of the crash was the premature descent based on navigational instrument indications that they were about to pass the VOR station.

Because an incident report from the previous near miss could have prevented the deadly crash of Flight 946, the NTSB believed that “the FAA should provide the leadership in developing and implementing an industrywide operational incident reporting system”. The FAA responded with:

In regard to the suggested industrywide operational incident reporting system, this is now being accomplished by our system of issuing telegraphic alerts and operations bulletins […]. We will conduct a complete review of current incident reporting procedures.18

On the 1st of December 1974, TWA Flight 514 crashed into a mountain. So the FAA quickly introduced an actual industrywide operational incident reporting system, at least six years too late.


The series within a series continues next week.

If you enjoyed today’s article, please consider gifting a heart ❤️, restacking 🔄 or sharing 🔗. It would be much appreciated.



Thanks for reading The Psychiatric Multiverse! Subscribe for free to receive new posts and support my work.


1

The only detailed info I could find on this particular flight comes from the book “Delivering the Right Stuff”, which says the near miss occurred in November 1974. However, the book does not cite many sources, and the NTSB report on the crash of Flight 514 clearly states that a near miss occurred six weeks earlier. So I am going with the NTSB’s account.

2

An instrument approach procedure is a set of predetermined manoeuvres that allow an aeroplane operating under Instrument Flight Rules (see part one of the thin line series for explanation on flight rules) to land.

Two radio navigation aids were available in the 1970s that would provide positional information: the Very high frequency Omnidirectional Range station (VOR) and Distance Measuring Equipment (DME). VOR tells the pilot the radial (or bearing) on which the plane lies relative to the station – an angle relative to north, but not a distance. DME tells the pilot the distance of the plane from the station. In combination, these two systems can pinpoint the location of the plane.

I believe Washington Dulles had just installed a VOR/DME system in 1974 (having previously been just VOR). Though I am not sure about this (there is a vague reference to improved radar in Delivering the Right Stuff).
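The radial-plus-distance fix described in this footnote can be sketched in a few lines of Python. This is a toy flat-earth approximation (real navigation must also account for magnetic variation, slant range, and Earth curvature), and the station at the origin is hypothetical:

```python
import math

def fix_position(station_x, station_y, radial_deg, dme_nm):
    """Locate an aircraft from a VOR radial and a DME distance.

    The radial is measured in degrees clockwise from north, pointing
    from the station towards the aircraft; the DME reading is treated
    as ground distance in nautical miles (a simplification).
    """
    theta = math.radians(radial_deg)
    east = station_x + dme_nm * math.sin(theta)   # east component
    north = station_y + dme_nm * math.cos(theta)  # north component
    return east, north

# A plane 6 NM out on the 120-degree radial of a station at the origin:
east, north = fix_position(0.0, 0.0, 120.0, 6.0)
# roughly 5.2 NM east and 3 NM south of the station
```

Either reading alone leaves the position ambiguous – a radial gives a line of possible positions, a DME distance gives a circle – but together they intersect at a single point.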

3

An “approach plate” is the colloquial name for an “instrument approach procedure chart”. Pilots use these plates as a reference when coming in to land for an instrument approach.

The initial approach fix is a reference point pilots use to know when the approach “begins” – when they start the initial segment procedure of the landing. Fixes in general are identified through an intersection of VOR radials, the location of a VOR station itself, a non-directional beacon, or a DME location designation (a distance from a DME station).

The 1974 approach plate for Washington Dulles has an initial approach fix designated by the Round Hill (I believe non-directional) beacon. The final approach fix (the point at which the final approach starts, with the aeroplane lined up with the runway and descending to land) was designated as 6 nautical miles from the DME on the 120-degree radial (in line with the runway).

4

Which would likely have started at 3000 ft at the Round Hill beacon.

5

Pilots were becoming accustomed to ATC’s warning them if they were at too low an altitude.

At this point in aviation history, the introduction of VOR/DME in combination with transponders and computer systems allowed Air Traffic Control (ATC) to fully ascertain a 3D picture of the location and speeds of flights (see here). According to Delivering the Right Stuff, this meant that ATC was becoming more involved in the process of take-off and landing.

Before these technological developments, it was the responsibility of the pilot to communicate altitude and position to the air traffic controller.

7

From Chapter 7 of Delivering the Right Stuff

9

From a November 2024 article in Northern Virginia Magazine.

10

According to the NTSB report (which the author originally read in Cloudberg’s article), in the aftermath, FAA witnesses stated that because the flight was not on an official flight path at this point (they crashed before they could reach Round Hill), the responsibility lay with the pilot to ensure a safe altitude and position. The counterposition was that Flight 514 was in a radar environment; therefore, the ATC had information on its altitude and position.

The same VFR vs IFR confusion (see part one) reared its ugly head again.

11

From the NTSB accident report:

The captain said, ‘You know, according to this dumb sheet it says thirty-four hundred to Round Hill is our minimum altitude.’ The flight engineer then asked where the captain saw that and the captain replied, ‘Well, here. Round Hill is eleven and a half DME.’ The first officer said, ‘Well, but - - -’ and the captain replied, ‘When he clears you, that means you can go to your - - -’ An unidentified voice said, ‘Initial approach,’ and another unidentified voice said, ‘Yeah!’

12

As they were not on an official flight path, in poor weather conditions, they may not have known their precise position until they turned on to the 300-degree radial (120 degrees on the approach plate).

13

A couple of extra details:

  1. Six hours after Flight 514 crashed, another plane almost suffered the same fate, due to the same altitude miscommunication between ATC and the pilots.

  2. One of the major developments to come out of the Flight 514 crash was the installation of ground proximity warning systems on all commercial aircraft.

15

From NTSB report on crash.

16

The information about the previous near miss comes from the 1969 hearings before the House of Representatives: Aviation Safety and Aircraft Piracy.

17

While not applicable here, there is a bias called “plan continuation bias”, colloquially known as “Get-there-itis” by pilots, which causes pilots to stick to a flight plan despite warnings from ATC or other indicators showing it would be more dangerous to stick to the plan than to change it. In the case of Flight 946, the pilots did the opposite: they changed the plan without telling ATC.

18

Page 151 of the 1969 House of Representatives hearing: Aviation Safety and Aircraft Piracy.

]]>
<![CDATA[Fear]]>https://psychiatricmultiverse.substack.com/p/fearhttps://psychiatricmultiverse.substack.com/p/fearThu, 05 Mar 2026 18:02:16 GMT
Source: Bobby Allen letters

This is article 6 in the Psyverse #2 series (and article 2 in the Thin line series). For previous articles, see the compendium.


On November 1st 1958,1 General Elwood Richard Quesada became the first Federal Aviation Agency (FAA) administrator at a time when air traffic control was “a dangerous hodgepodge of uncoordinated civil and military operations”.2 Quesada introduced a new program of cooperative military-civilian control of airspace, and instigated a “no nonsense” approach for civil aviation pilots, bringing them up to “military standards”:

[Quesada] sent his inspectors through a demanding Air Force checkout course in the KC-135 (the military version of the Boeing 707), and took the course himself… ‘Now,’ [Quesada] says, ‘the FAA inspectors can fly these jets better than the man they’re checking out.’3

The inspectors picked up on everything, with the FAA jumping on infractions with heavy fines. Quesada justified his actions by saying:

When you go to the ticket counter and buy a ticket…you don’t know who’s going to fly you, or anything about his training, or the airline’s equipment – nothing. The public acts in faith, faith in this system, and we’ll see to backing up that faith. I’m here to represent the public, and dammit, the public will be protected.4

Administrators and pilots reluctantly agreed that this cleanup campaign by Quesada was a good thing, but it came at a cost.

General Quesada. Source: WikiCommons

Animosity grew between pilots and the FAA. Pilots felt that the inspectors were overreaching, picking up on questionable minor indiscretions resulting in fines, instead of improving systemic problems.

For example, Quesada seemed to believe greater pilot discipline was the main answer to the near miss problem. On March 3rd, 1959, he sent a letter to Clarence Sayen, the President of the Air Line Pilots Association (ALPA), noting that, as a passenger, “all too often the pilot is in the cabin talking to passengers or engaging in endeavours not essential to his duties”. He drew attention to a regulation stating that all flight crew members should “remain at their respective stations when the aeroplane is taking off or landing, and while en route except when the absence of one such flight crew member is necessary in connection with his regular duties.”

Sayen, in response, pointed out the impracticalities of Quesada’s request. Under the regulations in force at the time, only the pilot was required to be qualified to operate the specific controls of the aircraft being flown.5 Further, it was the pilot’s responsibility to ensure the passenger cabin was safe,6 and considering some flights were 12 hours long “the sensible pilot gets out of his seat occasionally for exercise… to retain his alertness”. He noted that ALPA had been pushing for a requirement of specific aircraft copilot training for years.7

Clarence Sayen. Source: ALPA

Sayen noted that FAA inspectors were now timing the pilot’s visits to the passenger cabin, filing violations that were not consistent – the absence times ranged from 7 to 23 minutes. He accused the inspectors of turning Quesada’s letter into a rigid rule through a “childish Gestapo program”.8

The animosity had a particular impact on near miss reporting. According to Joseph Hartranft, president of the Aircraft Owners and Pilots Association (AOPA), when the transfer of safety regulation from the Civil Aeronautics Board (CAB) to the FAA began on 1 January 1959,9 near miss reports previously sent to the CAB were now heading to the FAA – without pilots knowing this had happened.10 When the pilots were eventually informed on March 9 1959, through a press release, it did not mention that pilots no longer had immunity, as that provision only rested with the CAB.

In the six-month period between January and July 1959, pilots were being charged by the FAA with violations based on their own voluntary reports. The pilots, understandably, were not very happy about this, and near miss report submissions dropped off dramatically.

When the CAB near miss program was shut down in July 1959, the FAA took on the responsibility of collecting and analysing near miss reports. Except almost none were now being submitted.11 Senator Engle, in the Review of the Federal Aviation Act Senate hearing in 1960, illustrated the situation:

The FAA invited pilots to report near misses. And so a lot of dumb country pilots like me – and I wasn’t one of them who made the report – reported a near miss; and found himself cited because he was at the wrong altitude or something… I noticed this magazine, the Pilot’s magazine, published by AOPA, just said quit sending in the reports; don’t report or you will get clobbered […]. This is the kind of an attitude that doesn’t contribute to air safety.12

Clarence Sayen asserted that the non-immunity “policy of the FAA is fallacious and should be reversed immediately.”13 It was not reversed.14 It took a new administrator of the FAA, Najeeb Halaby, to reignite a near miss reporting program when he took office on March 3rd, 1961.

Due to the damage in trust caused by the previous administration, Halaby found that pilots were unwilling to report to the FAA, even with the promise of immunity.15 Therefore, he asked a third party, the nonprofit non-governmental organisation called the Flight Safety Foundation (FSF), to run a near miss reporting program. It was called Project SCAN (System for Collection and Analysis of Near mid-air collision reports) and initially ran for six months starting on July 1st 1961, before being extended to a year, finally concluding on June 30th 1962.16

If a pilot had a near miss, they would send a form to the FSF.17 It was anonymous – even if the pilot signed their name, the form would be destroyed once relevant information had been taken from it. Given the Flight Safety Foundation did not have the power to issue punitive action and did not pass on identifying information about the pilot to the FAA, pilots had immunity. However, similar to the previous Civil Aeronautics Board program, if information about the incident came through other means, the voluntary form submission did not grant immunity.

The main difference between Project SCAN and previous programs was, according to a 1961 article in Aerospace Safety Magazine:

Where the analyzed information suggests the need for changes of any kind – changes in procedures, traffic patterns, even aircraft cockpit or instrument design – specific recommendations will be formulated and submitted to the FAA for action.… [T]his is where Project SCAN differs.… [Previous] programs have dealt solely with statistics, the ‘how manys’ instead of the ‘how comes.’ Project SCAN deals with the ‘how comes’ or the ‘whys’ as well as the ‘how manys’

Project SCAN received 2,577 reports during its year in operation and seemed to be popular. Some pilots even returned forms without a near miss, simply to state their approval of the program. Interestingly, the FAA also kept its own near miss reporting program, and pilots only turned in 549 reports to it over the same period – the anonymity and immunity provisions resulted in nearly five times as many reports.18

At the same time as Project SCAN, a small group of International Air Transport Association (IATA) airline operators decided to form an information exchange “experiment”.19 Safety information reported by pilots under one airline operator would be shared with all the others around the world. Arguably, this was the first international civil aviation incident reporting system.20

During most of the 1960s, however, a U.S. national voluntary near miss and/or incident reporting system seemed to once more become relegated to discussions in safety conferences.21

The main push for a national incident reporting system came from the CAB director of aviation safety, and later director of the National Transportation Safety Board (NTSB),22 Bobby R. Allen. He was particularly interested in the role of the computer in accident prevention. Instead of condensing incident reports down into simple, broadly defined categories (such as pilot error, mechanical deficiencies, inadequate maintenance, etc.), Allen argued for a “bulk look” at the data. He wanted to utilise the memory and data retrieval capabilities of the computer to do a detailed statistical analysis to unpick the complex causes of accidents. And to project potential future causes of accidents through analysis of past trends.23

First, however, Allen needed data to awaken the “sleeping giant” of computational analysis. He needed an incident reporting system. This was the subject of his 1966 talk at the 19th Annual International Air Safety Seminar in Madrid.24

Bobby R. Allen. Source: Wikicommons

In the talk, Bobby Allen highlighted how the International Air Transport Association information exchange processed incidents, quoting Boyd Ferris, a secretariat at IATA:

[W]e lump incidents and accidents together because the thin line dividing an accident involving fatalities or major damage to the aircraft from an incident in which everyone got off scot free is often nothing more than luck.

Allen then points out that:

The alarming thing is that we do not take advantage of our good fortune. Here we have a brush with disaster; a live crew and virtually intact aircraft ready to tell a story, and yet we never open the book […]. What is it, then, which stands in the way of communicating this incident information to the appropriate governmental agency for processing? Repeatedly, when this question is asked, one hears the reply – FEAR: fear of litigation; fear of regulation; fear of punitive action.25

In the late 1960s, Allen communicated with officials from the airline industry, as well as associations, to work out how to combat these fears so that an “information exchange program” could be set up.26 He found that the most stubborn fear was litigation.

Several lawyers tried to tackle the fear of litigation.27 Probably the foremost among them was a UK lawyer named Harold Caplan, who argued that when an accident occurred, lawsuits would almost certainly follow.28 Since incident reporting would reduce the number of accidents, the number of lawsuits would therefore decrease.29

Harold Caplan. Source: Royal Aeronautical Society

But fear is not rational. The worry for a lot of airline companies was that the information produced by incident reporting would be used against them in litigation. Caplan therefore proposed example legislation that could be used internationally to protect the exchange of safety information.30 Further, Robert Reed Gray built on an idea by Charles McErlean of United Air Lines, suggesting that liability without fault be imposed on air carriers. Litigation would only be used to decide how much to pay to survivors of victims of accidents – no blame would be asserted.31

According to the archive of Bobby Allen’s letters, significant progress was made towards an information exchange system as the 1960s were coming to a close.32 But Allen found he could not get the project over the line from proposal to implementation. He concluded:

To the question of “Why hasn’t a program been established” everyone immediately trots out the “6 symptoms of the Fear Syndrome” […]. But in the final analysis, I personally believe that the Fear Syndrome is used as an excuse for our failure to overcome the major problem of information exchange. That problem is nothing more than sitting down and writing specifications for the program.33

Allen nearly helped accomplish it. In August 1968, the Air Transport Association of America (ATA)34 and Aeronautical Radio, Incorporated (ARINC) held initial discussions about the establishment of an airline safety data bank to be used for an information exchange program. The ATA then reached out to Allen to try to have the NTSB as the custodian of the data bank, to keep the information safe. By October 1969, potential specifications and requirements for an information exchange program had been written up by ARINC and submitted to the ATA.

On December 3rd 1969, the ATA safety committee met to discuss the ARINC report. Allen’s invitation to the meeting was withdrawn over fears that the presence of a government representative would inhibit a frank exchange of industry views. Subsequently, the ATA decided they:

would not be willing to participate at this time […]. However, the airlines recognize the desirability of a program of accident/incident data analysis and exchange in the airline industry.35

ARINC recommended the ATA continue with an alternative program where the ATA would “select a non-government organization to function as data custodian”. This never came to pass. Bobby Allen died on November 17th 1972. Despite all his efforts, he never saw the creation of an incident reporting system.

For fear to be overcome, the thin line between incident and accident had to visit Washington DC, where legislators could see the consequences of their inaction with their own eyes.


The series within a series continues next Thursday.

If you enjoyed today’s article please👻 AHH! A ghost!! … … Okay, I think the coast is clear… If you 🧟 AHHHH, gosh darn it! … Right, I’m boarding up my house🏚️ .

If you enjoyed today’s article, please consider dropping a heart❤️. If you think someone else would be interested in some aviation history, please do restack 🔄 or share 🔗. It would be much appreciated!


Thanks for reading The Psychiatric Multiverse! Subscribe for free to receive new posts and support my work.

Share

1

Source here

2

Page 192 of the Review of the Federal Aviation Act.

5

The copilot did not have to be qualified to fly the specific type of aircraft, and the third person in the cockpit did not have to be qualified to fly at all.

Flying an aeroplane is not like driving a car. You cannot just jump between types of aeroplanes and expect to master the controls. Each aeroplane type (e.g. Boeing 737 or Airbus A380) has a different layout of controls and different characteristics (e.g. number of engines and placement, shape of wings, size of aeroplane, etc.)

6

This was pre-9/11, when pilots were able to visit the passenger cabin.

7

Page 149 of the Review of the Federal Aviation Act. In the letter, Sayen said:

An occasional trip by the pilot in command to the passenger cabin to carry out his statutory duties is the least of our worries. We have major air safety problems in this entire crew qualification and training area, in the air traffic control area, in the inadequacy of our airports, in providing the pilot in command with adequate tools with which to work including airborne radar, in insuring that the carriers’ training programs are adequate, and in many other major safety areas.

8

Quesada, in his response, wrote an incredibly long letter. Well, more like a rant really (in my opinion). It is the type of thing you write to your ex, which you know you really shouldn't, but man, you have feelings you need to get out! Then, when you re-read the letter years later, you feel embarrassed while thinking, “Why on earth did I write that?” Note: definitely not something the author did…

Sayen, in a statement, called it “a sad letter”, accusing Quesada of missing the point of the original reply:

The Administrator has completely missed the point of the safety problems raised by the airline pilots. He has attempted to convey to the public that the airline pilot is conducting himself irresponsibly by filing a complaint with the administrator. If Mr. Quesada is to be so thin-skinned, he has rough days ahead.

During the FAA hearing, Senator Engle, in reference to a different letter, also had some advice for Quesada:

I would like to suggest, General, that if you had been in public as long as some of us have, a mean editorial now and then is in due course. You ought to expect it, and ought not to be too thinskinned about it.

9

Note: this is before the CAB near miss program had been shut down in July 1959.

10

Page 320 of the Review of the Federal Aviation Act.

11

According to Najeeb Halaby, the second administrator of the FAA.

See Page 718 of the Survey of Selected Activities (Part 7 - Efficiency and Economy in the Federal Aviation Agency) House of Representatives Hearing in 1962.

12

The conversation between Senator Engle and Quesada is well worth a read. You can almost feel the tension when Quesada is asking for examples of times the FAA have been abusive, and Engle, quite reasonably, does not give it (so as not to incriminate pilots).

13

Page 374 of the Review of the Federal Aviation Act.

14

Though there did seem to be a survey in 1960 conducted by the United States Air Force: USAFE’s Near Collision Survey. Source: Page 12, Aerospace Safety magazine, December 1961 edition

15

Page 719 of the Survey of Selected Activities (Part 7 - Efficiency and Economy in the Federal Aviation Agency) House of Representatives Hearing in 1962.

Halaby is talking about safety complaints, which I am assuming encompasses near misses. Halaby notes:

I will try very hard to convince people that they get a fair and fully respectful attention and action on their complaints, but it is not in human nature to report to the man who has the responsibility for policing infractions of the rule.

16

According to two articles. Aerospace Safety magazine, in September 1961, published an article on page 7 about Project SCAN (Do not look at the next page in this magazine), saying it was going to last six months. On page 13 of the November 1962 edition of the FAA’s Aviation News, an article was published noting the program went on for a year.

17

Here is what the form looked like:

18

Source for entire paragraph: Page 2, July 1961 edition and Page 13, November 1962 edition of FAA Aviation News.

The final FSF report on project SCAN identified five “major problem areas”: high altitude military and air carrier operations, single-engine jet operating practices, Visual Flight Rule operations, terminal area procedures and radar limitations. The program also brought to light specific issues, such as pilots not adhering to landing patterns. The FAA then worked on finding solutions to these problems. Both the FAA and project SCAN showed that most incidents concerned general aviation (private or recreational transport).

19

[1] H. Caplan, ‘Need for the Exchange of Safety Information’, J. Air L. & Com., vol. 34, p. 386, 1968.

20

According to Gordon Burge [2], the Australians had a compulsory reporting system dating back to the 1940s. Though I don’t think it was international, and I don’t know much about the details of the program.

In any case, the IATA experiment was successful. It appears this experiment evolved into the current Incident Data Exchange.

[2] H. K. Gordon-Burge, ‘Practical and Legal Problems of Disseminating Air Safety Information. Part 1. Practical Problems’, The Aeronautical Journal, vol. 71, no. 683, pp. 773–781, Nov. 1967, doi: 10.1017/S0001924000054105.

21

According to David Thomas, a deputy administrator at the FAA, in a 1967 House of Representatives hearing, a reporting system did still seem to be in place with the FAA, but it offered neither immunity nor anonymity.

I am not sure why a national anonymous near miss reporting program following Project SCAN did not occur. It requires more research on my part.

22

See the history of the NTSB here

23

[3] B.R. Allen, National Transportation Safety Board Bureau of Aviation Safety, 34 J. AIR L. & COM. 399 (1968) https://scholar.smu.edu/jalc/vol34/iss3/10

25

Bobby Allen actually laid out five fears in letters he wrote to officials and members of airlines (pg 44): Fear of litigation, Fear of Punitive Action, Fear of Regulatory action, Fear of Competition, Fear of Adverse Publicity (He added Fear of a Biased Data Custodian later, see pg 40/41, but I can’t find any explanation of what he means)

26

See letters mentioned in the previous footnote.

27

[1] H. Caplan, ‘Need for the Exchange of Safety Information’, J. Air L. & Com., vol. 34, p. 386, 1968.

[2] H. K. Gordon-Burge, ‘Practical and Legal Problems of Disseminating Air Safety Information. Part 1. Practical Problems’, The Aeronautical Journal, vol. 71, no. 683, pp. 773–781, Nov. 1967, doi: 10.1017/S0001924000054105.

[3] H. Caplan, ‘Practical and Legal Problems of Disseminating Air Safety Information. Part 2. Possible Solutions to some of the Problems’, The Aeronautical Journal, vol. 71, no. 683, pp. 781–786, Nov. 1967, doi: 10.1017/S0001924000054117.

[4] R. R. Gray, ‘The Attorney’s Role in Accident Investigation’, J. Air L. & Com., vol. 34, p. 417, 1968.

28

Ibid (Source [3])

29

Bobby Allen, in a letter to Gordon Maxwell (Vice President Ground and fleet operations Pan American World Airlines), writes:

Although I personally believe the fear of litigation is more notional than substantive, let’s assume for a moment that there is some risk involved when the [information exchange] program is initially started. It will be far better to pay short term increased risk in 1969 dollars, rather than 1979 dollars. Think how much better it would be if we had decided to pay for this program back in 1958.

As we will learn, they paid 1979 dollars.

30

The SAFEX system - Source [3] (footnote 27)

31

Source [4] (footnote 27)

32

Bobby Allen proposed that the NTSB could serve as the data repository in an information exchange program, ensuring that data was handled confidentially. Page 28 of letters.

33

From pg 40/41 of Bobby Allen letters. “major problem” underlined in original text, bold added by me.

34

A trade association representing commercial airlines - now called Airlines for America (A4A)

]]>
<![CDATA[When planes (almost) collide]]>https://psychiatricmultiverse.substack.com/p/when-planes-almost-collidehttps://psychiatricmultiverse.substack.com/p/when-planes-almost-collideThu, 26 Feb 2026 18:02:16 GMT
Source: Review of the Federal Aviation Act

Today, we are indulging in an Inception (or Psyception), a series within a series. This is article five in the Psyverse #2 series (for previous articles, see compendium), and article one in the Thin line series.

The Psychiatric Multiverse is centred around taking an idea from another field and applying it to psychiatry. For this Psyverse series on incident reporting systems, the “other field” is aviation.

Over the next few weeks, we will explore the history of aviation incident reporting systems. As well as providing crucial lessons in our aim to build an effective reporting system in psychiatry, I think it is a story that has yet to be told.

So, without further ado, let us go back to the 1950s and the beginning of the jet age…


The United States had a problem. Since the dawn of aviation, pilots had operated mostly on a “see and be seen”1 procedure to avoid air collisions – they would visually scan for other aircraft and ensure their own aircraft was as visible as possible.2

This was okay during the 1930s and 40s, when aircraft in the sky were sparse and speeds were usually slow enough for pilots to take evasive action if necessary. However, by the early 50s, the number of planes in the sky was increasing rapidly, and the development of turboprop engines meant cruising speeds of some aircraft were approaching 250 miles per hour.3

Transport aircraft cruising speeds. Source: IPCC

Faster planes require more “separation”4 between other aircraft to ensure the risk of mid-air collisions is kept at a minimum. But in a congested airspace, especially around airports, this was quickly becoming unrealistic.5

The Air Line Pilots Association (ALPA) were well aware of the growing mid-air collision problem. Up until this point, problems were found and solved after collisions and the deeply unfortunate loss of life. But, talking to the Air Safety Board in a 1950 House of Representatives hearing, David Behncke, president of ALPA, noted:

Actual accidents are not the only fruitful source from which to determine the need for improvement in air safety. There are those about which the public never hears; about which you honorable gentleman seldom hear; indeed, about which no one except those very, very closely connected with the industry ever hear. These are the near accidents, the narrow squeaks, the close calls of high disaster potential. The independent Air Safety Board would find fertile ground indeed for investigation in a carefully compiled “close call” or “near accident” file. ALPA has such a file but the present air safety set-up is such that we are at a loss at times to know who should receive our reports.

During the early 1950s, near-accident (or near-miss) reporting seems to have remained a disparate practice of individual organisations running their own projects without an overarching organisation to send their reports to.6

Perhaps the closest to a nationwide scheme during this period was the “Anymouse” program introduced by the US Naval Air Force starting around 1954.7 Naval Air Force pilots were encouraged to submit anonymous reports to Naval Command whenever they nearly collided with either another aircraft or terrain. According to a 1954 article8 in Flying Safety Magazine:

[T]he idea evolves around the theory that a close shave with the reaper will probably teach a guy something. If he can pass on a bit of knowledge to the next man, then both will benefit.

In general, the military led the way in researching how to implement a near miss reporting program. As far back as the 1930s, it was observed that finding out from a pilot how and why an accident occurred was more important than punishing the pilot for any mistakes made leading up to the crash.9 A research paper by the United States Air Force in 195310 suggested that a near miss reporting system without identification of the pilot provides “considerably more information on personal behaviour” than one which has the potential for evaluation or disciplinary action. Hence the emphasis on anonymity in the Anymouse program.

In civil aviation, wider attention was given to the near miss problem through several safety meetings starting in 1953. Special attention was given to establishing a near miss reporting program at the 1954 meeting of airline chief pilots in Dallas, Texas. Meetings between disparate airline industry groups and the Civil Aeronautics Board (CAB) about setting up a reporting program continued through 1955, until, finally, on February 23rd 1956, the CAB announced Special Regulation 416 – “Voluntary Pilot Report of Near Midair (Near-Miss) Collision” – a national near miss reporting program.

Pilots were assured that they could submit reports, choosing to be anonymous, without fear of disciplinary action. Pilots had immunity. Up to a point. If information about an incident was obtained from other sources outside the voluntary report, the pilot in question could still face remedial action.

Despite this caveat, reports flooded in.11 In one year, 971 near miss reports were sent in, an average of 2.5 per day.12 Around 85% of the reports concerned near misses that were below 1,000 feet (around the height of the Eiffel Tower). In 256 of the 971, planes had narrowly avoided collision by less than 100 feet. To give some context, a Boeing 737-700 is about 100 feet in length:

Boeing 737-700. Source: SimplePlanes

According to Oscar Bakke, the safety director of the CAB, the reports confirmed that areas with higher traffic density saw more near misses. Interestingly, 98% of the near misses were reported with visibility in excess of three miles, and a growing proportion of near misses over time involved jet aeroplanes – suggesting that “see and be seen” was indeed becoming obsolete. A final observation was that approximately half of the near misses reported by civil pilots involved a military aircraft. The near miss reporting program gave evidence for greater use of “positive control” of aircraft.13

It was recognised in the program’s May 1958 report that the CAB’s objective in collecting near miss statistics had been achieved, and that continuing the program would provide no further benefit. Therefore, the CAB proposed a new incident reporting program, with the same anonymity and immunity provisions, that would extend well beyond the near miss reporting program. On August 7th 1958, under draft release 58-14, Oscar Bakke wrote:

The excellent response of pilots to the present program and the desirability of collecting broader information bearing upon aviation safety, leads to the belief that the program should be extended to include more than just the reporting of “near misses.” … [A]ny airman aware of in-flight, ground or procedural hazards [would be encouraged] to report them to the Board […]. The program under consideration would place less emphasis on the statistical aspects of the reports and direct attention primarily to the particular circumstances involved in each report.

But it was too late. The program could not be set up before the Federal Aviation Act of 1958 transferred the CAB’s regulatory function to the Federal Aviation Agency. Subsequently, on July 15th 1959, the voluntary near miss reporting program under special regulation SR-416A was shut down.

It took the United States 16 years to finally introduce an incident reporting program almost identical to Oscar Bakke’s 58-14 proposal. The reason? The usual culprit at the heart of any inaction: fear.


The inception-like “Thin line” series within a series continues next week.

Thanks for indulging this foray into aviation history. If you know anyone with an interest in aviation, please do share 🔗 the article with them!

If you think someone else needs an escape into the dreamscape of history, please do restack 🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄🔄…

A black spinning top on a white surface. Source: Tenor


1

It is an augmented version of see and avoid. The history of the see and be seen concept can be found here (which is well worth a read in my opinion).

2

The situation is a little more complicated than this, and I’ve found myself confused as to what the actual procedures were at the time. The following is my current understanding (any pilot readers out there, please do correct me!).

There are two sets of rules pilots abide by when flying a plane. Visual Flight Rules (VFR) and Instrument Flight Rules (IFR).

VFR is in place when there are Visual Meteorological Conditions (VMC). From the VMC wiki, it is “defined by: visibility, cloud ceilings (for takeoffs and landings), and cloud clearances.” Pilots are responsible for maintaining separation from other aircraft (see and be seen) and terrestrial obstructions like mountains. Pilots can fly along any route they please (within reason). I think this is mainly for general aviation nowadays.

IFR is in place when there are Instrument Meteorological Conditions (IMC): poor weather or poor visibility (e.g. flying at night). Pilots fly by reference to aircraft instruments and electronic signals (like radar). Flights must follow predetermined flight plans.

I think “corridors” were introduced in the 1950s, within which aircraft had to be under IFR between (or above) certain altitudes, irrespective of the weather. Quite what the altitudes and timelines were seems a bit of a mess, but I think it ended up as over 18,000 feet. The problem during the 1950s was that some aircraft, such as military ones, would be operating under VFR while civilian aircraft were operating under IFR, with only 1,000 feet of separation between these two corridors. Given the error in pilots’ altimeters at the time, it was plausible that the altimeters of a military and a civilian aircraft could display very different values while both were flying at the same altitude.

During the 1950s, there was a push towards more “positive control” or “positive separation” of aircraft. This is where Air Traffic Control (ATC) has the location data of every flight. Regardless of whether flights were under VFR or IFR, they would monitor every flight, and ensure through communication with pilots that each maintained separation from each other. Within controlled airspace (e.g. near airports), this was achievable. However, the radar coverage was insufficient for positive control of the entire country, and some planes did not have radar transponders at that time. The US needed to build the long-range radar infrastructure first.

It was all a bit of a mess, summed up by this quote from a pilot in the Gadsden Times on April 25th 1958:

Man, we’re using Wright brothers rules and regulations and airways for supersonic planes.

Sources:

  1. For 1950s definition of VFR, and altitude corridor details, see pg 22 of Legislative History of the FAA Act

  2. For inaccuracy of Altimeters: pg 1266 Review of FAA Act

  3. For problems of radar and transponders for tracking in 1950s, see pg 375 Review of FAA Act

3

The jump in speeds around the end of the 1950s was due to the introduction of the jet engine. Once the Comet, the first jet-engine commercial aircraft, was introduced in 1954, the United States knew things were about to get significantly worse.

Note on the Comet: there is a myth that the explosive decompressions of Comet hulls were due to square passenger windows. However, the shape of passenger windows makes very little difference to the structural integrity of aeroplane hulls. Metal fatigue, and the fact that windows and emergency exits were riveted in rather than glued, were among the reasons the Comet hull failed.

4

Think of a sphere of air around an aircraft that no other aircraft can enter.

5

The problem involved both high-speed and low-speed collisions.

Increased congestion around airports, combined with the increased cognitive “load” on a pilot trying to undertake the complicated procedure of landing a plane (leaving pilots less aware of other aircraft) and out-of-date air traffic control tracking technology, meant an increased risk of collisions at slow speeds.

High up in the atmosphere, the high speeds of aircraft meant pilots were unable to react in time in the rare circumstance that they encountered an oncoming aeroplane. Captain Edward Boyce of Trans World Airlines put it pointedly in American Aviation magazine, Vol. 18, April 1955:

Dodging out of the path of present-day fast aircraft is just as hard and as practical to do as ducking out of the way of an oncoming bullet.

6

The Military Air Transport Service (MATS) and Strategic Air Command (SAC) seem to have had comprehensive reporting systems (see pg 28 of the April 1954 Flying Safety magazine & pg 19 of the May 1954 Flying Safety magazine).

The Air Transport Association had conducted an independent survey of near-collision incidents. (See pg 10 of August 1961 Approach magazine, Volume 7, Number 2.)

7

The name came about when a safety suggestion box that was supposed to be labelled “anonymous” was mislabelled “anymouse”, and the name stuck.

I think the program might have also incorporated the Marine Corps too.

I believe the Anymouse program is still running today (please correct me if I am wrong); however, it now covers any safety incident, not just near misses.

8

Reading old magazines, one cannot ignore the rampant sexism. Now and then, random images of scantily clad women pop up. Weirdly, sometimes with little to no relevance to the information on the page (do not look at page 29 of the 1954 article linked to this footnote).

A photo in the August 1961 edition of the Military Air Transport Service flyer gives, I think, a hint at what women thought about the sexism:

9

They didn’t use the word “error” because of the negative connotations (the blame) associated with the word. Instead, “human factors” was used.

Note: I am not entirely sure of the validity of this footnote and the sentence it refers to. I learned this information from a 1967 article by Gordon-Burge, who used a quote from an unnamed American Air Force lawyer. Considering the other evidence of how far along the military was when it came to near miss reporting (i.e. Anymouse), I am inclined to believe Gordon-Burge’s claims.

[1] H. K. Gordon-Burge, ‘Part 1. Practical Problems’, The Aeronautical Journal, vol. 71, no. 683, pp. 773–781, Nov. 1967, doi: 10.1017/S0001924000054105.

10

One of the great joys of literature review is that you occasionally stumble upon a gem of a source that rewrites your previous assumptions. This research paper, Human Factors in Near Accidents by Vasilas et al., showed me that near miss reporting systems were being researched in detail 20 years earlier than I had thought when I first started investigating this subject.

11

There was no retrospective limit, so a lot of the initial reports were about incidents that occurred up to 10 years before the start of the program!

12

From Oscar Bakke’s statement to the 1958 Senate hearings on the Federal Aviation Act. Depending on the source, there are contradictions in the statistics. In the Senate hearings of 1961 on Air safety, the number of reports for the first year is quoted as 3,000.

13

See Footnote 2 for the definition of “positive control”.

]]>
<![CDATA[The error of our way]]>https://psychiatricmultiverse.substack.com/p/the-error-of-our-wayhttps://psychiatricmultiverse.substack.com/p/the-error-of-our-wayThu, 19 Feb 2026 18:01:42 GMTThis is the fourth article in our Psyverse #2 series. To read previous articles, visit the compendium.


We need to address the elephant in the room. Not only do patient safety reporting systems already exist, but they have existed for at least a quarter of a century.1 While it is tempting to declare “case closed!” on an excellent example of a system from fields like aviation or nuclear safety being implemented within psychiatry2 and to immediately high-tail it out of here, the current situation is bedevilled by nuance. So much nuance that it has been difficult to wrap my head around it. And I love wrapping my head around things.3

Yes, patient safety reporting systems exist, but their effectiveness has been recently questioned by multiple authors.4 Despite significant investment in reporting systems, large numbers of patients continue to be harmed each year. It is still the case that for most areas of medicine, the safest time to receive healthcare treatment is tomorrow. This contrasts with other fields like aviation and the nuclear industry, where reporting systems have been a highly successful tool for helping improve safety through system change.5

Many reasons for the underwhelming performance of patient safety reporting systems have been suggested, along with proposed solutions.6 We are going to focus on their characteristics. Arguably, while patient safety incident reporting systems are widespread, few can claim to be integrated, (inter-)national,7 non-punitive, confidential, independent and inclusive. Why are these six characteristics important? We shall find out.8

Firstly, however, it would probably be a good idea to explain what a patient safety reporting system is.

The aim of a reporting system is to catch incidents, which, according to the World Health Organisation (WHO), broadly fall into three categories: near miss, no harm incident and harmful incident.9 Harmful incidents subsequently fall into two further categories: adverse events and adverse reactions, with “events” being preventable harm and “reactions” non-preventable harm. Within these broad categories, more specific classifications have been used and proposed.10

Source: WHO technical report and guidance on patient safety incident reporting systems (2020)
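For the programmers among you, the taxonomy above can be sketched as a tiny data structure. This is purely my own illustration (the class and member names are my invention, not an official WHO schema):

```python
from enum import Enum, auto

class IncidentCategory(Enum):
    """A rough sketch of the WHO incident categories (names are my own)."""
    NEAR_MISS = auto()         # error occurred but never reached the patient
    NO_HARM = auto()           # reached the patient, but no harm resulted
    ADVERSE_EVENT = auto()     # harmful incident that was preventable
    ADVERSE_REACTION = auto()  # harmful incident that was non-preventable

def is_harmful(category: IncidentCategory) -> bool:
    """Harmful incidents are the two bottom-level categories."""
    return category in (IncidentCategory.ADVERSE_EVENT,
                        IncidentCategory.ADVERSE_REACTION)
```

The point of the sketch is simply that “harmful” splits cleanly into preventable and non-preventable, while near misses and no harm incidents sit outside the harmful branch.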

When a healthcare professional (or indeed a patient) comes across an incident, whether through direct involvement or as a bystander, they fill out a form. Depending on the type of system, reporting an incident may be mandatory or voluntary.

Mandatory systems are primarily used to hold healthcare providers accountable. They focus on the severe incidents, like sentinel events, otherwise known as “Never Events”. An example of a severe incident could be the amputation of the wrong leg during surgery or an incorrect medication given to a patient, leading to their death.11

Voluntary systems are primarily used for learning. These generally focus on near misses or very minimal patient harm. One example would be a prescription error (wrong dose, wrong drug, etc.) that is caught by a pharmacist before it reaches the patient. Voluntary incident reports submitted by healthcare reporters are indicators of potential systemic (procedural, technological, etc.) changes that need to be made.12

In either case, the incident form that a healthcare professional submits will generally have a section which classifies the incident (patient information, incident time/type/severity,13 etc.) to allow sorting and trend analysis. It will also have a free text section for the reporter to provide the story of what happened. The WHO proposed a Minimal Information model to help healthcare providers develop their own incident reporting form.

Source: WHO technical report and guidance on patient safety incident reporting systems (2020)
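To make the two-part structure of a report concrete, here is a minimal sketch loosely inspired by the idea above: some classification fields for sorting and trend analysis, plus a free text narrative. The field names are my own choices, not the WHO Minimal Information model itself:

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    """Hypothetical minimal incident report (field names are my own)."""
    # Classification section: enables sorting and trend analysis
    patient_info: str   # e.g. age band and sex, no identifying details
    incident_time: str
    incident_type: str  # e.g. "medication", "fall"
    severity: str       # e.g. "near miss", "no harm", "harmful"
    # Free text section: the reporter's story of what happened
    free_text: str
```

The split matters: the structured fields let an agency spot patterns across thousands of reports, while the free text carries the context an investigator needs for any individual case.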

Once completed, the form is sent (electronically nowadays) by the clinician or patient to an organisation that will sort and analyse the reports. If there are incident reports of interest, an investigative body could send a team of experts to review cases and investigate. After completion, different policies or procedures would be recommended in a report and implemented to change the latent conditions, thereby preventing future incidents from occurring (depending on the reporting system in question and the seriousness of the incident, a reprimand of the staff involved may also be issued).

Designing an effective patient safety reporting system is easier said than done. Out of the many characteristics and factors that need to be considered, arguably, the following are some of the most important.

Integrated

My initial intention for this Psyverse series was to focus exclusively on incident reporting systems in psychiatry. I found this an impossible task. Every time I wanted to make a point, the evidence and analysis leaked into other areas of patient safety. Whether that was the type of culture that encouraged reporting, or how investigations into incidents needed to be guided by an incident reporting system. Patient safety is an interconnected web of incredibly complex systems that influence, and are influenced by, each other.14

In industries such as aviation, accident investigation has been developing since at least the 1930s, and reporting systems since the 1950s.15 An entire safety management system has been slowly built up over the course of nearly a century. In contrast, healthcare systems have attempted to translate safety knowledge from other sectors into a fully functional patient safety system within the space of a couple of decades.

The result is that patient safety incident reporting is not properly integrated within a wider patient safety system. Too often, reporting systems have been used as repositories to gather statistics and keep track of the number and types of incidents, rather than as learning tools.16 An incident report is only the start of the learning process; as Vincent (2004) put it, it is only a “window on the system”:

Incident reports by themselves…tell you comparatively little about causes and prevention…. Reports are often brief and fragmented; they are not easily classified or pigeon holed. Making sense of them requires clinical expertise and a good understanding of the task, the context, and the many factors that may contribute to an adverse outcome.

An effective patient safety reporting system requires substantive analytical, investigative and regulatory systems to lean on, with each system intricately connected to the others, and to the wider healthcare system, through procedure and legislation. Patient safety also needs to be promoted and communicated to build a safety culture.

In other words, the development of Safety Management Systems (SMSs) is desperately needed.17 Only then can the learnings, discovered through incident reports and investigated by experts, be implemented into principles, processes and practices, then subsequently followed by healthcare staff.

Source: WHO technical report and guidance on patient safety incident reporting systems (2020)

(Inter-)National

While there is value in a system that picks up on a specific local incident, then develops a specific local solution, this is not where the real value of a reporting system lies. Reporting systems are as much about preventing incidents occurring over the entire system as they are about finding specific cases.

Say we have an incident where two patients with the same first name (say Jacob M. and Jacob Y.) are sitting next to each other, awaiting a routine blood transfusion. A unit of red blood cells is taken from the refrigerator, and during the setup of the transfusion for Jacob M., it is realised the blood cells are actually for the other patient, Jacob Y.18

If we don’t have a national reporting system, we are relying on every single local blood transfusion provider to pick up on this type of name mix-up, report it, have a team investigate it, and then enact a correct latent condition change that would fix the problem (rather than a change that would only half-fix it, such as guidance or training). Not only is this horrendously inefficient and expensive, but it is also a lot of work for each local provider just to keep patients safe.

A national reporting system’s role is to find commonly occurring problems as they start arising, have a national investigative body examine them, and then disseminate the solution to all local healthcare providers across the country, meaning that for most providers, the incident is prevented from occurring in the first place.

While the UK has a national patient reporting system,19 many countries around the world, including the United States, do not.20 Given the number of countries with fragmented local reporting systems that do not permit the sharing of information between them, how many times has the wheel been reinvented? Perhaps more importantly, how many different shapes has the wheel been reinvented into?

While I have focused on national systems, there is scope for an international sharing of incident reporting. Healthcare is a global phenomenon, and there will be problems generic enough to apply to many countries around the world. But more importantly, there are many countries with rudimentary reporting systems that could benefit from the learnings of countries with more advanced ones.

A final point is that the introduction of a national reporting system does not mean the shutting down of local ones. Firstly, if every incident in the country were reported to a national agency, it would be overwhelmed quickly. Indeed, overreporting is a significant issue. Secondly, in many healthcare settings, solutions may need to be tailored to the individual characteristics of each healthcare provider (e.g. number of staff, layout of the hospital, number of beds, how busy the hospital is, etc.).

Non-Punitive

Lucian Leape, one of the experts at the forefront of pushing for increased patient safety awareness, once testified to the U.S. Congress:

The paradox [of punitive measures] is that the single greatest impediment to error prevention is that we punish people for making them.21

Keeping to the theme of apparent paradoxes, while encouraging healthcare staff to “report it all” to a national system tends to result in overreporting, having punitive consequences for reporting incidents tends to result in underreporting.22 Out of the many barriers to reporting,23 the most prominent seems to be the consequence of disciplinary action after submitting a report to a reporting system.24

Assuming the vast majority of healthcare professionals don’t intend to commit errors, some of the most important incident reports could be the ones most fraught with potential disciplinary consequences. This is because these reports may point to an aspect of the system unintentionally designed to encourage clinicians to go against regulations. Logically, it follows that by removing the consequence of disciplinary action, more of these types of errors will be reported.

But the picture is not quite as simple as legislating the removal of punitive measures resulting from incident report submissions. Whether a system is mandatory (generally focused on accountability - with greater likelihood of punitive measures), or voluntary (generally focused on learning - usually non-punitive), underreporting rates are still significant.

Some of this can be attributed to barriers such as confusion about what to report or time constraints. A likely more significant reason for underreporting is blame culture and its surrounding factors.

Which is why the organisation running the reporting system needs to be…

Independent

Perhaps the most widespread problem of patient safety reporting systems is that they are often run by the same organisation that also deals with punitive measures. Even if a reporting system is formally non-punitive, conflicts of interest25 can result in informal negative consequences, like reduced status or career prospects. Healthcare professionals may feel psychologically unsafe, reducing the likelihood of incident reporting.

An independent organisation running a reporting system would allow confidence that reports won’t be used punitively against the reporter, as the independent organisation would simply not have the power to punish. Recently, in the UK and Norway, independent investigative organisations have been set up for this very purpose.

Independence also likely provides psychological safety to healthcare reporters, as the information they submit won’t be seen by anyone working within the same organisation. This is especially relevant for many current local, hospital-wide reporting systems where staff report incidents directly to their superiors (e.g. supervisors or line managers). Not only does this mean that the superior could censor reports to administrators because of a conflict of interest (say, the line manager had performance targets they needed to keep to), but they could also use the incident reports as evidence to punish staff. Voluntary reporting systems have been used in this way within the United States.26

Ideally, an independent safety team within each local healthcare provider would operate and manage the incident reporting system. They would then report the issues to the board directly.27

Confidential

For readers familiar with reporting systems, you may have noticed I have not included “anonymous” alongside confidential. This is despite the fact that many reporting systems are either completely anonymous or provide a way to report anonymously. Some authors have argued for confidentiality over anonymity.28 Notably, the WHO also omitted anonymity in their initial report on the ideal incident reporting system, including only confidentiality.29

The primary issue is that anonymous reporting systems are problematic when it comes to the investigative process. Usually, reports require some form of follow-up, as investigators often have questions about the information provided by reporters, and reports also generally have information missing. In an anonymous system, follow-up isn’t possible,30 making investigations more difficult.

For the reporters submitting to anonymous systems, the lack of feedback after submission is also a significant problem. It can encourage a submit-and-forget attitude, reduce responsibility for local improvement, and, most importantly, deter reporters from submitting future incident reports. With an anonymous system, reporters do not know if anyone has read their report, never mind whether the issue they have a stake in has led to any meaningful change or investigation.31

Confidential systems allow for the possibility of feedback to occur, opening the door to active participation in patient safety from all parties. Ideally, a system would allow for every report to be confidential, with an option for the reporter to stay anonymous.

Inclusive

Out of the many research papers and guidance on patient safety reporting systems, there appears to be a gaping hole. Patient contributions. Despite a 2020 WHO report advising that “[t]he best reporting systems also include and encourage patient generated reports”, the vast majority of incident reporting systems don’t appear to allow a patient to submit reports.32 Of the ones that do, patient contributions make up a tiny fraction of the total number of reports.33

In my opinion, this is a significant oversight. Unlike in industries like aviation, where the passengers are typically just that, passengers, patients are part of the process. Almost every medical decision is discussed with the patient, and the patient is the ultimate decider on their treatment, holding a veto on any option offered.34

Patients have information that healthcare professionals do not have access to. While doctors see many patients, patients see many doctors… and surgeons… and nurses… and pharmacists… You get the picture. A patient’s care experience is different to healthcare professionals – as Pozzobon et al. (2023) puts it: “Patients are the only care team member present at every interaction across their care journey”.35

Patients can therefore identify mistakes and offer insights that would otherwise be inaccessible to staff. As an example, five out of my six psychiatrists never realised they made diagnostic and prescription mistakes during my care. By the time I learned the key information highlighting these errors, I was usually being seen by the next (or the next + 1) psychiatrist.

Given what I’ve mentioned so far, I don’t think it is too far-fetched to say that patients should be encouraged to write incident reports when the opportunity arises. But I don’t think this encouragement is occurring. In the decade I have been receiving treatment, I did not know that, as a patient in the UK, I could have submitted an incident report to the NHS’s Learning From Patient Safety Events (LFPSE) reporting system. Heck, in my decade as a patient, I didn’t know that incident reporting systems even existed.

Whenever I encountered a problem with my care, the response from healthcare professionals was either an attempt to fix my problem (and only my problem) or a directive towards a complaints procedure.36 Of all the rhetoric about moving towards learning and away from blame, why is it that, for every problem I’ve had with the healthcare system, staff have failed to look for learning and instead asked me who I want to blame?

As a patient, I would much rather a system be fixed so that the mistakes that occurred to me yesterday don’t occur to somebody else tomorrow, than see one particular clinician be reprimanded and learn nothing. I am not alone in this sentiment.37

I think the time has come to include patients in patient safety reporting.

What about Psychiatry?

You may have noticed this article has been pretty light on psychiatry. If you were to read the patient safety literature, you would notice a similar phenomenon. Up until 2019, there had only been two reviews that examined patient safety in a mental health context.38 The barriers to incident reporting in psychiatry have been so significant that Russotto et al. (2024) used expert opinion as an alternative source of information to study inpatient psychiatric incidents. Research specifically about psychiatric patient safety in a community setting is practically nonexistent.

For the research that has been conducted, many of the studies are published in speciality-specific journals (e.g. a psychiatric nursing journal) instead of patient safety ones. Consequently, psychiatric patient safety studies are not only separated from general patient safety literature, they are separated from each other. The scientific practice of building on a diverse array of past studies has become extremely difficult.39

Given the paucity of research and unique challenges associated with patient safety in mental health,40 the field of psychiatry may very well be the area in medicine most in need of effective incident reporting systems.

But, before we start thinking about how to design such a system, it is worth exploring the history of an industry that has already gone through the inevitable trials and tribulations of incident reporting implementation: Aviation.


The series continues next week, where we will start our exploration into the history of aviation reporting systems.

If you enjoyed reading about how to best capture errors through patient safety incident reporting systems, please considdffdjhbbjddfdf #ERR #ERR

Rebooting… … … … #SUCCESS If you enjoyed reading about how to best capture errors, please consider gifting a heart❤️. And if you think someone else woullllll #BUFFERING 🔄🔄🔄 d appreciate a deep dive into patient safety reporting systems, please do restack🔄 or share🔗. It would be much appreciated!



1

The history of patient safety reporting systems will be covered in a future article.

2

and medicine in general

3

That could’ve come out better…

4

From Goekcimen et al. (2023):

To sum up, the scattered and unsystematically reported evidence in learning from [Critical Incident Reporting Systems] to improve patient safety paints a rather dire picture of the current situation.

Also see: Shojania (2021), Stavropoulou et al. (2015) & Mitchell et al. (2015).

5

Some authors have argued that there have been success stories when it comes to healthcare reporting systems, and patient safety overall. E.g. Pronovost et al. (2016)

But Carl Macrae (2025) perhaps sums up the state of patient safety best:

Despite enormous effort and considerable investment over the past several decades, frustrations are rising at the limited progress that has been made. Even in healthcare systems that have developed a significant strategic and policy focus on patient safety, such as England’s National Health Service (NHS), major failures of care continue to occur with distressing regularity and large numbers of patients are still harmed each year in patient safety incidents.

See Illingworth et al. (2022), Illingworth et al. (2023) & Panagioti et al. (2019) for the prevalence of patient harm.

See Barach and Small (2000) for the relation of non-medical reporting systems to healthcare.

6

Suggested reasons for underwhelming performance include poor processing of reports, inadequate communication & feedback to front-line healthcare staff, low engagement by staff, information overload due to unselective “report everything” guidance, among many others.

Solutions include targeted reporting, making the reporting process easier and quicker for staff, improved standardisation of event types, etc.

For more information, see Macrae et al. (2016), Goekcimen et al. (2023), Shojania (2021), Stavropoulou et al. (2015) & Mitchell et al. (2015)

7

An international system definitely does not exist.

8

The six characteristics were in part inspired by Table 2 in Leape (2002) (also in the WHO 2005 incident reporting guidelines) and this short aviation article

9

A near miss is an incident which did not reach the patient. This includes things like a medical error that could have resulted in an injury, but didn’t through an intervention by a practitioner or sheer luck. See page 130 of the WHO international classification for patient safety

A no harm incident is where an event reached the patient, but no harm occurred as a result. An example would be a patient falling down (perhaps due to a drug which lowered blood pressure), but having no injury.

A harmful incident is where an event reached the patient and subsequently caused harm.

10

For example: Table 1 of Cooper et al. (2018), this UK National Health Service (NHS) policy guidance on reporting systems & this 2013 slide from the Sheffield Health Partnership University NHS Trust

11

The UK’s NHS keeps a log of the number of sentinel events, which they call “Never Events”. For example, between April 2025 and November 2025 there were 116 wrong site surgeries (with 10 of these intended for a completely different patient), 73 retained foreign objects post procedure (including 1 cotton wool ball case, 13 surgical instrument cases & 19 cases of a vaginal swab left in a patient), and 1 patient trapped between the mattress and bedrail.

The full codification of Never Events can be found here.

12

More info about mandatory and voluntary incident reporting systems, as well as the differences between them, can be found in Chapter 5 of the Institute of Medicine’s To Err is Human report (2000), or Chapter 35 of the Patient Safety and Quality handbook for Nurses (2008)

For some examples of voluntary/mandatory/hybrid reporting systems, see Chapter 5 of the 2005 WHO adverse event and learning system guidelines (for a more up to date, but less specific, state of reporting systems in the world, see page 232 of the 2024 WHO Global patient safety report)

13

Interestingly, the NHS has moved away from a discrete categorisation of severe incidents. Due to the poor quality of severe incident investigations, among other issues, the Serious Incident Framework was ditched in favour of the Patient Safety Incident Response Framework (PSIRF) in 2022.

PSIRF has placed a greater focus on patient involvement, learning and compassion.

14

Examples include: culture, checklists, investigative bodies, safety committees, improvement projects, risk registers, safety huddles, complaints processes, regulatory bodies and how they interact, policy implementation, communication systems, etc.

15

The history of aviation safety reporting systems will be covered in future articles.

16

See Gong (2022) and Macrae (2016).

17

A Safety Management System is “a systematic approach to managing safety, including the necessary organisational structures, accountabilities, responsibilities, policies and procedures”

(From the ICAO Safety Management Manual 2018 - pdf copy here)

According to a review by Zhelev et al. (2025), out of five high-income countries (Australia, Canada, Ireland, the Netherlands & New Zealand), only the Netherlands had adopted a formal SMS approach.

For more info on Safety Management Systems, see Macrae (2025), the UK’s Health Services Safety Investigations Body 2023 report on SMSs or the aforementioned review by Zhelev et al. (2025)

18

Example taken from slide six of a Serious Hazards Of Transfusion (SHOT) presentation. PDF here

19

And was one of the pioneers of such a system.

20

See Meyer (2019) & the WHO global patient safety report (2024)

21

From the 1997 House of Representatives hearing: VHA’s risk management policy and performance.

This quote is also often misquoted as:

The single greatest impediment to error prevention in the medical industry is that we punish people for making mistakes.

Misquotes like these don’t appear to be rare. Sometimes, even if we hear the original quote verbatim, we may remember a different one (a phenomenon related to the Mandela effect).

There may be a few reasons why misquotes like these become more memorable, from Processing Fluency to the Illusory Truth Effect and the Misinformation Effect.

22

The paradox is resolved by looking at what actually gets reported: incidents unlikely to result in disciplinary action are reported freely, while incidents that could lead to disciplinary action may very well go unreported. Further, the reports that are submitted may omit crucial details that could get the reporter in trouble.

In other words, punitive action may encourage a stream of similar, low-stakes reports with little diversity, while also reducing the reliability and quality of the incident reports.

23

For some examples: see Haw et al. (2014), Soydemir et al. (2017), Vrbnjak et al. (2016) & Waring (2005)

25

For example, the organisation running the reporting system is also the one paying the healthcare employees.

A perceived conflict of interest also applies here. For instance, the UK’s LFPSE reporting system is run by NHS England, which generally doesn’t employ healthcare professionals (these usually sit with local providers of varying types).

But given that I had to consult the LFPSE’s terms and conditions to check that NHS England has no power to investigate individual cases, there are likely healthcare professionals who are also unsure about whom they are reporting to under the generic “NHS” banner.

26

See Gampetro et al. (2024). Pdf copy here

27

See Macrae (2016)

30

Technically, this isn’t true. There is a way to provide anonymity for reporters and still communicate with them. But I will cover this in a future article.

31

For more on feedback, including as a barrier to reporting, see:

Macrae (2016), Mahmoud et al. (2023), Beecham et al. (2025), Koskiniemi et al. (2025), Bovis et al. (2018)

32

I found it difficult to ascertain how many reporting systems allow patient contributions without manually surveying.

For the United States, I did find an Office of Inspector General report about Patient Safety Organizations (PSOs), which noted that:

most of the PSOs we interviewed said that they do not accept patient submitted reports.

But this was only out of a sample of five PSOs.

Notably, the Agency for Healthcare Research and Quality has funded at least four projects since 2000 focused on the role of patients and families in reporting (and also carried out an extensive report into designing systems for patient contributions in 2012). But these aren’t exactly earth shattering numbers.

Worldwide was a tougher nut to crack. And I couldn’t find any direct survey data.

I could, however, infer from the near-absence of patient contributions in the literature that very few systems allow patients to report.

For example, in Table 2 of Scott et al. (2023), a paper reviewing types of safety incident reporting, only 2 papers (out of 53) might reference patient contributions (under the category of “Mix—Clinical staff, non-clinical staff, relatives or residents”). The rest of the papers fall under categories relating only to staff.

33

I could only find one explicit mention: a study using the Danish Patient Safety Database by Christiansen et al. (2021), which found that patient contributions accounted for only 1.4% of the 209,263 incident reports studied.

From an implicit mention in a study by Ward and Armitage (2012), there is reason to believe that the UK’s reporting system (NRLS then LFPSE), which allows patient reporting, has a similarly low percentage of patient reports. The authors also claim it is a similar story worldwide.

34

As an analogy, if getting your car serviced were like healthcare, you would have to stand next to the mechanic throughout the service. All the while, they would have to explain to you what they were doing for every single check and fix, and you would be the one making the final choice about the best way to proceed for every single decision that needed to be made.

35

Pozzobon et al. (2023) is an excellent review about the handful of studies on patient contributions to incident reporting systems.

I highly recommend giving it a read (it is open access).

36

I’ll give an example:

After one of my psychiatrists wrote a report containing numerous factual inaccuracies and contradictions based on a fifteen-minute phone call about a lithium dose increase, I went to the NHS’s Patient Advice and Liaison Service (PALS). They were friendly and helpful, but only pointed me towards who to complain to. There was never any mention that I could submit a report to LFPSE.

The complaints procedure revolved around my specific problem and what they could do to fix it (Spoiler: all they could offer was for me to submit another report to my medical record with annotations pointing out the inaccuracies - the original is still part of my record).

37

I found the “preventing reoccurrences” section of Mazor et al. (2014) to be the most personally impactful example.

Also see: van Dael et al. (2020), Bismark et al. (2006), Bouwman et al. (2015) & Bark et al. (1994)

38

Thibaut et al. (2019) cited Brickell et al. (2000) and Kanerva et al. (2013)

39

See D’Lima et al. (2018) for the claims made in this paragraph.

]]>
<![CDATA[Explaining how mistakes are made using Swiss cheese and a lawnmower]]>https://psychiatricmultiverse.substack.com/p/explaining-how-mistakes-are-madehttps://psychiatricmultiverse.substack.com/p/explaining-how-mistakes-are-madeThu, 12 Feb 2026 19:01:19 GMTThis is the third article in the Psyverse #2 series. To read the two previous ones, see the compendium.


There is no getting around the fact that health and safety is a very boring subject.

This was made clear to me when I volunteered to give an induction talk to new PhD students at the start of my third year. I arrived early, in the middle of the health and safety officer’s talk. I watched from the back of the room as he heroically tackled a plethora of health and safety directives. He spoke passionately about who to talk to in case something went wrong, where information on safe handling of equipment could be found, and empathetically reassured the onlooking students that everyone makes mistakes – it is far better to find help when things go wrong than try to fix things on one’s own.

It was a thoroughly impressive talk by a thoroughly impressive speaker. “Gosh, how am I going to follow that?” I remember thinking as I walked nervously up to the front.

When I turned to face the sea of eyes, I immediately understood my miscalculation. I had never seen a more bored audience in my life. Amid the sullen faces, heads on desks and emotionally vacant expressions, one student faced 90 degrees away from me, staring directly at a newly painted wall. “How am I going to follow that?” I thought a second time. I tried my best to energise and inspire the new cohort of PhD students, but all I managed was a quiet titter of laughter halfway through.

“That was brilliant, Alex!” the postgraduate secretary commented, rushing up to greet me afterwards. “I’ve never seen the new students laugh after the health and safety talk.” This might be the highest praise I have ever received.

Talking about health and safety is inevitably boring. But that doesn’t mean that doing health and safety should be boring. So rather than try to persuade you that the mind-numbingly long documents of guidelines and strategies and directives are somehow secretly sexy, or lecture you on the life-saving importance of health and safety rules that you all must follow to a tee, I’m going to argue that when health and safety is done well, you don’t see it as health and safety at all. It becomes a set of systems and a culture woven into the everyday workings of one’s job.

Since health and safety applies to all jobs, from postal workers to gardeners to healthcare professionals, the theory behind these systems and cultures is generalisable. So I hope you will join me in this exploration of Human factors, the subject concerned with health and safety at work.1

Part of a good health and safety system/culture is knowing what to do when a mistake occurs. How do we rectify it? To answer this question, I need to talk about a lawn mower and Swiss cheese.

Not quite what I had in mind… Source: NightCafe

Four for the sides, please

Let’s say an employee named “Axel” has been hired by a lawnmower company as a door-to-door salesman. The company has just released a brand-new lawnmower. Axel goes around neighbourhoods looking for unkempt lawns on which to demonstrate the mighty power of the cutter 3000. Lo and behold, Axel finds just the lawn. And not only have the occupiers not thrown him off their property immediately, they’ve agreed to let Axel show them how the mower works.

As Axel is setting up the lawnmower, the occupiers ask him to set it on the highest setting. “No problemo!” Axel confidently replies, as he puts his hands on the push bar, strides down the first horizontal line of lawn, and…

Oh Sh*t. Source: Not me. Nope, nada, nuh-uh

Axel has only gone and buzz-cut a bald line into the turf! He slowly turns his head to see the open-mouthed expressions of the couple that so keenly invited him to demonstrate the cutter 3000. It ends how most sales attempts end: Axel gets chucked out by the occupiers.

Now, in this circumstance (which definitely did not happen with me and my own lawn), the primal urge is to blame.2 The mistake was Axel’s fault. In the context of a group of people, like in an organisation, this is what is called a blame culture. When someone makes a mistake, often the reaction is to find a small number of people who are responsible (who are then punished), drawing attention away from possible systemic issues. From The design of everyday things3 by Don Norman:

Humans err continually; it is an intrinsic part of our nature. System design should take this into account. Pinning the blame on the person may be a comfortable way to proceed, but why was the system ever designed so that a single act by a single person could cause calamity? Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else.

Rather than immediately throwing Axel out (leaving themselves with an asymmetric, sideburn-less garden and without what is otherwise an excellent lawnmower), the friendly couple could have asked Axel what went wrong. Together they could try to piece together how the mistake was made, to understand how it could be prevented in future. This is the opposite4 of a blame culture – a learning culture (sometimes known as a “supporting culture” or “no-blame” culture).

In communication with the lawnmower company, changes could be made to the procedure or design of the cutter 3000. Ensuring no lawn will feel naked again. If the company keeps finding that Axel seems to have a pathological obsession with removing as much grass as possible, while other salesmen have stopped buzz-cutting grass, the company could ask Axel further questions. If he were found to have made the mistake on purpose,5 disciplinary procedures could be undertaken. This is a “just culture”, which lies between a blame and a learning culture.6 While learning is the ultimate aim, individuals have responsibility over their actions and can be sent for further training or disciplined if found to be engaging in risky behaviour deliberately.

Let us say that Axel made an honest mistake. How could we learn from it?

Slicing Swiss Cheese

First, we need a model of how mistakes occur within an organisational system. Fortunately, a British psychologist called James Reason came up with such a model in the late 1980s. He imagined a series of layers representing defences and safeguards within a system to guard against hazards.7 Rather than being perfect, these defences have holes in them. Rob Lee, Director of the Bureau of Air Safety Investigation in Canberra at the time, suggested to Reason that the sequence of layers with holes looked a lot like Swiss cheese. Reason took this idea and ran with it, publishing a figure utilising Swiss cheese in his 2000 article Human error: models and management. And thus, the Swiss cheese model was born.

Slices of Swiss cheese with holes
Source: Reason (2000)

So, as you can see from the Swiss cheese figure above, when anyone makes even the tiniest mistake, everyone dies.

Wait…

That isn’t right…

Can we look at the Swiss cheese figure again?

Slices of Swiss cheese with holes
Source: Reason (2000)

James Reason has only gone and made an error in a paper about how errors are made! You cannot just copy and paste the same slice of Swiss cheese!8 It seems like human laziness has triumphed once again.

You see how easy it is to blame?

So, let us use the Swiss cheese model to help us understand how James Reason, the creator of the Swiss cheese model, made a mistake while trying to communicate the Swiss cheese model.

Firstly, we need to understand what the Swiss cheese model is actually supposed to represent.

The layers of cheese are defences against a source of harm (hazard), so that an event that causes harm to a person (loss) doesn’t occur. These can range from organisational influences to environmental and psychological conditions. The holes are gaps in these defences (e.g. toxic work culture, budget constraints, bugs in code, weakness in a safety barrier, etc.).9

The layers of cheese are not identical in nature. Together, they represent a system of defences as a whole, but each individual layer represents a different part of the system. In other words, the holes won’t be in the same places. The layers form a hierarchy, with the start of the arrow going through the “blunt end”, and the head of the arrow through the “sharp end”.

The blunt end is usually remote from any accident (or loss) in both time and space. The slices of cheese here represent senior decision makers, managers, the design of the system procedures, and organisational culture. The failures, or holes in the Swiss cheese at this end, are “latent conditions”. From Human error: models and management by James Reason:

[Latent conditions] arise from decisions made by designers, builders, procedure writers, and top level management…. Latent conditions have two kinds of adverse effect: they can translate into error provoking conditions within the local workplace (for example, time pressure, understaffing, inadequate equipment, fatigue, and inexperience) and they can create longlasting holes or weaknesses in the defences (untrustworthy alarms and indicators, unworkable procedures, design and construction deficiencies, etc). Latent conditions—as the term suggests—may lie dormant within the system for many years before they combine with active failures and local triggers to create an accident opportunity.

The sharp end is where frontline workers10 are in direct contact with the system environment, whether that be another person (e.g. a patient), a machine (e.g. an aeroplane), or a terrain (e.g. a quarry). The slices of cheese here represent the state of the system conditions or the psychological/physical state of the frontline worker. The holes in these slices of cheese are called “active failures”, or “unsafe acts”. These are slips, lapses, fumbles, mistakes and procedural violations.11

Note that the holes in the Swiss cheese represent both active failures and latent conditions. Generally, latent conditions are higher up the hierarchy, nearer the hazard, with active failures lower down. Early versions of the Swiss cheese model had distinct explicit representations for each layer. But given the heterogeneity12 of health and safety systems, the layers were eventually left unnamed so that different systems could be labelled in the way that suited them best.13

Importantly, as James Reason notes in the very paper he published the misleading Swiss cheese figure in:

[Unlike in Swiss cheese the] holes are continually opening, shutting, and shifting their location. The presence of holes in any one “slice” does not normally cause a bad outcome. Usually this can happen only when the holes in many layers momentarily line up to permit a trajectory of accident opportunity—bringing hazards into damaging contact with victims.

So, the Swiss cheese figure is supposed to represent a snapshot in time when all the holes manage to line up. Perhaps a more accurate Swiss cheese model representation would be this figure:

Each of the layers has holes in different places and of different sizes. The arrow touches at least one of the edges of the holes it passes through, which helps convey the momentary nature of the holes lining up. Finally, the arrow passes through a very tiny hole in one of the layers to convey that the hole has only just opened up.14
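The “holes momentarily lining up” idea can also be made concrete with a little arithmetic. Here is a minimal simulation sketch (my own illustration, not from Reason’s paper): at each snapshot in time, every defensive layer independently has a hole with some probability, and a loss occurs only when the holes in all layers line up at once.

```python
import random

def accident_rate(layers, p_hole, trials=100_000, seed=42):
    """Estimate how often a hazard penetrates every defensive layer.

    Each trial is a snapshot in time: every layer independently has a
    momentary hole with probability p_hole, and a loss occurs only
    when the holes in all the layers line up at once.
    """
    rng = random.Random(seed)
    accidents = sum(
        all(rng.random() < p_hole for _ in range(layers))
        for _ in range(trials)
    )
    return accidents / trials

# A single layer that fails 10% of the time lets through roughly 10%
# of hazards; four such layers together let through only about
# 0.1**4 = 0.01%. Stacked imperfect defences rarely all fail at once.
print(accident_rate(layers=1, p_hole=0.1))
print(accident_rate(layers=4, p_hole=0.1))
```

The numbers of layers and hole probabilities here are arbitrary, but the shape of the result is the point: each extra imperfect layer multiplies down the chance of a clear trajectory from hazard to loss.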

So, back to Reason’s Swiss cheese figure from earlier. The “unsafe acts” that Reason committed were copying and pasting the same slice of Swiss cheese and drawing an arrow directly through the middle of the holes.15 To get to the bottom of the latent conditions, a root cause analysis would need to take place. This is where an investigation of the “unsafe act” and the events leading up to it attempts to determine where the holes in the system were. In lieu of a root cause analysis, we can guess that some of the latent conditions were: the likely clunky publishing software of the time (2000), the publisher’s checks, and the peer review system.16

Swiss Cheesing the lawnmower

With our understanding of the Swiss cheese model in mind, let us look back at the lawnmower, and my active failures… No wait! Axel’s active failures…

Ack. I cannot keep up this ingenious facade any longer. Alas, it was me… I embarrassingly buzz-cut my own lawn.

With that out of the way, my first active failure was that I didn’t read the manual fully. I didn’t learn the ins and outs of how the lawnmower worked.

Then, I set the height of the lawnmower without double-checking it was correct. Finally, I didn’t test out the lawnmower on a very small, inconsequential patch of grass, much as one would when testing a paint colour.

The lawnmower company could give out directives for owners to follow as a slip of paper in the box the lawnmower came in. However, this would be like a band-aid (or “plaster” if British) over a continuously cracking system. Wiegmann et al. (2022) illustrate the shortcomings of such local fixes through a classic Dutch childhood story. A young boy named Hans uses his fingers to plug holes in a dam near his village, saving the inhabitants from flooding.

While such corrective actions are needed, and should be implemented, they often do not address the underlying causes of the problem (i.e., the reasons for the leak in the dam). It is just a matter of time before the next hole opens up. But what happens when little Hans ultimately runs out of fingers and toes to plug up the new holes that will eventually occur? Clearly, there is a need for the development other solutions that are implemented upstream from the village and dam. Perhaps the water upstream can be rerouted, or other dams put in place. Whatever intervention we come up with, it needs to reduce the pressure on the dam so that other holes don’t emerge.

In other words, we need to address the latent conditions in the system so that new holes don’t occur, or even better, so that Hans doesn’t need to put his fingers in the dam holes17 in the first place.18

What were the latent conditions in my lawnmower escapade? To find out, we need to do an investigation.

First of all, how did my buzz-cut mistake occur? Well, I set up the lawnmower by unfolding the push bar, putting in the battery, and plugging in the safety key. Then I looked at the tag attached to the push bar to check whether the lawnmower was set at the right height. According to the label, the lawnmower was originally set to “low”, and I wanted it on “high”, so I reached down from my standing position behind the lawnmower and pulled the red handle to the “high” position. I didn’t question this. The handle being up, to indicate the high setting, felt intuitive to me.

Lawnmower with blown-up images of tag and handle. Source: Me

Then, I turned the lawnmower on, walked forward, and made my lawn feel embarrassed for a couple of weeks.

So, what on earth happened? Well, when you take a look at the side of the lawnmower, the answer reveals itself.

Side-on view of lawnmower. Source: Me

Pulling the red lever back lowered the lawnmower rather than raising it! Since I pulled the lever back while standing behind the lawnmower, the height indicators were obscured from view. Similarly, while standing, I could not easily see that the lawnmower had lowered. And since, from my position, everything made intuitive sense, my brain was not on the lookout for things going wrong – I was focused on pushing the lawnmower to mow the grass.

My guess is that the tag was intended to be viewed with the lawnmower handle folded, when the tag sits close to the lever. In other words, the tag was supposed to be read while facing the front of the lawnmower rather than standing behind it.19

Possible design solutions for these latent conditions20 could include changing the lever’s inner mechanism so that the higher lever position corresponds to a higher cutting height. Pictographs of short grass and long grass could be placed on top of the lawnmower at each end of the lever slot, so that when a person reaches down to change the lever, they can confirm it is at the setting they want. The tag could be attached to the other side of the handle, and perhaps an outline of a lawnmower around the lever diagram could show clearly which end is the front of the machine and which is the back.

Reporting the problem

Now that I have spotted the active failures and latent conditions21 after committing my unsafe act (as well as devised possible solutions to plug some of these holes in the Swiss cheese), how could I communicate it back to the lawnmower company?

I could just send them an email. But I might just get a message back saying “we have read your concerns and passed your feedback on to the relevant person”, knowing full well the administrator on the other end of the email has probably deleted it. However, if the lawnmower company had a form I could fill in, specifically for me to report a mistake, I would be more confident my concerns would be looked at.22

These forms would be part of something called a safety reporting system. Originating from industries like aviation and nuclear, they provide a means for front-line workers to report unsafe acts.
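To give a feel for what such a form captures, here is a hypothetical sketch of a minimal incident-report record. The field names are my own invention (no real system is being described), and the outcome categories loosely follow the WHO-style distinction between near misses, no harm incidents and harmful incidents.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Outcome(Enum):
    # Loosely follows the WHO-style outcome categories
    NEAR_MISS = "near miss"  # incident never reached the patient
    NO_HARM = "no harm"      # reached the patient, no harm resulted
    HARMFUL = "harmful"      # reached the patient and caused harm

@dataclass
class IncidentReport:
    """Hypothetical minimal record behind a safety reporting form."""
    description: str
    outcome: Outcome
    contributing_factors: list[str] = field(default_factory=list)
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    anonymous: bool = True  # protects the reporter, at the cost of feedback

# A report for the lawnmower escapade (no person was harmed, after all)
report = IncidentReport(
    description="Height lever lowered the deck instead of raising it",
    outcome=Outcome.NEAR_MISS,
    contributing_factors=[
        "ambiguous lever direction",
        "height tag not visible from operating position",
    ],
)
print(report.outcome.value)
```

The point of structuring reports like this, rather than accepting free-text emails, is that outcome categories and contributing factors can then be aggregated and trended across many reports, which is where a reporting system earns its keep.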

While in my lawnmower scenario I personally made the effort to conduct a very informal investigation, in the real world23 front-line professionals usually don’t have the time to act as unpaid investigators, nor would they be very happy about it. While it is beneficial for staff to be involved as much as possible in every aspect of safety, most of this job would go to an investigator. What we can expect is for front-line professionals to report their mistakes.

Compared to the aviation and nuclear industries, the implementation of reporting systems within healthcare is relatively recent, mostly occurring within the last twenty-five years or so. It has gone… well… I think it wouldn’t be too controversial to say the jury is still out.


The series continues next Thursday.

If you didn’t find the article too cheesy or full of holes, please consider giving it a heart ❤️.
And if you know someone who would appreciate an unnecessarily thorough analysis of a bald stripe in a lawn, please do restack 🔄 or share 🔗. My lawn and I would be grateful!



1

The UK’s Health and Safety Executive says that:

Human factors refer to environmental, organisational and job factors, and human and individual characteristics, which influence behaviour at work in a way which can affect health and safety

2

This could be honest or dishonest, i.e. you could blame other people for mistakes you have made.

4

More “different” rather than “opposite”. I’ve created a spectrum as an aid for the reader.

5

This could take the form of impaired judgement, malicious action, reckless action, risky action or unintentional error (there is still a problem with Axel’s lawnmower that isn’t apparent in other ones). See Boysen (2013) for more details.

6

Sort of… It isn’t really in the middle of the two but kind of adapts aspects from both. See this definition of a just culture by Reason (1997):

An atmosphere of trust in which people are encouraged, even rewarded, for providing essential safety-related information – but in which they are also clear about where the line must be drawn between acceptable and unacceptable behaviour

7

I’m using a very broad definition of hazard in this article. Generally, a hazard is defined as “a potential source of harm or adverse health effect on a person or persons” (From the Irish Health and Safety Authority). I suppose you could argue the lawnmower’s height setting is a source of harm to the grass’s self-esteem…

8

Using the same slice of cheese for each layer implies that whenever any mistake is made (represented by any hole), harm is immediately inflicted upon the person who made it, i.e. we always go straight from hazard (potential harm) to loss (harm).

9

We can use a trapeze show as an example. The main hazard would be falling from the trapeze bar, and the loss would be injury as a result of falling. The “defences” may be a long training procedure before being allowed to perform, the quality of the training equipment, a culture which doesn’t pressure performers into performing (e.g. if they have an injury), and a net below the trapeze performance area to catch any falls. “Holes” could be a packed performance schedule (not allowing performers to recover appropriately), inadequate training equipment, a culture with a “throw them into the deep end” attitude to rookies, and literally a hole in the safety net.

10

The frontline workers could be nurses/doctors with a patient, construction workers on a building site, airline pilots in an aeroplane, to name a few.

11
  • Slips are execution failures where a frontline worker has the correct intention, but the action deviates from their plan (e.g. pressing the wrong button).

  • Lapses are memory failures where a worker forgets an intended action (e.g. forgetting to press a button).

  • Fumbles are active execution failures. These are slips related to poor physical execution (e.g. pressing the wrong button through miscoordination - liker fir instancew on a keybord).

  • Mistakes are planning or problem-solving failures where the frontline worker’s action goes as intended, but the plan/intention itself is incorrect (e.g. A button is mislabelled, meaning the wrong button is pressed). This is split further into rule-based mistakes, where a bad rule is applied (or good rule misapplied), and knowledge-based mistakes, where a worker finds themselves in a completely novel situation - of which no rule exists - and incorrect reasoning occurs.

  • Procedure Violations are a deliberate deviation from a rule, procedure, standard or safe operating practice (e.g. a worker presses the wrong button on purpose). These intentional actions are split further into routine (common practice despite being unsafe), exceptional (unusual deliberate departures from procedure) and optimising (cutting corners) violations. It is important to note that procedure violations are rarely malicious and usually result from a worker attempting to get a job done as efficiently as possible.

For more on types of active failures, see Human Error by Reason or this short explainer from the UK’s Health and Safety Executive.

12

I don’t quite know why, but homogenous and heterogenous are two words my brain continually stumbles on, meaning-wise. So, for readers who also encounter this problem:

Homogeneity means that the parts of a system are all similar to each other, whereas heterogeneity means that the parts of a system are all different from each other.

They are not absolute terms; they describe a spectrum, and the two words indicate towards which end of the spectrum a system lies.

14

As a final note on the Swiss cheese model, it is meant to be a heuristic to help a general audience understand how organisations could become safer. Safety science has to encompass all the factors involved in the complex interaction of people with other people, as well as people with the environment. It is probably fair to say that nothing in safety science is simple.

Many criticisms of the Swiss cheese model fail to take into account other, more detailed versions of the heuristic Reason produced, such as:

Image
Source: Larouzee and Le Coze (2020)

I recommend the excellent article by Larouzee and Le Coze (2020) for an extensive look at the history of the Swiss cheese model.

15

The hazard in this case is that readers misinterpret what the Swiss cheese model is supposed to mean (just search “Swiss cheese model” in Google Images for an array of interpretations).

16

The image processing software would have been much less advanced than today, meaning the creation of different sizes and shapes of cheese would be more difficult. Or perhaps Clippy didn’t remind Reason about the mistake...

There may be systemic issues in the structure of the editorial board (for instance, not being paid for their time, introducing time pressures).

While it would be a totally Alex move to tangent into an analysis of the peer review system, I think this article would end up going on forever…

So, I’ll let Adam Mastroianni do it.

17

Works both ways

18

Wiegman et al. (2022) goes on to clarify:

This is not to say that directly plugging up an active failure is unimportant. On the contrary, when an active failure is identified, action should be taken to directly address the hazard. However, sometimes such fixes are seen as “system fixes” particularly when they can be easily applied and spread.

19

A hint that this was supposed to be viewed from the front comes from the orientation of the lever “notches” in the tag image, which only align when viewed from the front of the lawnmower.

Though this still doesn’t explain why the label is on the wrong side of the lawnmower.

20

If going by the HFACS system, these would be “Preconditions for Unsafe Acts”.

21

Well, not quite…

Without knowing anything about the structure of the lawnmower company itself, the organisational & supervisory latent conditions are beyond our grasp. Possible places to look for them would be the procedures for user testing, organisational trade-offs (cost/time), etc.

22

The form could just be an empty box with the question “What happened?”, but if you are a large organisation receiving hundreds of these forms a week from something as small as “I input Mr Jenkins’ name as Mr Yenkins” on an invoice to “I accidentally amputated a toe with the lawnmower”, sorting through this information would be quite difficult. A structured report would be a better way to go.

23

If you doubt that something like my lawnmower incident could happen in medicine, have a look at this 2006 article by Lowe.

]]>
<![CDATA[A most costly preventable mistake]]>https://psychiatricmultiverse.substack.com/p/a-most-costly-preventable-mistakehttps://psychiatricmultiverse.substack.com/p/a-most-costly-preventable-mistakeThu, 05 Feb 2026 16:03:08 GMT
Source: NightCafe

Welcome to the psychiatric multiverse! Here, we will explore how ideas from other fields might be able to improve the psychiatric system. This is the second article of our in-depth series on incident reporting systems. It is also a follow-up to my previous article on mistakes.


Most people can adapt to new situations. Eventually.

In the no man’s land between “new situation” and “eventually”, things can often look a little baffling. There is perhaps no better example than a sports team suddenly adopting a completely new strategy, winning lots of games, and their opponents responding by… doing the same losing strategy as before.

I know how it feels to be one of these befuddling opponents. The day after my severe reaction to the Selective Serotonin Reuptake Inhibitor (SSRI) antidepressant Sertraline, when I was suddenly bombarded by a bunch of severe anxiety symptoms that required a hospital visit, I went into work. Because. Erm. My brain thought the best strategy to deal with this scary new situation was to do exactly what I had done before. That day was a workday, after all. I lasted about 15 minutes before heading back home.

Having definitely, absolutely, 100% learned my lesson, I decided to continue trying to go into work over the next couple of days. On my last attempt, I didn’t even make it to the bus stop.

The first thing I wanted to do when I moved back to my mum’s place to recover was book a meeting with a psychiatrist. When my internal universe changed, initially, all I wanted was a name for it. I knew what happened to me was serious, and I trusted the UK’s mental healthcare system would get me the urgent help I needed, post haste.

Two months later, naivety fully banished from my psyche, I left the first meeting with my psychiatrist relieved.

She told me that what I had was something she had seen before – an “alertion response” to the Sertraline.1 In fact, she was really pleased to see me in her office at such a young age – she told me that most of the patients she saw with a condition like mine were in their 40s or 50s. I think she meant this as some kind of reassurance. But all I thought was “you mean I managed to fit 40 or 50 years of stress into 23?”

Nevertheless, I was hopeful. I thought I had something rare amongst the general population, but something common for people who end up in the office of a psychiatrist. Things were going to get better.

A month later, I was crying in the same psychiatrist’s office. Things had gotten considerably worse, and I still didn’t know what on earth was wrong with me. I was upset because no matter how hard I was trying, I could not get through to the psychiatrist how serious my condition was. Something finally clicked when she asked “Are you okay to drive home?”

I replied “Umm... My mum drove me here? I’m not even close to being able to drive.”

She asked to see my mum, who came in and helped me communicate my family history of psychiatric conditions, and the symptoms I had after taking the Sertraline, more effectively. She gave me a retrospective diagnosis of serotonin syndrome, adding that I was most likely on the bipolar spectrum.2

When my mum and I left the psychiatrist’s office and turned to go down the stairs, I lifted my head to see her office door slightly ajar. Framed by this gap, I saw my psychiatrist leaning over her desk, head in her hands. It can be easy as a patient to forget that mistakes can have a toll on clinicians too.


My psychiatrist had given me a prescription for Olanzapine, an antipsychotic. Over the next day or so, I agonised over whether I should take it. The potential side effects terrified me. One of the safest psychiatric medications had completely upended my internal reality – goodness knows what a “serious” drug could do. In the end, I decided against taking Olanzapine and continued down the psychotherapy route. I was just too scared of potential permanent side effects, however unlikely they were to occur.

A few months into psychotherapy, I decided to trial another medication. I was only given eight months off from my PhD, and it became clear I wasn’t going to recover in time to return. Having been discharged by my psychiatrist, I went to my GP. He wrote to my local NHS mental health centre. A psychiatrist at the centre made a recommendation, oddly, through a letter. I never saw this psychiatrist. The recommendation was for the SNRI antidepressant Venlafaxine.

The Serotonin out of Serotonin and Noradrenaline Reuptake Inhibitor terrified me, but my GP reassured me that Venlafaxine mostly acted on Noradrenaline reuptake inhibition. Thinking there were few other options, I decided to start a trial of the drug.

I went unbelievably slowly on the titration up to a therapeutic dose of 150 mg. On several occasions, it felt eerily similar to my experience of taking Sertraline. I put this down to what I thought was the tiny S of Venlafaxine’s SNRI classification. After weeks of slow titration, I finally made it to 150 mg.

I didn’t feel that much different to before, but hoped Venlafaxine would act as a buffer against the most stressful events. Eventually, I did make it back to the PhD.

I took Venlafaxine for three years. During this period, I tried to come off it twice. Both times, I started to feel worse once titrating down. I therefore concluded Venlafaxine must have been helping. In fact, the opposite was true. By the time I finished the PhD, I was the sickest I had ever been. The deterioration was so slow, I never noticed.

In the years since, I have both learned more about Venlafaxine and seen the reports about me by my first psychiatrist – the one who prescribed Olanzapine.

Venlafaxine was not doing what I thought. My GP got the Venlafaxine mechanism of action the wrong way round. At low doses, Venlafaxine acts more on serotonin reuptake inhibition. It only starts having any meaningful Noradrenaline reuptake inhibition at around 225 mg per day. The highest I ever reached was 150 mg. In essence, I was taking a weak SSRI – the type of medication which caused my severe reaction – for three years.

A sentence in the last paragraph of my first psychiatrist’s report (4 months before I took the Venlafaxine) showed the SNRI prescription mistake was inherently preventable. She wrote:

[It is] very important that [Alex] avoids all the SSRI and SNRI medications.


I think it's natural to be angry knowing things could have turned out very differently. But shaking one’s fist at the world doesn’t help improve it. After all the mistakes that were made during my care, I wanted to learn why they were made. I wanted to learn how the kinds of mistakes that happened to me could be prevented from happening to other people.

What I found was a subject much more vast, more complex, more curious than anything I had ever come across in my life.


The series will continue next Thursday.

Thanks for reading The Psychiatric Multiverse. If you enjoyed this story about mistakes, please do give it a heart ❤️. It is much appreciated!


Thanks for reading The Psychiatric Multiverse! Subscribe for free to receive new posts and support my work.

1

In her report about this meeting, she also said I “appeared euthymic in mood”

2

The bipolar spectrum diagnosis turned out to likely be another mistake (given I have never experienced a manic episode)

]]>
<![CDATA[Gateway to the Psyverse ]]>https://psychiatricmultiverse.substack.com/p/gateway-to-the-psyversehttps://psychiatricmultiverse.substack.com/p/gateway-to-the-psyverseWed, 04 Feb 2026 16:03:00 GMTThis Substack will be using a “season” model.1 During each season, I will be posting every Thursday. To help keep track of the Psyverse-specific articles, here is a reverse chronological list that I will continually update.


Psyverse #2 (Reporting systems)


Psyverse #1 (Stories)


Psyverse #0 (Why I write)

1

I will post a bunch of articles, each one a week apart, then take a break while writing the next bunch. This will allow me to publish consistently while also giving me the breaks needed as someone suffering from an energy-limiting condition (Generalised Anxiety Disorder)

]]>
<![CDATA[The curious inadaptability of healthy people]]>https://psychiatricmultiverse.substack.com/p/the-curious-inadaptability-of-healthyhttps://psychiatricmultiverse.substack.com/p/the-curious-inadaptability-of-healthyTue, 04 Nov 2025 14:31:18 GMTToday is the tenth anniversary of my chronic illness. On Halloween 2015, I was prescribed an antidepressant. Four days and a grossly inconvenient severe reaction later, my journey into a decade-long chronic illness began.

Nowadays, I don’t really like milestones – the moments in life generally seen as times to celebrate and reflect on the past. I don’t celebrate my birthday, or Christmas, or any holiday. This isn’t because I’m a negative nelly. I don’t spend the generally recognised periods of “time-off” in a darkened room lambasting the pointlessness of existence (though I don’t begrudge anyone that does – life can seem pointless at times). Nor do I sit in a mountaintop lair, plotting to steal presents from pacifists, with very strange hair.1

All in all, I think I’m a fairly optimistic individual. The main reason I don’t celebrate or reflect on milestones is because chronic illness doesn’t generally allow it.2 Periods of rest and recuperation are a social privilege afforded to the healthy. Holidays, and the like, require energy and help to organise. It is presumed that brain reward mechanisms are functional and in place, so that participants can actually enjoy the time off. Not to mention that usually the experience is much better with other people – effort is required to cultivate and maintain relationships so that people will join you in your time off.

Living with chronic illness means I have no energy to spare. The vast majority of my time is spent attempting a single goal: to get healthy. Any widely recognised day of rest is spent like all the others. Routines, practices, management, monitoring & slow build-up of functioning. Unfortunately, this is an unavoidable reality - chronic disease doesn’t care about customs and traditions.3

So why am I suddenly reflective? Goodness knows. Perhaps the strange draw of the magic number “10” was too much to ignore (which also makes me wish humans had six digits on each hand – two extra years before considering an article like this). Or perhaps it has been two months since my previous article, and I’m still not close to finishing the series on reporting systems. Whatever the reason, my thoughts have wandered towards the life I had before I became sick, and the one after it.

As a healthy person, I was functionally adaptive. I could schedule changes at the last minute to meet up with friends or do a favour for a colleague. I could stay up late to finish a project or go out to socialise. I was mostly reliable and dependable.

As a sick person, I can do none of these things. But in the past decade, I couldn’t help but notice that the healthy version of me wasn’t completely adaptable.

Stressful dog walks

A curious thing occurs almost every time I am out walking with my mum and our dog. Walking down a wooded path, a person up ahead will see us and stop dead. Arm outstretched, a tight grip on the leash, they will look nervously down at their dog, trying to figure out what to do next. The dog will invariably be lunging towards us, barking or something of that ilk.

If the dog is small enough, the other person will quite literally wrestle the dog up into their arms and carry them past us. A bigger dog, and they will use the leash to drag them as far off the path as possible.4 Meanwhile, our dog is tight to my mum’s leg, no leash in sight, as we both walk past the most confusing examples of human-canine partnerships.

This occurrence seems to be fairly common, as this YouTube video illustrates5:

Reflecting on this after a walk one day, I imagined what dog walking must be like for these people with reactive dogs. The anxiety it must provoke every day. The effort and stress that they must go through for every person they meet on their regular semi-circular route. Effort and stress which must have knock-on effects in other parts of their life – their relationships, their work. All for what, on the face of it, is not an unsolvable problem.6

Throughout my decade in disease, I have observed this type of inadaptability in friends, colleagues and strangers. I’ve seen countless times where a new member of a group comes in and disrupts the harmony, and instead of the group addressing the issue, everyone gossips in the background while pretending everything is fine in the foreground. I’ve been in situations where someone provides a solution to an individual (requiring a tiny bit of learning) that directly addresses their problem, making their life easier. But, instead, the individual subsequently ignores the suggested solution and continues doing the laborious method they were previously comfortable with (e.g. like manually filling in a spreadsheet instead of learning how to use formulas or macros).

I think that for healthy people,7 this kind of inadaptable “stickiness” is manageable. Yes, walking with a reactive dog is a tad stressful, but once it’s over, one can still get on with the rest of the day. That new member of the group is annoying, but it doesn’t ruin the experience too much. It takes a while to fill out this long spreadsheet, but the work still gets completed to a satisfactory standard.

These kinds of stressors are all manageable. On the other hand, they are also unnecessary. All of them are solvable problems, often without too high a degree of learning. Further, the consequences of a healthy person's inadaptability do not fall solely on the individual exhibiting it. Reactive dogs are stressful for other dog walkers too, newer members of a group may leave because of the unhealthy dynamic, and other people may be waiting on the completion of the spreadsheet.

While healthy people may lose only a small amount of functionality due to these situations, the chronically ill, caught in the collateral damage of someone else’s unintended inadaptability, often collapse into a functionality black hole.

But it might just be the chronically ill who are best placed to help solve this inadaptability problem.

There is value here, at the bottom of it all

After four years of barely leaving the house with severe agoraphobia, I finally stepped out into the world, fully on my own, in January 2024. By March, springtime, I was taking part in an acting class. By summer, I was in an improv class, and by the autumn I even did a couple of small shows.

Whenever my agoraphobia and anxiety came up in conversation with class members, most were very empathetic, some shared their own anxiety troubles, a few didn’t understand what agoraphobia and severe anxiety were, and one said it was impressive.

I am grateful for the responses I’ve received, and it feels good to have support. Curiously, however, no one asked me how I did it. How I got up on a stage despite finding it difficult to leave the house on most days.8

I think there might be a misconception about the ability of severe anxiety sufferers to deal with anxiety. Such are the levels of anxiety I experience that they often incapacitate me. But this doesn’t mean I am bad at dealing with anxiety. Quite the opposite. In the same way that practising golf every day for ten years means you are probably going to get pretty good at golf, living with anxiety every day has meant I have become (at least in my opinion) pretty astute at dealing with anxiety. In other words, functional inadaptability forced me to become adaptable in the very areas healthy people don’t need to be – emotional and practical.

With a stress threshold the size of a peanut, any way to lessen the emotional and practical burden suddenly becomes an invaluable skill. Over this last decade, it became vital to learn as many viable techniques as possible to recognise when and how stress and other emotional states arise (and how to navigate other people’s stressful and emotional states). I am constantly checking in with how I feel and making decisions based on that. I have planned and experimented within an inch of my life. Finding and testing any possible practical way that could make my life a little easier.9

Mostly, chronic illness forces inadaptability on us as sufferers, in large swathes of our lives. But it can also stimulate adaptability in areas where the healthy are inadaptable. Therefore, there is some value to be unlocked.

As the clock ticks past the decade mark in my time as a chronically ill person, the thing I desire most is to be valued. With no job, a collapsed social structure, and almost permanent exhaustion, feeling valued would not only help with my recovery,10 I think it would help people in the world of the healthy too.

Most importantly, there aren’t many experiences in life that help with perspective, empathy and desire to aid others as much as chronic illness. It is perhaps the ultimate pyrrhic shortcut to wisdom. It would be a shame for the healthy not to utilise it.


If my meandering musings scratched an itch not too small, consider gifting a ❤️ like, it makes me feel ten feet tall.




1

To the best of my ability I can honestly say, that my heart is regularly sized in every which way.

2

Or, at the very least, milestones rarely feel like causes for celebration.

3

Perhaps the cruellest aspect of chronic illness is the social isolation. The friendships that would help the most are also the things I do not have the energy or resilience to form or maintain.

Over time, friendships slowly degrade and disappear. Part of this is due to lacking the energy to be a supportive friend. Partly because of the misunderstandings and stereotypes surrounding chronic illnesses, making interactions with friends more energy-intensive than they normally would be. Finally, it is partly because during the worst periods of illness, the only thing I wanted to talk about was how rubbish things were in my life. In fact, that was pretty much the only thing I could talk about. As I have said, one’s entire life becomes devoted to just getting better.

4

Or my mum, recognising a very reactive dog and an oblivious owner who is not in charge, will lead our dog off the track.

5

In Substack, I’ve noticed that YouTube embeds tend to leave an “Error 153: Video player configuration error” message, so I’ve put this one up as a normal video in the hope people can see it properly. The YouTube video is by Ethan Steinberg.

6

I’m not going to go into what a useful way to train one’s dog is. I’m not an expert (and my mum is the one who has an intuitive understanding of dog behaviour). Also, the area of dog training is a highly polarised, somewhat wild west of theories and philosophies (which may indeed be part of the problem).

7

For simplicity's sake, I suppose I would define healthy as not needing to go to the doctor (i.e. unhealthy is the objective need to go to a doctor whether one realises it or not, or whether one wants to go or not).

“Healthy” is actually a difficult term to define.

As part of a lived experience workshop I took part in, some academics pointed out that the World Health Organisation has this definition of health:

“Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity”

meaning that no one is healthy, as it describes a degree of perfection that is not possible.

For some reason, the group ended up talking about what God “was”. One member then described God as an unattainable perfection, like a perfect state of wellbeing. To which someone else immediately quipped: “I guess God must then qualify as healthy”.

8

I plan to write more about the various tricks I learned in order to do things with severe anxiety in the future. For now I will leave you with one.

The hardest part of doing something associated with feeling very anxious is simply getting into the room in the first place.

I found that in order to get into the room, it could not be a choice in my head. Because if it was, 100 times out of 100, I would choose not to do it. Of course I would choose not to go into the room, it makes me feel really anxious!

Now, I don’t mean that I somehow forced my way into the room through some sheer force of will or, god forbid, some elaborate consensual kidnapping scheme. Ultimately, it was still a choice.

Rather, for something to become a necessity, the reason to do the thing has to be an obvious one. Like eating food, for instance. Of course we all eat, because if we didn’t, we’d die. So for me, I had to go to improv because if I didn’t, I would not socialise, and if I didn’t socialise, there would be no way back to full health. So, of course I had to go to improv.

When a reason is obvious, I don’t think about it. There is no debate in my head about whether I should or should not do it. I don’t think about whether I should eat food every day, I just do it. The same logic applied to going to the improv.

This does only get you in the room, however. What to do with anxiety when in the room is a whole other matter.

Now that I think about it, this does remind me about a certain objective ridiculousness to having chronic illness sometimes.

When I have mentioned to people outside the improv group that I did improv, one of the most common responses I have received is “Oh, I could never do that!” (usually referencing that it is much too daunting a prospect), unaware that they are talking to someone with a severe anxiety condition.

9

Examples include rigging up my room for optimal sleep, coming up with my own kind of calendar to plan my day, figuring out the best way to meditate for me and my condition, a freezer cooking routine that isn’t too stressful and even a timer on my computer so I am forced to take breaks every 20 minutes. I’ll go into many more in a future article.

10

This applies to other chronically ill people in similar positions to me.

]]>
<![CDATA[Mental illness is attractive]]>https://psychiatricmultiverse.substack.com/p/mental-illness-is-attractivehttps://psychiatricmultiverse.substack.com/p/mental-illness-is-attractiveTue, 02 Sep 2025 18:14:58 GMTI received some unwelcome news a few weeks ago. My dog, who has been a support for me for over a decade, was diagnosed with terminal cancer.

I didn’t take this news well. Almost immediately, I collapsed into a non-functional state of pure, unstoppable sadness. This had two consequences. Firstly, because I could not shut down the annoying scientific part of my mind – even in a state of distress – interesting thoughts about mental illness occurred. Secondly, it gave me the opportunity1 to provide an update about this Substack, which I’ll put in this footnote2

A pull towards dysfunction

Compared with a healthy functioning person, my negative mood states have been “sticky”. By sticky, I don’t mean the depressive “everything feels like walking through mud” type of feeling. I mean an implicit characteristic of the negative mood states themselves: negative mood states have stuck around for a long time.

During the worst period of my illness, I was permanently stuck in several severely negative mood states. Day in, day out. As I have slowly recovered, the “stickiness” of the mood states has reduced. At some point, I managed to achieve a semi-stable euthymic-ish mood state.3 However, even the slightest whiff of stress would knock me right back down into the negative mood states again. This infuriating cycle has continued with a very slow improvement of resilience to stress and reduction of negative mood stickiness.

My recent descent into a negative mood state prompted me to think about the problem I’ve had with this stickiness description. Becoming “unstuck” implies that one cannot return to the stickiness unless an external phenomenon “pushes” one back. In other words, it assumes that stress would be the only factor determining whether I return to a negative mood state. But this has not been my experience.

Even in times of semi-stable mood, I feel a pull, a constant attractive force, towards negative mood states and dysfunction. Like a piece of elastic I can never break free from. All the time, I have to provide some energy to keep me somewhat functional.

When a plane encounters a pothole

A perceived pull towards dysfunction made me think about general relativity. Or rather, it reminded me of my inability to think about it. At best, I have a tenuous grasp of the subject.4 Further, the fundamental problem of trying to communicate how general relativity describes gravity is that it requires the understander to be able to visualise four-dimensional spacetime. Unfortunately, humans did not evolve on a 4D hyperspherical planet.

Space and time (or spacetime) can be thought of as a four-dimensional geometrical structure, a fundamental universe landscape of sorts.5 “Stuff” in the universe interacts with this structure, causing it to bend, which then dictates how stuff moves through it.6 Instead of seeing gravity as a “pull”, it can be (non-)seen as a geometry.7

From this idea of using geometry, or a landscape, to describe a pull, a model of mental functionality started to form in my head.8 To begin with, and I have always wanted to do this, imagine a healthy, spherical human.9

Image: a diagram of healthy functioning

This orbicular person exists on a landscape, simplified to a one-dimensional line on a 2D image, representing their level of functionality. The higher up the line, the more functional they are; the lower, the less functional (not to scale).10

Our isotropic human usually sits stationary, on the plane of healthy functioning. This plane represents a “normal day” for lack of a better description. Able to cook food, maintain hygiene, socialise, work sustainably etc. The use of energy to perform these tasks is represented by a force that causes a clockwise rotation of the sphere, moving them to the right.11 The winds of stress push the equidistant-surfaced person to the left.12

Ideally, our healthy round-like13 person will conserve enough energy14 to exactly match the force applied by the winds of stress. There is a lot of leeway, however. They can maintain relatively routine functioning despite gusts of stressful occurrences. On the other hand, too many stressors or too little conservation of energy will result in the individual falling into the pothole of reduced functioning - the place healthy people end up when they “over-do” it. An extra amount of energy is required, through rest, to return to the plane of healthy functioning.

Painting a picture of dysfunction in mental illness

My guess is that many healthy people go almost their entire lives with the impression that the landscape of functioning is solid. As solid as the ground beneath our feet. Some may see it as a fact of life – that other people have the same functionality landscapes as them.15

Perhaps the scariest part for the first-time sufferer of mental illness is learning that the ground beneath your feet can move. Even with the mildest of tremors, the shock can be quite hard to process. I still remember the first time I suffered a depressive episode at 18. Lying awake at night, distressed by an internal reality I did not realise was possible. If only that young kid knew what was to come, how far the ground could fall.

Today, at my current stage of recovery, the situation looks something like this:

A drawing of a person's face

I sit in a meta-stable state of reduced functionality right on the edge of significantly reduced functionality – what I’m going to call the “Pit of Crappiness”. Since I exist on an incline, I have to constantly provide energy to stay stable, with or without stressors. I am continually being pulled down towards the crappy pit. Dysfunction in mental illness is attractive.

If necessary, I can temporarily provide more energy to climb further up the functioning landscape.16 However, if I am not careful when travelling back down the curve, I can easily roll off into the pit of crappiness. Therefore, I am forever in a state of managing my condition and navigating stressful events.17

As I recover, the landscape beneath me — the meta-stable “lip” I’m on, as well as the pit of crappiness — is slowly shifting upwards towards the plane of healthy functioning.18 The shift upwards is so slow that I don’t notice it happening. On a day-to-day basis, my functionality landscape usually feels quite solid. My aim is to get as close as possible to the healthy spherical human’s functionality landscape.

Unfortunately, the slow morphing of my landscape upwards is not the only way my landscape changes.

If I spend a long enough time in the pit of crappiness, or face a significant stressor,19 I cross a different type of tipping point. The ground beneath me snaps and shifts suddenly, like in an earthquake. And I then find myself plummeting towards what I will call a functionality black hole.

Astronomical black holes

Black holes are the places in the cosmos where the universe accidentally ran into a system error but decided to keep the game going anyway. Born from the gravitational collapse of a dying star,20 black holes are objects with centres of infinite density, otherwise known as gravitational singularities.21

Flinging yourself into a black hole would be the epitome of a poor life choice.22 Even if you paid for the premium supermassive black hole experience, thus avoiding public spaghettification,23 let’s just say there would be no need for a return ticket. Below a certain radius, otherwise known as the event horizon, not even spontaneously turning yourself into light would save you from your inevitable journey to the end of time and the central singularity.24

Despite its name, events do occur beyond the event horizon; we just can’t see any of them. In fact, as you were falling into the supermassive black hole, you wouldn’t even notice your last mistake. Unfortunately, the entire universe would be able to see you about to commit your blunder for a very long time. To a faraway observer, you would move slower and slower, looking redder and redder, as extreme time dilation began to take effect, before you finally froze right on the edge of the event horizon.25

Functionality black holes

Falling into a functionality black hole looks a lot like falling into an astronomical one. To an outside observer, I look as if I am doing less and less, before I seem to stop functioning completely. On many occasions, outside observers have suggested I try something or other to rectify the situation. To them, it appears that escape is still possible. They don’t realise I have already crossed over the functionality (event) horizon.

In my head, I am fighting to reduce the speed of descent as much as possible. As an explosion of emotional and mental pain bursts forward from the depths of my mind, I must spend all my energy rotating my little spherical being as much as possible. I resort to the fundamentals: eating, sleeping and distraction.26

A drawing of a curve

Below the functionality horizon, the gradient is too steep to escape. It does not matter how much energy I conserve, how many cognitive tactics I implement, or self-care I try to carry out. I will stay dysfunctional. It feels as if all paths lead to a singularity of infinite dysfunction. It feels like I will stay in a dysfunctional state for the rest of my life. The certainty of this feeling is difficult to describe.

The only way out of a functionality black hole is to trust that the ground beneath my feet will eventually change back to the pit of crappiness.27

Sometimes, however, time doesn’t heal all. Not on its own. For a couple of years, I was stuck deep down in a functionality black hole. I had to find treatments, in my case medications, that would force my functionality geometry to change. I was lucky to find treatments that helped.

Many aren’t so lucky. For example, some people with chronic fatigue syndrome (CFS) and/or myalgic encephalomyelitis (ME) are tragically stuck beneath the functionality horizon, unable to escape the functionality black hole, with no treatments available for the foreseeable future. Many worry they will spend the rest of their lives never seeing the light of functionality again.

Ultimately, I don’t know

I don’t really know how to finish this article. I first thought about trying a description of how I feel about my dog. But one becomes very superstitious in this particular scenario. Every wording I tried felt like I was tempting fate. Then I thought about attempting to write something deep about black holes relating to life and stuff. But it was vague and ended up as one of those passages that feels like it is describing something profound but, in reality, says nothing at all.

Perhaps then, it is quite fitting to end with a huge “I dunno” shrug. We don’t have an experimentally proven quantum theory of gravity, which means, ultimately, no one really knows what happens past the event horizon of a black hole.

In the same vein, while I’ve tried to come up with rudimentary, poorly drawn diagrams of what I think might be going on in my head while in a period of low functionality, ultimately, I don’t know if it resembles anything close to what is actually going on.

A sad end to an article starting with sad news.


Thanks for reading The Psychiatric Multiverse! Subscribe for free to receive new posts and support my work.


1

“Opportunity” is not the right word, but I’m not sure the English language has a word for “something bad happened which allowed me the chance to do something”.

2

After some experimentation in the early part of this Substack, I’ve settled on a schedule of releasing articles in chunks or “seasons”. This way, I am not locked into a set schedule, so I won’t feel internal pressure to release regular articles all the time. But it also means that when I do release articles, they will follow a regular weekly schedule.

The first season will be about mistakes and reporting systems, and will run to around 10 articles, which I’m close to finishing. Given the news about my dog, the motivation to finish writing about the ins and outs of incident reporting systems has, understandably, subsided for the moment. It will come back at some point. But I cannot say when.

3

It was more a negative-mood state “plus”, rather than anything close to a proper euthymic mood.

4

At school, general relativity was one of the subjects I was most excited about learning. But I never got to study general relativity at university - it wasn’t offered as a course. This may have been a blessing in disguise.

I did study special relativity (the relationship between space and time, i.e. spacetime), and while the equations weren’t too bad to get my head around, their consequences provided hours upon hours of mind-bending confusion.

The simple inclusion of gravity into special relativity, thus producing general relativity, makes things anything but simple. The mathematics gets kicked up several notches, and the consequences become geometric nightmares. Basically, what I am trying to say is that I don’t understand general relativity.

In my experience, understanding a theory in physics requires a decent grasp of both the mathematics and the conceptual consequences. You can understand the mathematics really well, but not necessarily be able to explain how it relates to the “real world”. You can understand the concept really well, but without understanding the equations, you won’t be able to apply this knowledge to different scenarios.

5

Often, the term “fabric” is used. And while I understand it to be quite a useful way to communicate the distortion of spacetime, I’ve always felt (pun unintended) that it implies spacetime is some kind of actual substance, like the aether, instead of a geometry. Further, I think people automatically picture the sheet stretched over a large ring with some marbles on it, when more useful and cooler visualisations are available (see this video by ScienceClic).

6

I used “stuff” because I am not entirely sure what exact stuff causes spacetime to bend, and to what degree. It all hinges on understanding the Einstein field equations, which I currently do not understand. The minutephysics YouTube video on general relativity gives as succinct a summary as I can find (and is an excellent video in my opinion):

General relativity is actually the idea that all the stuff in spacetime: matter, radiation, pressure, energy, momentum, particles and so on… all that together with spacetime itself, obeys a set of equations called the Einstein field equations. These equations look simple if you write them in a clever way, but they are actually a very complicated set of ten non-linear differential equations that you have to solve in order to make predictions about how spacetime will curve and how the stuff in it will move depending on how spacetime is curving and how the stuff is moving.

7

While I do not have the requisite knowledge/understanding to describe any further what general relativity actually is, I can link you to people who do:

  1. Perhaps the most concise explanation is the following minutephysics YouTube video:
    General Relativity Explained in 7 levels of difficulty

  2. I think the river model of spacetime, explained by Mathew Francis, provides the best way of visualising the unvisualisable. This YouTube video by Alessandro Roussel provides a pretty spectacular version of it.

  3. For a more in-depth explanation, I found the article Gravity as a Geometry by Mikael Davidsson to be useful (behind paywall).

8

Just in case you happen to accidentally be on a different page than me, I’m not intending this to be any kind of serious description of the reality of mental illness. This model is entirely figurative. There will be, most certainly, glaring flaws and inconsistencies.

It is a completely unserious, conceptual model that I came up with in the hope of helping others understand a little better what my experience of functionality in mental illness (and perhaps other people’s) is like.

9

This is inspired by the spherical cow joke.

10

A few points:

  1. I do not have the requisite skills nor the patience/energy to produce anything other than a hand-drawn thing-that-I-hope-is-comprehensible.

  2. I realised having a landscape describing functionality was easier and hopefully more generalisable to other conditions than having it represent the complexity of various negative mood states.

  3. I have absolutely no idea what a “functionality scale” would be. All I know is that what I have drawn won’t fit any of them.

11

Represented by an arrow in the above drawing. Technically, this force produces a torque - but the arrow (vector) for torque points along the axis of rotation, so in this case into the page. I spent about two hours trying to remember why it is represented this way (it isn’t taught very well). I vaguely remember cross products, the right-hand rule and a lot of headaches.
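The cross-product convention can be checked in a few lines. This is my own illustrative sketch (not from the article): for a lever arm along x and a force along y, the torque vector τ = r × F points along z, i.e. out of the page of a 2D drawing.

```python
# Illustrative sketch (my addition): torque as a cross product, tau = r x F.
# With r and F both in the plane of the page, the torque vector points
# perpendicular to the page, along the rotation axis.

def cross(a, b):
    """3D cross product of two vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (1.0, 0.0, 0.0)  # lever arm pointing along +x (in the page)
F = (0.0, 1.0, 0.0)  # force pointing along +y (in the page)

tau = cross(r, F)
print(tau)  # (0.0, 0.0, 1.0): torque points out of the page, along +z
```

Swapping the order (F × r) flips the sign, which is the right-hand rule doing its thing.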

12

This includes relationship problems, work problems (generally any external problem), as well as internal mechanisms (like stress hormones) that are produced due to things like overwork, lack of food, etc.

13

I ran out of words for “spherical”

14

Through food, meditation, good sleep, etc.

15

I’ve noted on several occasions that well-meaning people, who have never experienced mental illness, will, in the case of a problem at work and subsequent loss of functioning, go “okay, have a break and come back when you feel better”. Rather than listening and discussing things that may help prevent the loss of functioning in the future.

16

The more functional I become, generally, the stronger the winds of stress are. More functionality means a greater likelihood of encountering stressors. As an example, I can socialise more when I am more functional. However, because my condition is biased towards negative emotional states, this results in a greater frequency of events that trigger negative emotional states. Resulting in loss of functionality.

17

It is sort of like trying to fly a commercial plane, but being unable to go higher than 100 ft off the ground. I can still get to places, but I have to navigate mountains, skyscrapers and tall trees. Whereas if I were healthy, I would be able to fly up into the stratosphere.

18

The pit of crappiness is also reducing and flattening out a bit. The lows are slowly becoming not quite as deep and are easier to get out of as time progresses.

19

Something like my dog getting diagnosed with cancer, for instance.

20

Technically called stellar black holes. The formation of supermassive black holes is an area of active research. Essentially, from what I understand, we aren’t sure how supermassive black holes grew so quickly in the early universe.

21

I think the distinction between mass and density is important. Different black holes can have different masses (they can have varying amounts of “stuff” inside them). But all black holes have an infinite density (mass per unit volume) at their centre — a singularity. This means you can quite happily orbit a black hole as long as you are far enough away from the event horizon.
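To put a number on how mass sets the size of a black hole, here is a rough sketch of my own (not from the article), using the standard Schwarzschild radius formula for a non-rotating black hole, r_s = 2GM/c²:

```python
# Rough sketch (my addition): the event-horizon radius grows linearly
# with mass, while the central density is (classically) infinite
# regardless of how much "stuff" the black hole contains.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius in metres for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_SUN) / 1000)        # ~2.95 km for one solar mass
print(schwarzschild_radius(4e6 * M_SUN) / 1000)  # ~1.2e7 km for a Sagittarius A*-like mass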

Caveat: because we do not have an experimentally confirmed quantum theory of gravity, it is impossible to say what actually goes on at the centre of a black hole. During my time as a physicist, if an infinity popped out at the end of my calculation, this generally meant something went wrong. So there is some reason to believe there isn’t a proper singularity at the centre of a black hole (though there might be something very close to one).

In computer science, if an infinity pops out at the end of running a process, something has gone very, very wrong (I definitely did not accidentally send an infinite number of simulation jobs to a supercomputer).

22

Fortunately, several people have simulated this experience, with varying degrees of realism.

  1. Andrew Hamilton has an excellent site with many different simulations, and he also explains the various properties of black holes.

  2. NASA released a semi-realistic 3D video of what it would look like to fall into a black hole.

  3. The most impressive-looking simulation, in my opinion, comes from Science Clic, who has also produced a separate video explaining what would happen.

23

In small (stellar) black holes, the gradient of spacetime curvature towards the singularity increases rapidly. As a result, the stretching and squeezing of your body into a single strand of spaghetti would occur outside the event horizon, and would therefore be observable to anyone with a powerful enough telescope.

In a supermassive black hole, spaghettification would still occur, but it would be shielded from view behind the event horizon.
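The stellar-versus-supermassive difference can be made concrete with a back-of-envelope Newtonian estimate of my own (an approximation, not from the article): the tidal acceleration across a body of height h at radius r is roughly 2GMh/r³, which at the horizon r_s = 2GM/c² scales as 1/M².

```python
# Back-of-envelope sketch (my addition, Newtonian approximation):
# tidal acceleration across a 2 m body evaluated at the event horizon.
# Because it scales as 1/M^2, it is lethal at a stellar horizon but
# negligible at a supermassive one.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def tidal_at_horizon(mass_kg, height_m=2.0):
    r_s = 2 * G * mass_kg / c**2           # event-horizon radius
    return 2 * G * mass_kg * height_m / r_s**3

print(tidal_at_horizon(10 * M_SUN))   # ~2e8 m/s^2: spaghettified well before the horizon
print(tidal_at_horizon(4e6 * M_SUN))  # ~1e-3 m/s^2: you'd cross the horizon without noticing
```

So for the "premium supermassive experience", the horizon is crossed in comfort; the spaghetti course comes later, safely out of sight.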

24

Below the event horizon, all future paths lead into the singularity. It doesn’t matter where you try to travel; everything, including light, ends up at the centre of a black hole.

If you found yourself within the event horizon, technically, out of desperation, you could delay the inevitable extension and compression of your atoms into a neat line through some powerful rocket engines. But this would be an ultimately futile endeavour.

The closest thing I can think of as to what this sort of means as an inescapable experience in a human sense is to imagine the entire universe contracting faster than the speed of light back into a singularity. i.e. the opposite of what has happened so far (back into a big crunch) and try to escape.

No matter where we try to travel in a spaceship, we end up at the singularity.

And no, I am not suggesting the universe is in a black hole.

25

There are two videos which I found helped to explain what would happen to different observers as a person fell into a black hole: an interview with Brian Cox by Cleo Abram, and a video by Fermilab’s Don Lincoln.
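The "frozen and redshifted at the horizon" picture in the main text follows from the Schwarzschild time-dilation factor. A small sketch of my own (not from the article): a clock hovering at radius r runs slow relative to a faraway clock by √(1 − r_s/r), which goes to zero as r approaches the horizon.

```python
# Illustrative sketch (my addition): gravitational time dilation for a
# static observer hovering at r = x * r_s outside a Schwarzschild black
# hole. As x -> 1 the factor -> 0, which is why the infaller appears to
# freeze and redshift at the horizon.
import math

def dilation_factor(r_over_rs):
    """Local clock rate relative to a far-away clock, at r = r_over_rs * r_s."""
    return math.sqrt(1 - 1 / r_over_rs)

for x in (2.0, 1.1, 1.01, 1.001):
    print(x, dilation_factor(x))
# The factor shrinks towards 0 as the hovering radius approaches the horizon.
```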

26

In the functionality black hole, mindfulness is worse than useless. In my experience, it is often lost on those advocating for mindfulness that it requires energy to do. Mindful meditation, for instance, requires energy to initiate. To sit down and start focusing on the breath requires mental energy.

In my experience of a functionality black hole, not only does expending energy on mindfulness feel painful, but the process of reducing outside stimulus allows the unending inner emotional stimuli to flow even faster into my awareness. In other words, it makes things much worse.

Only once outside the functionality black hole, and in moderation, has mindfulness provided a net benefit.

27

On the odd occasion, I have been asked whether I am able to do anything to help me escape. My response is always “just to wait” or “sleep”. Naturally, people don’t like the idea of powerlessness or hopelessness. I think we all want to believe that, given every possible situation, there is always something we can do (or could’ve done). We always want to feel in control.

]]>
<![CDATA[A litany of medical mistakes]]>https://psychiatricmultiverse.substack.com/p/a-litany-of-medical-mistakeshttps://psychiatricmultiverse.substack.com/p/a-litany-of-medical-mistakesTue, 03 Jun 2025 16:39:26 GMT

Source: NightCafé

Many future Psyverses will be based around the theme of medical mistakes. Psyverse #2 will be specifically about how independent national voluntary reporting systems, used in the aviation industry, could be applied to the psychiatric system. Before we get there, however, I thought I would tell you the story of my severe reaction to an antidepressant. And the many mistakes that were made.1


Mistake #1 may have been the advice from my GP to take an antidepressant. It was late 2015, a year into my PhD. An unfortunate series of events had sprouted blackened vines of depression, dragging me deep into dysfunction. In a desperate bid to escape, I tried meditation, eating healthily, exercising, quitting drinking, and university counselling. In my head, I thought taking an antidepressant was the last remaining option. I did not know that other forms of counselling existed. I didn’t know psychotherapy could go on for longer than six sessions.2

I was prescribed the SSRI3 Sertraline at the starting dose of 50 mg once daily. On the first day, I experienced the typical side effects: slight worsening of depression, a little nausea and a lack of appetite. During the second and third days on Sertraline, numbness replaced depression. It was strong. So strong that stressful events like going to a restaurant with my mum and disabled sister, with the accompanying glares, ceased to influence me. I thought this was brilliant (well, a fuzzy version of brilliant). I could return to my normal life.

After the fourth 50 mg dose, I felt an “energy” build inside my head while washing dishes. Overcome by a strong urge to find safety, I dropped a half-washed dish into the sink and rushed to the sofa. A few seconds later, I snapped into the foetal position.

A violent tremor spread through my limbs, torso, and head, growing stronger with each moment. A viscous, lava-like ooze expanded outwards from the centre of my brain. It felt physical, like a real substance swallowing neurons. Just before it captured my outermost brain matter, I closed my eyes and waited to die.4

My first thought when I opened my eyes was, “Oh good, I’m not dead!” Shock, as I found out, affects different people in different ways. My version was not running around like a headless chicken, or sitting stationary in frozen silence. I moved mechanically in eerie calm. The only thoughts that occupied my mind during the next few hours were related to figuring out what on earth had just happened.

In conversation with my mum on the phone, we determined that I had just experienced a panic attack. We were wrong. Before that day, I had never experienced anything close to a clinical anxiety symptom. Therefore, I had no frame of reference.5

The next mysterious definitely-not-a-panic-attack jolted my eyes open during deep sleep. Due to the unnerving disorientation of waking up in the middle of one of these realistic nightmares, I cannot say for certain if it was worse than the first attack. It was at least just as bad.

My legs were shaking so violently this time that I became physically unable to walk. To pass the time, I grabbed my phone from the bedside table and rang the NHS 111 service.6

The person on the other end of the line instructed me to go through the checks for a stroke. Arms in the air, etcetera. When I passed these checks, they advised me to go back to sleep and call the GP in the morning. Then they hung up. This was Mistake #2.

I looked down at my still-shaking pair of legs. “Ah”, I thought, “probably should’ve led with the whole non-functioning legs thing”.7

There was no way in hell I was going back to sleep. After waiting fifteen minutes to gain control of my legs again, I gingerly hobbled to my housemate’s door, woke him up, and asked if he could give me a lift to the hospital. He did, no questions asked.

By this point, I had figured out that I was experiencing something worse than a panic attack. But this meant I didn’t have the language to describe what had happened to me. When I reached the front desk of the hospital emergency department at around 1 am, I picked the closest condition I could think of: “I think I’ve had a seizure…”

The receptionist asked me to take a seat in the waiting area as the nurses were all busy. I turned to see around fifty empty chairs. I didn’t give it a second thought. In an underfunded and understaffed NHS, I believed the receptionist.

“If this was a seizure, you would have choked on your tongue,” a nurse opined ten minutes later. Mistake #3 was a doozy. Part one was pre-judging me as a patient before I gave her a proper patient history. Part two was making the egregious error of assuming all seizures were tonic-clonic.8 I didn’t know at the time, but there are many types of seizures that present in many different ways.9 For instance, in some types of seizures, like myoclonic seizures, sufferers usually don’t pass out.10

The nurse proceeded to make Mistake #4. She told me that getting on to antidepressants was tough; I needed to get through the initial side effects before the medication would start to work.

I saw a resident doctor next who asked what had happened. I told them a shortened story, along with the symptoms I suffered, to the best of my ability. The doctor started a typical series of tests. Most came back pretty normal.11 Then came the knee reflex test.12

As a naïve 23 year old, I had no idea what a “normal” reflex was supposed to look like. So, when I nearly kicked the resident doctor in the face after a light tap on the knee, I commented like an enthused child “Wow, it’s crazy how strong reflexes are!” then produced a dumb smile. She did not smile back.

Instead, I saw a deeply concerned glare directed at my knee. My face dropped immediately. After testing other reflexes on my body, producing similar limb-propelling results,13 the resident – the former joviality in their voice replaced by stern seriousness – stated that they needed to talk to the attending doctor and quickly exited the room.

The attending made Mistake #5 by not quickly rechecking my super-reflexes. Mistake #6 soon followed in a mostly one-sided conversation. “It is most likely just anxiety,” the attending remarked (paraphrasing). If the attending had taken a patient history, they might have realised that before taking Sertraline, I had never experienced a clinical anxiety symptom. At the very least, by allowing me to speak, they might have noticed my strange calmness.

I was sent home with a second occurrence of Mistake #4 – to keep taking the Sertraline. As this was a separate, independent incident, I’m calling this Mistake #7.

All the clinicians I saw that night made Mistake #8 – a missed symptom. A day or so later, my mum noticed my eyes juddering side to side. Ocular clonus.14 Her brother had nystagmus15 growing up, which is why she spotted it.

Within a couple of hours, seven mistakes were made by four different clinicians. Quite the astonishing statistic. Each failed to recognise a potentially life-threatening syndrome. Unfortunately, this was not the end of the mistakes.


When I arrived home, I did not know whether I should take the next Sertraline tablet. My feeling was no, given the severe reaction, and I had already decided I was going to halve the next dose. But two clinical professionals had advised I continue. I wasn’t a doctor, and I knew my brain was in no state to make such an important decision.

I decided to call the university counselling service. I wanted to talk with my counsellor - someone I thought would be neutral. The administrator picked up the phone. They explained that it was not university policy for students to talk with counsellors outside of booked appointments. Through tears, I pleaded with the administrator to please give me two minutes, I could wait for a few hours, just two minutes. “No” was the response. But they could email me an e-leaflet containing strategies to deal with anxiety. “Okay”, I said dejectedly, and hung up. The first tip of the e-leaflet was to do a relaxing activity, like reading a book. I deleted the file.

Mistake #9 was soul-crushing. In my hour of need, the university counselling system was absent. Those two minutes could have been more helpful than the entire six-hour appointment structure.16

In the end, after talking with my mum, I decided to come off the Sertraline. When I rang my GP to inform him of my decision, I was asked three or four more times, “Are you sure you don’t want to stay on the antidepressants?” Mistake #4 a third isolated time – which makes it Mistake #10. At every point throughout this story, I felt like clinicians were willing me to stay on the SSRI.

If I had listened to any of them, there is a possibility it would have resulted in my death. The signs and symptoms pointed to a serotonin syndrome – albeit a strange and rare case.17 Serotonin syndrome occurs when there is too much serotonin in the brain. It can be deadly. When John Mayer sings “I’ll be dreamin’ of the next time we can go into another serotonin overflow” in the song “Love on the Weekend”, he is either singing about a suicide pact or doesn't quite understand the ins and outs of brain chemistry.

I was experiencing mild to moderate symptoms, but had I continued to take the Sertraline at the same dose, one can imagine things becoming serious quickly.18

At the end of our conversation, my GP rounded off one of the most stressful and horrifying experiences of my life with Mistake #11. He confidently told me that once I came off the SSRI, the symptoms I was experiencing would disappear. I was relieved. For a few days.

The symptoms, while abating a little bit, did not disappear. In fact, some new ones arrived. For instance, severe panic attacks and hypochondria, and, well, it is easier to copy in the list I made at the time with the symptoms I was left with. I had experienced none of the following before taking the Sertraline19 :

  • Nausea

  • Palpitations

  • Pupil dilation

  • Dizziness

  • Confusion

  • Violent Shivering

  • Sweating without cause

  • panic attacks (approx. 10 per day)

  • occasional very severe Panic Attacks20

  • difficulty concentrating

  • anxiety attacks (very frequent and spread throughout the day)

  • loss of balance

  • Twitches/facial tics

  • “Brain Zaps”

  • Numbness (extremities mainly)

  • Hypersensitivity

  • Hypochondria

  • Insomnia

  • Headaches of varying types (Dull, sharp and short, tension)

  • Sharp pains all around the body (randomly come and go - they would trigger brain zaps)

  • Diarrhoea

  • Visual “fireworks”

  • Hyper-reflexes

  • Raised blood pressure

  • Shortness of breath

  • Fatigue

  • Doom feeling and dropping sensation (associated with balance and anxiety attacks)

  • Weakness in legs

  • Restlessness

  • Decreased appetite

  • Emotionally and mentally all over the place

  • Depression

  • Generalised anxiety

And so began a six-year battle to find treatment.


During those six years, the mistakes did not end. It took six months for the serotonin syndrome to finally be diagnosed. I was prescribed Venlafaxine, an SNRI, ignoring a warning in my medical record that it should not be prescribed.21 In a care plan, my fourth psychiatrist rewrote my history based on a fifteen-minute phone call.22 I was given ten official diagnoses – only three of those were related to my condition. In Psyverse #0, I mentioned perhaps the biggest mistake of all, not diagnosing me with generalised anxiety disorder despite having clear symptoms. The most common mistakes were logistical and administrative errors. My lithium treatment was delayed by 42 days, mostly through administrative mess-ups.23

My story is not unique. In medication errors alone,24 it has been reported that an estimated 237 million occur every year in the UK.25 Most of these, around 75%, are estimated to cause no harm to the patient - many are caught before they even reach the patient. An estimated 2% have the potential to cause harm,26 with approximately 22,303 contributing to deaths.27
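A quick arithmetic check of my own on the figures as quoted above (taking the stated percentages at face value) shows how the numbers relate:

```python
# Quick arithmetic check (my addition) on the quoted UK medication-error
# figures, using the stated percentages as given in the text.

total_errors = 237_000_000            # estimated medication errors per year in the UK
no_harm = 0.75 * total_errors         # ~75% estimated to cause no harm
potential_harm = 0.02 * total_errors  # ~2% with the potential to cause harm
deaths = 22_303                       # errors estimated to contribute to deaths

print(f"{no_harm:,.0f} errors causing no harm")              # ~177,750,000
print(f"{potential_harm:,.0f} potentially harmful errors")   # ~4,740,000
print(f"{deaths / total_errors:.5%} contribute to a death")
```

In other words, even with roughly one in fifty errors carrying potential for harm, that still amounts to millions of potentially harmful errors a year.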

The number of patients who die each year from medical mistakes is a controversial subject. It has been significantly overestimated in the past. In a 2013 manuscript, it was claimed that 440,000 deaths due to preventable adverse events occur annually in the United States. This was followed by a 2016 commentary which claimed 251,454 deaths by medical error per year, making it the third leading cause of death in the United States. The 2016 commentary was widely reported in news articles and widely referenced in the academic literature. Unfortunately, both of these statistics were unfounded.28 The actual figure is probably around 25,000 deaths by medical error per year in the US.29

But from my experience, focusing on mistakes causing direct harm misses the point. A much greater problem is the lack of trust created by “non-harmful” mistakes. Every time a clinician did not know what medication I was on, an administrator did not pass on the correct information, or my medication was prescribed with incorrect instructions, trust was lost. I became paranoid not because of my anxiety condition – paranoia was not there at the beginning – but because of the additive nature of the small mistakes I kept experiencing. I knew that one of these mistakes could cause me harm.30

Loss of trust between patient and practitioner means a loss of communication. I started to withhold information and accentuate certain symptoms to get the treatment I needed. To be clear, I didn’t start out this way. I started out as honest as I could possibly be. But when the honest information I provided led to misdiagnoses, misunderstandings and mismanagement, acts by clinicians became a threat to my health. The mistakes led to the expectation of a lack of competency.

As I will explore, there are many reasons for these mistakes occurring. There is no simple cause. In fact, health services have improved patient safety dramatically over the last twenty years. Nevertheless, the medical field is still playing catch-up. We are not near the end of the road yet.



1

Including mistakes by me…

2

I have thought that maybe if I had the chance to talk more in depth about my problems, perhaps I wouldn’t have needed the antidepressant, but this is a counterfactual. I don’t know what would have happened. I was very depressed. Taking an antidepressant seemed to be the logical option. No one else had any other suggestions.

3

Selective Serotonin Reuptake Inhibitor. They block the reuptake of serotonin by a presynaptic neuron, resulting in more serotonin in the space between neurons (synaptic cleft) and therefore increasing the likelihood of serotonin activating receptors on the postsynaptic neuron.

4

I want to make clear at this point that I am not against antidepressants, far from it. They help a lot of people. I believe it is important to consider all the risks of antidepressants, from common to rare. I don’t, however, want to leave readers considering antidepressants with the impression that what happened to me is common. I was extremely unfortunate.

5

In the days following my severe reaction to Sertraline, I developed severe panic attacks. So, with the benefit of hindsight, I can confidently say the reaction was magnitudes worse than a panic attack.

6

It is an emergency call service like 999 (or 911 in the US), but for less severe or less time-sensitive events, to check whether something is an emergency or not.

7

On reflection, I did find it a bit strange that none of the tests involved the lower half of the body. I suppose the only thing the 111 service thought was worthy of an emergency was a stroke. I guess the other life-threatening conditions just aren’t important enough!

8

Previously known as grand mal seizures.

9

See this on types of seizures.

It may also interest you to learn what seizures are. I think many might have the impression that it is neurons firing randomly. But it is the opposite. From Grace Lindsay’s Book Models of the Mind:

What are neurons doing to create these strong signals during a seizure? They’re working together. Like a well-trained military formation, they march in lockstep: firing in unison then remaining silent before firing again. The result is a repeated, synchronous burst of activity that drives the EEG signal up and down over and over again. In this way, a seizure is the opposite of randomness  – it is perfect order and predictability.

I always allow myself a little chuckle when I hear the myth that we only use 10 percent of our brain. Using 100 percent of our brain (whatever that means) is not genius-level functionality, it is a seizure.

10

Myoclonic seizures cause a quick, uncontrollable muscle movement with no change in awareness. See here.

11

My blood pressure and heart rate were a tad high, and I had a slight temperature.

12

Note: A bit of a spoiler here, but unfortunately, despite the resident doctor being by far the most competent, they technically made Mistake #4.5. In the Hunter Serotonin Toxicity Criteria, inducible clonus is checked before a reflex test. I was never checked for clonus.

The reason this is only #4.5 and not #5 is because in order to carry out the Hunter Serotonin Toxicity criteria, you first have to suspect Serotonin Syndrome. At this stage, I’m not sure the resident doctor suspected this yet.

It should have been checked after the results of the knee reflex test, but this would’ve required the resident to challenge the attending. Easier said than done. (I plan to talk about culture in a later Psyverse.)

13

This symptom is called hyperreflexia.

14

Ocular Flutter & Opsoclonus are terms I see too. I’m not entirely sure if they are the same thing, or if each term describes something slightly different. I do know that the terms describe a different collection of vectors for saccades. If a medical practitioner could help me out here, I would be grateful!

15

Nystagmus and Ocular Clonus are different. See this video for an explanation.

16

If you would forgive my dark humour as a result of this episode, I sometimes quip that university counselling, and other short-form counselling, are “six sessions and a funeral”.

I believe there is not enough time to learn the skillsets needed to cope with mental health problems - not even CBT. I’m struggling to think of any skill in which you can achieve a reasonable degree of competence in six one-hour sessions. You cannot speedrun counselling.

I believe the university counselling six-session structure can be counterproductive, as they were in my case. They produced a false sense of security.

(To my younger readers, the dark joke above is based on the film title “Four Weddings and a Funeral”.)

17

Serotonin syndrome is not commonly thought to occur at such low doses of an SSRI.

18

I am mindful I might be scaring people into not wanting to take SSRIs. Which isn’t what I want. There was something different about my brain compared to the vast majority of other people. Unfortunately, I don’t know what that was. Perhaps I had a weirdly high or weirdly low number of serotonin receptors (and something went wrong with up/downregulation), or one part of my brain wiring resulted in a traffic jam of serotonin, which then cascaded to the rest of my brain. Perhaps I had too few monoamine oxidase enzymes to break down serotonin.

Ultimately, I’ll probably never know. Nevertheless, it is important for people (clinicians and patients) taking or prescribing SSRIs to know the signs and symptoms of Serotonin Syndrome.

Note that many webpages on serotonin syndrome will say something like this (from Mayo Clinic):

Although it's possible that taking just one drug that increases serotonin levels can cause serotonin syndrome in some people, this condition occurs most often when people combine certain medications.

Which may not be accurate. According to a 2024 review by Simon, Torrico and Keenaghan:

[S]erotonin syndrome reports to The United States Food and Drug Administration Adverse Event Reporting System revealed that nearly half of serotonin syndrome cases involved using a single serotonergic agent. [citing Culbertson et al. and Scotton et al.]

19

Apart from depression and some of the milder symptoms, like headaches, which I had experienced before - but only acutely. They were now chronic.

20

These are copied from old notes. I fell into a habit of capitalising “Panic Attacks”, because in common discourse, panic attacks have come to mean “anxiety attacks”, which are much milder in nature. I capitalised the term to emphasise that mine were not the everyday variety.

21

At the dose I was taking, it was essentially a weak SSRI, slowly but surely worsening my condition over the three years I was on it. I never noticed for three reasons:

1. The decline was exceptionally slow

2. I put a lot of work into making cognitive adaptations (through counselling, meditation, etc.), which gave the illusion of improvement. Improved functioning through better management of symptoms does not equal treatment. I realised this too late.

3. I tried to come off a couple of times; however, my condition would worsen rapidly once I started. I mistook this to mean the medication was doing something when, in fact, it was the start of withdrawal symptoms (when I finally came off the medication, the withdrawal symptoms were very bad).

22

The call was about increasing the dose of lithium (I was on the lowest dose at the time).

But, to my horror, it was used as an assessment. I was not told this. The care plan was a surprise through my letterbox.

My psychiatrist exaggerated and fabricated parts of my story (for instance, he insinuated I was drinking heavily at university when this wasn’t true). Some parts were factually incorrect - he said I had taken olanzapine when I hadn’t. I was mischaracterised too - he said I was relying too heavily on medication and not trying hard enough at counselling (despite my having had over 250 hours of counselling, of many different modalities, over the previous four years). Medication was the only option left.

23

I was in a lot of pain, and this prevented me from trialling other treatments.

24

According to the NHS Resolution page, a medication error is

an error in the process of prescribing, preparing, dispensing, and administering, monitoring or providing advice on medicines. Medication errors can occur at many steps in patient care, from ordering the medication to the time when the patient is administered the drug.

25

This is based on a 2020 study. My gut instinct tells me the number is too high.

It is important to acknowledge that there is significant uncertainty, primarily due to a lack of data. From the limitations section of the study:

Source studies were generally conducted in small numbers of English centres…. Estimates of the total number of errors represent the sum total of errors at each stage rather than the errors that actually reach patients.

This study only considers medication errors under the responsibility of healthcare professionals and care staff, without including errors in administration and monitoring by patients and their caregivers…. We had to assume that the number of items prescribed in primary care equated to the number dispensed, which will have led to an underestimate of prescribed items, and any estimates of associated errors […].

[W]e were not able to make direct links between errors and harm, or what proportion of errors occurring at different stages of the medicines use process reached patients, and what proportion of those errors reaching patients caused actual harm. Therefore, the estimates of error prevalence are generated from completely separate data from the data used to generate estimates of harm. We have had to assume that the errors we have estimated to occur will lead to the burden that we have estimated will occur […].

A major, necessary, assumption in the estimation of the burden was that definitely avoidable ADEs constitute harm from errors. Estimated burden only included short-term costs and patient outcomes, as we had no data on burden of errors managed in care homes, and therefore it is likely to be an underestimate. Some key source studies from which the burden of errors was estimated were at least 10 years old, or from non-UK countries in scenario analyses.

It is also important to note that (I think) repetitions (like a typo in a repeat prescription) and minor errors are included in the estimate, too.

For instance, for my repeat Bupropion prescription, the pharmacist mistook the XL version of Bupropion for the SR version. The XL version is supposed to be taken once per day, with SR taken twice per day. So my instructions were to take the XL tablets twice per day, spaced eight hours apart, instead of two tablets taken together once per day (2 × 150 mg). This was never corrected.

26

Which is still over 4 million medication errors.

27

This number is probably too high, considering it would account for around 10% of hospital deaths. Based on preventable death percentages (see also here), the number is more likely to be around 3 to 5 thousand.

28

[1] K. G. Shojania and M. Dixon-Woods, ‘Estimating deaths due to medical error: the ongoing controversy and why it matters: Table 1’, BMJ Qual Saf, p. bmjqs-2016-006144, Oct. 2016, doi: 10.1136/bmjqs-2016-006144.

[2] B. L. Mazer and C. Nabhan, ‘Strengthening the Medical Error “Meme Pool”’, J GEN INTERN MED, vol. 34, no. 10, pp. 2264–2267, Oct. 2019, doi: 10.1007/s11606-019-05156-7.

29

Based on the approximate 3 - 5% preventable death error rate from:

[3] H. Hogan, R. Zipfel, J. Neuburger, A. Hutchings, A. Darzi, and N. Black, ‘Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis’, BMJ, vol. 351, p. h3239, Jul. 2015, doi: 10.1136/bmj.h3239.

[4] T. Rogne et al., ‘Rate of avoidable deaths in a Norwegian hospital trust as judged by retrospective chart review’, BMJ Qual Saf, vol. 28, no. 1, pp. 49–55, Jan. 2019, doi: 10.1136/bmjqs-2018-008053.

Though it is important to note that determining the exact statistics is quite hard. What constitutes a preventable death due to medical error? There will usually be multiple factors, and concluding that medical error was the most prominent one is not straightforward. Especially with hindsight bias.

30

I am not alone in this conclusion. See:

[5] M. Schlesinger and R. Grob, ‘When Mistakes Multiply: How Inadequate Responses to Medical Mishaps Erode Trust in American Medicine’, Hastings Center Report, vol. 53, no. S2, pp. S22–S32, 2023, doi: 10.1002/hast.1520.

(Also, regarding the paranoia not being there in the beginning, you can check my list of symptoms above.)

]]>
<![CDATA[Feedback]]>https://psychiatricmultiverse.substack.com/p/feedbackhttps://psychiatricmultiverse.substack.com/p/feedbackThu, 01 May 2025 21:34:43 GMT

Perhaps the most crushing thing about living with severe mental illness is the social isolation. Before things became serious, I had a social network. Among its many benefits, I could bounce ideas off friends and colleagues. I could get feedback on anything I wrote.

I fought hard to keep my social network in place. But once mental illness takes hold, friends disappear quickly. I am still shocked at just how easily I was forgotten by most of my friends. I kind of get it. Mental illness is a scary thing to be around. I couldn’t communicate how much scarier it was for me.

So, while I find myself in the extraordinarily lucky position of finding a treatment that is slowly reducing my anxiety symptoms, I also find myself with very few friends to speak of. The situation is improving – but these things take time.

The limited ability to attain feedback is not exactly at the top of my list of “problems of not having friends” but in the context of Substack, it is relevant!

It also doesn’t help that I’ve fallen into strange psychological irrationalities. It seems that the riskier the article, the greater my fear that it isn’t very good, the less I want to reach out for feedback. So, I release it to the public instead un-feedbacked. This makes no sense whatsoever.

The most prominent feeling I’ve felt after publishing the two Psyverse #1 articles is embarrassment. My previous guest posts were all serious critical analyses, and suddenly, here I am launching into moth-people with no context whatsoever. From the little feedback I’ve had so far, it seems most people are confused. But without knowing more detail, I am not sure how to fix it for future posts.

While I feel a bit vulnerable asking you so openly for feedback, I feel like this is the logical thing to do. I haven’t really seen other writers on Substack publicly ask for feedback (other than for topic suggestions). But I don’t have the friends or colleagues to provide feedback for this article about asking for feedback. I am stuck in a lack of feedback loop.

So, if there are readers out there who would be so kind as to tell me what they thought of the Psyverse #1 articles, I would be grateful. Rather than public comments, I’d appreciate a direct message, or email (see here).

To help, I’ve put the main message I wanted to convey in Psyverse #1 in this footnote,1 some of the questions I have in this footnote,2 and some of the reasons why I wanted to start the psychiatric multiverse in the first place in this footnote.3

Thank you to those who will take the time to write feedback on the Psyverse #1 pair of articles – it is much appreciated!

1

The following is a simplification.

In physics we have a model for how very small things, like atoms, interact (Quantum Mechanics). We also have a model for how very large things, like planets, interact (General Relativity). At the moment, these models are incompatible with each other: we cannot draw lines of logic from the microscale superpositional world of atoms to the macroscale cosmological curvature of spacetime. But we do know in all probability that these lines of logic exist. Physics as a subject would fall apart otherwise. This is the coveted “theory of everything” physicists are working hard to outline.

In psychiatry, there is a model for how very small systems interact (the biomedical model, i.e. genes, chemicals and neurons). There is also a model for how very large systems interact (the psychosocial model, i.e. people with other people). At the moment, these models are incompatible with each other. Lines of logic cannot currently be drawn from the microscale neuronal interactions in a brain to the macroscale psychological interactions in a society. In all probability, these lines of logic exist. Psychiatry, psychology and neuroscience as subjects would fall apart otherwise. This is the coveted “biopsychosocial model” psychiatrists, neuroscientists and psychologists are working hard to outline.

The difference between physics and psychiatry is there are no quantum physicists arguing vociferously, with little evidence, that every scale of physics is described entirely by quantum mechanics, nor are there cosmological physicists arguing, with little evidence, that every scale of physics can be described through the curvature of spacetime. There are no “factions” or a “centre ground” of physics. Physicists see it as a problem of scale, not ideology.*

While I believe the degree of polarisation is much less prominent than the public might think, in psychiatry there are factions, and there is a centre ground (arguably, the psychosocial end of the spectrum extends to greater extremes - the antipsychiatry movement - than the biological end). Polarisation, nevertheless, exists. There are psychiatrists and patients arguing vociferously that every scale of psychiatry should be described psychosocially, going as far as to say that the biological model of psychiatry should be abandoned. While I’ve seen defences of the biological model, I’ve not been able to find an argument that the psychosocial model should be abandoned. Still, there is no doubt that there are plenty of stories of patients who have suffered from overprescription of psychiatric medications.

As a physicist I find this kind of polarisation bizarre, and as a patient I find it depressing. Though I understand how this kind of polarisation can occur.

I know I have a disease, and I know I needed psychiatric medication for my disease. You could have placed me in a tropical paradise, in the best possible environment with all my needs accounted for, and I would still be in severe suffering. The cause of my specific mental affliction was almost entirely biochemical.

This was not the view of any of the psychiatrists and psychotherapists I encountered in my treatment. All of them seemingly placed my disease state closer to the psychosocial end of the spectrum, despite my serotonin syndrome (and the dramatic mental dysfunction immediately afterwards) being a rare strong and clear indicator of a predominantly biological cause. Even after 300 hours of counselling of multiple modalities, some psychiatrists were pushing for yet more counselling and were hesitant to prescribe medication (attitudes in the UK might differ from the US).

There are some psychiatrists who would argue that cases like mine don’t exist. For example, Mark Horowitz talking to Alex Curmi on the Thinking Minds Podcast as described by Thomas Reilly:

One aspect that concerned me though, was Mark’s reluctance to acknowledge any biological aspect to depression, or the utility of antidepressants. Alex asks Mark whether he recognises that some cases of depression do have a biological component, giving examples of severely unwell patients who may struggle to eat, or become slowed. Mark put the chance of finding biological causes of depression as very, very slim. Instead, he sees depression as ‘normal brains responding to events in life’.

So, in my head I wondered, if I needed to work with psychiatrists like Mark, how would I do it? Mark’s position is very different to mine, and to that of most psychiatrists. If Mark and I were to have a conversation restricted to the aetiology of mental illness, I think things would probably get heated very quickly.

I was struck while attending storytelling events like The Moth and Spark Stories how quickly low bandwidth, yet deep emotional connections could form. Five minutes was all it took. From personal experience, I also knew that these types of connections with people (formed over a longer period while doing activities like improv classes) helped me feel less antagonistic when ideological discussions occurred.

Therefore, for Psyverse #1, I was hypothesising that if storytelling events became part and parcel of psychiatrists’ everyday life, the animosity that can be found between ideological factions within psychiatry might be more easily overcome. Perhaps psychiatrists might more easily be open to discussion, and more willing to work in teams.

Thus getting closer to the future of psychiatry outlined by Robert Pies in his discussion with Awais Aftab (titled: The battle for the soul of psychiatry):

So my chief hope for “the future of psychiatry” is that it recover its pluralistic “core”—what I earlier described as the AJE tradition. I say “recover” because, as I noted earlier, I believe that psychiatry’s “solid center” is besieged by market-driven forces that would like to reduce us to “writing scripts” and “turfing” psychosocial interventions to less costly non-physicians. We need to push back hard against those trends! At the same time, I would like to see psychiatry achieve much better integration with neurology and general medicine, in what has been called “collaborative care.” I also think psychiatry has to do a much better job of “public outreach,” whereby we go out into the community in a proactive way, so that the general public has a better understanding of who we are and what we do. We can’t afford to let antipsychiatry define us in the public mind. The stakes for our profession and the well-being of our patients are far too high.


*Note: Seeing general relativity vs quantum mechanics as simply a problem of scale is an oversimplification (As an ex-experimental electron microscopist, a deeper discussion is well above my pay grade!). And there is debate about how far one theory extends into the other. Importantly, I don’t believe there are physicists pooh-poohing the evidence of each theory.

2

If you found the Psyverse #1 articles confusing, what was confusing about them? What did you like, not like? Would it be better if I set out the problem first and then went into talking about moths? If you wanted to combine the serious point made in Footnote 1 with the joy of science and finding things out outlined in Footnote 3, how would you do it?

3

As kids, my twin brother and I created entire shared universes together. With a set of simple toys, we imagined vivid universes and characters. My imagination provided me joy.

During the worst parts of my mental illness, however, my imagination was turned against me. Instead of providing relief, my severe anxiety twisted my imagination into producing living nightmares I could not escape from.

Not much of my past self has survived the filter of mental illness, but fortunately, the joy of imagination has. It has slowly returned. Writing Psyverse articles has been the first time in a very long time where I have felt good using my imagination. It is a joy I want to share.

Instead of Psyverses, I could just write plainly about my experience of mental illness and the psychiatric system, but other people do it much better than I ever could (e.g. Susan Mahler, Leon MacFayden, Skye Sclera, etc.). And the more I write about my past experiences of mental illness, the more miserable I become in the present. I am trying to combine the joy of imagination, with the problems psychiatry faces. And I am struggling to find the right formula to achieve this. So, I could use your help.

]]>
<![CDATA[Psyverse #1, part 2: Where psychiatrists tell stories]]>https://psychiatricmultiverse.substack.com/p/psyverse-1-part-2-where-psychiatristshttps://psychiatricmultiverse.substack.com/p/psyverse-1-part-2-where-psychiatristsTue, 08 Apr 2025 10:03:06 GMT
Moth Storytelling event, Toronto, 2019. Source: Adetoma Omokanye (worth a look at his portfolio; he takes some beautiful pictures)

Part one:

I am hoping that the creation of absurd alternate universes will help the discussion of ideas, reducing the friction between opposing viewpoints. I’ve noticed that when we present ideas within the context of our own universe, it can feel very personal to those who disagree - sometimes almost to the point of insulting. Angry responses produce angrier replies as discussion descends down the attractor of rage. Whereas when an idea is presented within the context of an already absurd premise, I hope it will become much harder to become angry. Affronted criticism can be instantly met with “Um… dude… we are talking about an earth entirely inhabited by moth-people.”

This is not to say I wish to completely avoid criticism. Quite the opposite! I am hoping to encourage discussion, whether through comments, DMs, Substack notes, etc.


"There are women sitting in the front row!" I thought in exasperation. On the stage was a not unattractive bespectacled man in the middle of explaining all the different ways women had mistreated him while dating. He talked about ghosting and lack of commitment, amongst a list far too long to be remembered by the author. He then told a joke about how women should stay in the kitchen. For a brain in a state of confusion, it was surprising how clearly he managed to communicate to the audience the real reasons behind his "misfortune" in dating.

He was supposed to be telling a story, but it better resembled a cross between a rant and an advertisement. Throughout, he mentioned his new book outlining how he thought women should behave while dating a man. You really couldn't make this stuff up.

Well, apparently, he could.

Misogynist man was the first of three "story" tellers I endured during this non-Moth storytelling event. It was independently organised by a local businessman in the town where I lived. Moth storytelling events in the United Kingdom are only held in London, once a month. For me, it was at least an hour and a half journey each way. I thought this little independent local storytelling event would be a similar experience - I was wrong. Oh, I was so very wrong.

The second storyteller trudged onto the stage1 with the energy of a substitute teacher who had missed their morning coffee. She was tall and slender with shoulder-length blonde hair. I would tell you what her face looked like, but unfortunately, it remained hidden behind a sheet of A4 paper. “I’m going to read you a blog post about my adventures in the United States,” she declared in a monotone voice. I was hoping for a story like “Thelma and Louise”. It was like Thelma and Louise, if Thelma had gotten into their car, driven to a petrol station and gone home. Because that was pretty much all that happened in her story. Driving. Down a road. For fifteen unbearable minutes.

The third storyteller decided that stories were beneath her. She came dressed in a long flowing skirt, no shoes, and dandelions knitted into her hair. "She's going to read poetry, isn't she..." I said to my friend next to me. Before he could answer, she started reading poetry.

About two lines in, she ran out of words that rhymed with "San Francisco". Instead of using a thesaurus, she came up with the ingenious idea of rhyming San Francisco with itself. Funnily enough, the poem wasn't even about San Francisco. "Each poem costs 1 pound" she said proudly once finished. "Let's get the heck out of here" I whispered to my friend. We left at the intermission.

Spark Stories and proper storytelling

While demoralised, I had not given up hope. I decided I would go to a proper storytelling event in London. It wasn't an official Moth event but was very similar to it - Spark Stories.

At the beginning, the host gave a short introduction, going over the do’s and don’ts of telling a story. Storytellers had only five minutes to tell their stories; there was no opportunity to meander. You weren’t allowed to read from a script or notes. The stories had to be true and personal—the storyteller had to have first-hand experience of the story they were telling. A warning was given to avoid going into a stand-up routine. This structure geared the storytellers toward producing authentic stories.

I had no intention of telling a story, but the organisers persuaded me to have a go. While telling a story with zero preparation wasn't ideal, I was surprised at how much the structure helped. It was a completely different dynamic compared to the horror show I experienced at my local storytelling event.

At the end of the Spark Stories event, a magical thing happened. Talking to new people was easy. No confidence or energy was required. In fact, I felt compelled to talk to other people. The short, true and personal stories became the perfect icebreaker. Instead of the energy-depleting awkwardness of small talk meanderings like “What do you do?” “Where are you from?” “That’s a nice bracelet”, we were all away. It was a wonderful experience.

I eventually did manage to go and see a proper Moth storytelling event. It was a bit more of a show and less intimate, but the principles remained the same. At the Moth, storytelling was the focus. There were no long introductions or numerous titles attached to the storyteller’s name. No money changed hands from organiser to storyteller. There was no agenda. The aim was authenticity. The result was an experience that satisfied my primal urge to connect.

Why did the setup at The Moth and Spark Stories result in an urge to connect? Well, I think the removal of ulterior motives in combination with the restrictiveness of the rules encourages the storyteller to be vulnerable. This signals to the audience that it is okay to be vulnerable too - to let down the guardrails. I think the audience then starts to feel similar emotions to the storyteller, which can evoke similar personal stories. Once the event ends, the need to reciprocate a personal story is so strong, an audience member feels compelled to reach out to the storyteller and other listeners.

In other words, instead of connecting on ideas, storytelling events encourage people to connect on deep emotions. The trivialities that normally keep people apart, like politics or ideologies, cannot reach this place. They are superseded by deep emotional connection.

Since attending these storytelling events, I’ve often wondered what the world would look like if events such as Spark Stories or The Moth had become commonplace. I started to construct an entire universe within my mind.

Storytelling in Psyverse #1

In Psyverse #1, storytelling events are so abundant that they are not seen as events but as a social experience, like having lunch with a friend or going down the pub or bar for a drink. The Moth-people in this universe gather at the end of the workday, or at the weekend, and tell stories.

Friendships and relationships are deeper and more resilient. Moth-people are prepared to be more vulnerable, more willing to connect, and more able to relate to one another. There are still arguments and fights, friendships still come and go, relationships still end. But when the opportunity arises to form a deep connection with another person, Moth-people grab it with both hands, instead of passively watching it go by, like a shadow at dusk.

Perhaps unexpectedly, Moth-people are much more productive at work. In our universe, there are countless articles proclaiming the one true method or technique that will immeasurably boost your productivity. I observed during my PhD, however, that the greatest cause for a loss of productivity didn’t seem to have anything to do with a single method or technique. It was the absence of emotional connection that drove wedges into the cogs of workplace productivity.

The flaw of productivity hacks is that they mostly focus on the individual. But companies and institutions generally need people to work in teams. Since the colleagues around me saw each other mostly through titles,2 many would be worried about going to someone if they had a problem. They were worried about making mistakes. My peers would often complain and gossip to other employees instead of engaging directly with the colleague they had friction with, to let them know it existed.

The Moth-people of Psyverse #1 often socialise at work through the storytelling setup, both formally and informally. This means that most colleagues have at least one personal connection with each other, unrelated to work.

When I think about the importance of relationships in the workplace,3 I think about starling murmurations.4 During the winter, starlings visit the UK from the European continent. On the odd lucky occasion, I have seen large flocks of these birds moving and shifting, as if they had one mind.5

But this is an illusion. In reality, starlings self-organise. Each starling only keeps track of its seven nearest neighbours.6 When one of its neighbours moves, the starling has to move to adjust. For a murmuration to function, every starling has to be intimately connected to the movement of its neighbours. Relationships between individuals allow a group to move as one.7
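Being a physicist, I cannot resist turning this into a toy model. The sketch below is purely illustrative - a bare-bones alignment rule, far simpler than published murmuration models such as Hildebrandt et al.'s, which also include attraction, avoidance and aerodynamics - but it shows how a flock with no leader can fall into step when each bird only tracks its seven nearest neighbours:

```python
import math
import random

def seven_nearest(i, positions, k=7):
    """Indices of the k nearest flockmates to bird i (a topological rule:
    count neighbours, not distance)."""
    dists = sorted(
        (math.dist(positions[i], positions[j]), j)
        for j in range(len(positions))
        if j != i
    )
    return [j for _, j in dists[:k]]

def step(positions, velocities, rate=0.5):
    """One update: every bird blends its heading with the mean heading of
    its seven nearest neighbours, then moves one unit forward."""
    new_velocities = []
    for i in range(len(positions)):
        nbrs = seven_nearest(i, positions)
        mean_x = sum(velocities[j][0] for j in nbrs) / len(nbrs)
        mean_y = sum(velocities[j][1] for j in nbrs) / len(nbrs)
        vx = (1 - rate) * velocities[i][0] + rate * mean_x
        vy = (1 - rate) * velocities[i][1] + rate * mean_y
        norm = math.hypot(vx, vy) or 1.0  # keep a unit speed
        new_velocities.append((vx / norm, vy / norm))
    new_positions = [
        (p[0] + v[0], p[1] + v[1]) for p, v in zip(positions, new_velocities)
    ]
    return new_positions, new_velocities

def alignment(velocities):
    """Magnitude of the flock's mean heading: near 0 is disorder, 1 is unison."""
    return math.hypot(
        sum(v[0] for v in velocities), sum(v[1] for v in velocities)
    ) / len(velocities)

random.seed(0)
positions = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(50)]
velocities = [
    (math.cos(a), math.sin(a))
    for a in (random.uniform(0, 2 * math.pi) for _ in range(50))
]

before = alignment(velocities)   # random headings: low alignment
for _ in range(30):
    positions, velocities = step(positions, velocities)
after = alignment(velocities)    # the flock now moves largely as one
```

No bird knows where the flock is going; the order is entirely emergent from local relationships - which is rather the point of the workplace analogy.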

Alex finally gets to the point

As a patient, the most disappointing thing about reading some of the psychiatric discourse is the bickering. The endless ideological attacks between some psychiatrists. This type of discourse is so far removed from the realities of patient care, I cannot help but feel hopeless about the situation. Thomas Reilly eloquently sums up an example of an ideological battle between the psychiatrists Joanna Moncrieff and Nassir Ghaemi in his article: Our blessed science (their barbarous pseudoscience).

While you might strongly agree or disagree with Ghaemi and Moncrieff’s positions, it is important to recognise they are human. They both have worlds outside of psychiatry with deep personal stories that readers might relate to. But we never hear these stories. There is no current social structure in place to tell them.

Between our two universes, it was the moth-psychiatrists that gained the most relative benefit. Storytelling helped moth-psychiatrists find common ground with each other, separate from their viewpoints. There are no moth-anti-psychiatrists. Storytelling events facilitated personal connections; moth-psychiatrists don’t see each other as enemies. They don’t simplify their mothanity to a philosophy or specific viewpoint. While nurturing their own individual theories and ideas, moth-psychiatrists still move as one, like starlings.

Psychiatrists in our universe routinely see patients in distress and, in an underfunded mental health system, are some of the busiest people I have ever come across. Burnout is prevalent,8 and it is not known how many psychiatrists suffer from mental health conditions themselves.9 Storytelling events help moth-psychiatrists create the close relationships and support networks needed in such an environment.

During my time in the psychiatric system, I have felt like a patient and nothing more. I've never wanted a psychiatrist to be my friend, but I would have liked to be valued for something other than the disease I carried. Because of storytelling events, moth-psychiatrists understand that a small external connection with a patient helps immensely with communication and trust.

I think physicists in our universe could benefit from personal storytelling events too, but for different reasons. I think many of us struggle to express our emotions, which contributes to a significant mental health problem within the academic community.

Nevertheless, I believe physicists manage to leverage creative non-fictional, non-personal storytelling better than most other groups of people. Physics is so weird, so absurd, that trying to understand and then communicate concepts within it requires imagination.10 Randall Munroe and his “What If” book series exemplify this type of storytelling. If psychiatrists want to become more imaginative, they must become courageous enough to engage with and tell absurd stories.

Stories form the core of who we are as people. Religions are built on stories. When we meet up with friends for a coffee, we tell stories. When we sit around the dinner table, we tell stories. But, in modern life, storytelling is always the secondary focus. When it is the primary focus - in a book, TV show, film or play - the stories are made up. Fictional. One of the few avenues to tell a personal story to strangers in modern-day life, perhaps the only one, is through a memoir. Which, by its nature, is an isolated activity.

This does not mean I am imagining a universe where everyone is “like a family”. I'm saying that having one or two emotional connections with the people you work with, play a sport with - whatever group you happen to be a part of - might help everyone communicate better and perhaps even live happier lives. I wonder what the world would look like if everyone had that one percent emotional connection with each other, to be one percent Moth.


Thanks for reading The Psychiatric Multiverse! Subscribe for free to receive new posts and support my work.


1

Or, in this case, the floor in front of some chairs.

2

i.e. their relationships with other colleagues were based exclusively on who the person was at work. They didn’t know anything at all about other people’s personal lives.

3

And life in general

4

Note: the analogy between relationships and starlings is my own entirely hypothetical creation - please take it with a pinch of salt.

Truthfully, for a workplace, I thought about Canada geese in a “V-formation” in the middle of a starling murmuration. The rigid structure of the geese's V-formation provides an overall forward momentum (leaders in a hierarchy) for the starlings (employees), while the starlings provide adaptability to an ever-changing world and influence the direction of the geese.

Then I realised maybe I’m thinking about birds too much…

6

Also see: “Self-organized aerial displays of thousands of starlings: a model” by Hildebrandt, Carere and Hemelrijk. Unpaywalled pdf here

7

For more on the importance of relationships, see the Relationships Project

8

[1] K. V. Bykov et al., ‘Prevalence of burnout among psychiatrists: A systematic review and meta-analysis’, J Affect Disord, vol. 308, pp. 47–64, Jul. 2022, doi: 10.1016/j.jad.2022.04.005.
[2] P. Dong et al., ‘Depression, anxiety, and burnout among psychiatrists during the COVID-19 pandemic: a cross-sectional study in Beijing, China’, BMC Psychiatry, vol. 23, no. 1, p. 494, Jul. 2023, doi: 10.1186/s12888-023-04969-5.
[3] C. McLoughlin, S. Casey, A. Feeney, D. Weir, A. A. Abdalla, and E. Barrett, ‘Burnout, Work Satisfaction, and Well-being Among Non-consultant Psychiatrists in Ireland’, Acad Psychiatry, vol. 45, no. 3, pp. 322–328, Jun. 2021, doi: 10.1007/s40596-020-01366-y.

10

I suppose I would call the type of imagination physicists utilise “applied imagination”. Normal imagination can be used to come up with inconsistent ideas (in other words, the rules break down, e.g. Peter Pan flying with no explanation as to how he is doing it). Applied imagination has to be tethered to a set of consistent rules - it must be tethered to the real world.

While the first part of Psyverse #1 reads as ridiculous, note that everything I have imagined is plausible (though there are still clear flaws). In a world full of smog, navigating via moth antennae is a plausible evolutionary adaptation. Applied imagination is not an easy thing to do.

]]>
<![CDATA[Psyverse #1: Where everyone is one percent moth]]>https://psychiatricmultiverse.substack.com/p/psyverse-1-where-everyone-is-onehttps://psychiatricmultiverse.substack.com/p/psyverse-1-where-everyone-is-oneWed, 02 Apr 2025 14:03:47 GMT
I must admit, I find making AI images cool. And I hope the images bring the Psyverses more to life. But I also know that it is having a detrimental effect on artists. In the battle between looks cool and conscience, cool won. And I feel sad now. Source: NightCafe

For pretty much all the articles I have written up until now, I’ve had to rein in my tangential mind. My brain seems to like making connections between things which appear unrelated but are actually linked in an unexpected way. This seems like a cool ability on the face of it. Unfortunately, it is only around 5 – 10% efficient. Many of the connections I see aren’t really there. And sometimes I miss connections between things which are obviously related.

Given I am my own editor (I’m not sure whether this is a good or bad thing yet…), I’ve let my tangential brain run free. This has two consequences.

The first consequence is that you might often wonder, “What on earth has this got to do with psychiatry?” By the end of this two-parter, you might, quite rightly, question why I didn’t just get to the point immediately. I guess my answer would be that I wouldn’t be able to write the end if I didn’t write the beginning. I find a particular joy in the journey. In telling the interesting stories that many of us miss in the hustle and bustle of everyday life.

The second consequence is footnotes - a lot of footnotes. A lot of interesting and detailed footnotes. I am aware that for some people this might be distracting – if so, I’m willing to post footnote-free versions somewhere else. Just let me know in the comments!

On to our first Psyverse…


Quite why the human-like inhabitants of Psyverse #1 had feathery antennae growing out of their foreheads was anyone's guess. The most popular theory in mothiety was that a radioactive moth bit a human ancestor - thus fusing their DNA.1 This theory first came to public attention after an Egyptian Hieroglyph displaying a muscular moth-like person fluttering mindlessly towards the moon2 was shared via social media. It certainly did not help that the film "Spider-mothman" - about a teenmothager who becomes a superhero after getting bitten by a radioactive spider - came out around the same time.

The Mothosapiens with common sense liked to point out that hieroglyphs "are not literal depictions of reality"3 and that "Spider-mothman is just a dumb film", but what do they know? Unfortunately, just like in our universe, common sense was a rather scarce commodity.

Most of the scimothists focused their attention on more plausible theories. The general consensus was that every human's pair of moth-like antennae arose through something called "convergent evolution". This is where distantly related organisms evolve similar-looking features or exhibit similar behaviours. In other words, nature occasionally likes to reinvent the wheel.

For some reason, the earth in our universe is perfectly set up for crabs. So much so that there is a specific term for organisms that evolve crab-like features: carcinisation. Psyverse #1, on the other hand, seemed perfectly set up for moths. Mothinisation4 had occurred in so many species that scimothists knew there must have been something advantageous about the moth-like form - antennae in particular.

If the scimothists were inter-universal beings like you and me - free to wander around the psychiatric multiverse - they would quickly pick up on the fact that their universe had a lot more "stuff" in the air.

Earth in Psyverse #1 was shrouded in an almost permanent smog of chemicals.5 The furthest you could see, even with the extraordinary eyes of a bird,6 was about 10 metres. In nature, when a sense stops being useful, its function diminishes.7 Therefore, the moth-people of Psyverse #1 had remarkably poor eyesight. Where’s Waldo (or Where’s Wally if you're British), a fun children’s game in our universe, is one of the most difficult sports to play in Psyverse #1. They even have World Championships where, like many first-time watchers of ice hockey, most moth-fans have no idea what is going on.

Moth antennae8 are very sensitive to smell. Male moths are able to detect a single molecule of a female’s pheromone using their antennae and can follow these scent trails for several kilometres.9 In fact, some researchers have even developed a “smellicopter” that uses real moth antennae to navigate a drone using chemicals in the air.10

The Smellicopter. The little arc at the front on top is the moth antenna. Source: Mark Stone, University of Washington, in UW NEWS

Scimothists proposed that moth antennae were abundant in Psyverse #1 because of their ability to pick out tiny concentrations of molecules against a diverse background of smells. These could then be used like “sight”, much as dogs use smell to “see” in our universe. Why not noses? Who nose? Bu dum tsh... (Sorry).

Whatever the reason for the feathery protrusions emanating from people’s foreheads, moths became an important symbol of Psyverse #1 culture. So in 1997, when the moth-like version of George Dawes Green held the first storytelling event, called "The Moth", in his New York living room, the relatable idea took off all over the world. Storytelling became part and parcel of modern-day life.

Yeah...so...erm... Psyverse #1 is not about moths. It's about stories...

As common as going to the bar or pub, grabbing a cup of coffee or watching a film (I've heard Spider-mothman is quite good at the moment), moth-people would go to storytelling events. In Psyverse #1, they are held everywhere, from the biggest cities to the smallest towns.

The setup is relatively simple. A theme is selected for the night in advance - for example, this could be "Love hurts". Storytellers can interpret this theme however they see fit, which is part of the fun. Anyone attending can then drop their name into a hat to sign up to tell a story. Names are then drawn. Those lucky enough to be chosen are invited onto the stage to tell a five-minute story. The story has to be true, it must be on theme, it must have stakes, it must be personal, and it must be on time.

Due to its popularity, moth-people in Psyverse #1 are far less lonely and have stronger friendships than people in our universe. The most significant difference, however, is the strength of the communities. Storytelling helped form emotional bonds outside of tribal identities. The introduction of storytelling events in Psyverse #1 was so powerful that it transformed moth-psychiatry from a fractured collection of ideologies into a unified, patient-centred practice. Perhaps the same could be achieved here…

To be concluded in part two




1

So, moths can’t bite people. They have a proboscis, a sort of tube that can suck nectar out from flowers (or sap from trees or juice from fruits). The moth-people perpetuating the radioactive moth theory say that the victim dressed like a flower, and that the radioactive moth had a steel proboscis. They could have argued for a simple caterpillar sting, but the radioactive moth bite sounded cooler.

2

This is, in reality, a misconception. A recent paper in Nature Communications has provided experimental evidence that moths and other insects are not attracted to light at night, but are instead “captured” by it. Supposedly, moths use moonlight, and light in general, to dictate which way is up, so they will always turn their backs to the light. Moths that fly by an artificial light will “orbit”, “stall”, or “invert” around it.

Source: “Why flying insects gather at artificial light”; Figure caption: The unusual flight motifs were: (a) orbiting, (b) stalling, and (c) inverting. (Above) Diagrammatic representations of the three behavioural motifs. (Below) Overlaid flight trajectories of insects performing these characteristic patterns around UV light sources. Overlaid frames are separated by aesthetically chosen fixed intervals of 52 ms (left), 20 ms (middle), and 24 ms (right) for visualization.

It is all rather sad. But it is also a reason to ensure that streetlamps do not “leak light” in directions other than down.

On the upside, an algorithm inspired by this behaviour, called Moth-Flame Optimization, has been utilised in many areas of computer science, from forecasting the electricity consumption of Mongolia to terrorism detection.
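Since we are on the subject of tangents: the spiral-flight idea behind Moth-Flame Optimization is simple enough to sketch in a few lines. What follows is a deliberately simplified toy version (the objective function, parameter values, and flame-update details are my own illustrative choices, not taken from any of the applications above): candidate solutions ("moths") fly in logarithmic spirals around the best solutions found so far ("flames"), just as real moths orbit a lamp.

```python
import math
import random

def sphere(x):
    """Toy objective to minimise: sum of squares, optimum at the origin."""
    return sum(xi * xi for xi in x)

def moth_flame(f, dim=2, n_moths=20, iters=100, lo=-10.0, hi=10.0, seed=1):
    """Minimal Moth-Flame Optimization sketch. Moths spiral around flames
    (the best solutions seen so far), mimicking the orbiting behaviour
    of real moths around artificial lights."""
    rng = random.Random(seed)
    moths = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_moths)]
    flames = sorted(moths, key=f)
    b = 1.0  # shape constant of the logarithmic spiral
    for it in range(iters):
        # Keep the best n_moths solutions seen so far as flames, and
        # shrink the number of distinct flames as the search matures.
        flames = sorted(flames + moths, key=f)[:n_moths]
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))
        for i, m in enumerate(moths):
            flame = flames[min(i, n_flames - 1)]
            t = rng.uniform(-1.0, 1.0)
            # Logarithmic-spiral flight of moth i around its flame,
            # clamped to the search bounds.
            moths[i] = [
                min(hi, max(lo,
                    abs(flame[d] - m[d]) * math.exp(b * t) * math.cos(2 * math.pi * t)
                    + flame[d]))
                for d in range(dim)
            ]
    return min(flames + moths, key=f)

best = moth_flame(sphere)  # a point near the optimum at the origin
```

The spiral lets moths both overshoot (exploration) and home in on (exploitation) their flames, which is the whole trick of the algorithm.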

3

Champollion, the first person to decipher the logic and intention behind the Egyptian language, discovered that it is actually written in four different scripts: Hieroglyphic, Hieratic, Demotic, and Coptic.


He wrote in the book "Précis du système hiéroglyphique des anciens Égyptiens" (Summary of the hieroglyphic system of the ancient Egyptians): "Mais que l'écriture HIEROGLYPHIQUE est un système complexe, une écriture tout à la fois FIGURATIVE, SYMBOLIQUE et PHONÉTIQUE, dans un même texte, une même phrase, je dirais presque dans le même mot"


(Popularly translated to: Hieroglyphic writing is a complex system, a script all at once figurative, symbolic and phonetic, in one and the same text, in one and the same sentence, and, I might even venture, one and the same word).

4

Not a real thing.

5

Interestingly, looking for smog in exoplanet atmospheres has been proposed as a method for finding extraterrestrial life.

By observing the light that passes through an exoplanet’s atmosphere as it transits across the system’s star, we can see which wavelengths of this light have been absorbed. This technique is called transmission spectroscopy: different particles in the planet’s atmosphere absorb different wavelengths.

By looking at the amplitude of the absorption “dips” in intensity, it may be possible to pick up “technosignatures” of other planets - basically pollutants like CFCs. The idea is that other intelligent life might also produce pollution which we could detect.

Whether it is possible to differentiate “alien-made” pollution from naturally occurring processes remains up in the air (pun unintended).
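For a sense of scale, the dip itself can be estimated with a back-of-the-envelope calculation: the transit depth is roughly the squared ratio of planet radius to star radius. (The 50 km absorbing layer below is an arbitrary number I picked purely for illustration.)

```python
def transit_depth(r_planet_km, r_star_km):
    """Fraction of starlight blocked when a planet crosses its star,
    approximated as the ratio of the two discs' areas."""
    return (r_planet_km / r_star_km) ** 2

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

# An Earth-sized planet blocks less than 0.01% of a Sun-like star's light.
base = transit_depth(R_EARTH_KM, R_SUN_KM)

# At wavelengths a pollutant absorbs, the planet looks slightly larger -
# here a purely illustrative 50 km-thick absorbing layer - so the dip
# deepens. That tiny wavelength-dependent difference is the signal.
with_haze = transit_depth(R_EARTH_KM + 50.0, R_SUN_KM)
extra_depth = with_haze - base
```

The extra depth from the atmosphere is a small correction to an already tiny number, which is why these measurements are so hard.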

6

I have no idea if birds can see any better through smog than any other animal. But birds’ eyes are incredible.

Firstly, they are deceptively huge. As they are not spherical, they extend deep into most birds’ skulls. This allows a large field of view, a large picture from all the light coming in.

The visual acuity* of hawks and eagles is so good that they could do an eye test from across the street (assuming linear distance acuity). They have an acuity of 20/5, meaning that, if you had normal healthy vision, you would have to stand 5 feet from the letter chart (or Snellen chart) to just about make out one of the lines of letters, while an eagle could make them out from 20 feet.
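Since Snellen notation is just a ratio of distances, the comparison reduces to one division (the helper below is hypothetical, purely for illustration):

```python
def relative_acuity(test_distance_ft, reference_distance_ft):
    """In "a/b" Snellen notation, the eye being tested sees at distance a
    what a normal (20/20) eye sees at distance b. The ratio a/b says how
    many times sharper than normal that vision is."""
    return test_distance_ft / reference_distance_ft

normal = relative_acuity(20, 20)  # 1.0: the 20/20 baseline
eagle = relative_acuity(20, 5)    # 4.0: roughly four times human acuity
```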

This is before we even get on to colour. Some raptors have four “cones” instead of the three humans have. The fourth absorbs wavelengths in the UV range - yes, some birds can see UV light alongside the other colours we mere mortal humans can see (and birds also have a higher density of cone cells, thus allowing a wider colour contrast range*).

*Acuity is being able to see minute detail of a very small object on a highly contrasted background. Contrast is the ability to detect boundaries of objects with similar coloured backgrounds. See here.

Here are some additional interesting sources:

A) Eagle Eye wiki
B) Why Bird vision is cool
C) Visual Acuity wiki
D) High colour vision resolution but low contrast sensitivity in raptors

7

For example, the blind cave fish

8

Technically, male antennae

9

I’m not entirely sure how far.

The internet is terrible at providing references. As much as I would enjoy trying to figure out the right scientific words to search for in academic literature (that translates to “how far can moths smell?”), I was hoping blog posts and other articles would provide the source for me. Oh, how naïve I was!

My journey started with a search leading to this webpage about the wonders of antennae. It claimed that Indian luna moths were shown to be able to detect a single sex pheromone molecule at a distance of 11 km - giving a link to the asknature webpage providing the same claim. Which had a reference! Hurrah! The asknature article referenced a book by Shuker which, upon investigation, made the 11 km claim without a reference… Booooo.

I then spent the next couple of hours looking at article after article after article that did not include references for claims.

I then landed on an article about how physicists figured out that male moths could detect females using a cone of detection up to 1,000 metres away. Oof - not good news for the claim. I gave up when I found an article which claimed that male moths could find a female up to 30 miles away (which is about the diameter of London!).

Trying the academic literature, I found an article within about 15 minutes saying the distance was “several kilometres”, but then one of the references it used said “up to 100 m”.

So I finally gave up. If someone out there can point me to the research studies that can provide the true answer, I would be most grateful. Then I can say that an afternoon was definitely not wasted…

10

Not kidding - it is a real project. Though, given that the antennae will only work for a couple of hours once removed from a moth anaesthetised in a fridge (and that the process might be morally questionable, as I’m not sure if moths can regrow antennae), it isn’t entirely practical.

]]>
<![CDATA[Psyverse #0: The need for a psychiatric multiverse]]>https://psychiatricmultiverse.substack.com/p/psyverse-0-the-need-for-a-psychiatrichttps://psychiatricmultiverse.substack.com/p/psyverse-0-the-need-for-a-psychiatricWed, 05 Mar 2025 11:14:31 GMT
Source: Pale Blue Dot Revisited, NASA, 2020

So, building a backlog of articles took a little longer than I thought. There were tons of interesting topics, and I now know way too much about horses. Anyway, we have finally arrived. Thanks for being patient!


Within the Local Galactic Group of the Virgo Supercluster, on the Orion spiral arm of the Milky Way galaxy, lies a pale wine-dark dot.1

Our pixel in the universe is home to a curious group of humans called psychiatrists. If you were to ask any psychiatrist where they thought psychiatry’s place in the universe was, I guess many would respond “We don’t even know psychiatry’s place in psychiatry”.

Psyverse #0 is our universe, our earth, our home. Here is where you will hear one psychiatrist advocating passionately for the efficacy of a certain treatment, while another completely disparages it. Where some psychiatrists present themselves as experts on pretty much everything, and for others it can be unclear if they are experts on anything. It is on our planet where psychiatrists dizzy themselves in circular arguments about what is brain, and what is mind. And this is before we even get to the “anti-psychiatry” movement.

I'm not sure many within the field of psychiatry appreciate how incredibly odd it is to have a group of psychiatrists and patients who identify as "anti-psychiatry". Do any other medical fields have such a curiosity? Are there anti-cardiologists or anti-ophthalmologists? I don't think there are, but I claim overwhelming ignorance of these things. I do know that when scientists in my field mention "anti-matter physicists,” they are not describing fellow academics with a general dislike of matter.

In the swings and roundabouts of psychiatric discourse, it is difficult to find any common ground. There is too much on the line, too much skin in the game. It is likely that some, if not most, of you have seen how destructive even the mildest of mental illnesses can be to the life of a sufferer. Never mind the bone-chilling suffocation of severe conditions. I am certainly not immune to the bias caused by personal experience.

I think it is fair to say that my experience of the psychiatric system has not been a wholly positive one. To start with, the circumstances under which I acquired the conglomerate of chronic, severe anxiety conditions I suffer from were often not believed by most of the clinicians I saw. Having never suffered a clinical anxiety symptom in my life, four days after starting an antidepressant, I had pretty much all of them.2 A severe reaction (serotonin syndrome) on the lowest dose changed my life forever. Convincing any psychiatrist that this was in fact the case proved to be mostly impossible. "Serotonin syndrome is an acute condition", many a clinician has explained patiently to me.

I know it is acute! When a medium-strength typhoon blows through Tokyo, not much damage is caused. But if one were to turn up in Stockton-on-Tees,3 well, the town would get quite a severe semi-permanent makeover - and not in a good way.4 The pertinent question I wanted answered, but no psychiatrist asked, was: why was my brain like Stockton-on-Tees, rather than the Tokyo brains of other serotonin syndrome sufferers?

It subsequently took six years for a treatment to be found which started to reduce the symptoms of my Generalised Anxiety Disorder (GAD). The predominant reason was that I was never diagnosed with Generalised Anxiety Disorder.5 I'm still trying to figure out quite why. One psychotherapist even got annoyed when I couldn't be specific about my anxieties: "But what specifically are you anxious about?" she said. "I guess, everything outside?" I replied, "Though I'm not sure that counts as specific". "But what in particular?" she persisted. I sighed in frustration.

Had any clinician recognised my generalised anxiety disorder and followed the British National Formulary (BNF) guidelines6 for medication therapies of GAD, my condition would have been treated approximately 5 years and 11 months earlier.7

This I would not describe as an efficient process.

As you can probably tell, there is still quite a bit of anger in me about how things have turned out. But I think even the saintliest person would feel some injustice living with the knowledge that at least six years of abject suffering and misery could have been avoided.

Nevertheless, to categorise the entire psychiatric system as awful is neither true nor useful. And I don't want to be angry all the time. For the Psychiatric Multiverse, I want to go back to one of the reasons I became a physicist in the first place - the exploration of absolute absurdity.

Psychiatric discourse can often feel very serious. And for good reason. Mental illness is one of the few collections of pathologies that can kill at any stage. Even mild depression can still feel overwhelming enough for someone to go against their most primitive of instincts: survival.

Psychiatrists are trying to treat what is arguably the least understood organ of the human body, with little to no available information about it. There is a reason you will find that psychiatrists almost never run a blood test - there are currently no reliable biomarkers that indicate a relevant disease state.8

I'm not sure you would find a much better example of a "tough gig". In such circumstances, it can become easy for ideas to be ingrained. Static and still. I hope the Psychiatric Multiverse will help evoke the emotion that combats stuck minds: curiosity. As the psychiatrist Dost Öngür puts it while talking to Awais Aftab:

There is a lack of imagination in our field. It was probably necessary for psychiatric science to mature by becoming institutional, developing its own dictionaries (such as the DSM), and establishing shared terms and references. But these do not amount to a scientific paradigm; they are simply a set of concepts and practices that we inculcate in each new generation of researchers. And these concepts and practices narrow the scope of questions that are asked and explored. No need to get philosophical about this; suffice it to say that it is rare for me to read a newly submitted manuscript and think, ‘This is clever!’ That happens perhaps once or twice a year, and I feel excited that someone out there is thinking originally about our field. The constant parade of fashions without genuine creativity makes me think that our field needs, like Proust, not to seek new landscapes but to have new eyes

In lieu of eye transplants, I guess the Psychiatric Multiverse will have to do.




1

In Homer’s Odyssey, the sea is described as “wine-dark”. This description is often used as evidence that the ancient Greeks could not see the colour blue (just google “greeks couldn’t see blue”). However, this historical oddity is most likely due to the restrictiveness of language (among other reasons) rather than a strange ancient colourblindness, as Matt Yglesias and Metatron explain

2

Funnily enough, I suppose this means my anxiety is generalised in every possible degree

3

To make amends for my honestly random, yet still distressing, selection of a town to be hypothetically destroyed by a typhoon, here are some of the many reasons to visit Stockton-on-Tees:
A) It was one end of the Stockton and Darlington Railway - the first steam train passenger railway
B) It has pretty bridges, like the Infinity Bridge

C) It has one of the oldest markets in England, dating back to 1310
D) There is plenty to see and do in the town

4

Fun fact: Meteorologically speaking, it isn't actually possible for tropical cyclonic storms to reach as far north from the equator as Stockton-on-Tees. For tropical cyclones (like hurricanes and typhoons) to form, they require hot sea surface temperatures and very cold high-altitude atmospheric temperatures. This temperature difference allows the hot, humid air just above the sea surface to rise high up into the atmosphere and condense, releasing energy. The earth's rotation applies the apparent Coriolis force, resulting in the rotation of this air. Hurricanes are essentially giant heat engines.

The "comma-shaped" storms seen in the mid-latitudes get their energy from the lateral heat differential between warm air at the equator and cooler air at the poles.

The summary of this footnote is: Meteorology is hella complicated. See this video from the University of Manchester on extratropical storms and this video from PBS on hurricanes

5

I’ve had 10 other diagnoses though! At least some of them were sort of related to my illness…

6

To be clear, I am not advocating that psychiatrists should blindly follow guidelines and put practically no thought into treating patients! I only want to point out that for six years I didn't know that prescription guidelines for generalised anxiety disorder existed as they were never mentioned to me.

7

In the BNF guidelines, three psychotropic medications are recommended in the following order: Selective Serotonin Reuptake Inhibitors (SSRIs), Serotonin and Noradrenaline Reuptake Inhibitors (SNRIs), and then Pregabalin. Since an SSRI caused my serotonin syndrome, trying another was probably not a good idea. The same goes for SNRIs. It took nearly six years for Pregabalin, the medication that has significantly reduced my GAD symptoms, to be prescribed.

8

[1] L. N. Yatham, ‘Biomarkers for clinical use in psychiatry: where are we and will we ever get there?’, World Psychiatry, vol. 22, no. 2, p. 263, May 2023, doi: 10.1002/wps.21079.
[2] A. Abi-Dargham et al., ‘Candidate biomarkers in psychiatric disorders: state of the field’, World Psychiatry, vol. 22, no. 2, pp. 236–262, 2023, doi: 10.1002/wps.21078.

]]>
<![CDATA[The multiverse is under construction]]>https://psychiatricmultiverse.substack.com/p/the-multiverse-is-under-constructionhttps://psychiatricmultiverse.substack.com/p/the-multiverse-is-under-constructionThu, 04 Jul 2024 13:49:25 GMT
Source: NightCafe

What would happen if every psychiatrist was 1% moth? How would psychiatrists commute if the earth succumbed to a devastating greenhouse effect resulting in floating cities in the sky? These are the questions that no one is asking, with wildly speculative answers. Welcome to the Psychiatric Multiverse!

Each article in this newsletter proposes a hypothetical parallel universe (a “Psyverse”) where psychiatric culture is changed by a technique, idea or perspective from another area or subject. It will be weird. It will be wacky.

I am Alex Mendelsohn, a distinctly ordinary physicist with an out-of-the-ordinary story. Much of this newsletter will be inspired by my decade as a patient in the psychiatric system.

In a contender for the most obvious statement of the year: being sick has several drawbacks. One such drawback is a lack of energy.

Many a writer starts a newsletter with well-meant optimism. “I shall write two articles a week!” they might think, before eventually settling down into one article a month.

I intend to start with well-meant pessimism. Recovering from mental illness while rebuilding my social life and career is a time-consuming process – keeping to a regular schedule of posts will initially be… difficult. So, I’m currently building a backlog of articles, to take some time-pressure off.

I’m hoping that within the next six months I’ll have this newsletter up and running, but I’m not putting a date on it. When the time does finally arrive, I’ll aim to post once a month and everything will be free. In the meantime, please feel free to have a look at some of my previously published articles.

This newsletter is by no means intended to be a lopsided multiverse. If you have a left-field, ridiculous idea about how the psychiatric system could be improved and want to have a little bit of fun, please do reach out.1

That is all for now, see you in the first Psyverse!


1

Either through Substack DMs, or see my about page for contact details

]]>