Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
OT: an interesting musing I found on fedi:

I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.
State law requires consent before someone's name can be used for commercial purposes.
And here is the complaint, via evacide.
the Pentagon's CTO has AI psychosis now. sighhhhhhhhh
The whole argument can just be countered with "if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn't that also be in danger of becoming sentient? And why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn't be in the military?? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??"
It just reeks of bullshit. "uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we're doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please don't impeach trump."
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and want to soft-nationalize the frontier companies so as to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. …probably both.
I find it darkly hilarious that the doomer crit-hype is finally coming around to bite them, not in the form of heavy-handed shut-it-all-down regulation to stop Skynet, but in the form of authoritarian wackos wanting to make sure they are the ones "in charge" of Skynet.
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and want to soft-nationalize the frontier companies so as to control the coming AGI.
That is absolutely the reason, or at least part of it. See: Pete Hegseth Got His Happy Meal and how AGI-is-nigh doomers own-goaled themselves
It's possible the attempt to shove AI into every nook and cranny of the Pentagon didn't especially pan out, and since his face was all over that project, he's desperate for a scapegoat.
Like, for sure he'd have had the logistics of the entire US Army running smoothly despite layoffs by now, if it weren't for the wokies in Anthropic acting up.
Reading comments cause I was bored, and had the misfortune to stumble upon this horribly formatted piece of work allegedly written by Claude
FT reports from Amazon insiders that they're investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.
FT also links to several previous stories they've reported on related issues, and I haven't had the time to breach the paywalls to read further, but the line that caught my eye was this:
The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of "Sev2s" - incidents requiring a rapid response to avoid product outages - each day as a result of job cuts.
To be honest, this is why I'm skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it's only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what's happening is, of course, a different story.
So oil prices are down again, and on nothing but a promise from Trump and a promise from the EU. The economy has proved remarkably resilient to me; the attack on Iran is like, wild nonsense number 17 that the USA regime did that I thought would trigger a major recession, and didnāt.
I mean, don't get me wrong, things are much worse now than 3 years ago, clearly. But they're not, like, Great Depression worse. They're not even 2008 worse. It's just a certain level of degradation (cost of living is higher, purchasing power is lower, concentration of wealth is higher, etc.) that people got used to as the new normal. People can get used to lots of things.
To make the IT analogy, I think the global economy is like Twitter. Sure, it feels like a Jenga tower held up by thoughts and prayers, but it's holding up. When Musk took over I really did think his catastrophic management philosophy would completely break Twitter, but no, it trudges on. Yes, moderation is now nonexistent, and I'm told it's down more often, and often in "soft downtime" like notifications not working, or DMs, or some other feature, or it's working but slow, and so on. But clearly the site is up most of the time and more or less functional. Users just get used to degraded quality as the new normal.
I predict AWS will 1) get slower and costlier thanks to "AI", with higher downtime, at higher stress for the workers; 2) the leadership will refuse to see or admit or even consciously be aware of this; and 3) the worsened services will be the new normal. I predict similar developments for the socioeconomic situation of the world, too, though I'm not ruling out a spiral into complete recession, either.
I somewhat agree, although when the "other shoe drops" and these things start impacting the money men, they may start to realise AI isn't the magic cure they thought it was (he says, kind of hopefully).
6 hours of downtime for Amazon shopping. A very simple back-of-a-napkin calculation: they made $213.4bn in sales in Q4 2025. So divide that by 90 days and then by 24 hours, and multiply by 6… We are talking roughly a $0.59bn loss for 6 hours of downtime… That is not an insignificant amount of money. I imagine most bosses would be screaming for heads having lost that much money in sane, non-hyper-scaled businesses.
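For the curious, the napkin math above can be sketched in a few lines of Python, using the quarterly sales figure quoted in the comment. The even-spread assumption is obviously crude (real sales spike at peak hours), so treat the result as an order-of-magnitude estimate, not a real damages figure:

```python
# Back-of-the-napkin estimate of sales "lost" to an outage,
# assuming revenue is spread evenly across the quarter.
quarterly_sales_bn = 213.4   # reported Q4 2025 sales, in $bn
days_in_quarter = 90
outage_hours = 6

hourly_sales_bn = quarterly_sales_bn / (days_in_quarter * 24)
lost_bn = hourly_sales_bn * outage_hours

print(f"~${lost_bn:.2f}bn in sales for {outage_hours} hours of downtime")
# ~$0.59bn in sales for 6 hours of downtime
```

Some of that demand presumably shifts to after the outage rather than vanishing, which is why this is an upper-bound sketch rather than a pure loss.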
It's also a trend that I don't see stopping without a major structural change. I don't think there's a point at which they're going to say "we've cut enough corners and are going to stop risking stability and service degradation." The principal structure driving the economy, especially in the tech sector, is organized around looking for new corners to cut and insulating the people who make those choices from accountability for their actual consequences.
to follow this one up: there is now a new study about AI agents being dogshit at keeping code working over the long term
Unfortunately the paper's structure screams "AI senpai, notice me!"
AI coding agents still seem bad at this job, but if you optimize for our benchmark…
Silicon Valley is buzzing about this new idea: AI compute as compensation
These people are genuinely unhinged.
As the recent Harper's article says:
"…people who should be in The Hague are giving [startups] twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen…"
"Selling your soul to the company store is not just fun, it is also invigorating!"
this is just wages paid in crypto, but adapted to the new era in a way that doesn't make sense
Man, that Harper's piece is a full DnD alignment chart of the most online Bay Area weirdos you've ever seen.
DAIR, the AI-critical research organization founded by Timnit Gebru, is looking for a communications lead
Revealed: UK's multibillion AI drive is built on "phantom investments"
Chris Stokel-Walker at Fast Company reports:
High-level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what is being shared, according to a University of Oxford researcher who identified the issue.
The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names and some metadata associated with the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts. […] "Anyone at the university, or a large number of people at least - including me - can see a number of projects [people have] been working on with ChatGPT," says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.
Just one of many reasons that the mere existence of "ChatGPT Edu" means that many people need to be tased in the nads.
Previously, on Awful, I predicted that Oracle would be all-in on the bubble:
Microsoft knows that there's no money to be made here, and is eager to see how expensive that lesson will be for Oracle; Oracle is fairly new to the business of running a public cloud and likely thinks they can offer a better platform than Azure, especially when fueled by delicious Arabian oil-fund money.
But, uh, there's not going to be any Arabian money while we're dancing in the desert, blowing up the sunshine. The lawnmower is now running low on gas. Today, Oracle continues to make astoundingly bad business decisions:
Oracle is the only major player funding the AI buildout with debt, carrying over $100 billion on its books while free cash flow has gone negative.
I was not ready
AI was going to give us all universal healthcare, but we didn't believe hard enough and now all we have is this.
new development in ontology: "the ontology that makes ai models valuable is american"
"Our lethal capacities. Our ability to fight war."
These are two different things. But I fear he doesn't get that.
Actually, the race-realism use last week, combined with this one, makes me realize that for them it's just a fancy way of saying "world-view" [or what they consider to exist, and be true, which is not the craziest use of the word, but I would say unhelpful, and probably a small in-group marker].
It's just a way of calling biases/prejudice legitimate.
And you know what, inasmuch as the models have a "world-view", it IS annoyingly American in many ways. (At least the wrong kind of American.)
I was low-key hoping for a technical philosophical article, which argues that to find any of this shit useful you need a distinctly American understanding of reality.
I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.
you gotta give him a morsel of credit, he's got his buzzword and he's stickin' to it
Starting this Stubsack off, I've found another FOSS project that hit the digital krokodil - ntfy.sh v2.18.0 was written by AI
I feel like at this point I want to highlight the ones that took a clear stance against LLM code. On a chardet thread, people listed:
- Gentoo
- Servo
- Loupe
- Qemu
- postmarketOS
- GoTo Social
- Zig
guess i'll have to write my own UnifiedPush provider
I'm still happy with Pushover. Hasn't changed in a decade (and a half?! Been using that since 2012, damn), works pretty well.
It's not self-hosted, but when there are push notification services on the path, nothing really is.
there's also overpush, which is meant as a self-hostable drop-in replacement for Pushover and does not use AI, afaict.
Ooh, that's really cool. Also, a very sobering section on the various e2e methods that is a pretty thorough indictment of all the chat systems out there.
That didn't really surprise me, tbh; I follow a blog that is ostensibly meant to be a furry blog but gets repeatedly sidetracked into cryptographic stuff, and the conclusion for anything E2EE is essentially that only Signal is worth using if you are looking for actual E2EE. But yeah, encryption is generally pretty bad on this kind of thing.
Yeah, Soatok has been doing really good work there. It's disappointing that the Matrix folks don't seem to take it seriously.
this is a good post and some of y'all may enjoy it too: https://dotart.blog/cobbles/ai-and-that-guy-at-the-bar
It was very good, and I'm glad I clicked through to the link to Robert Kingett's story "The Colonization of Confidence", which deserves its own highlight.
Even if the constant reminders that I'm trapped in the machine are painful.
Hey, he's posted here before!
That story is rad as hell. I was ready to run through a wall for those folks at the end. Appreciate you, Robert!
I noticed that too, which is an extra reason why I figured I'd drop the link and name in. His post about receiving an LLM-generated happy birthday is something I think about surprisingly frequently.
I swear every time his stuff floats through here I end up standing as I read it and wildly gesticulating at my living room or ranting extemporaneously to my basement about something it made me think of or feel. After reading this piece I hope that comes off as more complimentary to his work than showing myself to be a freaking weirdo.
Wew, Cory Doctorow sure is posting through it
https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism
It's true that these analogies can be stigmatizing, but they needn't be. As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life.
This bothers me more than I can explain.
ICE as autoimmune disorder presupposes that it's normally a good thing to have ICE around, and that it's just malfunctioning as an exceptional state of things. If ICE is an immune system (malfunctioning or not), what are we immigrants?
Yeah. When it comes down to it, the libs think the problem with Trump isn't the fundamentals of what he is doing; it's that he is doing it without decorum, or checking all the legal boxes, or saying the usual lib pabulum to justify American imperialism. Skipping the legal checks and decorum is also bad, but in fact kids in cages was horrible even when Obama was doing it the "right" way.
They're not vibe-coding mission-critical AWS modules.
and
- It's worse than that, they're vibe-coding critical operating system components
It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale:
https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes
They're just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won't always choose wisely, but that's normal too. There's plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.
Wow, the whole thing is indefensibly capital-W wrong, just an utterly weird rose-tinted view of the current corporate experience.
centaur-configured programmer
Cory, baby, my dogg, sure "enshittification" was a big hit, but you can't expect that your rough-draft followups are automatically gold
A skilled, centaur-configured programmer
This is like reading Yud mumbling about "Shoggoths". It's giving knight errant, organ-meat eater, Byronic hero, Haplogroup R1b.
Man, due to a weird alignment of the spheres I started reading those Honor Levy excerpts in the voice of Max Payne-style hardboiled narration, and it fits weirdly well? Like a bargain version of the same sort of mid-budget, semi-affectionate parody of existential angst that's all tone and minimal substance.
I am retrospectively disturbed by how well "I really came in a fluffer that time" slots into Dorothy Parker's flow.
I mean, she was undoubtedly too much of a lefty for the Thiel set to ever admit her influence, but I feel like that's the exact type of vibe she's trying and mostly failing to evoke.
Take "Morgellons Disease," a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case report of a patient who suffered from a similar delusion:
Nitpick, but this is unusually sloppy for Doctorow. 1) People with Morgellons don't believe they have wires growing out of sores, but fibres (which upon examination turn out to be cotton from clothes). 2) The original Morgellons is a putative children's disease «wherein they critically break out with harsh Hairs on their Backs, which takes off the Unquiet Symptomes of the Disease, and delivers them from Coughs and Convulsions.» Which is quite different from the modern condition, whose sufferers have skin sores anywhere on the body with fibrous material looking like lint, dandelion fluff, etc., and which is not particularly associated with convulsions. And 3) the association between the two was made by Mary Leitao, a mother who believes her son suffers from the disease and has gone to countless doctors and media outlets trying to prove it's real. So it's an attempt to legitimise the postulated disease by cherry-picking something "historical" that vaguely resembles it.
Kind of wild that the guy who popularized "enshittification" as a term will die on the hill that the technology which drives the industrial enshittification of all human media is fine, actually, because some people find the plugins useful.
He knows how LLMs work, right? This really is just cope because he got called out for being weird about using them. Really fucking disappointing
In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs, so I'd say the jury's out on that.
edit: Also, throughout this piece he keeps equivocating between local LLMs and their behemoth online counterparts with their heavily proprietary tooling that occasionally wraps them into a somewhat useful product.
I think he assumes that because he can load up a modest speech-to-text model locally and casually transcribe several hours of video resources in somewhat short order (this was apparently his major formative experience with modern AI) it works the same with e.g. coding.
Like, hey gpt-oss please make sense of these ten thousand lines of context without access to a hundred bespoke MCP intermediaries and one or three functioning RAG systems as I watch the token generation rate slow to a trickle while the context window gradually fills up.
piece he keeps equivocating between local LLMs and their behemoth online counterparts with their heavily proprietary tooling that occasionally wraps them into a somewhat useful product.
This is fundamental to his approach. He believes that technology is inherently liberatory as long as it's in the hands of the consumer.
This really seems to be the case.
Hey, can't get that SXSW London (a truly cursed event, but I digress) bag unless you're willing to say LLMs Are Good, Actually
Man, it's frustrating to see him end up going down this route, because the opening part of this is actually one of the better descriptions of AI psychosis I've seen, and I appreciate his emphasis on the way the delusion is built up in the sufferer's mind rather than trying to game out what's happening "inside" the chatbot. Even his point about how LLMs aren't bad in exceptional ways for a new technology is pretty cogent. But his insistence on defending his own use of these things (and others who do so in "centaur-configured" ways), rather than thinking about how it interacts with all the relatively normal ways that this technology is wildly destructive, is a very conspicuous blind spot.
Like, you can absolutely drive a nail with a phone book, and given the wider surface area it even has the advantage over a traditional hammer of being harder to smash your fingers with. An individual craftsman may well decide that this is a useful tool and in some cases worth using over other options. But if the only source of these hammer-books was an industry that relied on massive uncompensated use of creative work passed through exploited third-world labor, ground rainforests to dust to create special "old-growth paper", placed massive and unsustainable burdens on existing road infrastructure to collect these parts and deliver them, and somehow had been blown into a speculative bubble that represented something like a quarter of the entire US economy by promising that if they created a big enough book then one guy could hammer all the nails at once and they could lay off all the carpenters, I think it's justifiable to look at the people using it as a normal tool and ask them "what the actual fuck are you doing?" The usage statistics they represent and the user stories they tell are used to justify not addressing any of the harms necessary to enable this tool to exist in its current form, and are largely driving the absurd valuations that keep pumping the bubble. Your individual role in those harms as a small-time user who finds it occasionally useful may be incalculably small, but it is still real.
Like, it feels like I agree with Doctorow on basically all the premises here. He seems to have a decent grasp of how the things actually work (even if he's wrong about Ollama specifically being an LLM in its own right) and their associated limitations. He draws a decent line separating criticism from criti-hype. He is basically correct about how much of a bastard everyone involved in the industry at a high level is. But maybe because so many of these things aren't really exceptional (save possibly in their sheer scale), he can't seem to conceive of a world where things happen any differently, or of the role his actions and words play in reinforcing the status quo, even as he writes pretty explicitly about how fucked up that status quo is.
Honestly, it makes me think of the finale of his second Martin Hench novel, The Bezzle. After drilling into the business of the private prison operator that is making his friend's life hell and separating the merely fucked-up parts from the things that might actually have consequences if word got to what passes for cops in that tax bracket, he doesn't go to the papers or start reaching out to the SEC. Instead he goes to the bastard at the head of it all and blackmails him into making his friend's remaining incarceration less hellish and leaving him alone. And his friend, who started all this by begging for help unraveling this shit, rightly calls Marty a coward for it. There's something ironic in seeing Doctorow here seemingly make the same judgement: abuse and apathy are sufficiently normal that we shouldn't even bother to try and make the world better, just find ways to shelter ourselves and the people we care about from the consequences. And hell, I guess even there I'm not immune to it. There are reasons why I'm posting here and not waiting out front of a hotel with some engraved brass. Still, on the continuum of such things I'm disappointed that the guy who wrote that scene is stuck in the normalization blues.
It sucks. :(
Honestly, the article reminds me of Scott Alexander, but succinct. "Here are several true things and an absolutely batshit wrong thing, presented together with equal earnestness."
The wrong thing being "Believing that LLMs are trash is a mental disorder (not really, but wink wink)."
Why do this now, when it's all coming apart? It's baffling.
The one-shotting phenomenon (or how a positive initial experience with the technology seems to lead to a heavily biased view of its merits) should probably be considered a distinct cognitive bias at this point.
Turns out a lot of bright people can't deal with a technology being utterly subjective in its efficiency, and with how that's specifically the part that reduces it to being so narrowly useful as to force the existential question, given the insane resource burn and the socioeconomic disruption that's part and parcel, even if, like Doctorow, you think that their rape and pillage of artists' rights and intellectual property in general isn't an especially big deal.
Also, local LLMs are hardly extricable from the whole mess; they are basically a byproduct, and updated versions will only keep coming as long as their imperial-size online counterparts remain a going concern.
It's gotta be tied to the idea of anchoring, right? Like, the first credible bit of information you have is what sets the tone for everything that comes afterwards. At that point, in a sufficiently complicated information ecosystem, confirmation bias kicks in and it's hard to break out of.
even if, like Doctorow, you think that their rape and pillage of artists' rights and intellectual property in general isn't an especially big deal.
It's not that he doesn't think it's a big deal. It's the one thing he's most consistently cared about for most of his career as an activist. He's willing to put up with anything else if it circumvents copyright. And that's why he's been consistently pushed, I reckon, despite his nominal hostility towards the hands that feed the media ecosystem he flourishes in.
Probably should've written "not a deal breaker" instead of "not a big deal."