• AI and academia

    Minas Karamanis talking about the impact of AI on academia.

    Science is about people:

    What’s great about science is its people. The slow, stubborn, sometimes painful process by which a confused student becomes an independent thinker. If we use these tools to bypass that process in favor of faster output, we don’t just risk taking away what’s great about science. We take away the only part of it that wasn’t replaceable in the first place.

    Knowing why those buttons exist:

    The real threat is a slow, comfortable drift toward not understanding what you’re doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can’t produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can’t sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

    Training first, tools later:

    I use AI agents regularly, and so do most of the people in my research group. The colleagues I work with produce solid results with these tools. But when you look at how they use them, there’s a pattern: they know what the code should do before they ask the agent to write it. They know what the paper should say before they let it help with the phrasing. They can explain every function, every parameter, every modeling choice, because they built that knowledge over years of doing things the slow way. If every AI company went bankrupt tomorrow, these people would be slower. They would not be lost. They came to the tools after the training, not instead of it. That sequence matters more than anything else in this conversation.

    Output and understanding are two different things:

    Schwartz can use Claude to write a paper because Schwartz already knows the physics. His decades of experience are the immune system that catches Claude’s hallucinations. A first-year student using the same tool, on the same problem, with the same supervisor giving the same feedback, produces the same output with none of the understanding. The paper looks identical. The scientist doesn’t.

  • Speed and wisdom

    Jim Nielsen talking about how speed is not conducive to wisdom.

    Speed is how you avoid reckoning. It guarantees you miss things, and you can’t learn from what you don’t notice.

    Wisdom’s feedback loop is slow.

    Wise people I’ve met seem unhurried. I don’t think it’s because they’re slow thinkers or actors. I think it’s because they’ve learned that important things take the time they take; no amount of urgency changes that.

    Wisdom is chasing all of us, but we’re going too fast to notice what it’s trying to teach us.

  • Last step in thinking

    This comment by user roadside_picnic on how writing is the last step in thinking.

    I’ve long considered writing to be the “last step in thinking”. I can’t tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing, and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times when writing about something loosely and casually revealed something that fundamentally changed how I viewed a topic and really consolidated my thinking.

  • Survival

    Joe Wiggins reflects on his ten beliefs about investing. For me, survival is the most important one.

    Survival is the most important and neglected goal for all investors.

    Investors rarely talk about survival, but it is the most important aspect of any strategy. It should define whether we make a particular investment and how it is sized. We should always avoid creating situations where there is a meaningful risk of complete disaster (being wiped out) or where we are unable to withstand spells of poor performance. The best investment is the one we can stick with for the long run, not the one with the highest potential return. If we don’t survive, nothing else matters.

  • Tool or the toolmaker

    Charles Arthur’s intriguing thought on the gradual loss of cognitive capabilities as they are outsourced to LLMs.

    This makes me think that this complaint/debate has been going on for a long time. The move from oral longform poetry such as The Iliad and Beowulf to writing it down, then printing it, then putting it on websites, then letting search engines find it for you, and now letting LLMs do some part of the work of analysing it – all of these seem to have been viewed as letting our brains slide back into the primordial ooze. If a problem is eternal, is it really because of the tools, or the toolmakers?

  • Laziness

    Bryan Cantrill explaining how laziness is a strength.

    […] when programmers are engaged in the seeming laziness of hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.

    […]

    The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity.

    Ha!

    The author goes on to emphasise that LLMs are going to play an important role in the future of software development, but they are a tool. An extraordinary tool, but still a tool.

  • Value comes from doing something non-obvious

    Lalit Maganti shares his experience of using AI to build devtools for SQLite, something he had dreamt of for eight long years. This is probably the most measured take I have read on the internet. Most pieces on this topic are fluff, written the moment AI spits out the first version of the codebase.

    AI turned out to be better than me at the act of writing code itself, assuming that code is obvious. If I can break a problem down to “write a function with this behaviour and parameters” or “write a class matching this interface,” AI will build it faster than I would and, crucially, in a style that might well be more intuitive to a future reader. It documents things I’d skip, lays out code consistently with the rest of the project, and sticks to what you might call the “standard dialect” of whatever language you’re working in.

    That standardness is a double-edged sword. For the vast majority of code in any project, standard is exactly what you want: predictable, readable, unsurprising. But every project has pieces that are its edge, the parts where the value comes from doing something non-obvious. For syntaqlite, that was the extraction pipeline and the parser architecture. AI’s instinct to normalize was actively harmful there, and those were the parts I had to design in depth and often resorted to just writing myself.

    But here’s the flip side: the same speed that makes AI great at obvious code also makes it great at refactoring. If you’re using AI to generate code at industrial scale, you have to refactor constantly and continuously. If you don’t, things immediately get out of hand. This was the central lesson of the vibe-coding month: I didn’t refactor enough, the codebase became something I couldn’t reason about, and I had to throw it all away. In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly.

  • Expert judgement

    This post on Dead Neurons talking about why expert judgement cannot be codified and hence cannot be taught.

    There is an apparent contradiction at the heart of expertise. Expert judgement is learnable, in the sense that people demonstrably acquire it over time. It is also non-transmissible, in the sense that no expert can transfer their judgement to another person through explanation. If it was once learnable, why can it not be taught?

    The resolution lies in a distinction between two fundamentally different modes of learning. The first is instruction: the transfer of explicit models, rules, and relationships from one person to another through language. The second is calibration: the development of internal models through repeated exposure to feedback in a specific environment. Judgement is learnable through calibration. It is not transmissible through instruction. These are different processes operating on different substrates, and conflating them is the source of the apparent contradiction.

    To see why, we need to be precise about what “high-dimensional” means when applied to expert knowledge, because the concept is doing all the real work.

    Consider a simple decision: should I cross the road? A rule-based encoding of this decision might operate on three variables: is a car visible, how fast is it moving, and how far away is it. These three dimensions are sufficient to produce a reasonable crossing decision most of the time. You could write this as an explicit rule, transmit it through language, and a person who had never crossed a road could apply it successfully in straightforward cases.

    Now consider the actual model that an experienced pedestrian uses. They are integrating: the car’s speed, its acceleration (is it slowing down?), the road surface (wet or dry, affecting stopping distance), the driver’s apparent attentiveness (are they looking at their phone?), the car’s trajectory (drifting within the lane?), the presence of other cars that might obscure the driver’s view, the width of the road, their own walking speed today (are they carrying something heavy, are they injured?), the behaviour of other pedestrians (are they crossing confidently or hesitating?), the sound of the engine (accelerating or decelerating, even before the speed change is visible), the type of vehicle (a truck has different stopping characteristics than a bicycle), the time of day (affecting driver fatigue and visibility), and dozens of other variables they could not enumerate if asked.
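    The three-variable rule the author describes can be sketched in code. This is a minimal illustration of my own; the function name, thresholds, and units are illustrative assumptions, not from the post:

```python
def should_cross(car_visible: bool, speed_mps: float, distance_m: float) -> bool:
    """Rule-based crossing decision over the three variables from the example:
    is a car visible, how fast is it moving, and how far away is it.

    The thresholds are assumptions for illustration: cross if no car is
    visible, or if the visible car would take comfortably longer to arrive
    than a typical crossing (an assumed 10-second safety margin).
    """
    if not car_visible:
        return True
    if speed_mps <= 0:  # car is stationary or moving away
        return True
    time_to_arrival = distance_m / speed_mps
    return time_to_arrival > 10.0  # assumed safety margin in seconds
```

    An experienced pedestrian’s calibrated model integrates dozens of additional signals (acceleration, road surface, driver attentiveness, and so on) that resist this kind of explicit encoding, which is precisely why the judgement can be learned but not transmitted.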

  • Algorithms

    From Collaborative Fund:

    Algorithms have an answer for everything except what’s good. They can tell you what’s trending, what people like you clicked on, what to watch next — but they can’t tell you why something matters, or what it felt like to discover it. That gap is widening. As AI-generated content fills every feed and recommendation engines collapse taste into pattern-matching, the most valuable signal online is increasingly the simplest one: a real person, with a real point of view, telling you about something they love.

    And this is why I love RSS readers. I am following a person. A person who will be like me in some ways and unlike me in many other ways. And the ways that person is unlike me are the opportunity for me to learn new things.

  • Why are executives enamoured by AI?

    John Wang sharing his thoughts on why executives are enamoured by AI.

    Executives have always had to deal with non-determinism. That’s par for the course:

    • People being out sick or taking time off unexpectedly
    • Someone not finishing an important project and not talking about it until far too late in the process
    • People reacting to an announcement in an unexpected way
    • A feature being built in a way that doesn’t make sense with respect to the rest of the product, but does technically achieve objectives

    More generally, if you’ve ever taken a Chaos Theory class in math, you’ll know that nonlinear, chaotic systems emerge when individual agents in a system are all acting with different inputs, utility functions, etc. Systems become slightly easier to manage if you’re able to make those utility functions consistent (you’re able to get a grasp on system dynamics).

    A manager’s job is to create a model of the world and align everyone’s utility functions, knowing that there’s a large amount of non-determinism in complex systems. So it makes sense that as a manager, you’re ok with a decent amount of this.

    AI is something that is non-deterministic but has a lot of the characteristics of a well-behaved chaotic system (specifically, a system where you can understand the general behavior of the system, even if you cannot predict the specific outcomes at any point in time).

    For example:

    • LLMs generally continue their work and provide an output regardless of time of day, how difficult the task is, how much information is available
    • LLMs’ deficiencies have well-defined failure modes (e.g. hallucinations, an inability to operate outside of their context, and especially poor outcomes when not given enough context)
    • The types of tasks that an LLM can accomplish are relatively well known, and the capability envelope is getting mapped out quickly. This is different from humans, where each person has a different set of strengths and weaknesses and where you need to uncover these over time.

    Many of these properties are more deterministic than large human systems, which makes AI incredibly attractive for an executive who is already used to this and likely has put a large amount of effort into adding determinism into their systems already (e.g. by adding processes and structure in the form of levels and ladders, standard operating procedures, etc.).

    Intriguing.
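    What a “well-behaved chaotic system” means here can be illustrated with the classic logistic map (my example, not the author’s): specific trajectories are unpredictable, yet the aggregate behaviour of the system is stable.

```python
# Logistic map x' = r * x * (1 - x) with r = 4, a standard example of chaos.
# Two nearly identical starting points quickly produce unrelated trajectories
# (specific outcomes are unpredictable), yet the long-run statistics of the
# orbit barely change (general behaviour is predictable).

def logistic_orbit(x0: float, steps: int, r: float = 4.0) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Sensitivity to initial conditions: a one-part-in-a-billion perturbation.
a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-9, 50)
# After 50 steps the two orbits no longer track each other at all.

# Stable aggregate behaviour: the time average of a long orbit is
# essentially the same for both starting points.
mean_a = sum(logistic_orbit(0.2, 100_000)) / 100_001
mean_b = sum(logistic_orbit(0.2 + 1e-9, 100_000)) / 100_001
```

    An executive managing such a system cannot predict any individual output, but can still reason confidently about its overall behaviour, which is the property the quote attributes to LLMs.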

    He also talks about why individual contributors are sceptical of AI.