AI hype is 80% real
Mon, 12 Jan 2026

Increasingly, our profession—as programmers—is splitting. Some of us seem to be vibecoding everything. Others don’t trust an AI within a ten-mile radius.

Deep splits in programming are not new. Once upon a time, compilers were widely mocked as a waste of valuable machine time, in spite of Grace Hopper's arguments. Eventually, we decided she was right, and settled that debate.

This time, it feels different. We may not settle it.

Are LLMs the future of programming? Or are they a waste of valuable silicon? We can’t even agree on the time-honored interview answer—“it depends”—let alone what that depends on. Are LLMs useful for research or for coding? For “simple tasks”? And what, exactly, counts as simple?

Recently, I opened my browser to a top article titled “Don’t fall into the anti-AI hype,” argued by a brilliant programmer. Alongside it was the critical top comment:

I don’t understand the stance that AI currently is able to automate away non-trivial coding tasks. I’ve tried this consistently since GPT-3.5 came out, with every single SOTA model… I’m not sure what else I can take from the situation.

The good news is: I do understand. I’m quite sure what else to take from it.

The bad news is that it takes a long time to explain. I’ve organized this essay into sections, but I’m afraid I can’t make it any shorter.

But it really feels like two intelligent, capable camps of engineers have moved beyond the tabs-vs-spaces kind of argument into completely different fields. Or are speaking different languages, using different standards of evidence. Maybe even living in different realities.

Some of us report almost unbelievable engineering feats using AI. Others say they can’t automate even a simple programming task.

Well: which is it?

Are vibecoders deceiving themselves by not reading the code closely enough? Are skeptics simply bad at prompting? What the hell is going on?

My first goal here is to explain—in far more detail than the modern attention span will tolerate—what is going on.

My second goal is to explain, or at least gesture at, in standard engineering terms, what those of us in the “moderately pro-agents” camp are actually doing. Is this responsible engineering practice, or is it reckless?

My third goal is to clean up my own house. Frankly, agent-coding advocates—including me—keep making arguments that don’t survive even the most basic scrutiny. There are much better arguments we should be using.

There are also legal, ethical, and societal issues in the room. I can’t avoid touching them entirely, but my aim here is far narrower: to speak about programming issues, as a programmer, to other programmers, rather than gesticulating vaguely at domains outside my expertise.

So prepare to be unhappy for several reasons. Brew a strong cup of coffee, and bookmark this to read the rest of the story.

Hype cycles

Let’s get one thing out of the way: programming goes through hype cycles. An idea is hailed as the future, rapidly adopted (or mandated), endlessly debated—and then either collapses or settles into a quieter, more realistic niche.

Classic examples include OOP, XML, Java, Web 2.0, NoSQL, and microservices. You can add your own.

I think of these as tides. Tides come in; tides go out. There’s a lifecycle so regular you can almost set your watch by it.

It’s entirely reasonable to assume that AI is just the latest tide. The agent camp has done an abysmal job of distinguishing itself from that interpretation—comedically bad, in fact. Unfortunately, I can't transition into a comedy career because only programmers will laugh at my jokes.

I’m typing this on a computer with an NPU. What parts of my codebase can I accelerate with it? Where’s the API, the instruction set, the concrete C, C++, or Rust project that demonstrates this?

If you’ve read this far, then you’re probably reading it on a device with an NPU too—and your silicon vendor won’t answer these questions, either. In fact, no manufacturer seems able to survive basic technical due diligence on... a headline feature??? The emperor has no clothes.

On the other hand, asking whether we’re in a hype cycle has a trivial answer: yes. Of course. Of course we are. But when wasn't the emperor naked?

The better question is: given that we’re in a hype cycle, what, therefore, should we do? How do we separate real engineering from marketing bluster?

Because in the dot-com era, some companies were Amazon; others were Webvan. We're going to work for somebody; so which one is which?

Past, not future

Another messaging disaster from the AI camp is constantly changing the subject to the future. I haven’t seen the future yet. Neither have they.

So let’s do a hard U-turn and talk about the past. About things that already happened, that we can all observe and measure.

Static vs dynamic typing

Around the 2000s—and still today—there’s an ongoing debate about static versus dynamic typing. I’ve encountered it constantly in consulting work, and spent dozens of hours in meetings about it.

You may be familiar with standard hacker essays like “The Unreasonable Effectiveness of Dynamic Typing for Practical Programs” or “Parse, Don’t Validate.” Great essays. They get cited pretty often.

Instead, I want to focus on Dan Luu’s meta-review of the empirical research—something I’ve never seen brought up in a meeting:

Unfortunately, they all have issues that make it hard to draw a really strong conclusion… under the specific circumstances described in the studies, any effect, if it exists at all, is small… most of them are passed around to justify one viewpoint or another.

If he’s right—and I am convinced that he is—then:

  1. We don’t actually know whether static or dynamic typing has a meaningful effect.
  2. Many people are very confident that it does, for reasons that don’t survive scrutiny.
  3. If researchers got a nickel every time programmers cited an arXiv paper, the debate might have been settled by now.
  4. Probably, most things you read are just justifying a viewpoint. Like this one.

Most of those meetings were a waste of everyone's time.

An analogy

If you’ve read a peer-reviewed paper on agentic coding, it’s probably “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” It scored 775 points on Hacker News.

And here’s the sentence that programmers quote:

Developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%.

Big if true.

But notice what a randomized controlled trial actually tells us: it controls for our own delusions. Interpreted literally, the result suggests we’re all 39% wrong about ourselves, regardless of whether we even care about AI at all.

A participant in the study later wrote that their own estimates were wildly off—hours instead of minutes—which echoes a very old lesson from Joel Spolsky’s Evidence-Based Scheduling that I still remember.

But once you start reading all the research, Dan Luu style, the picture becomes more complicated. Earlier RCTs found productivity increases of ~20–25%, with wide confidence intervals. So which is it? Does AI help or hurt? I am deliberately using a hard date cutoff, because you didn't read any more recent papers, either.

I’ll leave the parallel problem that is playing out in research to the researchers. What you can verify at home is that those papers had much bigger Ns than N=16, and got less than half the discussion in programming circles! Pre-existing bias is doing most of the epistemic work around here.

Which is not surprising, actually.

Confounding factors

A very reasonable hypothesis is that some confounding factor explains all the contradictions. Maybe some people have good results, and other people have bad results. Often this is hand-waved as a “skill issue.”

I think that’s broadly true. Practice matters.

But where is the tutorial explaining what this skill is? Where’s the well-cited essay breaking it down? Where's that paper on arxiv? Why isn’t that the focus of all the meetings?

“I tried the models”

I need a whole section on this common programming refrain. Here it is:

I've tried this consistently since GPT 3.5 came out, with every single SOTA model up to GPT 5.1 Codex Max and Opus 4.5. Every single time,

Or:

Feel free to claim that paid versions or agentic models would be better but that’s not what I’m testing.

Or:

I don’t think that’s really a fair summary. He did try multiple assistants, including Claude.

Everybody in my camp is lying to you about this fact: the models don’t matter. Stop trying new models. Try something else. Anything else. Literally: imagine any other parameter than that, and you will accidentally stumble into new research in computer science.

Yes, models differ. Yes, some are better at some tasks. Yes, I can give you the "it depends" interview answer and speak at length about why this model is better than that model at some task.

Since this isn't an interview, here's the answer from a leaked source: We Have No Moat, And Neither Does OpenAI.

In particular, you can get absolutely astonishing results from awful models. Let me repeat that: use bad models.

If you aren’t getting interesting results: you should ask why. Then you should ask why, again. And again, and again, until you finally discover there's a replication crisis in the entire field of programming.

This is classic Feynman territory, that I have tried, and have failed, to write any better. So instead I just need to quote it at length:

Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this—I don’t remember it in detail, but it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do, A. So her proposal was to do the experiment under circumstances Y and see if they still did A. I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person—to do it under condition X to see if she could also get result A—and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control. She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1935 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens. Nowadays there’s a certain danger of the same thing happening, even in the famous field of physics.

Well: Nowadays there's a certain danger of the same thing happening, even in the famous field of programming.


But surely, Mr. Feynman, someone is getting these impressive results?

Like, if I am not kidding you, then somebody out there, somewhere, is getting impressive results from AI models.

You’re absolutely right! And here they are:

In the past week, just prompting, and inspecting the code to provide guidance from time to time, in a few hours I did the following four tasks, in hours instead of weeks - antirez

I traveled around all year, loudly telling everyone exactly what needed to be built, and I mean everyone... I went to senior folks at companies like Temporal and Anthropic, telling them they should build [a project], I went up onstage at multiple events and described my vision for the orchestrator. I went everywhere, to everyone. ...But hell, we couldn't even get people to use Claude Code - Steve Yegge

I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. Instead of trying to prove it, I asked GPT5 about it, and in about 20 seconds received a proof. The proof relied on a lemma that I had not heard of (the statement was a bit outside my main areas), so although I am confident I'd have got there in the end, the time it would have taken me would probably have been of order of magnitude an hour - Timothy Gowers

These results demonstrate that large language model-guided evolutionary search can autonomously discover mathematical constructions that complement human intuition, at times matching or even improving the best known results, arXiv

A recent example of this occurred on the Erdos problem website, which hosts over a thousand problems attributed to Paul Erdos... Already, six of the problems have now had their status upgraded from "open" to "solved" by this AI-assisted approach... an Erdos problem (#728) was solved more or less autonomously by AI (after some feedback from an initial attempt) - Terence Tao (2)

This year, AlphaDev's new hashing algorithm was released into the open-source Abseil library, available to millions of developers around the world, and we estimate that it's now being used trillions of times a day. - DeepMind

I write all my apps in SwiftUI and I haven't written code since ~May. Everything's open source, so I can back it up. - Peter Steinberger

I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked. - Cloudflare

An editorial note: the main problem I had writing this is that new things kept happening while I was drafting it. So I really need to get this off my plate, so I can go back to some actual work.

Also, sure. I, too, am writing the best code I ever wrote in my life. I, too, have stories about how I solved X bug in Y hours that K people missed for N years, because very competent eyes were not enough to make all bugs shallow. But my stories are just more boring than accidentally solving an Erdos problem. And so is everybody's.

The main point is: this is an iceberg. For everything that is visible, there are ten things that I didn’t write a longer and much more boring essay about.

Well, what can we say about the rest of the iceberg?

Silence as strategy

A recent 2025 game of the year lost that award over some milquetoast AI controversy. Its financial backers are laughing all the way to the bank, but among its credited contributors were at least six working programmers—people who could have listed that award on their résumés. Instead, a single buried quote in an interview with El País, mentioning “some AI,” was enough to disqualify them.

The lesson is typical, but also very important:

  1. Don’t talk about using AI.
  2. Don’t write the obvious tutorial.
  3. Don’t explain the trick.

It’s safer as a career move to appear mysteriously productive than to explain why.

There’s an iceberg—and most of it is under the ocean.

Turn around now.


The silence of the params

If “skill” is the factor that explains the contradiction, then the most skilled practitioners may be the least likely to explain themselves.

I’m one of them, I guess. Like: of course my RSS feed is filled with people more skilled than me. But I can solve some pretty wild undiscovered bugs, using agents.

Stack Overflow

Many of us have noticed that Stack Overflow is dying. Here's the chart about it.

The usual explanation is that people ask LLMs their programming questions, instead of asking humans.

I think this explanation completely misses the point.

If programming is increasingly about wrangling agents, then “programming questions” are now questions about how to wrangle them—which may no longer fit traditional definitions of programming at all. In fact, many folks now hold the paradoxical idea that "programming questions" are banned on Stack Overflow.

The irony is painful. Here was Jeff Atwood's original essay:

There’s far too much great programming information trapped in forums, buried in online help, or hidden away in books that nobody buys any more. We’d like to unlock all that. Let’s create something that makes it easy to participate, and put it online in a form that is trivially easy to find.

And now, modern programming knowledge is trapped again: scattered across private chats, unpublished workflows, and quiet practice. Who is publishing it? Why should they? What incentive do they have?

As a side note, apparently building a website where the main cofounders disagree with each other, but nonetheless are committed to unlocking programming knowledge, is a winning formula. So I guess if you disagree with me and want to solve this problem anyway, send me an email.

Stage magic

There’s another profession that works this way, and it has done so from very ancient times: magicians.

Magicians show you what’s possible without telling you how. I know exactly what your card is, but I won't tell you anything at all about how I know.

What many people don't know is that professional magicians mostly don't explain the trick to each other, either. The entire discourse among working magicians is to assume you're probably 80% of the way there already, obliquely hint at the remaining 20%, and if you get the hint, that seems fair enough.

There are a few reasons why they work this way. One, it weeds out the outsiders, the people who don't know the 80% at all, and whom we need to stay fooled to keep this entire thing going. Two, the 80% is easily assembled from standard industry knowledge, so a standard interview question is to guess how a trick is done by combining 80% industry knowledge with 20% of your own creativity. And three, on rare occasions, if you force people to reinvent an effect on their own, sometimes they do! And they accidentally invent new magic.

Right now, programming is in that awkward phase where some people are insisting the trick is fake, while others are very carefully not explaining how it works. While very quietly, some people are applying variations on standard engineering principles, to invent new magic, that will likely be lost to our discourse.

In conclusion

What’s actually happening is quieter, messier, and harder to talk about than a hype cycle. The gains are real, unevenly distributed, and tightly coupled to skills we don’t yet have names for, let alone tutorials. The people getting the most value are often the least able—or least willing—to explain how they do it, because explanation is risky, unrewarded, and professionally counterproductive.

That leaves us with a distorted picture: skeptics honestly reporting failure, advocates cautiously reporting success, and almost nobody describing the method in between. We mistake silence for absence, and marketing for substance.

The right conclusion is not that AI “works” or “doesn’t work.” It’s that our epistemology is broken. Hell, our entire field is broken. We are bad at measuring our own productivity, bad at sharing tacit technique, and increasingly bad at agreeing on what counts as evidence.

Hype cycles come and go. This is different. The trick is real—but so far it’s incomplete, uncomfortable, and mostly happening offstage.

Terrain heightfield interpolation
A new interpolation function
Sat, 4 Jun 2022

The problem

Suppose you have a heightmap. At each position, you store a height value. How do you find heights for positions "between" ones in your map?

For example, we have a small number of terrain values in our array, and we want to draw the terrain to our 5K display. How do we pick a height for some pixel?

Standard approaches

Well, there are a billion ways to interpolate between some number of arbitrary points, as you may recall from your compsci math classes. Let's work an example. Here's a gentle cliff:

interpolation data

Side note: I will work with 1-dimensional problems in this post, although the problem is easily extendible to higher dimensions.

Let's start with the easiest solution, a line:

$$ line(t)= \begin{cases} 0 & t \le 3\\ 100(t-3) & 3 < t \le 6\\ 300 & \text{elsewhere} \end{cases} $$ a line

This is, you know, pretty much fine. It looks how we imagine terrain would look with this data:

  1. There's a flat part on top
  2. A flat part on the bottom
  3. And a steep slope in the middle

At this point you are, you know, 80% of the way there. Depending on your requirements you can probably ship this line off to the GPU, which will be connecting them with lines (e.g., triangles) anyway.
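If you want to play along at home, the piecewise line is trivial to write down. A minimal Python sketch (the breakpoints 3 and 6 come from the example data above):

```python
def line(t):
    """Piecewise-linear height for the example cliff: flat at 0 up to t=3,
    rising with slope 100 until t=6, then flat at 300."""
    if t <= 3:
        return 0.0
    if t <= 6:
        return 100.0 * (t - 3)
    return 300.0
```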

But now you want to shade in between your polygons. And maybe light them. Well, for that, we want a nice curve and a clean derivative. Here's the derivative of our line:

line derivative data

It doesn't even exist at very reasonable positions like $3$. So, you might go looking for a nicer curve that's easier to work with.

Cosine

So, you consult Paul Bourke, the internet authority on practical interpolations. First, cosine interpolation.

cosine interpolation

Most people have the reasonable intuition that terrain doesn't wriggle around like that. Only if you are a programmer or a mathematician would you ask, "but maybe a wriggle is okay if it's very small?"

It isn't okay, as we see in the derivative.

cosine interpolation derivative

If you tell a GPU to draw something like the line as geometry (or you just have it in your mind somewhere that that's how it's supposed to look), and then light it based on this function, you're going to have a bad time. Your cliff is supposed to be a slope, but it has these flat regions inside it. That's no good.

Cubic

So you go back to Paul and ask for a better function. One which offers true continuity between segments. One with a continuous derivative. This time, he recommends the cubic interpolation:

cubic interpolation

This is... not even a little bit how terrain looks. $0-3$ is supposed to be flat!

Then, you look at the derivative:

cubic interpolation derivative

Hoo boy. I mean it is technically continuous. It isn't, you know... sensible in any way.

The solution

Just go back to the top of this post and draw the terrain between the points. Here is the curve I drew:

terrain interpolation

I submit that you or anyone would draw a curve "about like this". It is the curve that terrain ought to look like. We can break down its properties more formally:

  1. As described earlier, it is flat when the line is flat, sloped when the line is sloped. This is unlike the cosine and cubic interpolations.
  2. It has a smooth arc, unlike linear interpolation.
  3. It's biased symmetrically, pulling the ground inwards toward the slope. We can imagine a function that pulls outwards, or that does one thing on the left and the other on the right, but a lot of those functions tend to violate our “terrain intuition” on some inputs.
    1. (Alternatively, some are global in scope, requiring some in-order process of the whole heightfield to ensure consistent results.)
  4. It's obviously going to have a nice derivative.

So to derive this curve, we start with the cubic bezier $B(a,c_1,c_2,b)$. The nice thing about bezier curves is that points $a$ and $b$ define the points between which we are interpolating (so, that part is done!), while points $c_1$ and $c_2$ indirectly control the tangent at $a,b$. So, by clever choice of $c_1,c_2$ we can guarantee a continuous derivative.

Suppose we want a function $terrain(l,a,b,r)$ which interpolates from $a$ to $b$ (with $l$ on the left of $a$ and $r$ on the right of $b$.) We first define points on either side of $a,b$:

$$ \begin{align} a_l = (a - l)*scale; \\ a_r = (b - a)*scale;\\ b_l = a_r = (b - a)*scale;\\ b_r = (r - b)*scale;\\ \end{align} $$

$scale$ here is a constant in $[0,1]$. I generally like $0.4$.

To "pull inwards", we choose for our slope from $a,b$ the minimum of these, taking care to get our signs correct:

$$ \begin{align} a_s = \begin{cases}a_l & |a_l| < |a_r| \\ a_r & otherwise \end{cases} \\ b_s = \begin{cases}b_l & |b_l| < |b_r| \\ b_r & otherwise\end{cases} \end{align} $$

Then we simply calculate control points:

$$ \begin{align} c_1 = a + a_s;\\ c_2 = b - b_s; \end{align} $$
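Assembled end to end, the scheme above can be sketched in a few lines of Python. This is an illustrative rendition of the math, not code from the post; the names `cubic_bezier` and `terrain` and the default `scale=0.4` mirror the notation and constant above:

```python
def cubic_bezier(p0, c1, c2, p3, t):
    """Evaluate a 1-D cubic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * c1 + 3 * u * t**2 * c2 + t**3 * p3

def terrain(l, a, b, r, t, scale=0.4):
    """Interpolate a height between a and b, where l and r are the
    neighboring heights to the left of a and the right of b."""
    a_l = (a - l) * scale   # tangent candidate from the left segment
    a_r = (b - a) * scale   # tangent candidate from the interior segment
    b_l = a_r               # the interior candidate is shared
    b_r = (r - b) * scale   # tangent candidate from the right segment
    # "Pull inwards": take the smaller-magnitude slope on each side.
    a_s = a_l if abs(a_l) < abs(a_r) else a_r
    b_s = b_l if abs(b_l) < abs(b_r) else b_r
    return cubic_bezier(a, a + a_s, b - b_s, b, t)
```

Note how a flat segment stays flat: with $l = a = b$, both tangent candidates at $a$ are zero, so the curve degenerates to the constant $a$, which is exactly property 1 above.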

Now that we've defined our function, let's check out the derivative:

terrain interpolation sponsored by batman

This should be relatively sensible to do shading and lighting with. And of course, Batman is clearly the correct derivative for any situation.

Here is a comparison with the line derivative, showing a moderate correspondence. For an effect that is different enough to be worth implementing, but not so different as to look wrong, this is a good result.

comparison of terrain and line derivatives

There is an "easy" extension of this approach to three dimensions following the linear => bilinear transformation.
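That extension can be sketched as a separable pass: run a 4-point 1-D interpolator along four rows at $x$, then once across those row results at $y$, mirroring how bilinear interpolation composes two linear ones. The function name, the border clamping, and the plain-lerp demo below are my own illustrative assumptions; you could pass the terrain scheme as the 1-D interpolator instead:

```python
import math

def interp_grid(grid, x, y, interp1d):
    """Separable 2-D extension of any 4-point 1-D scheme: interpolate
    along the four relevant rows at x, then across those results at y.
    grid[j][i] holds heights; indices are clamped at the borders."""
    def clamp(i, n):
        return max(0, min(n - 1, i))
    ix, iy = math.floor(x), math.floor(y)
    tx, ty = x - ix, y - iy
    rows = []
    for dj in (-1, 0, 1, 2):
        row = grid[clamp(iy + dj, len(grid))]
        l, a, b, r = (row[clamp(ix + di, len(row))] for di in (-1, 0, 1, 2))
        rows.append(interp1d(l, a, b, r, tx))
    return interp1d(*rows, ty)
```

As a sanity check, feeding it an ordinary lerp (which ignores the outer neighbors) on the plane `grid[j][i] = i + j` reproduces bilinear interpolation exactly.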

I blame 'blame culture'
An examination of the political discussion around the Texas power crisis
Sat, 20 Feb 2021

There was water on my floor. I'm lucky that I saw it while it was happening, because I acted quickly and was able to warn others and divert the flow away from the building. Maintenance and emergency services were largely unable to respond even to shut it off, and it wouldn't make much difference if they did because the same thing happened in every building everywhere in the city and the entire water system just dumped onto the ground everywhere. As I write this, lots of very overworked people are trying to jerry-rig thousands of miles of pipe on the public and private side hoping to meet in the middle quickly enough to stem what is already a humanitarian disaster.

I write this to give you a glimpse of what this is like. Not that I have it particularly bad. I have a stockpile of potable water. I can write a poorly-thought-out blog post. I never lost power, unlike millions who had their temperature reach 30 or 40 indoors for 4+ days.

This is the kind of event that you hear about happening in the "shithole countries". Once as a child we lost power for a week in 75-degree weather in a remote town with 150 people. Texas is one of the largest economies, has dense cities and produces twice as much energy as the next-largest state. This is nuts. Not that it means much in 2021 to say something is nuts but this feels a lot nuttier than the overlapping pandemic and/or constitutional crisis. Which obviously we also have in Texas.

It is frustrating to see the national conversation so divorced from what it feels like to me on the ground. I want to be clear. It is true that climate change aggravated this storm. It's true that state Republican leaders failed to govern and are giving interviews telling galling lies to cover their ass, that our brand of deregulation is a mistake that kills people, that Ted Cruz is a spineless coward who ought to be recalled, that natural gas is the biggest culprit, etc. But none of this is new. It has been going on 7 days a week for 10+ years. It is merely "a Tuesday" in our society. So the fact that this is what the other parts of the country are told about the problem, while accurate in some sense, in another sense is deeply misleading.

These talking points are the opinions of people who have showered sometime this week. Who have not had a new pillar of civilization fall over 3 of the last 4 days and are wondering which one will be next. Who did not get a boil water notice while their repurposed gallon jug was frozen solid, who are not melting snow to flush a toilet (surprisingly difficult actually! You won't believe how little water is in snow!) We are a little past debates about windmill policy right now.

The frustrating thing is that there really is a Wrong Windmill Policy™️ and a Right Windmill Policy, and which one we choose does matter. But also in 2011 everything failed in exact proportion, and the same will be true in this crisis.

For wind units, 16 percent failed, compared to their 15 percent of total units. For simple-cycle units, 21 percent failed, compared to their 20 percent of total units. For gas-steam and coal units, the percentage that failed exactly matched their percentage contribution of total units.

Depending on which news station you watch, this fact is being spun as "wind is unreliable" (lie) or "republicans are lying about wind being unreliable" (true). I don't want to equate these because they're not equivalent; one is psychopathic and the other is doing its best to negotiate a hostage situation. Unfortunately, either way we are not talking about the real causes.

The real causes are not good primetime TV. They are so boring that they happen at your home and workplace every day. You don't buy an energy-efficient heater for your house because it's only barely cold, only 3 days a year. Power plants don't have cold equipment for the same reason. If they do, it breaks because it's never used, nobody requires drills or conducts inspections. Grid operators rely on unreliable estimates from optimistic middle managers trying to hit their numbers and then surprise, they do not hit them. Operators overwork and underpay their staff and then low-morale employees make mistakes in stressful situations. There's little spare fuel because it's more efficient not to and you have to compete with everyone else who also does that. These are only the problems I had time to stress-read about between separate floodings. You can read them in the technical reports from the 2011 crisis. Some of them are even minor contributors, perhaps they will be bigger in a bigger disaster. We'll find out soon I expect.

These issues aren't political. I mean, I guess they are now because one party took us hostage and we can't get out of the car. And I guess they always have been for many people who are at the wrong end of a government policy and the rest of us are slow to empathy about "other people's problems". But in a state that hasn't failed, the power grid is not a wedge issue, it's not a culture war, it's just stuff you do so people can flush toilets and not be dead.

What I'm trying to say is that "infrastructure week" is a funny political punchline but if you tell it long enough the infrastructure becomes the joke. When we can't talk about infrastructure except as a minor plot point in a broader culture war we have problems that merely winning the culture war cannot fix. Winning matters of course. But so does plumbing.

There is nothing unique about what is happening to us. Texas is actually a deeply divided nation, and the weather came for everybody. You were not spared because you lived in an area that voted for adults instead of children, you had the Right Opinion In The Comment Thread, you bought clean energy from the city or you access Real News. Your water did not check your party affiliation or windmill policy ideas before it turned off, your heat did not ask your personal feelings on Ted Cruz. You were spared slightly if you had survival skills, were paranoid enough to have prepared for the imminent collapse of civilization, had been through a disaster before, had means or luck. Even then, it's a traumatic experience and it's still ongoing.

My advice? Get off facebook and watch plumbing videos. Texas is in the rare circumstance that we might become "scared straight" enough to sort-of get it together. (Or maybe not, we didn't last time and it's too early to tell.) What is clear is that the rest of the country has exactly the same boring problems we do but seems very determined to tilt at windmills.

I'd recommend redirecting that energy into learning not to die. Maybe not as fun but it has the key advantage you don't die. Pro tip: it's much more convenient to watch plumbing videos when you still have electricity.

]]>
https://sealedabstract.com/posts/antipolicy-as-policyAntipolicy is policyA recurring trick in modern politicshttps://sealedabstract.com/posts/antipolicy-as-policySat, 5 Dec 2020 00:00:00 +0000

If you choose not to decide, you still have made a choice.

A recurring trait of conservative or libertarian argument is presenting a policy as if it isn't one. A representative example:

I find it disturbing that people have stopped supporting free speech when they stopped liking its content. Disturbing because either these people have caved to yet another mob thought or never believed in the concept at all. I don't agree with most of the content I see on my platforms. The content doesn't change me, however-- and I refuse the premise that we tech workers know best and therefore must protect the fragile little minds of everyone else.

The argument rests on the idea that "free speech" is an unmitigated public good. But what is free speech? Is it:

  1. The right of individual people to publish controversial ideas on their website, which mostly nobody reads?
  2. The right of platforms to moderate user-generated content according to their own partisan viewpoint?
  3. The right of persons to be protected against retaliation for their speech from the state? A platform? Their employer? The general public?
  4. The right of the employer, platform, or general public to express their own view of the person from 3?
  5. Some kind of fairness doctrine, where we give sense and nonsense equal airtime?

Inevitably, phrases like "free speech" or "free market" actually mean some particular policy position like these, one that favors one party at the direct expense of another. This is, of course, nothing more or less nefarious than any other policy proposal. But the genius of the phrase is that it doesn't sound like taking a position. It sounds like an impartial referee that lets the chips fall where they may.

Of course, impartial referees are very important. But they are important within the context of operating a game that has rules. You can't have a referee as the rules; then it isn't a game anymore. Nobody would play. Or perhaps everybody would play, but it's not a useful game.

For example, let's say we have no rules about what content is allowed on the Internet. Inevitably what happens is private platforms spring up that have rules. Moderators, social norms, EULAs, Terms of Service, and so on. So what you have accomplished with the deregulation is outsourcing the regulation to somebody else.

In fact, this same situation is what led to governments in the first place. We started in a condition of human society without rules, and we wound up with federalism. So the whole topic of "internet governance" is just another member in the family of tribal, federal, state, international, corporate, etc., governance.

Now deregulating some subset of human life may be a perfectly legitimate accomplishment. In the internet context, we might imagine that various platforms arise with different rules, and consumers have some kind of choice of preferred regulatory regime, like the "laboratories of democracy" in federalism, and this is in itself a net social good. Of course, nothing about our deregulation specified that in particular would be what happens. It has not always happened in the case of actual governments, or of private corporations. And so it may also happen that all platforms which arise have quite similar rules and you have no effective choice between them. Or that all platforms consolidate and so you have no choice at all. To a first approximation, both US political parties level some degree of these criticisms against the major Internet platforms today.

If so, these outcomes would have much in common with what we wanted to avoid: all the advantages of a totalitarian regime, without the disadvantages of being able to influence it at the ballot box, since we abdicated that one. Our antipolicy had the effect of policy, and arguably created the thing we were afraid of.

Usually these sorts of antipolicy positions rest on the implicit assumption that such a consolidation cannot actually happen, whether because of competitive pressure, because the invisible hand is infallible, or just from some general sense of the superiority of not making decisions. If so, there would be no disadvantage to additionally preventing totalitarian outcomes by means of regulation. However, if you propose that, you will quickly discover that your regulation is in a superposition of both a) unduly restricting someone's freedom to form their own totalitarian regime, while simultaneously b) nobody would actually get away with that because of competitive pressure. In other words, the problem with regulation is that it is policy, and has all the drawbacks that policy can have, whereas implicitly deferring to somebody else's policy is immune to this sort of analysis.

In fact, antipolicy is policy. It is merely the policy that the thing ought to be decided at a lower level of abstraction, by the nation instead of an international body, the state instead of the nation, the corporation instead of the government, a middle manager instead of an employer, or an individual instead of their tribe. It presents itself in the clothing of abdication but is actually empowering whatever body takes up the matter instead, sharing some responsibility in their decisions and in the range of outcomes.

It may very well be that the lower level has better information or produces beneficial heterodoxies. It may also be that the lower level has worse information and will produce unfair variation, a single totalitarian regime, an oligarchic network, or any number of negative outcomes. Like any policy proposal, the thing requires examination on its merits.

What we cannot do is abdicate our responsibility by merely abdicating regulatory authority. Antipolicy is itself a policy. We cannot wash our invisible hands.

]]>
https://sealedabstract.com/posts/paperclips-r-usPaperclips R UsPerhaps the dangerous future of strong AI is here already.https://sealedabstract.com/posts/paperclips-r-usWed, 25 Nov 2020 00:00:00 +0000The rationalists have a thought prophecy to warn us about the dangers of strong AI:

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety... The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else.

I believe we are misinterpreting the prophecy, and at great existential risk.

THEN WHO WAS PHONE?

Intelligence is boring

When people think of AI, they think of models like GPT-3. Lots of people have been publishing very cool demos on what GPT-3 can do.

Of course, there's also discussion about how GPT-3 isn't smart at all. For example,

GPT-3 often performs like a clever student who hasn't done their reading trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative.

Allegedly a criticism of AI, I think it's actually an indictment of humans. Humans produce bullshit essays all the time. Many would argue you are currently reading one.

When we think of humanity, we like to imagine that we are a bunch of Einsteins unraveling the mysteries of the universe. But we are also the ones doing the genocides and the slavery. If you take the median human activity, it might very well be writing bullshit essays.

What humans create

As Exhibit A, I will present to you the top comment on a thread in one of the most popular subreddits on Reddit. The poster wants to know if they're an asshole for sending their kid to school in a mask.

Also, this thread was from 8 months ago.

[You're the asshole]. Because the mask doesn’t help protect your child. And you’re teaching him to be paranoid.

This comment has 4135 upvotes. Or another one:

Please show your proof of these "mixed responses". Because the wearing of masks is 100% proven to be ineffective and do nothing. It's unanimously agreed upon that the masks are useless in these situations.

941 upvotes.

8 months later, we see the world a little differently. (Or at least, some of us do.) But these responses are overwhelmingly the view of the thread. A thread that some 2500 people voted on, and nearly 500 people were motivated to write their similar opinion.

Now perhaps Reddit is a cesspool, and I should have used equivalent bullshit on HN, or bullshit studies in science journals, or whatever social institution you think is excellent and not subject to fact-free human pontificating. The real fact is that we are not exactly Einsteins, as a civilization.

What we are instead

If you assume that each of the roughly 2,500 participants spent 5 minutes on the thread, it works out to about 9 person-days. We managed to turn one person's probably-imaginary school story into 9 days of misinformation. That is an amazing feat of optimization.

It turns out we are really good at optimization. I won't bore you with Moore's law, the obviously canonical optimization graph. Instead, here's carbon emissions per capita:

emissions per capita

Where I'm going with this

We do not have to wonder about what if we someday get an AI that optimizes production of paperclips at our expense. Human society is doing that now. Human society is both clever enough and stupid enough to be an effective paperclip maximizer. It also has the Climate Change Advantage™️ of being slow enough that we can't mount an effective attack.

Things that are nobody's job

Humans like to pin things on evil masterminds. Your problems are Senate Republicans, the previous administration, Mark Zuckerberg, fake news, etc. Certainly, I think some combination of those are evil.

But the paperclip maximizer is another kind of evil: the quest for office supplies at the expense of every other goal. It is this form of evil that presently permeates our society at every level.

Consider COVID. Whose job is it to stop disease? A group of scientists with very little political power. On the other hand, it's someone's job to keep their bar open, another's job to drive the stock market, another's job to exploit fear to be re-elected, another's job to drive engagement on a website, your job to remain employed, and so on. The scientist doesn't have a chance against the interests of all these others. Besides, they are busy actually doing scientist stuff. They have no expertise in receiving death threats, navigating an unstable president, or all the stuff we've piled on top of the original problem.

With that example developed, here's a braindump of other things that are "allegedly" some kind of important problem, but we dedicate approximately zero resources to them relative to how much stuff we pile on top to make them worse:

  • It isn't anyone's job to fight climate change, relative to how many people have jobs to perpetuate it
  • It isn't anyone's job to heal racial division, relative to how many people have jobs to inflame it
  • It isn't anyone's job to run America, relative to how many people run for office
  • It isn't anyone's job to inform the public, relative to how many people have jobs to misinform
  • It isn't anyone's job to connect people, relative to how many people have jobs to divide them
  • It isn't anyone's job to end poverty, relative to how many people have jobs to perpetuate it
  • It isn't anyone's job to create public discourse, relative to how many people have jobs to create engagement
  • It isn't anyone's job to replace the healthcare system, relative to how many people have jobs in that system
  • It isn't anyone's job to fund good startups, relative to how many people fund juicing machines
  • It isn't anyone's job to pass legislation, relative to how many people are there to repeal it
  • It isn't anyone's job to do research, relative to how many people are trying to get published
  • It isn't anyone's job to make software low-latency, relative to how many people are employed to do Electron
  • It isn't anyone's job to make people well-rounded, relative to how many jobs sell coke or get people to binge-watch Netflix
  • It isn't anyone's job to protect privacy, relative to how many people have jobs selling your data
  • It isn't anyone's job to write open source, relative to how many people have jobs profiting from someone else's open source
  • It isn't anyone's job to lower housing costs, relative to how many people are employed to increase them
  • It isn't anyone's job to protect democracy, relative to how many people are employed to destroy it
  • etc

I mean, just look at the Cabinet. There's no Secretary of Climate Change, there's no Race Minister, there's barely a Board of Election Fairness. There's just Department of Blowing Things Up and Office of Getting Re-Elected. From cycle to cycle there may also be Department of My Family and Grifting Chairperson. Your company has no Keystroke Latency Initiative, it only has Less Servers Initiative. Your city has no Commission on Housing Costs, it only has Commissions on Increasing Property Values.

To an extent completely unappreciated, human society actually depends on things nobody has any incentive to do. In less efficient times, this was less of an issue: people could afford to spend time on things that weren't, strictly speaking, measured. They could write news articles that weren't moneymakers, they could take time off campaigning in order to govern, they could close their bar to protect their community, they could educate students instead of doing test prep. When people talk about "how things used to be", there is a silent "before we optimized everything".

These days, fixing healthcare and having good public discourse are resources that could be used to make more paperclips. Paperclips are eating the world. To a large extent, the AI apocalypse is already here.

A microexample

Here's one other example, just to get some air from the politics. One of my favorite YouTube channels is Linus Tech Tips. Over the years they've produced a lot of heavily-researched and quite frankly amazing videos on such topics as whether your PC fans should blow out or in, whether you can game at 16k resolution on high-end hardware, quasi-scientific experiments on whether fps improves competitive gaming, and so on. Basically, the kind of creative and delightful indie content that you would imagine to be free of corporate paperclip pushing.

Meanwhile, here are some thumbnails from their videos:

linus tech tips thumbnails

Evidently, it's somebody's job to get Linus to make a ridiculous face and make it the thumbnail for every video, along with a RIDICULOUS headline DESTROYING his enemies. Because humans are emotionally-driven and this sort of thing drives engagement. If videos I had worked hard to write, shoot, and edit were packaged like this, I'd be embarrassed. Then again, if the videos had a more "sensible" thumbnail I might never have watched one.

If the paperclip maximizer is turning even high-quality independent creators into paperclip pushers, the world is getting very dark indeed.

Avoiding business as usual

It is in this context that I am quite annoyed at the push to be "apolitical" or "end the culture war", that people should just do their jobs, as if by ignoring the problems of society they will go away.

Don't get me wrong, I am as annoyed at the culture war as anybody. But the thing is, "doing our jobs" is the whole problem. We wrote the software that relentlessly optimized human society without thinking too much about what it ought to be optimized for. So we ended up optimizing paperclips, at the expense of every other human value.

"The culture war" is in fact the process of deciding what our values are, and therefore what our jobs should even be. The impulse to ignore it is I think quite dangerous.

]]>
https://sealedabstract.com/posts/metapoliticsToward a Higher-Order MetapoliticsHow to improve political conversations onlinehttps://sealedabstract.com/posts/metapoliticsSat, 3 Oct 2020 00:00:00 +0000

Being abstract is something profoundly different from being vague … The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. - E.W. Dijkstra

I think any objective observer would conclude we have a political malaise in the US, and possibly worldwide. Depending on who you talk to, this is maybe caused by social media, capitalism, Russia, boomers, millennials, racism, Fox News, the death of expertise, and so on.

But separately from any of these causes, I think we suffer from a second-order issue, namely a poor political "typesystem". This concern is not getting nearly enough (or, really, any) weight, and I think this is a mistake.

What is Metapolitics?

Programmers intuitively understand that we can't write code for every case. You don't rewrite the print function for every string you print; you write it once, and pass it different values. There was a time, back in the dark ages of technology, when people wrote machine code line-by-line, for every function in a program. But that doesn't really scale to the problems we want to solve as a society, and over the decades we developed increasingly robust tools for the programmer to work at higher and higher levels of abstraction.

The nature of these tools has varied over time, along with the specific constraints and concerns of each era. At one time, there was a great debate over compiled and interpreted machine models, before JITs threw a wrench in our two-party system. For a minute, OO and its various patterns were dominant, before we started mixing in more patterns from the functionalists and considering alternatives to inheritance. For a while, dynamic typing provided the most expressive polymorphic abstractions (at some cost of speed/correctness), before static typing caught up with modern features like protocols/traits, generics, constraints, and inference. Right now, languages like Rust are statements that static abstractions can be both safe and fast, at the cost of some productivity.

The details of how and whether, e.g., generics are really competitive with runtime casting is not my point. Rather, I mean that programmers have conversations about those kinds of details, and not about the details of "how to write a function to buy a plane ticket". The closest you will get to "plane ticket code" in a programming blog is the plane ticket as a specific example of a general pattern. Plane tickets might be the contents of an array, or they might conform to a Buyable protocol, or we create nodes for them in the DOM tree or a table for them in the database, views that display them in UIKit, etc. But we work and think, pretty much entirely at a level of abstraction above the plane ticket, if not several levels above.
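The "level above the plane ticket" is easy to make concrete. Here is a toy sketch in Python (the `Buyable` protocol and `PlaneTicket` type are inventions for illustration, not any real API): the generic code is written once, against the abstraction, and never mentions planes.

```python
from dataclasses import dataclass
from typing import Protocol


class Buyable(Protocol):
    """The abstraction: anything with a price can flow through generic code."""
    price: float


@dataclass
class PlaneTicket:
    """A concrete instantiation; checkout() below knows nothing about it."""
    origin: str
    destination: str
    price: float


def checkout(items: list[Buyable]) -> float:
    # Written once at the level of the abstraction, re-used for every Buyable.
    return sum(item.price for item in items)


total = checkout([PlaneTicket("AUS", "SFO", 250.0),
                  PlaneTicket("SFO", "AUS", 260.0)])  # 510.0
```

Political arguments, by contrast, rarely state which protocol a situation is supposed to conform to before instantiating the argument over it.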

In politics, however, all the discussions are "about the plane tickets". Even things that are supposedly abstractions, like "pro-life", tend to have very limited applicability to any context that is not the obvious one you already know I'm talking about. Every situation is a specific situation, about which no higher principles or conclusions are to be drawn. And as any assembly programmer can tell you, writing separate code for each situation can be great fun, but it can also be quite tedious and error-prone. "Tedious and error-prone" is, I think, a very accurate description of our current political situation.

Specific examples

I saw this thread recently on Hacker News:

Call me a corporatist, but I don't even think Google and Apple should have to justify their 30% pricing at all as "reasonable". They could charge anything they like -- 100%. It's a voluntary contract, and Epic doesn't have to engage in it. We don't have to engage in it. There's no fundamental right to offer an app on an app store, and if the scope of the market is defined as the market for video games/content, then none of these places are monopolies and companies/people are free to set their desired prices.

This encapsulates an argument I've heard pretty often against app store regulation. However, one user instantiates the argument over different types:

Call me a democrat but I think the United States of America shouldn't have to justify its laws at all and should regulate Google's and Apple's store however they want, tax them 100% instead of 20%, after all they voluntarily made a business, they can move their HQ to Swaziland, who is to tell the American people how to regulate their businesses

It's not like Apple and Google can't move, there's like 200 countries, that's like a hundred times more countries than app stores

The implicit criticism here is that when we apply the argument to a new situation, we get unexpected results. I think many of us have dealt with bugs like this in programming. Certainly, sometimes the way to resolve them is to constrain the input to an expected range, even if we need to write more functions for other situations. Other times, the solution is to rethink our abstraction to get a common framework that can cover everything.

The point is, we're not really in the habit of doing either of those things politically. We don't introduce our argument with "Given a corporation of a certain size..." "Given an entity that people interact with 'voluntarily' with regard to conditions V...". We just write a new argument to defend a position for each situation. If there's a new situation, we need a new argument every time, and please don't hold me to the argument I made the last time, it's different now.

Notoriously, RBG

Nowhere is this more evident than in the current fight over the Supreme Court. In 2016, Senate Republicans blocked Garland's nomination "because it was in an election year", and in spite of arguing very forcefully that this was some kind of principle that should be followed, now that it's their nominee they're quick to abandon it.

As far as I'm aware, that's about as far into the abstraction as anyone bothers to think. However, I'd like to make two observations at an even higher level. The first is that the real "principle" in play appears to be some version of "elections have consequences", which, abstracting the types a bit more, becomes a sort of "might makes right". I think many people would disagree with "might makes right" as a general principle, but because of our habit of having each political thought separately, this connection is not necessarily clear when we advocate for a particular instantiation of it.

A similar argument that arises in many areas of hacker politics is that we ought to view some conduct a certain way, because it does (or does not) "violate the law", "violate the terms of the GPL", or whatever document seems like it advances our interest. This, of course, ignores the idea that our feelings are bound by neither law nor license terms. More relevant to the political context, this sort of reasoning bypasses any discussion of what the law should be, which is the question in a democratic society, and replaces it with an acknowledgment that so-and-so came to power and here's what the law is, which is a weaker form of might-makes-right advocacy.

'Might makes right' as a political theory has many unintended consequences. One of them is the oft-observed erosion of our political norms. In response to a hypocritical nomination, the left has discussed retaliatory options such as court-packing or term limits, and the right uses the threat of that to push through the nomination now. It seems we are hopelessly locked into a political system that serves to polarize society and prevent government from operating and addressing current crises at any level. But more directly to our current news cycle, if 51 votes is cause for celebration among conservatives, it would follow that losing a few – say, because some of them got COVID – might be a celebration for Democrats. Whatever examples of the latter one might dig up are currently getting wall-to-wall coverage in conservative media. Of course, it's presented as if it exists in a vacuum, totally unconnected to last week's Republican strategy.

Another observation is that abstract principles themselves are becoming politicized. Democratic leadership is responsive to criticisms over abstract principles in a way that Republican leadership is not. Biden pulled down his attack ads four years to the day after Trump personally attacked his opponent's health. One could argue that both of these are simply political strategies designed to appeal to a base. But if so, it suggests that what appeals to each base is very different.

Metapolitical advantage

In spite of the weak embrace of metapolitical reasoning by one party, which we might naturally expect to alienate the other party, I think metapolitics is our best, and perhaps only way out of this mess.

Some of this will be immediately apparent to the programmer. Metapolitical reasoning allows us to solve a problem once and re-use our solution many times. Software scales to the world's problems only because of our ability to encapsulate plane tickets and purchases and so on into objects or type instances, against which we can generically develop datastructures and algorithms without specializing them by hand for each case. Politics exists in a similar problem domain of "running the world", and it faces similar challenges, and we will not be able to meet them by writing a new political argument for each case.

As a separate issue, working at higher levels of abstraction gives us rhetorical distance and some protection against bias. It's very easy in the context of a specific political issue to write the rules to come out a certain way. Working generically forces us to consider alternative perspectives, and the limitations of our solution within a broader political framework.

Toward a higher-order metapolitics

So what does this look like? For one thing, we need better names.

In computer science, we have a robust vocabulary of abstractions. We know the flyweight pattern, RAII, inheritance, protocol-oriented design, generics, and so on. Some of these have slightly different meanings in different programming communities, but there is a broad tapestry of language to discuss such topics. This is, of course, entirely apart from the fact that we have formal languages to examine abstraction in particular programs.

In rhetoric, we have one pantheon like this in the form of logical fallacies. You can see a long list of them on Wikipedia. These fill a similar role, in that we make some attempt to cover them academically and they are an abstraction free from any particular argument for or against a particular policy.

What we aren't doing is keeping them up to date. Occasionally, someone will produce a new one – the rationalists have the "motte and bailey", for example – but it's pretty rare and even then, it doesn't really penetrate politics at all. Moreover, there are many "devices" that are not fallacies per se, but are sort of "tools to have a new thought".

The "instantiate the app store argument with unexpected types" seems like it ought to be a general device with a catchy name. To apply it to some other examples, if we believe democracy is a fair system of electing a country's leaders, what does it say about choosing a company's CEO? If we ought to adopt animals from shelters instead of from breeders, what does it say about making new children? If Net Neutrality is good regulation against ISPs, what does it say about regulation against YouTube? There seems to be plenty of "new" political landscape to explore if we have the device to pose the question.

Another device is to consider how to apply a principle "against" its current motivation. Obviously in the Merrick Garland case, Senate Republicans did not expect to be in the situation of having their own election-year nominee. Similarly, Republicans did not consider abortion rights as a potential side effect of religious freedom. Of course, this occurs in part because our democracy lacks a method to enforce this sort of ideological consistency.

But, we could create one. We already have rules that Congress can't raise its own pay, or criminalize behavior retroactively. We could require rules to take effect in a way that advantages the other party first. We could adopt the social principle that proposals ought to be made by those who stand to be disadvantaged by them, and not those who are mostly advantaged. Such a politics would be very different from our own, and may have new problems, but at this point I think "very different" is a feature.

Really though, I think we ought to be designing and naming these sorts of higher-order patterns, recognizing them in political discussion and instantiating them into workable abstractions. Such discussion ought to be the dominant form of political discourse, not "what dumb thing Trump did this week", in a similar way that type theory and design patterns are dominant themes in the programmer discourse, not "what dumb plane ticket function I wrote this week".

]]>
https://sealedabstract.com/posts/bezier-curvatureThe problem with cubic bezier curvatureFixing a discontinuity in cubic bezier curvaturehttps://sealedabstract.com/posts/bezier-curvatureMon, 21 Sep 2020 00:00:00 +0000

Recently, I needed a robust way to calculate the curvature for a cubic bezier curve. Now there are plenty of guides on how to do this, but when I implemented the usual methods I got some weird results.

weird line

Here is, I think we can agree, what seems like a straight line, with the cubic control points marked. When we calculate the curvature however we get something a bit weird:

weird curvature

At first I thought I had implemented this wrong. (It's correct.) Then I started to wonder if maybe the "curvature of a line" is a silly notion and I should special-case it in my code. That's when I found this:

new curve

new curvature

The meaning of these charts is that the curvature of the cubic is strongest at the origin. But if you imagine yourself traveling along the cubic like an arc, it doesn't "feel" like you're turning sharply at the origin. It "feels" like you're turning sharply about 3/4 of the way along. And indeed, the curvature graph has that too, as a local maximum; it's just dwarfed by the curvature at the very beginning.

A discontinuity

It turns out that the curvature formula has a very nasty discontinuity. To see this, let's look at the formula:

$ \frac{B_{x}'(t) B_{y}''(t) - B_{y}'(t) B_{x}''(t)}{[B_{x}'(t)^2 + B_{y}'(t)^2]^{3/2}} $

It's clear something bad can happen if the denominator gets very small. Now we can expand this out in control point notation easily enough, but it's a nasty denominator that doesn't really have an obvious meaning to me. So, let's convert to another notation recommended by a friend instead:

  • $r$ - the distance between the endpoints
  • $tt$ - the direction of the vector between the endpoints
  • $tt_{0}$ - the difference between the initial tangent angle and $tt$. (For a control point on the line between the endpoints, the tangent angle is $tt$, and so $tt_{0} = 0$.)
  • $tt_{1}$ - like $tt_{0}$, but for the final tangent angle
  • $r_{0}, r_{1}$ - the lengths of the initial and final tangent vectors

In this notation, a cubic is parameterized as

$ B_{x}(t) = -t (r t (-3 + 2 t) Cos[tt] - 3 (-1 + t) (r_{0} (-1 + t) Cos[tt + tt_{0}] + r_{1} t Cos[tt + tt_{1}] )) $ $ B_{y}(t) = -t (r t (-3 + 2 t) Sin[tt] - 3 (-1 + t) (r_{0} (-1 + t) Sin[tt + tt_{0}] + r_{1} t Sin[tt + tt_{1}] )) $

What do we gain with this notation? One advantage is that at $t=0$, the curvature equation has an "easy" solution:

$ \kappa(0) = -\frac{2 \left(r \, Sin[tt_{0}] - r_{1} \, Sin[tt_{0} - tt_{1}]\right)}{3 \, r_{0} \, Sqrt[r_{0}^2]} $

Therefore, the curvature blows up as $r_{0} \to 0$, i.e. as the control point gets closer to the endpoint.
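To see this numerically, here is a short Python sketch (my own illustration, not part of the post or of blitcurve) that evaluates the standard curvature formula for a visually straight cubic while the first control point slides toward its endpoint:

```python
import math

def cubic_curvature(p0, p1, p2, p3, t):
    """Signed curvature of a cubic bezier: (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    u = 1 - t
    # First derivative of the Bernstein form
    dx = 3*u*u*(p1[0]-p0[0]) + 6*u*t*(p2[0]-p1[0]) + 3*t*t*(p3[0]-p2[0])
    dy = 3*u*u*(p1[1]-p0[1]) + 6*u*t*(p2[1]-p1[1]) + 3*t*t*(p3[1]-p2[1])
    # Second derivative
    ddx = 6*u*(p2[0]-2*p1[0]+p0[0]) + 6*t*(p3[0]-2*p2[0]+p1[0])
    ddy = 6*u*(p2[1]-2*p1[1]+p0[1]) + 6*t*(p3[1]-2*p2[1]+p1[1])
    return (dx*ddy - dy*ddx) / (dx*dx + dy*dy) ** 1.5

# A cubic from (0,0) to (100,0) that looks like a straight line; the first
# control point sits a distance ~r0 from its endpoint, barely off-axis.
for r0 in (10.0, 1.0, 0.1, 0.01):
    p1 = (r0, r0 / 10)
    kappa = cubic_curvature((0, 0), p1, (60, 0), (100, 0), 0.0)
    print(r0, kappa)  # |kappa| grows roughly like 1/r0^2
```

At $t=0$ the curvature here scales like $1/r_{0}^2$, which is exactly the endpoint blow-up the charts show, even though the curve looks straight.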

So: make sure your control points and endpoints don't coincide, so the distance is never zero and you never divide by zero. Not so tough, right?

Turns out it's a little more complicated than that. Let's consider what happens when $r_{0}$ is a perfectly reasonable number like 20:

turnover diagram

Here, we might expect that $r_{0}=0$ is not curved, $r_{0}=20$ is a little bit curved, and so on, with $r_{0}=80$ being the most curved. However, when we crunch the numbers:

curvature for the previous figure

That's true as long as you check at $t>0.2$ or so. But if you judge the lines by their greatest curvature, which occurs at the endpoints, $r_{0}=20$ is the most curved, followed by $r_{0}=40$, $r_{0}=60$, $r_{0}=80$, and finally $r_{0}=0$. Other than the last one, this is the complete opposite of what I expected. It's as if you're trying to draw a straight line, and the harder you try, the more curved it is. (Unless you nail it, then it's fine.)

The problem is not really the undefined behavior when $r_{0}=0$; the problem is the well-defined division when it isn't.

Papering over the problem

I am a bit surprised that I can't google up any prior art on how to deal with this. I did find one Stack Exchange answer suggesting such curves are not "regular" but that "this is unlikely to happen", and googling that terminology didn't help.

Just playing around, I did notice that I can make a "similar" cubic with a larger $r_{0}$ that has nice curvature:

two cubics superimposed
curvature of one cubic
curvature of the other cubic

This suggests that I can pick a new control point for a larger $r_{0},r_{1}$ and get a nice curvature that way.

Picking $r_{0}$

There is some prior art on this general topic. For example, I found this paper which says

A reason for one to get undesired shapes is unsuitable magnitudes of the given tangent vectors. Usually, the larger the magnitudes of the tangent vectors, the more likely the occurrence of a loop in the resulting curve. On the other hand, the smaller the magnitudes of the tangent vectors, the closer the resulting curve to the base line segment. Therefore, the problem is how to choose suitable magnitudes for the endpoint tangent vectors.

That sounds very promising. Unfortunately, their solution is not overly interested in a nice curvature and it also involves assembling multiple cubics into a megazord cubic ("composite optimized geometric Hermite cubic" for short) and then using that, which is not ideal for my case.

Instead, I noticed that if we take the curvature equation from earlier, and let $t=0, r_{1}=r_{0}, tt_{0}=+err, tt_{1}=-err$ we get

$ \kappa = \frac{2 \, (r - 2 \, r_{0} \, Cos[err]) \, Sin[err]}{3 \, r_{0} \, Sqrt[r_{0}^2]} $

The idea here is that we have a line that is "nearly straight" (curved a little by $err$), and then we hold the resulting curvature below some $k$:

$ r_{0} = -\frac{2 \, Cos[err] \, Sin[err]}{3 k} + \frac{\sqrt{2}}{3} \sqrt{\frac{3 k r \, Sin[err] + 2 \, Cos[err]^2 \, Sin[err]^2}{k^2}} $

Now we just pick an $err$ that seems straight (a few degrees), a $k$ that seems small ($0.02$), and crank out an $r_{0}$:

solution for some parameters

Here we see that I wanted $r_{0}$ to be at least 3, but in many of my examples it's only 1. Bumping it up makes the curvature nice again.
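As a sanity check, here is a Python sketch (my own, with hypothetical helper names) of that closed form: solve for $r_{0}$ given $r$, $err$, and $k$, then plug it back into the $t=0$ curvature expression and confirm we recover $k$:

```python
import math

def minimum_r0(r, err, k):
    """r0 at which the endpoint curvature of the near-straight cubic
    (tt0 = +err, tt1 = -err, r1 = r0) equals the curvature budget k."""
    s, c = math.sin(err), math.cos(err)
    return -2*c*s / (3*k) + (math.sqrt(2)/3) * math.sqrt((3*k*r*s + 2*c*c*s*s) / k**2)

def endpoint_curvature(r, r0, err):
    """The t=0 curvature expression from earlier, for the same cubic."""
    return 2 * (r - 2*r0*math.cos(err)) * math.sin(err) / (3 * r0 * math.sqrt(r0**2))

r, err, k = 10.0, math.radians(3), 0.02
r0 = minimum_r0(r, err, k)
print(r0)                              # the tangent length to use
print(endpoint_curvature(r, r0, err))  # recovers k
```

As $k$ shrinks, this lands near $\frac{r}{2}$, matching the rule of thumb discussed below.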

A few observations about this formula

If you want the "smallest possible" $k$ to address this problem, the practical limit of this expression as $k \to 0$ seems to be $r_{0} = \frac{r}{2}$. I say the "practical limit" because, well, if $k$ gets very small, things get a little odd:

situation for k very small

So the analytical limit looks indeterminate. Still, the $\frac{r}{2}$ rule works well enough for many graphical applications.

However, $\frac{r}{2}$ may be overly large if you can tolerate some asymptotic behavior. For example, if you're implementing the curvature, and want to know if you can do something reasonable for a given input curve, the full form may be more useful.

An implementation of this function appears in blitcurve, my general-purpose bezier geometry library.

Applying $r_{0}$

So if this is how we get $r_{0}$, what do we do with it? In the case of the line it is pretty easy; any control point along the line will produce "the same" line, so we just pick control points along the line with a new $r_{0}$ and we're done.

In general, however, a cubic with moved control points will be a different path. This situation was not important to my immediate problem, so I did not look into it extensively. However, I believe it can be done by reparameterizing to control-point notation and solving for the missing points.
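For the line case, the adjustment is just sliding the control point along its existing tangent direction. A tiny Python sketch (the helper name is mine, not blitcurve's):

```python
def with_tangent_length(p0, p1, r0_new):
    """Slide the first control point along its existing tangent direction
    so its distance from the endpoint p0 becomes r0_new."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    r0 = (dx*dx + dy*dy) ** 0.5  # current tangent length
    return (p0[0] + dx/r0 * r0_new, p0[1] + dy/r0 * r0_new)

print(with_tangent_length((0, 0), (1, 0), 2.5))  # (2.5, 0.0)
```

For a line this preserves the path exactly; for a general cubic it produces a different path, as noted above.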

https://sealedabstract.com/posts/rectangle-intersection
Rectangle intersection at 60fps
A faster algorithm for rectangle intersection
Sun, 2 Aug 2020 00:00:00 +0000

Demo

In this post, I discuss the rectangle intersector used for the following demo, which typically exceeds 4 gigaintersections per second:

This is the fastest result I'm aware of, possibly because I can't find any benchmarks of this kind, or much discussion of rectangle intersection in general.

So here is the article I was looking for and couldn't find.

Discussion

Suppose you have two rectangles and want to know if they intersect. Perhaps those rectangles are represented as CGRects, or equivalently, they consist of horizontal and vertical lines. This axis-aligned case is very simple:

aligned

You simply check each point of one rectangle to see if it's inside the minimum/maximum points of the other. In fact, CGRect has a nice intersects(_:) API for this.
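The axis-aligned test amounts to interval overlap on each axis. A minimal Python sketch of the same idea (not the CGRect implementation):

```python
def aabb_intersects(a, b):
    """a, b are (min_x, min_y, max_x, max_y) axis-aligned rectangles.
    They overlap iff their intervals overlap on both the x and y axes."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

print(aabb_intersects((0, 0, 2, 2), (1, 1, 3, 3)))  # True
print(aabb_intersects((0, 0, 1, 1), (2, 2, 3, 3)))  # False
```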

But how do we deal with the case where the rectangles are rotated?

rotated

Stack Overflow

Stack Overflow gives two answers. The first is based on the hyperplane separation theorem. It's a fine algorithm, but it's generic across any polygon, so if your problem involves rectangles specifically there are faster techniques.

The second method is an approximation that gives wrong results in some cases.

Alternatives

I will now explain some methods I used in a recent project. I doubt they are, strictly speaking, new. But they seem to be new to Google.

Approximations

First, let's examine what the problem is with the axis-aligned approach. Suppose we pretended our rectangle were axis-aligned, and did a simple min < point < max comparison like CGRect does.

rotated

As you can see, this will find points not inside our rectangle, but inside its axis-aligned bounding box.

This is often an efficient first pass for intersection, since it is extremely cheap to calculate and quickly eliminates rectangles that are far away. However, let's assume that we've either done this and need something better, or we can't use that fast path for some reason.

Alternatively, we can rotate the whole system so our rectangle is axis-aligned:

rotated
unrotated

In this configuration, checking whether points are inside is straightforward. You might imagine you can simply check the 4 points of the other rectangle to see if they are inside; however, this is not the case:

rotated

In case you're like me and thinking "maybe adding the center point will fix that", it won't:

elongated

This counterexample involves rectangles with one side more than twice the dimension of the other. If your data is sufficiently squareish this may not be a real situation. In fact, you can continue to add test points that are evenly spaced in the long dimension until

w > l / (k+1)

in which case your approximation will be exact. Here is a diagram of the test points for k=2:

k=2

In many cases, some small k is sufficient. However, for elongated rectangles, k can become large. This approach may be appropriate for a fast path, or even for the only path on squareish data.
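To pick the smallest such k for given dimensions, the condition w > l / (k+1) rearranges to k > l/w - 1. A small Python sketch (the helper name is mine):

```python
import math

def min_test_points(l, w):
    """Smallest count k of evenly spaced interior test points such that
    the point-sampling approximation is exact, per w > l / (k+1)."""
    k = math.ceil(l / w - 1)
    if w <= l / (k + 1):  # ceil landed exactly on the boundary; need strict >
        k += 1
    return max(k, 0)

print(min_test_points(10, 4))  # elongated: k=2
print(min_test_points(3, 2))   # squareish: k=1
```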

But at some point we have to stop coming up with new fast paths and actually solve the problem. And we may need a different approach that has better worst-case performance.

Stepping back a moment

Most approaches assume we have the rectangle's points. However, not every set of 4 points forms a rectangle:

not rect

In fact, most sets of points you would try would not be rectangular, and therefore not valid inputs to a rectangle-intersection algorithm. In the language of information theory, we would say the points-representation lacks entropy. Or mathematically, we might say this representation doesn't naturally describe rectangles.

Alternatively, we could define a rectangle by its center, angle of rotation, and size. Now, any combination of values will produce a valid rectangle:

natural

Working in this notation turns out to be key. Of course, we can derive the point representation by:

  1. Generating 4 points that are +/- half each dimension
  2. Rotating the points about the angle
  3. Translating by the center
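Those three steps can be sketched in Python (an illustrative sketch; `rect_points` is my own name, not from blitcurve):

```python
import math

def rect_points(cx, cy, angle, length, width):
    """Corner points of a rectangle given center, rotation angle, and size."""
    c, s = math.cos(angle), math.sin(angle)
    # 1. four corners at +/- half each dimension
    halves = [(-length/2, -width/2), (length/2, -width/2),
              (length/2, width/2), (-length/2, width/2)]
    # 2. rotate about the origin, then 3. translate by the center
    return [(cx + x*c - y*s, cy + x*s + y*c) for x, y in halves]

# A 4x2 rectangle at the origin, rotated 90 degrees: its extents swap.
pts = rect_points(0, 0, math.pi/2, 4, 2)
print(pts)
```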

A general algorithm

Suppose for a moment we have a rectangle with angle 0 and center 0,0. Then the lines that extend from the edges have a simple definition, that is

natural

Why do we care about the lines in the edges? Well, because of the hyperplane separation theorem, two rectangles don't intersect if and only if there is some line that separates them:

separation

And in the case of polygons, one of the polygon edges will be that line.

separation2

Convince yourself that in this diagram, only the edge indicated separates the rectangles. As a consequence, there may be only a single separating edge, and it may not be on the first rectangle we choose.

This insight yields the following algorithm:

  1. Transform the whole system so one rectangle is at 0,0 with angle 0.
  2. Now we can easily calculate "candidate lines" from the identities shown in the diagram.
step1
  3. Let's try the line indicated. We know that the axis-aligned points will all be in the top half (y >= -w/2). Therefore, if this line separates the rectangles, the other rectangle will be in the bottom half (y < -w/2). In this example, only 2 of its 4 points are in the bottom half, so this is not a separating line.
step2
  4. Alternatively, consider this line. We know the axis-aligned rect will be on the left half (x <= l/2). Therefore, a separating line would have the other rect on the right (x > l/2). Indeed, all points on the rect meet that criterion, so this is a separating line.
step3
  5. If there's a separating line, the rectangles do not intersect.
  6. If there wasn't a separating line, repeat this process with the other rect at 0,0 with angle 0.
  7. If it didn't have any separating lines either, the rectangles intersect.
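Putting the algorithm together, here is a Python sketch (my own illustration, not blitcurve's implementation; rectangles are (center_x, center_y, angle, length, width) tuples):

```python
import math

def rect_corners(cx, cy, angle, l, w):
    """Corner points from the center/angle/size representation."""
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + x*c - y*s, cy + x*s + y*c)
            for x, y in [(-l/2, -w/2), (l/2, -w/2), (l/2, w/2), (-l/2, w/2)]]

def _separated_by(a, b):
    """True if some edge-line of rectangle a separates it from b.
    We transform b into a's frame, where a's edge lines are simply
    x = +/- l/2 and y = +/- w/2."""
    acx, acy, aang, al, aw = a
    bcx, bcy, bang, bl, bw = b
    # Transform b's center/angle directly, then derive its corners there.
    c, s = math.cos(-aang), math.sin(-aang)
    dx, dy = bcx - acx, bcy - acy
    pts = rect_corners(dx*c - dy*s, dx*s + dy*c, bang - aang, bl, bw)
    return (all(x >  al/2 for x, _ in pts) or all(x < -al/2 for x, _ in pts)
         or all(y >  aw/2 for _, y in pts) or all(y < -aw/2 for _, y in pts))

def rects_intersect(a, b):
    """Separating-axis test specialized to rectangles."""
    return not (_separated_by(a, b) or _separated_by(b, a))

r1 = (0, 0, 0.0, 4, 2)
r2 = (5, 0, math.pi/4, 2, 2)  # far to the right of r1
r3 = (2, 0, math.pi/4, 2, 2)  # overlaps r1's right edge
print(rects_intersect(r1, r2), rects_intersect(r1, r3))  # False True
```

Note that only the second rectangle is ever transformed; the first contributes nothing but its half-dimensions, which is where the savings over the generic polygon method come from.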

Performance remarks

This is substantially faster than the better-known polygon method in several respects.

First, when we "transform the system" we only need to transform a single rectangle. The axis-aligned rectangle is only used for lines, which we calculate directly from the identities. Therefore we don't even need to bother about that rectangle's points.

Second, when we transform the other rectangle, we can transform its center/angle representation directly, and derive the corner points from that. This is better than deriving the points first, and then transforming all of them serially.

Third, the bulk of this algorithm is simple comparisons against 4 points, plus the occasional sin/cos for the rotation. This admits an efficient vectorized implementation on practically every platform, including GPUs.

Implementation

You can see an implementation of this in blitcurve, a project which I will hopefully write more about soon™️.

In the meantime, blitcurve has both CPU and GPU implementations of various geometry algorithms like this one.

Special thanks to my friend Blue for his invaluable assistance with this algorithm.

https://sealedabstract.com/posts/what-we-wished-for
What we wished for
A complex take on free speech
Fri, 24 Jul 2020 00:00:00 +0000

In “The Four Quadrants of Conformism” Paul Graham is concerned about a decline in free inquiry:

I'm biased, I admit, but it seems to me that aggressively conventional-minded people are responsible for a disproportionate amount of the trouble in the world, and that a lot of the customs we've evolved since the Enlightenment have been designed to protect the rest of us from them. In particular, the retirement of the concept of heresy and its replacement by the principle of freely debating all sorts of different ideas, even ones that are currently considered unacceptable, without any punishment for those who try them out to see if they work.

I am convinced that every discipline of human endeavor lends itself to certain ways of thinking. In programming, we learn to think in abstraction. We design interfaces and re-use them across many implementations. We take a symbol like + and it works across floats and integers, of various sizes, signed and unsigned, unbounded fields, dictionaries, arrays, and so on. Even when the results are ridiculous, we are so fluent in abstraction that they seem to us more real than actual reality. In my corner of the programosphere we write More<And<More<Generic<Code<EachYear>>>>> where Year = 2020 . And the ease with which we abstract the real world is hardly new, as anyone who has encountered a mismatch between floating-point and real can tell you.

The genius of abstraction is that our tools do the heavy lifting of dealing with many convoluted real-world cases. The danger is that we have outsourced the details of what really happens for someone else to deal with later. We are comfortable with this in programming, because usually, “someone else” is the compiler or the unit tests, diligently and tirelessly warning us about the difference between the real world and our Perfect Abstracted Tower on something resembling a regular basis. But this is not the practice in other places.

Like any good abstraction, the genius of terms like "independently-minded" and "freely debating ideas" is that each person can fill in the variables at runtime with their preferred values. Paul Graham imagines the independently-minded to be people like himself:

all successful startup CEOs are not merely independent-minded, but aggressively so

And so do we all. Activists identify themselves as standing up to conventions of oppression. Racists identify themselves as defying the dominant culture of political correctness. A fired university professor exists in the superposition of being a victim of cancel culture and simultaneously a member of the powerful intellectual elite. Trump ran on a platform of subverting conventional politics, Biden is now running on a platform of subverting that convention. Arguably the Confederates were the most independently-minded political movement in American history. Unless you consider the political climates in the confederate states, in which case actually, they were very conformist.

These details matter. We are reluctant to introduce the messiness of real examples into an abstract “principled” discussion, because we are concerned they will corrupt our pure abstract principles. But abstract principles don't exist. What we are actually doing is designing systems without any test cases.

What are the details of promoting a spirit of inquiry? I suspect Paul Graham and I share an interest, for example, in "doing something about" people being run out of town on the proverbial rail on social media. But what should we do, in detail? Do we disable the Tweet button when the ML model decides the author is too angry? Do we force people to patronize businesses who promote values they think are wrong? Maybe we just encourage people to have their free inquiries in private by banning them from Twitter? Maybe you can think of the right answer, but it seems to me most "fixes" themselves work against free inquiry, perhaps in some clever way, even as we profess to be interested in restoring it. The abstraction condemns all its instances equally, including its own Godel number.

Actually, it’s our relentless obsession with the featureless blank canvas of "free" inquiry that created this mess. We built a system for everybody to express their ideas. They are doing so, and that’s why you’re scrolling through newsfeeds at 3am too angry to sleep. In recent times American political discourse has been wide enough to encompass birtherism, pizzagate, science denialism, and racism. I think those are quite controversial ideas. Perhaps some on the right would like to put socialism or dismantling police on the weird idea list. Either way, we live in an unprecedented time for ideas to grow and discover their audience, to become amplified, do battle in the public discourse, and create human casualties.

Like those people in the folktales, we were not careful with what we wished for. We wanted information to be free. And so it’s free from facts, free from expert opinion. It’s free from any responsibility toward others, free from consequences, free from needing to produce practical results, from competing on its merits, or on any basis other than its ability to spread from one mind to the next, until it consumes as many humans as it possibly can, in some kind of perpetual outrage fireball.

Thing is, the Enlightenment ideal of free inquiry itself does not really exist in isolation, but rests on other less-spoken technologies. For example, the technology that we will check ideas against the facts. That we will have empathy with those who see things differently, because ultimately we believe we share a common cause. That we should be concerned with the tide that lifts or beaches all boats, instead of preoccupied mostly with the beaching of the enemy navy. That we will be fair and kind to one another, and share mutual respect, and forgive each other for our mistakes.

When I imagine a better world, I don't imagine a world where people are more independently minded, I imagine a world where they are kind. Yes, kind enough not to form Twitter mobs, so in that sense, more independent? But also, kind enough not to provoke the mobs. Sure, the freedom to be controversial is of value. But so is the freedom from being provoked on a constant basis.

A Facebook employee recently circulated a video about how Facebook is "hurting people at scale." I’d encourage you to listen to it, because my transcript will not carry the emotional sense of the speaker.

We were all being told that after long deliberation, [Facebook] leadership felt that Trump’s post didn’t violate our policies. But I don’t think that’s what concerned folks like myself were thinking at all... We weren’t asking "does this thing follow our policy". We weren’t asking whether it was consistent with our policy not to take action.

We were asking, "why would our policies allow for this thing?" Why don't our policies require that we do take some kind of action? But Mark framed his whole response around following policy, rather than fixing policy...

...Facebook is getting trapped by our ideology of free expression, and the easy temptation of just trying to stay consistent with that ideology. This means we can't be responsive to what Arendt would call new beginnings, or new premises. Anything fundamental that changes in the world around us. And I think it also causes us to lose sight of other important premises. Like for example, free expression is supposed to serve human needs. It's supposed to serve people.

Our principles exist to serve human needs. Yes, we lose something when we set limits, socially or otherwise, on discourse. But we also lose when we decline to do so, and implicitly cede the discourse to only those with the capital to withstand mob justice, which is neither a meritocratic nor an enlightened process. And, in the tradition of Enlightenment inquiry, we ought to ask: how's that going? If we built the Internet on those principles, and we don't like it, maybe we should be tweaking the principles?

Let me fill in more details. In The Origins of Totalitarianism, leading political theorist Hannah Arendt observed that

Caution in handling generally accepted opinions that claim to explain whole trends of history is especially important for the historian of modern times, because the last century has produced an abundance of ideologies that pretend to be keys to history but are actually nothing but desperate efforts to escape responsibility.

I am concerned that our pluggable ideologies around promoting "unconventional ideas" as some unifying principle throughout history may shield us from the responsibility for the harm ideas can cause. Certainly, some harms are acceptable, and even necessary, for human progress. But we ought to say so, and with the specific parameters we used, so that the thing can be examined.

On the other hand the generic abstraction, into which each of us imagines ourselves, and which treats inputs alike be they sense or nonsense, is very dangerous. That program is valid for many more inputs than we have tried, and human imagination contains many monsters we have not yet created. I think we all sort of assumed that bad ideas will be countered by good ideas, which would obviously win out in some fuzzy meritocratic marketplace.

I don't think that strategy is working. Perhaps what counterbalances bad ideas is not good ideas, but good people, who do the work to advance good ideas when they don't spread on their own.

https://sealedabstract.com/posts/against-main-splaining
Against man splaining
People explain 'the way things are' instead of talking about how they ought to be.
Fri, 19 Jun 2020 00:00:00 +0000

I have recently noticed a growing problem in hacker culture, and I’m getting more concerned.

I would like to use a toy example before I bring out the flamey ones. (Although let's be honest: with this title, it may be a lost cause.) Close your eyes for a moment and imagine the 90s. (Evidently, we all wish to escape the present.) You just upgraded your computer from Windows 95 to Windows ME. It crashes a lot. Drowning in BSODs, you dial in to the internet and post to Usenet about your problems. "This machine used to be so reliable!" you type with satisfying clicks on your mechanical-switch keyboard. "What is M$FT even doing?"

Several days later, when you bother to sign in, you start fielding replies. "This is because Windows ME is a real-mode OS. If you wanted stability you should have chosen an OS that has protected memory," says one. "Windows ME is designed for consumers. Next time try a business-class OS like NT" says another. “Linux doesn’t have this problem,” says another. "I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux."

Did these replies help you? Did you fix your BSODs? Are you more or less likely to post to Usenet after this experience?

It seems that of late, we have a tendency to turn conversations about human problems into conversations that explain systems, as if we are typing "the manpage for the topic" into a comment box. We assume the system is correct. Problems can only arise because you do not understand the system, and once the system is explained to you, you will become enlightened. In fairness, many programming problems can be solved by a deeper understanding of some system, which is of course why manpages exist.

However, systems are only correct if they serve human goals. When a system fails to serve human goals, that’s a bug. Saying the system is designed to do this is equivalent to admitting that it's designed wrong.

Recently, there’s a story going around about Apple rejecting a certain app for taking money outside the app store. Inevitably the conversation turns into wondering why people are surprised that the system works this way, as if the problem would be resolved by merely educating ignorant people about exactly how the system of "large companies moving in on your small business" operates. It’s as if someone had typed man appstore at the terminal and wanted to know how all these newfangled gizmos work. Inevitably the conversation expands to explain the workings of more and more systems that the people involved have even fewer formal qualifications to talk about: man monopoly-power , man antitrust, man contracts, man freedom-of-association...

I am surprised that everyone is surprised why everyone is surprised.


In fact, what motivated the whole conversation is someone’s assertion that the system is bad. That it does not serve human interests. The real questions are these:

  • Why did we make this system?
  • What has changed in the world since that time?
  • Who benefits from this system?
  • Who is disadvantaged by it?
  • Are there alternatives that would benefit more and disadvantage fewer?
  • What are the costs of maintaining the status quo?
  • What are the risks of doing something new?
  • Who is willing to do the work to change this system?
  • Who will resist change?
  • What are the tools and techniques along these lines?

We are not having these conversations.

Actually, we sometimes have them in bad faith. For illustration, there’s a recent teapot controversy over submitting ethical licenses to the Open Source Initiative. (man ethical-licenses: like the JSON license, which requires you to use software for good and not evil.) The ensuing drama caused an OSI founder (and Prominent Famous Hacker) to rejoin the list after many years and argue forcefully that these should be forbidden, making arguments such as:

You are mounting an ideological attack on our core principles of liberty and nondiscrimination. You will not succeed while I retain any ability to oppose this.

Somewhat surprisingly, he was banned. Less surprisingly, he continued to argue off-list:

The effect – the intended effect – is to diminish the prestige and autonomy of people who do the work – write the code – in favor of self-appointed tone-policers...We are being social-hacked from being a culture in which freedom is the highest value to one in which it is trumped by the suppression of wrongthink and wrongspeak. Our enemies – people like [name redacted] – do not even really bother to hide this objective.

I think this demonstrates an important principle. When explaining Why The System Is This Way doesn't adequately convince our audience, we sometimes resort to casting suspicion on the motives of Those Who Work For Change. Actually, I think examining motives in an evenhanded way is totally legitimate. But what is missing here is any examination of OSI’s motives. Who benefits from forbidding field-of-use restrictions? (Answer: Amazon and other cloud companies that compete with software writers.) Who funds the OSI? (Answer: The cloud companies). So, rather than reciting OSI Freedom 6 as a magic talisman that should ward off our “enemies”, the conversation should be about what we gain or lose by choosing to build and operate a Freedom 6 System.

We have the unique (mis)fortune of living in a time of unprecedented global change, and we are grappling with complex issues more quickly than in recent past. Is pressuring your employer to cancel contracts with the police a good idea? Is it an outrage that so-and-so was fired for their political view? Should git branches be named master? What are the boundaries between work and politics? When we try to have a dialogue about these topics, we immediately reframe each of these controversies into mechanical discussions that explain different facts about the system. Free speech applies to the government, not to your employer. Private and public entities have certain differences. The etymology of this word is completely different than another offensive one.

In spite of whatever political affiliations you think you have gleaned from my examples, I do not actually know how to solve the Big Problems of our generation. I do know that the system of writing manpages at each other and calling it "discourse" will not scale to the questions we face. Mere explanation cannot reach to the question of What Should Be, it can only cement What Is Already.

I remember a time when hacker culture was more about the Limitless Possibility of Systems We Can Build. It seems as we have been building them, our dreams have grown small, and our conversations have become the Limitless Explanation of What We Built. Or, maybe we were just young and naive, and over the years, real life set in.

If so, I am naive still. But unlike the 90s, I can’t find the usenet group where all the naive people hang out and discuss building the new systems. If you happen to know where it is, please contact me.
