Susam Pal https://susam.net/ Susam's Feed Git Checkout, Reset and Restore https://susam.net/git-checkout-reset-restore.html gcrrp Thu, 12 Mar 2026 00:00:00 +0000 I have always used the git checkout and git reset commands to reset my working tree or index, but since Git 2.23 there has been a git restore command available for these purposes. In this post, I record how some of the 'older' commands I use map to the new ones. Well, the new commands aren't exactly new, since Git 2.23 was released in 2019, so this post is perhaps six years too late. Even so, I want to write this down for future reference. It is worth noting that the old and new commands are not always equivalent; I'll point out the differences as we discuss the commands. However, they can be used to perform similar tasks. Some of these tasks are discussed below.

Contents

Experimental Setup

To experiment quickly, we first create an example Git repository.

mkdir foo/; cd foo/; touch a b c
git init; git add a b c; git commit -m hello

Now we make changes to the files and stage some of the changes. We then add more unstaged changes to one of the staged files.

date | tee a b c d; git add a b d; echo > b

At this point, the working tree and index look like this:

$ git status
On branch main
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        modified:   a
        modified:   b
        new file:   d

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   b
        modified:   c

File a has staged changes. File b has both staged and unstaged changes. File c has only unstaged changes. File d is a new staged file. In each experiment below, we will work with this setup.

All results discussed in this post were obtained using Git 2.47.3 on Debian 13.2 (Trixie).

Reset the Working Tree

As a reminder, we will always use the following command between experiments to ensure that we restore the experimental setup each time:

date | tee a b c d; git add a b d; echo > b

To discard the changes in the working tree and reset the files in the working tree from the index, I typically run:

git checkout .

However, the modern way to do this is to use the following command:

git restore .

Both commands leave the working tree and the index in the following state:

$ git status
On branch main
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        modified:   a
        modified:   b
        new file:   d

Both commands operate only on the working tree. They do not alter the index. Therefore the staged changes remain intact in the index.
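To see that the index really does survive intact, we can replay the whole experiment as a self-contained script and inspect the index afterwards. This is only a sketch: the repository is created in a throwaway temporary directory and the identity settings are placeholders.

```shell
# Recreate the experimental setup in a throwaway repository.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com   # placeholder identity
git config user.name you
touch a b c
git add a b c
git commit -qm hello
date | tee a b c d
git add a b d
echo > b

# Discard the changes in the working tree only.
git restore .

# The staged changes are untouched: a, b and d are still in the index.
git diff --cached --name-only
```

Running git diff --name-only at this point prints nothing, confirming that the working tree now matches the index.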

Reset the Index

Another common situation is when we have staged some changes but want to unstage them. First, we restore the experimental setup:

date | tee a b c d; git add a b d; echo > b

I normally run the following command to do so:

git reset

The modern way to do this is:

git restore -S .

Both commands leave the working tree and the index in the following state:

$ git status
On branch main
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   a
        modified:   b
        modified:   c

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        d

no changes added to commit (use "git add" and/or "git commit -a")

The -S (--staged) option tells git restore to operate on the index (not the working tree) and reset the index entries for the specified files to match the version in HEAD. The unstaged changes remain intact as modified files in the working tree. With the -S option, no changes are made to the working tree.

From the arguments we can see that the old and new commands are not exactly equivalent. Without any arguments, the git reset command resets the entire index to HEAD, so all staged changes become unstaged. Similarly, when we run git restore -S without specifying a commit, branch or tag using the -s (--source) option, it defaults to resetting the index from HEAD. The . at the end ensures that all paths under the current directory are affected. When we run the command at the top-level directory of the repository, all paths are affected and the entire index gets reset. Consequently, the old and the new commands produce the same result.
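The default source can also be spelt out explicitly. The following sketch repeats the experiment in a throwaway temporary repository and unstages everything using the long-option form with HEAD named as the source; the identity settings are placeholders.

```shell
# Recreate the experimental setup in a throwaway repository.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com   # placeholder identity
git config user.name you
touch a b c; git add a b c; git commit -qm hello
date | tee a b c d; git add a b d; echo > b

# Long-option form of 'git restore -S .' with the default source
# (HEAD) spelt out explicitly.
git restore --staged --source=HEAD .

# All changes are unstaged again and d is back to being untracked.
git status --short
```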

Reset the Working Tree and Index

Once again, we restore the experimental setup.

date | tee a b c d; git add a b d; echo > b

This time we not only want to unstage the changes but also discard the changes in the working tree. In other words, we want to reset both the working tree and the index from HEAD. This is a dangerous operation because any uncommitted changes discarded in this manner cannot be restored using Git.

git reset --hard

The modern way to do this is:

git restore -WS .

The working tree is now clean:

$ git status
On branch main
nothing to commit, working tree clean

The -W (--worktree) option makes the command operate on the working tree. The -S (--staged) option resets the index as described in the previous section. As a result, this command unstages any changes and discards any modifications in the working tree.

Note that when neither of these options is specified, -W is implied by default. That's why the bare git restore . command in the previous section discards the changes in the working tree.
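We can verify this default with a quick experiment on a single path. Again, the repository below is a throwaway one and the identity settings are placeholders.

```shell
# Recreate the experimental setup in a throwaway repository.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com   # placeholder identity
git config user.name you
touch a b c; git add a b c; git commit -qm hello
date | tee a b c d; git add a b d; echo > b

# With neither -W nor -S given, -W is implied, so this discards only
# the unstaged modification to c; it is the same as 'git restore -W c'.
git restore c

# c is clean now; the staged changes to a, b and d are untouched.
git status --short c
```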

Summary

The following table summarises how the three pairs of commands discussed above affect the working tree and the index, assuming the commands are run at the top-level directory of a repository.

Old               | New                | Working Tree             | Index
git checkout .    | git restore .      | Reset to match the index | No change
git reset         | git restore -S .   | No change                | Reset to match HEAD
git reset --hard  | git restore -SW .  | Reset to match HEAD      | Reset to match HEAD

The git restore command is meant to provide a clearer interface for resetting the working tree and the index. I still use the older commands out of habit. Perhaps I will adopt the new ones in another six years, but at least I have the mapping written down now.

Read on website | #technology | #how-to

]]>
HN Skins 0.4.0 https://susam.net/code/news/hnskins/0.4.0.html hnsfr Tue, 10 Mar 2026 00:00:00 +0000 HN Skins 0.4.0 is a minor update to HN Skins, a web browser userscript that adds custom themes to Hacker News and lets you browse HN with a variety of visual styles. This release introduces a small fix to preserve the commemorative black bar that occasionally appears at the top of the page.

When a notable figure in technology or science passes away, Hacker News places a thin black bar at the top of the page in tribute. Previously some skins could obscure this element. This update ensures that the bar remains visible and clearly noticeable. In dark themed skins, the black bar is rendered as a lighter shade of grey so that it maintains sufficient contrast and remains conspicuous.

Today Hacker News has a story about Tony Hoare passing away, which made me notice that the commemorative black bar was not rendered properly with some skins. This prompted me to investigate the issue and implement the fix included in this release.

Screenshots showing how the bar appears with different skins are available at susam.github.io/blob/img/hnskins/0.4.0/.

To install HN Skins, visit github.com/susam/hnskins and follow the instructions there.

Read on website | #web | #programming | #technology

]]>
HN Skins 0.3.0 https://susam.net/code/news/hnskins/0.3.0.html hnsth Sat, 07 Mar 2026 00:00:00 +0000 HN Skins 0.3.0 is a minor update to HN Skins, a web browser userscript that adds custom themes to Hacker News and allows you to browse HN with a variety of visual styles. This release includes fixes for a few issues that slipped through earlier versions. For example, the comment input textbox now uses the same font face and size as the rest of the active theme. The colour of visited links has also been slightly muted to make it easier to distinguish them from unvisited links. In addition, some skins have been renamed: Teletype is now called Courier and Nox is now called Midnight.

Further, the font face of several monospace-based themes is now set to monospace instead of courier. This allows the browser's preferred monospace font to be used. The font face of the Courier skin (formerly known as Teletype) remains set to courier. This will never change because the sole purpose of this skin is to celebrate this legendary font.

To view screenshots of HN Skins or install it, visit github.com/susam/hnskins.

Read on website | #web | #programming | #technology

]]>
HN Skins 0.2.0 https://susam.net/code/news/hnskins/0.2.0.html hnskt Sun, 01 Mar 2026 00:00:00 +0000 HN Skins 0.2.0 is a minor update to HN Skins. It comes a day after the initial release in order to fine-tune a few minor issues with the styles in that release. HN Skins is a web browser userscript that adds custom themes to Hacker News and allows you to browse HN with different visual styles.

This update removes excessive vertical space below the 'reply' links, sorts the skin options alphabetically in the selection dialog and fixes the background colour of the navigation bar in the Terminal skin by changing it from a dark grey to a dark green.

Soon after making this release, I discovered a few other minor issues, such as the Cafe and Terminal themes using Courier when I intended them to use the system monospace font. This has already been fixed in the development version currently available on GitHub. However, I will make a formal release later.

See the changelog for more details. To see some screenshots of HN Skins or to install it, visit github.com/susam/hnskins and follow the instructions there.

Read on website | #web | #programming | #technology

]]>
HN Skins 0.1.0 https://susam.net/code/news/hnskins/0.1.0.html hnsko Sat, 28 Feb 2026 00:00:00 +0000 HN Skins 0.1.0 is the initial release of HN Skins, a browser userscript that adds custom themes to Hacker News (HN). It allows you to browse HN in style with a selection of visual skins.

To use HN Skins, first install a userscript manager such as Greasemonkey, Tampermonkey or Violentmonkey in your web browser. Once installed, you can install HN Skins from github.com/susam/hnskins.

The source code is available under the terms of the MIT licence. For usage instructions and screenshots, please visit github.com/susam/hnskins.

Read on website | #web | #programming | #technology

]]>
Feb '26 Notes https://susam.net/26b.html ntfts Fri, 27 Feb 2026 00:00:00 +0000 Since last month, I have been collecting brief notes on ideas and references that caught my attention during each month but did not make it into full articles. Some of these fragments may eventually grow into standalone posts, though most will probably remain as they are. At the very least, this approach allows me to keep a record of them.

Most of last month's notes grew out of my reading of Algebraic Graph Theory by Godsil and Royle. I am still exploring and learning this subject. This month, however, I dived into another book with the same title, this one written by Norman Biggs. As a result, many of the notes that follow are drawn from Biggs's treatment of the topic.

Since I already had a good understanding of the subject from the earlier book, I decided to skip the first fourteen chapters of the new book. I began with Chapter 15, which discusses automorphisms of graphs, and then moved on to the following chapters on graph symmetries. My main reason for picking up Biggs's book was to understand Tutte's well-known result that any \( s \)-arc-transitive finite cubic graph must satisfy \( s \le 5. \) While I did not reach that chapter this month, I made substantial progress with the book. I hope to work through the proof of Tutte's theorem next month.

Contents

  1. Degree of Vertices in an Orbit
  2. Regular Non-Vertex-Transitive Graphs
  3. Vertex-Transitive But Not Edge-Transitive
  4. Edge-Transitive But Not Vertex-Transitive
  5. Bipartiteness as a Necessary Condition
  6. Graph with an Automorphism Group
  7. Permutation Groups Need Not Be Automorphism Groups
  8. Symmetric Graphs

Degree of Vertices in an Orbit

If two vertices of a graph belong to the same orbit, then they have the same degree. In other words, for a graph \( X, \) if \( x, y \in V(X) \) and there is an automorphism \( \alpha \) such that \( \alpha(x) = y, \) then \( \deg(x) = \deg(y). \)

The proof is quite straightforward. Let \begin{align*} N(x) &= \{ v_1, \dots, v_r \}, \\ N(y) &= \{ w_1, \dots, w_s \} \end{align*} represent the neighbours of \( x \) and \( y \) respectively. Therefore we have \[ x \sim v_1, \; \dots, \; x \sim v_r. \] Since an automorphism preserves adjacency, we get \[ \alpha(x) \sim \alpha(v_1), \; \dots, \; \alpha(x) \sim \alpha(v_r). \] Substituting \( \alpha(x) = y, \) we get \[ y \sim \alpha(v_1), \; \dots, \; y \sim \alpha(v_r). \] Thus \[ \alpha(N(x)) = \{ \alpha(v_1), \; \dots, \; \alpha(v_r) \} \subseteq N(y). \] A similar argument works in reverse as well. By the definition of automorphism, if \( \alpha \) is an automorphism, so is \( \alpha^{-1}. \) From the definition of \( N(y) \) above, we have \[ y \sim w_1, \; \dots, \; y \sim w_s. \] Therefore \[ \alpha^{-1}(y) \sim \alpha^{-1}(w_1), \; \dots, \; \alpha^{-1}(y) \sim \alpha^{-1}(w_s). \] This is equivalent to \[ x \sim \alpha^{-1}(w_1), \; \dots, \; x \sim \alpha^{-1}(w_s). \] Thus \[ \alpha^{-1}(N(y)) = \{ \alpha^{-1}(w_1), \; \dots, \; \alpha^{-1}(w_s) \} \subseteq N(x). \] Applying \( \alpha \) to both sides, we get \[ N(y) = \{ w_1, \dots, w_s \} \subseteq \{ \alpha(v_1), \dots, \alpha(v_r) \} = \alpha(N(x)). \] We have shown that \( \alpha(N(x)) \subseteq N(y) \) and \( N(y) \subseteq \alpha(N(x)). \) Thus \[ \alpha(N(x)) = N(y). \] Since \( \alpha \) is a bijection, \[ \lvert N(y) \rvert = \lvert \alpha(N(x)) \rvert = \lvert N(x) \rvert = r. \] Therefore both \( x \) and \( y \) have \( r \) neighbours each. Hence \( \deg(x) = \deg(y). \)

Regular Non-Vertex-Transitive Graphs

The Frucht graph and the Folkman graph are examples of graphs that are \( k \)-regular but not vertex-transitive. In fact, the Folkman graph is a semi-symmetric graph, i.e. it is regular and edge-transitive but not vertex-transitive.

Vertex-Transitive But Not Edge-Transitive

The circular ladder graph \( CL_3, \) i.e. the triangular prism graph, is vertex-transitive but not edge-transitive.

Every vertex has the same local structure: it has degree \( 3, \) it lies on exactly one of the two triangles and it has exactly one 'vertical' edge connecting it to the corresponding vertex on the other triangle. Hence any vertex can be sent to any other by an automorphism, so the graph is vertex-transitive.

Since triangle edges are in a triangle and vertical edges are in no triangle, no automorphism can send a triangle edge to a vertical edge or vice versa. Therefore the graph is not edge-transitive.

Edge-Transitive But Not Vertex-Transitive

The complete bipartite graphs \( K_{m,n} \) with \( m \ne n \) are edge-transitive but not vertex-transitive.

Every edge connects one vertex from the \( m \)-part to one vertex from the \( n \)-part. Any permutation of vertices inside the \( m \)-part preserves adjacency. Similarly, any permutation of vertices inside the \( n \)-part preserves adjacency.

Take two arbitrary edges \[ uv, \; u'v' \in E(K_{m,n}) \] where \( u, u' \) are vertices that lie in the \( m \)-part and \( v, v' \) are vertices that lie in the \( n \)-part. Permute vertices within the \( m \)-part to send \( u \) to \( u'. \) Similarly, permute vertices within the \( n \)-part to send \( v \) to \( v'. \) This gives an automorphism that sends the edge \( uv \) to \( u'v'. \) In this manner we can find an automorphism that sends any edge to any other. Therefore, \( K_{m,n} \) is edge-transitive.

However, \( K_{m,n} \) is not vertex-transitive since no automorphism can send a vertex in the \( m \)-part to a vertex in the \( n \)-part since the vertices in the \( m \)-part have degree \( n \) and the vertices in the \( n \)-part have degree \( m. \)

Bipartiteness as a Necessary Condition

If a connected graph is edge-transitive but not vertex-transitive, then it must be bipartite.

Graph with an Automorphism Group

In 1938, Frucht proved that for every finite abstract group \( G, \) there exists a graph whose automorphism group is isomorphic to \( G . \)

Remarkably, this result remains valid even when we restrict our attention to cubic graphs. That is, for every finite abstract group \( G, \) there exists a cubic graph whose automorphism group is isomorphic to \( G. \) Moreover, the result has been extended to graphs satisfying various additional graph-theoretical properties, such as \( k \)-connectivity, \( k \)-regularity and prescribed chromatic number.

Permutation Groups Need Not Be Automorphism Groups

Consider the following specialised version of the problem discussed in the previous section: Given a permutation group on a set \( X, \) must there exist a graph with vertex set \( X \) whose automorphism group is precisely that permutation group?

The answer is no. Consider the cyclic group \( C_3 \) acting on \( X = \{ a, b, c \}. \) There is no graph \( \Gamma \) with \( V(\Gamma) = X \) and \( \operatorname{Aut}(\Gamma) \cong C_3. \) If we take \( \Gamma = K_3, \) then \( C_3 \subset S_3 = \operatorname{Aut}(K_3) \) but \( C_3 \ne \operatorname{Aut}(K_3) . \)

Symmetric Graphs

It is interesting that while we study graph symmetry through concepts such as graph automorphisms, vertex-transitivity, edge-transitivity, etc., the name symmetric graph is reserved for graphs that are \( 1 \)-arc-transitive. A vertex-transitive graph or an edge-transitive graph need not be \( 1 \)-arc-transitive and therefore need not be symmetric.

However, every \( s \)-arc-transitive graph is \(1 \)-arc-transitive for \( s \ge 1. \) Consequently, every \( s \)-arc-transitive graph is symmetric. Moreover, every distance-transitive graph is also \( 1 \)-arc-transitive and hence symmetric.

Formally, we say that a graph \( \Gamma \) is \( 1 \)-arc-transitive (or equivalently, symmetric) if for all \( 1 \)-arcs \( uv \) and \( u'v' \) of \( \Gamma, \) there is an automorphism \( \alpha \in \operatorname{Aut}(\Gamma) \) such that \( \alpha(uv) = u'v'. \)

Stated in more basic terms, we can say that \( \Gamma \) is symmetric if for all \( u, v, u', v' \in V(\Gamma) \) satisfying \( u \sim v \) and \( u' \sim v', \) there exists \( \alpha \in \operatorname{Aut}(\Gamma) \) such that \( \alpha(u) = u' \) and \( \alpha(v) = v'. \)

Switching gears now, we say that \( \Gamma \) is distance-transitive if for all \( u, v, u', v' \in V(\Gamma) \) satisfying \( d(u, v) = d(u', v'), \) there exists \( \alpha \in \operatorname{Aut}(\Gamma) \) such that \( \alpha(u) = u' \) and \( \alpha(v) = v'. \) Since all \( 1 \)-arcs \( uv \) and \( u'v' \) satisfy \( d(u, v) = d(u', v') = 1, \) distance-transitivity implies that there is an automorphism that sends \( uv \) to \( u'v'. \) Therefore a distance-transitive graph is also \( 1 \)-arc-transitive.

To summarise, a graph must possess a certain degree of symmetry in order to be called symmetric. It turns out that merely having a non-trivial automorphism group is not sufficient. Even being vertex-transitive or edge-transitive is not enough for a graph to be called symmetric. The graph needs to be at least \( 1 \)-arc-transitive to be called symmetric.

Another interesting aspect of this terminology is that the property of being asymmetric is not the exact opposite of being symmetric. For example, a vertex-transitive graph need not be symmetric. However, that does not make it asymmetric. A graph is called asymmetric if it has no non-trivial automorphisms, i.e. its automorphism group contains only the identity permutation. Thus, if a graph has at least two vertices and is vertex-transitive, it must admit a non-trivial automorphism that maps one vertex to another. So while such a vertex-transitive graph may not be symmetric, it isn't asymmetric either.

Read on website | #monthly | #mathematics

]]>
Nerd Quiz #4 https://susam.net/code/news/nq/4.0.0.html nqfou Sun, 22 Feb 2026 00:00:00 +0000 Nerd Quiz #4 is the fourth instalment of Nerd Quiz, a single page HTML application that challenges you to measure your inner geek with a brief quiz. Each question in the quiz comes from everyday moments of reading, writing, thinking, learning and exploring.

This release introduces five new questions drawn from a range of topics, including computing history, graph theory and Unix. Visit Nerd Quiz to try the quiz.

A community discussion page is available here. You are very welcome to share your score or discuss the questions there.

Read on website | #web | #miscellaneous | #game

]]>
Deep Blue: Chess vs Programming https://susam.net/deep-blue.html dblue Sun, 15 Feb 2026 00:00:00 +0000 I remember how dismayed Kasparov was after losing the 1997 match to IBM's Deep Blue, although his views on Deep Blue became more balanced with time and he accepted that we had entered a new era in which computers would outperform grandmasters at chess.

Still, chess players can take comfort in the fact that chess is still played between humans. Players make their name and fame by beating other humans because playing against computers is no longer interesting as a competition.

Many software developers would like to have similar comfort. But that comfort is harder to find, because unlike chess, building prototypes or PoCs is not seen as a sport or art form. It is mostly seen as a utility. So while brain-coding a PoC may still be intellectually satisfying for the programmer, to most other people it only matters that the thing works. That means that programmers do not automatically get the same protected space that chess players have, where the human activity itself remains valued even after machines become stronger. The activity programmers enjoy may continue but the recognition and economic value attached to it may shrink.

So I think the big adjustment software developers have to make is this: The craft will still exist and we will still enjoy doing it but the credit and value will increasingly go to those who define problems well, connect systems, make good product decisions and make technology useful in messy real-world situations. It has already been this way for a while and will only become more so as time goes by.


This note reproduces a recent comment I posted in a Lobsters forum thread about LLM-assisted software development at lobste.rs/s/qmjejh.

See also: Three Inverse Laws of AI and Robotics.

Read on website | #miscellaneous

]]>
Soju User Delete Hash https://susam.net/soju-user-delete-hash.html sudhs Sat, 14 Feb 2026 00:00:00 +0000 In my last post, I talked about switching from ZNC to Soju as my IRC bouncer. One thing that caught my attention while creating and deleting Soju users was that the delete command asks for a confirmation, like so:

$ sudo sojuctl user delete soju
To confirm user deletion, send "user delete soju 4664cd"
$ sudo sojuctl user delete soju 4664cd
deleted user "soju"

That confirmation token for a specific user never changes, no matter how many times we create or delete the user. The confirmation token is not saved in the Soju database, as the following query confirms:

$ sudo sqlite3 -table /var/lib/soju/main.db 'SELECT * FROM User'
+----+----------+--------------------------------------------------------------+-------+----------+------+--------------------------+---------+--------------------------+--------------+
| id | username |                           password                           | admin | realname | nick |        created_at        | enabled | downstream_interacted_at | max_networks |
+----+----------+--------------------------------------------------------------+-------+----------+------+--------------------------+---------+--------------------------+--------------+
| 1  | soju     | $2a$10$yRj/oYlR2Zwd8YQxZPuAQuNo2j7FVJWeNdIAHF2MinYkKLmBjtf0y | 0     |          |      | 2026-02-16T13:49:46.119Z | 1       |                          | -1           |
+----+----------+--------------------------------------------------------------+-------+----------+------+--------------------------+---------+--------------------------+--------------+

Surely, then, the confirmation token is derived from the user definition? Yes, indeed it is. This can be confirmed in the source code. Quoting the most relevant part:

hashBytes := sha1.Sum([]byte(username))
hash := fmt.Sprintf("%x", hashBytes[0:3])

Indeed if we compute the same hash ourselves, we get the same token:

$ printf soju | sha1sum | head -c6
4664cd

This allows us to automate the two step Soju user deletion process in a single command:

sudo sojuctl user delete soju "$(printf soju | sha1sum | head -c6)"
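For convenience, the token derivation can be wrapped in a small shell function. This is a hypothetical helper of my own, not part of Soju; it simply mirrors how Soju currently derives the token (the first three bytes of the SHA-1 hash of the username, hex-encoded).

```shell
# Hypothetical helper: compute the Soju deletion confirmation token
# for a given username (first 3 bytes of SHA-1, printed as 6 hex digits).
soju_delete_token() {
    printf '%s' "$1" | sha1sum | head -c6
}

soju_delete_token soju   # prints 4664cd
```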

But of course, the implementation of the confirmation token may change in future and Soju helpfully outputs the deletion command with the confirmation token when we first invoke it without the token, so it is perhaps more prudent to just take that output and feed it back to Soju, like so:

sudo sojuctl $(sudo sojuctl user delete soju | sed 's/.*"\(.*\)"/\1/')

Read on website | #shell | #irc | #technology | #how-to

]]>
From ZNC to Soju https://susam.net/from-znc-to-soju.html fztsj Thu, 12 Feb 2026 00:00:00 +0000 I have recently switched from ZNC to Soju as my IRC bouncer and I am already quite pleased with it. I usually run my bouncer on a Debian machine, where Soju is well packaged and runs smoothly right after installation. By contrast, the ZNC package included with Debian 13 (Trixie) and earlier fails to start after installation because of a missing configuration file. As a result, I was forced to maintain my own configuration file along with a necessary PEM bundle, copy them to the Debian system and carefully set the correct file permissions before I could run ZNC successfully. None of this is necessary with Soju, since installing it from the Debian package repository automatically sets up the configuration and certificate files. I no longer have to manage any configuration or certificate files myself.

Setup

It is quite straightforward to install and set up Soju on Debian. The following two commands install Soju:

sudo apt-get update
sudo apt-get -y install soju

Then setting up an IRC connection involves another two commands:

sudo sojuctl user create -username soju -password YOUR_SOJU_PASSWORD
sudo sojuctl user run soju network create -name bnc1 -addr irc.libera.chat -nick YOUR_NICK -pass YOUR_NICK_PASSWORD

Here, YOUR_SOJU_PASSWORD is a placeholder for a new password you must choose for your Soju user, while YOUR_NICK and YOUR_NICK_PASSWORD are placeholders for your IRC nick and the password registered for it. Finally, we restart Soju as follows:

sudo systemctl restart soju

Database

What previously involved maintaining several files that had to be installed and configured on each machine running ZNC is now reduced to the two sojuctl commands above. Still, the configuration needs to live somewhere. In fact, the two sojuctl commands introduced earlier store the configuration in a SQLite database. Here is a glimpse of what the database looks like:

$ sudo sqlite3 /var/lib/soju/main.db '.tables'
Channel              MessageFTS_data      ReadReceipt
DeliveryReceipt      MessageFTS_docsize   User
Message              MessageFTS_idx       WebPushConfig
MessageFTS           MessageTarget        WebPushSubscription
MessageFTS_config    Network
$ sudo sqlite3 /var/lib/soju/main.db 'SELECT * from User'
1|soju|$2a$10$mM5Qcz8.OPMi9lyWDxPRh.bNxzq7jtLdxcoPl09AYTnqcmLmEqzSO|0|||2026-02-17T23:24:24.926Z|1||-1
$ sudo sqlite3 /var/lib/soju/main.db 'SELECT * from Network'
1|bnc1|1|irc.libera.chat|YOUR_NICK||||YOUR_NICK_PASSWORD|||||||1|1

Client Configuration

Finally, the IRC client can be configured to connect to port 6697 on the system running Soju. Here is an example of how this can be done in Irssi:

/network add -nick YOUR_NICK -user soju/bnc1 net1
/server add -tls -network net1 YOUR_SOJU_HOST 6697 YOUR_SOJU_PASSWORD
/connect net1

You can also set up multiple connections to IRC networks through the same Soju instance. All you need to do is repeat the sojuctl commands to create additional networks such as bnc2, bnc3 and so on, then repeat the configuration in your IRC client using new network names such as net2, net3, etc. These network names are entirely user defined, so you can choose any names you like. The names bnc2, net2 and so on are only examples.
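For instance, a second network might be added like this. This is a sketch under assumptions of my own: the bouncer-side name bnc2 and the OFTC address are arbitrary choices, and the credentials are placeholders. It requires a running Soju installation.

```shell
# Placeholder example: a second connection, named bnc2 on the bouncer
# side, pointing at OFTC (the name and address are arbitrary choices).
sudo sojuctl user run soju network create -name bnc2 -addr irc.oftc.net -nick YOUR_NICK -pass YOUR_NICK_PASSWORD
sudo systemctl restart soju
```

On the client side, the Irssi configuration is then repeated with soju/bnc2 as the user and a new client-side network name such as net2.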

Read on website | #irc | #technology | #how-to

]]>
Twenty Five Years of Computing https://susam.net/twenty-five-years-of-computing.html tfyoc Fri, 06 Feb 2026 00:00:00 +0000 Last year, I completed 20 years in professional software development. I wanted to write a post to mark the occasion back then, but couldn't find the time. This post is my attempt to make up for that omission. In fact, I have been involved in software development for a little longer than 20 years. Although I had my first taste of computer programming as a child, it was only when I entered university about 25 years ago that I seriously got into software development. So I'll start my stories from there. These stories are less about software and more about people. Unlike many posts of this kind, this one offers no wisdom or lessons. It only offers a collection of stories. I hope you'll like at least a few of them.

Contents

Viewing the Source

The first story takes place in 2001, shortly after I joined university. One evening, I went to the university computer laboratory to browse the World Wide Web. Out of curiosity, I typed susam.com into the address bar and landed on its home page. I remember the text and banner looking much larger back then. Display resolutions were lower, so the text and banner covered almost half the screen. I knew very little about the Internet then and I was just trying to make sense of it. I remember wondering what it would take to create my own website, perhaps at susam.com. That's when an older student who had been watching me browse over my shoulder approached and asked if I had created the website. I told him I hadn't and that I had no idea how websites were made. He asked me to move aside, took my seat and clicked View > Source in Internet Explorer. He then explained how websites are made of HTML pages and how those pages are simply text instructions.

Next, he opened Notepad and wrote a simple HTML page that looked something like this:

<BODY><FONT COLOR="RED">HELLO</FONT></BODY>

Yes, we had a FONT tag back then and it was common practice to write HTML tags in uppercase. He then opened the page in a web browser and showed how it rendered. After that, he demonstrated a few more features such as changing the font face and size, centring the text and altering the page's background colour. Although the tutorial lasted only about ten minutes, it made the Web feel far less mysterious and much more fascinating.

That person had an ulterior motive though. After the tutorial, he never returned the seat to me. He just continued browsing the Web and waited for me to leave. I was too timid to ask for my seat back. Seats were limited, so I returned to my dorm room both disappointed that I couldn't continue browsing that day and excited about all the websites I might create with this newfound knowledge. I could never register susam.com for myself though. That domain was always used by some business selling Turkish cuisine. Eventually, I managed to get the next best thing: a .net domain of my own. That brief encounter in the university laboratory set me on a lifelong path of creating and maintaining personal websites.

The Reset Vector

The second story also comes from my university days. One afternoon, I was hanging out with my mates in the computer laboratory. In front of me was an MS-DOS machine powered by an Intel 8086 microprocessor, on which I was writing a lift control program in assembly. In those days, it was considered important to deliberately practise solving made-up problems as a way of honing our programming skills. As I worked on my program, my mind drifted to a small detail about the 8086 microprocessor that we had recently learnt in a lecture. Our professor had explained that, when the 8086 microprocessor is reset, execution begins with CS:IP set to FFFF:0000. So I murmured to anyone who cared to listen, 'I wonder if the system will reboot if I jump to FFFF:0000.' I then opened DEBUG.EXE and jumped to that address.

C:\>DEBUG
-G =FFFF:0000

The machine rebooted instantly. One of my friends, who topped the class every semester, had been watching over my shoulder. As soon as the machine restarted, he exclaimed, 'How did you do that?' I explained that the reset vector is located at physical address FFFF0 and that the CS:IP value FFFF:0000 maps to that address in real mode. After that, I went back to working on my lift control program and didn't think much more about the incident.
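As an aside, the address arithmetic mentioned above is easy to check for yourself. In real mode, the physical address is simply the segment multiplied by 16 plus the offset. Here is a tiny JavaScript sketch of that calculation (purely illustrative; the function name is mine):

```javascript
// Real-mode segmented addressing on the 8086:
// physical address = segment * 16 + offset
function physicalAddress (segment, offset) {
  return segment * 16 + offset
}

// CS:IP = FFFF:0000 maps to physical address FFFF0, the reset vector.
console.log(physicalAddress(0xffff, 0x0000).toString(16)) // ffff0
```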

About a week later, the same friend came to my dorm room. He sat down with a grave look on his face and asked, 'How did you know to do that? How did it occur to you to jump to the reset vector?' I must have said something like, 'It just occurred to me. I remembered that detail from the lecture and wanted to try it out.' He then said, 'I want to be able to think like that. I come top of the class every semester, but I don't think the way you do. I would never have thought of taking a small detail like that and testing it myself.' I replied that I was just curious to see whether what we had learnt actually worked in practice. He responded, 'And that's exactly it. It would never occur to me to try something like that. I feel disappointed that I keep coming top of the class, yet I am not curious in the same way you are. I've decided I don't want to top the class anymore. I just want to explore and experiment with what we learn, the way you do.'

That was all he said before getting up and heading back to his dorm room. I didn't take it very seriously at the time. I couldn't imagine why someone would willingly give up the accomplishment of coming first every semester. But he kept his word. He never topped the class again. He still ranked highly, often within the top ten, but he kept his promise of never finishing first again. To this day, I feel a mix of embarrassment and pride whenever I recall that incident. With a single jump to the processor's reset entry point, I had somehow inspired someone to step back from academic competition in order to have more fun with learning. Of course, there is no reason one cannot do both. But in the end, that was his decision, not mine.

Man in the Middle

In my first job after university, I was assigned to a technical support team where part of my work involved running an installer to deploy a specific component of an e-banking product for customers, usually large banks. As I learnt to use the installer, I realised how fragile it was. The installer, written in Python, often failed because of incorrect assumptions about the target environment and almost always required some manual intervention to complete successfully. During my first week on the project, I spent much of my time stabilising the installer and writing a step-by-step user guide explaining how to use it. The result was well received by both my seniors and management. To my surprise, the user guide received more praise than the improvements I made to the installer. While the first few weeks were productive, I soon realised I would not find the work fulfilling for long. I wrote to management a few times to ask whether I could transfer to a team where I could work on something more substantial.

My emails were initially met with resistance. After several rounds of discussion, someone who had heard about my situation reached out and suggested a team whose manager might be interested in interviewing me. The team was based in a different city. I was young and willing to relocate wherever I could find good work, so I immediately agreed to the interview.

This was in 2006, when video conferencing was not yet common. On the day of the interview, the hiring manager called me on my office desk phone. He began by introducing the team, which was called Archie, short for architecture. The team developed and maintained the web framework and core architectural components on which the entire e-banking product was built. The product had existed long before open source frameworks such as Spring or Django came into existence, so features such as API routing, authentication and authorisation layers, cookie management, etc. were all implemented in-house as Java Servlets and JavaServer Pages (JSP). Since the software was used in banking environments, it also had to pass security testing and regular audits to minimise the risk of serious flaws.

The interview began well. He asked several questions related to software security, such as what SQL injection is, how it can be prevented and how one might design a web framework that mitigates cross-site scripting attacks. He also asked me a few programming questions, most of which I answered pretty well. Towards the end, however, he asked how we could prevent MITM attacks. I had never heard the term, so I admitted that I did not know what MITM meant. He then asked, 'Man in the middle?' but I still had no idea what that meant or whether it was even a software engineering concept. He replied, 'Learn everything you can about PKI and MITM. We need to build a digital signatures feature for one of our corporate banking products. That's the first thing we'll work on.'

Over the next few weeks, I studied RFCs and documentation related to public key infrastructure, public key cryptography standards and related topics. At first, the material felt intimidating, but after spending time each evening reading whatever relevant literature I could find, things gradually began to make sense. Concepts that initially seemed complex and overwhelming eventually felt intuitive and elegant. I relocated to the new city a few weeks later and delivered the digital signatures feature about a month after joining the team. We used the open source Bouncy Castle library to implement the feature. After that project, I worked on other parts of the product too. The most rewarding part was knowing that the code I was writing became part of a mature product used by hundreds of banks and millions of users. It was especially satisfying to see the work pass security testing and audits and be considered ready for release.

That was my first real engineering job. My manager also turned out to be an excellent mentor. Working with him helped me develop new skills and his encouragement gave me confidence that stayed with me for years. Nearly two decades have passed since then, yet the product is still in service and continues to be actively developed. In fact, in my current phase of life I sometimes encounter it as a customer. Occasionally, I open the browser's developer tools to view the page source where I can still see traces of the HTML generated by code I wrote almost twenty years ago.

Spaghetti Code

Around 2007 or 2008, I began working on a proof of concept for developing widgets for an OpenTV set-top box. The work involved writing code in a heavily trimmed-down version of C. One afternoon, while making good progress on a few widgets, I noticed that they would occasionally crash at random. I tried tracking down the bugs, but I was finding it surprisingly difficult to understand my own code. I had managed to produce some truly spaghetti code full of dubious pointer operations that were almost certainly responsible for the crashes, yet I could not pinpoint where exactly things were going wrong.

Ours was a small team of four people, each working on an independent proof of concept. The most senior person on the team acted as our lead and architect. Later that afternoon, I showed him my progress and explained that I was still trying to hunt down the bugs causing the widgets to crash. He asked whether he could look at the code. After going through it briefly and probably realising that it was a bit of a mess, he asked me to send him the code as a tarball, which I promptly did.

He then went back to his desk to study the code. I remember thinking that there was no way he was going to find the problem anytime soon. I had been debugging it for hours and barely understood what I had written myself; it was the worst spaghetti code I had ever produced. With little hope of a quick solution, I went back to debugging on my own.

Barely five minutes later, he came back to my desk and asked me to open a specific file. He then showed me exactly where the pointer bug was. It had taken him only a few minutes not only to read my tangled code but also to understand it well enough to identify the fault and point it out. As soon as I fixed that line, the crashes disappeared. I was genuinely in awe of his skill.

I have always loved computing and programming, so I had assumed I was already fairly good at it. That incident, however, made me realise how much further I still had to go before I could consider myself a good software developer. I did improve significantly in the years that followed and today I am far better at managing software complexity than I was back then.

Animated Television Widgets

In another project from that period, we worked on a different set-top box platform that supported Java Micro Edition (Java ME) for widget development. One day, the same architect from the previous story asked whether I could add animations to the widgets. I told him that I believed it should be possible, though I'd need to test it to be sure. Before continuing with the story, I need to explain how the different stakeholders in the project were organised.

Our small team effectively played the role of the software vendor. The final product going to market would carry the brand of a major telecom carrier, offering direct-to-home (DTH) television services, with the set-top box being one of the products sold to customers. The set-top box was manufactured by another company. So the project was a partnership between three parties: our company as the software vendor, the telecom carrier and the set-top box manufacturer. The telecom carrier wanted to know whether widgets could be animated on screen with smooth slide-in and slide-out effects. That was why the architect approached me to ask whether it could be done.

I began working on animating the widgets. Meanwhile, the architect and a few senior colleagues attended a business meeting with all the partners present. During the meeting, he explained that we were evaluating whether widget animations could be supported. The set-top box manufacturer immediately dismissed the idea, saying, 'That's impossible. Our set-top box does not support animation.' When the architect returned and shared this with us, I replied, 'I do not understand. If I can draw a widget, I can animate it too. All it takes is clearing the widget and redrawing it at slightly different positions repeatedly. In fact, I already have a working version.' I then showed a demo of the animated widgets running on the emulator.

The following week, the architect attended another partners' meeting where he shared updates about our animated widgets. I was not personally present, so what follows is second-hand information passed on by those who were there. I learnt that the set-top box company reacted angrily. For some reason, they were unhappy that we had managed to achieve results using their set-top box and APIs that they had officially described as impossible. They demanded that we stop work on animation immediately, arguing that our work could not be allowed to contradict their official position. At that point, the telecom carrier's representative intervened and bluntly told the set-top box representative to just shut up. If the set-top box guy was furious, the telecom guy was even more so: 'You guys told us animation was not possible and these people are showing that it is! You manufacture the set-top box. How can you not know what it is capable of?'

Meanwhile, I continued working on the proof of concept. It worked very well in the emulator, but I did not yet have access to the actual hardware. The device was still in the process of being shipped to us, so all my early testing had to be done on the emulator. The following week, the architect planned to travel to the set-top box company's office to test my widgets on the real hardware.

At the time, I was quite proud of demonstrating results that even the hardware maker believed were impossible. When the architect eventually travelled to test the widgets on the actual device, a problem emerged. What looked like buttery smooth animation on the emulator appeared noticeably choppy on a real television. Over the next few weeks, I experimented with frame rates, buffering strategies and optimising the computation done in the rendering loop. Each week, the architect travelled for testing and returned with the same report: the animation had improved somewhat, but it still remained choppy. The modest embedded hardware simply could not keep up with the required computation and rendering. In the end, the telecom carrier decided that no animation was better than poor animation and dropped the idea altogether. So the set-top box developers turned out to be correct after all.

Good Blessings

Back in 2009, after completing about a year at RSA Security, I began looking for work that felt more intellectually stimulating, especially projects involving mathematics and algorithms. I spoke with a few senior leaders about this, but nothing materialised for some time. Then one day, Dr Burt Kaliski, Chief Scientist at RSA Laboratories, asked to meet me to discuss my career aspirations. I have written about this in more detail in another post here: Good Blessings. I will summarise what followed.

Dr Kaliski met me and offered a few suggestions about the kinds of teams I might approach to find more interesting work. I followed his advice and eventually joined a team that turned out to be an excellent fit. I remained with that team for the next six years. During that time, I worked on parser generators, formal language specification and implementation, as well as indexing and querying engines of a petabyte-scale database. I learnt something new almost every day during those six years. It remains one of the most enjoyable periods of my career. I have especially fond memories of working on parser generators alongside remarkably skilled engineers from whom I learnt a lot.

Years later, I reflected on how that brief meeting with Dr Kaliski had altered the trajectory of my career. I realised I was not sure whether I had properly expressed my gratitude to him for the role he had played in shaping my path. So I wrote to thank him and explain how much that single conversation had influenced my life. A few days later, Dr Kaliski replied, saying he was glad to know that the steps I took afterwards had worked out well. Before ending his message, he wrote this heart-warming note:

‘One of my goals is to be able to provide encouragement to others who are developing their careers, just as others have invested in mine, passing good blessings from one generation to another.’

The CTF Scoreboard

This story comes from 2019. By then, I was no longer a twenty-something engineer just starting out. I was now a middle-aged staff engineer with years of experience building both low-level networking systems and database systems. Most of my work up to that point had been in C and C++. I was now entering a new phase of my career where I would be leading the development of microservices written in Go and Python. Like many people in this profession, I have long counted computing among my favourite hobbies. So although my professional work for the previous decade had focused on C and C++, I had plenty of hobby projects in other languages, including Python and Go. As a result, switching gears from systems programming to application development was a smooth transition for me. I cannot even say that I missed working in C and C++. After all, who wants to spend their days chasing memory bugs in core dumps when they could be building features and delivering real value to customers?

In October 2019, during Cybersecurity Awareness Month, a Capture the Flag (CTF) event was organised at our office. The contest featured all kinds of technical puzzles, ranging from SQL injection challenges to insecure cryptography problems. Some challenges also involved reversing binaries and exploiting stack overflow issues.

I am usually rather intimidated by such contests. The whole idea of competitive problem-solving under time pressure tends to make me nervous. But one of my colleagues persuaded me to participate in the CTF. And, somewhat to my surprise, I turned out to be rather good at it. Within about eight hours, I had solved roughly 90% of the puzzles. I finished at the top of the scoreboard.

Scoreboard of a Capture the Flag (CTF) event
CTF Scoreboard

In my younger days, I was generally known to be a good problem solver. I was often consulted when thorny problems needed solving and I usually managed to deliver results. I also enjoyed solving puzzles. I had a knack for them and happily spent hours, sometimes days, working through obscure mathematical or technical puzzles and sharing detailed write-ups with friends of the nerd variety. Seen in that light, my performance at the CTF probably should not have surprised me. Still, I was very pleased. It was reassuring to know that I could still rely on my systems programming experience to solve obscure challenges.

During the course of the contest, my performance became something of a talking point in the office. Colleagues occasionally stopped by my desk to appreciate my progress in the CTF. Two much younger colleagues, both engineers I admired for their skill and professionalism, were discussing the results nearby. They were speaking softly, but I could still overhear parts of their conversation. Curious, I leaned slightly and listened a bit more carefully. I wanted to know what these two people, whom I admired a lot, thought about my performance.

One of them remarked on how well I was doing in the contest. The other replied, 'Of course he is doing well. He has more than ten years of experience in C.' At that moment, I realised that no matter how well I solved those puzzles, the result would naturally be credited to experience. In my younger days, when I solved tricky problems like these, people would sometimes call me smart. Now people simply saw it as a consequence of my experience. Not that I particularly care for labels such as 'smart' anyway, but it did make me realise how things had changed. I was now simply the person with many years of experience. Solving technical puzzles that involved disassembling binaries, tracing execution paths and reconstructing program logic was expected rather than remarkable.

I continue to sharpen my technical skills to this day. While my technical results may now simply be attributed to experience, I hope I can continue to make a good impression through my professionalism, ethics and kindness towards the people I work with. If those leave a lasting impression, that is good enough for me.

Read on website | #technology | #programming

]]>
Jan '26 Notes https://susam.net/26a.html ntjts Thu, 29 Jan 2026 00:00:00 +0000 In these monthly notes, I jot down ideas and references I encountered during the month that I did not have time to expand into their own posts. A few of these may later develop into independent posts but most of them will likely not. In any case, this format ensures that I record them here. I spent a significant part of this month studying the book Algebraic Graph Theory by Godsil and Royle, so many of the notes here are about it. There are a few non-mathematical, technical notes towards the end.

Contents

  1. Cayley Graphs
  2. Vertex-Transitive Graphs
  3. Arc-Transitive Graphs
  4. Bipartite Graphs and Cycle Parity
  5. Tutte's Theorem
  6. Tutte's 8-Cage
  7. Linear Congruential Generator
  8. Numbering Lines

Cayley Graphs

Let \( G \) be a group and let \( C \subseteq G \) such that \( C \) is closed under taking inverses and does not contain the identity, i.e. \[ \forall x \in C, \; x^{-1} \in C, \qquad e \notin C. \] Then the Cayley graph \( X(G, C) \) is the graph with the vertex set \( V(X(G, C)) \) and edge set \( E(X(G, C)) \) defined by \begin{align*} V(X(G, C)) &= G, \\ E(X(G, C)) &= \{ gh : hg^{-1} \in C \}. \end{align*} The set \( C \) is known as the connection set.
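To make the definition concrete, here is a small JavaScript sketch (my own illustration, not from the book) that builds the edge set of the Cayley graph of the cyclic group \( \mathbb{Z}_n \) under addition modulo \( n, \) where the inverse of \( c \) is \( n - c \):

```javascript
// Build the Cayley graph of the cyclic group Z_n (addition modulo n)
// with connection set C.  Vertex g is adjacent to g + c for each c in
// C.  The inverse of c in Z_n is n - c.
function cayleyGraph (n, connectionSet) {
  for (const c of connectionSet) {
    if (c % n === 0) throw new Error('identity in connection set')
    if (!connectionSet.includes((n - c) % n)) {
      throw new Error('connection set not closed under inverses')
    }
  }
  const edges = []
  for (let g = 0; g < n; g++) {
    for (const c of connectionSet) {
      const h = (g + c) % n
      if (g < h) edges.push([g, h]) // record each undirected edge once
    }
  }
  return edges
}

// X(Z_6, {1, 5}) is the cycle 0-1-2-3-4-5-0.
console.log(JSON.stringify(cayleyGraph(6, [1, 5])))
// [[0,1],[0,5],[1,2],[2,3],[3,4],[4,5]]
```

Every vertex of a Cayley graph has valency \( \lvert C \rvert, \) so Cayley graphs are always regular.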

Vertex-Transitive Graphs

A graph \( X \) is vertex-transitive if its automorphism group acts transitively on its set of vertices \( V(X). \) Intuitively, this means that no vertex has a special role. We can 'move' the graph around so that any chosen vertex becomes any other vertex. In other words, all vertices are indistinguishable. The graph looks the same from each vertex.

The \( k \)-cube \( Q_k \) is vertex-transitive. So are the Cayley graphs \( X(G, C). \) However, the path graph \( P_3 \) is not vertex-transitive since no automorphism can send the middle vertex of valency \( 2 \) to an end vertex of valency \( 1. \)

Arc-Transitive Graphs

The cube \( Q_3 \) is \( 2 \)-arc-transitive but not \( 3 \)-arc-transitive. In \( Q_3, \) a \( 3 \)-arc belonging to a \( 4 \)-cycle cannot be sent to a \( 3 \)-arc that does not belong to a \( 4 \)-cycle. This is easy to explain. The end vertices of a \( 3 \)-arc belonging to a \( 4 \)-cycle are adjacent but the end vertices of a \( 3 \)-arc not belonging to a \( 4 \)-cycle are not adjacent. Therefore, no automorphism can map the end vertices of the first \( 3 \)-arc to those of the second \( 3 \)-arc.

For intuition, imagine that a traveller stands on a vertex and chooses an edge to move along. They do this \( s \) times thereby walking along an arc of length \( s, \) also known as an \( s \)-arc. By the definition of \( s \)-arcs, the traveller is not allowed to backtrack from one vertex to the previous one immediately. In an \( s \)-arc-transitive graph, these arcs look the same no matter which vertex they start from or which edges they choose. In the cube, this is indeed true for \( s = 2. \) All arcs of length \( 2 \) are indistinguishable. No matter which arc of length \( 2 \) the traveller has walked along, the graph would look the same from their perspective at each vertex along the arc. However, this no longer holds good for arcs of length \( 3 \) since there are two distinct kinds of arcs of length \( 3. \) The first kind ends at a distance of \( 1 \) from the starting vertex of the arc (when the arc belongs to a \( 4 \)-cycle). The second kind ends at a distance \( 3 \) from the starting vertex of the arc (when the arc does not belong to a \( 4 \)-cycle). Therefore the cube is not \( 3 \)-arc-transitive.

Bipartite Graphs and Cycle Parity

A graph is bipartite if and only if it contains no cycle of odd length. Equivalently, a graph is bipartite if and only if every cycle in it has even length.
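This characterisation can be checked mechanically with a 2-colouring. The following JavaScript sketch (my own illustration) tries to 2-colour a graph using breadth-first search; the attempt fails precisely when the graph contains an odd cycle:

```javascript
// A graph is bipartite iff its vertices can be 2-coloured so that
// adjacent vertices get different colours.  An odd cycle forces two
// adjacent vertices to share a colour, making the colouring fail.
function isBipartite (n, edges) {
  const adj = Array.from({ length: n }, () => [])
  for (const [u, v] of edges) {
    adj[u].push(v)
    adj[v].push(u)
  }
  const colour = new Array(n).fill(-1)
  for (let start = 0; start < n; start++) {
    if (colour[start] !== -1) continue
    colour[start] = 0
    const queue = [start]
    while (queue.length > 0) {
      const u = queue.shift()
      for (const v of adj[u]) {
        if (colour[v] === -1) {
          colour[v] = 1 - colour[u]
          queue.push(v)
        } else if (colour[v] === colour[u]) {
          return false // odd cycle found
        }
      }
    }
  }
  return true
}

// The 4-cycle is bipartite; the triangle is not.
console.log(isBipartite(4, [[0, 1], [1, 2], [2, 3], [3, 0]])) // true
console.log(isBipartite(3, [[0, 1], [1, 2], [2, 0]])) // false
```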

Tutte's Theorem

For any \( s \)-arc-transitive cubic graph, \( s \le 5. \) This was demonstrated by W. T. Tutte in 1947. A proof can be found in Chapter 18 of Algebraic Graph Theory by Norman Biggs.

In 1981, Richard Weiss established a more general theorem: for any \( s \)-arc-transitive graph of valency at least three, \( s \le 7. \) The bound is weaker, but it applies to all graphs of valency at least three rather than only to cubic ones.

Tutte's 8-Cage

The book Algebraic Graph Theory by Godsil and Royle offers the following two descriptions of Tutte's 8-cage on 30 vertices:

  1. Take the cube and an additional vertex \( \infty. \) In each set of four parallel edges, join the midpoints of each pair of opposite edges by an edge, then join the midpoints of the two new edges by an edge, and finally join the midpoint of this edge to \( \infty. \)
  2. Construct a bipartite graph \( T \) with the fifteen edges of \( K_6 \) as one colour class and the fifteen \( 1 \)-factors of \( K_6 \) as the other, where each edge is adjacent to the three \( 1 \)-factors that contain it.

It can be shown that both descriptions construct a cubic bipartite graph on \( 30 \) vertices of girth \( 8. \) It can be further shown that there is a unique cubic bipartite graph on \( 30 \) vertices with girth \( 8. \) As a result, both descriptions above construct the same graph.

Linear Congruential Generator

Here is a simple linear congruential generator (LCG) implementation in JavaScript:

function srand (seed) {
  let x = seed
  return function () {
    // Multiplier and increment from Numerical Recipes; modulus 2^32.
    x = (1664525 * x + 1013904223) % 4294967296
    return x
  }
}

Here is an example usage:

> const rand = srand(0)
undefined
> rand()
1013904223
> rand()
1196435762
> rand()
3519870697
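Incidentally, one may wonder whether the multiplication above can lose precision, since JavaScript numbers are double-precision floats. It cannot, as this quick check shows:

```javascript
// The largest intermediate value the LCG can produce is
// 1664525 * (2^32 - 1) + 1013904223, roughly 7.15e15, which is below
// 2^53, the limit of exact integer arithmetic in JavaScript doubles.
// Hence the update never loses precision.
const maxIntermediate = 1664525 * 4294967295 + 1013904223
console.log(maxIntermediate <= Number.MAX_SAFE_INTEGER) // true
```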

Numbering Lines

Both BSD and GNU cat can number output lines with the -n option. For example:

$ printf 'foo\nbar\nbaz\n' | cat -n
     1  foo
     2  bar
     3  baz

However, I have always used nl for this. For example:

$ printf 'foo\nbar\nbaz\n' | nl
     1  foo
     2  bar
     3  baz

While nl is specified in POSIX, the cat -n option is not.
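The two commands are not identical either. By default, nl numbers only non-empty lines, while cat -n numbers every line, including blank ones. A quick way to see the difference is to count the numbered lines in each output:

```shell
# nl leaves the blank line unnumbered; cat -n numbers it.
printf 'foo\n\nbar\n' | nl | grep -c '[0-9]'     # prints 2
printf 'foo\n\nbar\n' | cat -n | grep -c '[0-9]' # prints 3
```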

Read on website | #monthly | #mathematics | #programming | #javascript | #shell

]]>
QuickQWERTY 1.2.1 https://susam.net/code/news/quickqwerty/1.2.1.html qqoto Tue, 27 Jan 2026 00:00:00 +0000 QuickQWERTY 1.2.1 is now available. QuickQWERTY is a web-based touch typing tutor for QWERTY keyboards that runs directly in the web browser.

This release contains a minor bug fix in Unit 4.3. Unit 4.3 is a 'Control' unit that lets you practise typing partial words as well as full words. In one place in this unit, the following sequence of partial and full words occurs:

l li lime lime

The full word lime was incorrectly repeated. This has been fixed to:

l li lim lime

To try out QuickQWERTY, go to quickqwerty.html.

Read on website | #web | #programming

]]>
Attention Media ≠ Social Networks https://susam.net/attention-media-vs-social-networks.html amnsm Tue, 20 Jan 2026 00:00:00 +0000 When web-based social networks started flourishing nearly two decades ago, they were genuinely social networks. You would sign up for a popular service, follow people you knew or liked and read updates from them. When you posted something, your followers would receive your updates as well. Notifications were genuine. The little icons in the top bar would light up because someone had sent you a direct message or engaged with something you had posted. There was also, at the beginning of this millennium, a general sense of hope and optimism around technology, computers and the Internet. Social networking platforms were part of what was called Web 2.0, a term used for websites built around user participation and interaction. It felt as though the information superhighway was finally reaching its potential. But sometime between 2012 and 2016, things took a turn for the worse.

First came the infamous infinite scroll. I remember feeling uneasy the first time a web page no longer had a bottom. Logically, I knew very well that everything a browser displays is a virtual construct. There is no physical page. It is just pixels pretending to be one. Still, my brain had learnt to treat web pages as objects with a beginning and an end. The sudden disappearance of that end disturbed my sense of ease.

Then came the bogus notifications. What had once been meaningful signals turned into arbitrary prompts. Someone you followed had posted something unremarkable and the platform would surface it as a notification anyway. It didn't matter whether the notification was relevant to me. The notification system stopped serving me and started serving itself. It felt like a violation of an unspoken agreement between users and services. Despite all that, these platforms still remained social in some diluted sense. Yes, the notifications were manipulative, but they were at least about people I actually knew or had chosen to follow. That, too, would change.

Over time, my timeline contained fewer and fewer posts from friends and more and more content from random strangers. Using these services began to feel like standing in front of a blaring loudspeaker, broadcasting fragments of conversations from all over the world directly in my face. That was when I gave up on these services. There was nothing social about them anymore. They had become attention media. My attention is precious to me. I cannot spend it mindlessly scrolling through videos that have neither relevance nor substance.

But where one avenue disappeared, another emerged. A few years ago, I stumbled upon Mastodon and it reminded me of the early days of Twitter. Back in 2006, I followed a small number of folks of the nerd variety on Twitter and received genuinely interesting updates from them. But when I log into the ruins of those older platforms now, all I see are random videos presented to me for reasons I can neither infer nor care about. Mastodon, by contrast, still feels like social networking in the original sense. I follow a small number of people I genuinely find interesting and I receive their updates and only their updates. What I see is the result of my own choices rather than a system trying to capture and monetise my attention. There are no bogus notifications. The timeline feels calm and predictable. If there are no new updates from people I follow, there is nothing to see. It feels closer to how social networks used to work originally. I hope it stays that way.

Read on website | #web | #technology

]]>
Nested Code Fences in Markdown https://susam.net/nested-code-fences.html ncfim Mon, 19 Jan 2026 00:00:00 +0000 Today, we will meet a spiky-haired nerd named Corey Dumm, who normally lives within Markdown code fences. We will get to know him a bit, smile with him when his fences hold and weep quietly when misfortune strikes.

One of the caveats of the Markdown universe is the wide variety of Markdown implementations available. In these parallel universes, the rules of Markdown rendering differ subtly. In this post, we will focus only on the CommonMark specification. Since GitHub Flavoured Markdown (GFM) is a strict superset of CommonMark, whatever we discuss here applies equally well to both CommonMark and GFM.

Contents

  1. Basic Code Fences
  2. Fancy Code Fences
  3. Basic Code Spans
  4. Fancy Code Spans

Basic Code Fences

Corey had a knack for working with computers ever since he was a kid.

Corey at his computer:

```
(o_o)--.|[_]|
```

Everything was perfect in Corey's world. The CommonMark renderer would convert the Markdown above to the following HTML:

Corey at his computer:

(o_o)--.|[_]|
View HTML
<p>Corey at his computer:</p>
<pre><code>(o_o)--.|[_]|
</code></pre>

At this point, all was well. Corey grew quickly. Before long, he had a head full of spiky hair. Then the fences began to matter.

Corey, all grown up:

```
 ```
(o_o)--.|[_]|
```

Let us see how this renders. I must warn you that during the Markdown-to-HTML translation, Corey loses his hair. Some viewers may find the following scene disturbing. Viewer discretion is advised. Here is the rendered HTML:

Corey, all grown up:

(o_o)--.|[_]|

View HTML
<p>Corey, all grown up:</p>
<pre><code></code></pre>
<p>(o_o)--.|[_]|</p>
<pre><code></code></pre>

Corey's hair is gone! What a catastrophic accident! Corey is alright, though. He is still quite afraid of Markdown fences, but otherwise well and bouncing back. Why did this happen? The second set of triple backticks immediately ends the fenced code block started by the first set of triple backticks. As a result, Corey's smiley face ends up outside the fenced code block. The triple backticks that were once Corey's hair are now woven into the fabric of the surrounding HTML. Fortunately, CommonMark offers a few ways to avoid such accidents.

Fancy Code Fences

In CommonMark, fenced code blocks are most commonly written using triple backticks. However, the specification also allows tildes as an alternative fence. This can be useful when the code itself contains backticks. Let us see an example:

Corey, all grown up:

~~~
 ```
(o_o)--.|[_]|
~~~

In fact, a code fence need not consist of exactly three backticks or tildes. Any number of backticks or tildes is allowed, as long as that number is at least three. The following is therefore equivalent:

Corey, all grown up:

~~~~~
 ```
(o_o)--.|[_]|
~~~~~

And so is this:

Corey, all grown up:

`````
 ```
(o_o)--.|[_]|
`````

All three examples render like this:

Corey, all grown up:

 ```
(o_o)--.|[_]|
View HTML
<p>Corey, all grown up:</p>
<pre><code> ```
(o_o)--.|[_]|
</code></pre>

No hair is lost in translation.
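The closing-fence rule we have just relied on (same fence character, at least as many of them) can be sketched as a tiny, hypothetical Python helper. This is an illustrative toy based on the CommonMark rules for fenced code blocks, not a full parser; the function name and signature are my own invention:

```python
import re

def closes_fence(line, open_char, open_len):
    """Return True if `line` closes a fenced code block that was
    opened with `open_len` copies of `open_char` ('`' or '~').
    Per CommonMark, a closing fence uses the same character as the
    opening fence, is at least as long, may be indented by up to
    three spaces and carries no other text."""
    m = re.match(r"^ {0,3}(`{3,}|~{3,})\s*$", line)
    if not m:
        return False
    fence = m.group(1)
    return fence[0] == open_char and len(fence) >= open_len
```

With this rule in hand, it is easy to see why five backticks or tildes happily close a block whose contents include a run of three backticks, while three backticks would end it immediately.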

Basic Code Spans

A similar problem arises with inline code spans. Most Markdown users know to use backticks to delimit inline code spans. For example:

An old picture of Corey at his computer: `(o_o)--.|[_]|`

This produces the following output:

An old picture of Corey at his computer: (o_o)--.|[_]|

View HTML
<p>An old picture of Corey at his computer: <code>(o_o)--.|[_]|</code></p>

However, what do we do when we need to put Corey's dear friend Becky Trace within an inline code span? Becky has short, straight hair tucked neatly on either side of her face. Here's a picture of her:

`(o_o)`

I believe you can already see the difficulty here. Inline code spans use backticks as delimiters. So when we put Becky within a code span, the first backtick in Becky's face would terminate the code span immediately and then the rest of Becky would lie outside it. CommonMark offers solutions for this kind of situation as well.

Fancy Code Spans

An inline code span delimiter need not consist of exactly one backtick. It can consist of any number of backticks. So `foo` and ``foo`` produce identical HTML. There is another important but less well-known detail. When the text inside an inline code span begins and ends with spaces, one space is removed from each end before rendering. So `foo` and ` foo ` are equivalent. Therefore, when we need to put backticks within an inline code span, we can start the code span using multiple backticks and a space. For example:

Meet Corey's friend Becky Trace: `` `(o_o)` ``

Here is the rendered output:

Meet Corey's friend Becky Trace: `(o_o)`

View HTML
<p>Meet Corey's friend Becky Trace: <code>`(o_o)`</code></p>

Becky has her hair intact too. We have avoided the mishap that once caused great distress to Corey. That, my friends, is how backticks survive nesting in Markdown.
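The trick of choosing a longer delimiter and padding with spaces can be automated. Here is a hypothetical Python helper, written as a sketch of the technique described above; it picks a backtick delimiter longer than any backtick run inside the text and pads with spaces where needed:

```python
import re

def code_span(text):
    """Wrap `text` in a Markdown inline code span, using a backtick
    delimiter longer than any backtick run inside the text, and
    padding with spaces when the text starts or ends with a
    backtick (or a space), so that nothing is lost in rendering."""
    runs = [len(m) for m in re.findall(r"`+", text)]
    delim = "`" * (max(runs, default=0) + 1)
    if text.startswith(("`", " ")) or text.endswith(("`", " ")):
        text = f" {text} "
    return delim + text + delim
```

For Becky, `code_span` produces exactly the markup used above: a double-backtick delimiter with a space on each side of her face.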

Specification

Before I finish this post, let us take a look at the CommonMark specification to see where these details are defined. The excerpts quoted below are taken from CommonMark Spec Version 0.30, which is by now over four years old.

From section 4.5 Fenced Code Blocks:

A code fence is a sequence of at least three consecutive backtick characters (`) or tildes (~). (Tildes and backticks cannot be mixed.)

The content of the code block consists of all subsequent lines, until a closing code fence of the same type as the code block began with (backticks or tildes), and with at least as many backticks or tildes as the opening code fence.

From section 6.1 Code Spans:

A backtick string is a string of one or more backtick characters (`) that is neither preceded nor followed by a backtick.

A code span begins with a backtick string and ends with a backtick string of equal length. The contents of the code span are the characters between these two backtick strings, normalized in the following ways:

  • First, line endings are converted to spaces.
  • If the resulting string both begins and ends with a space character, but does not consist entirely of space characters, a single space character is removed from the front and back. This allows you to include code that begins or ends with backtick characters, which must be separated by whitespace from the opening or closing backtick strings.
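These two normalisation rules can be transcribed almost verbatim into Python. The following is an illustrative toy, not a CommonMark parser; the function name is my own:

```python
def normalize_code_span(text):
    """Normalise the contents of an inline code span as described
    in CommonMark section 6.1: line endings become spaces; then,
    if the text both begins and ends with a space and does not
    consist entirely of spaces, one space is removed from each end."""
    text = text.replace("\r\n", " ").replace("\n", " ").replace("\r", " ")
    if text.startswith(" ") and text.endswith(" ") and text.strip(" "):
        text = text[1:-1]
    return text
```

This is why ` foo ` and `foo` render identically, while a span consisting only of spaces keeps all of them.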

I hope these little nuggets of Markdown trivialities will one day prove useful in your own Markdown misfortunes.

Read on website | #web | #technology

]]>
Minimal GitHub Workflow https://susam.net/minimal-github-workflow.html mghwf Thu, 15 Jan 2026 00:00:00 +0000 This is a note where I capture the various errors we receive when we create GitHub workflows that are smaller than the smallest possible workflow. I do not know why anyone would ever need this information, and I doubt it will serve any purpose for me either, but sometimes you just want to know things, no matter how useless they might be. This is one of the useless things I wanted to know today.

Contents

Empty Workflow

For the first experiment, we create a zero-byte file and push it to GitHub like this:

mkdir -p .github/workflows/
touch .github/workflows/hello.yml
git add .github/
git commit -m 'Empty workflow'
git push -u origin main

Under the GitHub repo's Actions tab, we find this error:

Error
No event triggers defined in `on`

On

Then we update the workflow as follows:

on:

Now we get this error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 1, Col: 4): Unexpected value '', (Line: 1, Col: 1): Required property is missing: jobs

On Push

Next update:

on: push

Corresponding error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 1, Col: 1): Required property is missing: jobs

Jobs

Workflow:

on: push
jobs:

Error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 2, Col: 6): Unexpected value ''

Job ID

Workflow:

on: push
jobs:
  world:

Error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 3, Col: 9): Unexpected value ''

Steps

Workflow:

on: push
jobs:
  world:
    steps:

Error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 4, Col: 11): Unexpected value '', (Line: 4, Col: 5): Required property is missing: runs-on

Runs On

Workflow:

on: push
jobs:
  world:
    runs-on:
    steps:

Error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 4, Col: 13): Unexpected value '', (Line: 5, Col: 11): Unexpected value ''

Runs On Ubuntu Latest

Workflow:

on: push
jobs:
  world:
    runs-on: ubuntu-latest
    steps:

Error:

Invalid workflow file: .github/workflows/hello.yml#L1
(Line: 5, Col: 11): Unexpected value ''

Empty Steps

Workflow:

on: push
jobs:
  world:
    runs-on: ubuntu-latest
    steps: []

Error:

Invalid workflow file: .github/workflows/hello.yml#L1
No steps defined in `steps` and no workflow called in `uses` for the following jobs: world

Run

Workflow:

on: push
jobs:
  world:
    runs-on: ubuntu-latest
    steps:
      - run:

Success:

▼ Run

  shell: /usr/bin/bash -e {0}

Run Echo

Workflow:

on: push
jobs:
  world:
    runs-on: ubuntu-latest
    steps:
      - run: echo

Success:

▼ Run
  echo
  shell: /usr/bin/bash -e {0}

Hello, World

Workflow:

on: push
jobs:
  world:
    runs-on: ubuntu-latest
    steps:
      - run: echo hello, world

Success:

▼ Run echo hello, world
  echo hello, world
  shell: /usr/bin/bash -e {0}
hello, world
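The first two errors in the progression above amount to GitHub insisting on an `on` trigger and a `jobs` section at the top level. As a caricature of that first layer of validation, here is a hypothetical Python check; it is a naive line scan for illustration only, not a YAML parser, and the function name is my own invention:

```python
def missing_requirements(workflow_text):
    """Toy check mirroring the first errors GitHub reports for a
    workflow file: report which of the required top-level keys
    (`on` and `jobs`) are absent. Naively treats any unindented
    line containing ':' as a top-level key."""
    keys = {line.split(":")[0].strip()
            for line in workflow_text.splitlines()
            if ":" in line and not line.startswith((" ", "\t", "#"))}
    return [k for k in ("on", "jobs") if k not in keys]
```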

The experiments are preserved in the commit history of github.com/spxy/minighwf.

Read on website | #technology

]]>
Three Inverse Laws of AI and Robotics https://susam.net/inverse-laws-of-robotics.html spilr Mon, 12 Jan 2026 00:00:00 +0000 Introduction

Since the launch of ChatGPT in November 2022, generative artificial intelligence (AI) chatbot services have become increasingly sophisticated and popular. These systems are now embedded in search engines, software development tools and office software. For many people, they have quickly become part of everyday computing.

I personally find these services incredibly useful, particularly for exploring unfamiliar topics and as a general productivity aid. However, I also think that the way these services are advertised and consumed can pose a danger, especially if we get into the habit of trusting their output without further scrutiny.

Contents

Pitfalls

Certain design choices in modern AI systems can encourage uncritical acceptance of their output. For example, many popular search engines are already highlighting answers generated by AI at the very top of the page. When this happens, it is easy to stop scrolling, accept the generated answer and move on. Over time, this could inadvertently train users to treat AI as the default authority rather than as a starting point for further investigation. I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete. Such warnings should highlight that habitually trusting AI output can be dangerous. In my experience, even when such warnings exist, they tend to be minimal and visually deemphasised.

In the world of science fiction, there are the Three Laws of Robotics devised by Isaac Asimov, which recur throughout his work. These laws were designed to constrain the behaviour of robots in order to keep humans safe. As far as I know, Asimov never formulated any equivalent laws governing how humans should interact with robots. I think we now need something to that effect to keep ourselves safe. I will call them the Inverse Laws of Robotics. These apply to any situation that requires us humans to interact with a robot, where the term 'robot' refers to any machine, computer program, software service or AI system that is capable of performing complex tasks automatically. I use the term 'inverse' here not in the sense of logical negation but to indicate that these laws apply to humans rather than to robots.

Inverse Laws of Robotics

Here are the three inverse laws of robotics:

  • Humans must not anthropomorphise AI systems.
  • Humans must not blindly trust the output of AI systems.
  • Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.

Non-Anthropomorphism

Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

Modern chatbot systems often sound conversational and empathetic. They use polite phrasing and social cues that closely resemble human interaction. While this makes them easier and more pleasant to use, it also makes it easier to forget what they actually are: large statistical models producing plausible text based on patterns in data.

I think vendors of AI-based chatbot services could do a better job here. In many cases, the systems are deliberately tuned to feel more human rather than more mechanical. I would argue that the opposite approach would be healthier in the long term. A slightly more robotic tone would reduce the likelihood that users mistake fluent language for understanding, judgement or intent.

Whether or not vendors make such changes, the responsibility for avoiding this pitfall still lies with users. We must actively avoid the habit of treating AI systems as social actors or moral agents. Doing so preserves clear thinking about their capabilities and limitations.

Non-Deference

Humans must not blindly trust the output of AI systems. AI-generated content must not be treated as authoritative without independent verification appropriate to its context.

This principle is not unique to AI. In most areas of life, we should not accept information uncritically. In practice, of course, this is not always feasible. Not everyone is an expert in medicine or law, so we often rely on trusted institutions and public health authorities for guidance. However, the guidance published by such institutions is in most cases peer reviewed by experts in their fields. On the other hand, when we receive an answer to a question from an AI chatbot in a private chat session, there has been no peer review of the particular stochastically generated response presented to us. Therefore, the onus of critically examining the response falls on us.

Although AI systems today have become quite impressive at certain tasks, they are still known to produce output that it would be a mistake to rely on. Even if AI systems improve to the point of producing reliable output with high likelihood, their inherent stochastic nature means there would still be a small chance of output that contains errors. This makes them particularly dangerous when used in contexts where errors are subtle but costly. The more serious the potential consequences, the higher the burden of verification should be.

In some applications, such as formulating mathematical proofs or developing software, we can add an automated verification layer in the form of a proof checker or unit tests to verify the output of the AI. In other cases, we must independently verify the output ourselves.

Non-Abdication of Responsibility

Humans must remain fully responsible for decisions involving AI and accountable for the consequences arising from its use. If a negative outcome occurs as a result of following AI-generated advice or decisions, it is not sufficient to say, 'the AI told us to do it'. AI systems do not choose goals, deploy themselves or bear the costs of failure. Humans and organisations do. An AI system is a tool and like any other tool, responsibility for its use rests with the people who decide to rely on it.

This is easier said than done, though. It gets especially tricky in real-time applications like self-driving cars, where a human does not have the opportunity to sufficiently review the decisions taken by the AI system before it acts. Requiring a human driver to remain constantly vigilant does not solve the problem that the AI system often acts in less time than it takes a human to intervene. Despite this rather serious limitation, we must acknowledge that if an AI system fails in such applications, the responsibility for investigating the failure and adding additional guardrails should still fall on the humans responsible for the design of the system.

In all other cases, where there is no physical constraint that prevents a human from reviewing the AI output before it is acted upon, any negative consequence arising from the use of AI must fall entirely on the human decision-maker. As a general principle, we should never accept 'the AI told us so' as an acceptable excuse for harmful outcomes. Yes, the AI may have produced the recommendation but a human decided to follow it, so that human must be held accountable. This is absolutely critical to preventing the indiscriminate use of AI in situations where irresponsible use can cause significant harm.

Conclusion

The three laws outlined above are based on usage patterns I have seen that I feel are detrimental to society. I am hoping that with these three simple laws, we can encourage our fellow humans to pause and reflect on how they interact with modern AI systems, to resist habits that weaken judgement or blur responsibility and to remain mindful that AI is a tool we choose to use, not an authority we defer to.

Read on website | #miscellaneous

]]>
Writing First, Tooling Second https://susam.net/writing-first-tooling-second.html wftss Sat, 10 Jan 2026 00:00:00 +0000 I am a strong proponent of running independent personal websites on your own domains and publishing your writing there. Doing so keeps the web diverse and decentralised, rather than concentrating most writing and discussion inside a small number of large platforms. It gives authors long term control over their work without being subject to changing policies or incentives. I think that a web made up of many small, individually run websites is more resilient and also more interesting than one dominated by a handful of social media services.

I often participate in discussions pertaining to authoring personal websites because this is an area I am passionate about. Any discussion about authoring websites that I take part in seems to drift, sooner or later, into tooling. Aspiring personal website authors worry at length about which blogging engine to use, which static site generator to pick, which templating language to choose and so on. I think none of this is important until you have published at least five articles on your website. Just write plain HTML and worry about tooling later.

This very website you are reading right now began its life as a loose collection of HTML files typed into Notepad on a Windows 98 machine. I wrote some text, wrapped it in basic markup and copied it to my web server's document root directory. That's it. Tooling came much later, in fact many years later. As the number of pages grew, I naturally wanted some consistency. I wanted a common layout for my pages, navigation links, a footer that did not need to be edited in twenty places. An early version of this website used HTML frames to accomplish these goals. Later, I rewrote the website in PHP. Now this website is generated using Common Lisp. All of these additions came later, after I had been maintaining this website for several years. None of this was required to begin.

Around 2006, when blogging had become quite fashionable, I experimented with blogging too. Eventually, I returned to a loose, chaotic collection of pages. I do have an index page and an RSS feed that resemble a blog, but they are simply a list of selected pages arranged in reverse chronological order. The pages themselves are scattered across various corners of this website. My point is that not every website needs to be a blog. A website can just as well be a collection of pages, arranged in whatever way makes sense to you. The blog is merely one possible organising principle, not a requirement. If your goal is simply to share your thoughts on your own web space, worrying about blogging and tooling can easily become counterproductive. Just write your posts first, in plain HTML if need be.

If you truly dislike writing HTML, that is fine too. Write in Markdown, AsciiDoc or whatever plain text format you find pleasant and convert it to HTML using Pandoc or a similar tool. Yes, I am slightly undermining my own point here but I think a little bit of tooling to make your writing process enjoyable is reasonable. Tooling should exist to reduce friction, not to become the main ceremony. Personally, I write all my posts directly in HTML. I use Emacs, which provides a number of convenient key sequences and functions that make writing HTML very comfortable. For example, it takes only a few keystrokes in Emacs to wrap text in a <blockquote> element, insert a code block or close any open tags. But that is just me. I enjoy writing HTML documents in Emacs. If you do not, it is easy to run your Markdown files through a converter and publish the result with only a little extra tooling overhead.

It is easy to spend days and weeks polishing a website setup, selecting the perfect generator, theme and deployment pipeline, only to end up with a beautifully engineered website whose sole content is a single 'hello world' post. That might not be very useful to you or anyone else, unless setting up the pipeline itself was your goal. By contrast, a scrappy website made up of standalone HTML pages might be useful to you as well as others. You can refer back to it months later. You can send someone a link. You can build on it gradually. Even if you never turn it into a blog, never add RSS or never add any tooling, it still fulfils its most important purpose: it exists and it says something you wanted to say.

So to summarise my post here: Create the website. Publish something. Do it in the simplest way that lets you get your words onto the page and onto the web. Once you have content that you care about, tooling can follow. Your thoughts, your ideas, your personality and quirks are the essence of your website. Everything else is optional.

Read on website | #web | #technology

]]>
A4 Paper Stories https://susam.net/a4-paper-stories.html a4pps Tue, 06 Jan 2026 00:00:00 +0000 I sometimes resort to a rather common measuring technique that is neither fast, nor accurate, nor recommended by any standards body and yet it hasn't failed me whenever I have had to use it. I will describe it here, though calling it a technique might be overselling it. Please do not use it for installing kitchen cabinets or anything that will stare back at you every day for the next ten years. It involves one tool: a sheet of A4 paper.

Like most sensible people with a reasonable sense of priorities, I do not carry a ruler with me wherever I go. Nevertheless, I often find myself needing to measure something at short notice, usually in situations where a certain amount of inaccuracy is entirely forgivable. When I cannot easily fetch a ruler, I end up doing what many people do and reach for the next best thing, which for me is a sheet of A4 paper, available in abundant supply where I live.

From photocopying night-sky charts to serving as a scratch pad for working through mathematical proofs, A4 paper has been a trusted companion since my childhood days. I use it often. If I am carrying a bag, there is almost always some A4 paper inside: perhaps a printed research paper or a mathematical problem I have worked on recently and need to chew on a bit more during my next train ride.

Dimensions

The dimensions of A4 paper are the solution to a simple, elegant problem. Imagine designing a sheet of paper such that, when you cut it in half parallel to its shorter side, both halves have exactly the same aspect ratio as the original. In other words, if the shorter side has length \( x \) and the longer side has length \( y , \) then \[ \frac{y}{x} = \frac{x}{y / 2} \] which gives us \[ \frac{y}{x} = \sqrt{2}. \] Test it out. Suppose we have \( y/x = \sqrt{2}. \) We cut the paper in half parallel to the shorter side to get two halves, each with shorter side \( x' = y / 2 = x \sqrt{2} / 2 = x / \sqrt{2} \) and longer side \( y' = x. \) Then indeed \[ \frac{y'}{x'} = \frac{x}{x / \sqrt{2}} = \sqrt{2}. \] In fact, we can keep cutting the halves like this and we'll keep getting even smaller sheets with the aspect ratio \( \sqrt{2} \) intact. To summarise, when a sheet of paper has the aspect ratio \( \sqrt{2}, \) bisecting it parallel to the shorter side leaves us with two halves that preserve the aspect ratio. A4 paper has this property.

But what are the exact dimensions of A4 and why is it called A4? What does 4 mean here? Like most good answers, this one too begins by considering the numbers \( 0 \) and \( 1. \) Let me elaborate.

Let us say we want to make a sheet of paper that is \( 1 \, \mathrm{m}^2 \) in area and has the aspect-ratio-preserving property that we just discussed. What should its dimensions be? We want \[ xy = 1 \, \mathrm{m}^2 \] subject to the condition \[ \frac{y}{x} = \sqrt{2}. \] Solving these two equations gives us \[ x^2 = \frac{1}{\sqrt{2}} \, \mathrm{m}^2 \] from which we obtain \[ x = \frac{1}{\sqrt[4]{2}} \, \mathrm{m}, \quad y = \sqrt[4]{2} \, \mathrm{m}. \] Up to three decimal places, this amounts to \[ x = 0.841 \, \mathrm{m}, \quad y = 1.189 \, \mathrm{m}. \] These are the dimensions of A0 paper. They are precisely the dimensions specified by the ISO standard for it. It is quite large to scribble mathematical solutions on, unless your goal is to make a spectacle of yourself and cause your friends and family to reassess your sanity. So we need something smaller that allows us to work in peace, without inviting commentary or concerns from passersby. We take the A0 paper of size \[ 84.1 \, \mathrm{cm} \times 118.9 \, \mathrm{cm} \] and bisect it to get A1 paper of size \[ 59.4 \, \mathrm{cm} \times 84.1 \, \mathrm{cm}. \] Then we bisect it again to get A2 paper with dimensions \[ 42.0 \, \mathrm{cm} \times 59.4 \, \mathrm{cm}. \] And once again to get A3 paper with dimensions \[ 29.7 \, \mathrm{cm} \times 42.0 \, \mathrm{cm}. \] And then once again to get A4 paper with dimensions \[ 21.0 \, \mathrm{cm} \times 29.7 \, \mathrm{cm}. \] There we have it. The dimensions of A4 paper. These numbers are etched in my memory like the multiplication table of \( 1. \) We can keep going further to get A5, A6, etc. We could, in theory, go all the way up to A\( \infty. \) Hold on, I think I hear someone heckle. What's that? Oh, we can't go all the way to A\( \infty? \) Something about atoms, was it? Hmm. Security! Where's security? Ah yes, thank you, sir. Please show this gentleman out, would you?
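The bisection cascade from A0 down to A4 can be checked with a few lines of Python. This is a sketch of the halving process described above; the helper name is my own:

```python
import math

def halve(x, y):
    """Bisect an (x, y) sheet parallel to its shorter side:
    the halves have shorter side y/2 and longer side x."""
    return y / 2, x

# A0 dimensions in metres: 2**(-1/4) by 2**(1/4)
x, y = 2 ** -0.25, 2 ** 0.25
for _ in range(4):   # A0 -> A1 -> A2 -> A3 -> A4
    x, y = halve(x, y)
```

At every step of the loop, the ratio of the longer side to the shorter side remains the square root of two, and after four halvings we land on the familiar A4 dimensions.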

Sorry for the interruption, ladies and gentlemen. Phew! That fellow! Atoms? Honestly. We, the mathematically inclined, are not particularly concerned with such trivial limitations. We drink our tea from doughnuts. We are not going to let the size of atoms dictate matters, now are we?

So I was saying that we can bisect our paper like this and go all the way to A\( \infty. \) That reminds me. Last night I was at a bar in Hoxton and I saw an infinite number of mathematicians walk in. The first one asked, "Sorry to bother you, but would it be possible to have a sheet of A0 paper? I just need something to scribble a few equations on." The second one asked, "If you happen to have one spare, could I please have an A1 sheet?" The third one said, "An A2 would be perfectly fine for me, thank you." Before the fourth one could ask, the bartender disappeared into the back for a moment and emerged with two sheets of A0 paper and said, "Right. That should do it. Do know your limits and split these between yourselves."

In general, a sheet of A\( n \) paper has the dimensions \[ 2^{-(2n + 1)/4} \, \mathrm{m} \times 2^{-(2n - 1)/4} \, \mathrm{m}. \] If we plug in \( n = 4, \) we indeed get the dimensions of A4 paper: \[ 0.210 \, \mathrm{m} \times 0.297 \, \mathrm{m}. \]
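The closed-form expression above can also be evaluated directly. Here is a small sketch in Python, with a function name of my own choosing; it reports dimensions rounded to whole millimetres:

```python
def a_series_mm(n):
    """Width and height of A(n) paper in millimetres, computed
    from the closed form 2**(-(2n + 1)/4) m by 2**(-(2n - 1)/4) m."""
    width = 2 ** (-(2 * n + 1) / 4) * 1000
    height = 2 ** (-(2 * n - 1) / 4) * 1000
    return round(width), round(height)
```

Plugging in n = 4 recovers 210 mm by 297 mm, and n = 0 recovers the A0 dimensions of 841 mm by 1189 mm.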

Measuring Stuff

Let us now return to the business of measuring things. As I mentioned earlier, the dimensions of A4 are lodged firmly into my memory. Getting hold of a sheet of A4 paper is rarely a challenge where I live. I have accumulated a number of A4 paper stories over the years. Let me share a recent one. I was hanging out with a few folks of the nerd variety one afternoon when the conversation drifted, as it sometimes does, to a nearby computer monitor that happened to be turned off. At some point, someone confidently declared that the screen in front of us was 27 inches. That sounded plausible but we wanted to confirm it. So I reached for my trusted measuring instrument: an A4 sheet of paper. What followed was neither fast, nor especially precise, but it was more than adequate for settling the matter at hand.

I lined up the longer edge of the A4 sheet with the width of the monitor. One length. Then I repositioned it and measured a second length. The screen was still sticking out slightly at the end. By eye, drawing on an entirely unjustified confidence built from years of measuring things that never needed measuring, I estimated the remaining bit at about \( 1 \, \mathrm{cm}. \) That gives us a width of \[ 29.7 \, \mathrm{cm} + 29.7 \, \mathrm{cm} + 1.0 \, \mathrm{cm} = 60.4 \, \mathrm{cm}. \] Let us round that down to \( 60 \, \mathrm{cm}. \) For the height, I switched to the shorter edge. One full \( 21 \, \mathrm{cm} \) fit easily. For the remainder, I folded the paper parallel to the shorter side, producing an A5-sized rectangle with dimensions \( 14.8 \, \mathrm{cm} \times 21.0 \, \mathrm{cm}. \) Using the \( 14.8 \, \mathrm{cm} \) edge, I discovered that it overshot the top of the screen slightly. Again, by eye, I estimated the excess at around \( 2 \, \mathrm{cm}. \) That gives us \[ 21.0 \, \mathrm{cm} + 14.8 \, \mathrm{cm} - 2.0 \, \mathrm{cm} = 33.8 \, \mathrm{cm}. \] Let us round this up to \( 34 \, \mathrm{cm}. \) The ratio \( 60 / 34 \approx 1.76 \) is quite close to \( 16/9, \) a popular aspect ratio of modern displays. At this point the measurements were looking good. So far, the paper had not embarrassed itself. Invoking the wisdom of the Pythagoreans, we can now estimate the diagonal as \[ \sqrt{(60 \, \mathrm{cm})^2 + (34 \, \mathrm{cm})^2} \approx 69.0 \, \mathrm{cm}. \] Finally, there is the small matter of units. One inch is \( 2.54 \, \mathrm{cm}, \) another figure that has embedded itself in my head. Dividing \( 69.0 \) by \( 2.54 \) gives us roughly \( 27.2 \, \mathrm{in}. \) So yes. It was indeed a \( 27 \)-inch display. My elaborate exercise in showing off my A4 paper skills was now complete. Nobody said anything. A few people looked away in silence. I assumed they were reflecting. I am sure they were impressed deep down. Or perhaps... no, no. They were definitely impressed. I am sure.
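The Pythagorean step of the exercise is a one-liner. As a sketch, with a hypothetical function name of my own:

```python
import math

def diagonal_inches(width_cm, height_cm):
    """Diagonal of a display, in inches, from its width and
    height measured in centimetres (1 in = 2.54 cm)."""
    return math.hypot(width_cm, height_cm) / 2.54
```

Feeding in the measured 60 cm by 34 cm confirms a diagonal of about 27 inches.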

Hold on. I think I hear another heckle. What is that? There are mobile phone apps that can measure things now? Really? Right. Security. Where's security?

Read on website | #absurd | #mathematics

]]>
Circular Recursive Negating Acronyms https://susam.net/circular-recursive-negating-acronyms.html crnax Mon, 05 Jan 2026 00:00:00 +0000 One of my favourite acronyms from the world of computing and technology is XINU. It stands for 'XINU Is Not Unix'. The delightful thing about this acronym is that XINU is also UNIX spelled backwards.

For a given word W, a recursive acronym that both negates W and spells its reverse is possible when W has the form W = '?IN?' where each '?' denotes a letter. Some fictitious examples make this clearer:

  • LINA Is Not ANIL.
  • TINK Is Not KNIT.
  • OINO Is Not ONIO.
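As a toy illustration, here is a hypothetical Python helper that builds such an acronym for any four-letter word whose middle two letters are 'IN':

```python
def negating_acronym(word):
    """If `word` is a four-letter word of the form ?IN?, return
    the recursive acronym that negates it and spells its reverse;
    otherwise return None."""
    w = word.upper()
    if len(w) == 4 and w[1:3] == "IN":
        return f"{w} Is Not {w[::-1]}"
    return None
```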

Words of the form '?N?' also work if we are happy to contract the word 'is' in the acronym. In fact, in this case we can even obtain circular recursive acronyms:

  • ANI's Not INA. INA's Not ANI.
  • ONE's Not ENO. ENO's Not ONE.

In each pair, the two acronyms negate each other, making them circular while also being reverses of one another. Such acronyms could serve as amusing names for expressing friendly banter between rival projects.

Further, if we make the acronyms refer to themselves, we get paradoxes too:

  • ANA's Not ANA
  • XNX's Not XNX

Read on website | #miscellaneous

]]>