pbech32 v0.1.0

March 8, 2026

I just released the first version of pbech32, a Rust library for encoding and decoding Bech32 strings.

Bech32 is a compact, user-friendly base-32 encoding format that includes a human-readable namespace and a checksum.

Library Features

Examples

Decode from string:

use pbech32::Bech32;

let s = "a1qypqxpq9mqr2hj"; // bech32m-encoded string
let got: Bech32 = s.parse()?; // decode string

assert_eq!(got.hrp.to_string(), "a"); // check human-readable part
assert_eq!(got.data, vec![1, 2, 3, 4, 5]); // check data

Encode to string:

use pbech32::{Bech32, Hrp, Scheme};

let scheme = Scheme::Bech32m; // checksum scheme
let hrp: Hrp = "a".parse()?; // human-readable part
let data = vec![1, 2, 3, 4, 5]; // data
let got = Bech32 { scheme, hrp, data }.to_string(); // encode as string

assert_eq!(got, "a1qypqxpq9mqr2hj"); // check result

Encode to a writer:

use std::io::Write;
use pbech32::{Encoder, Hrp, Scheme};

let mut vec: Vec<u8> = Vec::new(); // output vector
let hrp: Hrp = "hello".parse()?; // human readable part

let mut encoder = Encoder::new(&mut vec, Scheme::Bech32m, hrp)?; // create encoder
encoder.write_all(b"folks")?; // write data
encoder.flush()?; // flush encoder (REQUIRED)

let got = str::from_utf8(vec.as_ref())?; // convert output vector to string
assert_eq!(got, "hello1vehkc6mn27xpct"); // check result

New Year's Primality Testing by Hand

January 2, 2026

Happy New Year!

The number 2026 has two prime factors: 2 and 1013. I wondered “is 1013 prime?” and, for fun, “can I solve this in my head?” (e.g. no calculator, computer, or pen and paper).

If 1013 is not prime, then it is composite and must have at least one odd prime factor ≤ ⌊√1013⌋.

(We know the factor – if it exists – is odd because factors of odd composites are always odd, and we know it is prime because of the fundamental theorem of arithmetic).

So our approach will be to check odd primes from 3 to √1013 to see if any divide 1013.

Unfortunately we don’t know √1013. Fortunately 1013 is close to 1024, and 1024 is an even power of 2. So let’s use 1024 to approximate √1013:

  1. 1013 < 1024, so √1013 < √1024
  2. √1024 = √(2^10) = 2^5 = 32
  3. Therefore √1013 < 32

So we need to test odd primes in the range [3,31] to see if any divide 1013.

Before that, though, we prune the list of potential factors with the divisibility rules. We remove:

  • 3, because the sum of the digits of 1013 isn’t divisible by 3: 1+0+1+3=5 and 3∤5
  • 5, because the last digit of 1013 isn’t 0 or 5.
  • 7, because 5 times the last digit (3) plus the rest (101) is not a multiple of 7: 5*3+101=116, 5*6+11=41, and 7∤41.
  • …and so on for 11, 13, 17, 19, 23, and 29.

(I didn’t remember the divisibility rules for the range [11,29], so I checked if any of the primes divide 1013 instead).
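The rule for 7 is easy to mechanize. Here is a short Python sketch of it (the function name and the two-digit stopping threshold are my own choices):

```python
def divisible_by_7(n: int) -> bool:
    # repeatedly replace n with 5 * (last digit) + (rest); this preserves
    # divisibility by 7 because 5 * (10a + b) = 50a + 5b ≡ a + 5b (mod 7)
    while n > 69:
        n = 5 * (n % 10) + n // 10
    return n % 7 == 0

assert not divisible_by_7(1013)   # matches the 1013 -> 116 -> 41 chain above
assert divisible_by_7(203)        # 203 = 7 * 29
```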

31 is the only remaining potential factor after pruning. To check it, we can either use the Euclidean algorithm to see if gcd(31, 1013) != 1 or do some trial arithmetic. I chose the latter:

  1. 31 = (30 + 1)
  2. (30 + 1) * 32 = 960 + 32 = 992
  3. (30 + 1) * 33 = 990 + 33 = 1023
  4. The closest multiples of 31 to 1013 are 992 and 1023, so 31∤1013.
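The Euclidean algorithm alternative mentioned above is also short. A minimal Python sketch:

```python
def gcd(a: int, b: int) -> int:
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b:
        a, b = b, a % b
    return a

assert gcd(31, 1013) == 1   # 31 and 1013 share no common factor, so 31∤1013
```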

1013 does not have any odd prime factors in the range [3,√1013], so it must be prime.
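The whole manual procedure is just trial division. A Python sketch (it tests every odd candidate rather than only primes, since pruning composites is purely a by-hand shortcut; the function name is mine):

```python
import math

def is_prime_by_trial_division(n: int) -> bool:
    """Primality check for n > 2 by trial division up to floor(sqrt(n))."""
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):   # odd candidates 3, 5, ..., 31 for n = 1013
        if n % d == 0:
            return False
    return True

assert is_prime_by_trial_division(1013)        # 1013 is prime
assert not is_prime_by_trial_division(1011)    # 1011 = 3 * 337
```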

Let’s check our work with SymPy:

>>> import sympy
>>> sympy.ntheory.primetest.isprime(1013)
True


Success!


polycvss v0.2.0

October 4, 2025

I just released polycvss version 0.2.0.

polycvss is a Rust library to parse and score CVSS vector strings.

Features:

  • CVSS v2, CVSS v3, and CVSS v4 support.
  • Version-agnostic parsing and scoring API.
  • Memory efficient: Vectors are 8 bytes. Scores and severities are 1 byte.
  • No dependencies by default except the standard library.
  • Optional serde integration via the serde build feature.
  • Extensive tests: Tested against thousands of vectors and scores from the NVD CVSS calculators.

Here is an example tool which parses the first command-line argument as a CVSS vector string, then prints the score and severity:

use polycvss::{Err, Score, Severity, Vector};

fn main() -> Result<(), Err> {
  let args: Vec<String> = std::env::args().collect(); // get cli args

  if args.len() == 2 {
    let vec: Vector = args[1].parse()?; // parse string
    let score = Score::from(vec); // get score
    let severity = Severity::from(score); // get severity
    println!("{score} {severity}"); // print score and severity
  } else {
    let name = args.first().map_or("app", |s| s); // get app name
    eprintln!("Usage: {name} [VECTOR]"); // print usage
  }

  Ok(())
}


Here is the example tool output for a CVSS v2 vector string, a CVSS v3 vector string, and a CVSS v4 vector string:

# test with cvss v2 vector string
$ cvss-score "AV:A/AC:H/Au:N/C:C/I:C/A:C"
6.8 MEDIUM

# test with cvss v3 vector string
$ cvss-score "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
9.8 CRITICAL

# test with cvss v4 vector string
$ cvss-score "CVSS:4.0/AV:L/AC:H/AT:N/PR:N/UI:P/VC:L/VI:L/VA:L/SC:H/SI:H/SA:H"
5.2 MEDIUM


This example tool is included in the Git repository as src/bin/cvss-score.rs.

Updates

Armbian on Odroid N2L

June 8, 2025

Last week I installed Armbian on an Odroid N2L. The installation steps, installation results, and fixes for some problems are documented below.

Installation

  1. Download and import the signing key (fingerprint DF00FAF1C577104B50BF1D0093D6889F9F0E78D5):
    wget -O- https://apt.armbian.com/armbian.key | gpg --import -
  2. Download the current “Debian 12 (Bookworm)” image and the detached signature from the “Minimal/IOT images” section of the Armbian Odroid N2L page.
  3. Verify the signature:
    gpg --verify Armbian_community_25.8.0-trunk.8_Odroidn2l_bookworm_current_6.12.28_minimal.img.xz{.asc,}
  4. Uncompress the image:
    unxz Armbian_community_25.8.0-trunk.8_Odroidn2l_bookworm_current_6.12.28_minimal.img.xz
  5. Flash the uncompressed image to a microSD card:
    sudo dd if=Armbian_community_25.8.0-trunk.8_Odroidn2l_bookworm_current_6.12.28_minimal.img of=/dev/sda bs=1M status=progress
  6. Mount the second partition of the microSD card on /mnt/tmp:
    sudo mount /dev/sda2 /mnt/tmp
  7. Use the instructions and template from Automatic first boot configuration to populate /mnt/tmp/root/.not_logged_in_yet. My populated autoconfig is here, but it did not work as expected; see below.
  8. Unmount the second partition of the microSD card.
  9. Insert the microSD card into the Odroid N2L and power it on.

Installation Results

Worked as expected:

  • Successfully booted.
  • Successfully connected to WiFi on first boot.

Did not work as expected:

  • Did not connect to WiFi on subsequent boots.
  • Did not set the root password. Instead the root password was 1234.
  • Did not set the user password.
  • Did not set the user SSH key.

Fixes

To correct these problems I connected a keyboard and monitor and did the following:

  1. Logged in as root with the password 1234.
  2. Changed the root password and the user password.
  3. Edited /etc/netplan/20-eth-fixed-mac.yaml and fixed the errors. The corrected version is below.
  4. Ran netplan apply to apply the corrected network configuration.
  5. Rebooted to confirm that networking was working as expected.

Here is the corrected /etc/netplan/20-eth-fixed-mac.yaml:

network:
  version: 2


After fixing networking, I did the following:

  1. Copied my SSH key.
  2. Edited /etc/ssh/sshd_config to disable root logins and password logins.
  3. Ran apt-get update && apt-get upgrade.
  4. Installed unattended-upgrades.
  5. Rebooted to pick up the latest updates.

Odroid N2L running Armbian.

Nginx Caching and Security Headers

June 8, 2025

Yesterday I ported the caching and security headers from the Apache configuration for the public site to the Nginx configuration for the Tor mirror.

The caching headers are particularly helpful for the Tor mirror, since Tor’s added latency makes repeated requests for unchanged assets expensive.

The updated Nginx configuration and additional documentation are here: Site Backend - Onion Service.

Old OpenVPN Article

June 8, 2025

In 2006 I wrote an article about OpenVPN for the now-defunct Linux Magazine. This week I found a copy of the 2006 article on the Wayback Machine:

Casting Your Net with OpenVPN (Wayback Machine)

In 2025 you should prefer WireGuard over OpenVPN because WireGuard is faster, more secure, and easier to use.

Fun factoid: The AWS Client VPN is just an AWS-branded build of the OpenVPN client.

Uninstall Facebook

June 7, 2025

You should immediately remove the Facebook and Instagram apps from your Android devices:

We disclose a novel tracking method by Meta and Yandex potentially affecting billions of Android users. We found that native Android apps—including Facebook, Instagram, and several Yandex apps including Maps and Browser—silently listen on fixed local ports for tracking purposes.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android’s permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users’ web activity. (emphasis mine)

Source

Ars Technica also has an excellent summary.

In English: If you have the Facebook app or Instagram app installed on your Android device, then Meta may have secretly collected your identity and your browsing history.

This is true even if you don’t have a Facebook account. It’s true even if you don’t use the Facebook app. It’s true even if you took steps to hide your browsing history like clearing cookies or using a private browser window.

On June 3rd, Meta claimed that the code responsible had “been almost completely removed”; this is weasel wording which actually means “the code has not been removed”.

Even if Meta actually did remove the code from their apps, there are still several problems:

  1. Meta has an atrocious privacy record. It would be foolish to take Meta at their word and they have a strong incentive to try this again or something similar in the future.
  2. Removing code does not address the information Meta has already collected. This information could be leaked in a data breach or subpoenaed by law enforcement.
  3. Malicious or trojaned apps could listen on the same local ports and collect the same information. The Local Mess researchers demonstrated this with a proof-of-concept app.

Additional privacy recommendations:

  1. Prefer web sites over apps. Many services use deceptive patterns to trick you into using an app instead of a web site. They do this because an app can collect more information about you than a web site.
  2. Remove unused and rarely used apps.
  3. Stop using Google Search. I recommend DuckDuckGo.
  4. Stop using Google Chrome. I recommend Firefox with uBlock Origin and some configuration changes. Some folks swear by DuckDuckGo Browser, but I haven’t used it myself. See also: The case for ditching Chrome. If you really do need Chrome or Edge, then at least install uBlock Origin Lite.
  5. Switch from Microsoft Windows to Linux. I recommend Ubuntu for new users. I use Debian. If you really do need Windows, then at least disable Windows telemetry.
  6. Switch from text messaging and WhatsApp (owned by Meta) to Signal.
  7. Set up Pi-hole on your home network. It has an easy-to-use web interface and can help block ads and tracking on mobile devices and “smart” TVs.
  8. Consider Tor Browser or Tails if you need more protection and are willing to accept some tradeoffs.

Further reading: Surveillance Self-Defense

Tor Site Mirror

May 18, 2025

This site is now mirrored on the Tor network at the following onion address:

http://pablotronfils76sk6pwvyoosvfjbhxe3sn4c654e4na4szidbnbqdyd.onion/

See Site Backend - Onion Service for more details.

Update (2025-06-02): New vanity .onion address.

ML-KEM vs. DH and ECDH

March 31, 2025

A couple of months ago I wrote an explanation of ML-KEM in the discussion thread of an Ars Technica article. Folks seemed to like my explanation and the Ars staff marked it as a “Staff Pick”.

Below is the explanation that I wrote with some minor corrections…


Close friend of mine from college works as a cryptographer for some government agency (he jokes he can’t tell me which one it actually is…he went straight there from his Stanford PhD) and he tried to explain ML-KEM to me a couple weeks ago in “simple” language and my brain gave up after about 2 minutes.

This might help: Kyber - How does it work?

One thing that is different about ML-KEM compared to finite-field Diffie-Hellman (DH) and Elliptic-Curve Diffie-Hellman (ECDH) is that the former is a Key Encapsulation Mechanism (KEM) and the latter are key-agreement protocols.

In DH and ECDH:

  1. Alice generates a keypair which consists of a public key and a private key.
  2. Alice keeps the private key to herself and sends the public key to Bob.
  3. Bob generates a keypair which consists of a public key and a private key.
  4. Bob keeps the private key to himself and sends the public key to Alice.
  5. Alice combines her private key from step #1 with Bob’s public key from step #4 and produces a shared value.
  6. Bob combines his private key from step #3 with Alice’s public key from step #2 and produces the same shared value that Alice produced in step #5.
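The six steps above can be sketched in Python with deliberately tiny, insecure demo parameters (p = 23 and g = 5 are classic textbook values, not real DH parameters):

```python
import secrets

p, g = 23, 5   # tiny, insecure textbook parameters; real FFDH uses a ~2048-bit prime

a = secrets.randbelow(p - 2) + 1   # step 1: Alice's private key
A = pow(g, a, p)                   # step 2: Alice's public key, sent to Bob
b = secrets.randbelow(p - 2) + 1   # step 3: Bob's private key
B = pow(g, b, p)                   # step 4: Bob's public key, sent to Alice

shared_alice = pow(B, a, p)        # step 5: Alice combines her private key with Bob's public key
shared_bob = pow(A, b, p)          # step 6: Bob combines his private key with Alice's public key

assert shared_alice == shared_bob  # both sides derive the same shared value
```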

In ML-KEM things work a bit differently:

  1. The first party (Alice) generates a keypair which consists of an encapsulation key (analogous to a public key in DH) and a decapsulation key (analogous to a private key in DH).
  2. Alice sends the encapsulation key to the second party (Bob).
  3. Bob generates a random value, which is hashed with a hash of Alice’s encapsulation key from step #2 to generate a shared value (32 bytes in ML-KEM).
  4. Bob uses Alice’s encapsulation key from step #2 to encapsulate the random value, producing a ciphertext.
  5. Bob sends the ciphertext from step #4 back to Alice.
  6. Alice uses the decapsulation key from step #1 to decapsulate the random value generated by Bob in step #3 from the ciphertext sent by Bob in step #5.
  7. Alice hashes the random value from step #6 with the hash of the encapsulation key to generate the shared value.
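To show only who computes what, here is a dataflow sketch of the seven steps in Python. The encapsulate/decapsulate steps are stubbed out entirely (the “ciphertext” is just the random value in the clear), so this is not remotely secure; it only illustrates that both sides end up hashing the same random value with the same key hash:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

# steps 1-2: Alice's encapsulation key (stubbed: real ML-KEM keys are lattice objects)
ek = secrets.token_bytes(32)

# steps 3-5: Bob picks a random value, derives the shared value, "encapsulates"
m = secrets.token_bytes(32)
shared_bob = H(m + H(ek))       # shared value = hash(random value, hash of ek)
ciphertext = m                  # stub! real encapsulation hides m under ek

# steps 6-7: Alice "decapsulates" and derives the same shared value
m_recovered = ciphertext        # stub! real decapsulation uses the decapsulation key
shared_alice = H(m_recovered + H(ek))

assert shared_alice == shared_bob
```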

So in DH and ECDH, both parties contribute equally to the shared value. In ML-KEM, one party (Bob) generates a random value which is hashed with a hash of the other party’s (Alice) encapsulation key to produce the shared value.

Another difference between DH/ECDH and ML-KEM is this: the shared value produced by DH and ECDH cannot be safely used as the secret key for a symmetric cipher (e.g. AES) because the shared value is biased (some values are much more likely than others). To safely derive a secret key (or keys) for use with a symmetric cipher, the shared value produced by DH and ECDH needs to be passed through a key derivation function (KDF) like HKDF.

The shared value derived in ML-KEM is uniformly random and can be used directly as the key for a symmetric cipher (FIPS 203, section 3.1). In practice I expect the ML-KEM shared value to be passed to a KDF anyway, because many protocols (for example, TLS) need to derive several keys in order to establish a secure channel.

DH, ECDH, and ML-KEM all rely on “hard” problems based on trapdoor functions.

“Hard” in this context means “believed to be computationally infeasible to solve without an implausible amount of computational resources or time”.

A trapdoor function is a function that is easy to compute in one direction but hard to compute in the other direction without some additional information. For example, it is easy to calculate 61 * 71 and hard to calculate the integer factors of 4331. However, if I tell you that one of the factors of 4331 is 61, then it is easy for you to calculate the other factor: 4331 / 61 = 71. This is the integer factorization problem, and it’s the basis for RSA.

The hard problem that Finite-Field Diffie-Hellman (FFDH) key exchange is based on is the discrete logarithm problem, which is this:

Given b = g^s mod p, it is hard to calculate s, where:

  • s is a large randomly chosen positive integer that is secret
  • g is a fixed positive integer that is publicly known and carefully chosen by cryptographers
  • p is a fixed large prime number that is publicly known and carefully chosen by cryptographers

The hard problem that Elliptic-Curve Diffie-Hellman (ECDH) key exchange is based on is known as the elliptic curve discrete logarithm problem (ECDLP), which is this:

Given b = sG, it is hard to calculate s, where:

  • s is a large randomly chosen positive integer that is secret, and
  • G is a fixed, publicly known point on an elliptic curve over a finite field. The point, elliptic curve, and field are all carefully chosen by cryptographers.

The hard problem that ML-KEM is based on is the Module Learning With Errors (MLWE) problem, which is derived from the Learning With Errors (LWE) problem. A simplified version of the LWE problem is this:

Given t = As + e, it is hard to calculate s, where:

  • t is a public vector with integer elements
  • A is a public square matrix with elements that are random integer values
  • s is a secret vector with elements that are small integer values (the secret)
  • e is a secret vector with elements that are small integer values (the error)

Note that if you remove e from the equation above, then solving for s becomes very easy:

  1. Calculate A^-1 using Gaussian elimination.
  2. Multiply by A^-1 from the left: A^-1 t = A^-1 A s
  3. Solution: s = A^-1 t

The important bit here is that the error vector is critical to making the problem hard.
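Here is a tiny worked example in Python (a 2x2 toy instance of my own choosing, over the rationals rather than a modulus): without the error vector, inverting A recovers s exactly; with even a small error, the exact solve lands nowhere near s:

```python
from fractions import Fraction

# toy 2x2 LWE instance (over the rationals, no modulus, for readability)
A = [[3, 1], [5, 2]]   # public matrix (det = 1)
s = [2, 1]             # secret vector
e = [1, -1]            # small error vector

t_noerr = [3*2 + 1*1, 5*2 + 2*1]   # A s     = [7, 12]
t       = [7 + 1, 12 - 1]          # A s + e = [8, 11]

# 2x2 inverse of A (what Gaussian elimination computes)
det = 3*2 - 1*5   # = 1
inv = [[Fraction(2, det), Fraction(-1, det)],
       [Fraction(-5, det), Fraction(3, det)]]

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

assert apply(inv, t_noerr) == s   # without e: exact recovery of the secret
assert apply(inv, t) != s         # with e: the exact solve misses badly ([5, -7])
```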

In the Module Learning With Errors (MLWE) problem that is used by ML-KEM, the integers from the simplified LWE explanation above are replaced by polynomials with 256 coefficients.

(This is explained cryptically in FIPS 203, section 3.2)

Unfortunately the large polynomials make it difficult to visualize ML-KEM. There is a simplified implementation of Kyber called “Baby Kyber” in the Kyber - How does it work? article linked above that is easier to understand.

One problem with using 256-coefficient polynomials is multiplication. Adding polynomials is done coefficient-wise, so adding two polynomials with 256 integer coefficients requires 256 integer additions.

Polynomial multiplication, on the other hand, requires multiplying every coefficient by every other coefficient. This means that multiplying two polynomials with 256 integer coefficients requires 256 * 256 = 65536 integer multiplications.
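A schoolbook polynomial multiply makes the cost obvious (this sketch ignores the reduction mod x^256 + 1 that ML-KEM also performs):

```python
def poly_mul(a, b):
    """Schoolbook polynomial multiplication: len(a) * len(b) integer multiplications."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
assert poly_mul([1, 2], [3, 4]) == [3, 10, 8]
# two 256-coefficient polynomials => 256 * 256 = 65536 multiplications
```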

To work around this, ML-KEM uses a trick called the Number-Theoretic Transform (NTT, FIPS 203, section 4.3). This allows polynomial multiplication to be done (almost) coefficient-wise and drastically reduces the number of integer multiplications needed.

Using polynomials instead of the large, multi-word integers used by DH and ECDH might seem confusing at first, but it actually simplifies much of the implementation: it enables SIMD optimizations and eliminates carry propagation.

Another neat trick used by ML-KEM is compressing the A matrix in the encapsulation key by including a 32-byte seed (rho) instead of the actual polynomial coefficients. This seed value is expanded with SHAKE128 to pseudorandomly generate the coefficients for the polynomial elements of the A matrix (FIPS 203, SampleNTT() in section 4.2.2 and section 5.1).
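The expansion step can be sketched with Python’s hashlib (the seed value, index bytes, and output length here are illustrative; see SampleNTT in FIPS 203 for the exact construction):

```python
import hashlib

rho = bytes(32)   # stand-in 32-byte seed; the real rho comes from key generation
j, i = 0, 0       # indices of the matrix entry whose coefficients are being sampled

xof = hashlib.shake_128(rho + bytes([j, i]))
stream = xof.digest(504)   # pseudorandom byte stream; SampleNTT rejection-samples
                           # 12-bit coefficients (mod q = 3329) from bytes like these
assert len(stream) == 504
```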

There are a lot of other details but hopefully this gives you enough to start to get your head around ML-KEM.

If you want to see some source code, here is a self-contained, single-file, dependency-free C11 implementation of the FIPS 203 initial public draft (IPD) that I wrote last year. It includes test vectors, SIMD acceleration, and the necessary bits of FIPS 202 (SHA3-256, SHA3-512, SHAKE128, and SHAKE256), but it has not been updated to reflect the changes in the final version of FIPS 203. I mainly wrote it for fun, to learn, and to provide public comments to NIST during the standardization process.

JupyterLab Reverse Proxying in Apache

March 30, 2025

I’ve been using the SageMath kernel in JupyterLab quite a bit lately. I wanted to reverse proxy through an Apache vhost to JupyterLab. The documentation to do this is surprisingly elusive.

Here is a working example:

<VirtualHost *:443>
  ServerName sage.example.com

  # proxy http and websocket to http://localhost:1980/
  # note: in apache 2.4.47+, use upgrade=websocket to proxy websocket
  ProxyPass "/" http://localhost:1980/ upgrade=websocket
  ProxyPassReverse "/" http://localhost:1980/ upgrade=websocket

  # preserve Host header in proxied requests
  ProxyPreserveHost on

  # ... common vhost configuration elided
</VirtualHost>


Notes:

  1. Add upgrade=websocket to ProxyPass and ProxyPassReverse. Needed to proxy WebSocket connections in Apache 2.4.47+. See the Protocol Upgrade note in the mod_proxy documentation for additional details.
  2. Enable ProxyPreserveHost. This preserves the Host header in proxied requests and it is needed to prevent cryptic server-side origin errors.

The target JupyterLab instance is running in a Podman container, like so:

podman run -d -p 1980:8888 \
  -v sage:/home/sage/work \
  --restart=on-failure \
  --name sage \
  docker.io/sagemath/sagemath sage-jupyter
© 1998-2026 Paul Duncan