It’s a CLI that works with various static-site generators to publish records into your PDS - something I was just starting to think about how I wanted to tackle. This CLI is a good start, so I’m happy to try it out, file bugs, and even submit pull requests!
This post, if all is set up correctly, should be the first post published to the ATmosphere.
rustup target add x86_64-unknown-linux-gnu
cargo build --target x86_64-unknown-linux-gnu
Of course, the target’s compiler, headers, and libraries need to be available, but a typical install of Visual Studio or gcc supports common targets. On my new arm64 work Surface Laptop 7, however, I needed to compile openssl-sys for a project, and that’s when the fun began:
error: failed to run custom build command for `openssl-sys v0.9.102`
----- SNIP ---8<
cargo:rerun-if-env-changed=OPENSSL_DIR
OPENSSL_DIR unset
While there are lots of ways of getting the bits I need - like compiling the source myself or downloading and extracting files to the right locations - I like the convenience of a package manager, especially when it comes to installing updates. But I’m running Ubuntu 24.04 aarch64, so how do I install packages for amd64 e.g., libssl-dev:amd64?
While I’ve had some experience managing Debian package repositories e.g., in /etc/apt/, I’d never before had a need to install packages for other architectures. I wasn’t even entirely sure it was supported, but a quick search got me started:
sudo dpkg --add-architecture amd64
sudo apt update
But updating the package index failed: I got several 404s because amd64 packages aren’t available on http://ports.ubuntu.com/ubuntu-ports/. I was also only familiar with the one-line format and not the new format I was seeing, which `man 5 sources.list` tells me is the deb822-style format.
After reading sources.list(5) and a bit of trial and error, I ended up with the following /etc/apt/sources.list.d/ubuntu.sources:
Types: deb
URIs: http://ports.ubuntu.com/ubuntu-ports/
Suites: noble noble-updates noble-backports
Components: main universe restricted multiverse
Architectures: arm64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main universe restricted multiverse
Architectures: amd64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Types: deb
URIs: http://ports.ubuntu.com/ubuntu-ports/
Suites: noble-security
Components: main universe restricted multiverse
Architectures: arm64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble-security
Components: main universe restricted multiverse
Architectures: amd64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
After a quick and finally successful sudo apt update, I was able to install packages:
sudo apt install --yes libssl-dev:amd64
I got further with my original goal of compiling openssl-sys, but then hit an error about no linker being available. I installed gcc:amd64 and was almost able to compile my crate, but the linker failed, so I needed to specify the linkers explicitly in .cargo/config.toml:
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
[target.x86_64-unknown-linux-gnu]
linker = "x86_64-linux-gnu-gcc"
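The same overrides can be set via environment variables instead of config.toml, which can be handy in CI. The variable name encodes the target triple, uppercased, with dashes replaced by underscores:

```shell
# Equivalent to the .cargo/config.toml entries above, via environment variables.
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
export CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-linux-gnu-gcc
# then build as usual:
# cargo build --target x86_64-unknown-linux-gnu
```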
Success! …well, almost.
When I tried a normal build e.g., cargo build, I got pages full of compiler errors. It turns out that installing gcc:amd64 replaced a few components, since apt list --installed '*gcc*' showed gcc-13-aarch64-linux-gnu was now removable.
After a quick search - there’s plenty of information from people wanting to cross-compile for aarch64 on amd64 - I realized I needed to install a few packages at once to make sure all the right symlinks were created. I could’ve done this manually, but figured there were a number of executables I’d need to fix and would rather just let apt handle it:
sudo apt install --yes gcc-13 gcc-13-aarch64-linux-gnu gcc-13-x86-64-linux-gnu
sudo ln -s /usr/bin/x86_64-linux-gnu-gcc /usr/bin/x86_64-linux-gnu-gcc-13
The last line was just for convenience: apt created that symlink automatically for aarch64-linux-gnu-gcc and, originally, when I installed gcc-13-x86-64-linux-gnu standalone.
I was finally able to cross-compile openssl-sys:
cargo build
PKG_CONFIG_SYSROOT_DIR=/usr/lib/x86_64-linux-gnu cargo build --target x86_64-unknown-linux-gnu
Fortunately, cleaning up all that Rust doesn’t require WD-40 and a little elbow grease. Here are a few tips:
Use cargo-cache to view information about and clean up the cargo cache:
cargo install cargo-cache
cargo cache --info
cargo cache --help
It hasn’t been updated in a while, but seems to work quite well.
Delete your target/ directory. Generally, most of the time building is spent acquiring crates anyway, but this can typically save you a lot of space, especially if you’ve been working in a repo for a while and dependencies have changed frequently.
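A quick way to see how much space those directories consume before deleting anything; ~/src is an assumption, so point SRC_DIR wherever you keep your repos:

```shell
# Report the size of every Cargo target/ directory under a source root.
# SRC_DIR defaults to ~/src; this path is an assumption, not a convention.
SRC_DIR="${SRC_DIR:-$HOME/src}"
mkdir -p "$SRC_DIR"
find "$SRC_DIR" -maxdepth 3 -type d -name target -prune -exec du -sh {} +
# Prefer `cargo clean` in each repo over `rm -rf` so Cargo stays consistent:
# find "$SRC_DIR" -maxdepth 2 -name Cargo.toml -execdir cargo clean \;
```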
This can be even more problematic when working in Windows Subsystem for Linux (WSL), since the virtual hard disk (VHDX) used to default to a maximum size of 256GB, then later 512GB, and currently 1TB. Note, however, that this is only the maximum size of the virtual disk; depending on your physical drive’s available space, you may have less.
In my case, my old WSL Ubuntu-22.04 image recently thought it ran out of space because it only saw 256GB. Most of that was consumed by Rust and Go. You can view space usage with du from within WSL like so:
du -hd1 -t1G .
If deleting repos’ target/ directories didn’t free enough space, you can resize your VHDX from an elevated shell:
# Replace 'Ubuntu-22.04' below with the name of your distro.
# Terminate your distro (wsl --shutdown, with no arguments, stops all distros).
wsl --terminate 'Ubuntu-22.04'
# Find the path to the VHDX file for your distro.
(Get-ChildItem -Path HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss `
| Where-Object { $_.GetValue("DistributionName") -eq 'Ubuntu-22.04' } `
).GetValue("BasePath") + "\ext4.vhdx"
# Launch diskpart and resize the disk returned above as {path}.
diskpart
select vdisk file="{path}"
expand vdisk maximum="{size in MB}"
exit
Now log into your distro’s shell again and tell it that it has more disk space available:
# Find the correct device.
sudo mount -t devtmpfs none /dev 2> /dev/null
mount | grep ext4
# Copy the device name e.g., /dev/sdc and resize it using the same size in MB as above.
sudo resize2fs /dev/sdc '{size in MB}M'
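Afterwards, a quick way to confirm the filesystem picked up the new size:

```shell
# The root filesystem should now report the expanded size and free space.
df -h /
```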
When roads started getting icy, I decided to get a Zwift indoor trainer, which I (mostly) enjoy. That gave me ample time to work on more Strava badges - the harder ones, especially.
Around these parts, it’s hard to avoid climbing on your bike so, with already strong legs, I leaned into it: I started signing up each month for Strava’s 7,000m (22,966ft) challenge. I lost a bunch of weight and my legs got stronger to the point that training with friends and cycling teammates meant frequently having to wait for them at the top of hills. I don’t mind. Fun to get out with people and (usually) to take a break after longer climbs.
I even finally rode my first Bike MS: Deception Pass Classic and set a new course record on Strava; though, I plan to socialize more next year. It’s not a race, but I wanted to push myself on my first century ride and see what I could do over 100mi and over 5,000ft.
All this led up to a lot of climbing over the past year to earn some badges, but more so to prove I could:

I’m on track to complete over 100km of elevation and 5,000mi of distance by EOY; however, after that I plan to back off a little. Cycling indoors is great exercise, but it does get a little boring. I discovered audiobooks help, but it’s also mentally exhausting and I’m burning out after an average of 4-5 rides a week - most often pushing myself.
I’ll keep going, but I plan to tank any possibility of getting the 7,000m badge in January 2025. I’ll likely go for those badges in spring and summer months as I start training for the next charity ride(s).
🎉 My pull request was merged: TypeSpec is now supported out of the box in Helix! You still need to install the LSP as with any other supported language, but that’s a single, simple command. See below, or in their wiki.
I’ve been using vim and before it vi for close to 30 years. I started my programming career in a terminal and, while I often enjoy GUI apps like Visual Studio Code and am more productive when touching lots of files, I still like the efficiency of a good terminal editor.
But vim is feeling rather dated. There’s clearly a lot of cruft it has to support, and much of this seems to have carried over into neovim. While neovim supports tree-sitter and a rich ecosystem of LSP integrations, I thought I’d give another, fairly new editor a try: Helix.
Helix is a vim-like editor written in Rust that seems fairly popular with other Rust developers. Given it doesn’t have a long legacy to support, it seemed worth a try. So far, I’ve been loving it. Language support is fairly easy to add, and it supports a large number of languages out of the box, though you have to install the LSPs and perhaps other tools for some languages.
One language it’s missing is TypeSpec, which my team, the Azure SDK team, develops. It’s also being adopted by some third parties and, since we plan to generate source only from TypeSpec for the Azure SDKs for Rust, I’ll be using it a lot more. It’s also being reviewed more frequently by the Azure REST API Review Board, of which I’m a member.
The TypeSpec LSP ships in the @typespec/compiler package, so all you need to do is install that globally (or otherwise discoverable in your $PATH):
npm install -g @typespec/compiler
You’ll need to add some configuration to your languages.toml configuration file e.g., ~/.config/helix/languages.toml:
use-grammars = { only = ["typespec"] }
[[grammar]]
name = "typespec"
source = { git = "https://github.com/happenslol/tree-sitter-typespec", rev = "af7a97eea5d4c62473b29655a238d4f4e055798b" }
[[language]]
name = "typespec"
language-id = "typespec"
scope = "source.typespec"
injection-regex = "(tsp|typespec)"
file-types = ["tsp"]
roots = ["tspconfig.yaml"]
auto-format = true
comment-token = "//"
block-comment-tokens = { start = "/*", end = "*/" }
indent = { tab-width = 2, unit = " " }
language-servers = ["typespec"]
[language-server.typespec]
command = "tsp-server"
args = ["--stdio"]
Once that configuration is in place, you need to actually compile the parser. Helix has made that easy:
hx --grammar fetch
hx --grammar build
Now that helix is configured to start and use the TypeSpec LSP for any files with a .tsp file extension with an ancestor tspconfig.yaml configuration file, you need to tell it what to do with that information.
To support syntax highlighting; selection of functions, arguments, models, etc.; and proper indentation, copy files from https://github.com/heaths/helix/tree/main/runtime/queries/typespec into the runtime/queries/typespec subdirectory of your helix configuration directory.
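For example, a sketch of that copy step, assuming the default Helix config root (~/.config/helix) and using a shallow clone to avoid fetching the fork’s full history:

```shell
# HELIX_CONFIG and the /tmp path are assumptions; adjust to your setup.
HELIX_CONFIG="${HELIX_CONFIG:-$HOME/.config/helix}"
mkdir -p "$HELIX_CONFIG/runtime/queries/typespec"
git clone -q --depth=1 https://github.com/heaths/helix.git /tmp/helix-fork \
  && cp /tmp/helix-fork/runtime/queries/typespec/* "$HELIX_CONFIG/runtime/queries/typespec/" \
  || echo "clone failed; download and copy the query files manually"
```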
What do you get after all that?
While there are many different ways to configure this, many have you set up a service or autorun program, but I want neither of those affecting Windows boot and login performance for something I don’t use often throughout the day. Fortunately, I found one such article that accomplished exactly what I wanted using socat and npiperelay.
I made a few modifications including how to acquire npiperelay given changes to the Go toolset.
You need to acquire npiperelay in Windows. You can download it from https://github.com/jstarks/npiperelay/releases/latest into a directory in your PATH environment variable or, if you have Go installed, run:
go install github.com/jstarks/npiperelay@latest
Next you need to install socat in your WSL distribution. I’m assuming you are using some Debian-based distro e.g., Ubuntu. If you are using another distro, please use appropriate commands.
sudo apt update
sudo apt install -y socat
You’ll need to create a bash script that will start when you log into your distro. I’m assuming bash below.
mkdir ~/.1password
touch ~/.1password/agent && chmod +x ~/.1password/agent
Open ~/.1password/agent and paste the following content:
#!/usr/bin/bash
export SSH_AUTH_SOCK=$HOME/.1password/agent.sock
ALREADY_RUNNING=$(ps -auxww | grep -q '[n]piperelay.exe -ei -s //./pipe/openssh-ssh-agent'; echo $?)
if [[ $ALREADY_RUNNING != '0' ]]; then
  if [[ -S $SSH_AUTH_SOCK ]]; then
    rm $SSH_AUTH_SOCK
  fi
  (setsid socat UNIX-LISTEN:$SSH_AUTH_SOCK,fork EXEC:'npiperelay.exe -ei -s //./pipe/openssh-ssh-agent',nofork &) > /dev/null 2>&1
fi
To run the script when you log in interactively, edit your appropriate profile e.g., .bashrc for bash:
. ~/.1password/agent
You can restart your login session or just source ~/.1password/agent yourself.
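A quick sanity check that the relay is reachable, hedged in case the agent isn’t running yet:

```shell
# List the keys 1Password's agent holds via the relayed socket; a connection
# error means the relay (or 1Password's SSH agent) isn't running.
export SSH_AUTH_SOCK="$HOME/.1password/agent.sock"
ssh-add -l || echo "agent not reachable at $SSH_AUTH_SOCK"
```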
Assuming you have already configured 1Password’s SSH agent using the instructions at the beginning of this post, you can test with (and then reset) any git repository you have handy e.g.,
cd ~/src/some-project
echo test > test.txt
git add -A
git commit -am'test' -S
git show --show-signature
If the git show --show-signature command above shows an unknown or invalid signer, be sure you have your allowed_signers file set up for git. Unlike GPG, which can use a web of trust to validate identities, SSH signatures require explicit approval:
mkdir -p ~/.config/git/
# Each allowed_signers line starts with a principal - typically your commit
# email - before the key; replace you@example.com accordingly.
echo "you@example.com $(cat ~/.ssh/id_rsa.pub)" > ~/.config/git/allowed_signers
git config --global gpg.ssh.allowedsignersfile $HOME/.config/git/allowed_signers
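Under the hood, git shells out to ssh-keygen -Y for SSH signatures. This self-contained sketch shows the same sign-and-verify round trip with a throwaway key; the principal you@example.com and the file names are illustrative only:

```shell
# Generate a throwaway key (no passphrase) and an allowed_signers entry for it.
ssh-keygen -q -t ed25519 -N '' -f ./demo_key
printf 'you@example.com %s\n' "$(cut -d' ' -f1-2 ./demo_key.pub)" > ./allowed_signers
# Sign a file in the "git" namespace - the namespace git itself uses.
echo 'release v1.0' > msg
ssh-keygen -Y sign -f ./demo_key -n git msg    # writes msg.sig
# Verify against the allowed_signers list, as git does for commits.
ssh-keygen -Y verify -f ./allowed_signers -I you@example.com -n git -s msg.sig < msg
```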
If you are also using SSH to tunnel into other hosts, you should configure SSH separately for github.com:
Host *
  IdentityFile ~/.ssh/id_rsa.pub

Host github.com
  IdentityAgent ~/.1password/agent.sock
  IdentitiesOnly yes
For years, I used my PowerShell $profile to define a custom function:prompt. This included custom parsing for any .git directory (even .hg for Mercurial!) and, eventually, any .git file to support worktrees and submodules. It was fast and, last I knew, still powered some internal environments we use at work.

The downside is that this doesn’t work in bash. I do have PowerShell installed in Ubuntu on WSL2, but I don’t use it nearly as much as bash, given bash is much faster to start up. And since I’m developing more often in Ubuntu for pretty much every language besides C# (and .NET in general) - because Visual Studio is so much better than Visual Studio Code for managed code - I really wanted all the bells and whistles of my custom profile. I was running powerline and knew I could customize it too, but keeping Python3 up to date with the version vim needs was something I was growing tired of doing.
It was finally time to switch to Oh My Posh. I learned of it many years ago, but I don’t think it existed when I first wrote my custom prompt. It did, however, inspire me to rewrite the prompt to be coded more declaratively. Still, adding all the same capabilities just didn’t seem worth my time.
I ended up writing my own custom theme that somewhat matches my old function:prompt with a few improvements I wanted. I can clone this to ~/.config/oh-my-posh and set my .bashrc or PowerShell $profile to initialize it with the custom configuration.
To support starting a shell on a machine that does not yet have Oh My Posh, I moved and refactored my old prompt to be a module I can load as a fallback.
Overall, my $profile was better optimized so PowerShell starts faster, though starting a new Go program each time my prompt needs to render is a little slower - not enough to really impact usability, though.
All these configuration directories - along with my vim configuration - are stored as Git repositories to make it easier to sync across disparate machines as well as to track changes over time. Who knows: maybe one day I’ll want to restore some old functionality.
After so many, many years with my custom prompt, it’s a little hard to say goodbye, but look at this new prompt (which will also continue to change):

If you’re new to Mastodon (rather, the Fediverse) or want to learn about it, see https://joinmastodon.org/.
One way to reduce how much you fetch is to fetch a single branch. For example, assuming your upstream and origin remotes are set to the primary repository and your fork, respectively, it’s rarely necessary to fetch more than the main branch e.g., git fetch upstream main to fetch just the upstream remote’s main branch. This avoids fetching any other branches and all the objects referenced by those branches, tags, etc. I take advantage of this in my git sync alias, for example.
But if you often find yourself just running something like git pull, it’s beneficial to set remote tracking refspecs in your configuration file to only those branches of interest.
If you open your .git/config file from the root of your repo, you should see something like this somewhere:
[remote "upstream"]
url = https://github.com/Azure/azure-sdk-for-net.git
fetch = +refs/heads/*:refs/remotes/upstream/*
This means that any remote branches from upstream will be fetched if not specified explicitly. You can change that pattern and add more, for example:
[remote "upstream"]
url = https://github.com/Azure/azure-sdk-for-net.git
fetch = +refs/heads/main:refs/remotes/upstream/main
fetch = +refs/heads/release/*:refs/remotes/upstream/release/*
This will fetch only main and any branch starting with release/ from the upstream remote if no branch is specified explicitly.
If you work in a large team, this can save a lot of time if people are often pushing branches to a shared remote like your typical upstream remote.
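If you’d rather not edit .git/config by hand, git config can make the same change. A self-contained sketch (the remote URL mirrors the example above; nothing is fetched):

```shell
# Create a throwaway repo and point an upstream remote at the example URL.
git init -q refspec-demo && cd refspec-demo
git remote add upstream https://github.com/Azure/azure-sdk-for-net.git
# Replace the default wildcard refspec with just main...
git config remote.upstream.fetch '+refs/heads/main:refs/remotes/upstream/main'
# ...then append a second refspec for release branches.
git config --add remote.upstream.fetch '+refs/heads/release/*:refs/remotes/upstream/release/*'
git config --get-all remote.upstream.fetch
```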
Another way to reduce how much you fetch is to create a shallow clone:
git clone --depth=1 https://github.com/heaths/azure-sdk-for-net.git
cd azure-sdk-for-net
git remote add upstream https://github.com/Azure/azure-sdk-for-net.git
Note that none of the history prior to the tip of the cloned branch - often main for many repos - will be fetched, nor will any objects those commits reference. This could affect some commands that depend on history, though that’s probably unlikely in most cases. Any commits after that point will continue to accrue, however.
You can run git fetch --unshallow at any time to restore the full history. There are also ways to make an existing repo a shallow clone, but by then you’ve already fetched the brunt of the repo. Given that, and that it’s not straightforward, I won’t cover it here.
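Here’s a self-contained sketch of the round trip using a local repository as the “remote” (the file:// URL is required for --depth to apply to local clones):

```shell
# Build a tiny repository with three commits to act as the remote.
git init -q full
(
  cd full
  for i in 1 2 3; do
    echo "$i" > file.txt
    git add file.txt
    git -c user.name=demo -c user.email=demo@example.com commit -qm "commit $i"
  done
)
# A depth-1 clone fetches only the tip commit...
git clone -q --depth=1 "file://$PWD/full" shallow
cd shallow
git rev-list --count HEAD    # 1
# ...and --unshallow restores the full history.
git fetch -q --unshallow origin
git rev-list --count HEAD    # 3
```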
When you run git checkout {branch} or similar, it checks out all files in the HEAD of that branch in your local repo. To limit how many files are checked out, you can either create a sparse clone or set up a sparse checkout on an existing repo.
To create a sparse checkout when you initially clone, run the following command:
git clone --sparse https://github.com/heaths/azure-sdk-for-net.git
cd azure-sdk-for-net
git remote add upstream https://github.com/Azure/azure-sdk-for-net.git
By default, this creates a .git/info/sparse-checkout file with the following content:
/*
!/*/
The file format is similar to .gitignore but with the default cone mode you can only specify directories. This default value checks out any files - including dotfiles - directly under the repo root directory, but no subdirectories. You can use the git sparse-checkout add command to add patterns, but these will be merely appended to the end of the file. If you add a directory that already exists, it will be added to the end while the old entry/entries remain resulting in duplicates. git sparse-checkout add also checks out files immediately, so if you want to negate any paths beneath it may waste a bit of time.
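Here’s a self-contained sketch of that behavior using a tiny local repository; the directory names loosely mirror the example repo:

```shell
# Create a small "remote" with root files and a couple of subdirectories.
git init -q sparse-origin
(
  cd sparse-origin
  mkdir -p eng sdk/keyvault
  echo root > README.md
  echo pipeline > eng/pipeline.yml
  echo lib > sdk/keyvault/lib.rs
  git add -A
  git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial layout'
)
# A sparse clone checks out only files directly under the root...
git clone -q --sparse sparse-origin sparse-demo
cd sparse-demo
# ...then `add` appends a directory pattern and checks it out immediately.
git sparse-checkout add eng
git sparse-checkout list    # prints the tracked directories, e.g. eng
ls                          # README.md and eng/, but no sdk/
```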
Instead, I find it easier to just open .git/info/sparse-checkout and modify it by hand. For example, in the repo I’ve been using as an example, I might want to only check out engineering system files and services I’m working on:
/*
!/*/
/.config/
/.vscode/
/common/
/eng/
/sdk/cognitivelanguage/
/sdk/keyvault/
After you make modifications, run git sparse-checkout reapply to apply the changes.
Though the default content checks out all files under the root, no other pattern can match individual files when using cone mode. If there are files under a directory e.g., sdk/* that you need while negating the rest e.g., !sdk/*/, you’ll need to pass --no-cone to git sparse-checkout set along with all the patterns you want to enable. Since that disables cone mode, you could still edit .git/info/sparse-checkout by hand afterward.
git sparse-checkout set --no-cone '/*' '!/*/' '/.config' '/.vscode' '/common' '/eng' '/sdk/*' '!/sdk/*/' '/sdk/cognitivelanguage' '/sdk/keyvault'
git sparse-checkout reapply
If you have already cloned a repo, you can create a sparse checkout by running git sparse-checkout set. Optionally it can take patterns on the command line or from stdin with --stdin, but I still personally find manually editing the .git/info/sparse-checkout file afterward is easier.
Note that if you use worktrees - another way to reduce how much you fetch if you need multiple clones of a single repository - git sparse-checkout set will create worktree-specific configuration to avoid adversely affecting other worktrees.
You can, of course, combine both approaches to really trim how much you fetch and checkout:
git clone --depth=1 --sparse https://github.com/heaths/azure-sdk-for-net.git
cd azure-sdk-for-net
git remote add upstream https://github.com/Azure/azure-sdk-for-net.git
You can even do this with the GitHub CLI, which conveniently clones your fork (if any, which is recommended) as origin and the upstream automatically:
gh repo clone azure-sdk-for-net -- --depth=1 --sparse
Note that non-cone mode is deprecated, so prefer cone mode when you can.

git sync is an alias I created to concisely pull the upstream repo’s main branch, push that branch to my origin fork, and fetch origin branches to determine which branches have been deleted - likely from merged pull requests. While many repos I work in have changed from master to main, not all of them have yet. Some also use trunk which, personally, I like better but is less common than main. Rather than define separate aliases for each common main branch - and given that shell aliases are processed by a Linux shell, most often bash - we can use default argument values to support either of the following:
git sync
git sync trunk
Simply run the following command to set the alias to sync main by default, or whichever branch name was specified:
git config --global alias.sync '!git pull -p --ff-only upstream ${1:-main} && git push origin ${1:-main} && git fetch -p origin'
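One caveat: git also appends any arguments to the end of a `!` alias, so `git sync trunk` tacks `trunk` onto the trailing `git fetch -p origin` as well. A common workaround - wrapping the commands in a shell function so arguments bind only to the function - might look like this:

```shell
# Same alias, wrapped in a function; "$1" is consumed by f() and nothing is
# appended to the final fetch.
git config --global alias.sync '!f() { git pull -p --ff-only upstream "${1:-main}" && git push origin "${1:-main}" && git fetch -p origin; }; f'
```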