How LLMs changed the way I work

I first started mucking about with ChatGPT on the web back in the GPT-3.5 days, so late 2022. To be honest, it was quite substandard for coding. It got things wrong constantly, had a habit of sounding more confident than it had any right to and generally felt like an interesting novelty rather than something I could rely on. Even so, I came away from it with the feeling that the underlying technology clearly had legs.

GPT-4 was the point where I really started paying attention. That was the first release that made me think this was not just a toy with good marketing. The jump in quality was obvious. The problem was that the tooling around it still had not really caught up. Most of it was still driven through the ChatGPT web UI, with editor integrations only just starting to appear.

I tried a few of those early coding tools. I eventually settled on vim-chatgpt, now renamed vim-llm-agent, along with the official copilot.vim plugin. They were useful enough, but still felt a bit cramped. vim-chatgpt was basically file or selection based, while Copilot felt like autocomplete on steroids. Better than nothing, certainly, but still clunky and rather narrow.

There was also a middle phase in all this for me, and that was aider. I used it quite a lot before fully adopting Claude Code. It was much more manual than the newer agentic tools, but that was part of the appeal. It was open source, gave me more control and fit better with the sort of environment I like working in. Even so, once the newer wave of tools arrived, especially Claude Code, it was hard to ignore the gap in sheer capability. Crazy how fast things move in this space.

I was never especially taken with the GUI-heavy side of this world, so I skipped the Cursor hype entirely. I have to say, I am glad I did. I much prefer a CLI-based development environment where I can actually see what is going on, compose things myself and not feel like I am being gently pushed into somebody else's idea of how development ought to work. I did use VS Code and GitHub Copilot on some projects, more or less because I had to, and they were fine. Useful, yes. But they never felt natural to me.

The point where this all really clicked for me was summer 2025, when I trialled Claude Code on a devops-heavy internal workstream for a project deployed on AWS via Terraform. That was the first time I felt I was dealing with something that could take in not just the current file, or even just the repository, but the wider infrastructure and operational context as well. Suddenly AI-driven devops felt practical rather than gimmicky. Working that way was amazing. It genuinely felt like my abilities had been multiplied.

A concrete example was a fairly involved Terraform and Kubernetes deployment. I am not a Kubernetes expert, but I know enough to make my way around. Claude Code made it much easier to fill gaps in my knowledge quickly, understand how the moving parts fit together and get productive without weeks of mucking about first. That, to me, is one of the most interesting things about these tools. They amplify existing expertise rather than replacing the need for it. You still need enough background knowledge to spot when they are talking nonsense, glossing over something important or making dangerous suggestions, especially around security.

Since then I have tried a fair few tools, including OpenCode, VS Code Copilot, Claude Code and, more recently, pi.dev. I also tried Codex fairly early on, but at the time it felt like it still had a fair bit of growing up to do, so I did not stick with it. I should probably give it another look. I was actually quite bullish on OpenCode for a few months. It seemed promising and I liked where it was heading. But over time I became less comfortable with some of the security trade-offs, especially after a remote code execution issue, and parts of the privacy story felt murkier than I would like. It also started to feel too batteries-included for my liking, in much the same way Claude Code can.

That was a big part of why I ended up adopting pi.dev instead. It is much simpler and much more transparent. What I really like is that I can inspect its behaviour while using it, and change that behaviour in real time. That is a very powerful property in a tool. I am not an Emacs user, but I can understand why people who are into Emacs become so attached to it, because this scratches a similar itch. When a tool is simple enough to understand and pliable enough to reshape to your own needs, it starts to feel less like a product and more like part of your environment.

I have also added my own permissions layer to pi.dev to make it safer to use, which suits the way I like to work. It is not doing anything especially magical there either, which I mean as a compliment. It is simple enough that I can see how it behaves and change it when I feel the need.

I am also checking out Amazon Kiro on the CLI side, and it seems very decent so far. That said, it still feels like a rather AWS-shaped product, being built on Amazon Bedrock, so it is a different sort of walled garden. For now, I have more or less settled on Claude Code and pi.dev, with Kiro looking promising as well. Claude Code feels very batteries-included, which is not always a bad thing. pi.dev is almost the opposite. It is small, clear and easy to reason about. For my own work, I find myself preferring that.

The disappointing part is Anthropic's stance on third-party tools piggybacking on Claude consumer subscriptions and rate limits. OpenCode, for example, was explicitly blocked first. More recently Anthropic moved to stop third-party harnesses more generally from using Claude subscription limits. That has made the Claude ecosystem feel more like a walled garden than I would like. That is a pity, because I generally prefer Anthropic's models. OpenAI, by contrast, seem much more comfortable with third-party integrations and harnesses around their coding tools, and I think that is the right instinct.

As for the current state of the art, I have to say I am amazed by it. What these tools can already do is extraordinary, and I do not think the pace of change is slowing down any time soon. The effects on software engineering, and probably plenty of other fields, are going to be dramatic.

At the same time, I am not completely relaxed about what this means for the profession. I can quite easily see a future in which fewer software engineers are needed, junior roles become much rarer and more of the industry is driven by a relatively small number of highly leveraged people directing tools and agents, software captains if you like. If anything, these tools seem to increase the value of judgment and existing experience, which is part of why I suspect junior roles may get squeezed first. That may be where this is heading. I am fascinated by it, and I use these tools constantly, but I do not think software engineering is about to disappear. I do think it may end up looking very different from the profession many of us entered, and I would be lying if I said I did not find that future a bit unsettling.

From Mac to Linux... Again

I've been a GNU/Linux user in my personal life since around 2006/7, though my first encounter came a couple of years earlier: I remember burning an Ubuntu 5.04 (Hoary Hedgehog) Live CD and nervously booting it on the family PC, half convinced I'd destroy something in the process. I didn't, obviously. It's a Live CD. Online back then, every other person who seemed to know what they were talking about was running Linux, and it felt like a new frontier. My first proper install was Ubuntu 6.10 (Edgy Eft) on my laptop at uni. Prior to all of that I was a Windows user, going all the way back to Windows for Workgroups 3.11. Making the leap to Linux felt like a significant lifestyle change, and for most of the following decade and a half it remained my computing home.

Work is what pulled me toward Mac. I've been using one at the office for nearly ten years now. The appeal is obvious: a Unix-like OS that let me stop mucking about with drivers and configuration and just focus on getting work done. Enterprise software support matters too: VPN clients, MDM solutions and the first-class apps for the usual corporate toolkit all just work on Mac in a way that requires considerably more patience on Linux.

That work experience gradually wore down my resistance. Around the early 2020s, with the arrival of Apple Silicon, I made the switch in my personal life too. The M-series chips are frankly astonishing. The performance and battery life are in a different league entirely. For a while, I had no complaints.

But things have been slowly souring. The announcement of macOS Tahoe with its Liquid Glass redesign crystallised something I'd been noticing for a while: Apple has been drifting from its "it just works" ethos toward form over function. The sentiment is widely shared: recurring complaints about deteriorating software quality, bugs that persist across multiple releases and now a visual overhaul that prioritises aesthetics over usability. Others have documented it better than I can. The reaction on Hacker News says it all. The hardware is still extraordinary. The software is letting it down.

This got me thinking about going back to Linux. I picked up a Geekom mini PC with a Ryzen 5 processor for the purpose. I've always been a bit of an AMD fanboy and the Ryzen platform has solid Linux support, GPU drivers included. I've been running Debian on my home servers for the past decade without complaint, but for the desktop I decided to try Arch Linux this time. I'd briefly used it years ago on a netbook but never as a primary machine. The rolling release model appealed to me, always on current software without adding backports repos or hunting down dodgy PPAs. To be fair, Debian upgrades have always been painless for me, and I ran Debian Testing for a while to get more recent packages, but I seemed to be the only person doing it. The Arch community, particularly the forum, is considerably more vibrant, and I've never been one for Flatpak or similar workarounds. Arch just gives you the latest versions. The installation, which once required a multi-hour rite of passage through the wiki, was surprisingly painless with some help from Claude and the still-excellent Arch Wiki.

For the desktop I went with KDE Plasma 6. I did consider going back to a lightweight window manager. I've spent time with Openbox, Fluxbox, spectrwm and i3 over the years, but honestly I no longer have the appetite for that amount of configuration. Time is money (especially as I have a young family now) and I want something functional with sane/attractive defaults from the moment I log in. KDE fits that bill nicely.

And I have to say, I was impressed, and apparently I'm not the only one. Out of the box it feels solid and looks decent. The only things I changed were the display scaling (from the default 125% down to 100%) and the wallpaper settings. KDE can pull fresh wallpapers automatically from sources like Bing Picture of the Day or Wikipedia, with no third-party tools needed. Everything else just worked, and I didn't feel the urge to "rice" it or spend hours tweaking appearance settings.

What I appreciate most is that KDE sticks firmly to the traditional desktop metaphor. Unlike GNOME, which has become increasingly opinionated and minimalistic to the point of frustration, KDE gives you a proper taskbar, a sensible application launcher and sane defaults. I was a GNOME 2 user back in the day and the transition to GNOME 3 is what originally drove me away, first to Xfce, then to lightweight window managers like Openbox and Fluxbox, and eventually to tiling setups with i3 and spectrwm. KDE was going through its own rough patch with the 4.0 transition at the time and I wasn't interested in MATE or Trinity. KDE never lost the plot the way GNOME did, though. It just stumbled for a while.

My tools of choice have always leaned FOSS and cross-platform, deliberately so, as it turns out. Zsh, Neovim, tmux, WezTerm, podman, KeePassXC, Transmission, LibreWolf, VLC (and recently Claude Code and OpenCode) all work identically on Linux and make living with a foot in each camp surprisingly painless. There are some Linux-specific wins too: sshfs, for instance, which has become an increasingly painful exercise on recent macOS releases, just works here. And while I don't have Homebrew on Linux, the AUR covers what I need. The one persistent irritant is muscle memory. I reliably reach for the wrong modifier key on whichever machine I'm on. I can't see that improving anytime soon.

One thing worth mentioning: I couldn't get suspend working reliably. The system would go to sleep fine but come back unresponsive, both via keyboard and over SSH. I didn't spend much time investigating it (life's too short) and in the end just turned off suspend in KDE and the SDDM login manager. On a laptop that would be a dealbreaker, and it's exactly the sort of hardware integration where Macs genuinely excel. On a desktop mini that doesn't move, it's a non-issue.

I'm not suggesting this is the right move for everyone. I still have a MacBook Air for portable use, and whatever Apple's software problems, they remain in a league of their own on the hardware side. But if you're dependent on enterprise software, specific professional applications or the broader Apple ecosystem then a Mac remains the pragmatic choice. For a desktop though, and for someone who spends most of their time in a terminal, a browser and a text editor, Linux in 2026 with KDE Plasma 6 is a compelling option. If macOS Tahoe is the direction things are heading, I don't see myself going back any time soon.

Thoughts on Star Trek: Strange New Worlds and "Nu Trek" in general

Sat 01 April 2023

Surprisingly, I haven't written about Star Trek on this blog before, despite being an ardent Trekkie since childhood. I complained previously about my disappointment with the new Star Wars films, but I was never as committed to that universe as I am to Trek's. I grew up on TNG and then moved on to DS9 and Voyager (and also the TNG films). The utopian ideals embedded in these shows greatly influenced my own political views on how society should be constructed and behave.

The 90s were the golden age of Trek for me. The decline began with Enterprise. It didn't feel like Trek to me (some say the blame lies with the TNG films). The later rebooted films were fun popcorn fare but felt even further from the spirit of Trek.

I had huge hopes for Discovery when it was announced in the mid-2010s but was gravely disappointed with what came out. It was just a mess. I appreciated the talented and diverse cast, but the story and the world-building were a fiasco. Season 2 in particular was atrocious. The quiet, polite and introspective nature of 90s Trek was absent. You could excuse that in the flashy blockbuster films, but not here. The whole invention of an intergalactic fungal highway to facilitate FTL travel was so stupid it was funny. I kept watching because that was the only Trek available at the time.

When I heard about Picard I was ecstatic. The character of Picard was an inspiration to me, and I didn't foresee that they would proceed to ruin this too. I envisaged Picard living a quiet, retired life in his rural chateau, helping to inspire youth in his community to also reach for the stars, providing expert guidance on local issues and disputes, and maybe solving the odd small mystery or crime here and there. The early trailers certainly hinted at this. Like Discovery, what came next was just bad. Season 2 started out strong but quickly devolved into hilarity. The plot arcs were nonsensical and ill-thought-out. Seeing iconic characters and lore get trashed like this has been very disappointing.

Some comments on the recent cartoons... Lower Decks has been so-so; its main problem is that it tries far too hard to be funny rather than sci-fi. Prodigy was better and has more potential, but it still doesn't fully feel like Trek and it's clear it is aimed at a much younger audience. What has been missing in all these Nu-Trek iterations is the optimistic, utopian outlook of Classic and TNG-era Trek. They are mostly dark, gritty, violent and nihilistic, but not in any sort of smart way. Paradoxically, there is also a lot of cheap, contrived sentimentality and emotional instability among the crews, which is jarring compared to the professional demeanour and standards of earlier Trek crews. The Red Letter Media folks make a great comparison on this.

So I didn't have much hope for Strange New Worlds when it was announced. I was expecting more of the same, in line with Discovery and Picard. But I have been greatly and pleasantly surprised. This is a show that feels like 90s Trek. The first episode was maybe a bit too preachy, but it fully endorsed the classic utopian vision of humanity and intergalactic cooperation envisioned in Classic and TNG-era Trek. The story didn't revolve around a single protagonist but around the entire crew working together to solve a crisis. The classic ethical dilemma of whether to break the "Prime Directive" was satisfyingly explored in both episodes 1 and 2. The rest of the season generally didn't disappoint and continued in the same vein, interspersed with some nice action-oriented episodes.

Minor criticisms (and spoilers) include needless nostalgia bait (e.g. why make La'an part of the Noonien-Singh lineage?) and a Gorn threat that seemed clumsily retconned in. But I think I can overlook this. Trek feels like it is back.

PS - I have neglected this blog for a while due to work and family life but aim to get back into regularly writing again.

PPS - I am currently going through Picard Season 3, which is a definite improvement over the earlier two seasons, but I still have the same basic gripes. Maybe I will write on this later.

Running commands in multiple panes in tmux

Fri 29 June 2018

I recently discovered this neat feature in tmux which has changed my life. To repeat the same command in multiple panes in a tmux window, hit your tmux prefix and enter:

:setw synchronize-panes

Run the same command again to toggle off.
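If you would rather have this on a key binding than type the command each time, a single line in ~/.tmux.conf does the trick. The choice of S as the key below is just my own example; pick whatever is free in your setup:

bind-key S setw synchronize-panes \; display-message "synchronize-panes: #{?pane_synchronized,on,off}"

With synchronize-panes on, keystrokes typed into one pane are sent to every pane in the window, which is handy for running the same command across several servers at once. Just remember to toggle it off before typing anything host-specific.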

Thanks to Jahan Syed for this tip.

Decommissioning my Raspberry Pi

Thu 28 June 2018

A few weeks ago I decided to retire my beloved Raspberry Pi (Model 1B), which I had been using as a dedicated home media center and a file/DNS/web/proxy/VPN/torrent/Tor server for the past 3 years. I had purchased an Amazon Fire TV Stick, which took over media player duties (I highly recommend it), and this got me thinking about whether I could upgrade from the Model 1B.

The Pi always got the job done but it was definitely on the slow-ish side. I wanted to know if I could do better. I looked into the latest model, which is a much beefier little machine with 1GB of RAM and a 1.2 GHz CPU. But then I remembered that I had a Dell Mini 10 netbook languishing in the corner of my room. The last time I had used it was a good 2-3 years ago, while experimenting with Arch Linux.

So I decided to use that instead. It is equipped with a 1.6 GHz Intel Atom processor and 1GB of RAM, so it would definitely be an upgrade. Power consumption would be low thanks to the ultra-low-power nature of the CPU, making it an ideal home server. I wiped the disk, installed Debian Stretch on it and copied over my settings and files. Within an hour or two I was up and running.

Understandably, it is running a lot faster than the Pi and will probably serve me well for a good few years, barring any hardware failures.

I'm not sure what use I can put my little Pi to now though. I bought it just to tinker but it ended up inadvertently becoming a reliable home server. The fact that it was able to reliably carry out all those duties for so long is a testament to its hardiness and also the power of a good stripped down Linux distro.