Code by Tom https://codebytom.blog

Why “quick” code reviews are hurting you and your team https://codebytom.blog/2026/01/why-quick-code-reviews-are-hurting-you-and-your-team/ Thu, 15 Jan 2026 10:18:25 +0000 https://codebytom.blog/?p=2262 We’ve all been there: a notification pops up, you glance at the code, it looks “fine enough,” and you hit Approve.

I recently caught myself giving a rushed approval, and it forced me to realize that these “rubber-stamp” reviews aren’t just a personal lapse; they’re a systemic risk. As AI tools increasingly assist in writing our code, the ability to perform deep, intentional code reviews is becoming the most critical skill in an engineer’s toolkit.

We are entering a world where the volume of code being produced is exploding, thanks to LLMs. While AI is great at generating syntax, it often misses the nuance of edge cases, architectural consistency, and long-term maintainability.

This is where you set yourself apart. In the age of AI, the “coder” is common, but the “discerning reviewer” is rare. If you treat reviews as a chore to be cleared, you are essentially outsourcing your team’s quality to an algorithm. Taking the time to truly understand the why behind every line of code is what transforms you from a code-writer into a high-level engineer.

Reviewing code is a competitive advantage

Ultimately, a code review is one of the best ways for a team to grow. It’s where trade-offs are discussed, different approaches are compared, and senior-level thinking is shared.

In a world saturated with AI-generated code, the engineers who take the time to deeply understand, test, and critique code are the ones who will bring the greatest benefit to themselves, their team, and their product.

The cost of synchronous meetings https://codebytom.blog/2025/12/the-cost-of-synchronous-meetings/ Tue, 16 Dec 2025 14:07:46 +0000 https://codebytom.blog/?p=2252 As software engineers, our most valuable currency isn’t lines of code; it’s focus.

We all know the flow state: holding a complex mental model in your head, connecting the dots between systems. Then, a calendar notification pops up.

Synchronous video meetings are often the unintended enemy of this flow. They require three rigid things: a specific time, a specific place, and an immediate shift in mindset. You have to drop what you’re building, enter a new context, and contribute instantly.

There are certainly circumstances where video calls are necessary; sometimes you need high-bandwidth communication to resolve ambiguity. But every meeting effectively taxes the team’s ability to do work.

The cost of context switching

The cost of a meeting isn’t just the 30 minutes blocked on the calendar. It’s the “ramp down” before and the “ramp up” after.

Research has shown it can take around 23 minutes to fully regain focus after an interruption. If you have back-to-back meetings, or even just scattered “quick syncs” throughout the day, that recovery time evaporates. You’re left with a fragmented day where tangible work struggles to happen.

Flipping the default to async

My preference has always been to treat synchronous meetings as a luxury, not a default. By turning updates, questions, and discussions into internal posts, we respect everyone’s time and cognitive load.

Here is why async wins for engineering teams:

  • Respect for flow: I can read a post during my low-energy gaps, saving my high-energy blocks for coding.
  • High-fidelity scanning: It is much faster to scan a written document for the relevant points than to sit through a video call waiting for them.
  • Permanent knowledge: Video calls vanish when the window closes. Written posts become searchable, reusable documentation for the future.
  • Richer context: You can embed screen recordings, logs, and code snippets directly where they are needed.
  • Better thinking: Real-time demands immediate answers. Async allows contributors time to think, resulting in deeper questions and more considered solutions.

Making space for real connection

Paradoxically, removing scheduled meetings can lead to better collaboration.

When your calendar isn’t fragmented by status updates and broad meetings, you find the time to actually sit down with your immediate team. It frees up space to jump on a call to pair on a hard problem.

It is in these private, intimate settings, solving problems in code together rather than staring at a grid of muted faces, where you really get to know your colleagues. That is where trust is built.

Raising the bar for sync

This isn’t about eliminating human connection or banning Zoom. It’s about being intentional.

If we are going to ask for everyone’s synchronous attention, the bar should be high. Every meeting requires a clear agenda and a tangible goal.

For everything else, write it down. It gives your team the freedom to read in their own time, think deeply, and keep their focus where it belongs.

Make change cheaper: Making a new engineering job less draining https://codebytom.blog/2025/11/make-change-cheaper-making-a-new-engineering-job-less-draining/ Fri, 21 Nov 2025 22:28:00 +0000 https://codebytom.blog/?p=2249
We glorify adaptability in tech. Pivot fast, learn faster, ship faster. But every change taxes the same systems you use to think clearly and make good decisions.


Starting a new job as a software engineer is peak change. New codebase, new tooling, new rituals, new acronyms, new people. Your brain is building fresh predictions for everything. How PRs get reviewed, where incidents get triaged, which tests to trust, which Slack channels matter. That learning tax is real, and it draws from the same pool you need to do meaningful work.


In stable environments, habits carry you. Shortcuts form. Complexity becomes familiar. But a new role puts your brain in “cold start.” There’s no autopilot, just constant updating, constant learning. Stack enough changes (new manager, new architecture, new deployment pipeline) and you hit change fatigue. The instinct is to push harder or to wait for things to “settle.” Both can backfire.


What helps when you’re onboarding and everything is moving:

  • Stop trying to get back to normal: Old workflows, old muscle memory, old velocity don’t apply here. Normalise slow starts, incomplete context, and small wins. Aim for present fit, not past performance.
  • Take inventory, fast and often: What’s actually working? Which doc sources are reliable? Who unblocks quickly? Which tests are flaky? Keep a running list. Pattern finding is how you reduce cognitive load.
  • Design tiny experiments: Treat onboarding like a lab and try out new things often.


Change won’t slow down. But its cost is adjustable. When you accept that adaptation consumes energy, when you surface what you’ve learned, and when you run small experiments instead of waiting for perfect conditions, you lower the tax. The goal is signal over speed. Ship less noise, learn the right things, and let the system become familiar enough that focus returns.

Five years at Automattic: A goodbye https://codebytom.blog/2025/10/five-years-at-automattic-a-goodbye/ Fri, 31 Oct 2025 08:22:31 +0000 https://lightgoldenrodyellow-finch-399595.hostingersite.com/?p=2225 After five incredible years, I’m moving on from Automattic. It’s been absolutely class.

I started working on WordPress.com, which was a brilliant introduction to the company. After a year, I moved to WooCommerce and never looked back. Over the years that followed, I bounced between five different teams, each one giving me the chance to work with people who are genuinely passionate, ridiculously talented, and absolutely hilarious (whether they meant to be or not).

What I’m taking with me

I’ve worked on some interesting projects during my time here—some challenging ones too, and I’m dead proud of what we’ve achieved. But honestly, the biggest thing I’m taking away is what I’ve learned from being surrounded by people who are just… better at this than me. They’ve made me better at what I do, and I’m grateful for that.

The technical skills, the product thinking, the engineering practices. I’ve learned so much from teammates who struck the perfect balance between innovation and implementation. People who could transform complex problems into elegant solutions, who never made me feel like taking their time was a problem even when their plates were full.

The meetups

And the meetups. Bloody hell, the meetups. I have never (and I mean never) laughed so hard in my life. There have been times where I’ve cried laughing, barely slept, had class chats with amazing people, and made friendships that will last a lifetime.

Working at a distributed company means these in-person gatherings are something special. There’s something about finally meeting the people you’ve been working with remotely, and discovering that the friendship you’ve built online is just as real when you’re face to face.

The people

I’m not sure my time here would have been a fraction of what it was if it wasn’t for the people I worked with. From day one, I’ve been surrounded by teammates who care about their craft, who are endlessly curious, and who have the ability to turn a room full of strangers into a room full of mates.

I’ve worked with people whose technical abilities blow my mind, who could pick up new technologies at a rate I found admirable, who thought deeply about the user experience, and who brought proper energy to everything they did. People who checked in regularly, who were patient with my never-ending questions, and who challenged me.

To everyone at Automattic: thank you.

Tracking Record Divergence with Content Hashing https://codebytom.blog/2025/10/tracking-record-divergence-with-content-hashing/ Fri, 24 Oct 2025 07:52:52 +0000 https://lightgoldenrodyellow-finch-399595.hostingersite.com/?p=2222 When users duplicate records in a database, tracking whether copies have diverged from their source becomes challenging. A content hash provides a simple mechanism to detect when duplicates no longer match their parent.

The Problem

Consider an email_templates table:

CREATE TABLE email_templates (
  id INT PRIMARY KEY,
  name VARCHAR(255),
  subject VARCHAR(255),
  body TEXT,
  footer TEXT,
  hash VARCHAR(64),
  parent_id INT NULL
);

User A creates a template (id: 1, parent_id: null). Then User B duplicates this template (id: 2, parent_id: 1). Both records are identical.

User A then updates their template’s subject to “Welcome aboard!”. The parent_id on User B’s record still points to id: 1, but the records no longer match since the subjects are different.

Content Hashing

A hash function converts variable-length input into a fixed-length string. SHA-256 produces a 64-character hexadecimal output. Identical inputs always produce identical hashes; even single character changes produce completely different outputs.

const crypto = require('crypto');

function generateHash(data) {
  return crypto
    .createHash('sha256')
    .update(JSON.stringify(data))
    .digest('hex');
}
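
One subtlety worth noting, since the hash is built from JSON.stringify: key order matters. Two objects holding the same data but constructed with keys in a different order will serialize, and therefore hash, differently. A defensive variation (my addition, suited to flat objects like the ones used here) sorts the keys first:

const crypto = require('crypto');

// Sort keys so logically identical flat objects always
// serialize to the same string before hashing.
function generateStableHash(data) {
  const sorted = {};
  for (const key of Object.keys(data).sort()) {
    sorted[key] = data[key];
  }
  return crypto
    .createHash('sha256')
    .update(JSON.stringify(sorted))
    .digest('hex');
}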

Implementation

Whenever a user creates or updates a record, you take all (or some) of that record’s data, produce a hash from it, and store the hash against the record.

async function updateEmailTemplate(id, updates) {
  const hashData = {
    subject: updates.subject,
    body: updates.body,
    footer: updates.footer
  };
  
  const hash = generateHash(hashData);
  
  await db.query(
    'UPDATE email_templates SET subject = ?, body = ?, footer = ?, hash = ? WHERE id = ?',
    [updates.subject, updates.body, updates.footer, hash, id]
  );
}

Detecting Divergence

In our example, since the two records are linked through id and parent_id, we can determine whether they have diverged from each other by comparing the hashes. Since User A updated the subject, their hash will now differ from the one on User B’s copy.

async function hasDiverged(recordId) {
  // Note: assumes db.query resolves to a single row object here.
  const record = await db.query(
    'SELECT hash, parent_id FROM email_templates WHERE id = ?',
    [recordId]
  );
  
  if (!record.parent_id) {
    return false; // Not a duplicate
  }
  
  const parent = await db.query(
    'SELECT hash FROM email_templates WHERE id = ?',
    [record.parent_id]
  );
  
  return record.hash !== parent.hash;
}

Considerations

  • Select which fields to include in the hash. Exclude timestamps, usage counts, or other metadata that shouldn’t trigger divergence detection.
  • Store the hash in the database rather than computing it on demand to avoid performance penalties when checking multiple records.
  • For large text fields, consider hashing normalized versions (trimmed, lowercase) to avoid detecting inconsequential changes; a sketch follows this list.
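
As an illustration of that last point, here’s one possible normalization pass (my sketch; it assumes only whitespace and casing differences should be ignored):

// Collapse whitespace and casing so trivial edits
// don't register as divergence.
function normalize(text) {
  return (text || '').trim().replace(/\s+/g, ' ').toLowerCase();
}

function generateNormalizedHash(template) {
  return generateHash({
    subject: normalize(template.subject),
    body: normalize(template.body),
    footer: normalize(template.footer)
  });
}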

The hash comparison provides O(1) divergence detection without field-by-field comparisons or storing complete record history.

Write less, not more: Where time is really saved with AI assisted writing https://codebytom.blog/2025/09/write-less-not-more-where-time-is-really-saved-with-ai-assisted-writing/ Wed, 24 Sep 2025 15:05:25 +0000 https://codebytom.blog/?p=2217 We’re drowning in AI-generated content. Company channels overflow with verbose updates. LinkedIn feeds scroll endlessly through AI-polished posts. Blogs churn out article after article, each longer than necessary.

The problem isn’t the quantity; it’s that we’re using AI backwards.

The verbose trap

Most people use AI as a content multiplication machine. Why write one post when you can write five? But multiplication without curation creates noise, not signal. The result shifts cognitive burden to readers, who must extract meaning from inflated prose.

Distillation, not generation

The power of AI lies in helping us write less, not more.

Instead of asking AI to expand your thoughts into lengthy pieces, ask it to compress them. Use it to:

  • Identify your core point: What’s the one thing readers need to understand?
  • Eliminate redundancy: Where are you repeating yourself?
  • Cut filler: Which sentences add nothing?
  • Clarify confusing passages: Where might readers stumble?
  • Tighten your prose: How can you say more with fewer words?

This isn’t about dumbing down complex ideas; it’s about respecting your readers’ time and cognitive load.

Saving time where it matters

The goal shouldn’t be saving the writer time, but saving the reader time.

A five-minute investment in making your post 30% shorter saves every reader a slice of their time. If 100 people read your post, you’ve saved hours of collective human attention. That’s a better return on investment than any productivity hack.

The scarce resource isn’t the ability to produce more words; it’s the ability to choose the right ones. Use AI to write less, not more. Your readers will thank you.

An overview of basic cryptographic concepts https://codebytom.blog/2025/08/an-overview-of-basic-cryptographic-concepts/ Wed, 20 Aug 2025 18:12:24 +0000 https://codebytom.blog/?p=2094 I’ve been learning a little more about cryptographic fundamentals, and this post documents my exploration. I’m focusing on the practical use cases, underlying mechanisms, and trade-offs that influence implementation decisions. In future posts, I plan to dive deeper into the implementation of some of the concepts below.

Hashing and salting

Hashing is basically a one-way function that takes any input and spits out a fixed-size string of characters. Think of it like a meat grinder: you can put a steak in and get minced/ground beef out, but you can’t reverse it and turn it back into a steak. The same input always produces the same output, but even tiny changes to the input create completely different results.

Software uses hashing all over the place. The big one is password storage: instead of keeping actual passwords in your database (which would be a nightmare if someone broke in), you store the hash. When a user logs in, you hash their input and compare it to what’s stored. Hashing also gets used for file integrity checks, cache keys, and digital signatures.

Here’s where salting comes in. Without salt, identical passwords create identical hashes, which makes life easy for attackers. They can build massive lookup tables called rainbow tables with common passwords and their hashes. Salt is just random data you add to each password before hashing it. Every user gets their own unique salt, so even if two people use “password123”, their hashes look completely different because different salt values were used to hash them.

The trick with salt is that you need to store it alongside the hash in your database. When someone tries to log in, you grab both the stored hash and the stored salt, then use that same salt to hash their input password. If the result matches what’s in the database, they’re good to go. The salt doesn’t need to be secret; it just needs to be unique per user and stored permanently so you can use it again during verification.
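
Here’s a rough sketch of that flow in Node.js using the built-in crypto module. I’ve picked scrypt as the hashing function for illustration; dedicated algorithms like bcrypt or Argon2 are the usual production choices:

const crypto = require('crypto');

// On signup: generate a unique salt and hash the password with it.
function hashPassword(password) {
  const salt = crypto.randomBytes(16).toString('hex');
  const hash = crypto.scryptSync(password, salt, 64).toString('hex');
  return { salt, hash }; // store both against the user
}

// On login: re-hash the input with the stored salt and compare.
function verifyPassword(password, salt, storedHash) {
  const candidate = crypto.scryptSync(password, salt, 64);
  return crypto.timingSafeEqual(candidate, Buffer.from(storedHash, 'hex'));
}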

Digital signatures

Digital signatures use asymmetric cryptography to bind a message to its creator through mathematical proof. The process begins when a sender generates a hash of their message. This hash gets encrypted with the sender’s private key. The resulting signature attaches to the original message for transmission.

Verification reverses this process. The recipient hashes the received message using the same algorithm, then decrypts the signature using the sender’s public key. If the decrypted hash matches the computed hash, the signature is validated. This proves the message originated from the private key holder and remains unmodified during transmission.

Digital signatures don’t hide data. They prove origin and detect tampering. For confidentiality, you need separate encryption.
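A quick sketch of that round trip in Node.js (the key type and message are arbitrary choices for illustration; Node’s sign/verify helpers handle the hash-then-encrypt steps internally):

const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

const message = Buffer.from('release v1.2.3 approved');

// Sender: hash the message and sign the digest with the private key.
const signature = crypto.sign('sha256', message, privateKey);

// Recipient: verify with the sender's public key.
console.log(crypto.verify('sha256', message, publicKey, signature)); // true

// Any tampering with the message breaks verification.
console.log(crypto.verify('sha256', Buffer.from('release v9.9.9'), publicKey, signature)); // false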

Public-key cryptography

Public-key cryptography operates on asymmetric key pairs generated through mathematical algorithms. The algorithms create two keys where data encrypted with one can only be decrypted with the other. This mathematical relationship forms a ‘trapdoor function’, computationally easy in one direction but practically impossible to reverse without the private key.

The encryption process depends on which key initiates the operation. When encrypting data for confidentiality, you use the recipient’s public key, ensuring only they can decrypt with their private key. For digital signatures (described above), you encrypt a hash of your message with your private key, allowing anyone to verify authenticity using your public key. This dual functionality addresses both secrecy and authentication requirements.
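
The confidentiality direction looks like this; again a minimal Node.js sketch (real systems rarely encrypt payloads directly with RSA, as the next section explains):

const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Anyone can encrypt with the recipient's public key...
const ciphertext = crypto.publicEncrypt(publicKey, Buffer.from('meet at noon'));

// ...but only the holder of the matching private key can decrypt.
console.log(crypto.privateDecrypt(privateKey, ciphertext).toString()); // 'meet at noon'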

The difference between symmetric and asymmetric encryption

Symmetric encryption uses a single shared key for both encryption and decryption, requiring secure key distribution between parties but offering fast performance. Asymmetric encryption uses mathematically related key pairs where the public key encrypts data that only the corresponding private key can decrypt, eliminating the key distribution problem but at the cost of significantly slower performance.

Most practical systems combine both approaches: asymmetric algorithms establish a shared symmetric key. This hybrid model leverages the security benefits of asymmetric cryptography for key exchange while maintaining the performance advantages of symmetric encryption.
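
A sketch of that hybrid pattern, assuming the recipient’s RSA public key is already known: the bulk data travels under fast symmetric AES, and only the small symmetric key is wrapped asymmetrically:

const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Sender: encrypt the payload with a one-off symmetric key (fast).
const symKey = crypto.randomBytes(32);
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', symKey, iv);
const encrypted = Buffer.concat([cipher.update('a large payload...'), cipher.final()]);
const authTag = cipher.getAuthTag();

// Sender: wrap only the small symmetric key asymmetrically.
const wrappedKey = crypto.publicEncrypt(publicKey, symKey);

// Recipient: unwrap the key, then decrypt the payload quickly.
const unwrapped = crypto.privateDecrypt(privateKey, wrappedKey);
const decipher = crypto.createDecipheriv('aes-256-gcm', unwrapped, iv);
decipher.setAuthTag(authTag);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
console.log(decrypted.toString()); // 'a large payload...'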

Localised content in a VPN-first world https://codebytom.blog/2025/08/localised-content-in-a-vpn-first-world/ Tue, 12 Aug 2025 12:32:46 +0000 https://codebytom.blog/?p=2082 Websites have long tried to be helpful. They show prices in your currency, highlight your local news, or suggest content based on where you are. That’s not inherently bad; in theory, it’s personalisation. But when connection location becomes the gatekeeper, we risk building digital experiences that can crumble in an instant under sweeping national changes.

On July 25, 2025, the UK’s Online Safety Act took effect, enforcing strict age verification for anyone trying to access certain content. You’re now verifying your identity to see this content: maybe your face, maybe your credit card. The goal was to keep kids safe. The fallout was… predictable: Proton VPN saw an 1,800% surge in UK sign-ups, NordVPN saw around 1,000%, and VPN apps flooded the UK App Store charts.

When local is no longer local

The lesson here is simple: when people are under pressure, be it from regulations, risks to privacy, or surveillance, they find the easiest way to slip the leash. And that means:

  • Localised currency: Not if you’re routing through another country.
  • Regional news: Won’t work if your IP says you’re halfway around the world from where you actually are.
  • Local media: Ads appear in other languages, showing products that aren’t relevant to you; video suggestions follow suit.

Becoming mainstream

The UK’s Online Safety Act has done something few privacy campaigns could: it’s made everyday people acutely aware of just how much an insecure connection reveals about them. What was once the territory of the privacy-conscious, those who already knew their IP address was a digital fingerprint, has now gone mainstream. VPNs are no longer niche tools for the tech-savvy; they’re becoming a default safeguard for anyone unwilling to hand over personal details just to browse. This shift is pushing the masses onto masked networks by necessity rather than choice, and in the process people are waking up to the reality that online privacy and security aren’t luxury concerns. Sweeping government mandates are effectively forcing the public to learn, very quickly, what the privacy community has been saying for years: the internet was never as anonymous as it felt.

The UK’s numbers are incredible. But they’re also a signal: people value privacy, and when a line is crossed, they vote with their clicks. IP-based experiences are becoming dangerously irrelevant in that reality.

Building the Block Hooks API: My WordPress core experience https://codebytom.blog/2025/07/building-the-block-hooks-api-my-wordpress-core-experience/ Mon, 14 Jul 2025 08:41:39 +0000 https://codebytom.blog/?p=1901 What is the Block Hooks API

The Block Hooks API allows a block to automatically insert itself relative to instances of other block types. For example, a “Like” button block can ask to be inserted before the Post Content block, or a Mini Cart block can ask to be inserted after the Navigation block. Think of it as WordPress’s answer to the classic theme hooks and filters system to extend site UIs, but specifically designed for the block editor era.

As the name suggests, you can only insert blocks with the Block Hooks API. The API works holistically with site editing. While insertion happens automatically when a block is hooked, the user has the ultimate control. In the Site Editor, they can keep, remove, customise, or move the block, and those changes will be reflected on the front end.

The beauty lies in its simplicity: instead of users manually adding blocks everywhere, plugins can automatically insert their blocks where they make sense, while still giving users complete control to customize or remove them.
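
For a concrete sense of the developer-facing surface: a block opts in through the blockHooks field of its block.json metadata, naming an anchor block and a relative position. A minimal sketch (the block name here is hypothetical), mirroring the “Like” button example above:

{
  "apiVersion": 3,
  "name": "my-plugin/like-button",
  "title": "Like Button",
  "blockHooks": {
    "core/post-content": "before"
  }
}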

How I got involved

I was selected from the WooCommerce team to work directly with some incredibly talented individuals on the WordPress.org team, particularly Bernie Reiter, who led much of the technical vision for this project. The reason for my involvement was pretty straightforward – WooCommerce was going to be a primary consumer of this API, and we represented a perfect real-world use case.

Our immediate use case: we wanted to automatically insert things like the Mini Cart block and My Account block into store headers without forcing users to manually add them. No WooCommerce store owner should have to waste time editing areas like their header to get this basic ecommerce functionality; it should come out of the box automatically. That’s exactly the kind of problem Block Hooks was designed to solve.

The weight of building an API for WordPress core

Working on a new API for WordPress core meant we had to be incredibly careful about how we approached every decision. Unlike a typical project where you can iterate and change things as you learn, the API signature and behavior patterns we established had to be maintained going forward due to WordPress’s commitment to backwards compatibility.

Every function name, every parameter, every return value, once it shipped, it was essentially set in stone. That’s both terrifying and exciting when you’re helping to design something that could potentially be used by thousands of plugins and themes across millions of websites.

The frontend and editor challenge

This API had two distinct problems to solve at its heart: how to insert blocks on the server-side rendered frontend versus the client-side React editor. We decided to focus on templates and template parts as the initial use case, since WooCommerce headers are typically built as template parts. Diving into the core WordPress template system, we found two key functions: _build_block_template_result_from_file and _build_block_template_result_from_post in the block template utilities. These functions were perfect because they’re used to build template markup for both the editor REST API responses and the frontend rendering, giving us the unified approach needed to solve both challenges with a single implementation.

The algorithm at the heart of it all

At these template function entry points, we ran an algorithm through a function called apply_block_hooks_to_content, which you can find in WordPress core. This function does the heavy lifting:

  1. It iterates through the block structure of the template content
  2. Parses the block markup into a workable format
  3. Identifies where hooked blocks should be inserted based on their anchor blocks
  4. Inserts the new blocks if they’re eligible
  5. Re-serializes everything back into the block markup format

The elegance of hooking into these template functions was that they served both our frontend and editor needs with a single implementation.

This is an oversimplification, but it’s the core concept that made the whole system work. For other use cases like specific blocks (navigation blocks, post content blocks), block patterns, and more, we approached them using similar patterns.

One area identified for future improvement is that this approach loads the entire template markup into memory for parsing and re-serialization. For very large templates, this could become a memory constraint. A streaming approach that processes the markup incrementally would be more efficient.

Respecting user choice

After inserting a newly hooked block, we added a reference to it against its anchor block in the form of ignoredHookedBlocks – essentially a list of hooked blocks associated with an anchor block that the API has explicitly inserted at one time or another. This was crucial because it meant that when users removed a hooked block in the editor, the next time our algorithm ran against that markup, it would see the ignoredHookedBlocks list and respect the user’s decision not to re-insert it. If the hooked block was already there, it wouldn’t be duplicated; if it had been removed, it wouldn’t be re-inserted.
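
In serialized block markup, that bookkeeping lives in the anchor block’s metadata attribute. Roughly like this (a simplified sketch, with a hypothetical hooked block name):

<!-- wp:navigation {"metadata":{"ignoredHookedBlocks":["my-plugin/like-button"]}} /-->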

The navigation block challenge

One particularly tricky case was blocks like the Navigation block, where the inner blocks aren’t stored in the template markup but in the database as their own posts. Since the inner blocks didn’t have their parent block’s data, they couldn’t access the ignoredHookedBlocks data at the time the algorithm ran against the parent block, so we had to find a different storage solution.

We ended up storing this data in the wp_postmeta table against the post that stores the navigation’s inner blocks. It was a bit more complex, but it preserved the user choice principle that was so important to the overall design.

I wrote a more detailed post about implementing Block Hooks for the Navigation block that covers the specific challenges we faced, the entry points we chose, and how we solved the ignoredHookedBlocks tracking problem without access to the anchor block. It was one of the most requested features for the API and made it into WordPress 6.5.

My experience

Collaborating with talented people like Bernie was absolutely one of the highlights of this project. I learned an incredible amount working alongside him and the other core contributors. The pace was much more deliberate and considered – and for good reason.

We needed to ensure everything was right the first time. So we approached problems with multiple potential solutions, prototyped different approaches, and carefully evaluated the trade-offs before committing to an implementation. It was slower, but the thoroughness was necessary.

Testing was naturally an incredibly important aspect of this project. Every behaviour of the API was covered with comprehensive unit and integration tests. When you’re building something that will be used across the entire WordPress ecosystem, you can’t afford to have edge cases slip through.

The ultimate validation came when WooCommerce started using this API for some of its out-of-the-box behaviors which was one of the original goals of the project. Seeing it work in production across thousands of WooCommerce stores was satisfying.

Looking back

Being part of building a brand new API for WordPress was both challenging and rewarding. It’s one thing to work on a typical project, but it’s entirely different to design something that needs to work for the entire ecosystem while maintaining WordPress’s commitment to backwards compatibility and user control.

The Block Hooks API shipped in WordPress 6.4 and has continued to evolve ever since. It’s amazing to see something you helped build continuing to grow and serve the community.

The hidden cost of AI reliance https://codebytom.blog/2025/07/the-hidden-cost-of-ai-reliance/ Wed, 09 Jul 2025 15:53:14 +0000 https://codebytom.blog/?p=1866 I want to be clear: I’m a software engineer who uses LLMs ‘heavily’ in my daily work. They have undeniably been a good productivity tool, helping me solve problems and tackle projects faster. This post isn’t a call to reject LLMs and progress; rather, it’s my reflection on what we might be losing in our haste to embrace them.


The rise of AI coding assistants has brought in what many call a new age of productivity. LLMs excel at several key areas that genuinely improve developer workflows: writing isolated functions, scaffolding boilerplate code like test cases and configuration files, explaining unfamiliar code or complex algorithms, generating documentation and comments, and helping with syntax in unfamiliar languages or frameworks. These capabilities allow us to work ‘faster’.

But beneath this image of enhanced efficiency, I find myself wondering if there’s a more troubling effect: Are we trading our hard-earned intelligence for short-term convenience?

What the studies show

Research consistently points to concerning trends in how AI usage affects our cognitive abilities. Studies using brain imaging technology found that ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels” [Study]. A comprehensive survey of 319 knowledge workers revealed that higher confidence in AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking [Study]. Meanwhile, research involving 666 participants found a significant negative correlation between AI tool usage and critical thinking scores [Study], with cognitive offloading identified as a primary driver of this decline. These studies collectively suggest that while AI can boost immediate productivity, frequent use may reduce our inclination to engage in deep, reflective thinking, particularly among users who show higher dependence on AI tools and lower critical thinking scores.

What does this look like in practice for developers? It’s detailed in Addy Osmani’s ‘Avoiding Skill Atrophy in the Age of AI‘. One engineer with 12 years of experience confessed that AI’s instant help made him “worse at his own craft.” First, he stopped reading documentation. Why bother when an LLM can explain it instantly? Then debugging skills waned; stack traces and error messages felt daunting, so he just copy-pasted them into AI for a fix. “I’ve become a human clipboard”, blindly shuttling errors to the AI and solutions back to code.

The hallucination problem

Beyond skill atrophy lies another critical issue: AI reliability. LLMs hallucinate frequently, producing confident-sounding but incorrect information. They generate plausible-looking code that contains subtle bugs, suggest outdated practices, or make security-compromising recommendations.

When we blindly trust AI output without verification, we’re not just risking immediate bugs; we’re systematically degrading our ability to catch these errors. The very skills we need to validate AI-generated code are the ones that atrophy from disuse.

The shifting expectation landscape

Expectations are evolving rapidly under AI’s influence. Smaller teams are now responsible for broader scopes of work, with the implicit assumption that AI will handle much of the heavy lifting. This could create a dangerous feedback loop: as teams become more dependent on AI to meet these expanded expectations, they have even less time to develop and maintain core skills.

The pressure to ship faster with AI assistance can lead to a culture where understanding code becomes secondary to producing it. Developers now find themselves in environments where asking AI is not just acceptable but expected, potentially stunting their growth trajectory.

Shortcuts not breakthroughs

When we consistently choose the path of least resistance offered by AI, we miss opportunities to discover novel approaches or develop the kind of deep expertise that leads to breakthrough innovations.

We’re not becoming 10× developers with AI, we’re becoming 10× dependent on AI. Every time we let AI solve a problem we could’ve solved ourselves, we’re trading long term understanding for short term productivity.

Critical questions

This brings me to the fundamental questions I have found myself asking:

Are we using AI to our own detriment? When our intelligence regresses as we become more dependent on these tools, are we ultimately making ourselves less valuable and less capable?

Is the emphasis on AI over-reliance failing to invest in team longevity? Engineers thrive in environments that promote learning and growth. If we’re optimizing for short term productivity gains while systematically undermining the conditions that create truly skilled developers, what does our industry look like in 5-10 years?

I currently feel like the choices we make on AI reliance, individually or collectively, will determine whether AI becomes a tool that elevates our profession, individual purpose, and growth, or a crutch that ultimately diminishes them.
