About Things | A Hans Scharler Blog (https://nothans.com)

Welcome to the Agentic Web.
https://nothans.com/welcome-to-the-agentic-web
Mon, 16 Mar 2026

I checked my server logs last Tuesday. Traffic was up. Way up. But engagement was flat. Same number of humans reading posts. The extra visitors weren’t reading anything at all.

They weren’t visitors. They were agents.

You Are Now the Minority

In 2024, automated traffic surpassed human traffic on the internet for the first time in a decade. Bots now account for 51% of all web traffic. Cloudflare processes 50 billion AI crawler requests per day. GPTBot traffic alone grew 305% in one year.

The web you built for humans? Humans aren’t the primary audience anymore.

Retail sites see 59% bot traffic. Travel sites: 48%. These aren’t all scrapers or spam bots. Increasingly, they’re shopping agents, research agents, booking agents. Doing things humans used to do, on websites humans used to visit.

Cloudflare published a stat that stopped me cold. For every single visitor Anthropic refers back to a website, its crawlers have already visited 38,065 pages. OpenAI’s ratio is 1,091 to 1. Perplexity: 194 to 1. The agents read your site a thousand times for every one human they send your way.

The web hasn’t died. But it’s molting.

The Protocol War

If 2024 was the year we noticed agent traffic, 2025 was the year everyone started building the plumbing.

Anthropic released MCP (Model Context Protocol) in November 2024. People call it “USB-C for AI,” a universal adapter that lets any AI system talk to any tool or service. It now has 97 million monthly SDK downloads and over 10,000 active servers. In December 2025, Anthropic donated it to the Linux Foundation’s new Agentic AI Foundation, co-founded with Block and OpenAI. Platinum members include AWS, Google, Microsoft, Bloomberg, and Cloudflare.

Google launched A2A (Agent-to-Agent Protocol) in April 2025. It lets agents from different vendors discover each other using “Agent Cards,” basically JSON resumes. Over 150 organizations signed on, including Microsoft, Amazon, SAP, Salesforce, and PayPal. Adobe and S&P Global already use it in production.

Then the commerce-specific protocols showed up. Shopify and Google co-developed UCP (Universal Commerce Protocol), endorsed by Etsy, Wayfair, Target, and Walmart. OpenAI and Stripe built ACP (Agentic Commerce Protocol), which powers “Buy it in ChatGPT,” launched February 2026.

There’s more. Jeremy Howard proposed llms.txt, a file that tells LLMs where your best resources are (the inverse of robots.txt, which tells crawlers where NOT to go). Over 600 sites adopted it, including Anthropic, Stripe, and Cloudflare. Vercel went further, proposing embedded LLM instructions directly in HTML: <script type="text/llms.txt">. Their 401 error pages already serve agent-specific instructions.

This is the HTTP moment for agents. The protocols being written right now will shape how the agentic web works for the next decade.

When Your User Has No Eyes

We’ve spent thirty years making websites look good. Careful typography. Hero images. Hover effects. Cookie banners with the “Accept All” button slightly bigger than the “Manage Preferences” button. All designed for humans who see, click, and feel.

Your next billion users won’t see any of it.

An AI shopping agent doesn’t care about your hero image. It doesn’t notice your brand colors. It doesn’t feel the emotional pull of your “Limited Time Only” banner. It parses your structured data, checks your Schema markup, reads your JSON-LD, and makes a decision based on price, specs, availability, and reviews.

CSS is irrelevant when your user has no eyes.

Bain found that 80% of consumers already rely on zero-click results for at least 40% of their searches, reducing organic traffic by 15-25%. Google referrals to news sites dropped 9-15% in 2025. That funnel where you attract visitors with content, dazzle them with design, and convert them with psychology? Agents skip the entire thing. They go straight to the data layer.

HubSpot put it bluntly: “The fastest-growing decision-maker in your funnel cannot see your ad, feel your brand, or be persuaded by your story.”

The advertising model of the internet is about to face its first existential threat since ad blockers. Except ad blockers were opt-in. Agent browsing is the default. When Perplexity’s Comet browser started bypassing Amazon’s advertising, Amazon sued. A federal judge blocked Comet from Amazon on March 10, 2026. Perplexity argued the real motivation was protecting ad revenue, not cybersecurity.

That lawsuit is a preview. The entire attention economy was built on the assumption that humans look at screens. Agents don’t look at anything.

The Money Is Already Moving

This isn’t theoretical. The money has already started flowing through agent channels.

During Cyber Week 2025, one in five orders globally were associated with AI tools or agents. That’s 20% of all orders, roughly $67 billion. On Cyber Monday alone, AI traffic to US retail sites increased 670%. AI-influenced shoppers converted 38% more frequently than traditional visitors.

McKinsey estimates agentic commerce could redirect $3-5 trillion in global retail spend by 2030, with nearly $1 trillion from the US alone. Payment executives told CNBC this could be “more transformative than the rise of e-commerce platforms such as Amazon.”

The platforms are racing to own the checkout. Shopify launched Agentic Storefronts, letting merchants appear on ChatGPT, Perplexity, Microsoft Copilot, and Google AI Mode without needing a traditional website at all. Amazon built “Buy for Me,” an AI agent that purchases from third-party brand sites so customers never leave Amazon. OpenAI launched “Buy it in ChatGPT” in February with Stripe’s Agentic Commerce Protocol behind it.

Visa launched its Trusted Agent Protocol in October 2025, an open framework to distinguish legitimate AI agents from malicious bots. Mastercard is building its own trust framework. Both are running real transactions. Not pilot stage. Deployment.

47% of US shoppers already use AI tools for at least one part of their shopping journey. That number is going one direction.

What to Do About It

The agentic web is coming whether your site is ready or not. The transition will be messy, dual-interface, and gradual. Here’s what the practical path looks like.

Structured data first. Schema markup, JSON-LD, clean OpenGraph tags. This is the content layer agents actually read. If your product pages don’t have machine-readable pricing, availability, and specs, you’re invisible to agent shoppers.
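In practice, that data layer is mostly JSON-LD embedded in the page. A minimal sketch of Schema.org Product markup; the product name, SKU, price, and review numbers here are placeholders, not a complete or canonical schema:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "sku": "WIDGET-001",
  "offers": {
    "@type": "Offer",
    "price": "29.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
</script>
```

An agent can parse this block without rendering a single pixel of the rest of the page.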

Add llms.txt. It takes ten minutes. Create a /llms.txt file that tells LLMs where your most useful resources live. Over 600 sites have done this already. It’s the new robots.txt, but instead of “go away” it says “here’s the good stuff.”
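The proposed llms.txt format is plain markdown: an H1 with the site name, a blockquote summary, then H2 sections listing links with short descriptions. A sketch, with illustrative section names and URLs:

```markdown
# NotHans

> A blog about IoT, AI agents, and engineering practice.

## Start Here

- [Welcome to the Agentic Web](https://nothans.com/welcome-to-the-agentic-web): why agents now outnumber humans
- [Compound Engineering](https://nothans.com/compound-engineering-what-if-every-project-made-the-next-one-easier): making every project easier than the last

## Projects

- [ThingSpeak](https://thingspeak.com): IoT analytics platform
```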

Build an MCP server. If you have an API, wrap it in MCP. Anthropic, OpenAI, Google, and Microsoft clients all support the protocol. This is how agents will interact with your service natively, without scraping your UI.
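In production you would reach for an official MCP SDK, but the wire format underneath is JSON-RPC 2.0: clients discover tools with tools/list and invoke them with tools/call. A stdlib Python sketch of those two methods, with a hypothetical product_lookup tool standing in for your real API:

```python
import json

# Hypothetical catalog backing the tool (stands in for your real API).
CATALOG = {"WIDGET-001": {"price": 29.99, "in_stock": True}}

TOOLS = [{
    "name": "product_lookup",
    "description": "Look up price and availability by SKU.",
    "inputSchema": {
        "type": "object",
        "properties": {"sku": {"type": "string"}},
        "required": ["sku"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request the way an MCP server would."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call" and params.get("name") == "product_lookup":
        item = CATALOG.get(params["arguments"]["sku"])
        result = {"content": [{"type": "text", "text": json.dumps(item)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "product_lookup",
                          "arguments": {"sku": "WIDGET-001"}}})
```

The real SDKs add transports, sessions, and schema validation on top, but this is the conversation an agent has with your service instead of scraping your UI.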

Rethink your metrics. Traffic is no longer a proxy for interest. An agent visiting your site 38,000 times doesn’t mean you have 38,000 interested customers. You need to distinguish agent traffic from human traffic and measure what agents actually do: transactions, API calls, data retrieved.
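A first cut at that separation can be as simple as matching known crawler tokens in the User-Agent header. A Python sketch; the marker list is partial, and each vendor documents its own tokens:

```python
# Substrings that identify well-known AI crawlers and agents in a
# User-Agent header. Partial list; extend it for your own logs.
AI_AGENT_MARKERS = ["GPTBot", "ClaudeBot", "PerplexityBot",
                    "CCBot", "Bytespider", "Amazonbot"]

def is_ai_agent(user_agent: str) -> bool:
    """True if the User-Agent string matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(marker.lower() in ua for marker in AI_AGENT_MARKERS)

def split_traffic(log_user_agents):
    """Partition raw hits into agent vs. (probable) human counts."""
    agents = sum(1 for ua in log_user_agents if is_ai_agent(ua))
    return {"agent": agents, "human": len(log_user_agents) - agents}
```

User-Agent matching is best-effort (agents can lie), which is exactly why Visa- and Cloudflare-style verification frameworks exist, but it is enough to stop counting 38,000 crawls as 38,000 customers.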

Plan for agent authentication. Visa and Mastercard are already building trust frameworks. If your business involves transactions, you’ll need a way to verify that the agent placing an order is authorized to act on behalf of a real customer.

The visual web isn’t going away tomorrow. Humans still browse. But the share of your traffic that sees your CSS is shrinking every quarter, and the share that reads your structured data is growing. Design for both.

Your Homework

Go to your website’s analytics right now. Look at your traffic. Filter for known bot user agents. The number will be higher than you expect.

Then add an llms.txt file to your site root. Ten minutes. Tell the agents where the good stuff is.

The web is being rebuilt. You can watch, or you can leave the light on for your new visitors.

They won’t see it. But they’ll know it’s there.

Twenty Prototypes
https://nothans.com/twenty-prototypes
Thu, 12 Mar 2026

Last week I was about to start a new feature. Muscle memory kicked in. I grabbed a whiteboard marker, uncapped it, and started drawing boxes. Service A talks to Service B. Data flows left. Cache sits here.

Thirty minutes in, I looked at my diagram. Then I looked at my terminal. Then back at the diagram.

I could have built three versions of this in the time I spent drawing one.

The Whiteboard Will Remember This — TERMINAL cartoon by NotHans.com

I put the marker down. Opened three terminal windows. Gave each agent a different approach to the same problem. Fifteen minutes later I had three working implementations. Not drawings. Not plans. Running code.

One of them had a concurrency bug I never would have caught on the whiteboard. One of them was elegant in a way I never would have designed on purpose. I picked the elegant one, trashed the other two, and moved on with my day.

The whiteboard is still there. Half-drawn. I haven’t erased it yet. It feels like a memorial.

When Building Was Expensive

This would have been insane five years ago. Twenty prototypes? For one feature? That’s not engineering. That’s chaos.

But the reason we think before we build is because building used to be expensive. Writing a prototype took days. Writing twenty of them was a luxury reserved for NASA and defense contractors with unlimited budgets.

For the rest of us, thinking was cheap and building was costly. A day of whiteboarding could save a week of wrong implementation. The entire culture of software architecture exists because of this math. Design reviews. Architecture decision records. RFC documents. Those meetings where six people stare at a diagram and argue about whether the arrow should point left or right.

All of it exists because building the wrong thing was so expensive that we needed to be really, really sure before we started.

That math made sense. For decades, it was correct.

The Cost Flip

It’s not correct anymore.

An AI agent can scaffold a working implementation in minutes. Not a sketch. Not pseudocode. Working code. The cost of building just fell through the floor.

When building was expensive, thinking first was smart. When building is nearly free, thinking first is waste. You’re spending your most expensive resource (time making decisions in the abstract) to save your cheapest resource (an agent’s time writing code).

Here’s the part nobody says out loud: decisions are better when you have real options in front of you. You pick a better couch in a furniture store than from a catalog. You write a better API after you’ve seen three different approaches running side by side. Abstract plans are guesses. Running code is data.

“I’ve never made a great architectural decision on a whiteboard. I’ve made some beautiful diagrams, though.” – Hans Scharler

Yes, I’m quoting myself on my own blog. It’s fine. I’m a thought leader now.

Twenty Prototypes

What does this look like in practice?

You have a decision to make. SQL or NoSQL. Event-driven or request-response. Monolith or microservice. The old way was to debate it. Write a design doc. Schedule a meeting. Get six opinions. Pick one and hope.

The new way: build all of them.

I’m not being metaphorical. You describe the problem to three different agents with three different constraints. “Build this with Postgres and a REST API.” “Build this with DynamoDB and event streams.” “Build this with SQLite and keep it dead simple.” Twenty minutes later you have three working prototypes. Real enough to test. Real enough to break. Real enough to learn from.

You throw edge cases at them. You read the code. You find the one where the tradeoffs actually work for your situation. Not the theoretical situation you imagined on a whiteboard. Your actual situation.

The losing prototypes cost you twenty minutes. The winning one has a head start on production.

Steve Yegge calls this “slot machine programming.” Pull the handle, get an implementation. Pull again, get a different one. I think that undersells it. Slot machines are random. This is deliberate. You’re choosing what to build and comparing the results with engineering judgment. It’s more like a taste test than a gamble.

What Design Becomes

I want to be clear. This is not the death of software architecture.

Design didn’t die. It moved.

You’re still the one making architectural decisions. You’re still evaluating tradeoffs, thinking about maintainability, asking what happens at scale. The difference is you’re doing it with twenty working examples in front of you instead of a whiteboard drawing.

That’s not less rigorous. It’s more rigorous. You’re choosing between things that exist instead of things you hope will work.

Toyota figured this out decades ago in manufacturing. They called it “set-based concurrent engineering.” Build multiple alternatives in parallel. Converge late, after you have real data. The car industry learned this lesson. Software is catching up now because we finally have tools cheap enough to make it practical.

The whiteboard isn’t gone. But it’s a sketchpad now. You doodle on it while your agents build the real thing.

Your Homework

Next time you sit down to start a new feature, or solve a hard problem, or make an architectural decision: skip the design doc.

Build two versions instead. Three if you’re feeling wild.

Different approaches. Different structures. Different tradeoffs. Run them. Compare them. Pick the one that actually works best, not the one that sounded best in a meeting.

Claude Code and Agent Skills for Electron App Development: Your Desktop App Just Got a Cheat Code
https://nothans.com/claude-code-and-agent-skills-for-electron-app-development-your-desktop-app-just-got-a-cheat-code
Mon, 02 Mar 2026

I’ve been thinking about Compound Engineering a lot lately. This is the idea that every project should make the next one easier. And right now, there’s no better example of that than what’s happening with Claude Code, Agent Skills, and Electron app development.

Here’s the irony that got me started down this rabbit hole. Anthropic’s own Claude desktop app? It’s an Electron app. Boris Cherny from the Claude Code team confirmed it on Hacker News. The framework that everyone loves to hate is still the pragmatic choice. That tension tells you something important about where we actually are with AI-assisted development.

The Groundhog Day Problem (Electron Edition)

Every Electron project starts the same way. You configure BrowserWindow with contextIsolation: true and nodeIntegration: false. You write a preload script with contextBridge.exposeInMainWorld. You set up IPC channels. You configure Content Security Policy headers. You wrestle with electron-builder.yml. You set up code signing. You do this from memory, or you copy-paste from your last project, or you spend an hour on Stack Overflow re-finding the patterns you already know.

I called this the Groundhog Day Problem in my Compound Engineering post. Sixty to eighty percent of what you do on a new project, you’ve already done before. And yet, every time, you start from scratch.

Agent Skills fix this. Not like templates — templates are dead things. Skills are living context that Claude Code loads on demand when it recognizes you’re doing Electron work.

What Are Agent Skills? (The 60-Second Version)

If you haven’t been following the Agent Skills story, here’s the short version.

A skill is a folder with a SKILL.md file. It contains YAML frontmatter (name, description) and markdown instructions that Claude follows when the skill activates. Anthropic released Agent Skills as an open standard in December 2025, and it’s been adopted by over 26 platforms — not just Claude Code, but also OpenAI Codex, Gemini CLI, GitHub Copilot, Cursor, VS Code, and more.

The key design principle is progressive disclosure. Only the skill’s name and description load at startup — roughly 30 to 50 tokens per skill. The full SKILL.md loads only when triggered. Reference files and scripts load only when needed during execution. This means you can have dozens of skills installed without bloating your context window.

Think of it like an onboarding guide for a new team member — except the new team member is an AI agent that reads and follows instructions instantly.

The Electron Skill Stack

Here’s where it gets practical. There’s already a growing ecosystem of skills and subagents specifically for Electron development. Let’s walk through the ones worth knowing about — and how to install each one.

1. electron-scaffold

What it does: Scaffolds production-ready Electron apps with security hardening baked in from the start. It handles the architecture decisions (Electron Forge vs. Vite vs. electron-builder), sets up proper IPC patterns with contextBridge, configures CSP headers, enables context isolation, sets up auto-updates, integrates native menus, and generates the full project structure with TypeScript support.

Why it matters: This is the security-first scaffolding that most tutorials skip. It encodes the difference between a toy Electron app and one that’s ready for distribution.

How to install:

Using the Vercel skills CLI (works across Claude Code, Codex, Cursor, and others):

npx skills add chrisvoncsefalvay/claude-skills --skill electron-scaffold

Or manually: download from the claude-plugins.dev listing, extract the ZIP, and drop the folder into ~/.claude/skills/.

For Claude.ai users, go to claude.ai/settings/capabilities, find the Skills section, and upload the downloaded ZIP.

2. electron-pro (Subagent)

What it does: This isn’t a skill — it’s a full subagent. Think of it as a senior Electron developer persona with deep expertise in Electron 27+ and native OS integrations. It follows a phased approach: understanding your requirements, designing secure architecture, implementing with a full security checklist (context isolation, CSP, IPC validation, code signing), and packaging for multi-platform distribution.

Why it matters: It’s the difference between asking Claude to “make an Electron app” and having a dedicated Electron specialist with a checklist that covers everything from memory budgets to auto-update rollback strategies.

How to install:

Download the subagent file directly from VoltAgent’s repository and save it to your agents directory:

mkdir -p ~/.claude/agents
curl -o ~/.claude/agents/electron-pro.md \
  https://raw.githubusercontent.com/VoltAgent/awesome-claude-code-subagents/main/categories/01-core-development/electron-pro.md

Or use the built-in agent installer in Claude Code by typing /agents and creating a new agent from the file.

3. Full-Stack Electron Skill (partme-ai)

What it does: A comprehensive Electron reference skill organized to mirror the official Electron documentation structure. It covers everything: main process, renderer process, IPC communication, BrowserWindow management, menus, tray icons, native integrations, packaging with ASAR, electron-builder configuration, code signing, auto-updates, debugging, memory profiling, crash reporting, and security best practices including sandboxing and CSP.

Why it matters: This is the one that turns Claude Code into something like having the entire Electron docs loaded as contextual intelligence. Instead of searching docs, Claude just knows the right patterns.

How to install:

Via the Vercel skills CLI:

npx skills add partme-ai/full-stack-skills --skill electron

Via LobeHub:

mkdir -p ~/.claude/skills/partme-ai-full-stack-skills-electron && \
curl -fsSL "https://market.lobehub.com/api/v1/skills/partme-ai-full-stack-skills-electron/download" \
  -o /tmp/electron-skill.zip && \
unzip -o /tmp/electron-skill.zip \
  -d ~/.claude/skills/partme-ai-full-stack-skills-electron

4. Electron’s Own CLAUDE.md

What it does: The Electron framework itself ships a CLAUDE.md in its repository. This teaches Claude Code the Electron project’s structure — where the C++ shell code lives, how TypeScript implementations map to API modules, how to work with the 159+ Chromium patches and 48+ Node.js patches, and the build workflow using @electron/build-tools. It even includes a dedicated “Electron Chromium Upgrade” skill for Chromium version bumps.

Why it matters: This is a real-world example of a major open source project using CLAUDE.md to encode institutional knowledge. If you’re contributing to Electron itself, or if you want inspiration for structuring your own project’s CLAUDE.md, this is the gold standard.

How to access: No installation needed — it’s in the Electron repo. But the pattern is what matters. Your own Electron app should have a CLAUDE.md at the project root that teaches Claude Code about your specific architecture, IPC channel naming conventions, and build setup.
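As a sketch, a project-level CLAUDE.md for an Electron app might look like this; the paths and conventions are illustrative, not a standard:

```markdown
# CLAUDE.md

## Architecture
- `src/main/` — main process: window lifecycle, IPC handlers
- `src/preload/` — contextBridge APIs only; never expose raw `ipcRenderer`
- `src/renderer/` — UI code; no Node.js access

## Conventions
- IPC channels are namespaced: `app:*`, `file:*`, `store:*`
- Every BrowserWindow sets `contextIsolation: true` and `sandbox: true`

## Build
- `npm run make` packages the app; signing config lives in `forge.config.js`
```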

5. Electron FSD + React 19

What it does: A specialized skill for building Electron apps using Feature-Sliced Design architecture combined with React 19 patterns. It enforces a clean separation of concerns across the three-process model (Main, Preload, Renderer) while implementing strict FSD layer responsibilities. Covers modern React patterns like the use() hook and useActionState.

Why it matters: If your Electron app is a React app (and let’s be honest, a lot of them are), this skill bridges the gap between “generic Electron best practices” and “how to actually structure a complex React-based desktop application.”

How to install:

Available on MCPMarket. Download the skill ZIP and extract it:

mkdir -p ~/.claude/skills/electron-fsd-development
# Extract the downloaded ZIP into the directory above

Or upload it directly as a skill in Claude.ai settings.

Building Your Own Electron Skills

The pre-built skills get you started, but the real compounding happens when you build your own. Here’s the thing — you already have the knowledge. It’s just locked in your head.

That IPC channel naming convention you use across every project? That’s a skill. Your electron-builder.yml that took you a weekend to get right? That’s a skill. The way you structure preload scripts for your team? Skill.

Here’s what a simple custom Electron skill looks like:

---
name: my-electron-conventions
description: Project conventions for Electron IPC channels,
  preload patterns, and build configuration. Use when creating
  new IPC handlers, preload scripts, or modifying build config.
---

# Electron Project Conventions

## IPC Channel Naming
- Use colon-separated namespaces: `app:get-version`, `file:open`
- Prefix with `dialog:` for user-facing dialogs
- Prefix with `store:` for persistent data operations

## Preload Script Pattern
- One preload file per window type
- Always use `contextBridge.exposeInMainWorld`
- Never expose raw `ipcRenderer`

## Build Configuration
- Target: DMG for macOS, NSIS for Windows, AppImage for Linux
- Always enable `hardenedRuntime` on macOS
- Auto-updater points to GitHub Releases

Save that to ~/.claude/skills/my-electron-conventions/SKILL.md and it’s active globally across all your projects. Or put it in your project’s .claude/skills/ directory to scope it to one repo.

Since this follows the Agent Skills open standard, it also works in Codex, Cursor, Gemini CLI, and anywhere else that supports the spec.

What Happens When You Actually Use Them

Stephan Miller documented building an Electron writing app from scratch with Claude Code — 16 hours and $80 in API costs. His biggest lesson? Planning saves time. He had to stop and refactor his CLAUDE.md because the project outgrew his initial architecture.

Skills encode that planning. They front-load the decisions so you don’t have to make them again. With the Electron skills loaded, Claude Code doesn’t just generate code — it generates correct code with context isolation enabled, CSP headers configured, proper IPC patterns, and a project structure that scales.

This is the compound engineering flywheel in action. Project 1, you build everything from scratch and learn the hard way. By project 3, your skills are doing the heavy lifting. By project 5, you describe what you want and the system drafts the first 70% with security baked in. You refine, you polish, you add the creative spark.

The Meta Question: Should AI Kill Electron?

Drew Breunig wrote a post asking why Anthropic doesn’t use Claude to build a native desktop app instead. If coding agents are so good, why not generate native apps for each platform from a spec and test suite?

The answer is pragmatic. Agents excel at the first 90% of development, but that last 10% — edge cases, real-world testing, ongoing support — is still hard. And with three different native apps, your bug surface area triples. Electron still makes sense for most teams.

But here’s what skills change about the equation: they make Electron better. The security hardening that would normally be forgotten? A skill remembers it. The IPC patterns that would normally be sloppy? A skill enforces them. The packaging configuration that would normally be a weekend of trial and error? A skill has it pre-encoded.

Agent Skills don’t make Electron obsolete. They make Electron apps that feel like they were built by a team that actually cares about security and native integration.

Start the Flywheel

Here’s your homework. This week, install one of the Electron skills I listed above. Or better yet, write one. Take that electron-builder.yml you’ve tweaked fifty times. That preload script pattern you copy from project to project. That IPC naming convention that lives in your team’s heads.

Codify it. Make it a SKILL.md. Drop it in ~/.claude/skills/. Watch what happens on the next project.

If you want to get started quickly, here are all the install commands in one place:

# electron-scaffold (security-first scaffolding)
npx skills add chrisvoncsefalvay/claude-skills --skill electron-scaffold

# Full-Stack Electron reference (partme-ai)
npx skills add partme-ai/full-stack-skills --skill electron

# electron-pro subagent
mkdir -p ~/.claude/agents && curl -o ~/.claude/agents/electron-pro.md \
  https://raw.githubusercontent.com/VoltAgent/awesome-claude-code-subagents/main/categories/01-core-development/electron-pro.md

# Your own custom skill
mkdir -p ~/.claude/skills/my-electron-conventions
# Then create SKILL.md with your conventions

Compound Engineering: What If Every Project Made the Next One Easier?
https://nothans.com/compound-engineering-what-if-every-project-made-the-next-one-easier
Sat, 28 Feb 2026

I’ve been thinking a lot about compounding lately. Not the finance kind, though you should do that too, but the kind where your work gets easier over time instead of harder. I’m calling it Compound Engineering, and I think it might be the most important shift in how we work.

Compound Engineering

Here’s the thing that’s been bugging me. I’ve been building stuff for a long time. Software, hardware, IoT platforms, weird pinball mods — you name it. And every single time I start a new project, there’s this moment where I think, “Didn’t I already do this part?” The setup. The boilerplate. The config files. The architecture decisions I’ve already made a dozen times before.

I call it the Groundhog Day Problem.

Your tools don’t remember you. You close the tab, and it’s like you never existed.

“Sixty to eighty percent of what you do on a new project, you’ve already done before.”

Hans Scharler

And yet, every time, you start from scratch. That’s not a feature. That’s a bug.

TL;DR

Compound Engineering by Hans Scharler

The Work Surface That Learns

Compound Engineering is the idea that your work surface — the environment where you actually do the work — should learn, adapt, and accumulate knowledge over time. Not like templates. Templates are dead things. I’m talking about living intelligence that evolves with you.

Think of it like compound interest, but for productivity. Every workflow you capture, every pattern you codify, every piece of knowledge you extract — it doesn’t just help you today. It helps you tomorrow, next month, and next year. It accrues.

I’ve been experiencing this firsthand. When I wrote about The Engineering Super Stack, I was already circling this idea — stacking the right tools so they yield something greater than the parts. But Compound Engineering goes further. It’s not just about picking good tools. It’s about tools that get better because you used them.

Five Layers That Stack

When I break it down, there are five layers to this compounding:

Workflows are the foundation. You do something once, capture the sequence, and now you can replay it, remix it, evolve it. That deployment script you write from memory every time? Capture it. Done.

Skills take it further — encoding your domain expertise into reusable, shareable modules. The stuff that lives in your head? Make it executable.

Commands are where you start to feel the leverage. Those ten steps you do every Monday morning? Collapse them into one. One click. Gone.

Agents are where it gets fun. Autonomous workers that carry your intent forward while you’re doing something else — or sleeping, which I hear some people do.

Knowledge is the substrate beneath everything. Context that doesn’t just persist — it deepens and connects across projects, across teams, across your career.

Each layer feeds the next. That’s the compounding.

Project 1 vs. Project 10

Here’s how it plays out in practice:

Project one, you build everything from scratch. You’re exploring, making mistakes, learning. It’s slow, and that’s fine.

By project three, your workflows are captured. Setup takes half the time. You’re not reinventing the wheel anymore.

By project five, agents handle the boring parts. Boilerplate? Done. Config? Done. You’re spending your time on the interesting problems — the ones that actually need your brain.

By project ten, you describe what you want, and the system drafts the first 70%. You refine, you polish, you add the creative spark. But the heavy lifting? Already handled.

Project ten shouldn’t feel like project one. And now it doesn’t have to.

I’ve talked before about how empathic AI prompting changed the way I work — treating your AI like a collaborator instead of a vending machine. Compound Engineering is the next step. It’s not just about how you talk to your tools. It’s about your tools remembering every conversation you’ve ever had.

What Actually Changes

This isn’t incremental. This rewrites the economics of work.

Onboarding gets transformed. New team members don’t get a wiki link and a “good luck.” They inherit the team’s compound knowledge from day one — the workflows, the skills, the patterns.

Expertise becomes portable. When your best engineer moves on, their expertise stays. Codified, not tribal.

The gap between “senior” and “junior” shrinks. Not because junior developers suddenly gain ten years of experience, but because the tools carry the seniority. The tools know the patterns. The tools remember the pitfalls.

Solo operators gain the leverage of teams. Small teams gain the leverage of enterprises. That’s not a tagline. That’s just what happens when you make expertise executable.

The Risk of Not Doing This

I’ll be blunt. If you’re not compounding, you’re falling behind.

Linear workers — folks doing great work but starting from zero every time — hit a ceiling. There’s only so fast you can move when you’re rebuilding the foundation each time. Compound workers hit escape velocity. Same talent, same hours in the day, dramatically different output over time.

Organizations feel this even harder. Institutional knowledge that isn’t captured gets lost to attrition, to time, to entropy. Your best person leaves, and a decade of expertise walks out the door with them.

The future belongs to whoever builds the flywheel first.

Where This Is Going

I see three things coming.

  • Connected work surfaces… where your tools talk to your teammates’ tools. Work surfaces that negotiate and share context without a meeting.
  • Skills marketplaces… codified expertise becoming a tradeable asset. A senior DevOps engineer publishes their deployment workflow. A startup buys it and deploys like a Fortune 500 company on day one.
  • Career-long AI… a personal AI that doesn’t reset when you change jobs. It compounds across your entire career. Every problem you’ve solved, every domain you’ve mastered, every lesson you’ve learned.

Start the Flywheel

Here’s your homework. Codify one workflow this week. Just one. That deployment script you always write from memory. The project setup you’ve done forty times. The onboarding checklist that lives in your head.

Write it down. Automate it. Make it reusable. Watch what happens.
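To make that concrete, here is a minimal sketch of what codifying a setup workflow can look like. Everything in it is hypothetical: the folder names, the git step, and the `setup_project` function are stand-ins for whatever ten steps you actually repeat.

```python
"""A hypothetical sketch of codifying a project-setup workflow as one
command. The folder names and git step are illustrative assumptions;
swap in the steps you actually repeat."""
import subprocess
import sys
from pathlib import Path

SCAFFOLD = ["src", "tests", "docs"]  # folders this sketch assumes you always create


def setup_project(name: str) -> Path:
    """Run the whole setup checklist as a single repeatable command."""
    root = Path(name)
    root.mkdir(exist_ok=True)
    for folder in SCAFFOLD:
        (root / folder).mkdir(exist_ok=True)
    (root / "README.md").write_text(f"# {name}\n")
    try:
        # Initialize a git repo quietly; skip gracefully if git is missing.
        subprocess.run(["git", "init", "-q", str(root)], check=True)
    except FileNotFoundError:
        print("git not found, skipping repo init", file=sys.stderr)
    return root
```

Call `setup_project("my-app")` from a script or wire it into whatever command runner you use. The point is that ten manual steps become one invocation, and the checklist stops living only in your head.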

The Compound Engineering Flywheel Effect
Happy Birthday, Claude Code. The Agentic Coding Platform Turns One. https://nothans.com/happy-birthday-claude-code-the-agentic-coding-platform-turns-one https://nothans.com/happy-birthday-claude-code-the-agentic-coding-platform-turns-one#respond Wed, 18 Feb 2026 16:26:03 +0000 https://nothans.com/?p=5321 One year ago, we heard rumors about a new language model from Anthropic. It turned out to be Sonnet 3.7. But Anthropic pulled a "wait, there's more" and dropped Claude Code as a Research Preview.

Claude Code Research Preview: February 24, 2025

Claude Code might be the most significant event of 2025 in the field of AI. For me, it changed the course of the year and led me to unthinkable places. I followed every twist and turn and stayed on top of the wave. What an amazing tool that delivers on its promise. It amplifies and scales your ability, keeps you moving forward, and leads to some serious compounding.

We are here.

An overlooked impact of AI is its compounding nature. It starts with the first prompt, where everything is wrong. But, slowly, you learn; you figure out how to harness it. At some point, you forget about the first hallucination, and you are in a new spot. When I talk to others, I can immediately tell where they are on the curve by how they talk about these tools. The scary thing, though, is how far behind you can get, and that you might not be able to catch up to those who have already started. If you do anything this month, go down the bumpy road and get to the other side.

It’s the least I could do… I made you a card.

Build Confidence in Yourself By Learning to Surf https://nothans.com/build-confidence-in-yourself-by-learning-to-surf https://nothans.com/build-confidence-in-yourself-by-learning-to-surf#comments Sat, 31 Jan 2026 13:51:51 +0000 https://nothans.com/?p=5303 So, there's a lot going on in the world. None of it is within your control. Lately, I have felt overwhelmed trying to "figure it all out" for everyone else. What I lost track of is the innate confidence in myself. It comes with the territory: the more you know, the more you know that you don't know. I let that attack my confidence.

My approach to rebuilding my confidence is remembering that I control how I feel and act. My joy comes from surfing the endless waves of technology breakthroughs and figuring them out. I forgot that this is my superpower and might be the critical skill in this ocean of chaos.

Hans Scharler learning to surf

Well, I am on top of the wave. It is a small wave now, but that’s how it works. One at a time.

My advice to you and me is to learn to surf metaphorically. What’s going on? Dig in. Talk with your friends. Reconnect. Explore. Network. Nobody knows where it is all going, but it is going.

3D Printed Godzilla Pinball Mod https://nothans.com/3d-printed-godzilla-pinball-mod https://nothans.com/3d-printed-godzilla-pinball-mod#respond Tue, 30 Dec 2025 18:15:03 +0000 https://nothans.com/?p=5283 George and I got a shared Christmas gift this year. It is a 3D printer from Bambu Lab. I have been in the 3D printer game since the beginning, as it was a big part of makerspaces back in the mid-2000s. The hobby was way more about 3D printer maintenance than it was about successful prints. I decided after a decade to jump back in. My friends John, Pete, and Roy assured me the water is warm, things are different, “You will be saving money with all of the things you are going to make and don’t have to buy.” Enablers.

They were right: the Bambu Lab P1S was plug-and-play. The 3D printer arrived calibrated. It then asked me to try a print with one of the built-in 3D models. George and I decided to go with the model scraper. That's meta: making something with a 3D printer for the 3D printer. Then, we turned to MakerWorld and found a cool F1 fidget toy. It turned out well. My confidence grew. It was time to go big.

I have a Godzilla 70th edition pinball machine by Stern Pinball. It is my favorite theme. The pinball game is spectacular, one of the top-rated pinball machines of all time. The 70th edition has black and white… and red artwork. One thing that I always noticed is how puny Godzilla is in the machine. Godzilla is tucked in the corner. I searched around for mods and found some options. They all have a Pinside waitlist, and I didn't get my name called after six months, so it was time to make my own.

My goal was to have a Godzilla model in the game that was slightly larger than Mechagodzilla. Even Mechagodzilla got more prominence than Godzilla. I found someone on Printables who shared my challenge and goal. This is also a big part of what has changed in the 3D printing hobby. You are standing on the shoulders of giants. There are so many places to find 3D models, inspiration, tutorials, and videos. No excuses at this point.

First Step: Print the Godzilla 3D Model

I downloaded the STL files and imported them into Bambu Studio. It was a straightforward process. I had to add supports; I recommend tree supports. Move from the Prepare tab to the Preview tab to start slicing. I picked Bambu Lab Jade White PLA Basic filament. I checked out the 3D preview for a bit and hit Build. It said it would take six hours, and it did.

Godzilla 3D Print with Supports

Second Step: Prepare the model for painting

I removed all of the support trees and lightly sanded Godzilla with 400-grit sandpaper. After that, I sprayed a black primer on the model. This will help acrylic paint adhere to PLA. This is a fantastic thing. I tried painting the raw PLA, but it didn’t work at all. Let it dry.

Primed Godzilla Model

Third Step: Paint the Godzilla Model

This was a fun part for me. I had a lot of apprehension about painting it. I was sure I was going to ruin it. Then I realized that, at worst, I was six hours away from another model. At best, I'd just prime it again. I chose Gray, Metallic Silver, and Metallic Black paints. The last model I painted was my Yoda model from the 80s, my toy Yoda. Classic joke.

After I finished it, I sprayed a UV-resistant top coat. It dried quickly. I am not sure if this step was needed, but I wanted to make sure everything was protected. I loved how it turned out.

Fourth Step: Install Godzilla…

The tiny Godzilla sits in the back corner, held in place with a couple of screws. The 3D model came with a base plate. I put that in first, checked clearances, and used two-sided Gorilla tape to hold Godzilla down to the baseplate. The Gorilla two-sided tape is also magical. It has held all of my mods in place over the years.

New Godzilla vs. Old Godzilla
Godzilla mod installed in Stern Pinball Machine

A spotlight shines on Godzilla, illuminating him during specific game modes. I adjusted it since this model is so huge. Looks awesome.

Fifth Step: PROFIT!

This 3D printer is just printing money. Okay, okay. I am getting ahead of myself. This was my third print. I went big, and I felt confident that this would work. It was fun to go the whole way. I overcame some fears of painting a model. I did it with the help of literally millions of people sharing on forums, YouTube, and 3D modeling sites.

What an amazing community. Hobby.

The Merry Manhattan Redux: A Smoked Cherry and Rosemary Cocktail for Christmas and New Year’s Day https://nothans.com/the-merry-manhattan-redux https://nothans.com/the-merry-manhattan-redux#respond Mon, 29 Dec 2025 15:54:47 +0000 https://nothans.com/?p=5275 I was looking over my Google Analytics for my blog posts. A couple of years ago, I wrote a blog post about The Merry Manhattan cocktail creation for a party. Two things were interesting about that blog post. One, it was my own creation. Two, I used DALL-E to create a photo of the drink for my blog post. I didn’t get a picture of the drink, even though I made it 12 times that night. For whatever reason, this blog post was my most popular one for December 2025. I thought I would give it a redux. Image generation has come a long way, so let’s see how things have changed.

As a baseline, here’s the AI-generated image from two years ago, created with DALL-E 3.

The Merry Manhattan (as visualized by DALL-E 3, December 2023)

The original DALL-E 3 photo looks kind of crazy when you look back. I would never garnish a drink with a grapefruit wedge; it would have been a peel.

To remind you of the cocktail recipe for The Merry Manhattan:

The Merry Manhattan

Recipe by Hans Scharler
Prep time: 5 minutes

The Merry Manhattan is a festive twist on the classic Manhattan cocktail, perfect for holiday celebrations. This elegant drink features a rich amber hue, achieved by blending rye whiskey with sweet vermouth. The traditional flavor is enhanced with a unique addition of smoked cherries, adding a subtle, smoky sweetness. A sprig of rosemary infuses the cocktail with a fragrant, herbaceous aroma, invoking the essence of winter. The drink is served in a rocks glass containing a large ice chunk. The finishing touch is a gracefully twisted grapefruit peel, adding a citrusy zing and completing the cocktail’s holiday charm.

Ingredients

  • 1 oz Carpano Antica Formula Vermouth

  • 2 oz Whistle Pig Rye Whiskey

  • 2 dashes Sour Cherry Bitters

  • 2 dashes Peychaud’s Bitters

  • Fresh cherries (for smoking)

  • Fresh rosemary (for smoking)

  • Grapefruit peel (for garnish)

  • Ice

Directions

  • Prepare the Smoke:
    Gather a few fresh cherries and a sprig of rosemary.
    Using a kitchen torch, gently torch the rosemary and cherries until they start to smoke. Be careful not to burn them.
    Immediately cover the smoking rosemary and cherries with a rocks glass to trap the smoke inside. Let it sit for a minute to infuse the glass with the smoky aroma.
  • In a mixing glass, combine 1 oz of Vermouth and 2 oz of Rye Whiskey.
  • Add two dashes each of Sour Cherry Bitters and Peychaud’s Bitters.
  • Fill the mixing glass with ice and stir well to chill and dilute the cocktail.
  • Add an ice chunk to the smoked rocks glass.
  • Strain the stirred cocktail into the smoked rocks glass.
  • Finish and Garnish:
    Take a grapefruit peel and express (squeeze) its oils over the drink.
    Use the grapefruit peel as a garnish.
    Use one of the smoked cherries as a garnish.
    Use the sprig of rosemary as a garnish.

Notes

  • Enjoy the smoked cherries as a treat, or save them for a future cocktail garnish.

Here’s the first try with Nano Banana Pro. I gave it the recipe card along with a prompt to create a realistic photo for a Christmas or New Year’s party setting.

The Merry Manhattan (as visualized by Nano Banana Pro, December 2025)

I am still not in love with the photos. They are better for sure. Let me try a new approach. I am going to just send a link to the blog post and ask for the photos again.

The Merry Manhattan (as visualized by Nano Banana Pro, December 2025, Christmas setting)

This time it was much better. I like that it used a fancy cocktail cherry, like a Luxardo cherry, instead of one with a stem.

I found this an interesting way to visualize how a drink can come together. I like inventing my own cocktails for parties. This gives me a way to experiment with the visual presentation.

Now, Nano Banana Pro has way more capability than DALL-E 3 had, so I can do more things. I can make process diagrams for the recipe card. Let’s try that.

Now, we are talking.

A few takeaways:

  • Models are improving rapidly.
  • You can do something new with Generative AI models.
  • And, the power of AND. I started with a photo, then pivoted to a recipe card diagram. I could keep anding. I could make it a YouTube video script. I could make it a series of cocktails. This is the most critical takeaway for 2026. It is not just about doing one thing more efficiently; it is about doing more things than you could before.

Buckle up. You might need a drink in 2026.

Claude Opus 4.5 vs. Gemini 3 Pro: What a Week https://nothans.com/claude-opus-4-5-vs-gemini-3-pro-what-a-week https://nothans.com/claude-opus-4-5-vs-gemini-3-pro-what-a-week#respond Mon, 24 Nov 2025 23:47:02 +0000 https://nothans.com/?p=5254 This past week was one of those moments where you just lean back and enjoy the ride. Google dropped Gemini 3 Pro. Anthropic dropped Claude Opus 4.5. Both landed within days of each other. If you work in AI, this is the good stuff.

Gemini vs. Claude (as visualized by Nano Banana Pro)

Gemini 3 Pro

Google went a different direction. Gemini 3 Pro is all about reasoning, multimodal inputs, and that million-token context window.

The benchmark numbers are wild. It hit 91.9% on GPQA Diamond. On ARC-AGI-2, the abstract reasoning benchmark, it scored 31.1% (and up to 45% in Deep Think mode). That is a huge leap over previous models. On LMArena it took the top ELO spot.

If your work is heavy on reasoning, vision, video, or you need to throw massive context at a problem, Gemini 3 Pro is built for that.

Claude Opus 4.5

Anthropic announced Opus 4.5 on November 24, 2025. They are calling it the best model in the world for coding, agents, and computer use. Bold claim.

On their internal engineering benchmarks, Opus 4.5 scored higher than any human candidate ever on their take-home exam. It also delivers higher pass rates on tests while using up to 65% fewer tokens than Sonnet 4.5. That efficiency piece matters if you are building agents that run for hours.

The pitch is clear: if you care about code, automation, and not burning through tokens, Opus 4.5 is the one.

How They Compare

Opus 4.5 is aimed at engineers building agents and writing code. It is optimized for efficiency. Gemini 3 Pro is aimed at everything else: reasoning, multimodal, long context, general purpose.

Both are frontier models. Both are available on major clouds and APIs. The honest answer is you might end up using both depending on the task.

The Real Point

The meta-point is not which model wins. The point is that two frontier models landed in the same week, both pushing hard on different axes. Reasoning, coding, agents, vision, efficiency. The pace of improvement is nuts.

If you are building with AI right now, the table is full. Pick your model, match it to your task, and start experimenting. There has never been a better time.

What excites me about Claude Opus 4.5?

While I was writing this post, a friend texted me to ask what excited me about Claude Opus 4.5. I spent a couple of hours with it today, and I would have to say… tool search.

Tool search lets Claude discover and load tools on-demand instead of pre-loading every definition at the start.

Here is how it works. You provide a catalog of tools to the API with names, descriptions, and input schemas. You mark most of them with defer_loading: true, which means they stay out of the model’s context until needed. Then you include a tool-search tool in the list. When Claude needs a new capability, it searches, finds the right tool, and only then does that tool get loaded into context.

There are two pain points this solves… expensive tokens and picking the right tool.

When you load dozens of tools upfront, the definitions alone eat up thousands of tokens. That is space you could be using for reasoning, tool outputs, and user messages. With tool search, you load only the search tool and maybe three to five relevant definitions. The overhead drops significantly.

With large libraries of tools, the model can struggle to pick the right one, especially when names or parameters are similar. The search step narrows the candidates to tools that actually match the task.
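Here is what that catalog might look like in practice, sketched in Python. The tool names and schemas below are hypothetical, and the exact `type` string for the search tool is a placeholder; confirm field names and any required beta flags against Anthropic's current documentation before relying on this shape.

```python
# A sketch of a tool catalog using deferred loading, per the description
# above. Tool names and schemas are hypothetical, and the search tool's
# "type" value is a placeholder to check against Anthropic's docs.

def make_tool(name: str, description: str) -> dict:
    """Build one deferred tool definition: it stays out of the model's
    context until the search tool surfaces it."""
    return {
        "name": name,
        "description": description,
        "input_schema": {"type": "object", "properties": {}},
        "defer_loading": True,  # keep this definition out of context until needed
    }

# Mark most of the catalog as deferred...
catalog = [
    make_tool("get_weather", "Look up the current weather for a city."),
    make_tool("search_tickets", "Search the internal ticket tracker."),
    make_tool("run_sql", "Run a read-only SQL query."),
]

# ...and include a search tool so the model can discover them on demand.
tools = [{"type": "tool_search_tool", "name": "tool_search"}] + catalog

deferred = [t for t in catalog if t.get("defer_loading")]
print(f"{len(deferred)} of {len(catalog)} tools deferred")
```

You would pass `tools` to the Messages API; only the search tool and any already-loaded definitions cost context up front, which is where the token savings come from.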

Google’s New Antigravity Agentic IDE has a Brain (folder) https://nothans.com/google-new-antigravity-has-a-brain https://nothans.com/google-new-antigravity-has-a-brain#respond Thu, 20 Nov 2025 16:54:00 +0000 https://nothans.com/?p=5248 There are many agentic IDE options for software developers. Most of them are forks of Visual Studio Code with AI chat and agents layered on top, built on a set of AI models and harnesses. They also have to manage something called the context window. Most models pay attention to about 200k tokens, and those tokens cost you extra money. But the context window can fill up with random things, out-of-date parts of the conversation, or a bunch of error messages. And when you start a new conversation, the model has to rebuild context to help solve the problem.

The latest agentic IDE is called Antigravity by Google. It is based on their top coding model, Gemini 3, and their own agent framework. It is brand new, and people are hammering it with requests; Google is shipping frequent updates and bug fixes.

Google Antigravity

I noticed that when I started a new chat for my project, Antigravity thought about my request and opened a “brain” file. It looks to be markdown that Antigravity manages as I work on the project. This seems like a smart idea and might be where Google can differentiate itself from the competition.

Antigravity using its brain to help me with agentic coding projects