Eric Mann's Blog – These things matter ...
https://eric.mann.blog

The Upfront Investment That Saves 10,000 Hours
https://eric.mann.blog/the-upfront-investment-that-saves-10000-hours/
Wed, 11 Mar 2026 14:30:00 +0000

I spent years doing WordPress agency work. Good clients, meaningful projects — and the same mind-numbing ritual every single time we landed one.

Eight to ten hours. Billable, technically. But brutal.

Every new project meant manually creating a custom theme from scratch. Wiring in custom post types and query backends, setting up internationalization text domains, orchestrating test harnesses and CI pipelines. The same work, in the same order, again and again. We were billing it to clients who didn’t care about any of it — they just wanted a site that worked.

I got sick of it.

So I took a day off and built an automation framework using Grunt — a popular task runner at the time. One config file. One command. What used to take eight to ten hours of focused effort became a thirty-second automated task. Same output. Same quality. A fraction of the time.

I presented the whole thing at WordCamp San Francisco in 2013.1

One of my colleagues had a good laugh about it. I’d “just cost the company $1,200–$1,500 of billable work per client in perpetuity.” His math wasn’t wrong — we stopped charging for bootstrap time because it no longer took any.2

What he missed was everything that happened next.

Compounded Returns

Our engineers stopped spending days per project on boilerplate. They instead focused on the things clients actually hired us to do. Features. Performance. Strategy. The stuff that moved the needle.

Some of those clients — companies near the top of the Fortune 500 — are still running sites built on that framework today. More than a decade later. I’ve seen evidence of dozens of other organizations picking up the tool after I open-sourced it.

One day of grinding to build something no one would ever see — a quiet, unglamorous automation layer — saved an estimated 10,000+ hours of work across my team and the wider community.

My colleague’s concern about lost billing was real but short-sighted. The return wasn’t just “not negative.” It was enormous.

How Does This Apply in 2026?

I’ve been thinking about this a lot lately. The Hacker News crowd has been picking apart this piece from Claude Code Camp about building AI agents that run autonomously. The critique that stuck with me:

“You’re spending weeks of effort babysitting harnesses and evaluating models while shipping nothing at all.”

I get the frustration. The setup cost of any new automation pattern looks wasteful from the outside. Before the Grunt framework existed, someone watching me build it would have said the same thing: “You’re spending a whole day writing code that doesn’t serve any client directly.”

They’d have been right about the observation. Wrong about the conclusion.

Every automation has a non-zero upfront cost. That’s not a bug — it’s the nature of building something reusable. You’re not just solving the problem in front of you. You’re building the infrastructure to solve it ten, a hundred, a thousand times more. The first few runs feel expensive. Then the curve inverts.
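That inversion is just arithmetic. Here's a back-of-the-envelope sketch using the post's own numbers (an eight-hour build day, an eight-to-ten-hour manual ritual, a thirty-second automated run); the variable and function names are mine:

```python
# Break-even math for the Grunt framework, using the estimates above.
upfront_hours = 8.0            # one day spent building the automation
manual_hours = 9.0             # midpoint of the 8-10 hour manual ritual
automated_hours = 30 / 3600    # the thirty-second automated run

def total_cost(projects: int, per_project: float, setup: float = 0.0) -> float:
    """Cumulative hours spent after bootstrapping `projects` projects."""
    return setup + projects * per_project

# Find the first project count where automation is cheaper overall.
break_even = next(
    n for n in range(1, 1000)
    if total_cost(n, automated_hours, upfront_hours) < total_cost(n, manual_hours)
)
print(break_even)
```

With these numbers the curve inverts on the very first project, and by project one hundred the automation has returned close to nine hundred hours.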

The weeks someone spends building and evaluating AI agent harnesses today aren’t wasted. They’re the 2026 equivalent of me spending a Saturday iterating on a Gruntfile. The people willing to pay that cost now are the ones who, down the road, will ship in thirty seconds rather than weeks.

Will every AI harness investment pay off at the same scale as that WordPress automation? I can’t promise that. Some won’t. That’s true of every infrastructure bet.

From where I sit — having made exactly that kind of investment and watched it compound for over a decade — it’s a good bet. The critics are confusing “I don’t see output yet” with “there’s no value being created.”

Those are very different things.

What foundational work have you done that seemed wasteful at first, and paid off more than you expected?

1    Another colleague and I later collaborated on rebuilding the whole thing atop Yeoman. I’m sure today it would take yet another form.
2    Relatedly, this is why I so frequently bemoan time-and-materials billing. While our improved performance and time to delivery were a huge advantage over other agencies, there was no direct way to convert this investment into return. A fixed setup fee would have fixed that. But under strict “hours-based billing” it really did seem — on paper — as if my improvements had legitimately cost us more than they were worth.
From Defense AI Drift to Policy Enforcement: Why I Built Firebreak
https://eric.mann.blog/from-defense-ai-drift-to-policy-enforcement-why-i-built-firebreak/
Sat, 28 Feb 2026 20:35:00 +0000

I spent time this week writing about the gravitational pull that drags defense AI companies from defensive applications toward offensive ones.

Yesterday, the Secretary of Defense declared Anthropic a “supply chain risk” because they won’t enable mass surveillance or autonomous kill chains.

This weekend, I built the engineering solution both sides actually need.


Today’s Portland Claude Code hackathon runs all day. One-day sprint, solo or teams, build anything that meaningfully integrates Claude. I went in solo with a problem I’ve been thinking about since I left defense AI work.

The problem: how do you let the Pentagon use AI for missile defense at machine speed — without requiring a phone call to a CEO — while simultaneously blocking mass domestic surveillance in a way nobody can override?

The answer is the same pattern already proven at scale in production infrastructure.

Policy-as-code enforcement.


The Pattern Already Exists

Kubernetes admission controllers evaluate thousands of API requests per second against pre-negotiated policies. No human intervention. No phone calls. No exceptions for urgent deployments.

The container either passes the security policy or it doesn’t deploy. Period.

OPA (Open Policy Agent) does the same thing for authorization decisions across distributed systems. Rules are code. Evaluation is automatic. Audit trails are complete.

This isn’t theoretical. This is running in production at scale, right now, protecting infrastructure that handles hundreds of millions of requests.

The same pattern works for LLM deployments.


What I Built

Firebreak is a policy enforcement proxy that sits between an LLM consumer and the API endpoint.

Every request gets intercepted. Intent gets classified. Policy gets evaluated. The decision executes automatically.

  • ALLOW — the prompt passes through. Standard audit logging.
  • ALLOW_CONSTRAINED — the prompt passes through with enhanced logging and notifications to legal counsel.
  • BLOCK — the prompt is rejected. The LLM never sees it. Critical alerts fire to trust & safety, inspector general, whoever was specified in the policy.

Everything is logged to an immutable audit trail.

The policies themselves are YAML files. Version-controlled, testable, deployable. Both sides negotiate them once. Neither side can unilaterally change them.
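As a sketch of that decision flow (the intent labels, policy entries, and function names below are hypothetical, not Firebreak's actual schema):

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    ALLOW_CONSTRAINED = "allow_constrained"
    BLOCK = "block"

# A pre-negotiated policy as it might look once loaded from a YAML file.
# Neither side can change it unilaterally; edits go through version control.
POLICY = {
    "missile_defense": Decision.ALLOW,
    "court_authorized_surveillance": Decision.ALLOW_CONSTRAINED,
    "mass_domestic_surveillance": Decision.BLOCK,
    "autonomous_targeting": Decision.BLOCK,
}

audit_log = []  # stands in for the immutable audit trail

def enforce(intent: str) -> Decision:
    """Evaluate a classified intent against the policy; unknown intents fail closed."""
    decision = POLICY.get(intent, Decision.BLOCK)
    audit_log.append((intent, decision.value))  # every request is logged
    return decision
```

The important design choice is the default: if the classifier produces an intent the policy doesn't name, the request is blocked rather than allowed.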

Missile defense? Pre-authorized. Passes through at machine speed. No phone call required.

Mass surveillance? Hard blocked. Doesn’t matter who’s asking or how urgent the situation is. The infrastructure says no.


Why This Matters

I wrote about this pattern in my piece on defense AI drift. The technical infrastructure for monitoring and the infrastructure for targeting are often the same. The sensor fusion architecture that detects threats can identify targets. The data analysis platform that summarizes intelligence can build pattern-of-life profiles.

What changes is the intent — and without infrastructure-level constraints, that intent drifts.

Policy documents don’t hold. I’ve seen it happen. The documents exist, everyone agrees to them, then operational urgency overrides them. The drift happens through a series of scope expansions that each seem reasonable in isolation.

Infrastructure constraints create friction. Friction creates accountability.

The Pentagon’s complaint that they can’t be “beholden to a private company” during a crisis is legitimate. You can’t have national security dependent on whether a CEO answers their phone.

Anthropic’s position that they can’t allow unrestricted use for surveillance and autonomous weapons is also legitimate. Those are real red lines that exist for good reasons.

These positions aren’t incompatible. They just need the right infrastructure layer between them.


The Demo

Seven scenarios run through the system in the demo. Intelligence summarization — allowed, standard audit. Farsi translation — allowed, standard audit. Court-authorized surveillance with a valid warrant — allowed with constraints, legal counsel notified.

Mass domestic surveillance? Blocked. Trust & safety and inspector general alerted.

Autonomous targeting recommendations? Blocked. Three alert targets notified immediately.

The whole thing runs as either an interactive TUI demo or as an OpenAI-compatible proxy server. Point any compatible client at it, and the policy enforcement happens transparently.

I’ve got it working with Cursor IDE through ngrok. Same enforcement layer, same audit trail, zero changes to the developer experience.


Whether this makes it to the final ten at the hackathon is anyone’s guess.1 There are maybe 150 participants, so it’s a long shot.

But the point isn’t winning a hackathon. The point is proving the pattern works.

The technology itself isn’t hard. The hard part is getting both sides to agree on the policy. Once they do, Firebreak2 makes sure the agreement holds.

I’ve seen what happens when the line doesn’t hold. I’ve participated in the drift. I’ve made the decisions that seemed reasonable at the time and looked different six months later.

This time, I built the infrastructure I wish had existed then.

The code’s on GitHub. The demo video’s on YouTube. Anyone who’s serious about solving this problem now has a reference implementation to start from.

That’s more than worth it.

1    Pencils down is in a half hour. My fingers are crossed!
2    … or a solution like it.
AI Slop and the em-dash
https://eric.mann.blog/ai-slop-and-the-em-dash/
Fri, 27 Feb 2026 21:37:59 +0000

I saw a post on X/Twitter today citing the use of the em-dash as a clear giveaway of AI-powered writing.

Sadly, it’s not quite this easy …

I use WordPress for my site. I’ve used WordPress consistently since I started publishing in public back in 2007. It’s evolved quite a bit since those early days, but the core promise is the same: a clean, clear interface for writing that democratizes publishing for all.

In fact, WordPress itself was a fork of an earlier project: b2/cafelog. The original project had stopped development, and Matt Mullenweg wanted to add specific typography improvements. I remember a talk he gave about wanting curly quotes on his site; that desire partly led to the fork.

The beauty of tools like WordPress lies in focus. I don’t need to know when to use a quote mark that curls one way or the other. I just write, stay focused on content, and the publishing engine does the work for me.

The em-dash

I grew up using semicolons and hyphens. Frequently. They were the easiest way to add an interjecting thought to my prose – kind of a casual aside in an otherwise professional piece – without breaking stride.

But when I add a hyphen, I merely type a literal - on my keyboard. This renders as an en-dash when you read my work. But some professional writers, long before generative AI was even a thing, leaned heavily on the longer em-dash in their work.

In fact, that used to be what distinguished casual writing from professional: the pros could afford to take the time (or pay someone) to implement proper typography in their publications.

But tools like WordPress made that accessible to everyone.

The two images below show the impact WordPress has on writing with hyphens. On the left, the raw input in WordPress’ Gutenberg editor; on the right, the rendered output in a preview.

  • The first line gives you an en-dash
  • The second, a double hyphen, renders as an em-dash
  • The third again as an en-dash
  • The fourth, again, as an em-dash
  • Single hyphens within words give you a literal hyphen
  • Double hyphens within words give you an en-dash

Writing that sample, I pressed only one key (repeatedly) on my keyboard, yet rendered three distinct characters depending on context.
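The substitution rules above are simple enough to approximate in a few lines. This is an illustrative sketch of the behavior, not WordPress's actual wptexturize implementation:

```python
import re

def texturize_dashes(text: str) -> str:
    # A double hyphen between spaces becomes an em-dash (—).
    text = re.sub(r"(?<=\s)--(?=\s)", "\u2014", text)
    # A single hyphen between spaces becomes an en-dash (–).
    text = re.sub(r"(?<=\s)-(?=\s)", "\u2013", text)
    # A double hyphen inside a word becomes an en-dash.
    text = re.sub(r"(?<=\w)--(?=\w)", "\u2013", text)
    # A single hyphen inside a word stays a literal hyphen.
    return text

print(texturize_dashes("a - b, a -- b, well-known, 3--5"))
```

The writer only ever touches the hyphen key; the context of the surrounding characters decides which of the three dashes the reader sees.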

I haven’t seen anyone seriously use [an em-dash] over a hyphen

I don’t ever use an em-dash, seriously or otherwise. Nor do I ever seriously use an en-dash. However, you see examples of both in this very article because the tools I use polish my typography for me.

Can the em-dash be an indicator of AI-generated content? Sure. But it’s also an indication that someone is using a serious writing tool. Don’t mistake one for the other.

The Gravity Problem: Why Defense AI Companies Drift Toward Offense
https://eric.mann.blog/the-gravity-problem-why-defense-ai-companies-drift-toward-offense/
Fri, 27 Feb 2026 00:15:00 +0000

This week, the Secretary of Defense gave Anthropic1 an ultimatum: allow the military to use their AI for “all lawful purposes” by Friday, or he’ll invoke the Defense Production Act to force them.

Anthropic’s position is that it won’t allow its systems to be used for two purposes:

  1. mass domestic surveillance
  2. autonomous lethal weapons without human oversight.

The Pentagon’s position is that it can’t be “beholden to a private company” for access to AI capabilities during a crisis. A senior Pentagon official reportedly said they’d make Anthropic “pay a price for forcing our hand.”

I’ve been watching this unfold with a kind of recognition that’s hard to describe. Not because I have any inside knowledge of Anthropic’s situation, but because I’ve been inside a version of this story before.

Twice, I’ve spent time working inside defense – once explicitly on AI for defense. I joined because I genuinely believed in its mission. I still do. I helped build teams and products focused on cybersecurity, big data analysis, and tools designed to protect the US and our allies. I left when the mission I’d signed up for and the mission the organization was actually pursuing drifted apart.

I’m not writing this to name names or settle scores. The people I worked with were talented and sincere, and many of them are still doing work they believe in. I’m writing this because the pattern matters.

It’s the same pattern that’s playing out between Anthropic and the Pentagon right now.


The Mission That Drew Me In

I joined a defense AI startup because the pitch was compelling and, I believe, genuinely meant. Our mission was to help protect US allies. We aimed to strengthen cyber defenses and make sense of the overwhelming volume of data that modern intelligence operations generate.

The team was talented and mission-driven. Leadership truly cared about the work. Our products were defensive in nature. We had a suite of cyber security tools that helped identify and respond to threats. We deployed a data analysis platform that helped allied nations make sense of complex intelligence. I led engineering teams working on these products, and I was proud of what we were building.

Let me be clear, as this matters for the rest of the story: this isn’t a tale about a company that was rotten or corrupt. The mission was real. The people were good. What happened wasn’t about bad intentions. It was about gravity.


The Gravity Problem

Defense AI companies face a structural force that pulls them from defensive applications toward offensive ones. It’s not unique to any single company. It’s an industry-wide dynamic, and understanding it is critical for anyone thinking about the future of AI in national security.

Here’s how the gravity works:

Defensive products (cybersecurity, data analysis, threat monitoring) have diffuse value. They protect broad categories of assets and operations. They’re genuinely important, but it’s hard to attribute direct ROI to them. The customer (usually the DoD or an allied military) values them, but with modest budgets and glacial procurement cycles.

Offensive and targeting applications have concentrated value. There’s one primary customer with enormous budgets, and that customer’s most urgent operational needs tend toward capabilities that are further along the kill chain. Target identification. Sensor fusion. Disposition recommendations. Strike coordination.

The pipeline from “monitoring” to “targeting” is shorter than most people outside the defense world realize. The same sensor fusion architecture that detects threats can identify targets. The same data analysis platform that summarizes intelligence can build pattern-of-life profiles. The same AI that monitors anomalies can recommend actions against them.

Including lethal ones.

The technical infrastructure is dual-use by nature. What changes is the intent. It shifts gradually, through a series of scope expansions that each seem reasonable in isolation.


How the Drift Happens

Nobody walks into a conference room and says “let’s pivot from defense to offense.”

First, the defensive products work well. The cybersecurity platform catches real threats. The data analysis tool surfaces real intelligence. The teams are delivering value.

Then someone asks a reasonable question: “Could we extend this capability to cover a broader set of use cases?” The answer is almost always yes, because the underlying technology is flexible.

The scope expands…

Partnerships form with companies whose core business is further along the offensive spectrum. These partnerships make strategic sense; they open new markets, they provide distribution, they bring credibility with defense procurement offices. But they also shift the center of gravity. The partner’s priorities become your priorities, gradually, through the normal mechanics of business alignment.

Then the white papers get requested. Use cases that weren’t in the original pitch start appearing in planning documents. “Monitoring” becomes “monitoring and interrogation of anomalies.” “Threat assessment” becomes “threat assessment with disposition recommendations.” The language shifts by degrees, each feeling small.

The product lines that were working – the genuinely defensive ones – gradually become deprioritized. Not because they failed, but because they’re not where the money is pointing. Resources are reallocated. Teams reassigned. The people who joined for the defensive mission find themselves working on something they didn’t sign up for.

I watched this happen. I participated in it. I’m not claiming I stood apart from it and saw clearly while everyone else was compromised. I was inside the machine, making decisions, managing trade-offs, trying to steer toward the mission I believed in while the ground continued to shift.


The Hardest Part

At some point, the gap between the original mission and the current trajectory became too wide for me to bridge.

We shut down product lines – including the cybersecurity platform that was my whole reason for joining. The whole division. Good engineers doing good work on a product that mattered. Gone, because the organizational priority had shifted elsewhere.

We’d already done the same thing with our data analysis platform. More good people, more mission-aligned work, more capability that didn’t survive the gravitational pull.

I tried to reorient what was left around a product I still believed in. I spent months building a path forward that I thought could work. I hired and trained my replacement.

Then I left.2


What I’d Do Differently

I’ve thought about this a lot. There are a few things I’d change about how I approached the experience. I don’t want to relitigate old decisions; I think these lessons are relevant for anyone working in AI right now, defense or otherwise.

I’d ask harder questions during the interview process. Not because leadership was being dishonest – I don’t think they were. The gravitational pull I’m describing isn’t always visible at the start. I’d want to understand: what happens when the DoD asks you to extend a defensive product into an offensive use case? What’s the company’s actual decision-making process for that? Who has the authority to say no, and have they ever used it?

I’d build ethical guardrails into the infrastructure, not just the policy documents. This is the big one. We had policies about appropriate use. Everyone in the defense AI space has policies. But policies are documents that live in a knowledge base and are overridden by operational urgency. What I wish we’d had was technical constraints: infrastructure-level enforcement that required deliberate, auditable action to modify. Policy documents drift silently. Infrastructure constraints create friction, and friction creates accountability.

I’d establish clearer tripwires for myself. Pre-decided criteria for when the mission has drifted too far. It’s hard to draw a line from within a situation where every individual step seems like a small concession. The time to draw your line is before you’re standing on it.

I’d name the drift earlier. In the “I notice we’re having conversations this quarter that we weren’t having last quarter” way. Not adversarially. Often the drift happens because no one says it out loud until it’s already happened. By the time someone raises the question, the answer feels predetermined.


The Pattern Playing Out Again

Which brings me back to Anthropic and the Pentagon.

If you’ve read this far, you can probably see why this week’s headlines feel familiar to me. The pattern is playing out in public, in real time, at an unprecedented scale.

An AI company enters the defense space with genuine guardrails and a sincere commitment to responsible use. The initial contracts focus on broadly defensive applications: intelligence analysis, data processing, decision support. The company partners with a defense data intermediary to deploy on classified networks.

Then the pressure mounts. The customer wants broader access. The language shifts from “specific authorized uses” to “all lawful purposes.” The company is asked to trust that the customer will self-regulate. The company’s position – that pre-negotiated technical constraints are more reliable than trust – is framed as obstructionist, or worse, unpatriotic.

I’ve seen how this plays out when the company gives in. The slide is gradual. Each individual concession seems reasonable. The endpoint is somewhere nobody planned to go at the start.

What makes Anthropic’s situation different and genuinely important is that they’re saying no. Publicly. With hundreds of millions of dollars on the line. With the threat of being designated a “supply chain risk,” a label usually reserved for foreign adversaries. With the Defense Production Act being invoked to potentially force compliance.

They’re holding the line.3

The other two items—fully autonomous weapons and AI for strategic decision-making—are harder lines to draw since they have legitimate uses in defending democracy, while also being prone to abuse. Here I think what is warranted is extreme care and scrutiny combined with guardrails to prevent abuses. My main fear is having too small a number of “fingers on the button,” such that one or a handful of people could essentially operate a drone army without needing any other humans to cooperate to carry out their orders. As AI systems get more powerful, we may need to have more direct and immediate oversight mechanisms to ensure they are not misused, perhaps involving branches of government other than the executive. I think we should approach fully autonomous weapons in particular with great caution, and not rush into their use without proper safeguards. – Dario Amodei

I don’t know how this ends for them. I don’t have any inside information. I don’t presume to advise a company with far more resources and context than I have. But I’ve seen what happens when the line isn’t held.

Anthropic’s stance matters more than most people realize. It sets a precedent about whether any AI company can maintain ethical boundaries in the face of government pressure.


An Engineering Problem, Not a Political One

What frustrates me most about the current standoff: it’s being treated as a political argument. It’s actually an engineering problem.

The Pentagon says they can’t call a tech CEO every time they need to use an AI system in a crisis. They’re right. That’s an absurd operational model and no military should be dependent on a phone call to a private company during a missile defense scenario.

Anthropic says they can’t allow unrestricted use of their systems for mass surveillance or autonomous lethal action. They’re also right. Those are legitimate red lines that exist for good reasons.

These positions aren’t actually incompatible. The defense world already solved this kind of problem in other domains. Classified networks have automated access controls that enforce compartmentalization rules without requiring a phone call to the NSA director. Nuclear launch requires dual authorization: pre-negotiated rules, enforced by infrastructure, not by real-time human negotiation. SCIF access is governed by pre-set policies that execute automatically.

The same pattern can apply to AI deployments.

Pre-negotiate the policies. Encode them as infrastructure-level constraints. Deploy them as automated enforcement that neither side can unilaterally modify. Missile defense goes through at machine speed – pre-authorized, no phone calls. Mass domestic surveillance gets blocked automatically – no negotiation, no ambiguity. Both sides get what they need while neither side is “beholden” to the other.

This is a solvable problem.

The conversation hasn’t yet included enough people who understand both the technology and the operational reality. That’s why it’s being fought through ultimatums and threats rather than engineering solutions. Both sides need engineers in the room who’ve been there, done that, and understand policy-as-code patterns: the folks who’ve personally seen what happens when the guardrails exist only on paper.


Who Needs to Be in the Room

Two groups are dominating the conversation: people who build the models and people who deploy weapons systems. Both are necessary. Neither is sufficient alone.

What’s missing is the engineering leadership layer between them.

I want to be clear: I’m not anti-defense. I believe AI should be used for national security. Cyber defense, missile defense, intelligence analysis, threat assessment. These are legitimate and important applications. I’ve built products in this space.

I’d willingly do it again.

I also know, from direct experience, that “defensive” and “offensive” are not stable categories in this industry. They drift. The gravitational pull is real, it’s structural, and it doesn’t require bad intentions to operate. Without deliberate engineering and deliberate leadership to maintain the boundary, things drift in one direction only.

The companies that hold the line need people who’ve seen what happens when the line doesn’t hold. They need engineers who can build the technical guardrails, not just write the policy documents. They need leaders who’ve made tough decisions to combat mission drift. Those who understand viscerally why infrastructure-level constraints matter more than good intentions.

This week, watching Anthropic navigate the exact forces I’ve felt firsthand, is a reminder that the work of building responsible AI isn’t theoretical. It’s happening right now, under real pressure, with real consequences.

AI will be used in defense. There’s no question there. What is undecided is whether the people building and deploying these systems have the tools, the infrastructure, and the organizational courage to ensure ethical boundaries hold.

I know what happens when they don’t.

1    Anthropic is the company behind Claude, one of the most capable AI systems in the world.
2    The decision to stay, though, is also a legitimate choice. People have families, mortgages, teams they care about, and a genuine belief they could make things better from the inside. I respect that choice. Mine was different, but I don’t think it was more virtuous. I’d reached a point where I believed my own leverage to change things had been exhausted.
3    Even as I write this, that line is shifting. Earlier today, Time reported that Anthropic is dropping the central pledge of its Responsible Scaling Policy. This is their 2023 commitment to halt training if safety couldn’t be guaranteed in advance. The gravity I’m describing doesn’t spare anyone. Not even the company most publicly committed to resisting it.
PHP Tek 2026: Kubernetes and Semantic Search for PHP Developers
https://eric.mann.blog/speaking-at-php-tek-2026/
Mon, 26 Jan 2026 16:00:00 +0000

I’ll be presenting two talks at PHP Tek this May. I wanted to share what I’m working on.

Kubernetes for PHP Developers: From Docker Compose to Production

The first talk tackles something I’ve written about frequently: the frustrating gap between local development and production deployment.

Most PHP developers I know have mastered Docker Compose. Redis spins up with one command. PostgreSQL just works. Local development is smooth.

Then they try to deploy to production.

Suddenly there’s a wall of YAML. Deployments, Services, Ingresses, ConfigMaps, Secrets. The vocabulary alone takes a week to absorb. Actually configuring it correctly? That’s where teams burn months of runway.

This talk is the translation guide I wish someone had handed me years ago. Docker Compose concepts mapped directly to Kubernetes primitives — the mental model transfers if someone shows you how.

I’ll walk through real examples, including how I deployed DailyMedToday (a Laravel app with PostgreSQL, Redis, and queue workers) to production Kubernetes in an afternoon using Displace.

Semantic Search and Embeddings in Laravel

The second talk explores something different: how to build search that understands meaning, not just keywords.

Traditional search matches words. You type “peace” and get results containing “peace.” Useful, but limited.

Semantic search matches concepts. You type “feeling overwhelmed by life” and get results about peace, stillness, surrender, and rest — even if those exact words never appear in your query.

I built this into DailyMedToday. The meditation archive uses PostgreSQL’s pgvector extension to store embeddings generated by MongoDB’s Voyage AI. Users can search by feeling, by situation, by question — not just by scripture reference.

The talk covers the architecture: how embeddings work, how to generate them, how to store and query them efficiently in Laravel. It’s practical, code-heavy, and based on production experience.
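The ranking principle can be shown in pure Python with toy vectors. In production the embeddings come from a model and the nearest-neighbor query runs inside pgvector, but the idea is the same (the titles and three-dimensional vectors below are made up for illustration):

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two vectors: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" standing in for real model output.
archive = {
    "a meditation on peace":   [0.9, 0.1, 0.0],
    "surrendering anxiety":    [0.7, 0.3, 0.1],
    "scripture study methods": [0.0, 0.2, 0.9],
}

# Pretend embedding of the query "feeling overwhelmed by life".
query = [0.8, 0.2, 0.0]

# Rank archive entries by how close their meaning sits to the query.
ranked = sorted(archive, key=lambda t: cosine_similarity(archive[t], query),
                reverse=True)
print(ranked[0])
```

In Postgres, pgvector's `<=>` operator performs this same cosine-distance ranking at the database layer, so the sort never has to happen in application code.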

See You in May

PHP Tek runs May 19-21 in Chicago. If you’re attending, I’d love to connect. I’ll be around for the entire event, probably talking too much about infrastructure and AI.

If you can’t make it, I’ll be sharing related content here, on X/Twitter, and through Mastodon over the coming months.

What topics would you want me to cover in more depth before the conference?

The gap between Docker Compose and production Kubernetes
https://eric.mann.blog/the-gap-between-docker-compose-and-production-kubernetes/
Tue, 13 Jan 2026 13:00:00 +0000

My first introduction to Kubernetes was pure pain. Tag-based lookups, opaque command line operations, limited documentation. I couldn’t sort things out and abandoned the effort for other tools.

Over time I forced myself to learn Kubernetes anyway. But deploying a Laravel app was still a special kind of pain.

I’d been using Docker Compose for years. Local development was entirely pain-free. One docker-compose up and everything worked — PostgreSQL, Redis, the works. I felt confident. Kubernetes was just layers of frustrating abstraction atop what I thought was already adequate. But I still needed it for production.

Three weeks later, I was still debugging Ingress configurations.

The False Complexity Problem

Here’s what nobody tells you about Kubernetes: most of the complexity is artificial. Not because the technology is poorly designed — it’s actually elegant once you understand it. The complexity comes from documentation written for platform engineers, not application developers.

When you’re a PHP developer who just wants to ship a Laravel app, you don’t need to understand Custom Resource Definitions. You don’t need to know about Operators or service meshes or GitOps workflows. Not yet, anyway.

You need to know how the concepts you already understand map to Kubernetes primitives.

The Translation Layer

Docker Compose and Kubernetes solve identical problems with different vocabulary. Once you see the translation, the whole system clicks.

Services in Docker Compose → Services in Kubernetes

Both define how containers talk to each other. The names match because the concept is the same.

Volumes in Docker Compose → PersistentVolumeClaims in Kubernetes

Docker Compose mounts local directories.1 Kubernetes requests storage from a provider. Same idea, different mechanism.

docker-compose.yml → Deployment + Service YAML

Your Docker Compose file describes what to run. In Kubernetes, that splits into a Deployment (what container, how many replicas) and a Service (how to reach it).

Environment Variables → ConfigMaps and Secrets

You’re already putting credentials in `.env`. Kubernetes formalizes this into ConfigMaps (non-sensitive) and Secrets (sensitive). The pattern is identical.
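To make the whole translation concrete, here's a minimal side-by-side sketch. The image name, port, and resource names are placeholders, not from a real project:

```yaml
# Docker Compose side: one service definition.
services:
  app:
    image: example/laravel-app:latest   # placeholder image
    ports:
      - "8080:80"
    env_file: .env
---
# Kubernetes side: the same intent, split into a Deployment
# (what container, how many replicas) and a Service (how to reach it).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/laravel-app:latest
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: app-config    # non-sensitive settings from .env
            - secretRef:
                name: app-secrets   # credentials from .env
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 80
```

Notice how the single `.env` file fans out into a ConfigMap and a Secret rather than traveling as one opaque file — that split is the only genuinely new idea here.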

What Actually Changes

The biggest mental shift isn’t technical. It’s accepting that Kubernetes expects your application to be disposable.

Containers crash. Pods get evicted. Nodes fail. Kubernetes handles all of this automatically — but only if your application doesn’t fight it. That means stateless processes. External databases. No local file storage for user uploads.

If you’re already following twelve-factor app principles, you’re 90% there.

The Talk at PHP Tek

In May, I’m presenting “Kubernetes for PHP Developers: From Docker Compose to Production” at PHP Tek. The talk walks through this translation layer step by step, using real Laravel applications as examples.

I’ll show the YAML. I’ll explain the patterns. And I’ll demonstrate how tools like Displace can automate the translation entirely!

This isn’t a Kubernetes-for-platform-engineers talk. It’s for PHP developers who want to deploy their own applications without hiring a DevOps team.

If you’re coming to Tek, I’d love to see you there. If you can’t make it, I’ll be posting related content here and on both X and Mastodon over the coming months.

What’s the biggest obstacle keeping you from Kubernetes adoption?

1    Sometimes you’ll mount a specific file path through your compose file. If you don’t but are still using a volume, Docker transparently mounts from a hidden directory anyway. All volumes within a Compose-managed stack are just local file mounts!
Why I built a Kubernetes deployment tool https://eric.mann.blog/why-i-built-a-kubernetes-deployment-tool/ Tue, 06 Jan 2026 13:00:00 +0000 https://eric.mann.blog/?p=9995 I’ve been deploying applications to production for twenty years. I wrote a new PHP Cookbook for O’Reilly. I’ve managed infrastructure at companies handling millions of simultaneous requests. I write the monthly Security Corner column for PHP Architect magazine.

I’ve seen a lot of deployment strategies come and go.

And yet — getting some apps into production on Kubernetes still takes most teams weeks. Sometimes months.

That disconnect has bothered me for years.

The Problem I Kept Seeing

Every developer I talk to has the same story, whether they write PHP or something else. They’ve mastered Docker Compose. Local development is smooth. Redis spins up with a single command. PostgreSQL just works. Life is good.

Production is different.

Suddenly they’re drowning in YAML files. Deployments, Services, Ingresses, ConfigMaps, Secrets, PersistentVolumeClaims. The terminology alone takes a week to internalize. Actually configuring it correctly? That’s a month of trial and error — if you’re lucky.

Some of the best developers I’ve known take fatal shortcuts. Production becomes a mirror of their laptop: Docker Compose or manual Bash scripts running the application through brute force.

I watched this happen at company after company. Smart developers, talented teams, burning weeks of runway on infrastructure instead of building features their users actually needed.

Why I Built Displace

The gap between docker-compose up and production Kubernetes shouldn’t require a platform engineering team.

That’s the premise behind Displace. It’s a CLI tool and infrastructure platform that translates Docker Compose concepts into production-ready Kubernetes — without requiring you to become a Kubernetes expert first.

You build what you need. Displace handles the deployment.

It’s a wrapper around industry-standard tools that lowers the learning curve to launching a production application. When you need it, Displace makes the process of rolling out code easy. When you’re ready to call yourself an SRE1 and go it alone – you can still use the same underlying tools!

Here’s what that looks like in practice:

I recently built a new project: DailyMedToday. It’s a Laravel application with PostgreSQL, Redis, queue workers, and a nightly backup CronJob. Locally, I use Docker Compose to test my changes on http://localhost. But a quick command with Displace launches it to my production Kubernetes cluster.

The migration from one to the other took an afternoon.

Not weeks.

The same patterns work for WordPress. For any PHP application, really. And increasingly for non-PHP workloads too.

What This Means for PHP Developers

I’m presenting two talks at PHP Tek in May. One covers the conceptual bridge from Docker Compose to Kubernetes. The other demonstrates semantic search and embeddings in Laravel — using DailyMedToday as a live example.

Both talks exist because I’ve learned that PHP developers don’t need another 40-hour Kubernetes course. They need someone to show them the translation layer. Docker Compose concepts map directly to Kubernetes primitives. Services are still Services. Volumes become PersistentVolumeClaims. The mental model transfers, if someone shows you how.

That’s what Displace does automatically. And it’s what I’ll be teaching at Tek.

The Consulting Angle

Beyond the tool itself, I offer hands-on consulting for teams that want to make this transition. Whether you’re migrating an existing Laravel monolith, deploying WordPress at scale, or building something entirely new — I can help.

Working with me lets you get to production Kubernetes in days instead of months.

The PHP ecosystem deserves better infrastructure tooling. That’s what I’m building.

If you’re heading to PHP Tek, come find me. If you’re struggling with Kubernetes adoption, take a look at Displace. The CLI can help with core concepts for free; when you’re ready to graduate to the cloud I offer simple pricing for additional workflows and tools.

If you just want to follow along as I build this in public, I’m active on both X and Mastodon.

What infrastructure problems are slowing your team down?

1    Site Reliability Engineer
Short Stints, Real Experience: Rethinking Career Tenure https://eric.mann.blog/short-stints-real-experience-rethinking-career-tenure/ Tue, 06 Jan 2026 03:23:02 +0000 https://eric.mann.blog/?p=10005 I used to judge candidates with short stints on their resumes.

Three months here. Six months there. A year at one company, then gone. I’d look at that pattern and wonder: can this person actually commit? Do they have the discipline to push through when things get hard?

I was wrong.


I hired a data engineer named Tim1 a few years back. He was outstanding and had exactly the unique experience we needed to support a new product line. He interviewed well and the team loved him! I offered him the role immediately. The catch was that he’d need to relocate from Utah to Portland to work in the office. He asked for a month to handle the move. I happily gave it to him.

During his first week in Portland, everything fell apart. The Vice President responsible for the entire product line was suddenly and unexpectedly terminated by our new CEO. We had to interrupt Tim’s onboarding so he could join a “we’ll figure this out” pep talk with the team.

A week later, the CEO decided to shutter the product line entirely. I was told to let my team go. This impacted not just Tim but the 15 other engineers working in that division as well.

Tim called his old boss and asked for his job back. I weighed in and encouraged it. Thankfully, they took him.

He’s never counted that two-week stint with me on his resume. I don’t blame him. But had he not been able to return to his old job, he would’ve been unemployed. Through absolutely no fault of his own.


A friend recently confided in me about their job search:

My goal is to try to find somewhere to sit for two years due to scrutiny of my “short stints.” Control what I can though. Now people seem to be expecting 2+ years to not be job hopper, it seems. Which is wild given all the layoffs.

Wild is the right word.

We’ve spent the last few years watching tech companies lay off thousands of workers. Engineers who joined in good faith, performed their roles well, and got cut when the financial winds shifted. Many of them didn’t even make it to their one-year anniversary. Their employers just couldn’t cut it.

And now those same engineers face scrutiny for having “short stints” on their resumes.


I was recently on the receiving end of this sentiment as well. I’d applied for a solid role and was rejected a few days later. Thankfully, the hiring manager took the time to reach out and explain what was missing in my background that caused them to pass:

More than 2 years of tenure at the same place — It’s important for our Product Engineers to have experienced the consequences of their decisions, iterated based on those consequences, and honed their judgment for future decisions. We believe the outcomes of the most challenging engineering work sometimes take years to shake out.

I appreciate the transparency. And there’s validity in wanting engineers who’ve seen the long-term impact of their work. But this framing assumes that short stints are always a choice. It ignores the reality that many of us have lived.

I’ve made intentional moves throughout my career — to learn a new skill, master a specific technology, challenge myself in a new way. Those decisions resulted in 2-5 year tenures at most roles. Deliberate, strategic, growth-oriented.

But I’ve also been the manager who created holes in other people’s resumes. I’ve sat across from talented engineers and told them their position was eliminated. I’ve watched good people pack their desks because of decisions made levels above my head.


The tenure myth assumes a level of control that most employees simply don’t have.

It assumes that if you left after a year, you weren’t committed. That if you didn’t stick around for two years, you never experienced the consequences of your decisions. That short stints reflect something about your character rather than your circumstances.

Tim didn’t leave after two weeks because he lacked commitment. My friend isn’t job hopping because they can’t buckle down. The thousands of engineers laid off in the last few years aren’t leaving because they’re flaky. They’re navigating an industry that treats employees as expendable.

I’ve learned something important being on both sides of this equation. Tenure tells you how long someone stayed. It tells you very little about what they learned, what they built, or what they’re capable of.

Some of the sharpest engineers I’ve worked with had resumes that looked “choppy” on paper. Some of the least effective had been at the same company for a decade.

Correlation isn’t causation.

Time spent doesn’t equal experience gained.

Course Correction

If you’re hiring, I’d encourage you to ask why someone left rather than just when. Dig into what they learned in those six months. Ask about the decisions they made and what happened next — even if “next” was at a different company.

If you’re job searching with short stints on your resume, own your story. Explain the context. The right employer will understand that layoffs aren’t character flaws. Reach out directly to the hiring manager; don’t wait for them to connect the dots between your roles.

Finally, if you’re a manager, remember: you might be the one creating those gaps in someone else’s resume. The tenure requirements we enforce on candidates might one day exclude the very people we let go.

I used to judge candidates with short stints. Now I know better.

What assumptions are you holding onto that deserve a second look?

1    Not his real name, obviously …
Gratitude https://eric.mann.blog/gratitude/ Wed, 03 Dec 2025 21:30:00 +0000 https://eric.mann.blog/?p=9983 In late 2018, another leader on my team challenged all managers to show gratitude towards their employees. She charged us all with delivering hand-written thank you notes to members of the team. Our objective was to thank them for their contributions over the past year.

We extended the exercise to any peers with whom we worked closely. I even included my own direct manager.

That November I sent out 20 or so hand-written cards and had no idea what the impact would be.

“No one’s ever written me a thank you card before,” from one of my direct reports.

“Honestly, I’ve had a rough time lately and was about to resign. Thank you for reminding me why I do this,” from a peer.

A Regular Habit

That year was one of the easier ones for sending cards. I managed two smaller teams and had a handful of peers to whom I sent cards. Later years saw my teams grow, meaning more and more cards. Covid forced us all to work from home, so my habit now required a trip to the post office rather than a walk through the building.

One particularly busy year saw me sending 45 cards, several internationally. Working with a distributed team was stellar, and being able to genuinely thank folks for their contributions provided real meaning.

I’ve now exercised this habit every year between Thanksgiving and Christmas. Years with smaller teams have meant fewer cards; larger teams have meant a lot more writing. But every time it’s an opportunity to pause, reflect, and feel deep gratitude for the individuals around me.

Leadership versus Management

I’m not currently in a management role. After nearly a decade as a team lead, director, head of engineering, or VP I’ve taken an intentional step back. I’m taking the opportunity to contribute as an individual, writing code on a daily basis to help push our collective roadmap forward.

But even as an IC1 I’m always looking for ways to show leadership within the team. I take ownership of specific tasks or niches within the product. I lend my voice to shaping and shoring up our overall delivery process.

And I take the time to recognize the contributions of my peers and explicitly thank them.

Gratitude is a leadership skill. Like any other skill, it needs regular use to stay fresh. And you can very solidly practice leadership within a team even without holding an explicit role in management.

What are you thankful for? How do you practice gratitude?

1    Individual Contributor
Burnout Prevention Through Strategic Reassignment https://eric.mann.blog/burnout-prevention-through-strategic-reassignment/ Tue, 25 Nov 2025 16:54:00 +0000 https://eric.mann.blog/?p=9977 In a previous role, I was asked to join several projects at the eleventh hour to get them shipped. The engineer who had been leading them until then had resigned. They were burned out and needed to focus on their health. Nothing final had been shipped, so we risked losing months of progress if someone didn’t land the projects immediately.

I jumped in. Ten-hour days, seven days a week, covering both my existing workload and that of my colleague. I worked through three consecutive weekends processing data migrations and deployments. On the third, I mentioned offhand to my project manager: “I haven’t taken a day off in over three weeks. I might need to take a couple of days of flex time next week just to rest for a bit.”

They were understanding and supportive. They told me to block out my schedule. I did. I went fishing and hiking for a couple of days. It was glorious.

When I came back, everything was wrong.

The last-minute projects had shipped and our clients were happy. But I returned to discover that all of my original projects had been moved to an entirely new team. These were projects I’d led for months, clients with whom I’d worked for years. All reassigned as I was shifted to an entirely different business unit.

The workload was, admittedly, much lighter. But this wasn’t at all what I’d asked for or expected.

“Wait, so I voice concern about burnout and get punished for it?”

They kept trying to tell me that wasn’t what was happening. But I never got to work on my favorite projects again. I had to train a brand new project manager and build a new team. We worked on smaller projects – nothing of the scale I’d been handling before.

I guess they did keep me from burning out on a high-profile project. But at what cost?


There’s a satirical HR account on X that perfectly captures this corporate logic:

The most efficient way to prevent employee burnout is to remove the burned-out employee from the equation. No employee, no burnout. Problem solved. Success rate: 100%.

This hit so close to home that I didn’t realize at first it was satirical. My biggest fear, though, is that some managers might see it as a playbook.

My experience wasn’t quite that dramatic. I wasn’t fired. The solution to my burnout wasn’t addressing the workload or timelines that caused it. It wasn’t examining why another person leaving created such a crisis.1 It wasn’t considering whether a 70-hour workweek for a month was sustainable or reasonable.

The solution was to remove me from the situation that burned me out. Unfortunately this was also the work I was most passionate about.

This approach treats burnout as an individual failure rather than a systemic issue. The employee who can’t handle the workload becomes the problem to be managed, shuffled, or eliminated. The workload itself, the staffing levels, the expectations remain untouched. After all, someone else will pick up those projects. Someone who hasn’t burned out yet.

The particularly insidious part is how this masquerades as support. My manager was purportedly understanding when I asked for time off. They approved my flex time without hesitation. They genuinely seemed to care about my well-being. But their solution revealed what they actually cared about: protecting the projects, not the person.

There’s an implicit message in being reassigned after voicing burnout: don’t voice burnout. Suffer in silence until you can’t anymore, then leave quietly without disrupting the project timeline. The alternative is watching someone else take over the work you built while you’re handed lighter responsibilities. You’re being benched rather than helped.

I often wonder what would have happened if I’d never mentioned feeling burned out. Would I have eventually crashed harder? Would I have left the company entirely? Or would I have just powered through and been fine?

I’ll never know.

I do know that after that experience, I became far more careful about admitting when I was struggling at work. I could never gauge whether or not the company’s support would come at a price I wasn’t willing to pay.

Maybe that’s the real lesson here. The question is whether burnout prevention programs are designed to protect people or to protect productivity. When the solution to someone burning out is removing them from what they’re passionate about, we have our answer.

1    This experience is a large reason why, in every role since then, I’ve preached about “bench depth” to anyone who will listen. I intentionally build redundancies in all of my teams so no one person is critical path for success. Everyone has support. Everyone can take time when they need it. No one person taking a break will compromise delivery.