Nothing New Under the Spec

I recently stumbled across Cameron Sjo’s spec-compare project while working on a similar comparison of my own. The project is outstanding. Compiling, testing, and documenting this many tools and frameworks takes considerable effort, and the findings are immediately useful if you’re evaluating spec-driven approaches for your team. I’m grateful someone else did this work and shared it openly. If you haven’t explored the project yet, it’s well worth your time.

Then I read the Critical Analysis.

The practical observations about AI adherence, review burden, and especially the MDD parallel are solid work. Where it lost me was the historical framing. The analysis builds its central tension around whether SDD is “waterfall dressed in AI clothing.” That framing rests on several layers of popular misconception, and like a house of cards at a toddler’s birthday party, it doesn’t survive contact with scrutiny.

The Page 2 Problem

The analysis states: “Waterfall (1970s): Comprehensive upfront planning, failed due to inflexibility.” Wrong.

Winston Royce’s 1970 paper, “Managing the Development of Large Software Systems,” is eleven pages long. The famous sequential diagram appears on page 2. Royce presented it as the approach that does not work. He spent the remaining nine pages describing iteration, feedback loops, and the insight that you should plan to build the system twice — aligning directly with Fred Brooks’ “plan to throw one away” from The Mythical Man-Month.

Our industry read page 2 and spent fifty years arguing about what the other nine pages said. It’s the software engineering equivalent of reading the first chapter of a murder mystery, putting the book down, and writing a five-star review about how the butler did it. Royce wrote nine more pages. Nobody read them.

“Over” Is Not “Instead Of”

The Agile Manifesto says “working software over comprehensive documentation.” The word is over, not instead of. “I prefer vanilla over chocolate” does not mean I don’t like chocolate at all. I happen to like both! I just like vanilla a little more.

The nuance evaporated immediately. Training courses simplified preferences into prohibitions. “We’re Agile, so we don’t write documentation” became acceptable to say with a straight face. The dumber versions won because they were easier to teach, easier to sell, and easier to use as an excuse for not thinking through a problem before writing code.

I held these same simplified views early in my career. The original Agile principles were considerably more thoughtful than what most of us were taught. We just didn’t bother to read those carefully, either.

The Emperor’s New Sprint

Look at how most teams practice Agile today. Two-week sprints with fixed scope. Backlog refinement that functions as requirements gathering. A daily standup that is a status report wearing a casual Friday outfit. A retrospective nobody acts on.

Strip away the vocabulary and you have a two-week waterfall cycle. We took “respond to change over following a plan” and turned it into meticulously planned two-week plans we follow to the letter.

So when the analysis asks whether specs can coexist with Agile — coexist with which Agile? The one in the meeting room already runs on specs. They’re called “acceptance criteria,” and everyone pretends that’s different.

Same Song, Different Verse

Qoheleth had something to say about new paradigms: “there is nothing new under the sun,” or in our case, “under the [spec].”

Writing specifications before building software is what we have always done. Requirements documents, functional specs, user stories — these are all specifications. The format changed. The core activity didn’t. Calling it a paradigm shift is like calling a new bread recipe a paradigm shift in wheat.

What changed is the executor. When a human reads a spec, they walk over to your desk and say “this makes no sense.” When an AI reads a spec, you get hallucinated APIs and cheerful disregard for your instructions. Those are problems with the AI, not with writing things down. Blaming the recipe because the oven is unreliable won’t help anyone bake better bread.

The more interesting axis is time scale. Classic Waterfall planned in months. Agile plans in weeks. SDD plans in hours. Same song, different verse. When your spec-to-implementation cycle is measured in minutes, the distinction between “upfront planning” and “iterative development” dissolves. The pendulum didn’t swing. We just zoomed in.

What the Analysis Gets Right

The MDD parallel is the strongest section. MDD failed because the translation from spec to code was too rigid and too opaque. LLMs offer flexibility but introduce non-determinism — like trading a car that only turns left for one that occasionally invents directions that don’t exist.

The AI adherence problems need engineering solutions, not methodology debates. Hand-wringing about whether we’ve “returned to Waterfall” distracts from fixing them.

Wrapping Up

I’m tired of these poor regurgitations of popular versions of Waterfall and Agile history. They don’t serve anyone but the marketing departments of large vendors looking to sell you the next transformation. Call me pedantic if you must, but we should continually strive to be accurate with terms. When we build arguments on a version of Waterfall from page 2 of an eleven-page paper and a version of Agile from a certification course that turned preferences into prohibitions, we’re just passing around the same wrong answers and wondering why we keep asking the same questions.

Spec-driven development is what we’ve always done, only this time with an AI reading the spec. The interesting questions are about time scale, AI reliability, and whether the tooling can mature fast enough to deliver on the promise.

Have you run into these same misconceptions on your teams? Let me know in the comments.

A Flurry of Library Updates: FSharp.Data.JsonSchema and Frank

I recently carved out some time to revisit some dormant projects. The primary driver was a tic-tac-toe app inspired by Scott Wlaschin’s Enterprise Tic-Tac-Toe series of posts and presentations, along with a desire to learn and test out Datastar. An upcoming post will dive deeper into that topic. In this post, I want to share a high-level overview of what’s new with FSharp.Data.JsonSchema and Frank. Subsequent posts will cover further details.

FSharp.Data.JsonSchema 3.0.1

FSharp.Data.JsonSchema is now three packages:

  • FSharp.Data.JsonSchema.Core: Core JSON Schema representation types that rely only on FSharp.SystemTextJson. This library can be used to parse a JSON Schema into F# types and to serialize F# types into a JSON Schema. However, it no longer targets a specific schema library.
  • FSharp.Data.JsonSchema.NJsonSchema: This is equivalent to the previous version with a dependency on NJsonSchema as the target. This should be backward compatible with previous versions of the library.
  • FSharp.Data.JsonSchema.OpenApi: This new target depends on the Microsoft.OpenApi library introduced with the net9.0 framework target. It is intended for generating OpenAPI documents from ASP.NET Core applications.

In addition, this release resolves several long-overdue bugs and adds enhancements:

  • Recursive Types (#15): Recursive F# types no longer cause infinite loops. Self-referential DUs, records with optional self-references, and recursion through collections all generate proper $ref: "#" schemas. A follow-up fix in 3.0.1 resolved an NJsonSchema serialization failure in which Ref("#") in nullable contexts tried to look up the root reference in the definitions dictionary instead of referencing the root schema directly.
  • Choice types (#22): Choice<'A,'B> through Choice<'A,…,'G> now generate clean anyOf schemas instead of the verbose internal-tag encoding
  • Anonymous records: Inline object schemas, no $ref
  • DU encoding styles: InternalTag, AdjacentTag, ExternalTag, and Untagged via a new unionEncoding parameter on Generator.Create
  • Format annotations: Proper date-time, guid, uri, duration, date, time, and byte formats for DateTime, Guid, Uri, TimeSpan, DateOnly, TimeOnly, and byte[]
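To make the new generator options concrete, here is a rough F# sketch. Generator.Create and the unionEncoding parameter are named in the release notes above, but the encoding case names, the Tree type, and the exact call shape are my assumptions — treat this as illustrative, not authoritative.

```fsharp
open FSharp.Data.JsonSchema

// A recursive DU (covered by the #15 fix).
type Tree =
    | Leaf of int
    | Node of Tree * Tree

// unionEncoding comes from the release notes; the case name and the
// generator call shape below are assumptions.
let generate = Generator.Create(unionEncoding = UnionEncoding.AdjacentTag)

// Recursive references should emit $ref: "#" instead of looping forever.
let treeSchema = generate typeof<Tree>

// Choice<'A,'B> (the #22 fix) should now produce a clean anyOf
// of the two alternatives rather than the internal-tag encoding.
let choiceSchema = generate typeof<Choice<string, int>>
```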

Frank 7.2.0

Frank has had a long and winding history as my favorite hobby project for trying out different approaches to encoding web applications. The computation expression approach starting in (IIRC) v5.0 has stuck. The goal is still to produce an HTTP resource-style set of builders that provides a consistent means of defining HTTP resources and the ASP.NET Core WebHost in which to run them while allowing for a lot of flexibility and extensibility. As such, I’ve added some additional libraries I’ve found useful to test out the extensibility and support the new tic-tac-toe hobby project mentioned above.

Packages

  • Frank: Added Metadata field to ResourceSpec, a list of (EndpointBuilder -> unit) convention functions applied during RouteEndpointBuilder.Build(). This generic extensibility point lets companion libraries (Auth, OpenApi, etc.) attach typed endpoint metadata without requiring changes to the core Frank library. This is a binary-breaking change but source-compatible with the empty default. Also added plugBeforeRouting, plugBeforeRoutingWhen, and plugBeforeRoutingWhenNot for middleware ordering control around UseRouting().
  • Frank.Analyzers: F# Analyzer (FSharp.Analyzers.SDK) that detects duplicate HTTP handler registrations within a resource block at compile time, enforcing the constraint of a single HTTP method per resource. It works in IDEs (Ionide, VS, Rider) and CLI (dotnet fsharp-analyzers) for CI/CD.
  • Frank.Auth: Adds WebHostBuilder registration and resource-level authorization via ResourceBuilder extensions, including requireAuth, requireClaim, requireRole, requirePolicy using AND semantics for resources and useAuthentication, useAuthorization, authorizationPolicy for WebHostBuilder.
  • Frank.OpenApi: Adds long-planned, declarative OpenAPI 3.0+ document generation, including a handler computation expression for pairing handlers with metadata (name, summary, tags, produces, accepts). F# type schemas (records, DUs, options, collections) via FSharp.Data.JsonSchema.OpenApi. useOpenApi on WebHostBuilder wires services and middleware. Includes Scalar UI to provide a web-based client for viewing and testing endpoints. Targets net9.0/net10.0.
  • Frank.Datastar: Native SSE implementation similar to the StarFederation.Datastar.FSharp library. Zero-copy buffer writing via IBufferWriter, zero external NuGet dependencies, full Datastar SDK ADR compliance. Added stream-based overloads (streamPatchElements, etc.) accepting TextWriter -> Task for zero-allocation HTML rendering. No breaking API changes. Targets net8.0/net9.0/net10.0.
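As a sketch of how the new pieces compose, here is a hypothetical Frank resource using the Frank.Auth extensions. The resource builder style and the requireAuth/requireRole names come from the descriptions above, but the handler signature and surrounding details are assumptions, not a verbatim sample.

```fsharp
open Microsoft.AspNetCore.Http

// Hypothetical sketch; builder internals and the handler signature
// are assumed, not taken verbatim from the Frank docs.
let adminReport =
    resource "/admin/report" {
        name "AdminReport"
        // Frank.Auth ResourceBuilder extensions mentioned above;
        // multiple requirements combine with AND semantics per resource.
        requireAuth
        requireRole "admin"
        get (fun (ctx: HttpContext) -> ctx.Response.WriteAsync "report")
    }
```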

New Samples

  • Frank.Datastar.Basic: RESTful hypermedia patterns (click-to-edit, search, bulk ops) using Frank.Datastar with F# string templates.
  • Frank.Datastar.Hox: Same patterns as Basic using the Hox view engine. Demonstrates stream-based SSE overloads via Render.toStream.
  • Frank.Datastar.Oxpecker: Same patterns as Basic using Oxpecker.ViewEngine.
  • Frank.OpenApi.Sample: Product catalog API demonstrating the handler CE with OpenAPI metadata, mixed plain/enriched handlers, useOpenApi, and Scalar UI.

Looking Ahead

I’ll spend some time in upcoming posts exploring each of these. In the meantime, I’d love feedback on any of the updates and changes. I hope I haven’t broken anyone with the changes to FSharp.Data.JsonSchema; a transition package at version 3.0.0 makes it easier to switch without changing package names. I don’t have any additional plans at the moment for either project, but please open issues if you have ideas or bug reports.

Leadership, Failure, and Erlang

I’ve been reflecting on leadership recently and recalled an idea that struck me during my time leading organizations in the Texas A&M University Memorial Student Center Student Programs Office. In organizations that turned over leadership each year, leadership followed a predictable pattern, flipping from strong to weak, then from weak back to strong, as if on a loop. I formed the idea that, in successive generations, strong leaders beget weak leaders, and weak leaders beget strong leaders. Based on this observation, I adopted a practice of being intentionally weak in certain circumstances in order to develop leadership in those I led.

Before we continue, let me define terms as I’m using them:

  • strong – proactive, decisive, quick to correct, and minimizes the risk of failure
  • weak – passive, indecisive, leaves a gap that needs to be filled, and accepts the risk of failure

When I’ve searched for these terms online, I typically see something that looks more like a contrast of “good” versus “bad” leaders, for varying definitions of “good” and “bad.” When I use “strong” and “weak,” I assume both exhibit “good” leadership qualities, for whatever definition you wish to use for “good.”

The practice I tried to develop while at Texas A&M was to identify and intentionally give space for others to step up and grow as leaders. I provided backup to minimize the impact of failure and followed up with a retrospective to learn and grow. This was moderately successful, in large part because I still had a lot of room to learn and grow myself. Thankfully, the advisors at Texas A&M were wonderful and provided the same kind of environment for me to grow as a leader.

I’ve found this approach continues to work well throughout my career, though I have forgotten to use it at times. I recently started exploring new (to me) programming languages and came across Gleam, a typed, functional programming language for the Erlang runtime. Erlang is known for its resilience and fault tolerance, yet it achieves this by means of a “Let it Crash” philosophy. This seems counterintuitive. Success through failure? In Programming Erlang, Joe Armstrong notes that the difference lies in expecting failure: once you expect it, you can focus instead on planning how to identify failures and recover from them.

My great concern is not whether you have failed, but whether you are content with your failure. - Abraham Lincoln

There is a correlation between the “Let it Crash” philosophy and growing leadership abilities. We tend to think of success as good and failure as bad, but failure is only bad if it does not translate into a learning opportunity. Successful and unsuccessful outcomes can both be positive outcomes, but they need to be planned. Planning involves identifying opportunities for each person you want to grow in leadership, assessing risk, and providing for contingencies.

You may be wondering how this is different from coaching. The difference, as I see it, is that coaching is an explicitly communicated opportunity, whereas what I propose above is not explicit. You have to make room for others to identify and then pursue the opportunity on their own. Coaching should certainly be part of the process, but it falls into the “strong” leadership category.

Leaving room for others is challenging. It means waiting to make improvements. You may get only a partial solution. However, your people will struggle to reach the next step without opportunities. I’ve enjoyed reflecting on and rediscovering this approach. I’d love to know how others approach leadership development in their people. Let me know in the comments.

Azure Functions with Swift

I’ve been a bit busy lately with several projects, but I’ve tried to carve out time to continue learning. As I’ve been writing some Azure Functions for work projects, I wanted to see how easy it would be to use the Custom Handlers preview to add support for Swift. It turns out Saleh Albuga already built a tool for building and deploying Swift Azure Functions called swiftfunc! The only downside is that the tool currently works only on macOS.

Undaunted, I decided to try a different approach. While looking for OSS Swift compilers, I happened upon RemObjects Elements Compiler. I was surprised I had not heard of them before. Their compiler platform is worth investigating, as it supports many languages, even within one project! However, I was interested in Swift, and their Silver compiler is kept very close to the latest Swift spec, including extensions for things like async/await. As the Elements compiler can be used to build apps for mobile, .NET, Java, WebAssembly, and more platforms, I wondered whether I could use Silver to build a Swift Azure Function against the .NET libraries. With a little help from Marc Hoffman, the answer is a resounding YES!

Continue reading

Loose Coupling and High Cohesion in Teams

Most software engineers are familiar with the OO design principle of loose coupling and high cohesion from the Gang of Four Design Patterns book. I have been reflecting on my experiences leading teams while reading Leading with Honor by Lee Ellis and realized this same principle correlates with high-performing teams.

Highly coupled teams are like an assembly line. Everyone is specialized, and everything moves along well until something breaks. In a team with low cohesion, the only ones who can fix the break are those responsible for the area of the assembly line that broke, which means most of the team is not working while the problem is identified and fixed. If a team has high cohesion, then some members can switch roles to help out with the area that is broken, but the rest of the assembly line is still stopped.

Highly cohesive teams are those that have high collaboration with one another and typically are composed of members with different specializations. Everyone can contribute to any part of the work. If the team is highly coupled, however, then the loss of one team member or the need for that team member on any given task causes the rest of the team to grind to a halt while waiting on that team member to free up. A loosely coupled team can continue working on multiple tasks concurrently with no hard dependencies on any one person.

Much like in software, my experience suggests highly cohesive but loosely coupled teams are far more efficient and effective at achieving their objectives. I’m curious whether you’ve had a similar or different experience, and I’m interested in any research on this topic. Please share in the comments.

Custom Site Generation with Azure Static Web Apps

While fooling around with Azure Static Web Apps — which went into public preview today — I found a trick for working with any front-end build tool, not just npm install && npm run build. In this post, I’ll work through adding a new build step and using a custom static site generator. To keep things interesting, I’ll use an F# script to generate the site.

Continue reading

My Take on FAKE in 2020

I’ve been using FAKE since roughly 2009 when Steffen Forkmann first introduced it. I’ve used it for OSS and work projects, builds and deployments, and even committed features to it. I think FAKE is a fantastic tool, and I loved the changes that came in FAKE 5.

However, I’ve been reconsidering my use of FAKE as a default build scripting tool in smaller projects and wanted to write up my reasons for switching to dotnet CLI builds for new projects and migrating some OSS projects to do the same.

Continue reading

Blazor Server Tip: Sometimes you need Task.Delay(1)

I recently encountered an issue with server-side Blazor in which the UI didn’t refresh after calling StateHasChanged. The UI refreshed just fine until I added about 30k more records to a database table, which caused a query to take a bit longer to run. I filed an issue here.

I debugged through the issue by trying different things like using an in-memory data store, re-checking against a smaller data set, and wrapping StateHasChanged to make sure it was actually called. Everything was working as expected with the in-memory store and smaller data set, and StateHasChanged was always called. However, with the larger data set, the components’ lifecycle methods were not called.

I finally stumbled upon a solution using an old JavaScript trick: adding await Task.Delay(1). This magically worked. If you run into something similar, try adding await Task.Delay(1); and see whether that resolves the issue.

Revisiting Microsoft Forms: WinForms

This is a series of posts on older Microsoft forms technologies and reflections on what is really good about them. When I first used these platforms, I had strong biases against them, which were encouraged by co-workers and friends. Having spent over a decade building software in .NET, I’ve come to appreciate at least certain aspects of these tools, some of which are moving forward to .NET 5. Windows Forms, or WinForms, is one of those platforms, and I would like to spend some time talking through some really nice aspects of the framework.

Continue reading