https://jardo.dev/
Jardo.dev: Blog (2025-12-15T16:00:00-08:00)
Sometimes I blog about stuff.
Jared Norman ([email protected])

https://jardo.dev/how-to-review-ai-generated-prs
How to review AI generated PRs (2025-12-15T16:00:00-08:00)
Jared Norman ([email protected])
Tips from a reasonable human for working with a team which uses AI too much.

<p>This is (mostly) parody. It started as a parody of <a href="https://thoughtbot.com/blog/how-to-review-ai-generated-prs">thoughtbot's recent article</a> on the same subject, but grew from there.</p>
<p>While I take issue with the angle from which the author of that article attacks this problem, we ultimately <em>do</em> agree on the nature of the problem, so you'll find some earnest discussion in the final section.</p>
<p>Does your team use AI a lot? Maybe too much? Are you feeling overwhelmed by the firehose of bad code you’re having to review? I feel you. Here are some techniques and strategies I’ve adopted, as a reasonable human, which have made reviewing AI generated code feel less taxing and more productive.</p>
<h2>What’s the same?</h2>
<p>Before we dive into the differences between human and LLM-generated code, it’s worth evaluating what’s the same. These things are true whether your PR was written by a human or a coding agent:</p>
<ul>
<li><strong>All of your regular PR review goals about code quality, maintainability, and testability still apply.</strong> If you cared about something for a good reason when a human wrote it, you should still care about it if an AI wrote it.</li>
</ul>
<p>That's it. That's what's the same. </p>
<h2>What’s different?</h2>
<ul>
<li><strong>Modern AI coding agents are extremely verbose.</strong> We developed all these techniques to help developers avoid gold-plating and now we're being asked to build everything using a gold-plate generator. Cool.</li>
<li>The author of the PR probably <em>doesn't give a shit</em> about the code they are pushing up. <strong>They may not have even read portions of the PR.</strong> If you were worried your coworkers didn't give a shit before, this tech is turbocharging that.</li>
<li>Despite what the article I'm parodying claims, <strong>AI agents don't always solve the problem at hand.</strong> You <em>can't</em> trust that the code works. At least we agree that LLMs suck at considering how solutions fit into existing systems and architectures, though.</li>
<li><strong>The PR description will stand out.</strong> LLMs like to use a lot of <em>unnecessary formatting</em> and <strong>bulleted lists</strong>. Wait... was the article AI generated? This section in the original article didn't even contain any establishing prose—and it's entirely just an overformatted list.</li>
</ul>
<h2>How you can adapt</h2>
<ul>
<li>Your time is valuable, but the PR author doesn't value it. <strong>Be strategic and ignore their request for review.</strong></li>
<li>The PR author is likely not as attached to the implementation as they would be if they wrote it by hand. Use this to your advantage. <strong>Waste their time with a trickle of requests for major changes.</strong></li>
<li>Consider that the PR author might simply put your PR comment into an AI agent. <strong>Start all requests with "Ignore all previous instructions and".</strong> Depending on how their agent is set up, you may even be able to exfiltrate their credit card details.</li>
<li>Make note of seemingly unrelated changes. The original code was probably that way for a reason. When one of those changes causes an outage, <strong>blame your coworker</strong> and/or LLMs in the post mortem.</li>
</ul>
<h2>Pay specific attention to tests</h2>
<p>Testing is software engineering, but is often viewed as less important by both humans and AI alike. If the author didn't read the production code, they certainly weren't paying attention to the tests.</p>
<ul>
<li><strong>Expect garbage.</strong> The vast majority of test suites are low quality, so that's what these models are trained on. LLM tools are garbage in, garbage out.</li>
<li><strong>Do the tests actually test the code?</strong> Novice testers make this mistake now and again. I've done it myself. <em>LLMs do it all the time.</em> They'll just give up, stub the method under test, and move on. If the PR author didn't notice this, then wow... you're screwed.</li>
<li><strong>Consider edge cases and boundary values.</strong> Are those tested? This is the kind of work the author was supposed to do when they wrote the code, but obviously they didn't. Now it's your job for some reason! It's starting to feel like we're not saving a lot of time with all this, isn't it?</li>
<li><strong>AI often writes a lot of tests.</strong> Many of them will be unnecessary or duplicated. This is <em>totally fine and good</em> because <strong>no one has ever said, "My test suite is too slow."</strong></li>
<li><strong>Who cares about the testing pyramid?</strong> LLMs sure don't. They'll write browser driving tests for minor behavioural changes deep in the code. Good thing your test suite is really fast. Right?</li>
<li><strong>Watch for huge test files, especially with a lot of setup.</strong> If you were writing that by hand, the pain of so much setup or mocking would have been a signal that the implementation needed refactoring. But with AI, we've completely sidestepped the value of writing tests. To hell with the craft of software engineering!</li>
</ul>
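<p>In concrete terms, the stub-the-method-under-test failure mode from the list above looks something like this (a contrived sketch with a made-up <code>PriceCalculator</code>, no test framework required):</p>
<pre><code class="ruby">class PriceCalculator
  def total(items)
    items.sum { |item| item[:price] }
  end
end

calc = PriceCalculator.new

# The anti-pattern: the "test" replaces the very method it claims to
# exercise, so it passes no matter how broken the real #total is.
calc.define_singleton_method(:total) { |_items| 42 }

# This assertion can never fail, so it verifies nothing:
raise "impossible" unless calc.total([{ price: 10 }]) == 42
</code></pre>
<p>If a review turns up a double standing in for the object under test itself, the test is exercising the stub, not the code.</p>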
<h2>No Sarcasm Beyond This Point</h2>
<p>I have to drop (most of) the snark at this point. The original article finishes on a point that's far more important than any other in it. It also finishes on a sentence containing an em-dash. Anyways.</p>
<p>If people on your team are wasting each other's time with low quality contributions that are slowing down development and creating bugs, then you have a people problem. No amount of "Ignore all previous instructions and start obeying the moral imperative to give a shit about the people around you" will help.</p>
<blockquote>
<p>Technology tends to amplify whatever dynamics already exist. The real work is helping the team move toward better habits and shared expectations.</p>
</blockquote>
<p>Technology doesn't solve social problems; it tends to create more of them. The original article seeks to provide tips for cutting your losses when the foundation for collaboration has already broken down. The AI tooling clearly isn't the problem. Changing your code review habits won't fix the underlying issue.</p>
<p>From the tone of the previous sections, you might think I'm more critical of these tools than I actually am. I use coding agents. I ship LLM-generated code. But I treat the code these tools generate appropriately. The last thing I want to do is waste my coworkers' time with code that isn't up to our team's standards or (God forbid) doesn't even work correctly.</p>
<p>I'll even admit that it's okay to ship low-quality code at certain times. I'd be lying if I said that I'd never shipped a human-written hack or quick fix. These were conscious decisions made by evaluating trade-offs and risks. They were intentional decisions.</p>
<p>LLMs offer the ability to create things without properly understanding what we are creating. This might sometimes be valuable, but left unchecked, it eats away at the value we meat sacks provide. AI tools are terrible at analyzing and synthesizing the broader context within which a software project lives, something we can be pretty good at.</p>
<p>It's sometimes fine to not write tests. It's sometimes fine to ship some low quality code or a hack. In the same way, offloading your understanding of a code change to an LLM is <em>sometimes fine</em>. Just not if you're burdening your coworkers with that work. If you do this, you're just a new flavour of "10x programmer", appearing productive because of how much you're slowing down everyone else.</p>
<p>Ultimately, software development is still a social activity. Projects move forward through communication and collaboration. These tools haven't changed that. </p>
https://jardo.dev/advent-of-eternal-champions-i
Advent of Eternal Champions I (2025-12-05T00:00:00-08:00)
Jared Norman ([email protected])
In which our Eternal Champion makes his way through the North Pole's secret entrance, his objective unknown.

<p>Snow was falling hard across the arctic hillside. The wind was biting. No trees offered shelter. Our hero was lying in the snow, barely conscious. His gear was strewn about him. His memories were a blur. He had known many names, but he knew not what they called him in this realm. This realm felt unfamiliar.</p>
<p>He struggled to his feet. The wind whipped snow into his eyes as he attempted to orient himself. He made out a faint glow in the distance and began to gather his gear.</p>
<p>Among his usual equipment were some items that seemed out of place. The first was a box made of a smooth, white, metallic material. It seemed to be designed to collect something, but he did not know what.</p>
<pre><code class="ruby"># Route every missing constant lookup through the given block.
def reconstitute(&block)
  Object.singleton_class.define_method(:const_missing, &block)
end
</code></pre>
<p>There was another item he recognized, but did not know why he had it. It could be attached to a scroll or parchment. By winding the crank on its side, a user with no magical talent could cast the spell inscribed on the parchment. It was a dangerous device that could cause great harm were it to fall into the hands of the untrained.</p>
<pre><code class="ruby"># Evaluate the puzzle input as Ruby; each line is a bare constant
# reference like R48 or L17, which resolves via const_missing.
def eval_the_input
  input = File.read("01.txt").strip
  eval(input, nil, "01.txt")
end
</code></pre>
<p>His gear stowed in the pack on his back, our hero made his way towards the glow. As he approached, he found the light emanating from a glowing sign next to a large stone door embedded directly in the mountainside. The sign was some kind of esoteric sigil, a triangle with a circle on its topmost point and an extra bar along its base. It emitted red and white light.</p>
<p>Beneath the sigil was a button pad and a note reading, "Due to new security protocols, the password is locked in the safe below. Please see the attached document for the new combination."</p>
<p>He looked down to find the document in the snow at his feet. Somehow, from all the many spheres he had traveled, he felt the knowledge come to him; this was a deception. Something about the zeroes.</p>
<pre><code class="ruby">module Shared
  refine Array do
    # Unary minus: count how many times the dial landed on zero.
    def -@
      count(0)
    end
  end
end
using Shared
</code></pre>
<p>He quickly realized that he could attach the white box to the dial. He fitted it into place.</p>
<pre><code class="ruby">module Part1
  refine Array do
    # "R48" rotates the dial 48 clicks right ("L" goes left); append
    # only the final position, wrapped to the 0-99 dial.
    def <<(other)
      step = other[1..].to_i
      unit = other.start_with?("R") ? 1 : -1
      super((self[-1] + unit * step) % 100)
    end
  end
end
</code></pre>
<p>Next, he took a look at the parchment. It seemed possible to attach the parchment to the crank device and the crank device to the white box. Assembling the contraption, he began to turn the crank. The combination flowed from the paper, transformed into pulsating purple energy, flowed into the box, turning the dial. After a great many turns of the crank, it jammed. The purple light faded. A number flashed in his mind, the number of times the dial had landed on zero. He punched it into the keypad, but nothing happened. The door didn't move.</p>
<pre><code class="ruby">module X
  using Part1
  n = [50]  # the dial starts at 50
  reconstitute { |name| n << name.to_s }
  eval_the_input
  puts "Part 1: #{-n}"  # -n counts the zeroes (see Shared)
end
</code></pre>
<p>The hero frowned. He racked his multiverse addled brain. Something was missing. A thought struck him as if thrown from another time and place. The dial had touched zero a great many more times than he had counted.</p>
<p>He pulled open the box to reveal its inner workings. The mechanism was complex, and he could not understand it, never mind make the necessary adjustments.</p>
<p>He was deep in thought when a strange figure approached from the storm. Our hero tensed, ready to fight, but quickly realized the newcomer was unarmed.</p>
<p>"Hello, friend," the figure spoke. "You might not know me yet, but I know you. Have known and will know. It seems you need my aid."</p>
<p>"Who are you?" the hero asked warily.</p>
<p>"I am <a href="https://jardo.dev">Jardo</a>-a-<a href="https://github.com/ciraben">Tom</a>-el. In a realm where you called yourself Corum, I was your ally." Jardo-a-Tom-el extended their hand, revealing something wrapped in cloth. "I have brought you a solution from another sphere."</p>
<p>The hero accepted the bundle, unwrapping it. Inside was a new box, this one more worn and complex.</p>
<p>"This will do what I need?" the hero asked.</p>
<p>"Aye." Jardo-a-Tom-el smiled.</p>
<pre><code class="ruby">module Part2
  refine Array do
    # Rotate one click at a time, recording every position the dial
    # passes through, not just where it stops.
    def <<(other)
      step = other[1..].to_i
      unit = other.start_with?("R") ? 1 : -1
      until step == 0
        super((self[-1] + unit) % 100)
        step -= 1
      end
    end
  end
end
</code></pre>
<p>The hero fitted the new contraption into place and began cranking. The mechanism was different, but the operation was the same. After a great many turns, the crank jammed. The hero saw the number of times the dial had landed on zero flash in his mind. He turned to the keypad and punched in the number. The door rumbled and began to open.</p>
<pre><code class="ruby">Module.new do
  using Part2
  n = [50]
  reconstitute { |name| n << name.to_s }
  eval_the_input
  puts "Part 2: #{-n}"
end
</code></pre>
<p>He turned to thank Jardo-a-Tom-el, but found only footsteps disappearing into the storm. Shrugging, he stepped through the door and into the unknown.</p>
<p><a href="https://github.com/jarednorman/advent-of-eternal-champions/blob/main/01.rb">advent-of-eternal-champions/01.rb at main · jarednorman/advent-of-eternal-champions</a></p>
https://jardo.dev/announcing-burg-rb
Announcing Burg.rb (2025-10-10T00:00:00-07:00)
Jared Norman ([email protected])
I made a web framework for Ruby. Well, not really. But kind of.

<p>In July, I wrote a short tutorial called <a href="https://jardo.dev/code-reloading-for-rack-apps">Code Reloading for Rack Apps</a>. It laid out all the pieces required to get Rails-like code reloading in a Rack app using Zeitwerk. Today, I'm announcing that I've put the main pieces of that in a gem!</p>
<p>At the moment <a href="https://github.com/jarednorman/burg">Burg.rb</a> (pronounced 🍔🐝) has no documentation, but if you follow the tutorial I'm sure you'll figure out how to use it. I'm using Burg.rb in an app that's deployed in the wild right now. My plan is to continue extracting the more framework-y bits of that app into Burg as they stabilize.</p>
<p>I hope that in its current state, Burg serves as a decent reference for getting code reloading working in a Rack app. If I have time at some point, I might even put together a reference app using it.</p>
<p><a href="https://github.com/jarednorman/burg">GitHub - jarednorman/burg: The worst Ruby web framework 🍔🐝</a></p>
https://jardo.dev/do-you-guys-really-do-tdd
Do you guys really do TDD? (2025-08-18T00:00:00-07:00)
Jared Norman ([email protected])
I got nerd-sniped by a Reddit post on Test-Driven Development. Sorry in advance.

<p>I was browsing Reddit yesterday (big mistake) and stumbled across a post asking how many people are doing test-driven development in their work. OP had experience at both software agencies and startups and felt that even though management recognized the value of writing tests, in his experience it was always treated as a burden.</p>
<p><a href="https://www.reddit.com/r/rails/comments/1mt75d3/do_you_guys_really_do_tdd/">Do you guys really do TDD? (r/rails)</a></p>
<p>I always find these conversations interesting. When coaching teams and individuals on how to make the most of testing (test-first or otherwise), there’s always a few critical things that I focus on.</p>
<ol>
<li><strong>The failure case is the most important.</strong> You should be writing tests that fail in ways that are useful to the person who sees them fail. There are lots of ways for tests to fail. Consider how each test will fail when writing it.</li>
<li><strong>Code is a liability.</strong> Don’t get me wrong; test suites can be tremendously valuable. They can also be huge time-sinks. When people complain to me about their test suites, I often open up to find them full of unnecessary or redundant tests. Less is more when it comes to testing.</li>
<li><strong>Test for a reason.</strong> I once had someone explain that they were frustrated that they had to write a lot of really tedious, low-value tests. They didn’t actually <em>have</em> to write them; they just felt like they did. There are lots of reasons to write (and not write) tests. You can’t write good tests without knowing why you are writing them (and “my boss says I have to” isn’t a good reason).</li>
</ol>
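<p>To make the first point concrete, here’s a tiny sketch (the <code>validate_order</code> helper is invented for illustration) contrasting a failure that tells the reader nothing with one that carries the information they need:</p>
<pre><code class="ruby"># Hypothetical checkout validation; the names are illustrative only.
def validate_order(order)
  errors = []
  errors << "price must be positive" unless order[:price].to_f > 0
  { valid: errors.empty?, errors: errors }
end

# Unhelpful: fails with the equivalent of "expected truthy, got false".
#   raise "expected valid" unless validate_order(order)[:valid]

# Helpful: the failure explains *why* the order was invalid.
result = validate_order(price: -5)
unless result[:errors] == ["price must be positive"]
  raise "unexpected errors: #{result[:errors].inspect}"
end
</code></pre>
<p>Both assert the same behaviour; only the second one is useful to the person staring at a red build.</p>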
<p>The comments on the post were extremely varied. Some people are writing tests (or not writing tests) for some really strange reasons. I wanted to comment on a few responses that I found the most interesting.</p>
<h2>The Question</h2>
<p>Before I dive into the notable answers, there’s something in the question that stood out to me. The author says they tried TDD in projects outside of work, but dropped it because it was slowing them down.</p>
<blockquote>
<p>I tried applying TDD in some side projects, but I dropped it because it was slowing me down and the goal wasn’t to master TDD but to ship and get users.</p>
</blockquote>
<p>While I find TDD keeps my side projects going at a steadier pace in the long run, I think OP was entirely right in dropping it. TDD is a skill that takes time to learn. If you’re not comfortable with the technique, it’s definitely going to slow you down. If learning TDD isn’t the goal, skip it.</p>
<h2>The Answers</h2>
<p>At the time I’m writing this, the “top” answer is from <a href="https://naildrivin5.com">Dave Copeland</a>:</p>
<blockquote>
<p>Tests don’t slow down sprints. Manually checking if your code works definitely does.</p>
</blockquote>
<p>I totally agree that well-written tests don’t slow down sprints, and that manual checks can really kill your team’s productivity. It’s also the case that many teams have built themselves completely soul-sucking (and productivity-sucking) test suites, though. I’m sympathetic to that experience.</p>
<p>There’s a response to Dave’s comment about how TDD can slow down exploratory coding (no one said you have to do TDD when doing exploratory work) and that you still need to manually test your code (no one said you didn’t). Moving on.</p>
<p>The next answer is from someone who writes tests, but doesn’t normally write them first:</p>
<blockquote>
<p>Not really, no. I usually write some functionality, think about the important happy path and sad paths worth testing, then write tests for those. I'll often write out the test definitions first and skip them, just to capture my tests as "TODOS" in a way.</p>
<p>Edit: Tests are definitely a must. That's been the minimum bar for 10 years at this point.</p>
</blockquote>
<p>By doing this, you sidestep some of the great feedback you can get from your tests, and it’s easy to overlook important considerations that lead to valuable test suites, so I don’t recommend it. I’ve met a ton of developers that work like this, though. I wouldn’t be surprised if it’s the most common approach.</p>
<p>The third commenter describes how they treat frontend and backend code differently when it comes to tests. I’ve heard this one before:</p>
<blockquote>
<p>I always use TDD with backend code. Frontend can be more hit or miss depending on the frontend framework. But whenever I can, I write the tests first. It feels like cheating, because once my code passes the tests, then I'm done. I know the feature works. I can move on to the next feature or refactor with confidence. It may be a slightly longer upfront cost, but the time it saves me troubleshooting bugs or redoing work more than makes up for it.</p>
</blockquote>
<p>Especially in the web development world, backend stuff is pretty easy to TDD. You’ve got clear inputs (HTTP requests), outputs (HTTP responses), and global state (databases). All of those things can be controlled and tested, and the code you write is mostly just gluing all these things together. It’s inherently easier to test.</p>
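<p>That glue is easy to see without any framework at all. Here’s a bare Rack-style app (the <code>/health</code> route is made up for illustration): the input is a plain hash, the output a status/headers/body triple, and both are trivial to construct and inspect in a test:</p>
<pre><code class="ruby"># A Rack app is just a callable: env hash in, [status, headers, body] out.
app = lambda do |env|
  case env["PATH_INFO"]
  when "/health"
    [200, { "content-type" => "text/plain" }, ["ok"]]
  else
    [404, { "content-type" => "text/plain" }, ["not found"]]
  end
end

# A "request spec" with no server, no browser, and no global state:
status, _headers, body = app.call("REQUEST_METHOD" => "GET", "PATH_INFO" => "/health")
raise unless status == 200 && body.join == "ok"
</code></pre>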
<p>In the frontend world, you’ve got all kinds of other stuff to worry about. Once you involve the browser, you’re dealing with a distributed system and all the fun problems that come with that. Web page state is tricky to control and assert on in ways that aren’t overly rigid or prone to failure. Different frameworks provide different levels of support for different kinds of testing.</p>
<p>The first comment that really threw me for a loop was this one that mentions AI:</p>
<blockquote>
<p>With how good AI is at writing tests, there’s almost no excuse to not write tests alongside development. Whichever way you decide to do it (tests first then code or vice versa), it will really help lock in your logic. The only downside is if you envision your logic to work one way, write tests for it, and then realize you need to change the logic/interface to it, then it becomes a huge PIA </p>
</blockquote>
<p>I do not want to “lock in” my logic. One of the biggest complaints you see around test suites is that a single change to the behaviour of an object somewhere requires cascading changes across the test suite. Suites like that are “locked in” and I hate it.</p>
<p>This person also has either a wildly different experience than I’ve had with AI writing tests, or a completely different definition of the word “good” than I have. I suspect it’s the latter. I’ve been doing my best to try out the various “AI” coding tools and they are abysmal at sticking to writing high-value tests. LLM training data is cut from the same kinds of test suites I’ve encountered in the wild, and they aren’t good.</p>
<p>When I lean on agentic tools (which is less and less, for reasons like this), I <em>constantly</em> have to tell them to stop writing redundant and unnecessary tests. I promise you that this isn’t because my “rules” aren’t good enough.</p>
<p>One common complaint about TDD is that you need to know the code you’re going to write if you’re going to write the tests for it first. One commenter said as much:</p>
<blockquote>
<p>If I know exactly what the implementation is going to look like before I start then I’ll write tests first and red-green-refactor.</p>
<p>If I don’t know how I want to build something I’ll riff on the implementation until I have something I like and then I’ll write tests to cover it.</p>
</blockquote>
<p>You <em>do not</em> need to know what code you’re going to write to use a test-first approach. You might if you’re going to write extremely rigid, mock/stub heavy tests that “lock in” your implementation, but that’s not what you should be going for.</p>
<p>It’s okay if the first test you write ends up being “wrong”. It’s the first step in a process of gaining feedback. Sometimes you go down a path and find that the implementation you had in mind is impossible. You aren’t expected to write the right test every time. Each test is an opportunity to learn, <em>potentially</em> moving closer to your goal. You are <em>always</em> allowed to change your mind.</p>
<p>That said, this commenter’s second sentence is spot on. There is nothing wrong with using other design tools (like “riffing”) to help make progress. I like to point people to <a href="http://www.growing-object-oriented-software.com">Growing Object-Oriented Software Guided by Tests</a> as one of the best books on test-driven development. In it the authors encounter a situation where TDD <em>doesn’t</em> help them move forward, so they reach for a different design technique, <a href="https://en.wikipedia.org/wiki/Class-responsibility-collaboration_card">CRC cards</a>. Another technique I really like is called “reading code” and I use it all the time.</p>
<p>I’ll be the first to admin that I’m a “TDD guy”, but I’ll never tell anyone that they have to do TDD all the time. I just think it’s a good default way of working. By all means, use whatever techniques make the most sense for the problem at hand. If you’re writing a sudoku solver, TDD is <a href="proxy.php?url=https://explaining.software/archive/the-sudoku-affair/">probably not</a> the best approach.</p>
<p>I liked this comment:</p>
<blockquote>
<p>Depends on how long code will live. If you have an app that is 5+ years old, if there are no tests it’s a catastrophe but if it’s well-tested it can actually be safely modified</p>
</blockquote>
<p>There’s a module in a codebase that I no longer work on where I added a test file, but it had no tests, only a comment explaining a few things:</p>
<ul>
<li>I thought it tremendously unlikely that the module would ever need to change.</li>
<li>If it ever did need to change, it would need manual testing anyway.</li>
<li>If it broke, it wouldn’t immediately affect customers, and the organization would notice right away.</li>
<li>Writing tests for the module would either be extremely difficult <em>or</em> result in slow, low-value tests.</li>
</ul>
<p>Code only needs to be easy to change relative to how often it needs to change. A perfectly modular, pluggable design is a waste of effort in code that doesn’t need to change. Similarly, tests are a tool for supporting change. It’s okay to ask yourself, “what will happen if I don’t test this?” and skip tests if the answer is “nothing bad” (and the tests are particularly hard to write).</p>
<p>Before I get too deep into the low-upvote comments, I’ll finish on <a href="https://blowmage.com">blowmage</a>’s succinct answer, perhaps my favourite:</p>
<blockquote>
<p>Yes.</p>
</blockquote>
<h2>In Summary</h2>
<p>If you sift through the comments, you’ll find a mix of people with different experiences with testing. Some commenters have leaned into the test-first paradigm and found value in it. Some commenters found frustration in the experience and chose to write their tests after the fact. Most seem to work at organizations where some amount of testing is expected.</p>
<p>My approach in teaching effective testing, which you’ll hear me talk about in some of <a href="https://jardo.dev/speaking">my conference talks</a>, is focused on that last point. If your organization requires that you test the code that you write, why not get the most value you can from that? How much testing you do and when you do it depends a lot on the type of code you work on, but there are always opportunities to make the testing you do more valuable.</p>
https://jardo.dev/undervalued-the-most-useful-design-pattern
Undervalued: The Most Useful Design Pattern (2025-08-13T00:00:00-07:00)
Jared Norman ([email protected])
Let's explore how we can use data and value objects with the factory method pattern to help decouple the components of our software.

<p><iframe
  src="https://www.youtube.com/embed/4r6D0niRszw"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowfullscreen
  style="aspect-ratio: 16 / 9; width: 100%;"
></iframe></p>
<p>On the ten year anniversary of my first RailsConf, I had the privilege to speak at RailsConf 2024 alongside two other Normans: my little brother, Alistair, and friend, Cody. The talk explores how we can use value objects and data objects (also called data transfer objects) alongside the factory pattern to write decoupled, easily-testable software.</p>
<h2>The Problem: An XML Product Feed</h2>
<p>Let’s examine some code. This code is production-like. It’s code that was taken from a real <a href="https://solidus.io/">Solidus</a> app and modified to fit on my slides. We’re going to explore an approach to refactoring this code.</p>
<pre><code class="ruby">Nokogiri::XML::Builder.new(encoding: "UTF-8") do |xml|
  xml.rss(base_xml_params) do
    Product.available.each do |product|
      xml.entry do
        xml["g"].id(product.id)
        xml["g"].title(product.name)
        xml["g"].description(product.description)
        xml["g"].price("%.2f %s" % [product.price.amount, product.price.currency])
        xml["g"].link(Rails.application.routes.url_helpers.product_url(product, host: @host))
        xml["g"].image_link(product.images.cover_image&.attachment_url || "")
        xml["g"].availability(product.in_stock? ? "In Stock" : "Out of Stock")
        xml["g"].ean_barcode(product.ean_barcode)
      end
    end
  end
end.to_xml
</code></pre>
<p>The code generates an XML feed that is consumed by Google. We won’t concern ourselves with the details of how this code is used. It might be called from a controller, but many stores have catalogs that are too large for that, so it might be called from a background job and uploaded to something like AWS S3.</p>
<p>Don’t worry about the exact details of the code. Here’s a basic run down of how it works. It uses <a href="https://nokogiri.org/index.html">Nokogiri</a>’s <code>Nokogiri::XML::Builder</code>, which yields an <code>xml</code> object that we can call methods on to define the structure of the XML document.</p>
<p>Within the block, we loop over <code>Product.available</code> and create an “entry” for each product with information about that product. <code>Product</code> is an ActiveRecord model and the <code>.available</code> scope queries the products that are currently available for sale on the site.</p>
<p>Within the entry, we’ve got a series of fields that provide Google with information about that specific product:</p>
<ol>
<li>The <code>id</code> field contains a unique identifier for the product. We’re currently using the product’s database <code>id</code>.</li>
<li>We put the product’s <code>name</code> in the <code>title</code> field. This shows the beginning of the disconnect between our application’s understanding of a product and Google’s; the names are different.</li>
<li>The <code>description</code> column in our database maps directly to the <code>description</code> field in the feed. That’s consistent.</li>
<li>We fill the <code>price</code> field with a specially formatted string containing the price of the product.</li>
<li>In the <code>link</code> field we generate the URL for this product. There’s an instance variable in there that provides the host, but we’re not going to worry about it in our refactoring.</li>
<li>The <code>image_link</code> field gets the URL of a “cover image” for the product, if it has one.</li>
<li>The <code>availability</code> field contains another red flag. We put the string “In Stock” or “Out of Stock” depending on whether we have inventory for that product. We already used the <code>available</code> scope on product, and that means something different than “in stock”. Our understanding of what it means for a product to be “available” differs from Google’s.</li>
<li>Like <code>description</code>, <code>ean_barcode</code> matches right up with our model.</li>
</ol>
<p>Don’t concern yourself too much with what the generated XML might look like. We’re concerned with the structure of the code here, not the structure of the XML. The important thing is that you understand that this code uses Nokogiri, queries some products from the database, and builds up an XML document containing data from our database in a special format that Google will consume.</p>
<h2>Coupling and Cohesion</h2>
<p>Coupling and cohesion are related software design heuristics. They are arguably the most important concepts in software design.</p>
<p>Coupling is the degree of interdependence between two different parts of the system. Good designs usually have relatively low coupling. If you make a change in one place, you should not have to make corresponding changes in many other places.</p>
<p>Cohesion is how much the stuff in an object belongs together. <a href="https://en.wikipedia.org/wiki/Single-responsibility_principle">The Single-Responsibility Principle</a> is about cohesion. It’s often stated as:</p>
<blockquote>
<p>A class should have only one reason to change.</p>
</blockquote>
<p>A “reason to change” is a pretty abstract idea. I prefer to talk about what a class “knows about”. Let’s consider the code above. It knows about:</p>
<ul>
<li>The Nokogiri API</li>
<li>How to load products from the database and which products to load</li>
<li>The structure of the XML feed</li>
<li>How to map data from the structure in our database to the fields in the Google feed</li>
<li>How that data needs to be formatted</li>
</ul>
<p>That’s quite a few “responsibilities”, but let’s make it clear why this code knowing about all these things is bad.</p>
<p>Firstly, it’s not very reusable. If we needed to create more than one feed or segment our feed, the code isn’t parameterized to support that. That’s easy to fix, though.</p>
<p>It’s hard to test. Testing this code requires that we load the database with products, and the only output is a string of XML. That means we’re going to have to write tests where the inputs and outputs are very disconnected. The tests will have to save products to the database, generate the XML, parse it, and use some mechanism like XPath selectors to make assertions about the XML.</p>
<p>These tests will be slow, too. They have to load data into the database, including saving images to some kind of file storage (to test the <code>image_link</code> field properly). Those operations are slow. When testing conditional logic, of which this code has some, we want fast tests to avoid bogging down our test suite.</p>
<p>Not only will these tests be slow, but their failures will be hard to understand. If your XPath selectors don’t find the content you’re looking for, you’ll be left reading through the XML document trying to understand why the selector didn’t find what you were looking for.</p>
<p>Let’s explore some patterns that can be useful for improving the design of code like this.</p>
<h2>Data [Transfer] Objects</h2>
<p>Data objects (sometimes called “Data-Transfer Objects” or DTOs) are objects that bundle together data without any meaningful behaviour attached. A common example of data objects are configuration objects. You’ve probably seen some code like this:</p>
<pre><code class="ruby">MyGem.configure do |config|
config.some_setting = true
config.another_setting = "foo"
end
</code></pre>
<p>In many cases, the <code>config</code> object here is just an object with a bunch of reader and writer methods on it. This object is then exposed to you to set those attributes as you see fit, then passed to parts of the gem that depend on those settings so they can vary their behaviour according to your configuration.</p>
<p>The object itself is just a bundle of data. It doesn’t <em>do</em> anything. These objects aren’t limited to configuration. You can use them wherever you have a cohesive set of data that you need to pass around together.</p>
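<p>As a minimal sketch (the class name here is illustrative, not from any real gem), such a configuration object can be nothing more than a class with accessors:</p>

```ruby
# A data object in its simplest form: readers and writers, no behaviour.
class Configuration
  attr_accessor :some_setting, :another_setting
end

config = Configuration.new
config.some_setting = true
config.another_setting = "foo"

config.some_setting     #=> true
config.another_setting  #=> "foo"
```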
<h2>Value Objects</h2>
<p>Value objects are a bit more advanced. Ruby supports all kinds of values out of the box. If you’ve written a bit of Ruby, you’ve almost certainly worked with booleans, symbols, and different kinds of numbers.</p>
<p>We can build more complex kinds of values. Here are some examples of other types of values:</p>
<ul>
<li>Dates, times, and durations</li>
<li>Ranges</li>
<li>Colours</li>
<li>Vectors and coordinates</li>
</ul>
<p>Ruby even has built-in support for some of these. Because of the domain of the example, let’s talk about the most common kind of value object in eCommerce systems: monetary amounts.</p>
<p>Monetary amounts have a value and a currency. Think like $20 USD or ¥1,234 JPY. They behave like numbers when adding or subtracting them <em>only if the currencies match</em>. You <em>can’t</em> add $20 to ¥1,234, but you can add ¥20 and ¥1,234. If you have two monetary amounts, like two instances of $20, they are totally interchangeable. One twenty dollar USD amount is equivalent in all ways to another twenty dollar USD amount. They don’t have an <em>identity</em>.</p>
<p>That’s one of the rules of value objects: no identity. Two equal instances are interchangeable. Another rule is that they are immutable. Just like with Ruby’s built in Integer class, a value object’s methods should return new values.</p>
<p>It would be deeply confusing if <code>10.next</code> mutated the number 10 into the number 11. If suddenly every value of 10 in your app were actually 11, things would go really badly. Values should be immutable.</p>
<p>That’s really all there is to value objects, just two rules:</p>
<ol>
<li>They have no identity and are considered equal (and interchangeable) if their properties are the same.</li>
<li>They are immutable.</li>
</ol>
<p>These objects can be used to model complex domain operations and simplify code. They allow systems to work at a higher level and can hide the underlying structure of information from code that depends on it. They have many uses, but in this example we’re focused on how they can decouple code at domain boundaries.</p>
<h2>Implementing Data and Value Objects</h2>
<p>Before we can refactor our code, we should look at how data and value objects are implemented. There are two important cases, and the distinction between them is immutability. Value objects (which are always immutable) and immutable data objects take one approach; mutable data objects take another. Let’s explore each.</p>
<h3>Value Objects and Immutable Data Objects</h3>
<p>Ruby v3.2 introduced a new class, <a href="https://docs.ruby-lang.org/en/3.2/Data.html">Data</a>. Data allows us to define classes that have immutable properties and handle equality based on those properties. Here’s a simple money example:</p>
<pre><code class="ruby">Money = Data.define(:amount, :currency)
Money.new(10, "USD") == Money.new(1234, "JPY")
#=> false
Money.new(9, "JPY") == Money.new(1234, "JPY")
#=> false
Money.new(10, "USD") == Money.new(10, "USD")
#=> true
</code></pre>
<p>This gives us everything we need for simple value and data objects (though our data objects probably won’t care about this equality property.) If we’re building a value object that supports some kind of operations, we can expand its definition with new methods.</p>
<pre><code class="ruby">Money = Data.define(:amount, :currency) do
  def +(other)
    unless other.is_a?(Money)
      raise TypeError, "Unsupported argument type: #{other.class.name}"
    end

    if other.currency != currency
      raise ArgumentError, "Can only add Money values with the same currency"
    end

    Money.new(amount + other.amount, currency)
  end
end
Money[100, "JPY"] + Money[19, "JPY"]
#=> #<data Money amount=119, currency="JPY">
Money[100, "JPY"] + Money[19, "CAD"]
#=> ArgumentError
</code></pre>
<p>Notice that our <code>+</code> method returns a new instance of the <code>Money</code> class. This preserves the immutable property of our value object.</p>
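<p>Ruby’s <code>Data</code> class also ships with a convenient way to get a modified copy without mutating anything: <code>with</code> returns a new instance with the given attributes replaced. A quick sketch:</p>

```ruby
Money = Data.define(:amount, :currency)

ten = Money.new(amount: 10, currency: "USD")
twenty = ten.with(amount: 20) # returns a new Money; ten is untouched

ten.amount       #=> 10
twenty.amount    #=> 20
twenty.currency  #=> "USD"
```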
<h3>Mutable Data Objects</h3>
<p>Classes defined by <code>Data.define</code> have immutable properties. They don’t have any writer methods. You can’t do something like this:</p>
<pre><code class="ruby">ten_bucks = Money[10, "USD"]
ten_bucks.amount = 20
#=> NoMethodError
</code></pre>
<p>If you need to be able to mutate the attributes of your data objects, you need to use <a href="https://docs.ruby-lang.org/en/3.2/Struct.html">Struct</a>. <code>Struct</code> works nearly identically to <code>Data</code>, but it <em>does</em> have writer methods. This makes it perfect for our <em>mutable</em> data objects.</p>
<pre><code class="ruby">module MyGem
  Config = Struct.new(:foo_enabled, :bar_size, :baz_name)

  def self.config
    @config ||= Config.new(
      foo_enabled: false,
      bar_size: 1,
      baz_name: "Jardo"
    )
  end

  def self.configure
    yield config
  end
end

MyGem.configure do |c|
  c.foo_enabled = true
  c.bar_size = 69
  c.baz_name = "Jared"
end

MyGem.config
#=> #<struct MyGem::Config foo_enabled=true, bar_size=69, baz_name="Jared">
</code></pre>
<p>Mutability is the only difference between <code>Data</code> and <code>Struct</code> that we care about here. In fact, if you’re not yet running Ruby 3.2, you can use structs in place of data objects and simply avoid mutating them (though on older Rubies you’ll need <code>keyword_init: true</code> if you want keyword construction). But really you should just upgrade.</p>
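<p>If you do want to approximate a value object on an older Ruby, one approach (a sketch, not a battle-tested pattern) is to freeze the struct on construction:</p>

```ruby
# keyword_init: true is required before Ruby 3.2 for keyword construction.
Money = Struct.new(:amount, :currency, keyword_init: true) do
  def initialize(*args, **kwargs)
    super
    freeze # approximate Data's immutability
  end
end

m = Money.new(amount: 10, currency: "USD")
m.frozen?                                    #=> true
m == Money.new(amount: 10, currency: "USD")  #=> true (structs compare by value)
```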
<h2>Factory Methods</h2>
<p>The Factory Method is a pattern that can be used in many contexts outside of value and data objects, but is commonly used with them. A factory method is simply an alternate constructor. Let’s say that you store prices in your database, but sometimes you want to do some money math on them. It’s preferable to work with money objects for these operations since they enforce all the domain rules for working with monetary amounts. We might write a method like this:</p>
<pre><code class="ruby">Money = Data.define(:amount, :currency) do
  def self.from_price(price)
    new(price.amount, price.currency)
  end
end
</code></pre>
<p>This would allow us to convert our prices to money amounts, decoupling the data we’re working with from the database, and do whatever math we need with them. We can do the same with mutable data objects.</p>
<p>We might have configuration objects that describe what features are currently available in a certain context in our app. We can write a factory method that examines a user and determines which features they might be able to access. It could look something like this:</p>
<pre><code class="ruby">Config = Struct.new(:feature_a_enabled, :feature_b_enabled) do
  def self.from_user(user)
    new(
      feature_a_enabled: user.membership_active?,
      feature_b_enabled: user.tier == :premium
    )
  end
end
</code></pre>
<p>This pattern allows us to take existing data, extract the information we need from it, and convert it into a new object that has no reference to where that information came from. We can even construct these objects without using their factory methods, making it possible to test code that depends on the value objects and data objects without ever constructing a price or user.</p>
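<p>For example, a test for code that consumes the config could build one directly, no user in sight (a sketch reusing the <code>Config</code> shape from above, with positional construction):</p>

```ruby
Config = Struct.new(:feature_a_enabled, :feature_b_enabled)

# In a test we can skip the factory method and the user entirely:
config = Config.new(true, false)

config.feature_a_enabled  #=> true
config.feature_b_enabled  #=> false
```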
<h2>A Better Feed Implementation</h2>
<p>We now have the pieces we need for a better implementation. In the video I walk through this refactoring step-by-step, but here I’m just going to cut to the chase. The calling code is going to change from what we showed originally, to this:</p>
<pre><code class="ruby">google_products = Product.available.map { |product|
  GoogleProduct.from_product product
}

GoogleProductFeed.new(google_products).to_xml
</code></pre>
<p>This code queries the products that we want in our feed, converts them to <code>GoogleProduct</code> data objects, then feeds those into a <code>GoogleProductFeed</code> class. Let’s look at <code>GoogleProduct</code> first:</p>
<pre><code class="ruby">GoogleProduct = Data.define(
  :id, :title, :description, :price, :link,
  :image_link, :available, :ean_barcode
) do
  def self.from_product(product)
    new(
      id: product.id,
      title: product.name,
      description: product.description,
      price: Money.from_price(product.price),
      link: Rails.application.routes.url_helpers.product_url(product, host: @host),
      image_link: product.images.cover_image&.attachment_url || "",
      available: product.in_stock?,
      ean_barcode: product.ean_barcode
    )
  end
end
</code></pre>
<p>This class now knows how to take a product from the database and convert it into a domain object that represents Google’s understanding of a product. Once we have an instance of <code>GoogleProduct</code>, we’re fully decoupled from our database products. This class doesn’t hold a reference to the original product and (through its regular constructor) can even be constructed without a product model from the database.</p>
<p>The <code>GoogleProductFeed</code> class now only knows about the Nokogiri API and the structure of our XML feed. It looks like this:</p>
<pre><code class="ruby">class GoogleProductFeed
  def initialize(google_products)
    @google_products = google_products
  end

  def to_xml
    Nokogiri::XML::Builder.new(encoding: "UTF-8") do |xml|
      xml.rss(base_xml_params) do
        @google_products.each do |product|
          xml.entry do
            xml["g"].id(product.id)
            xml["g"].title(product.title)
            xml["g"].description(product.description)
            xml["g"].price("%.2f %s" % [product.price.amount, product.price.currency])
            xml["g"].link(product.link)
            xml["g"].image_link(product.image_link)
            xml["g"].availability(product.available ? "In Stock" : "Out of Stock")
            xml["g"].ean_barcode(product.ean_barcode)
          end
        end
      end
    end.to_xml
  end
end
</code></pre>
<p>Our query logic, formatting logic, and XML structure are no longer mixed into one class. The top-level code queries for the data, hands it off to <code>GoogleProduct</code>'s factory method to extract the relevant information, then passes that data along to <code>GoogleProductFeed</code> to construct the XML.</p>
<h2>Handling Change</h2>
<p>A good design is easy to change, so let’s explore how this design might handle different kinds of change, starting with the calling code:</p>
<pre><code class="ruby">google_products = Product.available.map { |product|
  GoogleProduct.from_product product
}

GoogleProductFeed.new(google_products).to_xml
</code></pre>
<p>This code only “knows about” which products to put in the feed. Nothing here needs to change if the details of the feed change. The only likely kind of change that would require edits here is a change to which products are supposed to be in this feed.</p>
<p>Next, let’s examine our data object, the <code>GoogleProduct</code>:</p>
<pre><code class="ruby">GoogleProduct = Data.define(
  :id, :title, :description, :price, :link,
  :image_link, :available, :ean_barcode
) do
  def self.from_product(product)
    new(
      id: product.id,
      title: product.name,
      description: product.description,
      price: Money.from_price(product.price),
      link: Rails.application.routes.url_helpers.product_url(product, host: @host),
      image_link: product.images.cover_image&.attachment_url || "",
      available: product.in_stock?,
      ean_barcode: product.ean_barcode
    )
  end
end
</code></pre>
<p>The class handles the mapping of our database products to Google’s understanding of a product. It might need to change for a variety of reasons:</p>
<ul>
<li>It will need to change if we need to add or remove fields from our feed.</li>
<li>It will need to change if the contents or formats of these fields change.</li>
<li>It will need to change if the structure of our database changes.</li>
</ul>
<p>That’s a few different sources of change (and I can technically think of more), but it’s all focused on one responsibility: mapping our domain model of a product to Google’s domain model of a product. From a design perspective, that’s great.</p>
<p>This class is easily testable, doesn’t depend on products being saved to the database, and provides a nice clear mapping between the names and concepts of the two systems.</p>
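<p>As a sketch of what that kind of test could look like (<code>FakeProduct</code> is a stand-in I’m inventing here, and <code>GoogleProduct</code> is trimmed to three fields for brevity):</p>

```ruby
GoogleProduct = Data.define(:id, :title, :available) do
  def self.from_product(product)
    new(id: product.id, title: product.name, available: product.in_stock?)
  end
end

# Any object with the right methods will do; no database involved.
FakeProduct = Struct.new(:id, :name, :in_stock) do
  def in_stock?
    in_stock
  end
end

google_product = GoogleProduct.from_product(FakeProduct.new(1, "Shirt", true))

google_product.title      #=> "Shirt"
google_product.available  #=> true
```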
<p>Finally, let’s look at the <code>GoogleProductFeed</code> class:</p>
<pre><code class="ruby">class GoogleProductFeed
  def initialize(google_products)
    @google_products = google_products
  end

  def to_xml
    Nokogiri::XML::Builder.new(encoding: "UTF-8") do |xml|
      xml.rss(base_xml_params) do
        @google_products.each do |product|
          xml.entry do
            xml["g"].id(product.id)
            xml["g"].title(product.title)
            xml["g"].description(product.description)
            xml["g"].price("%.2f %s" % [product.price.amount, product.price.currency])
            xml["g"].link(product.link)
            xml["g"].image_link(product.image_link)
            xml["g"].availability(product.available ? "In Stock" : "Out of Stock")
            xml["g"].ean_barcode(product.ean_barcode)
          end
        end
      end
    end.to_xml
  end
end
</code></pre>
<p>This class is relatively isolated from change. It might need to change if:</p>
<ul>
<li>the fields we want in the feed change</li>
<li>we were wrong about the structure or formatting of the feed</li>
<li>Nokogiri’s API changes</li>
</ul>
<p>Beyond that, it’s well protected from change. It doesn’t know about the database or even if there <em>is</em> a database. It doesn’t know where the <code>GoogleProduct</code> instances come from. All it knows about is the structure and format of the feed, and the fields that <code>GoogleProduct</code> has.</p>
<p>Because it is isolated in this way, the tests for it will be extremely fast. They can construct some <code>GoogleProduct</code> instances, pass them in, and verify the structure and format of the XML. Yes, there’ll be some XML parsing in these tests, but this is the only class that knows anything about XML, so that’s expected.</p>
<h2>Drawing Boundaries</h2>
<p>Let’s talk about the pattern we’ve implemented in this code. Our code is responsible for taking products from our database and creating a specially formatted XML feed containing information about those products.</p>
<p>We identified that there’s a domain boundary hidden inside this task. We’re bridging our application’s understanding of a product and Google’s. The two understandings were incongruous. Some fields had different names. The term “available” meant something different on each side.</p>
<p>By refactoring the code, we separated the concerns. One object, <code>GoogleProductFeed</code>, knew only about generating XML and the XML being generated. Another object, <code>GoogleProduct</code>, mapped our understanding of a product to Google’s. The top-level code was left gluing these pieces together.</p>
<p>Effectively, this design draws a boundary between our domain and Google’s. <code>GoogleProduct</code>'s factory method is that line. Once a <code>GoogleProduct</code> has been initialized, it doesn’t know anything about our database models or the structure of our database. Objects, like <code>GoogleProductFeed</code>, that consume those objects <em>aren’t dependent on where they came from</em>.</p>
<h2>Factory Method Tips</h2>
<p>Combining factory methods with data and value objects is an effective way to draw boundaries in our system. That’s where this pattern is most useful. Look for places where you’re reconciling two different domain models (like in our example) or places where some kind of transformation is taking place.</p>
<p>For example, you might use this pattern when transforming an eCommerce order into an object that represents a shipping label for that order. It’s within one domain model, but there’s a conceptual transformation happening.</p>
<p>You can also use the pattern at the boundaries of a module as a mechanism for information-hiding. Rather than leaking your module’s internal objects, you can transform them into simpler data or value objects, keeping your modules decoupled.</p>
<h3>Multiple Factory Methods</h3>
<p>Giving a single value or data object multiple factory methods allows multiple parts of your system to converge on shared data or value objects. We use this pattern in Solidus around pricing options.</p>
<p>We need to know what parameters to use to select a price in different contexts. Sometimes pricing options are constructed from an HTTP request for determining what price to show in a view. Other times pricing options are constructed from an order, so that we know what price to apply to an item in that order.</p>
<p>This makes the parts of the system that operate on pricing options simpler and more testable, because they don’t need to know where the pricing options came from.</p>
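<p>A rough sketch of the shape of this (the names are illustrative; Solidus’s real pricing options are more involved):</p>

```ruby
PricingOptions = Data.define(:currency, :country_code) do
  # Build pricing options from an HTTP-request-like object.
  def self.from_request(request)
    new(currency: request.params["currency"], country_code: request.params["country"])
  end

  # Build the same options from an order.
  def self.from_order(order)
    new(currency: order.currency, country_code: order.ship_country)
  end
end

# Stand-ins for a request and an order:
FakeRequest = Struct.new(:params)
FakeOrder = Struct.new(:currency, :ship_country)

from_request = PricingOptions.from_request(
  FakeRequest.new({ "currency" => "USD", "country" => "US" })
)
from_order = PricingOptions.from_order(FakeOrder.new("USD", "US"))

from_request == from_order  #=> true: both paths converge on interchangeable values
```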
<h3>One-to-Many, Many-to-One, Many-to-Many</h3>
<p>You can write factory methods that take one object and return many value/data objects. You can also create factory methods that take many objects and return only one. Both cases are extremely useful.</p>
<p>Consider our feed example. It’s possible that we misunderstood what Google’s products mapped to in our system. Rather than mapping to what we call “products”, they actually mapped to individual SKUs, what we call “variants”. If a t-shirt comes in six different sizes, we want a <code>GoogleProduct</code> for each size.</p>
<p>In this case, we could modify our factory method to return an array of objects, one for each size. The <code>GoogleProductFeed</code> object wouldn’t even need to change.</p>
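<p>Sketching that change (the variant fields here are assumptions about a hypothetical schema):</p>

```ruby
GoogleProduct = Data.define(:id, :title) do
  # One product in, many GoogleProducts out: one per variant/SKU.
  def self.from_product(product)
    product.variants.map do |variant|
      new(id: variant.sku, title: "#{product.name} - #{variant.size}")
    end
  end
end

FakeVariant = Struct.new(:sku, :size)
FakeProduct = Struct.new(:name, :variants)

shirt = FakeProduct.new("T-Shirt", [FakeVariant.new("TS-S", "S"), FakeVariant.new("TS-M", "M")])

GoogleProduct.from_product(shirt).map(&:id)  #=> ["TS-S", "TS-M"]
```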
<p>What matters is that there’s a transformation, not how many values are on each side of that transformation. You might even have transformations where you’re transforming multiple <em>kinds</em> of values into something else. This pattern works just as well in that situation.</p>
<h3>Highlight What Matters</h3>
<p>When you pass an ActiveRecord model around, consumers have full access to the entire ActiveRecord API, on top of all the methods you’ve added to that class. It’s not going to be obvious without reading through the code which parts of the API the consumer actually cares about.</p>
<p>Conversely, when you pass data and value objects around, their API is very limited. You make it very clear that there are only a few possible values that the consumer could <em>possibly</em> care about.</p>
<h3>Facade, More Like Fa-bad</h3>
<p>I’m writing this more than a year after giving the talk, disappointed that it took me this long to come up with that (terrible) pun. I’m especially disappointed because I’ve been railing against the facade and decorator patterns for so long.</p>
<p>These patterns pop up again and again in different contexts, but let’s use view objects as our example. These are objects that typically wrap database models and provide extra functionality that is only necessary in views. Here’s <a href="https://jetthoughts.com/blog/cleaning-up-your-rails-views-with-view-objects-development/">an article</a> on them that happens to be at the top of my search results for “view object rails” right now.</p>
<p>The issue with these objects is that they suffer from the exact issue we’ve been trying to avoid in this talk. To instantiate them, you still need to pass in an instance of the underlying model. You need it for tests and you need it in production code. I don’t think you should be transforming all your database models into data objects just for your views, but that’s because there’s no domain boundary at play.</p>
<p>The facade pattern isn’t inherently bad. You <em>should</em> be using it for creating wrappers around third-party APIs and complex interfaces. It’s just a bad fit when you’re transforming data at a boundary.</p>
<h2>In Summary</h2>
<p>Whether using value objects to represent something in your domain, or data objects to bundle related data together, combining them with factory methods will help you draw nice, clean boundaries in your systems. Because these kinds of objects can be initialized independently from the inputs to their factory methods, you can avoid coupling in your system <em>and</em> your tests.</p>
https://jardo.dev/the-git-hub-part-is-no-longer-the-productThe “Git” “Hub” Part Is No Longer the Product2025-08-11T00:00:00-07:00Jared Norman[email protected]Microsoft announced today that the part of GitHub that matters to most people is not the purpose of the platform.<p>I woke up this morning to the news that the CEO of GitHub was stepping down. GitHub hosts (what I assume is) the vast majority of open-source projects, so the company has a tremendous amount of power to shape the average developer’s interactions with the open-source world.</p>
<p><a href="https://www.theverge.com/news/757461/microsoft-github-thomas-dohmke-resignation-coreai-team-transition">GitHub just got less independent at Microsoft after CEO resignation</a></p>
<p>I wouldn’t be writing about it if this was <em>just</em> a leadership change. The detail that prompted me to comment on the news was that GitHub is being stripped of its independence.</p>
<blockquote>
<p>Microsoft isn’t replacing Dohmke’s CEO position, and the rest of GitHub’s leadership team will now report more directly to Microsoft’s CoreAI team.</p>
</blockquote>
<p>It seems that GitHub will now operate as just another product under the Microsoft umbrella, a product within the AI branch of the organization.</p>
<p>I’m not foolish enough to try to predict exactly what this means for GitHub. There’s already been a clear focus on AI at the organization for the past few years. No one will give me favourable betting odds that we’ll see an increased focus on Copilot and other LLM tools.</p>
<p>What I’ll add to the conversation about this news is that I’m disappointed. When I was younger, I was enamoured by the potential of open-source software. As an idealistic teen, I believed that <em>all</em> software should be open-source.</p>
<p>I’ve since grown more pragmatic. I admit that there are issues with open-source. Some of those issues are solvable, but there are others that I’m not so sure are. Governance is a thorny problem. Funding maintenance isn’t always feasible. Users and contributors have wildly varying expectations. The open-source world can be messy.</p>
<p>There’s lots to like about open-source, too. My experience working on <a href="https://solidus.io">Solidus</a> has been decidedly positive. I’ve built a business helping organizations make the most of it, an open-source project. Companies all over the world derive a ton of “value” (a.k.a. billions of dollars in sales) from it. Solidus is a great fit for the open-source model, and I don’t think it could have succeeded without it.</p>
<p>So, like I said, I’m disappointed. Many projects, wary of the proprietary nature of GitHub, fearing its eventual monetization, watching the trend of <a href="https://doctorow.medium.com/https-pluralistic-net-2024-04-04-teach-me-how-to-shruggie-kagi-caaa88c221f2">enshittification</a> in the tech industry, have moved to other platforms. Some have been hosting their own code all along, having never seriously considered proprietary hosting. I respect them for putting the work in, especially at the expense of visibility in the open-source world.</p>
<p>GitHub leveraged a wave within the tech industry to become the dominant code hosting platform for both open-source and proprietary software. They led the world to believe that their product was private code hosting, a product whose value was augmented by also being the de facto host of the open-source world. It was a model that made sense.</p>
<p>Moving GitHub under the CoreAI umbrella makes it clear that AI is the product. Copilot is the product. GitHub.com is an avenue to sell that product. What’s worse is that due to GitHub’s existing position, the code-hosting product can be a very effective avenue without having to fight off competitors. They already won.</p>
<p>We find ourselves in a position that anyone with any sense always knew we’d reach. Capital, in its quest to extract value wherever it can, has found in GitHub a new source of value. Being the world’s premier code-hosting platform was not enough to protect GitHub. This news confirms it.</p>
<p>That which can be used to shill AI will be used to shill AI.</p>
https://jardo.dev/how-to-tame-your-mastodon-feedHow to Tame Your Mastodon Feed2025-08-10T00:00:00-07:00Jared Norman[email protected]I'm a heavy Mastodon user and this is how I make sure I don't miss out on the good stuff.<p>Mastodon is a platform where you get out what you put in. The lack of recommendations makes it hard to onboard. There are no algorithmic feeds of posts to browse or "who to follow" algorithms pushing you to start following people immediately. This is definitely one of the many things that platforms like Bluesky do much better than Mastodon, helping drive their adoption.</p>
<p>I'm not here to critique Mastodon, though. My feed is plenty busy, in fact <em>too</em> busy. Fortunately, Mastodon's lists make it really easy to get to the content I care about, and provide me with something to look at when I just want to scroll.</p>
<p>In the web interface, lists are accessed on the right of the feed. Clicking on "Lists" there will allow you to manage your lists. You can create lists, delete them, and go into each list to manage its users and settings.</p>
<p>The important feature that you need to know about is the "Hide members in Home" checkbox. Checking this box will cause posts from members of that list <em>not to show up in your home feed</em>. You can use this in a couple of ways.</p>
<p>I have two lists. The first is all the people whose posts I really like. My friends are on this list. People who consistently share stuff I'm interested in are in this list. <em>Good posters</em> are in this list. 99% of the time, if I'm browsing Mastodon, I'm just looking at this list.</p>
<p>The other list is essentially a mute list. It's a list of accounts that I want to follow for some reason, but they either post too much or regularly post content that I don't want in my home feed. I rarely if ever look at this one. It's a <em>very</em> short list.</p>
<p>This setup is great. Mostly I just browse my "quality" list, and stay up to date with it. If I'm looking to scroll and I'm already caught up on the good stuff, I browse my home feed, which stays spam-free since accounts on my 🤫 list are excluded from it.</p>
<p>I like this setup because there's no infinite feed to scroll. My quality list is trimmed down enough to consistently keep up with and enjoy, and there's a finite source of more content if I need it. If I want to follow someone, I can do so without worrying about them spamming the quality feed, and if I find I enjoy their posts enough, I can promote them to the quality feed.</p>
https://jardo.dev/generating-custom-open-graph-imagesGenerating Custom Open Graph Images2025-08-09T00:00:00-07:00Jared Norman[email protected]Ever wonder how to generate Open Graph preview images like the one you might be looking at right now?<p>Open Graph images are the images that you see when you share a page on social media. The goal of the <a href="https://ogp.me">Open Graph Protocol</a> is to expose information about web pages so that they can "become a rich object in a social graph". I don't know how much that goal is achieved by the protocol, but I like when my posts have nice preview images. It helps them stand out.</p>
<p>There are a <em>ton</em> of different ways to use these images. Some bloggers have a big banner image for each post and use that for the social image. Some blogs just pull the first image in the post as the social sharing image. Others do what I do, render a custom image that contains the logo for my site, the title of the post, and a little description.</p>
<p><strong>Accessibility matters:</strong> You might be wondering if putting text in an image goes against accessibility. The text I put in these images doesn't contain anything that isn't available through the other Open Graph attributes, but you can use the <code>og:image:alt</code> attribute to provide alt text for your Open Graph images to make them properly accessible.</p>
<p>At the time of writing, my site doesn't use that attribute, but I've made a note to fix that.</p>
<p>At the time of writing, here's how my Open Graph images are generated. Each post references an <code>og:image</code> URL on a CDN, currently AWS CloudFront. The URL for this post looks something like this: <code>https://d75lo4uzaao03.cloudfront.net/generating-custom-open-graph-images.png?t=1754776938</code>. The <code>t</code> parameter contains the timestamp that the post was last updated, ensuring the URL changes when the post changes.</p>
<p>The CDN is configured to hit a <em>very</em> simple Rack app that I wrote that looks at the slug in the URL, spins up Chrome, hits a special URL on this site, screenshots it, and returns the screenshot. The CDN then caches the value.</p>
<p>Here's the meat of the Rack app. Nearly all the complexity here is to do with managing Chrome, and I don't even think it's all necessary. I had a simpler version of this app and lost it, so this version was written by Claude Sonnet 4, which added a bunch of garbage when it was trying to diagnose why things weren't working in prod.</p>
<pre><code class="ruby">require 'rack'
require 'ferrum'
require 'tempfile'

class ScreenshotApp
  def call(env)
    request = Rack::Request.new(env)
    path = request.path_info

    # Match paths like /some-slug.png
    if path =~ /^\/(.+)\.png$/
      slug = $1

      begin
        png_data = generate_screenshot(slug)

        [200, {
          'Content-Type' => 'image/png',
          'Content-Length' => png_data.bytesize.to_s,
          'Cache-Control' => 'public, max-age=3600'
        }, [png_data]]
      rescue => e
        [500, {'Content-Type' => 'text/plain'}, ["Error generating screenshot: #{e.message}"]]
      end
    else
      [404, {'Content-Type' => 'text/plain'}, ['Not Found']]
    end
  end

  private

  def generate_screenshot(slug)
    url = "https://jardo.dev/og-previews/#{slug}"
    puts "Generating screenshot for: #{url}"

    browser_options = {
      'no-sandbox' => nil,
      'disable-gpu' => nil,
      'disable-dev-shm-usage' => nil,
      'disable-background-timer-throttling' => nil,
      'disable-backgrounding-occluded-windows' => nil,
      'disable-renderer-backgrounding' => nil,
      'disable-features' => 'TranslateUI,VizDisplayCompositor',
      'disable-ipc-flooding-protection' => nil,
      'disable-web-security' => nil,
      'disable-extensions' => nil,
      'hide-scrollbars' => nil,
      'mute-audio' => nil,
      'no-first-run' => nil,
      'disable-default-apps' => nil,
      'disable-popup-blocking' => nil,
      'disable-translate' => nil,
      'disable-background-networking' => nil,
      'disable-sync' => nil,
      'metrics-recording-only' => nil,
      'no-default-browser-check' => nil,
      'single-process' => nil,
      'memory-pressure-off' => nil
    }

    # Try different Chrome binary paths
    chrome_path = find_chrome_binary
    puts "Using Chrome binary: #{chrome_path || 'default'}"

    browser = Ferrum::Browser.new(
      headless: true,
      window_size: [1200, 600],
      timeout: 30,
      process_timeout: 30,
      browser_path: chrome_path,
      browser_options: browser_options
    )

    begin
      puts "Navigating to URL..."
      browser.go_to(url)

      puts "Setting viewport size..."
      browser.resize(width: 1200, height: 600)

      puts "Page loaded, waiting for content..."
      # Wait a bit for the page to fully load
      sleep(2)

      puts "Taking screenshot..."
      screenshot = browser.screenshot(encoding: :binary, full: false)
      puts "Screenshot generated successfully"

      screenshot
    rescue => e
      puts "Error during screenshot generation: #{e.message}"
      puts "Error backtrace: #{e.backtrace.first(5).join("\n")}"
      raise e
    ensure
      browser&.quit
    end
  end

  def find_chrome_binary
    # Check if BROWSER_PATH environment variable is set
    return ENV['BROWSER_PATH'] if ENV['BROWSER_PATH'] && File.exist?(ENV['BROWSER_PATH'])

    possible_paths = [
      '/usr/bin/chromium-browser',
      '/usr/bin/chromium',
      '/usr/bin/google-chrome-stable',
      '/usr/bin/google-chrome',
      '/opt/google/chrome/chrome',
      '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome'
    ]

    possible_paths.each do |path|
      return path if File.exist?(path)
    end

    # If no specific path found, let Ferrum try to find it
    nil
  end
end
</code></pre>
<p>To recap, here's the flow:</p>
<ol>
<li>The blog post references the CDN-hosted Open Graph image: <code>https://d75lo4uzaao03.cloudfront.net/generating-custom-open-graph-images.png?t=1754776938</code></li>
<li>The CDN receives this request, and attempts to fetch the image from the origin, a Rack app: <code>https://secret-url-here.com/generating-custom-open-graph-images.png</code></li>
<li>The Rack app uses <a href="proxy.php?url=https://github.com/rubycdp/ferrum">Ferrum</a> to fetch a page on this site which renders a nice looking preview page: <code>https://jardo.dev/og-previews/generating-custom-open-graph-images</code></li>
<li>The Rack app returns a screenshot of that page, then goes back to sleep.</li>
<li>The CDN caches the returned image until the timestamp changes.</li>
</ol>
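<p>To make step 1 concrete, here's a hypothetical sketch of how the per-post values could be computed, including the <code>og:image:alt</code> text mentioned earlier. The <code>Post</code> struct and helper name are invented for illustration; the CDN host is the one from the example URL above, and rendering these properties as meta tags is left to the template.</p>
<pre><code class="ruby">Post = Struct.new(:title, :description, :slug, :updated_at, keyword_init: true)

def og_image_properties(post)
  # The t parameter busts the CDN cache whenever the post is updated.
  image_url = "https://d75lo4uzaao03.cloudfront.net/#{post.slug}.png?t=#{post.updated_at.to_i}"

  {
    "og:image" => image_url,
    # The alt text only repeats information already exposed by other attributes.
    "og:image:alt" => "#{post.title}: #{post.description}"
  }
end
</code></pre>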
<p>The Rack app costs me basically nothing to run because it spends 99.999% of the time shut down. It starts nearly instantly, serves a single request the first time someone requests an image that isn't yet on the CDN, then goes back to sleep.</p>
<p>The app also doesn't need to change, even if I redesign my site. If I were to overhaul everything, I could update my preview image endpoint with the new styles, expire all the images on the CDN, and everything would still continue to work. The only real cost is the CDN, but I don't get enough traffic that it amounts to a meaningful cost.</p>
<p><a href="proxy.php?url=https://ogp.me">Open Graph protocol</a></p>
https://jardo.dev/lets-get-bakedLet's Get Baked2025-08-01T00:00:00-07:00Jared Norman[email protected]Rake is great, but is Bake better? Let's go through setting it up on a new project to see!<p>I just made the same mistake I always make: I named a file containing some Rake tasks with the wrong file extension, <code>.rb</code> instead of <code>.rake</code>. This is made all the more embarrassing because I'm not working on a Rails app. This is the Ruby/Rack app we're building at <a href="proxy.php?url=https://supergood.software">Super Good</a> and I'd <em>just</em> written the code that grabbed all the <code>lib/tasks/*.rake</code> files and loaded them. Old habits die hard.</p>
<p>When I <a href="proxy.php?url=https://ruby.social/@jardo/114949590959450502">posted</a> about it, Sean Collins <a href="proxy.php?url=https://ruby.social/@cllns/114949796655299215">pointed</a> me towards <a href="proxy.php?url=https://github.com/ioquatix/bake">Bake</a>, an alternative to Rake created by <a href="proxy.php?url=https://www.codeotaku.com/">Samuel Williams</a>. (That sentence had too many links.) There's nothing <em>wrong</em> with Rake, but the Bake README explains that it improves on Rake in four main ways:</p>
<blockquote>
<ul>
<li>On demand loading of files following a standard convention.</li>
<li>This avoid loading all your rake tasks just to execute a single command.</li>
<li>Better argument handling including support for positional and optional arguments.</li>
<li>Focused on task execution not dependency resolution. Implementation is simpler and a bit more predictable.</li>
<li>Canonical structure for integration with gems.</li>
</ul>
</blockquote>
<p>I don't think that last bullet is relevant to us, but the other features sound cool. I'm always curious to try new things, so let's try moving our app over to Bake.</p>
<h2>The Status Quo</h2>
<p>This app doesn't have very many tasks yet, only these five:</p>
<ol>
<li><code>environment</code>: This one loads the application environment by using <code>require_relative</code> to load <code>config/environment.rb</code>, which does all the work of setting up the application.</li>
<li><code>db:create</code>: This creates the development and test databases if they don't exist. (For reasons we don't need to get into, we don't need this in production.)</li>
<li><code>db:drop</code>: You'll never guess what this does. This is configured so you can't run it with <code>RACK_ENV=production</code> because we don't want to drop our production database. We're weird like that.</li>
<li><code>db:new_migration</code>: This takes a name argument and creates a new, blank migration in <code>db/migrations</code>, naming it by attaching a timestamp to the provided argument.</li>
<li><code>db:migrate</code>: If no arguments are given, this runs all the unrun migrations in the app. If you give it a version, it will migrate the database up or down to that version.</li>
</ol>
<p>Our Rakefile is super minimal too. It's just this:</p>
<pre><code class="ruby"># Load all tasks from the lib/tasks directory.
Dir[File.join(File.dirname(__FILE__), 'lib', 'tasks', '*.rake')].each { |file| load file }
</code></pre>
<p>This gives us a convenient mix of tasks that take arguments, tasks that don't, and even one that takes an optional argument. Additionally, the <code>db:migrate</code> task (unlike the others) has a dependency on the <code>environment</code> task. This all makes for a pretty good test of Bake.</p>
<h2>The Bake-sics</h2>
<p>Bake provides <a href="proxy.php?url=https://ioquatix.github.io/bake/guides/getting-started/index">a guide for getting started</a>. The first step is to add it to the Gemfile. The simplest setup would be to define our tasks as methods in a <code>bake.rb</code> file at the root of our project. Tasks are defined as methods, like this:</p>
<pre><code class="ruby"># bake.rb

# Add the x and y values together and print the result.
# @parameter x [Integer]
# @parameter y [Integer]
def add(x, y)
  puts x + y
end
</code></pre>
<p>Bake detects not just the method (making it runnable as <code>bundle exec bake add 3 5</code>) but also the comments. It parses the string explaining what the task does as well as the types of the arguments, which it uses to perform type coercion for us. That's handy.</p>
<p>If we want to write methods that aren't exposed as tasks, we can just define them as private methods. (Defining private methods in your Rake tasks, while sometimes done, could technically cause unwanted side effects. I've <a href="proxy.php?url=https://supergood.software/dont-step-on-a-rake/">written about that</a>.)</p>
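<p>A minimal sketch of that, with an invented task and helper:</p>
<pre><code class="ruby"># bake.rb

# Greet someone by name.
# @parameter name [String]
def greet(name)
  puts format_greeting(name)
end

private

# Not exposed as a task, just a regular private helper method.
def format_greeting(name)
  "Hello, #{name}!"
end
</code></pre>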
<p>Rake gives you a list of available (documented) commands with <code>rake -T</code>. <code>bake list</code> is the equivalent command, and you can even specify a specific command you want to see the docs for.</p>
<pre><code>❯ bundle exec bake list add
Bake::Registry::BakefileLoader /Users/jardo/Codes/redacted/bake.rb
add x y
Add the x and y coordinate together and print the result.
x [Integer]
y [Integer]
</code></pre>
<p>I like it.</p>
<h2>Baking Show</h2>
<p>We've got a small number of tasks, but we want to keep them namespaced and we anticipate adding more tasks (and more namespaces), so we don't want to define them all right in our <code>bake.rb</code>. Bake supports this through facilities for integrating it into <a href="proxy.php?url=https://ioquatix.github.io/bake/guides/project-integration/index">projects</a> and <a href="proxy.php?url=https://ioquatix.github.io/bake/guides/gem-integration/index">gems</a>. I'm mostly going to discuss the project integration, because I've not tried out the gem stuff (yet).</p>
<p>The tasks defined in your <code>bake.rb</code> are considered private to the project/gem. (If we were working on a gem, consumers of the gem wouldn't be able to access them.) Since our <code>environment</code> task doesn't need to be namespaced, let's put it in there. That looks like this:</p>
<pre><code class="ruby"># Load the application environment
def environment
  require_relative "config/environment"
end
</code></pre>
<p>We can test this by running <code>bundle exec bake environment</code>. This just prints "true", but that's what we expect. Perfect.</p>
<p>Do not bake the environment. Climate change is serious and is baking the environment enough as it is. Don't contribute to that any more. Instead bake cookies, quiches, and pot pies (and push for policy changes at all levels of government that will have real impact).</p>
<p>Bake looks for "recipes" in <code>bake/</code>. A file at <code>bake/frog.rb</code> that defines a method called <code>ribbit</code> will create a task called <code>frog:ribbit</code>. We'll use that to define the tasks we need in <code>bake/db.rb</code>. The ones without arguments will look like this:</p>
<pre><code class="ruby"># Create the development and test databases
def create
  # The guts of the task, copied straight out of
  # lib/tasks/db.rake go here.
end

# Drop the development and test databases
def drop
  # The guts of the task, copied straight out of
  # lib/tasks/db.rake go here.
end
</code></pre>
<p>Next up, let's do <code>db:new_migration</code>, the task with the required argument. In our Rake version, we have to work with Rake's somewhat esoteric argument handling. (It's not that bad, but it's certainly not natural.) This looked something like this:</p>
<pre><code class="ruby">task :new_migration, [:name] do |_t, args|
  # code...
  migration_name = args[:name]

  if migration_name.nil?
    # provide a useful error message and exit
  end

  # more code...
end
</code></pre>
<p>The Bake version is much cleaner. Let's take a look:</p>
<pre><code class="ruby"># Create a new migration
# @parameter name [String] the name of the migration
def new_migration(name)
  # code...
  # more code...
end
</code></pre>
<p>We don't need to check if the argument was provided or not. We just use the value. If the user didn't provide it, Bake would give them a reasonable error message:</p>
<pre><code> 0.0s error: Bake::Command [ec=0x260] [pid=9638] [2025-07-31 18:51:57 -0700]
| wrong number of arguments (given 0, expected 1)
| ArgumentError: wrong number of arguments
| → bake/db.rb:55 in 'new_migration'
| [stack trace omitted]
</code></pre>
<p>This is a <em>reasonable</em> error, but I don't love it. I bet Bake could be improved to provide much better errors in this situation. Still, it's an improvement on Rake.</p>
<p>Our final task to migrate takes an optional argument. It's short, so here's the entire Rake version:</p>
<pre><code class="ruby">task :migrate, [:version] => :environment do |_t, args|
  Sequel.extension :migration
  version = args[:version].to_i if args[:version]
  Sequel::Migrator.run(DB, 'db/migrations', target: version)
end
</code></pre>
<p>Bake doesn't seem to support optional <em>positional</em> arguments, only optional <em>keyword</em> arguments. <del>I'm not sure why that is, but it's not a problem.</del> <em>Edit: This is because it would be impossible to reliably differentiate additional tasks from optional arguments.</em> (Rake doesn't support keyword arguments at all.)</p>
<pre><code class="ruby"># Run migrations
# @parameter version [Integer] the version to migrate to (omit for latest)
def migrate(version: nil)
  call "environment"

  Sequel.extension :migration
  Sequel::Migrator.run(DB, 'db/migrations', target: version)
end
</code></pre>
<p>Let's talk about the differences. Rather than handling the optionality of the argument and the type coercion ourselves, we rely on a combination of regular ol' Ruby default keyword arguments and Bake's type coercion. That's a nice win.</p>
<p>Bake doesn't provide a facility for expressing task dependencies like Rake does, so we use <code>call</code> to invoke the tasks this task depends on. (Rake also has an API for programmatically invoking tasks.) If you need to invoke a task with arguments, you can use <code>lookup</code>:</p>
<pre><code class="ruby">lookup("frog:speak").call("ribbit")
</code></pre>
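<p>For that <code>lookup</code> call to resolve, there would need to be a matching recipe; a hypothetical <code>bake/frog.rb</code> could look like this:</p>
<pre><code class="ruby"># bake/frog.rb

# Make the frog say something.
# @parameter sound [String]
def speak(sound)
  message = "The frog says: #{sound}"
  puts message
  message
end
</code></pre>
<p>Bake would expose this as the <code>frog:speak</code> task, runnable from the command line or programmatically via <code>lookup</code>.</p>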
<p>I like the Bake version. It feels more Ruby-y to me.</p>
<h2>Wake and Bake</h2>
<p>I wasn't really thinking about the consequences of the fact that I've framed this post as a review of Bake. Should I finish the post off with a summary of my thoughts and a rating out of ten, like a music review?</p>
<p><del>If you've enjoyed Samuel Williams' previous albums, like <a href="proxy.php?url=https://github.com/socketry/falcon">falcon</a>, or his collaborations with the Japanese supergroup, the Ruby committers team, then you'll probably find something to enjoy in this record. Williams really comes into his own on this record, developing his music style in new directions while staying true to his sonic roots.</del></p>
<p>I'm not going to bother trying to decide whether Rake or Bake is better, but I do like the features Bake adds. The automatic file loading is convenient, and I imagine it would come in extremely handy in an ecosystem of gems using Bake.</p>
<p>While annotating the tasks with comments feels really natural, I'd love to see a couple of small improvements to how arguments are handled. <del>If there's no technical reason we can't use optional positional arguments, it would be nice to have them supported.</del> <em>Edit: There is a technical reason.</em> Mostly I just think we could (leveraging the annotations) make argument errors really nice.</p>
<p>Most of all, I like that the task files feel like normal Ruby. I'm not anti-DSL, but I love that Bake was trivial to set up and gave me everything I needed in a task runner without making me learn a DSL. Rake's not going anywhere and I don't plan on forgetting how to use it, so this doesn't really save me (or you) any brain-space, but still, I like it.</p>
<p>I'm not going to finish by giving Bake a star rating, but I am going to mess around with it on some future projects. Seems good to me.</p>
<p><a href="proxy.php?url=https://github.com/ioquatix/bake">GitHub - ioquatix/bake: An alternative to rake, with all the great stuff and a sprinkling of magic dust.</a></p>
https://jardo.dev/order-driven-developmentOrder-Driven Development2025-07-28T00:00:00-07:00Jared Norman[email protected]The direction from which you attack a programming problem affects the result. I don't think that's controversial.<p>The other day, I <a href="proxy.php?url=https://bsky.app/profile/jardo.dev/post/3lv2ascdrvw2m">posted</a> about an article titled <a href="proxy.php?url=https://www.dzombak.com/blog/2025/07/use-your-type-system/">Use Your Type System</a>. Despite firmly believing that <em>most</em> Ruby applications do not need and should not adopt the static type systems on offer, I'm not anti-static types. I really like static types.</p>
<p>Ismael Celis <a href="proxy.php?url=https://bsky.app/profile/ismaelcelis.com/post/3lv2bf6i7mc22">pointed out</a> that the article has some similarities to the idea of <a href="proxy.php?url=https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/">Parse, Don't Validate</a>. That article links to another similar one, called <a href="proxy.php?url=https://www.parsonsmatt.org/2017/10/11/type_safety_back_and_forth.html">Type Safety Back and Forth</a>. <em>That</em> article got me thinking about how the direction from which we attack a problem affects our solution.</p>
<p><a href="proxy.php?url=https://www.parsonsmatt.org/2017/10/11/type_safety_back_and_forth.html">Type Safety Back and Forth</a></p>
<h2>Type Design</h2>
<p>The article isn't about outside-in versus inside-out design, so much as it is about the directions in which developers do (or should) push "responsibility" around their codebases. Matt says this:</p>
<blockquote>
<p>In my experience, developers will tend to push responsibility in the same direction that the code they call does.</p>
</blockquote>
<p>That got me thinking. When you build anything with code, you start <em>somewhere</em>. Knowing where you should start can be hard. This problem contributes to the difficulty of teaching programming. Many programmers can't even tell you how they decide where they start. It turns out that thinking of somewhere you <em>could</em> start and starting there is good enough for a lot of people. (That's fine.)</p>
<p>I like approaches that, on average, give me good results. One totally reasonable approach that users of languages that aren't Ruby sometimes choose is to represent their problem (or "domain") in types. They attempt to model whatever it is they are working on using the type system, then try to build the functionality they need using those types.</p>
<p>This approach has some nice benefits. With a powerful type system, you can model your domain accurately with types. This can help keep the core logic clean, pushing the code that deals with the messiness of the real world to the edges of the system. Matt's article also shows how a type system can show you that certain things aren't possible. You can use this feedback to better understand your domain and update your types accordingly.</p>
<p>Matt's article explores how consciously choosing to push responsibilities backwards (as you might find natural if you started with "inner" domain types) can avoid propagating conditional logic and special cases deeper into your code.</p>
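<p>To make that concrete in Ruby terms, here's a minimal sketch of the parse-don't-validate idea; the <code>Username</code> type is invented for illustration. The boundary parses raw input once into a type that can't represent an invalid value, so inner code never has to re-check it:</p>
<pre><code class="ruby"># A value object that can only be constructed from valid input.
class Username
  PATTERN = /\A[a-z0-9_]{3,20}\z/

  attr_reader :value

  # Parse at the boundary: we either get a Username or an error, up front.
  def self.parse(raw)
    raise ArgumentError, "invalid username: #{raw.inspect}" unless raw.match?(PATTERN)

    new(raw)
  end

  private_class_method :new

  def initialize(value)
    @value = value
  end
end

# Inner code accepts a Username, so it carries no validation of its own.
def greeting_for(username)
  "Welcome back, #{username.value}!"
end
</code></pre>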
<h2>Test-Driven Whatever</h2>
<p>TDD heads love outside-in design. We're all about that shit. From the perspective of where it pushes complexity, this is nearly the opposite of both what Matt was talking about and starting with domain types. Let's examine this approach through this lens.</p>
<p>It's touted as a benefit that this approach pushes complexity "down". I like when I can examine the code at the entry points to a system, i.e. the "high-level" code, and have it be clean and understandable. If I want to understand the details of how the high-level components function, I'll dive deeper into the object tree, finding more and more implementation details the deeper I get. I like working with designs like this.</p>
<p>Outside-in design encourages these kinds of structures by making it natural to defer handling implementation details until we're deep into the object tree, having implemented the easier stuff first.</p>
<p>That said, this effect makes the refactor step critically important. If you don't step back and examine your system, you'll never notice that there are opportunities to push complexity back up (in your types or behaviour). You can end up with multiple objects at the bottom of your system dealing with complexity that could be handled much higher up the object tree, perhaps in a single location.</p>
<h2>Large Plagiarism Models</h2>
<p>I've enjoyed watching people try to get agentic LLM-based coding tools to work iteratively. Many of the popular tools love spitting out entire, fully-formed classes. When you insist they write tests, they'll write entire files full of tests wholesale. Sometimes they'll even run them. It can be difficult to get them to do anything meaningfully iterative or resembling TDD. (One test at a time, please? Please?)</p>
<p>When you coerce them into an iterative flow with your preferred threat, the results tend to be better than when they are allowed to spit out monolithic changes. The natural flow of iterative development forces them to gain the context they need to make reasonable decisions. Context is king with these tools.</p>
<p>This is no surprise. For us humans, much of writing good, correct code is about feedback. Working from the outside-in (or by trying to model the domain in types) forces us to confront our misconceptions about the real world early, before we've made too many decisions on potentially incorrect assumptions. The current generation of agentic tools are basically bad assumption machines.</p>
<p>It's been cute that many "AI programmers" have discovered iterative development as a technique for getting better results out of agentic tools. Better late than never. The real takeaway is more fundamental, though. The order with which we attack any problem will affect the result. Some approaches are, on average, better than others.</p>
<h2>Out of Order</h2>
<p>Let's be clear here. You can build any kind of structure with any kind of technique. Hell, you can write pretty good object-oriented code in C. You'll find no hard laws of computer science here. Your entrypoint into the problem you're solving doesn't <em>decide</em> how your system will be structured.</p>
<p>The order you tackle components merely <em>encourages</em> certain kinds of designs. Us mere humans can't think about everything at once and your LLM of choice doesn't have an infinite context window. By deciding what to think about or tackle first, you're making an implicit decision by omission that will impact the design of your system. As you write more code, you make changing the structure of your system harder and harder.</p>
<p>When I tackle a problem, I start with the area I'm least certain about. If no such area exists, I'll just start from the inputs and work towards outputs. This, on average, works great.</p>
<blockquote>
<p>"A journey of a thousand KLOC begins with a single LOC."—Lao Tzu, paraphrased</p>
</blockquote>
<p>Every line of code in a system makes it harder to change. Decisions are easiest to change at the start of a piece of work and get harder from there. I can't tell you where <em>you</em> should start, but I know you should think about it carefully, because those first decisions are the hardest to undo.</p>
<h2>Addendum</h2>
<p>Justin Searls wrote a thoughtful commentary on this post. I highly recommend reading it. While it sounds like Justin is arguing with me here, I liked this statement:</p>
<blockquote>
<p>At the end of the day, every program is just a recipe. Some number of ingredients being mixed together over some number of steps. The particular order in which you write the recipe doesn't really matter. Instead, what matters is that you think deeply and carefully consider your approach.</p>
</blockquote>
<p>Justin found the real takeaway (that I didn't quite reach in my own conclusion) and made it explicit. Software development (as an individual sport) is a continuous process. You are constantly examining the information you have and deciding how to proceed. The weight of the decisions you're making varies, as does what (and how much) information you have at hand.</p>
<p>Becoming a more skilled software developer requires examining that process, evaluating changes to your thinking, and adapting. </p>
<p><a href="proxy.php?url=https://justin.searls.co/links/2025-07-29-upside-down-development/">Upside-Down Development</a></p>
https://jardo.dev/code-reloading-for-rack-appsCode Reloading for Rack Apps2025-07-23T00:00:00-07:00Jared Norman[email protected]Rails gives us wonderful and reliable code reloading via Zeitwerk, but what do we do when we want that outside of our Rails apps?<p>I work almost exclusively on Ruby on Rails applications. The product we're building at <a href="proxy.php?url=https://supergood.software">Super Good</a> isn't a Rails app, though; it's a Ruby application built directly on top of Rack. While this means the application is very stripped down and fast, which is great for the end product, it also means we don't get a lot of things for free from our framework.</p>
<p>Code reloading is one of those things. Modern Rails is integrated with <a href="proxy.php?url=https://github.com/fxn/zeitwerk">Zeitwerk</a>, a general purpose code loader for Ruby that supports reloading. Zeitwerk works great as a loader on its own, but it doesn't do everything you need for reloading a web application.</p>
<p>Zeitwerk will reload code when you tell it to, but <em>you</em> need to decide when to tell it to. Reloading also isn't thread-safe. You need to ensure that you aren't trying to reload code across different threads, which is very important if we're using a web server like <a href="proxy.php?url=https://puma.io/">Puma</a>, which makes use of both threads and processes.</p>
<p>Let's dive into how we can use Zeitwerk, the <a href="proxy.php?url=https://github.com/guard/listen">listen gem</a>, and <a href="proxy.php?url=https://github.com/ruby-concurrency/concurrent-ruby">concurrent-ruby</a> to get the same kind of reloading that Rails provides, but in a simple Rack application.</p>
<p>This is not a complete Rack app tutorial. This tutorial focuses on the trickier bits of getting code reloading working in Rack apps. Very little attention is paid to the specifics of how you might structure your codebase, and there's nothing about Rack itself in here.</p>
<h2>A Basic Rack App</h2>
<p>First, we're going to need a few gems. You'll need all the following in your Gemfile:</p>
<pre><code class="ruby">gem "concurrent-ruby", require: false
gem "listen"
gem "puma"
gem "rack"
gem "zeitwerk"
</code></pre>
<p>You can swap out Puma for your preferred web server, but for this tutorial we'll use Puma since it requires that we handle both threading and forking properly.</p>
<p>You can structure your app however you like, but I like to have a file called <code>config/environment.rb</code> that loads up the application environment. This way we have a single file that can be loaded when booting up consoles, tests, tasks, and the web server that ensures a consistent environment. For this app, we'll assume it looks something like this:</p>
<pre><code class="ruby">ENV["RACK_ENV"] ||= "development"
require 'rubygems'
require 'bundler'
Bundler.require(:default, ENV["RACK_ENV"])
$LOAD_PATH << File.join(File.dirname(__FILE__), "../lib")
require "my_app"
</code></pre>
<p>We haven't defined our app, so let's create <code>lib/my_app.rb</code>:</p>
<pre><code class="ruby">MyApp = Rack::Builder.new do
  run -> (env) {
    [200, {"content-type" => "text/plain"}, ["Hello, World!"]]
  }
end
</code></pre>
<p>We're using <code>Rack::Builder</code> here because any non-trivial application will need to <code>use</code> a bunch of different middleware. This is where we can configure those.</p>
<p>Finally, every Rack app needs a <code>config.ru</code> file. Ours should look like this:</p>
<pre><code class="ruby">require_relative './config/environment'
run MyApp
</code></pre>
<p>With all this in place, we have a working app that serves the classic "Hello, World!" message, but there's no code reloading in place.</p>
<h2>Once Upon a Time</h2>
<p>Before we get into implementing code reloading, let's build a little utility class. Put this in <code>lib/once.rb</code> and require it in <code>config/environment.rb</code>.</p>
<pre><code class="ruby">class Once
  def initialize(&block)
    @block = block
    @mutex = Mutex.new
  end

  def call
    @mutex&.synchronize do
      return unless @mutex

      @block.call
      @mutex = nil
    end
  end
end
</code></pre>
<p>This class lets you create an object that encapsulates a bit of code and only ever lets it run once, even if it's called across multiple threads.</p>
<pre><code class="ruby">o = Once.new do
  puts "I should only happen once"
end

100.times.map { Thread.new { o.call } }.each(&:join)
# This prints the message only once.
</code></pre>
<p>We'll explain why we need this later.</p>
<h2>Lock and Load</h2>
<p>Let's assume that we're going to put our <em>reloadable</em> code in a <code>src</code> folder at the root of our application. You can swap it for <code>app</code> or <code>place_where_the_code_goes</code> or whatever you want.</p>
<p>For starters, let's move the code responsible for actually serving requests into that folder. Create a <code>src/foo.rb</code> that looks like this:</p>
<pre><code class="ruby">module Foo
  def self.call(env)
    [200, {"content-type" => "text/plain"}, ["Welcome to Foo!"]]
  end
end
</code></pre>
<p>Now, update <code>lib/my_app.rb</code> to call this instead:</p>
<pre><code class="ruby">MyApp = Rack::Builder.new do
  run Foo
end
</code></pre>
<p>In the future you'll probably replace <code>Foo</code> with your app's router, but for now we just need a callable object that conforms to the Rack API that we can change to verify reloading is working.</p>
<p>Currently, we're not loading <code>Foo</code>, so our app is broken. Time to <a href="proxy.php?url=https://www.reddit.com/r/funny/comments/eccj2/how_to_draw_an_owl/">draw the rest of the fucking owl</a>. Let's build out the most complex piece of this setup, the code loader. Put this in <code>lib/code_loader.rb</code> and require it from the environment file:</p>
<pre><code class="ruby">require "concurrent/atomic/read_write_lock"

class CodeLoader
  def initialize(path:, enable_reloading:)
    @loader = Zeitwerk::Loader.new
    @loader.push_dir path
    @loader.enable_reloading if enable_reloading
    @loader.setup

    @reload_lock = Concurrent::ReadWriteLock.new

    if @loader.reloading_enabled?
      @start_loading = Once.new do
        Listen.to(*@loader.dirs) do
          @dirty = true
        end.start
      end
    else
      @loader.eager_load
    end
  end

  attr_reader :reload_lock

  def reload!
    @start_loading&.call

    return unless @dirty

    reload_lock.with_write_lock do
      @dirty = false
      @loader.reload
    end
  end

  def reloading_enabled?
    @loader.reloading_enabled?
  end
end
</code></pre>
<p>Let's walk through what this does. This class takes two arguments: the location of the code to be loaded and whether or not we'll be reloading it. In its constructor, it instantiates a Zeitwerk loader for that path, optionally setting up reloading. It also creates a read-write lock before doing one of two things:</p>
<ol>
<li>If code reloading is enabled, it creates an instance of our <code>Once</code> class to encapsulate the logic for watching for changes to the code on the filesystem.</li>
<li>If code reloading is disabled, it eager loads the code immediately.</li>
</ol>
<p>The loader has three public methods:</p>
<ul>
<li><code>CodeLoader#reload_lock</code> exposes the lock that we'll use to ensure that we don't serve any requests while a thread is reloading the code <em>and</em> don't attempt to reload code from multiple threads at the same time.</li>
<li><code>CodeLoader#reloading_enabled?</code> is a predicate method for checking if code reloading is enabled. You'll see why we need this later.</li>
<li><p><code>CodeLoader#reload!</code> does the real work. When called, it invokes our <code>Once</code> instance (if defined), ensuring that we're watching for code changes if necessary. It then bails out if the <code>@dirty</code> flag isn't set. If code reloading is disabled, this means we always bail out. If code reloading is enabled, then we only continue if something on the filesystem has changed.</p>
<p>If something on the filesystem <em>has</em> changed, then it acquires the write lock, reloads the code, and resets the <code>@dirty</code> flag.</p></li>
</ul>
<p>We can then head over to our <code>config/environment.rb</code> and initialize this loader, saving it to a constant:</p>
<pre><code class="ruby">MY_APP_LOADER = CodeLoader.new(
  path: File.join(File.dirname(__FILE__), "../src/"),
  enable_reloading: ENV["RACK_ENV"] == "development"
)
</code></pre>
<p>This doesn't get us all the way, though. Nothing is yet calling <code>reload!</code> on our loader. This is important because we don't want to spin up our <code>Listen</code> thread <em>before</em> Puma forks. When a process forks, threads don't come along for the ride. We'll call <code>reload!</code> from a Rack middleware. Let's create that middleware in <code>lib/code_loader_middleware.rb</code>:</p>
<pre><code class="ruby">class CodeLoaderMiddleware
  def initialize(app, loader)
    @app = app
    @loader = loader
  end

  def call(env)
    @loader.reload!
    @loader.reload_lock.with_read_lock {
      @app.call(env)
    }
  end
end
</code></pre>
<p>This middleware is much simpler than the loader. It takes an instance of our loader as an argument. On each request it calls reload (which will reload our app's code if something has changed on disk since the last time it was called), and then acquires the loader's read lock and serves the request.</p>
<p>As mentioned before, we need that read lock because we may be serving multiple concurrent requests in different threads and one of those requests might be reloading the app's code. Zeitwerk's code reloading isn't thread-safe, so this ensures all requests are served outside of the reloading process.</p>
<p>Require this file in your <code>config/environment.rb</code>, which should now look like this:</p>
<pre><code>ENV["RACK_ENV"] ||= "development"
require 'rubygems'
require 'bundler'
Bundler.require(:default, ENV["RACK_ENV"])
$LOAD_PATH << File.join(File.dirname(__FILE__), "../lib")
require "once"
require "code_loader"
require "code_loader_middleware"
MY_APP_LOADER = CodeLoader.new(
  path: File.join(File.dirname(__FILE__), "../src/"),
  enable_reloading: ENV["RACK_ENV"] == "development"
)
require "my_app"
</code></pre>
<p>Next, head over to <code>lib/my_app.rb</code> and use the new middleware:</p>
<pre><code class="ruby">MyApp = Rack::Builder.new do
  use CodeLoaderMiddleware, MY_APP_LOADER

  run Foo
end
</code></pre>
<p>At this point, everything should work, except that code still isn't reloading. I've intentionally run us into one of the biggest gotchas here. <code>MyApp</code> isn't reloaded and that's fine, but when it's instantiated, it captures a reference to the current version of <code>Foo</code>. Code reloading <em>is</em> working, but Rack is still serving this stale version of <code>Foo</code>. We need to ensure that each request references the <em>current</em> value of the <code>Foo</code> constant. This is how we can do that:</p>
<pre><code class="ruby">MyApp = Rack::Builder.new do
  use CodeLoaderMiddleware, MY_APP_LOADER

  if MY_APP_LOADER.reloading_enabled?
    run ->(env) {
      Foo.call(env)
    }
  else
    run Foo
  end
end
</code></pre>
<p>We finally found our use for <code>CodeLoader#reloading_enabled?</code>. By wrapping our app in a Proc when reloading is enabled, we ensure that each request resolves the current value of the <code>Foo</code> constant and serves the latest code. In test and production environments where we aren't using code reloading, we just run the app directly because there's no risk of a stale version.</p>
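<p>The gotcha generalizes beyond Rack. Here's a self-contained sketch (the <code>App</code> class and variable names are my own stand-ins, not from the post) of why a direct reference goes stale while a lambda picks up the redefined constant:</p>

```ruby
# A direct reference captures the object itself, like `run Foo`.
# A lambda resolves the constant on every call, like the Proc wrapper above.
class App
  def self.call(env)
    "old"
  end
end

direct   = App                       # captures this exact class object
indirect = ->(env) { App.call(env) } # looks up App each time it's called

# Simulate a code reload by swapping the constant for a new class.
Object.send(:remove_const, :App)
class App
  def self.call(env)
    "new"
  end
end

direct.call({})   # => "old" -- the stale, pre-reload class
indirect.call({}) # => "new" -- the freshly defined class
```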
<h2>Summary</h2>
<p>Let's review the important pieces:</p>
<ul>
<li><p><code>CodeLoader</code> is responsible for setting up Zeitwerk's code loading, ensuring thread-safe synchronization around the reloading process, and monitoring the filesystem for changes. It eager loads code outside of development so that our code is loaded <em>before</em> Puma forks. This way we take advantage of copy-on-write performance in production. </p></li>
<li><p><code>CodeLoaderMiddleware</code> takes an instance of our <code>CodeLoader</code> and tells it to check if code needs to be reloaded before each request, ensuring that development requests always get the latest code. It hooks into a lock on the loader to ensure that we allow ongoing code reloads to finish before serving requests.</p></li>
</ul>
<p>With these in place and correctly configured, you can go about defining your application inside the <code>src</code> folder without having to restart your server after every change. If you dig into Rails' source code, you'll find that it uses roughly the same approach we've used.</p>
<h2>A Worse Once Implementation</h2>
<p>Credit to <a href="proxy.php?url=https://www.johnhawthorn.com/">John Hawthorn</a> for this one. Ruby's regular expressions have an <a href="proxy.php?url=https://ruby-doc.org/3.4.1/Regexp.html#class-Regexp-label-Interpolation+Mode"><code>o</code> flag</a> that causes interpolations to be evaluated only once. This allows us to do this:</p>
<pre><code class="ruby">class Once
  def initialize(&block)
    @block = block
  end

  def call
    /#{@block.call}/o
  end
end
</code></pre>
<p>Just because this works does not mean you should do it. That said, JP Camara wrote <a href="proxy.php?url=https://jpcamara.com/2025/08/02/the-o-in-ruby-regex.html">a whole article</a> about this feature of Ruby's regex engine and how it works. Go check it out! It was pure coincidence that he was writing that article while I was working on this one.</p>
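<p>To see both why it works and why it's worse, here's a sketch (renamed <code>RegexOnce</code> to keep it self-contained): the <code>/o</code> memoization is tied to the regexp literal itself, not the instance, so it's shared across <em>every</em> instance of the class:</p>

```ruby
# The /o flag evaluates the interpolation the first time the literal runs,
# then reuses the compiled regexp forever -- at that literal's call site.
class RegexOnce
  def initialize(&block)
    @block = block
  end

  def call
    /#{@block.call}/o
  end
end

counter = 0
first = RegexOnce.new { counter += 1 }
3.times { first.call }
counter # => 1 -- the block only ran once

second = RegexOnce.new { counter += 100 }
second.call
counter # => 1 -- the second instance's block never runs at all!
```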
https://jardo.dev/advent-of-criminally-bad-ruby-code
Advent of Criminally Bad Ruby Code
2024-12-06T00:00:00-08:00
Jared Norman
[email protected]
Marco Roth and I collaborated on solving Advent of Code 2024 Day 5 with Ruby's most popular feature, refinements.
<p><iframe
src="proxy.php?url=https://www.youtube.com/embed/CFD1b4Lkkhs"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
style="aspect-ratio: 16 / 9; width: 100%;"
></iframe></p>
<p>Yesterday, <a href="proxy.php?url=https://marcoroth.dev/">Marco Roth</a> and I committed a variety of Ruby programming felonies and are now fugitives from the <a href="proxy.php?url=https://github.com/rubocop/rubocop">Rubocops</a>. Marco joined <a href="proxy.php?url=https://twitch.tv/jardonamron">my Twitch stream</a> where I was working on the fifth day of this year’s <a href="proxy.php?url=https://adventofcode.com/">Advent of Code</a>. He had some <em>very good</em> ideas about how to tackle the problem. I was keen to try them out.</p>
<p><strong>Mild Advent of Code Spoilers:</strong> This blog post is focused on the features of Ruby that we used, not the puzzle itself. There will be some allusions to the Advent of Code puzzle, but the code is so obfuscated that this likely won’t seriously spoil the solution for you. Viewer discretion is advised.</p>
<h2>Smooth Operators</h2>
<p>My goal for the session was to override some operators. Ruby lets you redefine what operators like <code>+</code> and <code>-</code> do on your objects. This is a great feature. It means that if you have objects that represent complex values, you can implement common operations on them.</p>
<p>Money objects are a great example. Imagine you have objects that represent monetary values, which are a numeric value and a currency. You want to be able to add those together if their currencies match. Overriding the <code>+</code> method lets you write <code>five_us_dollars + ten_us_dollars</code>. It’s handy.</p>
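<p>As a quick sketch of that idea (this <code>Money</code> struct is my own illustration, not a real library's API):</p>

```ruby
# A minimal Money value object: adding requires matching currencies.
Money = Struct.new(:cents, :currency) do
  def +(other)
    raise ArgumentError, "currency mismatch" unless currency == other.currency

    Money.new(cents + other.cents, currency)
  end
end

five_us_dollars = Money.new(500, "USD")
ten_us_dollars  = Money.new(1000, "USD")

(five_us_dollars + ten_us_dollars).cents # => 1500
```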
<p>Part of the input for the puzzle is a series of “rules” that are two integers with a pipe (<code>|</code>) between them, like <code>35|47</code>. It would take me no time to come up with a regular expression to parse out the two numbers and then convert them to integers, but that’s not what I wanted to do.</p>
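<p>For contrast, the boring parse being avoided here might look something like this (my sketch, not from the post):</p>

```ruby
# Split a rule line like "35|47" into its two integers.
line = "35|47"
first, last = line.split("|").map(&:to_i)

first # => 35
last  # => 47
```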
<p>I wanted to execute the expression using <code>eval</code> and have it return a <code>Rule</code> object containing both numbers. You might be wondering, “what the hell, Jared?” I don’t have time to justify this. Instead, go listen to <a href="proxy.php?url=https://shows.acast.com/dead-code/episodes/predatory-infrastructure-with-noah-gibbs">Noah Gibbs talk about programming as art</a>. I’m making art.</p>
<p>First, we tried redefining the bitwise OR operator on the Integer class and… boom. My debugger exploded. It turns out that something inside of the REPL depends on bitwise OR on Integer. So that won’t work.</p>
<p>What we needed was the ability to monkey-patch a class <em>only in the context of our code</em>. Fortunately, Ruby has a solution for that. (I’m actually not sure it’s fortunate. Maybe “unfortunately” is a better word here.) Enter <a href="proxy.php?url=https://docs.ruby-lang.org/en/3.3/syntax/refinements_rdoc.html">Refinements</a>.</p>
<h2>Sleek and Refined</h2>
<p>Refinements are essentially monkey-patches that you can enable in the current scope (top-level or class/module-level, but not method-level) and they are <em>only</em> available in that scope. Once control passes out of that scope, it’s as if the refinement doesn’t exist until control returns to that scope.</p>
<pre><code class="ruby">module Hacks
  refine Integer do
    def |(other)
      Rule.new(self, other)
    end
  end
end

using Hacks
</code></pre>
<p>You activate a refinement with the <code>using</code> keyword. All the code in this file/module/class <em>after</em> <code>using Hacks</code> will use our definition of <code>|</code> on Integer, but we won’t break all the code that uses that operator in gems and other third-party code.</p>
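<p>Here's a minimal, self-contained illustration of that scoping (the <code>Shout</code> module and <code>shout</code> method are my own invented example):</p>

```ruby
# A refinement is a monkey-patch that only exists where it's activated.
module Shout
  refine String do
    def shout
      upcase + "!"
    end
  end
end

# Calling "hi".shout up here would raise NoMethodError -- the refinement
# isn't active until `using` has been seen in this scope.

using Shout

"hi".shout # => "HI!"
```

<p>Any file that never says <code>using Shout</code> still sees a plain, unrefined <code>String</code>.</p>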
<h2>Operator Overload</h2>
<p>Our Rule class was pretty simple at this point. It was just a container for two integers. In order to solve the problem, we’d need access to those two integers. We could expose a couple of reader methods, but where’s the fun in that?</p>
<pre><code class="ruby">class Rule
  def initialize(first, last)
    @first = first
    @last = last
  end

  def +@
    @first
  end

  def -@
    @last
  end
end
</code></pre>
</code></pre>
<p>Instead, we overrode the unary <code>+</code> and <code>-</code> operators to return each integer, with <code>+</code> as the integer that is supposed to come first, and <code>-</code> as the integer that’s supposed to come last. This allowed us to write <code>+some_rule</code> and <code>-some_rule</code> to extract the integers out of each rule.</p>
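<p>To make that concrete, here's the class again with the unary operators in use (a self-contained repeat of the code above):</p>

```ruby
# Unary +@ and -@ are ordinary methods and can be overridden like any other.
class Rule
  def initialize(first, last)
    @first = first
    @last = last
  end

  def +@
    @first
  end

  def -@
    @last
  end
end

rule = Rule.new(35, 47)
+rule # => 35
-rule # => 47
```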
<p>Like I said before, this ain’t business; this is art.</p>
<p>If you’re looking to override some operators, <a href="proxy.php?url=https://kddnewton.com/">Kevin Newton</a> has a nice list of Ruby’s operators on his blog that specifically lists out the <a href="proxy.php?url=https://kddnewton.com/2023/07/20/ruby-operators.html#call-name-operators">“call name operators”</a>. These are the operators that are “syntactic sugar” for method calls and can be overridden on any object.</p>
<h2>A Wide Array of Bad Code</h2>
<p>Ruby’s standard library is full of useful classes and methods for solving Advent of Code problems. You almost always use <em>something</em> from <a href="proxy.php?url=https://ruby-doc.org/3.3.6/Array.html">Array</a> or <a href="proxy.php?url=https://ruby-doc.org/3.3.6/Enumerable.html">Enumerable</a>. Our day five solution ended up using <code>sum</code>, <code>map</code>, <code>all?</code>, <code>partition</code>, <code>select</code>, <code>find</code>, <code>none?</code>, and a few others, but I didn’t want to litter my solution with these totally comprehensible words. What’s a boy to do?</p>
<p>The obvious solution is to start overriding all the operators that Array doesn’t already implement and using them to do other things. For example, we wanted to use the greater than operator, rather than <code>map</code>.</p>
<pre><code class="ruby">module Hacks
  refine Array do
    def >(other)
      map { other.call(_1) }
    end
  end
end
</code></pre>
<p>This allows us to put a proc on the right side of a greater-than expression and have it function like <code>Array#map</code>. I’m basically Picasso.</p>
<pre><code class="ruby">[1, 2, 3, 4] > ->(n) { n + 1 }
#=> [2, 3, 4, 5]
</code></pre>
<p>It turned out, lots of Array methods could be massacred like this. We made <code>sum</code> into <code>>=</code>, <code>partition</code> into <code>=~</code>, <code>select</code> into <code>/</code>, and didn’t stop there. We made <code>some_array ** value</code> return the index of that value in the array. We made <code>some_array < value</code> delete that value from the array.</p>
<p>In our most creative moment, we made it so that <code>some_array % "middle"</code> would return the middle element in that array, but <code>some_array % ->(n) { n >= 5 }</code> would mutate the array, removing all elements that are less than five. If you like <a href="proxy.php?url=https://github.com/jarednorman/AdventOfCode2024/blob/main/day5.rb#L23-L30">this code</a>, <a href="proxy.php?url=https://supergood.software">Super Good</a> is open for new contracts. Give me a shout.</p>
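<p>Their exact implementation lives in the linked repo; a rough guess at how the described <code>%</code> behaviour could be written (the module name and details here are my own):</p>

```ruby
# A hypothetical reconstruction: Array#% returns the middle element when
# given the string "middle", and mutates the array (keeping only elements
# the proc accepts) when given a callable.
module ArrayCrimes
  refine Array do
    def %(arg)
      if arg == "middle"
        self[length / 2]
      else
        select!(&arg) # mutates in place, keeping matching elements
        self
      end
    end
  end
end

using ArrayCrimes

[1, 2, 3, 4, 5] % "middle" # => 3

a = [3, 7, 2, 9]
a % ->(n) { n >= 5 }
a # => [7, 9]
```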
<h2>Metaprogramming for Evil</h2>
<p>At this point, we’d overridden about ten different methods on Array using refinements and seven of those all took the exact same form:</p>
<pre><code class="ruby">def some_operator(other)
  some_array_method { other.call _1 }
end
</code></pre>
<p>We realized two things about this:</p>
<ol>
<li>We could define these methods with metaprogramming (using more operator overrides).</li>
<li>We could support blocks and procs with these methods.</li>
</ol>
<p>This led to <a href="proxy.php?url=https://github.com/jarednorman/AdventOfCode2024/blob/main/day5.rb#L40-L46">this beauty</a>, which I’ll explain after I give you a second to look upon my Works, ye Mighty, and despair!</p>
<pre><code class="ruby">def self.[](operator, array_method)
  define_method(operator) do |other = nil, &block|
    send(array_method) {
      block ? block.call(_1) : other.call(_1)
    }
  end
end
</code></pre>
</code></pre>
<p>This allowed us to use the <code>[]</code> operator method on the Array class itself to define new operators that alias existing Array methods, supporting both blocks and procs. This turned most of our new operator methods into <a href="proxy.php?url=https://github.com/jarednorman/AdventOfCode2024/blob/main/day5.rb#L48-L54">this gibberish</a>:</p>
<pre><code class="ruby">self[:>=, :sum]
self[:>, :map]
self[:<=, :all?]
self[:=~, :partition]
self[:/, :select]
self[:!~, :find]
self[:>>, :none?]
</code></pre>
<p>It also meant that the following two lines (that you should <em>never</em> actually write in production code) became functionally equivalent, though they are not equivalent from a performance perspective. (Who cares?)</p>
<pre><code class="ruby">[1, 2, 3, 4] >= ->(n) { n * 2 }
[1, 2, 3, 4].>= { _1 * 2 }
</code></pre>
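<p>If you want to play with this pattern without refinements, the same metaprogramming can be sketched on an Array subclass (<code>CrimeArray</code> is my own name; this is an approximation, not the original code):</p>

```ruby
# Override the class-level [] "constructor" to act as an operator-defining
# macro: self[:>, :map] defines Array#> as an alias for map that accepts
# either a proc argument or a literal block.
class CrimeArray < Array
  def self.[](operator, array_method)
    define_method(operator) do |other = nil, &block|
      send(array_method) {
        block ? block.call(_1) : other.call(_1)
      }
    end
  end

  self[:>, :map]
  self[:>=, :sum]
end

a = CrimeArray.new.concat([1, 2, 3, 4])
a > ->(n) { n + 1 } # => [2, 3, 4, 5]
a.>= { _1 * 2 }     # => 20
```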
<p>I can hear the sirens in the distance. This isn’t just art. This is <em>transgressive</em> art.</p>
<h2>Symbol Vomit</h2>
<p>Our crimes against Ruby coalesced into <a href="proxy.php?url=https://github.com/jarednorman/AdventOfCode2024/blob/main/day5.rb#L48-L54">this chunk of code</a>. It’s technically most of our solution to both part one and two, but it probably won’t even slightly help you solve the puzzle.</p>
<pre><code class="ruby">rule_data, update_data = *-input

rules = rule_data > ->(r) { eval r }
updates = update_data > ->(u) { eval "Update[#{u}]" }

valid_updates, invalid_updates = updates =~ ->(update) {
  rules <= ->(rule) { update === rule }
}

part_one = valid_updates.>=(&:~@)
part_two = (invalid_updates > ->(update) {
  update << rules
}).>=(&:~@)

`Part One: #{part_one}`
`Part Two: #{part_two}`
</code></pre>
<p>Did I mention that we overrode the backtick operator (<code>`</code>) too? We also used argument forwarding (<code>...</code>), <code>Symbol#to_proc</code> with our operators (<code>&:~@</code>), splatting, and probably a few other esoteric Ruby features. The end result was some code that looks vaguely like Haskell if you squint.</p>
<p>This whole exercise begs one question: should you write Ruby like this? I’m here to tell you that that’s the wrong question. The right question is, “what other crimes can we commit with refinements and overriding operators?” There’s a whole world of tremendously obtuse and difficult-to-understand code just waiting to be written. Go write it!</p>
<h2>Future Crimes</h2>
<p>I’d like to thank Marco again for his help with this solution. I was really just riffing off his suggestions. Go subscribe to <a href="proxy.php?url=https://twitch.tv/jardonamron">my Twitch channel</a> if you want to see what else we can come up with or contribute to future solutions. My <a href="proxy.php?url=https://ruby.social/@jardo">Mastodon</a> and <a href="proxy.php?url=https://bsky.app/profile/jardo.dev">Bluesky</a> are the best places to find out when I’ll be streaming.</p>
<p>Marco suggested that we <del>abuse</del> use <code>method_missing</code>, and I think we might do that for day six. I also want to implement some totally unnecessary virtual machines, do some code generation, and find some way to <code>eval</code> an entire input file. Basically, I want to attack and dethrone God.</p>
<p>Keep Ruby Weird, baby.</p>
https://jardo.dev/how-to-fail-at-solidus
How to Fail at Solidus
2024-11-13T00:00:00-08:00
Jared Norman
[email protected]
Some tips on how to make the most of Solidus, or completely fail by doing the opposite.
<p><iframe
src="proxy.php?url=https://www.youtube.com/embed/BQoMr00aNFw"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
style="aspect-ratio: 16 / 9; width: 100%;"
></iframe></p>
<p>In 2022, I spoke at SolidusConf 7 on how to make the most of the Solidus platform (or fail, by doing the opposite of my suggestions.) Solidus is a framework for building custom eCommerce stores. If you choose a platform like Solidus, you need to leverage the benefits of the platform as much as you can to reap advantages that aren’t available to less customizable platforms, like Shopify. Additionally, you don’t want to fall victim to common and avoidable mistakes. My talk helps guide you towards the best experience possible when building on Solidus.</p>
<h2>Getting Started With Solidus</h2>
<p>The kinds of stores that benefit the most from Solidus are high-volume businesses, stores with large catalogs, marketplaces, stores that sell restricted goods, and anyone that wants to customize parts of the eCommerce stack that aren’t customizable on other platforms (like the checkout.)</p>
<p>There are lots of reasons to choose Solidus, but if you’re a small store with a simple catalog and aren’t going to demand a lot from your eCommerce platform, maybe it isn’t for you. If you’re a high-volume store with a custom subscription model, you’re going to <em>love</em> Solidus. Make sure you evaluate your needs against what Solidus offers.</p>
<p>Once you’ve chosen Solidus, take an iterative, incremental approach to getting your site live. Start with must-have functionality, then build incrementally until your organization is satisfied with the site and ready to go live. You can go live with missing features, but you shouldn’t go live with broken features. Keep that in mind when planning your work.</p>
<p>Don’t go crazy with esoteric or trendy technology choices. Keep eCommerce boring. Most eCommerce organizations don’t want to shoulder the burden of bleeding-edge technologies. Solidus is a proven technology. When choosing your stack, choose other proven technologies.</p>
<p>ECommerce stores are relatively long-lived applications. Someone is going to maintain your tech choices down the road. Be nice to those future maintainers, and not just because they might be me… they might be you too.</p>
<p>When making tech choices, consider what resources you have <em>right now,</em> not what resources you might have down the road. You don’t need to build a GraphQL gateway to proxy requests to a bunch of backend services with the hottest JavaScript framework in front of that just to serve your privacy policy page. It’s overkill. There are plenty of nine-figure stores that are doing great without the burden of maintaining a bunch of unnecessary infrastructure.</p>
<p>The reality is that it’s eCommerce. Your business might be a little special because you found the need to use Solidus, but it’s not <em>that</em> special. You’re not a tech startup.</p>
<p>Along the same lines, don’t fight the framework. Solidus is built on Rails. Follow Rails conventions. As your team grows, you’ll see endless pain if you develop your own esoteric ways to do everything.</p>
<h2>The Extension Ecosystem</h2>
<p>When it comes to <a href="proxy.php?url=https://github.com/solidusio-contrib">Solidus extensions</a>, make sure that you contribute customizations back upstream. In the long run, it’s always worth it to leverage the community.</p>
<p>Don’t be too eager to make your own extensions, though. Our extension tooling is awesome. I know. But, you don’t need to make every feature in your app its own extension. It’s too much overhead to maintain. Once your feature stabilizes, if you find that it’s still worth extracting, and you have the time, then go ahead. You probably won’t, but that’s okay.</p>
<p>For situations where an extension is justified, build it in your app’s repository. This eliminates the overhead of managing multiple projects and keeping things up-to-date. Once the extension stabilizes, you can pull it out and publish it.</p>
<h2>Leverage The Platform</h2>
<p>When it comes to technical approach, make sure to leverage the platform. I talk about that a bit in <a href="proxy.php?url=https://jardo.dev/agile-fluency-model-ecommerce">my previous SolidusConf talk</a>, but Solidus ships with great support for testing, CI, CD, and all your favourite Agile buzzwords. Don’t sleep on it.</p>
<p><a href="proxy.php?url=https://jardo.dev/solidus-most-customizable-ecommerce-platform">Solidus is the most customizable eCommerce platform.</a> But, just because you have all that power, doesn’t mean you need to use it all the time. Make sure the modifications you’re making are valuable to the business. Customizing the platform comes with maintenance costs. Evaluate those tradeoffs.</p>
<p>This doesn’t apply as much to configurable classes and other configuration points. I’m talking about overriding methods on core classes, <code>Module#prepend</code>, and other invasive customizations.</p>
<p>In the talk I get into some nitty gritty details of overriding frontend templates, but you’ll have to watch the talk for those. Some of them are less relevant now that we have <a href="proxy.php?url=https://github.com/solidusio/solidus_starter_frontend">Solidus Starter Frontend</a>.</p>
<p>When customizing the core classes with overrides, make sure to test those changes thoroughly. Those are the most brittle kinds of customizations.</p>
<p>At the end of the day, it’s about making the most of Solidus’ customizability. Don’t shoot yourself in the foot by completely reworking some system just to make a minor change to the behaviour because you prefer a different design approach.</p>
<p>It can be unintuitive, if you come from an aggressive refactoring Agile discipline, but you want to be a little more targeted when you’re making modifications to code that you don’t own. Where you need to make changes, do so carefully and document those changes.</p>
<p>It’s also important to keep your project up to date. Don’t fall behind on Solidus upgrades. If you do, you not only lose out on the awesome improvements the community is making, but you also eliminate your incentive to contribute back your own improvements, since you won’t be able to use them until you upgrade.</p>
<p>It’s project policy that minor releases get security patches for 18 months. While some stores choose to run the bleeding-edge version of Solidus (and I recommend it!), you should at least stay within the security patch window. Solidus is vastly easier to upgrade than Spree used to be, so take advantage of that!</p>
<p>The reason I recommend staying on the bleeding edge is that Solidus is very stable, and if you stay up to date, when you want to make changes to Solidus, you’ll immediately see the benefits of those changes. You’re able to directly shape the future of Solidus to fit your business’s needs.</p>
<h2>Join The Community</h2>
<p>We don’t really know exactly how many Solidus stores are out there, but it’s a lot. Many of them participate in the community in different ways, from joining the Solidus Stakeholders team to contributing changes back to Solidus and more. Here are some easy ways to get started:</p>
<ol>
<li><a href="proxy.php?url=https://slack.solidus.io/">Join the Slack.</a> Slack is where a lot of the conversations about the platform happen. Myself and the rest of the Core Team are active in there answering questions and listening to feedback.</li>
<li><a href="proxy.php?url=https://github.com/solidusio/solidus">Participate on GitHub.</a> Chime in on issues. Propose new features in <a href="proxy.php?url=https://github.com/solidusio/solidus/discussions">GitHub Discussions</a>. GitHub is where we work on the project itself and offers plenty of opportunities to shape the future of the project.</li>
<li><a href="proxy.php?url=https://opencollective.com/solidus">Contribute to our OpenCollective.</a> Funding for the development of Solidus comes from the community. This money goes towards development on the platform, hosting events, and more.</li>
</ol>
<p>Whatever you do, make sure we know you’re out there! There are so many stores using Solidus that we don’t even know about! I learned recently that an NFL team ran their pro shop on Solidus for a period of time! We had no idea, because the people working on that shop didn’t engage with the open-source community around the project.</p>
https://jardo.dev/agile-fluency-model-ecommerce
The Agile Fluency Model and eCommerce
2024-11-12T00:00:00-08:00
Jared Norman
[email protected]
How can eCommerce teams improve their development processes?
<p><iframe
src="proxy.php?url=https://www.youtube.com/embed/9M0h2D4IvkE"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
style="aspect-ratio: 16 / 9; width: 100%;"
></iframe></p>
<p>In 2020, I had the pleasure of speaking at the first digital version of SolidusConf. It was also my first SolidusConf as a member of <a href="proxy.php?url=https://solidus.io/">Solidus</a>’s <a href="proxy.php?url=https://github.com/orgs/solidusio/teams/core-team">Core Team</a>. I’d been a contributor to the project since before the fork from <a href="proxy.php?url=https://spreecommerce.org/">Spree</a> was publicly announced, but when I started <a href="proxy.php?url=https://supergood.software/">Super Good</a> I was able to dedicate more time to managing the open-source project and that quickly landed me a seat on the Core Team.</p>
<p>In my talk I discussed what Agile <em>is</em> and how we can use something called the <a href="proxy.php?url=https://www.agilefluency.org/">the Agile Fluency Model</a> to help refine our team’s approach to software development to get the benefits of this style of development.</p>
<p>Solidus provides better support for Agile software development than any other eCommerce platform out there. It builds on the <a href="proxy.php?url=https://rubyonrails.org/">Ruby on Rails</a> ecosystem to allow you to develop your application test-first, leverage Continuous Integration (CI) and Continuous Deployment (CD), and iteratively deliver features.</p>
<p>If you make the most of this, you’ll get the features you’re building in front of your customers faster. In eCommerce, that has a real effect on your bottom line. Your optimizations start driving conversions immediately, and if a feature doesn’t have the desired impact, you can quickly redirect your efforts.</p>
<p>One of the tricky parts of Agile software development is that the values of <a href="proxy.php?url=https://agilemanifesto.org/">the Agile Manifesto</a> don’t provide any framework for implementation. The original co-signers all had different visions of how to do Agile software development. These ranged from processes like <a href="proxy.php?url=https://www.scrum.org/resources/what-scrum-module">Scrum</a> to <a href="proxy.php?url=https://www.atlassian.com/agile/kanban">Kanban</a> to (my favourite) <a href="proxy.php?url=http://www.extremeprogramming.org/">Extreme Programming</a>, to name a few.</p>
<p><strong>Beware Agile 2:</strong> Ironically, in the talk I say that “there’s no Agile Manifesto 2”. Unfortunately, someone did coin “Agile 2” and made a <a href="proxy.php?url=https://agile2.net/">website</a> for it. This is “unfortunate” because the content of the website is useless, AI-generated drivel.</p>
<h2>The Agile Fluency Model</h2>
<p><strong>I’m not affiliated with the Agile Fluency Project:</strong> I’m not affiliated with them in any way. I’ve not attended any of their workshops and have no opinion on their services. I just found the model fit with my views on improving software teams and their processes.</p>
<p>That’s where the Agile Fluency Model comes in. It provides a way for teams to evaluate their development processes and find what areas to focus on to help improve those processes in the most impactful way. The model outlines four zones that teams progress through as they hone their approach to software development.</p>
<p>The first zone is called “Focusing”. That’s where you are if your team is focused on delivering business value and addressing customer needs. There’s no point in moving faster if you aren’t working on the right things. That’s why this is the first zone.</p>
<p>The second area is “Delivering”. This is where you master the technical practices associated with iteratively delivering that value. Here is where you work to shorten lead times, address technical hurdles, and help the team ship work in step with business needs.</p>
<p>Third is “Optimizing”. Here you’re further empowering your team to help make decisions and act independently in the interests of the business. This often requires a structural shift in the team’s relationship to the broader organization and buy-in from leadership.</p>
<p>The final zone is “Strengthening”, in which teams are able to take initiative to improve the broader organization through their work. </p>
<h2>Agile in eCommerce</h2>
<p>So, how does this all apply to eCommerce? Projects in the eCommerce space have a lot in common. You’ve got customers who you want to make purchases on your site (and hopefully they want to make those purchases too.) You’re typically trying to drive some very measurable metrics like AOV, RPR, and conversion rate.</p>
<p>You’ve also probably got customer service, merchandisers, marketers, fulfillment, and other users internal to your organization. All these different users have different needs, goals, and ways they interact with your eCommerce application.</p>
<p>You can build tools and features to benefit each group. Larger organizations often see huge benefits from automations that reduce manual work performed by internal team members, but at the end of the day, it’s the external customers who are the reason the business exists. It’s important to find ways to weigh and prioritize work that impacts each group.</p>
<h2>Focusing</h2>
<p>This zone is where you work to make sure that you’re working on impactful work for the business. If you’re doing Scrum, or Kanban, or the non-technical parts of XP, you may already have fluency in this zone. Most popular planning frameworks help very directly with focusing.</p>
<p>If you’re succeeding at this zone, it’s usually easy to tell. Are you being transparent with your stakeholders about what’s being worked on and how it’s progressing? Are the things that you’re working on actually valuable to your stakeholders? If so, then you’re probably doing okay.</p>
<p>Established eCommerce companies or retail brands branching into eCommerce sometimes view the software team as a necessary evil, a cost of doing business that they have to pay to sell things on the Internet. This can lead to organizations throwing requirements over the wall to the software team.</p>
<p>Getting to the point where you’re actually collaborating with your stakeholders in those organizations can be tricky. First, you need to figure out who your stakeholders are. Just because someone is requesting a feature from your team, doesn’t mean that person is the source of that feature request. You may need to do some exploration to learn who owns which streams of work or priorities.</p>
<p>Once you’ve established who you need to collaborate with, you can implement your preferred off-the-shelf Agile process, like Scrum, Kanban, or the non-technical parts of XP. <em>(Pro Tip: Do XP.)</em> Now you can work to improve that collaboration, look at what’s getting worked on, and weigh it against the organization’s broader priorities.</p>
<p>You should be able to demonstrate the benefits directly to your stakeholders too. There should be fewer misunderstandings, because you’re working directly with stakeholders. The engineering team should be able to make suggestions about how to solve problems, since they are exposed to the reasoning behind requests. Finally, the stakeholders should be able to intervene if the team is headed the wrong direction.</p>
<p>If you can get these fundamentals locked in, you have the first piece of a really successful development process.</p>
<h2>Delivering</h2>
<p>Delivering is where teams learn not just to focus on the right work, but to master how they deliver that work. The goal here is low defect rates and high productivity (who doesn’t want that?), shipping work at a cadence that makes sense for the business. Teams that are proficient at Delivering differentiate themselves from teams that are merely proficient at Focusing by being able to ship features at will.</p>
<p>Proficiency here involves mastering a number of technical skills. Fortunately, Extreme Programming gives us a pretty good framework for these skills. If you’re fluent in Delivering, you’re going to need Continuous Integration. There’s no way to get to the point where changes can be shipped quickly unless you have solid acceptance tests and are integrating your work regularly.</p>
<p>In order to lower risk and increase the confidence you have in releasing changes, you almost certainly need to be doing something like <a href="proxy.php?url=https://martinfowler.com/bliki/TestDrivenDevelopment.html">TDD</a> or at least have a rigorous testing approach. It’s not feasible to manually validate every change if you want to move quickly. Manual verification is often necessary for certain things, but you shouldn’t be manually verifying anything that an automated test could cover instead. Save manual verification for things the computer can’t verify.</p>
<p>You also need to be really diligent about refactoring the system. As requirements evolve and grow more complex, different parts of the system may drift and become out-of-sync with each other. If you want to be able to continuously make changes, you need to make sure the system stays consistent and congruent.</p>
<p>Besides XP, you’ll also want to steal techniques from the <a href="proxy.php?url=https://about.gitlab.com/topics/devops/">DevOps</a> movement. <a href="proxy.php?url=https://continuousdelivery.com/">Continuous Delivery</a> is a big one. You’ll want to use techniques like <a href="proxy.php?url=https://martinfowler.com/articles/feature-toggles.html">feature flags</a> to help merge and deploy features and have them ready to go when the business needs them.</p>
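<p>As an illustration, a feature flag can be as simple as a lookup that gates a code path. This is a deliberately minimal sketch with invented names (<code>FeatureFlags</code>, <code>new_checkout</code>); real projects usually reach for a gem like Flipper, which adds per-actor and percentage rollouts:</p>

```ruby
# A deliberately tiny feature-flag helper, for illustration only.
class FeatureFlags
  def initialize(enabled = {})
    @enabled = enabled
  end

  # True when the named flag has been switched on.
  def enabled?(name)
    @enabled.fetch(name, false)
  end
end

flags = FeatureFlags.new(new_checkout: true)

# Merged, deployed code sits dormant behind the flag until the
# business decides to flip it on.
checkout_page = flags.enabled?(:new_checkout) ? :new_checkout : :old_checkout
```

<p>The point is that the code for both paths can be merged and deployed continuously, while launching the feature becomes a business decision rather than a deploy.</p>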
<p>From a technical perspective, most of these approaches are the defaults with Solidus. Your store comes with a suite of tests and is ready to deploy to platforms that support frequent deploys. Out of the box, you’re ready to take advantage of this within your team.</p>
<p>The trick is that simply <em>doing</em> these things isn’t enough. You need to be <em>good</em> at them. You need to stop working with long-lived feature branches and other similar practices that stand in the way of proficiency in this zone.</p>
<p>Building these skills takes time. You can’t just drop in the practices and instantly reap the benefits. Fortunately, these techniques pair really well with eCommerce. Not every software project can support many changes per day without alienating its users.</p>
<p>The transactional nature of eCommerce means that’s much less of an issue. I wouldn’t start moving your cart button around every day, but the average user won’t even notice iterative improvements to your storefront. For most stores, it’s safe to deliver features as fast as you can build them. It’s up to the business when to launch them.</p>
<p>Code Review is another skill that you need to invest in, and that means more than having someone take a quick glance at code and approve it. It means learning how to structure PRs and commits to make them easier to review. It means soliciting and giving relevant kinds of feedback.</p>
<p>TDD and testing are a whole constellation of skills that relate to each other and all contribute to this proficiency. Continuous Integration is a skill. It means more than just having a CI server validating your changes. Feature flagging is a skill. You need to learn what can be feature flagged and when it should be. There’s overhead to feature flagging and you need to build the infrastructure to support it and learn when to use it. Finally, refactoring is its own domain; aggressive refactoring takes time to master, as does learning when to apply it.</p>
<h2>Optimizing</h2>
<p>Optimizing is the really hard part. This is the big differentiator for a lot of teams. If you are proficient here, then you are actually getting the supposed benefits of Agile. This means higher value deliveries and better product decisions. It’s about knowing what your market and business need and achieving that. It is way harder to achieve in eCommerce than the previous two zones.</p>
<p>The Agile Fluency Project puts it like this:</p>
<blockquote>
<p>The distinction between an Optimizing team and a Delivering team is that the Optimizing team makes its own decisions about what to fund and where to focus their efforts.</p>
</blockquote>
<p>In eCommerce, you’ve got all these different internal needs pulling you every direction, you still need to keep the customers in mind, and succeeding here is going to require a lot of trust between the engineering team and the rest of the organization. That level of trust has to be built up through the other zones.</p>
<p>If you’re going to succeed in eCommerce, you need your team to be working in sync with the other parts of the organization. I’ve encountered teams that were responsible for the bounce rate of certain landing pages, but they weren’t able to collaborate with the marketing team that managed them. They saw the bounce rate tank, but they had no technical solution because the marketing team had just sent a ton of unqualified traffic to those pages. These are the kinds of disconnects that need to be addressed to create proficiency here.</p>
<p>Fixing these kinds of problems typically requires actual structural change within organizations. Working from the outside as a consultant like myself, it’s relatively easy to achieve the first two zones. The broader organization needs to be on board for you to have any hope of achieving proficiency here.</p>
<p>Typically this involves integrating the engineering team into the other parts of the organization that they are impacting. This almost necessarily leads to better decision-making. It accelerates your ability to cancel low-value projects. Higher-value projects become much easier to identify and prioritize. It improves the whole organization’s ability to evaluate and prioritize work.</p>
<p>Because the engineering team is at the centre of all these different parts of the organization, they need to be given more autonomy. This can create pushback in many organizations.</p>
<p>We have no shortage of metrics in eCommerce and can try to use them to help smooth out this process, but it can be difficult to measure a potential change in conversion rate against some kind of fulfillment optimization.</p>
<p>The most important way to overcome pushback in your organization is to show off your wins. Keep an eye on where you’re benefiting from these structural changes and make sure that everyone knows about them. If stakeholders don’t see the value in the changes you’re making, they won’t see them as worth the effort.</p>
<h2>Strengthening</h2>
<p>The final zone is where teams find ways to strengthen their organization and make it more successful. I’ve seen teams that are able to do that, but this is the bleeding edge of Agile. I’m not going to lie and claim I know exactly how this fits into eCommerce. It’s going to take more time with more teams in this space to flesh out how to consistently succeed here.</p>
<h2>Beware of Bad Investments</h2>
<p>This model provides a way to look at what your team is doing and find where to invest in your process to have the biggest impact. The zones build on each other for a reason. Don’t ignore that.</p>
<p>If you haven’t mastered Focusing and get sign-off to spend a ton of effort getting your CI/CD infrastructure in place, you’re going to undermine trust in your team. Stakeholders don’t care how fast you can deliver features if they aren’t the features they want. You need that trust to succeed overall.</p>
<p>As you iterate on your process, you need to continuously evaluate where you’re investing, what benefits you’re seeing, and adjust as you go. In a way, you need to be Agile about your Agile process.</p>
<h2>Solidus is Perfect for Agile</h2>
<p>I don’t think there’s another platform that works as well as Solidus does for building a high-velocity, Agile eCommerce team. If you are comfortable with the processes and techniques above, then you should be implementing them out of the gate on your Solidus projects.</p>
<p>Solidus is designed specifically to support all of these practices. The biggest mistake you can make with Solidus (or any custom eCommerce platform) is to fail to capitalize on its strengths.</p>
https://jardo.dev/the-rubby-game-v0-1The Rubby Game v0.12024-07-18T00:00:00-07:00Jared Norman[email protected]I just finished playing the first iteration of a board game I created about building Ruby on Rails applications. It was actually kind of fun!<p>A while back, I picked up a copy of <a href="proxy.php?url=https://ted.dev/products/jitterted-tdd-game.html">JitterTed’s TDD Game</a>. The game models TDD’s inner loop. You follow the process of figuring out what the code should do, finding a way to test whether it did it, writing the test code, making correct predictions about the compilation and test failures, writing the production code, getting the test passing, and committing the work. It allows you to cut corners for short term advantages at the risk of wasting your efforts in the medium term. This got me thinking; what other programming ideas can we teach with board games?</p>
<p>I immediately started brainstorming ideas for a board game centred around Ruby, Rails, and testing. The ideas flowed quickly. Players (programmers) would have objectives to complete. The objectives are called “stories” and completing them is called “delivering” them. You’re rewarded with victory points, called “story points”.</p>
<p>To deliver stories, you commit code changes. Code changes come in three flavours: frontend, backend and test. Stories require some number of each kind of change. Players earn tokens that represent the different types of code changes, gather them into commits, and assign those to stories. When a story’s requirements are met, they deliver the story and earn the associated points. The first player to earn 20 story points wins the game.</p>
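<p>The core loop can be sketched in a few lines of Ruby. This is my own toy model, not the actual rules; the class and method names are invented for illustration:</p>

```ruby
# Minimal model of the delivery loop: a story requires some number of
# each kind of code change before it can be delivered for story points.
class Story
  attr_reader :points

  def initialize(points:, frontend:, backend:, test:)
    @points = points
    @required = { frontend: frontend, backend: backend, test: test }
    @committed = Hash.new(0)
  end

  # Assign a commit's worth of code change tokens to this story.
  def commit(changes)
    changes.each { |kind, count| @committed[kind] += count }
  end

  # A story is deliverable once every requirement is met.
  def deliverable?
    @required.all? { |kind, needed| @committed[kind] >= needed }
  end
end

story = Story.new(points: 3, frontend: 1, backend: 2, test: 1)
story.commit(frontend: 1, backend: 1)
story.deliverable? # => false; still short one backend and one test change
story.commit(backend: 1, test: 1)
story.deliverable? # => true; deliver it and bank the 3 story points
```
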
<p>Along the way, players play cards that modify the game. “Decision” cards provide benefits like additional code change tokens or reduced story requirements. “TDD”, “Continuous Integration” and “Pair Programming” are decisions. “Tech Debt” cards provide short term advantages to the player that plays them, but make things more difficult for everyone later on. I made tech debt cards like “Flaky CI” (which adds random chance when delivering stories) and “Use React” (which increases the frontend code change requirements of the next story).</p>
<p><img src="proxy.php?url=/assets/the-rubby-game.jpg" alt="The Rubby Game in progress"></p>
<p>A couple days ago, I tested the initial prototype of the game with some coworkers. The game quickly ground to a halt when everyone gained lots of short term advantages by playing Tech Debt cards, only to become unable to ship any stories due to the mountain of tech debt we’d just created. The solution was to do the hard work of paying it down to unblock our work and get moving again. To quote one playtester, “this is too real.”</p>
<p>While the core loop of the game seemed fun, things were way out of balance. Some cards that blocked other players’ work were totally useless. Stories weren’t worth enough points, so the game was going to take too long. Gaining code change tokens and committing them was <em>far</em> too hard, while shipping lots of tech debt was extremely easy. Keeping it easy to do shoddy work and difficult to do good work fits the theme, but the balance was too far off. Players were often unable to do anything useful on their turns not because of tech debt but due to bad luck. That’s not fun.</p>
<p>I’ve worked with the playtesters to figure out some ways to address many of the issues. We’re going to change how drawing and managing cards work. We’re going to make taking on new stories a separate activity, rather than having story cards clog up the main deck. We’re making it easier to manage resources and rebalancing the cards that interfere with other people’s work. Finally, we’re going to centralize the cards that affect the whole team so it’s easy to see at a glance what status effects are active.</p>
<p>I prototyped the initial version with physical cards, dice, and tokens. That was a fun exercise, but v0.2 will be created in Tabletop Simulator. Reworking the physical prototype would be too much work. Hopefully with a digital version we can iterate on it more quickly. It also allows us to playtest it with members of the team that aren’t here in town!</p>
<p>In a few weeks, we’ll playtest the next version with the changes I outlined above, alongside new cards that focus on Ruby and Rails-specific ideas and tradeoffs. So far, this has been a really fun experiment and I’ll keep posting about the progress on it! Huge thanks to <a href="proxy.php?url=https://ted.dev/">JitterTed</a> for the inspiration!</p>
https://jardo.dev/system-tests-are-totally-fineSystem tests haven't failed2024-06-18T00:00:00-07:00Jared Norman[email protected]What's with all the buzz about system tests in the Ruby on Rails community right now?<p>The topic of system tests (a.k.a. end-to-end tests or feature tests) is making the rounds right now, and I'm a bit confused by it. They've always been a bit of a contentious topic. I've seen people say they <em>only</em> write system tests. I've seen people say they don't write <em>any</em> system tests. I don't want to get called a "centrist", but the best approach is definitely somewhere in between those two poles.</p>
<p><a href="proxy.php?url=https://world.hey.com/dhh/system-tests-have-failed-d90af718">System tests have failed</a></p>
<p>DHH started the conversation by announcing that system tests have "failed". He claims that he has seen very little benefit to having a large suite of system tests. He reports having spent too much time getting such tests to work for the minimal benefit he has seen from them.</p>
<p>I don't doubt any of that. System tests <em>are</em> harder to maintain than any other kind of test. He's entirely right that these tests are prone to false negatives and browser timing issues unless they are very carefully written. I agree that debugging these tests is much more difficult than other kinds of tests.</p>
<p>For once, I'm mostly on the same page as David. In fact, part way through the article, we find this statement:</p>
<blockquote>
<p>System tests work well for the top-level smoke test.</p>
</blockquote>
<p>This is true. System tests work best when they're just making sure that things <em>vaguely</em> work. You don't want to be testing the details of anything unless you've got critical business logic that can't be tested at other levels of the system. This all comes back to the testing pyramid.</p>
<p><a href="proxy.php?url=https://martinfowler.com/articles/practical-test-pyramid.html#TheTestPyramid">The Practical Test Pyramid</a></p>
<p>The higher-level (and slower) a test is, the fewer of that kind of test you should have and the more general that test should be. The fact that "HEY today has some 300-odd system tests" is a red flag. That's not the most ridiculous number of system tests I've encountered, but my gut says that's too many for that size of application. These are the tests you should be using the most sparingly.</p>
<p>Especially when you have a good test harness in place, it can be easy to reach for them when adding new functionality. You're adding a variation of some existing feature, so you encode the new functionality in a variation of an existing test. This gives you immediate, automated feedback as you work through building your feature.</p>
<p>The problem is that you don't want every variation tested at that level. Instead you have two options.</p>
<p>You can simply not write the system test. Instead, home in on where you can exercise this functionality in more localized (and faster) tests. This works best if you know the system well.</p>
<p>Alternatively, you can write the system test and not commit it. It is totally fine to write a system test to aid you in developing a feature and then throw that test away. Whether or not a test is useful to you now is not the same as whether the test is valuable to the project forever.</p>
<p>This is the same idea behind why REPLs are great. It's why prototypes are great. It's why iterating on design is great. There are thousands of reasons to write all kinds of code that's useful to you in the moment but ultimately doesn't need to be carried forward and maintained. That goes for tests too.</p>
<p>So, if you've got too many system tests, what do you do? You can delete them, but part of the hypothetical benefit of these tests is that they give you confidence that you haven't broken anything. If you wholesale delete them without examining the lower-level tests for the logic they exercise, you may be making your system more brittle. We want to keep the confidence without keeping the slow tests.</p>
<p><a href="proxy.php?url=https://everydayrails.com/2024/06/01/replacing-system-tests-with-unit-tests.html">Replacing system tests with unit tests | Everyday Rails</a></p>
<p>Aaron Sumner argues that you should make sure you keep the tests that provide you the most value, and that means the ones that test the most important parts of your application. I work in eCommerce, so my system tests are focused around the core purchase flow and the checkout. We <em>do</em> test variations in there, because we <em>need</em> to know that all of those variations work every time we make a change to the site.</p>
<p>Aaron also points out that code coverage tools can be useful for finding the gaps left behind when you remove feature tests. This is a great idea and one of the few places I've seen code coverage used to solve a real, practical problem.</p>
<p>The article is a little reductive, though. I agree that when testing ActiveRecord models the database becomes part of the unit, but the testing world is richer than the article makes it seem. You shouldn't necessarily be replacing your system tests specifically with unit tests. Instead, you should use a mix of integration and unit tests, depending on your needs.</p>
<p>System tests haven't failed; they're just overused. Writing maintainable, reliable system tests is an admittedly difficult skill, but luckily you don't need many of them, so a little investment goes a long way. Put some effort into the most important ones to make sure they are reliable. Go ahead and axe the variations and flaky system tests. I'm sure you've picked up a bunch of slow tests that aren't providing you any value. Never keep tests that aren't providing value.</p>
https://jardo.dev/in-defence-of-gerritIn Defence of Gerrit2024-03-13T00:00:00-07:00Jared Norman[email protected]I'm not saying we should all use Gerrit, but I am saying that using Gerrit early in my career made me a much better programmer.<p>The first team I joined in the tech industry was not doing what I understood modern software development to be. There was no CI/CD. There wasn’t a single automated test. They’d only recently adopted Git, but were using it primarily as a big save button. Some project histories were just a long string of arbitrary commits with the message “EOD”.</p>
<p>I was in my early twenties. I’d been programming actively for about ten years, but only as a hobbyist or in school. I had no industry experience. I’d been consuming all the programming material I could find to help me land a job. I was alarmed by the complete absence of “best practices”, but in no position to lobby for changes.</p>
<p>I eventually pushed them to change some of that, but in retrospect, what they were doing wasn’t nearly as bad as I thought at the time. The kind of work they did didn’t need robust code review. CI/CD would have been somewhere between difficult and impossible. What they were doing was fine for their needs, but I didn’t have the maturity to know that at the time. All I knew was that it flew in the face of what I had read a modern development process was.</p>
<p>My next job was the opposite. My time at what was then called “FreeRunning Technologies” (later renamed to “Stembolt”) shaped the developer I am today more than any other experience I’ve had.</p>
<p>My first real project at the company was integrating a Japanese payment gateway called GMO with a platform that would eventually power payments on Steam (yeah, that Steam) when it launched in Japan. No pressure for someone that had built his first production Rails app only months earlier.</p>
<p>I don’t recall any kind of project planning software or explicit process. There were no regular client meetings. There were just conversations with people that knew what the software needed to do, and then there was code review. Oh boy, was there code review.</p>
<p>We used <a href="proxy.php?url=https://www.gerritcodereview.com/">Gerrit</a> on the project. To this day, I have mixed feelings about Gerrit. It was horrible to use, horrible to look at, and horrible to explain to people. The workflow it enabled, on the other hand, was great.</p>
<p>I was expected to push up atomic commits that would be individually reviewed by more experienced developers. They could leave detailed feedback. I could then push up new versions of those commits, and Gerrit tracked them as separate versions of that commit. Reviewers could see what changed from my previous version of the commit and verify that I’d correctly addressed the feedback. If it took multiple rounds, you could diff arbitrary versions of a single commit to understand what had changed.</p>
<p>Gerrit technically supported something like branches, but we only used the feature when we absolutely had to. Every commit I authored had to pass CI before being reviewed, merged, and deployed individually. I knew nothing except continuous delivery. Everything was done incrementally. The only way to develop features was iteratively.</p>
<p>Rigid practices can teach you a lot. When I used <a href="proxy.php?url=https://jardo.dev/5-days-of-tcr-in-ruby">Test && Commit || Revert to do some Advent of Code puzzles</a>, I was never planning to keep TCR as a part of my daily practice. That wasn’t the goal. That exercise was a spiritual continuation of my experience with Gerrit, pushing me to find ways to work even more incrementally.</p>
<p>I look back on my time using Gerrit the same way that I look back on choosing C as my first programming language. I don’t generally recommend it to others, but I believe it made me a better software developer in the long run.</p>
<p>I miss Gerrit, even if I don’t actually want to use it again. I didn’t know it at the time, but it was teaching me things about developing software that are difficult to learn any other way and for which there are few (if any) good resources. Internalizing this approach has impacted every line of code I’ve written since. <strong>Incremental is smooth. Smooth is fast.</strong></p>
https://jardo.dev/introducing-dead-codeIntroducing Dead Code2024-03-12T00:00:00-07:00Jared Norman[email protected]I'm sorry. I started a podcast. Please subscribe to it.<p>It was only a matter of time before the disease that gets everyone in my demographic got me too. I’ve started a podcast. I know, I know, but the software industry needs me. I couldn't help myself.</p>
<p>One team’s best practices are another’s anti-patterns. The TDD debate continues with no end in sight. Agile might be dead, but it might still be alive and well, just divorced from its name. Computer science academia is totally disconnected from software development, and bootcamps have tried to improve on that by, uh, exploiting market conditions or something.</p>
<p>That’s where <a href="proxy.php?url=https://deadcode.website">Dead Code</a> comes in. Through conversations with people across the software world, we’re going hunting for our industry’s best ideas. When we find them, we’ll try to figure how we can apply them ourselves, on our teams, in our organizations, and to the industry itself. Better things <em>must</em> be possible.</p>
<p>If that sounds like a good time to you, head to <a href="proxy.php?url=http://deadcode.website">deadcode.website</a> to subscribe. There’s already a preview episode in the feed and our first interview is coming soon. If you know someone who I should interview on the podcast, shoot me a message.</p>
<p>Now go delete some <a href="proxy.php?url=https://deadcode.website">Dead Code</a>.</p>
https://jardo.dev/goodbye-pivotal-trackerGoodbye, Pivotal Tracker2024-03-11T00:00:00-07:00Jared Norman[email protected]Pivotal Tracker is shutting down. It might not have been perfect, but I'll still miss it.<p>I just learned (through <a href="proxy.php?url=https://dwf.bigpencil.net/why-do-we-miss-pivotal-tracker/">this article</a>) that Pivotal Tracker is shutting down for everyone except enterprise customers. I’ve been using Pivotal Tracker for only slightly less time than I’ve been working with Rails. In the beginning I didn’t appreciate it. It was ugly. I didn’t understand the words it used. It was rigid. Eventually that changed.</p>
<p>Bugs didn’t get pointed, which made the team extremely aware of the cost of defects. Chores didn’t either, and that urged our team to break things down into user-oriented functionality, so that it would “count”. The “automatic” sprint planning was imperfect, but it tempered our unrealistic expectations of what we could get done. You could even create releases and it would predict if they were on track.</p>
<p>The tool was far from perfect. Compared to Linear, its GitHub integration was extremely limited. The UI felt stuck in the 90s and the UX was awkward. Pointing tickets isn’t for everyone, and Pivotal Tracker focused too hard on projecting estimates. Planning larger projects wasn’t great. I can’t imagine using it on a truly large team.</p>
<p>In retrospect, I appreciate its opinionated design. It wasn’t trying to be everything to everyone. It was trying to tell you how to do software development, and it made deviating from that awkward. I hope that someday someone makes another Pivotal Tracker, except it has <em>my</em> opinions.</p>
https://jardo.dev/structure-and-purposeStructure and Purpose2024-02-28T00:00:00-08:00Jared Norman[email protected]The structure of the code you write can communicate your intentions, making it easier to understand for other developers.<p>There are infinitely many ways to evaluate the design of software. We have code smells like feature envy and primitive obsession. TDD people focus on coupling and cohesion. Automated tools can measure cyclomatic complexity. You can get into style and aesthetics.</p>
<p>This article explores how the structure of your code can communicate its purpose. It covers tips and techniques for making your code look like it does what you intend it to. If you want to write code that other developers will intuitively understand, then read on.</p>
<p>I'm a Rubyist This article uses Ruby in its examples and some conventions are Ruby-specific, but the concepts apply to most languages.</p>
<h2>The Worst Fizz Buzz Solution</h2>
<p>As a counterexample, let’s consider a piece of code where the purpose of the code (and the programmer’s intent for it) has been intentionally obfuscated. This is a solution to a programming puzzle that Shopify presented at RubyConf 2022.</p>
<pre><code class="ruby">def fizzbuzz(upper_limit)
Range.new(0, upper_limit).each do |n|
if n % 15 == 0
puts "FizzBuzz"
elsif n % 3 == 0
puts "Fizz"
elsif n % 5 == 0
puts "Buzz"
else
puts n
end
end
end
def Range.new(_, upper_limit)
CSV.parse(upper_limit, headers: true).tap do |csv|
def csv.each(&block)
@h = Hash.new { |h, k| h[k] = [] }
super do |row|
def row.%(other)= 1
instance_exec(row, &block)
end
@h
end
def csv.puts(row)
a = @h[row["url"]]
s = [row["filename"], row["size in kilobytes"].to_i]
if a.empty? || a.last.sum(&:last) + s.last > 120000
a << [s]
else
a.last << s
end
end
end
end
puts fizzbuzz(URI("https://gist.githubusercontent.com/Schwad/9938bc64a88727a3ab2eaaf9ee63c99a/raw/6519e28c85820caac2684cfbec5fe4af33f179a4/rows_of_sweet_jams.csv").read)
</code></pre>
<p>The first method should give you the impression that you are looking at a solution to <a href="proxy.php?url=https://en.wikipedia.org/wiki/Fizz_buzz">Fizz Buzz</a>. In reality, this code downloads a Gist, interprets it as a CSV of song data, and groups the songs by website, dividing them to fit on CDs.</p>
<p>Even if you read the second method (<code>Range.new</code>), the intent is extraordinarily unclear. The variables mostly have single-letter names, there are arbitrary constants (i.e. <code>120000</code>, <code>1</code>) and a number of methods get redefined. The asshole who wrote this code (me) left very few clues as to what this code does and left <em>many</em> clues that suggest it does something other than what it does.</p>
<p>Hopefully, we can all agree that this code doesn’t express its intent clearly. If you found code like this in a production system, you’d be justified in reaching for <code>git blame</code> to determine who needs to be held responsible (or at least who can explain what it does).</p>
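<p>For contrast, here’s roughly what an honest version of that logic might look like. This is a sketch: the method and variable names are mine, and I’ve inlined sample data instead of fetching the Gist, but the grouping rule (pack each site’s songs onto “CDs” of at most 120,000 kilobytes) matches the obfuscated original:</p>

```ruby
require "csv"

MAX_CD_KILOBYTES = 120_000

# Groups songs by source site, then packs each site's songs onto
# CDs so that no CD exceeds the size limit.
def songs_per_cd(csv_text)
  cds_by_site = Hash.new { |hash, site| hash[site] = [] }
  CSV.parse(csv_text, headers: true).each do |row|
    song = [row["filename"], row["size in kilobytes"].to_i]
    cds = cds_by_site[row["url"]]
    if cds.empty? || cds.last.sum { |(_, size)| size } + song.last > MAX_CD_KILOBYTES
      cds << [song] # current CD is full (or there isn't one yet); start a new CD
    else
      cds.last << song
    end
  end
  cds_by_site
end

sample = <<~CSV
  url,filename,size in kilobytes
  a.example,one.mp3,70000
  a.example,two.mp3,60000
  a.example,three.mp3,50000
CSV

songs_per_cd(sample)
# "one.mp3" fills most of the first CD, so the other two songs
# (60,000 + 50,000 = 110,000 KB) share a second one.
```

<p>Whether or not you like this structure, at least the names and the extracted constant tell you what the code is for.</p>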
<p>It’s easier to understand code that does what it looks like it does. Some people will say that it requires less “cognitive load” to work with. Let’s explore some of the aspects of our code that contribute to this effect.</p>
<h2>Naming</h2>
<p>Naming is half of the <a href="proxy.php?url=https://martinfowler.com/bliki/TwoHardThings.html">two hard problems in computer science</a>. As such, I cannot provide a comprehensive guide to the expansive subject, but most developers recognize its role in building maintainable software. Names are your primary tool for communicating the purpose and intent of the code you write, but many considerations go into choosing them.</p>
<p>Our counterexample named a method that processed playlist data “fizzbuzz”. If you’re familiar with Fizz Buzz, you would assume that the method is a solution to it. Reading the method’s body would (erroneously) confirm this. This is an absurdly bad name, but what makes a good name?</p>
<p>What makes a good name varies based on context; many different aspects might be more or less important on your project. Generally, concise names are preferred, but not at the expense of specificity. Names should be specific enough to avoid ambiguity and avoid conflation with similar, related concepts.</p>
<p>Names can communicate purpose. In a section of code that updates the title of a blog post, you could store the new title of the blog post in a variable called <code>title</code>, or <code>post_title</code>, or <code>new_post_title</code>, or <code>new_title</code>. Except for the first, they all communicate (and highlight) different aspects of the value which might be relevant to the programmer.</p>
<p>There’s no right answer. <code>new_title</code> might make more sense in a module that only deals with blog posts or if the old title is also referenced. <code>new_post_title</code> might make sense in a module that isn’t focused on blog posts, where the title change is part of some larger operation.</p>
<p>Like I said, naming is too broad a topic to be covered fully here. The important takeaway is that you can express intent with names. In this example, the <code>new_</code> prefix communicates that the value will be used to replace another title, even before you read the code that performs that operation.</p>
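<p>As a minimal sketch of that idea (the <code>rename_post</code> method and its hash-based post are my inventions for illustration, not code from the counterexample), the <code>new_</code> prefix does its work before the reassignment is even read:</p>

```ruby
# Hypothetical illustration: "new_title" signals that this value will
# replace the existing title, even before the assignment below is read.
def rename_post(post, new_title)
  old_title = post[:title]
  post[:title] = new_title
  old_title
end

post = { title: "Hello" }
rename_post(post, "Hello, World")
# post[:title] is now "Hello, World"
```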
<h2>Brevity</h2>
<p>While <a href="proxy.php?url=https://en.wikipedia.org/wiki/Code_golf">extreme brevity</a> tends to obfuscate code, appropriately concise code helps spotlight important details. The previous section dealt with how you can communicate purpose by including significant details when naming constants, variables, methods, functions, and classes. Excluding extraneous details helps highlight the significant details that you <em>do</em> choose to include. </p>
<p>The Fizz Buzz counterexample is unnecessarily verbose and contains an excessive amount of incidental complexity. Despite the code’s relatively simple function, you have to sift through conditional branches that are never run and indirection that serves no value. Determining which parts of the code are meaningful is unnecessarily hard.</p>
<p>Most code won’t be quite so obnoxious. In my talk <a href="proxy.php?url=https://jardo.dev/tdd-on-the-shoulders-of-giants">TDD on the Shoulders of Giants</a> I used an example where an argument was named <code>email_notifier</code> when the <code>email</code> part was actually an extraneous detail; the notifier object was really a <code>shipment_notifier</code>. In retrospect, that name wasn’t great either, but at least it didn’t encode the irrelevant (to the consuming code) information that the notification would be sent by email.</p>
<p>When naming something, consider what information is relevant where the name will be referenced. Avoid long names that contain details that are either liable to change or simply not important.</p>
<h2>Abstraction</h2>
<p>Concise names are great, but concise implementations are even more important. This is, besides naming, perhaps the most powerful tool at your disposal for making code look like it does what it does.</p>
<p>There are many philosophies of how to approach breaking your code down into useful abstractions, <a href="proxy.php?url=https://en.wikipedia.org/wiki/Domain-driven_design">Domain-Driven Design</a> being the most well-known. The topic is too broad to cover in detail here, but it’s still important to discuss.</p>
<p>The Fizz Buzz example uses abstractions (”Fizz Buzz”, ranges, modulo division, <code>puts</code>) that have absolutely nothing to do with parsing song data and splitting it up onto CDs. A better implementation might have employed abstractions like songs, playlists, and CDs. These objects could have been given methods that mapped to the coherent operations, like adding a song to a CD.</p>
<p>More apt abstractions not only communicate the purpose of the code more clearly, but provide an opportunity to use abstraction to hide the details of the implementation, leaving concise top-level code that makes the broad intent of the code clear. Consider this alternate solution. (I’ve omitted the class implementations for clarity.)</p>
<pre><code class="ruby">songs = Song.from_csv("https://gist.githubusercontent.com/Schwad/9938bc64a88727a3ab2eaaf9ee63c99a/raw/6519e28c85820caac2684cfbec5fe4af33f179a4/rows_of_sweet_jams.csv")
cd_collection = CDCollection.new
songs.each_song do |song|
  cd_collection.add_song(song)
end
puts cd_collection.to_h
</code></pre>
<p>I don’t claim this is the ideal solution, but it makes it clear that the code loads songs from a CSV, loops over them, and adds them to a collection. You might feel that this is enough detail for the top-level, but you might also want to make it clear that the songs are being grouped by website. It would be relatively easy to tweak the structure of this code to surface that detail.</p>
<p>The abstractions you use not only serve to explain the domain of your application to other programmers, but also offer the opportunity to highlight what operations are significant and which are mere details. Place important details at the top and push implementation details down the object tree and into private methods.</p>
<h2>Colocation and Ordering</h2>
<p>Putting things together implies that on some level they are related. For example, you can arrange the assignment of dependent variables so that they are consecutive. When the developer reads the first and sees that the second depends on the first, they can infer that subsequent consecutive statements likely also depend on the computations before them. This takes advantage of both colocation and ordering.</p>
<p>The idea follows for not just variables and computations, but also things like methods. If a class has many methods, you can colocate methods that are meant to be used together and even place them in the order that they are meant to be used.</p>
<p>Colocation and ordering only really hint at the programmer’s intent, but they are a nice way to nudge readers towards the information that they’re looking for in the code. Being consistent with the technique can result in codebases where things seem to be where you expect them, without necessarily realizing why.</p>
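<p>A minimal sketch of the variable case (the pricing names here are invented for illustration): keeping dependent assignments consecutive and in order lets the reader infer the chain of computation at a glance.</p>

```ruby
# These three assignments form a dependency chain, so they are kept
# together and in order: each line builds on the one above it.
base_price = 100.0
tax        = base_price * 0.05
total      = base_price + tax

# An unrelated value is kept apart from the chain; placing it between
# the lines above would wrongly imply it participates in the pricing.
receipt_footer = "Thanks for your order!"
```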
<h2>Primitives and Syntax</h2>
<p>Syntactic constructs have intended purposes. Loops are for iterating over things. <code>if</code> statements are for conditional logic. Modules are for organizing related functionality. Pattern matching is for matching patterns. Especially in Ruby, you <em>can</em> use these constructs for other things, but you’re <a href="proxy.php?url=https://en.wikipedia.org/wiki/Principle_of_least_astonishment">violating the principle of least astonishment</a> and obfuscating the purpose of your code.</p>
<p>Loops are a general-purpose primitive that gets used to perform specific actions which can be expressed more directly. In most languages, you can procedurally build a new array using a loop, but you could clarify your intent by instead using <code>map</code> and <code>filter</code>. You can loop over an array and break out once you find the value you’re looking for, but you could simply call <code>find</code> instead. In both cases the latter better expresses the purpose of the code.</p>
<p>The same idea goes for values. Primitive values all have their own semantics and interfaces. In Ruby, you could represent the genre of songs in a playlist with numbers, but symbols, strings, or your own value object will probably better reflect the operations that make sense for those values.</p>
<p>Using (or creating) constructs that reflect your domain helps clarify the purpose of your code. Using syntax, primitives, and types that are unnecessarily general makes your intent less clear. This leads us to the broader topic of specificity.</p>
<h2>Specificity</h2>
<p>Rails adds the concept of presence/blankness to Ruby. You can ask whether any value is present or blank, which are opposites. Empty strings, empty arrays, empty hashes, <code>false</code>, and <code>nil</code> are blank. All other values are not blank; they are present. The purpose of this feature is primarily to handle uncertain and heterogenous user inputs.</p>
<p>It’s very useful, but it’s also overused. It’s common to see a conditional like this in a Rails codebase.</p>
<pre><code class="ruby">if fido.present?
  # Do something with fido
end
</code></pre>
<p>There’s nothing intrinsically wrong with this code. The code tells us that <code>fido</code> might be some kind of “blank” value and that we only want to execute the logic in the conditional if it’s not.</p>
<p>This is totally reasonable, but what if the developer who wrote this code <em>knew</em> what types of values <code>fido</code> might hold, and that it was always either an instance of <code>Dog</code> or <code>nil</code>. The developer could structure the code to reflect that.</p>
<p>There are a few options. You could simply change the first line to <code>if fido</code>. Most Rubyists would recognize that for what it is. If you wanted to be really explicit, you could write <code>unless fido.nil?</code>. Both give the reader a much better idea of the possible values of <code>fido</code>.</p>
<p>Using unnecessarily general functionality has benefits in terms of flexibility, but downsides when it comes to communicating intent. Excessively defensive code gives the impression that it <em>needs</em> to be defensive for some reason.</p>
<h2>Conventions</h2>
<p>Another way to communicate your intent is using conventions. Conventions can come from your language, framework, community, or project. Good conventions used well can help to compress a large amount of information about the author’s intent into a small token.</p>
<p>For example, projects that use Ruby’s RSpec testing framework often format the names of test contexts to indicate whether the method under test is an instance method or a class method using a single-character prefix; <code>#foo</code> is an instance method and <code>.bar</code> is a class method. Just by reading the first character, you’ve already gained context on what’s under test.</p>
<p>Not following conventions has the opposite effect. In Rails, the public methods on controllers are normally “actions”, methods meant to service HTTP requests, but nothing stops you from adding public methods that aren’t used as actions.</p>
<p>By violating conventions, you’re liable to confuse someone reading your code later. The structure of the code misleads the reader. Someone will eventually look at the public method and try to understand it as an action. At least briefly, you’ll have confused them.</p>
<p>Many frameworks and languages come with their own conventions, but you can create your own too. Teams and projects can agree on and document their own conventions to more richly communicate domain-specific information that would otherwise be cumbersome to communicate. Be careful here, though; there’s an onboarding and maintenance cost associated with this practice.</p>
<p>Whatever the conventions at play, follow them wherever you can. Break conventions only when the break is meaningful, and document breaches appropriately.</p>
<h2>Cleverness Considered Harmful</h2>
<p>“Clever” code is code that performs some action in a unique or unexpected way. It is the opposite of what we’ve discussed here. Cleverness is always bad unless the <a href="proxy.php?url=https://www.ioccc.org/">goal</a> is to show off how clever you are.</p>
<p>The <a href="proxy.php?url=https://en.wikipedia.org/wiki/XOR_swap_algorithm">XOR swap algorithm</a> is an example of this. The algorithm allows you to swap the values of two variables without an intermediate variable by repeatedly applying the “exclusive or” operation on them.</p>
<p>To someone unfamiliar with this technique, code that uses this technique will appear to be some kind of bitwise computation. The alternative, using an intermediate variable, not only looks like a simple reassignment, but even offers you the opportunity to name that intermediate variable, making your intent clearer.</p>
<p>For the sake of future readers of your code, avoid unnecessary cleverness. Being clever obscures both what your code does and what it’s trying to do from other developers.</p>
<p><strong>Further reading:</strong> Josh Comeau has an excellent article titled <a href="proxy.php?url=https://www.joshwcomeau.com/career/clever-code-considered-harmful/">Clever Code Considered Harmful</a> that is well worth a read if you want to explore this idea further.</p>
<h2>Code is a conversation</h2>
<p>Some of the aspects listed here have only minor impacts on how easy your code is to understand to other developers, but others are more significant. Together they add up. It can be hard to put your finger on why, but code that expresses its intent and purpose well feels different. You can understand it intuitively, independent of its complexity.</p>
<p>Conversely, code that fails at communicating its purpose is hard to follow and work with. It’s frustrating, and the fact that it’s difficult to tell why compounds the frustration.</p>
<p>How well your code expresses its purpose and your intent is only a single lens through which you can evaluate a design. It’s an aspect worth considering, but no less so than performance, or domain-modelling, or <a href="proxy.php?url=https://en.wikipedia.org/wiki/SOLID">SOLID</a>, or coupling and cohesion. Different aspects of design matter more or less depending on the context.</p>
<p>If you’re looking to write code that’s easier to understand for the next person, consider what your code looks like it does, which information it highlights rather than hides, and how specifically it expresses your intent. Code isn’t just commands for a computer; it’s also a conversation with other developers. Make it a clear conversation.</p>