<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>RyanNickel.com</title>
    <link>http://RyanNickel.com/</link>
    <description>Ryan Nickel's thought poops</description>
    
		<item>
			<title>Friction and Quality</title>
			<link>http://RyanNickel.com/html/friction_and_quality.html</link>
			<description>
				<![CDATA[
					<p>My threshold for friction is low. That&rsquo;s not a complaint &ndash; it&rsquo;s one of my most useful signals of quality.</p>

<p>Most conversations about software quality gravitate toward things that are measurable: test coverage percentages, cyclomatic complexity scores, deployment frequency. These are fine proxies. But I&rsquo;ve found another reliable signal &ndash; how much friction I feel when I go to change something.</p>

<p>My personal threshold for friction is low. Annoyingly low, some might say. If I open a file and have to scroll past 400 lines before I understand what it does, that&rsquo;s friction. If adding a feature means touching five separate modules in sequence, hoping I don&rsquo;t miss one, that&rsquo;s friction. If the local dev setup requires following a seven-step README, two of which are outdated, that&rsquo;s friction. None of these are catastrophic individually. Together, they&rsquo;re a sign of rot.</p>

<p>Quality isn&rsquo;t what the code does when everything goes right. It&rsquo;s how hard the code <em>fights</em> you when you need to change it.</p>

<p>The subtle thing about friction is that it accumulates slowly and invisibly. Each shortcut is defensible in isolation. The test you skipped because the deadline was real. The abstraction you deferred because the requirement felt unstable. The naming you left vague because you&rsquo;d come back to it. You always intend to come back. You rarely do. The debt compounds quietly, and then one day a routine change takes three times as long as it should, and you can&rsquo;t quite say why.</p>

<p>I like to think about maintainability less as a property of code and more as a property of the future developer experience (that future developer is usually me, three months later, with no memory of why I made any of these decisions). Writing with empathy for future me is a forcing function. It pushes toward smaller functions, honest naming, and architecture where the seams are obvious. Not because some style guide said so, but because future-me is going to feel it if present-me is lazy.</p>

<p>This is why I&rsquo;ve come to treat my own friction response as a legitimate quality metric. When something feels annoying to work with, I try to resist the urge to push through and instead ask why it&rsquo;s annoying. Usually the answer points directly at something worth fixing &ndash; a missing abstraction, a leaky boundary, a function doing too much. The irritation is diagnostic.</p>

<p>Good software doesn&rsquo;t just work. It stays workable. It stays workable when the requirements change, when the team changes, when the context you had in your head six months ago is completely gone. That durability doesn&rsquo;t come from clever architecture alone. It comes from a consistent, almost stubborn refusal to let friction slide &ndash; catching it early, naming it honestly, and fixing it before it fossilizes into the codebase.</p>

<blockquote>
<p>How you do one thing is how you do everything.</p>
</blockquote>

<p>That applies here. The developer who lets a confusing variable name slide is the same one who lets an unclear API boundary slide. The team that skips the test because it&rsquo;s &ldquo;just a small fix&rdquo; is the same team that ships the outage. Standards aren&rsquo;t rules &ndash; they&rsquo;re habits. And habits show up everywhere, whether you&rsquo;re watching or not.</p>

<h2>The AI Angle</h2>

<p>I&rsquo;ve been thinking lately about how this threshold holds up in the age of AI coding. AI coding tools are genuinely useful. I use them. But they have a particular failure mode that maps directly onto everything above: they generate plausible code faster than you can evaluate it. The friction of writing is gone. The friction of <em>understanding</em> what was written is very much still there.</p>

<p>If your standard before was &ldquo;I&rsquo;ll clean this up later,&rdquo; AI gives you ten times the surface area of later to deal with. It will happily produce a 200-line function, inconsistent naming, and an abstraction that almost makes sense &ndash; and it will do it confidently, in seconds. The output isn&rsquo;t wrong enough to reject immediately. It&rsquo;s just subtly not right, in ways that <em>compound</em>.</p>

<p>This is why a low friction tolerance might be <em>more</em> valuable now, not less. The bottleneck has shifted. You&rsquo;re no longer constrained by how fast you can type &ndash; you&rsquo;re constrained by how fast you can think critically about what&rsquo;s in front of you. Developers who trained themselves to feel friction, to notice when something is harder to follow than it should be, have a real edge here. They&rsquo;ll catch the AI&rsquo;s lazy abstractions. They&rsquo;ll push back on the generated code that technically works but quietly makes the next change harder.</p>

<p>The old excuse was that there wasn&rsquo;t time to do it right. AI has largely killed that excuse. There&rsquo;s now almost always time to do it right &ndash; the question is whether you&rsquo;ve built the habit of caring. How you do one thing is how you do everything. If you accepted slop before, you&rsquo;ll accept AI slop now, just faster and at greater volume.</p>

<p>Low friction tolerance is a feature. In the age of AI, it might be the feature.</p>

				]]>
			</description>
			<pubDate>Sun, 05 Apr 2026 20:20:11 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/friction_and_quality.html</guid>
		</item>
	
		<item>
			<title>How AI Amplifies NPM Supply-Chain Risk</title>
			<link>http://RyanNickel.com/html/how_ai_amplifies_npm_supply-chain_risk.html</link>
			<description>
				<![CDATA[
					<p>Further to my previous post about the <a href="/html/axios_comprimised_the_case_for_pinning.html">compromised axios npm package supply-chain attack</a> — and my view that <a href="/html/how_npm39s_defaults_set_you_up_for_failure.html">npm’s defaults can set projects up for failure</a> — I wanted to jot down how AI can exacerbate the risk.</p>

<p>Agents can now churn out code and features faster than ever, and can run 24/7. That means even a brief window of package compromise can have a much larger blast radius.</p>

<p>The axios issue was live for only a few hours in the middle of the night in my time zone. The team was asleep, so we weren’t exposed. But as teams increasingly rely on unattended AI, that risk grows exponentially, giving bad actors more leverage.</p>

<p>It’s a reminder that guardrails matter more than ever: tighter dependency policies, version pinning, lockfile enforcement, audit automation, and human review for critical updates.</p>

				]]>
			</description>
			<pubDate>Wed, 01 Apr 2026 22:31:32 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/how_ai_amplifies_npm_supply-chain_risk.html</guid>
		</item>
	
		<item>
			<title>Axios Compromised: The Case for Pinning</title>
			<link>http://RyanNickel.com/html/axios_comprimised_the_case_for_pinning.html</link>
			<description>
				<![CDATA[
					<p>This morning I woke up, opened my usual tech news feed, and the first thing staring back at me was an axios supply-chain compromise. For details, see <a href="https://thehackernews.com/2026/03/axios-supply-chain-attack-pushes-cross.html">Axios supply-chain attack pushes cross-platform malware via npm</a>. It is also a reminder that supply-chain risk is not theoretical. Your build is only as safe as the newest version you allow in.</p>

<p>That is why I keep arguing for exact versions in <a href="/html/how_npm39s_defaults_set_you_up_for_failure.html">How NPM&rsquo;s Defaults Set You Up for Failure</a>. This is another case study. It is not just the packages you chose; it is also the transitive dependencies you inherited. When you rely on semver ranges like <code>^</code> or <code>~</code>, you are opting into silent updates. In the real world, those updates are not always safe or even predictable.</p>

<p>If you are going to keep ranges anyway, add friction to newly published releases. The latest npm includes a <code>--min-release-age</code> flag you can set in your <code>.npmrc</code> so packages must be at least that old before they are eligible for install. It is a small guardrail, but it gives the ecosystem time to surface problems before they land in your CI.</p>
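<p>As a sketch, that guardrail could live in a project-level <code>.npmrc</code>. The key name and the seven-day value below are my illustrative assumptions based on the flag name, so check the npm docs for the exact syntax before relying on it:</p>

```ini
; Illustrative: require a package release to be at least this old
; before npm will consider it eligible for installation.
min-release-age=7d
```

<p>Committing that file to the repository applies the rule to every developer and CI run, not just one machine.</p>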

<p>Pin what you can, review what you must, and add guardrails where you will not. Supply-chain safety is the sum of boring, repeatable habits.</p>

				]]>
			</description>
			<pubDate>Tue, 31 Mar 2026 23:06:05 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/axios_comprimised_the_case_for_pinning.html</guid>
		</item>
	
		<item>
			<title>Mole: The Utility I Didn’t Know My Mac Needed</title>
			<link>http://RyanNickel.com/html/mole_the_utility_i_didnt_know_my_mac_needed.html</link>
			<description>
				<![CDATA[
					<p>I recently discovered a small macOS utility called <a href="https://github.com/tw93/Mole">Mole</a>, and it very quickly earned a permanent place in my setup.</p>

<p>I first heard about it while listening to an episode of the <a href="https://changelog.com/">Changelog</a> podcast titled <a href="https://changelog.com/friends/124">Kaizen! Let it crash</a>. One of the hosts casually mentioned Mole as a tool they’d had great luck with, and that was enough to send me digging. I’ve tried plenty of Mac “cleanup” utilities over the years. Some are beautiful. Some are functional. Most eventually hit you with a paywall or subscription right when you actually need them.</p>

<p>Mole felt different almost immediately.</p>

<p>My <strong>M1 MacBook isn’t slow</strong>—not even close. Apple silicon still holds up incredibly well. But after years of installs, experiments, abandoned apps, and random tooling, it was clearly overdue for some TLC. I wasn’t looking to wipe my machine or reinstall macOS. I just wanted to clean house properly without blowing everything up.</p>

<p>That’s where Mole <em>really</em> shines.</p>

<p>Its real value, at least for me, was <strong>truly uninstalling apps</strong> and aggressively cleaning caches, logs, and leftover system junk. Not just removing an app bundle, but all the hidden remnants that quietly pile up over time. On my first run, Mole reclaimed <strong>over 70GB of cached data</strong>—which immediately justified the install. (Why was my Slack cache over 500MB??)</p>

<p>Mole is multiple utilities rolled into one: deep cleaning, a smart uninstaller, disk usage insights, and live system monitoring—all in a single, lightweight binary.</p>

<p>No bloat. No upsell. No account creation. Just a free tool that does its job well and gets out of the way.</p>

<p>Software like this reminds me how good small, focused utilities can be. Huge shout-out to <a href="https://github.com/tw93">tw93</a>—I’ll definitely be keeping Mole in my core Homebrew installs.</p>

				]]>
			</description>
			<pubDate>Sun, 18 Jan 2026 12:48:46 EST</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/mole_the_utility_i_didnt_know_my_mac_needed.html</guid>
		</item>
	
		<item>
			<title>Functional Programming Meets Dependency Injection in Express.js</title>
			<link>http://RyanNickel.com/html/functional_programming_meets_dependency_injection_in_express.js.html</link>
			<description>
				<![CDATA[
					<p>Recently, I’ve been exploring some core functional programming (FP) principles and experimenting with how to incorporate them into my daily development workflow. One area where FP really shines is in how it pairs with Express.js—especially when it comes to structuring dependency injection (DI).</p>

<p>DI is a familiar pattern used to keep application code modular, clean, and maintainable. In the Node.js + Express ecosystem, you’ll often come across tutorials where services, database clients, and configuration are wired directly into the app. That works, but there’s a simpler and more flexible alternative using FP principles like composition and higher-order functions.</p>

<h2>What is Dependency Injection (DI)?</h2>

<p>In a typical Express.js app, services, database clients, and config values are commonly wired up like this:</p>

<pre><code>const express = require('express');
const app = express();
const dbClient = require('./dbClient');
const userService = require('./userService')(dbClient);

app.get('/users', async (req, res) =&gt; {
    const users = await userService.getUsers();
    res.json(users);
});

app.listen(3000, () =&gt; console.log('Server running on port 3000'));
</code></pre>

<p>While this approach is functional and easy to follow, it has a few downsides:</p>

<ul>
<li><code>userService</code> is tightly coupled to <code>dbClient</code>, making testing and future changes harder.</li>
<li>The app logic is directly tied to how the services are instantiated, which reduces flexibility.</li>
</ul>

<h2>Bringing in Functional Programming</h2>

<p>Functional programming encourages passing dependencies <strong>explicitly</strong> via <strong>parameters</strong>. It focuses on <strong>pure functions</strong>, <strong>composition</strong>, and <strong>clarity</strong>. Here’s how that can play out in your Express.js app.</p>

<h2>Step 1: Create Pure Business Logic Functions</h2>

<p>Instead of defining <code>userService</code> as a class or an immediately invoked function, use a factory function:</p>

<pre><code>const createUserService = (db) =&gt; ({
    getUsers: async () =&gt; {
        return db.query('SELECT * FROM users');
    }
});
</code></pre>

<p>This keeps your business logic clean and makes it easy to pass in any kind of <code>db</code> — a real client in production or a mock in tests.</p>
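<p>To see the testability payoff, here is a minimal sketch of swapping in a mock. The <code>mockDb</code> object is purely hypothetical, standing in for whatever test double you prefer:</p>

```javascript
// The factory from above, repeated so this snippet runs standalone.
const createUserService = (db) => ({
    getUsers: async () => {
        return db.query('SELECT * FROM users');
    }
});

// Hypothetical in-memory stand-in for a real database client.
const mockDb = {
    query: async () => [{ id: 1, name: 'Ada' }]
};

// No framework, no container: injection is just a function call.
const userService = createUserService(mockDb);

userService.getUsers().then((users) => {
    console.log(users.length); // logs 1
});
```

<p>The production wiring is identical — the only difference is which <code>db</code> object you hand the factory.</p>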

<h2>Step 2: Compose Routes Using Injected Dependencies</h2>

<p>Route handlers can be built in a similarly modular way:</p>

<pre><code>const createUserRoutes = (userService) =&gt; {
    const router = require('express').Router();

    router.get('/users', async (req, res) =&gt; {
        const users = await userService.getUsers();
        res.json(users);
    });

    return router;
};
</code></pre>

<p>Now your routing layer doesn’t know or care how the service is built — it just uses what it’s given.</p>

<h2>Step 3: Assemble the Application</h2>

<p>Finally, you bring everything together in the main application file:</p>

<pre><code>const express = require('express');
const dbClient = require('./dbClient');
const createUserService = require('./userService');
const createUserRoutes = require('./userRoutes');

const app = express();

const userService = createUserService(dbClient);
const userRoutes = createUserRoutes(userService);

app.use('/api', userRoutes);

app.listen(3000, () =&gt; console.log('Server up on 3000'));
</code></pre>

<p>This approach keeps your main file focused on configuration and composition.</p>

<h2>Why This Approach Works Well</h2>

<p>Adopting functional programming for DI offers several benefits:</p>

<ol>
<li><strong>Improved Testability</strong> – Easily swap in mock implementations without rewriting core logic.</li>
<li><strong>Greater Modularity</strong> – Each component is self-contained and can be developed in isolation.</li>
<li><strong>Enhanced Reusability</strong> – Functions and services can be reused across routes or even projects.</li>
<li><strong>Simplified Maintenance</strong> – With clear dependency boundaries, the code is easier to understand and refactor.</li>
<li><strong>Reduced Dependency Overhead</strong> – There’s no need to rely on an external DI framework. Let’s face it: npm packages can be black holes—pulling in way more than expected.</li>
</ol>

<p>You don’t need a complex setup or heavy abstraction to apply DI effectively in Express.js. With just a few functional programming patterns, you can keep your app lightweight, testable, and easy to evolve.</p>

<p>If you&rsquo;re looking to simplify your architecture while staying flexible, this approach is definitely worth trying out.</p>

				]]>
			</description>
			<pubDate>Wed, 26 Mar 2025 12:45:14 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/functional_programming_meets_dependency_injection_in_express.js.html</guid>
		</item>
	
		<item>
			<title>The One True Code Value - Why Maintainability is the Unsung Hero of Programming</title>
			<link>http://RyanNickel.com/html/the_one_true_code_value_-_why_maintainability_is_the_unsung_hero_of_programming.html</link>
			<description>
				<![CDATA[
					<p>There are two kinds of programmers in this world: those who love writing brand-new code (fresh, shiny, and untouched by mortal hands), and those who have gazed into the horrifying abyss of code maintenance – and survived. You know where I stand. Give me maintainability any day. Because nothing says “programming value” quite like writing code you can still understand a year later.</p>

<p>Let’s be honest: the thrill of solving problems and shipping features tends to make us a bit reckless. We’ve all experienced the coding flow state – that zen place where you’re cranking out solutions, piling up commits like there’s no tomorrow. It’s exciting! But tomorrow does come, and it’s not as kind as you might hope. That’s when maintainability steps in. Think of it as a sort of wisdom, gently whispering, &ldquo;Future You will be the one cleaning up this mess. Be kind to Future You.&rdquo;</p>

<h2>A Love Letter to Maintainability</h2>

<p>First, let’s define this beautiful concept. Maintainability isn’t just about making things “easier to fix later.” It’s about creating code that your teammates can read without taking headache medicine, code that will remain functional and reliable under the stress of future updates, code that won’t spontaneously combust just because the product team had an epiphany at 3 a.m.</p>

<p>There’s an underrated elegance to maintainable code. It’s code that fits together neatly, like a well-organized sock drawer. You’re not just throwing things in randomly, but folding, sorting, and arranging with intention. The goal? To make sure that every future visit to that code is painless. Blissful, even.</p>

<h2>It’s All Fun and Games Until Someone Says, “Let’s Add a New Feature”</h2>

<p>The real test of code isn’t whether it works today but whether it’ll still work – and be adaptable – tomorrow. Picture this: you’re deep into a project, your code is humming along nicely, and the product team suddenly decides to add a new feature. If your code is a spaghetti nest of confusing logic, scattered files, and function names like <code>doStuff123</code>, then I’m afraid you’re in for a long night. Adding anything to a mess is just going to make a bigger mess.</p>

<p>But if your code is maintainable, your response is different. Instead of a sinking feeling, you get to say, “Sure! That’s an easy tweak.” Because your functions are readable, your components are organized, and your logic makes sense. That’s the power of maintainability: it’s a gift you give yourself, wrapped in clean, readable code and a bow of future-proofing.</p>

<h2>The Perks of Maintainable Code</h2>

<p>So, what exactly makes maintainable code so precious? Here are some benefits that make it worth prioritizing above all else (yes, even above &ldquo;It Works™&rdquo;):</p>

<p><strong>Readability</strong> – Imagine writing code that makes sense even to someone who didn’t write it. Cue gasps of amazement. Well-structured, maintainable code is like a well-written novel: anyone can jump in, understand the plot, and see the purpose behind each character (er, function).</p>

<p><strong>Adaptability</strong> – Remember that feature request we just talked about? Maintainable code makes changes feel like a gentle adjustment rather than a Herculean task. When your functions are modular, your variables are meaningfully named, and your logic flows naturally, new features fit right in.</p>

<p><strong>Collaboration</strong> – Writing code isn’t a solo act. You’re a part of a team, and nothing says “team player” like writing code that someone else can actually understand. After all, there’s no better feeling than receiving a Slack message from a teammate that reads, “Your code was so clear – adding that feature was a breeze!”</p>

<p><strong>Easier Debugging</strong> – Bugs are inevitable, but debugging doesn’t have to be a nightmare. Maintainable code means that, when things break, they do so in predictable ways. Debugging is like following a trail of breadcrumbs through the forest instead of hacking your way through dense underbrush with a dull machete.</p>

<p><strong>Longevity</strong> – Code that’s maintainable can withstand the test of time. It’s robust, resilient, and – dare I say it – timeless. You might move on to another project or another company, but your maintainable code will be a gift to those who come after you, and they will sing your praises in hushed, reverent tones. (Okay, maybe not, but they will be grateful.)</p>

<h2>The Art of Maintainability: A Few Tips</h2>

<p>Achieving maintainable code isn’t rocket science, but it does take a little <strong>intentionality</strong>. Here are some practical ways to start putting maintainability front and center in your codebase:</p>

<p><strong>Name Things Well</strong> – Clear, descriptive names are your best friends. A function called <code>calculateTotalInvoiceAmount</code> beats <code>calc1</code> any day. Don’t force Future You (or anyone else) to decipher what you were thinking.</p>

<p><strong>Keep It DRY (Don’t Repeat Yourself)</strong> – Write reusable functions and avoid copying and pasting code blocks. Not only is DRY code easier to maintain, but it’s also less error-prone.</p>

<p><strong>Comment with Purpose</strong> – A well-placed comment can be a lifesaver, but too many comments can muddy the waters. Focus on explaining why you did something, not what you did.</p>

<p><strong>Write Tests</strong> – Tests are the unsung heroes of maintainable code. They give you confidence that changes won’t accidentally break everything.</p>

<p><strong>Refactor Relentlessly</strong> – Maintainability isn’t something you achieve once and then forget. Revisiting and refining code is essential for long-term maintainability.</p>

<h2>Embrace the Boring Virtue of Maintainability</h2>

<p>In a world that loves the new, the flashy, and the cutting-edge, maintainability might seem like the shy kid at the back of the class. But here’s the truth: the shiniest, most performant code in the world is useless if no one can work with it six months down the line.</p>

<p>So next time you’re coding, embrace the humbling virtue of maintainability. Write code that’s clear, organized, and prepared for whatever the future holds.</p>

				]]>
			</description>
			<pubDate>Sun, 03 Nov 2024 08:51:50 EST</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/the_one_true_code_value_-_why_maintainability_is_the_unsung_hero_of_programming.html</guid>
		</item>
	
		<item>
			<title>How NPM's Defaults Set You Up for Failure</title>
			<link>http://RyanNickel.com/html/how_npm39s_defaults_set_you_up_for_failure.html</link>
			<description>
				<![CDATA[
					<p>In software development, consistency and stability are critical — especially when managing dependencies. However, NPM&rsquo;s default behavior when installing packages—using the caret (<code>^</code>) in <code>package.json</code> — creates a fragile environment that often leads to frustrating issues for developers. Idempotent and reproducible dependency management should be the standard experience by default, but NPM&rsquo;s use of <code>^</code> undermines this goal.</p>

<p>Here&rsquo;s why it&rsquo;s time to rethink the default use of caret versioning in NPM, and how it sets developers up for future pain, particularly given the complexities of semantic versioning (semver).</p>

<h2>Why ^ Versioning Isn’t a Safe Default</h2>

<p>The caret (<code>^</code>) prefix in NPM allows minor and patch version updates automatically. For example, if you run:</p>

<pre><code>npm install lodash
</code></pre>

<p>NPM adds the following entry to your <code>package.json</code>:</p>

<pre><code>&quot;lodash&quot;: &quot;^4.17.21&quot;
</code></pre>

<p>This means that any future installation can upgrade the <code>lodash</code> dependency to any version at or above <code>4.17.21</code> and below <code>5.0.0</code>. While this aims to provide developers with bug fixes and new features without manual intervention, it introduces significant risks: <strong>unexpected changes</strong>, <strong>hidden regressions</strong>, and <strong>version incompatibilities</strong> in complex projects.</p>
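<p>To make the range concrete, here is a toy sketch of the caret rule. This is illustrative only — it is not npm&rsquo;s actual resolver, and it ignores the special handling of <code>0.x</code> versions and prerelease tags:</p>

```javascript
// Toy illustration: for a base version X.Y.Z with X > 0, the caret
// range ^X.Y.Z accepts any version with the same major component
// that is greater than or equal to X.Y.Z.
const satisfiesCaret = (version, base) => {
    const [vMaj, vMin, vPat] = version.split('.').map(Number);
    const [bMaj, bMin, bPat] = base.split('.').map(Number);
    if (vMaj !== bMaj) return false;       // major bumps are excluded
    if (vMin !== bMin) return vMin > bMin; // any newer minor is allowed
    return vPat >= bPat;                   // same minor: need >= patch
};

console.log(satisfiesCaret('4.17.21', '4.17.21')); // true
console.log(satisfiesCaret('4.18.0', '4.17.21'));  // true  (a silent minor bump)
console.log(satisfiesCaret('5.0.0', '4.17.21'));   // false (major is excluded)
```

<p>That middle line is the whole problem: <code>4.18.0</code> satisfies the range, so it can arrive on a teammate&rsquo;s machine without anyone asking for it.</p>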

<h2>1. Semver is Hard—and NPM Makes it Harder</h2>

<p>Semantic versioning (semver) is designed to make versioning predictable: major versions introduce breaking changes, minor versions add features, and patches fix bugs. However, not all packages in the JavaScript ecosystem strictly adhere to semver rules. Accidental breaking changes are common, even in minor or patch versions.</p>

<p>By defaulting to <code>^</code>, NPM shifts the burden of semver compliance onto developers, who must constantly monitor updates to avoid potential breakages. This behavior creates unpredictable outcomes that often surface in production or CI pipelines. <strong>NPM’s default makes the promise of semver nearly impossible to uphold</strong>, resulting in wasted time troubleshooting issues that stem from unintended updates.</p>

<h2>2. A Reproducible Developer Experience Should Be the Default</h2>

<p>An idempotent development environment ensures that every code checkout and dependency installation yields the same behavior, regardless of when or where it happens. This level of consistency is essential for stable CI/CD pipelines, reliable production releases, and seamless collaboration across teams.</p>

<p>Other ecosystems, such as <a href="https://go.dev/">Go</a> and <a href="https://www.nuget.org/">NuGet</a>, use exact versioning to prevent dependency drift. In Go, the <code>go.mod</code> file locks dependencies to specific versions, ensuring builds are reproducible every time. NuGet favors deterministic package management for similar reasons. These approaches prioritize stability by ensuring nothing changes unless a developer explicitly updates a version.</p>

<p>In contrast, NPM&rsquo;s default caret (<code>^</code>) versioning can lead to differing versions on different machines. Reproducing bugs becomes a nightmare when dependencies shift between development, CI, and production environments, making stability difficult to maintain.</p>

<h2>3. ^ is a Time Bomb Waiting to Go Off</h2>

<p>The flexibility provided by <code>^</code> introduces risks that compound over time. Here’s why:</p>

<ol>
<li><strong>Invisible Updates</strong>: Dependencies can be updated silently, resulting in unexpected issues.</li>
<li><strong>Version Conflicts</strong>: As the dependency tree grows, conflicts arise between direct and transitive dependencies.</li>
<li><strong>CI Pipeline Failures</strong>: Even if code works locally, a new patch version of a dependency may cause unexpected CI build failures.</li>
</ol>

<p>These issues could be avoided by making exact versioning the default. Developers should have control over when to upgrade dependencies, whether through planned reviews, automated tools like Dependabot, or manually tested patch releases.</p>

<h2>4. The Case for <code>--save-exact</code> as the Default</h2>

<p>NPM should default to <code>--save-exact</code> behavior, where every dependency is locked to the precise version installed. This would look like:</p>

<pre><code>&quot;lodash&quot;: &quot;4.17.21&quot;
</code></pre>

<p>This ensures that the versions used during development are identical across all environments. Developers can still upgrade dependencies intentionally, but only when they are ready to manage the risks. This approach reduces reactive debugging and ensures a <strong>reproducible</strong> development experience by default.</p>

<h2>How to Implement an Idempotent NPM Workflow</h2>

<p>If NPM doesn’t change its default behavior, you can take these steps to enforce stability in your projects:</p>

<ol>
<li><strong>Use <code>npm install --save-exact</code></strong>, and make it the default:
<code>npm config set save-exact true</code></li>
<li><strong>Leverage <code>npm ci</code> for CI pipelines</strong>. This ensures only the versions listed in <code>package-lock.json</code> are installed, preventing unexpected updates.</li>
<li><strong>Use automated tools to manage updates</strong>. Tools like Dependabot can help you control upgrades in a systematic manner.</li>
</ol>

<h2>Conclusion</h2>

<p>NPM&rsquo;s default use of the caret (<code>^</code>) for versioning creates more problems than it solves. In an ideal world, semantic versioning would work flawlessly, but in reality, <strong>semver is hard</strong> — and NPM’s current behavior makes it even harder by encouraging version drift and unpredictable builds. A reproducible, idempotent developer experience should be the default, not the exception.</p>

<p>By shifting to <code>--save-exact</code>, NPM would align with practices of more stable ecosystems like Go and NuGet. This change would empower developers to upgrade dependencies on their own terms, reducing surprises and enabling smoother workflows. The minor inconvenience of managing updates manually is a small price to pay for long-term stability and consistency.</p>

<p>It’s time for NPM to rethink its defaults and prioritize stability—because predictable software is <strong>better software</strong>.</p>

				]]>
			</description>
			<pubDate>Tue, 22 Oct 2024 19:06:32 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/how_npm39s_defaults_set_you_up_for_failure.html</guid>
		</item>
	
		<item>
			<title>My workflow for managing documents with Paperless NGX</title>
			<link>http://RyanNickel.com/html/my_workflow_for_managing_documents_with_paperless_ngx.html</link>
			<description>
				<![CDATA[
					<p>Managing a pile of documents can be a real headache, can’t it? I used to have all these papers stuffed into folders, always mixing things up. Was my business tax return in the “Tax Returns” folder or jammed in the “Business” one? And good luck finding anything in that mess!</p>

<h2>Enter the Digital Hero: Paperless-NGX</h2>

<p>Then, I decided enough was enough and started scanning all these documents to keep them in <a href="https://docs.paperless-ngx.com/">Paperless</a>. It was cool, but when I upgraded to Paperless-NGX, things got a whole lot cooler. Why? One word: <a href="https://docs.paperless-ngx.com/advanced_usage/#archive-serial-number-assignment">ASN (archive serial number) labels</a>. Sounds fancy, right? It basically means I slap a number on my paper documents, scan them, and bam – I can search for them in Paperless-NGX, then track down the physical copy by its number.</p>

<h2>The Magic of ASN Labels</h2>

<p>So, I stumbled upon this nifty tool by <a href="https://tobiasmaier.info/asn-qr-code-label-generator/">Tobias L. Maier</a> that lets me print out these ASN labels right at home. But oh boy, did I go through a ton of label sheets trying to get the printer settings right. Eventually, I found just the right printer settings so I wouldn’t waste more labels. It was a game-changer. (I have a fork of the tool with my notes <a href="https://ryannickel.com/asn-qr-code-label-generator/">here</a>)</p>

<h2>My New Workflow</h2>

<p>Now, here’s how I roll with my documents:</p>

<ol>
<li>Slap a Label: Every new document gets the next number in line. No guessing games.</li>
<li>Scan and Upload: I scan the doc, upload it to Paperless-NGX, and it’s in the system.</li>
<li>Physical Filing: Then, the paper version just goes on top of my physical pile. Easy to find with its number if I ever need it again.</li>
</ol>
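<p>The labeling step above boils down to handing out monotonically increasing numbers. Here’s a minimal Python sketch of generating the next label text – note that the <code>ASN</code> prefix and zero-padding width are assumptions that have to match whatever your label generator actually prints:</p>

```python
def next_asn_label(last_used: int, width: int = 5) -> str:
    """Return the label text for the next archive serial number.

    Assumes labels use an "ASN" prefix followed by a zero-padded
    number (e.g. ASN00043), the convention used by common Paperless-NGX
    label generators. Adjust the prefix/width to match your labels.
    """
    return f"ASN{last_used + 1:0{width}d}"

# If the last document I filed was ASN00042, the next one gets:
print(next_asn_label(42))  # ASN00043
```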

<p>This setup has made my life a whole lot easier. No more digging through folders or wondering where I put that one bill. It’s all there, a search away in Paperless-NGX.</p>

<h2>Wrapping It Up: Living the Organized Life</h2>

<p>Switching from a chaotic pile of folders to a neat, numbered system with Paperless-NGX felt like a huge win. If you’re tired of playing hide and seek with your documents, give this method a shot. It’s pretty laid back once you get the hang of it, and you’ll wonder how you ever managed without it. Plus, you get to feel a bit like a librarian with your very own indexing system. Who doesn’t love that?</p>

				]]>
			</description>
			<pubDate>Sat, 06 Apr 2024 11:03:50 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/my_workflow_for_managing_documents_with_paperless_ngx.html</guid>
		</item>
	
		<item>
			<title>Fall Or Dodge in Hell - A Promising Start That Got Lost in the Digital Abyss</title>
			<link>http://RyanNickel.com/html/fall_or_dodge_in_hell_-_a_promising_start_that_got_lost_in_the_digital_abyss.html</link>
			<description>
				<![CDATA[
					<p>So, I recently dove into Neal Stephenson&rsquo;s &ldquo;Fall; Or, Dodge in Hell,&rdquo; expecting an epic adventure. Well, let me tell you, it didn&rsquo;t quite deliver. Clocking in at a whopping 900 pages, you&rsquo;d think there&rsquo;d be enough room for some serious action. But nope, not much happens for the most part. I mean, we get bombarded with pages upon pages of philosophical musings, theological ramblings, and political banter. Some of it relates to the story, but a lot of it feels like it&rsquo;s just there to fill up space. It&rsquo;s like Stephenson took a philosophy/theology textbook and tried to disguise it as a novel.</p>

<p>The story? Yeah, it&rsquo;s in there somewhere, but good luck finding it. It keeps getting swallowed up by this jungle of words. And let me tell you, it&rsquo;s not even that great of a story. In fact, it could&rsquo;ve been told in a fraction of the pages. But no, we&rsquo;re treated to this never-ending maze of verbiage instead.</p>

<p>The writing itself is a bit odd, especially when we venture into the simulated &ldquo;souls&rdquo; in the virtual world. Those characters speak in this archaic, overdone high fantasy language that feels totally out of place. And get this&mdash;they draw inspiration from Norse, Greek, and Judeo-Christian mythology, which they just assume the reader understands. I mean, cool, but it&rsquo;s done in such an obvious and random way that it just doesn&rsquo;t make sense. The virtual world, which should be the heart of the book, ends up feeling forced and unconvincing. It&rsquo;s like, come on, give us something to believe in!</p>

<p>Towards the end, we spend almost all our time in the virtual world, and guess what? It turns into a typical, run-of-the-mill Quest fantasy with a weak and predictable ending. Talk about a letdown!</p>

<p>Overall, I hate to say it, but it feels like Stephenson wasn&rsquo;t giving it his all. It&rsquo;s like he was going through the motions, throwing together a mishmash of parts that don&rsquo;t fit well. It&rsquo;s a shame because we all know he can be a fantastic writer when he wants to be. Unfortunately, &ldquo;Fall; Or, Dodge in Hell&rdquo; isn&rsquo;t terrible, but it sure isn&rsquo;t a masterpiece either.</p>

<p>Individually, the &ldquo;parts&rdquo; offered interesting premises that could each have been a story unto themselves: the hoaxes propagated by the internet and then believed by certain demographics, the idea of <strong>Ameristan</strong>, the idea of &ldquo;souls&rdquo; living in a digital world. All very interesting, yet I was left wanting more.</p>

<p>Rating this one? Well, I&rsquo;d say it&rsquo;s a solid 2.5 out of 5 stars.</p>

				]]>
			</description>
			<pubDate>Wed, 31 May 2023 12:34:00 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/fall_or_dodge_in_hell_-_a_promising_start_that_got_lost_in_the_digital_abyss.html</guid>
		</item>
	
		<item>
<title>Setting up my own @ryannickel.com domain for Mastodon</title>
			<link>http://RyanNickel.com/html/setting_up_my_own_ryannickel.com_domain_for_mastadon.html</link>
			<description>
				<![CDATA[
					<p>If you&rsquo;re using Mastodon or other decentralized social networks, you might be looking for ways to improve your discoverability. One technique that&rsquo;s gained popularity is using the webfinger protocol to create a discoverable profile for your domain.</p>

<p><a href="https://www.hanselman.com">Scott Hanselman</a> wrote a <a href="https://www.hanselman.com/blog/use-your-own-user-domain-for-mastodon-discoverability-with-the-webfinger-protocol-without-hosting-a-server">blog post</a> about how to set this up, and it&rsquo;s a pretty easy process. Basically, you just need to create a webfinger file for your domain and add it to your site.</p>

<p>The webfinger file is a JSON object that contains information about you or your service, like your email address or URL. You can also include links to your social media profiles or a brief description of what you&rsquo;re all about.</p>
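<p>For example, a minimal webfinger document for a Mastodon account might look like the sketch below. The handle and instance URLs here are placeholders &ndash; in practice you&rsquo;d copy the real output from your own instance&rsquo;s <code>/.well-known/webfinger</code> endpoint rather than hand-write it:</p>

```json
{
  "subject": "acct:you@yourdomain.com",
  "aliases": [
    "https://mastodon.example/@you"
  ],
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://mastodon.example/users/you"
    }
  ]
}
```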

<p>In my case, I&rsquo;m running this site on GitHub Pages, and the tricky part is getting the webfinger file accessible from the well-known URL on your domain. By default, GitHub Pages doesn&rsquo;t serve up files or directories that start with a <code>.</code>. But fear not, my friend! You can use the <code>_config.yml</code> file to override this behavior and include the <code>.well-known</code> directory in your site.</p>

<p>All you need to do is add the following line to your <code>_config.yml</code> file:</p>

<p>(for a full list of overrides see <a href="https://jekyllrb.com/docs/configuration/options/">Jekyll&rsquo;s Configuration Options</a> documentation)</p>

<pre><code>include: ['.well-known']
</code></pre>

<p>That tells Jekyll to include the <code>.well-known</code> directory in your site&rsquo;s output, which means your webfinger file should now be accessible at <a href="https://yourdomain.com/.well-known/webfinger">https://yourdomain.com/.well-known/webfinger</a>.</p>

<p>Once you&rsquo;ve got your webfinger file set up, Mastodon and other decentralized services should be able to discover information about you and your domain by performing a webfinger lookup. This can help others find and connect with you, which is always a good thing!</p>

<p>So there you have it, folks. Adding a webfinger file to your GitHub Pages site is an easy way to improve your discoverability on decentralized social networks. Follow the steps outlined above and start making some new connections today!</p>

				]]>
			</description>
			<pubDate>Tue, 18 Apr 2023 15:51:13 EDT</pubDate>
			<guid isPermaLink="false">http://RyanNickel.com/html/setting_up_my_own_ryannickel.com_domain_for_mastadon.html</guid>
		</item>
	

</channel>
</rss>