April 14, 2026
AI code review sounds great in theory. In practice, most teams just end up with more noise in their merge requests. In this article, I share the 3 steps that made AI code review actually useful for my team, at a reasonable price (under $1.50 per review).
April 8, 2026
AI made coding faster. But faster code without a fast feedback loop is just faster chaos. If we can code faster, we need everything around it to keep up: pipelines, testing, deployments, and monitoring. In this article, we break down a Python development workflow designed for shipping reliably in the AI era. Four pipelines. Three environments. Small MRs you're willing to throw away. Deployments that are non-events. This is the first in a series where we'll dig into each step - from automated pipelines to feature flags to monitoring that catches issues before your customers do.
March 31, 2026
Most Python test suites have the same problem: they cost more than they save. CI jobs that take forever, tests that break on every refactor, flaky pipelines you keep restarting. After years of writing (and deleting) tests, I've landed on 7 qualities that separate tests worth keeping from tests that just slow you down.
Jan. 7, 2026
A utility for mocking Python's requests library in tests, helping avoid real network traffic during testing.
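The utility itself isn't named here, but the idea it relies on can be sketched with the standard library's unittest.mock: replace the object that performs the HTTP call so no packet ever leaves the machine. The fetch_user function and the URL below are illustrative, not from the article.

```python
from unittest.mock import Mock

def fetch_user(http_get, user_id):
    """Hypothetical function under test; http_get stands in for requests.get."""
    response = http_get(f"https://api.example.com/users/{user_id}")
    return response.json()

# In a test, a Mock replaces the network call entirely.
fake_response = Mock()
fake_response.json.return_value = {"id": 42, "name": "Ada"}
mocked_get = Mock(return_value=fake_response)

user = fetch_user(mocked_get, 42)

assert user == {"id": 42, "name": "Ada"}
mocked_get.assert_called_once_with("https://api.example.com/users/42")
```

The test stays fast and deterministic, and the final assertion verifies the exact URL that would have been requested.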
Nov. 27, 2025
A library enabling developers to freeze time in tests for reliable validation of subscription status, trial eligibility, and date calculations.
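Without assuming which library the article covers, the pattern it enables can be sketched with stdlib tools alone: route every "what date is it?" question through one seam, then pin that seam in tests. The Clock class and trial_active check below are illustrative.

```python
from datetime import date, timedelta
from unittest.mock import patch

class Clock:
    """Seam for the current date; tests patch this instead of datetime."""
    @staticmethod
    def today():
        return date.today()

def trial_active(signup_date, trial_days=14):
    """Hypothetical subscription check: is the free trial still running?"""
    return Clock.today() <= signup_date + timedelta(days=trial_days)

# Freeze the clock so the test gives the same answer every day it runs.
with patch.object(Clock, "today", return_value=date(2025, 11, 1)):
    assert trial_active(date(2025, 10, 25)) is True   # day 7 of a 14-day trial
    assert trial_active(date(2025, 10, 1)) is False   # trial ended Oct 15
```

Without the frozen clock, the second assertion would flip from passing to failing depending on the day the suite happens to run.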
Oct. 14, 2025
Techniques for making the same mocked method behave differently across successive calls.
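In unittest.mock, this is what side_effect with a list does: each call consumes the next item in order, and an exception instance in the list is raised instead of returned. The retry helper below is a hypothetical example, not code from the article.

```python
from unittest.mock import Mock

# First call raises, second call succeeds: one item per invocation.
flaky_fetch = Mock(side_effect=[TimeoutError("try again"), {"status": "ok"}])

def fetch_with_retry(fetch, attempts=2):
    """Hypothetical retry helper used to exercise the per-call behavior."""
    for _ in range(attempts):
        try:
            return fetch()
        except TimeoutError:
            continue
    raise RuntimeError("all attempts failed")

assert fetch_with_retry(flaky_fetch) == {"status": "ok"}
assert flaky_fetch.call_count == 2
```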
Sept. 16, 2025
Methods for improving test failure clarity through custom error messaging.
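The simplest form of this is the message argument of a plain assert, which pytest prints alongside the rewritten expression when the test fails. The function and values below are illustrative.

```python
def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

def test_discount():
    result = apply_discount(200.0, 15)
    expected = 170.0
    # On failure, the message explains intent instead of just "170.0 == 171.0".
    assert result == expected, (
        f"15% off 200.0 should be {expected}, got {result}; "
        "check the percent-to-fraction conversion"
    )

test_discount()
```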
April 3, 2025
Guidance on maintaining readability when using pytest parametrization with numerous examples.
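One readability tool from pytest itself is pytest.param with an explicit id, which names each generated test case; a failure then reads as "test_slugify[mixed-case]" rather than a dump of the raw arguments. The slugify function is a hypothetical example.

```python
import pytest

def slugify(title):
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

@pytest.mark.parametrize(
    ("title", "expected"),
    [
        pytest.param("Hello World", "hello-world", id="two-words"),
        pytest.param("PyTest Tips", "pytest-tips", id="mixed-case"),
        pytest.param("  padded  ", "padded", id="extra-whitespace"),
    ],
)
def test_slugify(title, expected):
    assert slugify(title) == expected
```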
March 13, 2025
Using create_autospec to guarantee proper mock object usage in automated tests.
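create_autospec (from unittest.mock) copies the real function's signature onto the mock, so a call the real function would reject fails in the test too. A minimal sketch with a hypothetical send_email collaborator:

```python
from unittest.mock import create_autospec

def send_email(to, subject, body):
    """Hypothetical collaborator a test wants to replace."""
    raise RuntimeError("real email sending, never in tests")

mock_send = create_autospec(send_email)
mock_send("a@example.com", "Hi", "Welcome!")  # matches the signature: fine
mock_send.assert_called_once_with("a@example.com", "Hi", "Welcome!")

try:
    mock_send("a@example.com", "Hi")  # missing 'body'
    raise AssertionError("autospec should have rejected this call")
except TypeError:
    pass  # signature mismatch caught, exactly as with the real function
```

A plain Mock would silently accept the bad call, letting a refactor of send_email's signature break production while every test stays green.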
Feb. 5, 2025
Leveraging pytest parametrization to test multiple scenarios efficiently.
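The core move: one decorated test function plus a table of cases, and pytest runs each row as its own test. is_leap_year is a hypothetical function under test, not code from the article.

```python
import pytest

def is_leap_year(year):
    """Hypothetical function under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

@pytest.mark.parametrize(
    ("year", "expected"),
    [
        (2024, True),   # divisible by 4
        (2023, False),  # ordinary year
        (1900, False),  # century, not divisible by 400
        (2000, True),   # divisible by 400
    ],
)
def test_is_leap_year(year, expected):
    assert is_leap_year(year) is expected
```

Four scenarios, one function body: adding a fifth case is a one-line change instead of a new copy-pasted test.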