cosmastech: Thoughts and tutorials on open-source with a focus on PHP/Laravel, by Luke Kuzmish
Feed generated by Jekyll on 2026-02-11 (https://cosmastech.com/feed.xml)

JSON Decoding HTTP Responses
2026-01-20 | https://cosmastech.com/2026/01/20/json-decoding-responses

With the release of Laravel 12.48.0, you can now specify which flags you want to use when decoding an HTTP response from the HTTP client.

$response = Http::get('https://cosmastech.com'); // a URL that returns HTML
$json = $response->json();

What is the value of $json here? You may be surprised to find out that it’s null. Why? Because by default, that’s how PHP treats invalid JSON passed to json_decode(). If you’re just now learning this esoteric fact, don’t feel bad. Most devs probably come to learn this tidbit when their code breaks in production.

If you want to check if json_decode() returned null because the input was 'null' or because there was an error, you can use the json_last_error() function.
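A plain-PHP example shows the distinction:

```php
<?php

// 'null' is valid JSON that decodes to null...
$result = json_decode('null');
var_dump($result, json_last_error() === JSON_ERROR_NONE); // NULL, bool(true)

// ...while invalid input also yields null, but sets an error code
$result = json_decode('{not json}');
var_dump($result, json_last_error() === JSON_ERROR_NONE); // NULL, bool(false)
echo json_last_error_msg(); // "Syntax error"
```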

That’s pretty cumbersome. Thankfully, PHP 7.3 added the ability to pass JSON_THROW_ON_ERROR as a flag to json_decode(), which raises an exception when given an invalid JSON string to parse.

$str = '{"HELLO": xyz}'; // Invalid JSON
json_decode($str, flags: JSON_THROW_ON_ERROR); // throws a \JsonException

How to Use In Laravel

Laravel now offers the ability to decode JSON from an HTTP response, specifying any of the standard decoding flags.

$response = Http::get('https://cosmastech.com');
try {
    $json = $response->json(flags: JSON_THROW_ON_ERROR);
} catch (\JsonException $e) {
    // Perfect, we KNOW we have invalid JSON
}

You can pass these flags to a few other Response methods as well: collect(), object(), and fluent().

Http::fake([
    '*' => '{
        "hello": "world",
        "big_int": 123343343580999843483023
    }'
]);
$response = Http::get('https://laravel.com');
$obj = $response->object(
    flags: JSON_BIGINT_AS_STRING
);
var_dump($obj->big_int); // "123343343580999843483023"

In this example, you’ll notice we passed the JSON_BIGINT_AS_STRING flag. By default, when json_decode() encounters an integer that exceeds PHP_INT_MAX (9223372036854775807 on 64-bit systems), it converts the integer to a float, losing precision. With this flag, the value is parsed as a string instead.
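A plain-PHP comparison makes the difference visible:

```php
<?php

$json = '{"big_int": 123343343580999843483023}';

// Default: the oversized integer overflows to a float, losing precision
$default = json_decode($json, associative: true);
var_dump(is_float($default['big_int'])); // bool(true)

// With the flag, the original digits are preserved as a string
$preserved = json_decode($json, associative: true, flags: JSON_BIGINT_AS_STRING);
var_dump($preserved['big_int']); // string(24) "123343343580999843483023"
```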

Application Defaults

You may wish to always decode your HTTP response JSON with certain flags. You can configure an application-wide default in your AppServiceProvider’s boot method:

use Illuminate\Http\Client\Response;

public function boot()
{
    Response::$defaultJsonDecodingFlags = JSON_BIGINT_AS_STRING | JSON_THROW_ON_ERROR;
}

In the above example, we don’t need to pass any flags when calling $response->json(); it will default to converting overflowed integers to strings and throwing an exception on invalid JSON.

If for some reason you want to override these defaults, just pass different flags at decoding time.

Http::get('https://cosmastech.com')->json(flags: 0); // use default json_decode behavior

Use Flags

Trying to remember to pass these flags at every call site is an arduous task. If you can, set sensible defaults for how JSON is decoded in your Laravel application. If not, add them to new call sites going forward. Your future self will thank you for the clarity.

Luke Kuzmish

Simplicity Matters
2026-01-16 | https://cosmastech.com/2026/01/16/simplicity-matters

Simplicity matters. It’s important for humans and our tiny brains. It’s important for AI Agents with their tiny context windows. It’s important for businesses that are fighting and clawing for market share as competitors clone their product in a span of months.

And simplicity is hard. It’s hard because it takes experience to discern between what is essential and what is just harder than it needs to be. Seeing our code through the eyes of a new developer is nearly impossible, so when you get a new person working on your project, their insight can be priceless and unparalleled.

Simplicity also takes discipline. Morgan Housel, in his book Same as Ever: Timeless Lessons on Risk, Opportunity and Living a Good Life, writes “Complexity gives a comforting impression of control, while simplicity is hard to distinguish from cluelessness.” We write long docs because a few simple phrases make us sound like we’re not earning our keep. It’s hard to admit that a lot of my work amounts to API endpoints and basic CRUD operations… I’m not writing video games or compilers. I’ve written complicated abstractions to keep my brain busy, to feel useful, to have something to do. I will tell you that it’s highly likely I was unaware at the time, but I can see in retrospect that I introduced complexity where it wasn’t totally necessary.

Sticking with a codebase, even for just a short while, it becomes apparent: complexity is a tax on change that is paid by new developers and veterans of the codebase. It’s paid sprint after sprint, project after project. What could be a one or two line change in production code turns into a sprawling exercise in jumping through interfaces, adding new methods, modifying DTOs. Then come the tests… my god the tests that have to change.

Inherent Complexity

A worthwhile software business sells a product that does something. If you’re a saleable business, you’re probably doing something interesting that has its own complexity baked in. I recently started a new job in a domain I have little experience in. Without even looking at code, there is time required just to understand what the business products do for its different types of users. Each tenant has a unique combination of settings and package add-ons, so their users and customers have an increasing number of experiences which our code must accommodate.

What’s the opposite of inherent complexity? It’s accidental complexity. It’s the abstractions we build for their own sake, the dead-code graveyards that no one deleted, the unit tests that confirm a function is called through a chain of mocks instead of asserting against observable behavior, choosing to hand-roll features that are available by your language or framework out of the box.

Our Tools

I am a little bit of a Laravel evangelist, it seems. Who knew? I have worked with Laravel for around 5 years, and I have come to intimately understand the framework and its internals. The selling point of a batteries-included, rapid application development web framework is you can get something that just works in less time than it takes to roll your own. It’s not reasonable to expect everyone who drives a car to be able to rebuild an engine or even explain how it works. Similarly, it’s not reasonable to expect everyone who uses Laravel to study the release notes. For Laravel in particular, there are often a dozen or more features added in each week’s release. Being a student of this can be daunting.

For those who do keep apprised of changes to their tools, they can see the tool often evolves to meet the needs of other developers. When the tools change, oftentimes it is to solve a problem that I may have too. One of the secrets about why I contribute so frequently to Laravel is because I have problems that I want solved at the first-party level. I don’t want to have to build hacky workarounds or extend and override classes to get the functionality I need. Let Taylor handle the maintenance. :laughing:

Use the Magic, but Understand the Magic

Frameworks like Laravel, Rails, and Django are sometimes disliked, whether for the opinions they strongly enforce or because they have too much magic. I cannot claim that Laravel doesn’t have lots of magic. I’m a curious person, and I don’t like magicians because I just want to know how they do the trick. But I’m also a software developer who can read PHP and has a step debugger; I can see exactly how the magician does the trick. None of us is too busy to get a baseline understanding of how our tools work.

There are big payoffs for understanding (at even a basic level) how things like queued jobs work end-to-end; models and relationships execute queries; and how the Container magically slots in your dependencies. Knowing little bits of the magic gave me that feeling of being in control, the same feeling of control I get if I were to roll it all by hand.

But You Don’t Have To Use All the Magic

There are plenty of things that I won’t use in Laravel because I recognize they don’t fit my use case. Hey, have I mentioned that I don’t like model events and observers? They cause indirection, don’t fire when inserting records in bulk, and are hard to discover. So I don’t use them for projects I work on.

But it’s important to note that some things which are difficult to use at scale work great for a single developer who has the entire project in their head. I encourage you to consider your working environment with an open mind, doing your best not to be emotionally attached to the code you have written.

For those who are interested, I like Models to function as:

  • a simple data holder with appropriate casts and maybe some “ask me about” methods (canReceiveStripePayments())
  • a way to query for related Models via relationships
  • an excellent query builder with convenient scopes
  • a novel way to think about my data access patterns
  • building blocks of integration tests through Factories

I don’t put a lot of functional logic in them. These Models don’t suffer from anemia, they thrive.
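As a rough sketch of what that looks like in practice (the class, columns, and method names here are hypothetical, not from a real project):

```php
use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\HasMany;

class User extends Model
{
    // a simple data holder with appropriate casts
    protected function casts(): array
    {
        return ['verified_at' => 'immutable_datetime'];
    }

    // an "ask me about" method
    public function canReceiveStripePayments(): bool
    {
        return $this->stripe_account_id !== null
            && $this->verified_at !== null;
    }

    // a way to query for related Models via relationships
    public function payments(): HasMany
    {
        return $this->hasMany(Payment::class);
    }

    // a convenient query scope: User::query()->verified()
    public function scopeVerified(Builder $query): void
    {
        $query->whereNotNull('verified_at');
    }
}
```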

Dogma versus Pragmatism

I have read Clean Code. I have implemented Clean Code. I have been sad.

Holding fast to rules in the face of a reality to the contrary is a recipe for complexity without payoff. We must be like trees in the wind, bending so we don’t snap. I had a mentor early on who, after me asking what the rule is for something, would tell me “you have to use your brain.” Being realistic about the size of a project or the makeup of my team bears better fruit than any book or blog post.

For instance, repositories. I have written them. They end up looking like this:

// Use the suffix -Entity because I know the technical distinction
// between an entity and a value object and it's VERY important
// you know that I know. (This is a true story.)
final readonly class UserEntity
{
    public function __construct(
        // private because TECHNICALLY no other developer
        // should be allowed to know our secret integer ID
        private int $id,
        public string $uuid,
        public string $firstName,
        public string $lastName,
        public EmailAddress $emailAddress,
    ) { }
    
    public function setEmail(EmailAddress $email): UserEntity
    {
        // this probably does something great, but everything must
        // be immutable, so I'll definitely want to clone this
        // object and return a new one
    }
    
    public function flush(): array
    {
        // Outbox pattern mentioned!
    }
}

final readonly class UserRepository implements UserRepositoryInterface
{
    public function get(string $uuid): ?UserEntity
    {
        return User::query()->firstWhere('uuid', $uuid)?->toEntity();
    }
}

They end up wrapping Eloquent functionality, but with worse developer experience. Each time I need to do a slightly different query, I end up writing a new repository method. If I add a property to a core entity, trust and believe, I’m now going to have to update UserEntity, UserMapper, UserRepository@update() and UserRepository@create(), and probably a bunch of other places.

…oh and all of those unit tests I wrote! :sweat:

Is Eloquent always right for the job? No. The questions to ask are: how does this complexity serve us? Does it make our business more attractive to clients? Does it allow us to ship features faster? Does it make it easier to change our code now and in the future? Is it easier for an LLM to parse and generate code for? Does it make it easier to debug? What are we protecting ourselves from? What would happen if we made it easier?

Simplicity Is Not Just Fewer Lines of Code

let userContactMethod = user.enrolledInSms()
  ? 'sms'
  : user.enrolledInEmail() ? 'email'
  : user.enrolledInPush() ? 'push' 
  : user.hasParent() && user.parent().enrolledInChildActivities() ? 'parent' 
  : null;

This code doesn’t have many lines, but that doesn’t make it simple. When I write dense code like this, I regret it during maintenance and debugging.

The essential complexity, if this is what your product owner has deemed the flow, is that there’s a lot of branching logic. Nested ternaries make my brain explode; add in spread arrays and null-coalescing, and I’m starting to sweat. If I have to slow down to read the code, even when it’s truly simple, that’s a cost of complexity. I should aim to spend that complexity wisely. Future me thanks present me.
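When I do need that branching, a guard-clause function costs more lines but reads linearly. Here is a PHP sketch mirroring the snippet above (the enrollment methods are hypothetical):

```php
function contactMethodFor(object $user): ?string
{
    if ($user->enrolledInSms()) {
        return 'sms';
    }

    if ($user->enrolledInEmail()) {
        return 'email';
    }

    if ($user->enrolledInPush()) {
        return 'push';
    }

    if ($user->hasParent() && $user->parent()->enrolledInChildActivities()) {
        return 'parent';
    }

    return null;
}
```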

Time Teaches Simplicity

Time is our greatest teacher. As a quick test, go open a project you worked on 18 months ago and ask yourself if you would write this code the same way today. Git blame, 9 times out of 10, is for me to ask “who the hell wrote this crap?” and 8 out of 10 times, I find out it was me. Staying with a project for even a little while is the way to learn from our mistakes.

I was assigned to build a project a few years ago, starting totally from scratch. I, like I think many developers, find greenfield to be alluring. I can admit I told my manager “I’m not going to write it the Laravel way, I’m going to write it the right way.” It didn’t take longer than four months for me to begin to see that the choices I made were making maintenance, development, and changes harder for me and the developers who had to jump on the project.

A batteries-included framework has strong opinions and therefore architecture choices can be reduced or foregone. LLMs trained on blog posts and docs love familiar conventions and patterns. New hires can jump in without re-learning all the architecture choices which you documented so thoroughly. …you did document all of your architecture decisions, right?

So at the end of four months on the first leg of the project, I was able to admit honestly I made choices that hurt future me. By sticking with the project, I was able to see the consequences of choices made. It’s disheartening to see negative consequences of those choices, but that also shows growth and learning (dare I say wisdom?). I continued on with the project and made a pivot towards greater simplicity. And six months after that, I saw improvements in many areas, but still things that I felt were a little bit ugly. Evidence that I still have more to learn.

I Tuned Out A While Ago, Are You About Done?

One of my favorite of Jack Kerouac’s “western” haikus:

Perfect moonlit night

marred

By family squabbles.

Take a moment to read that slowly. Each of its seven simple words is integral. There is no fat, not even muscle: just bone. It brings to life an entire story. Would it be better if it were smushed together as one long line? Would it be as effective or as joyful to read if it rambled longer, with greater embellishment?

A haiku is a form that embodies paring away the unnecessary. Writing code is not an ancient art of Zen masters, but neither is it a hard science without deviation. There are no masters of software engineering who can dictate precisely how effective code is written for you, your domain, or your team.

Simplicity is found by sculpting away complexity until what we have left is essential.

Luke Kuzmish

Musings on 2025
2026-01-01 | https://cosmastech.com/2026/01/01/musings

As 2025 wraps up, I wanted to touch on some broader topics that I thought deeply about this year.

Active Record (Laravel Eloquent) and Domain Purity

Active Record is fine, maybe even good. I worked on a project in 2024 where I wanted to go full-on, all-in domain objects and repositories. I had just left a Java shop with some great devs, and I thought I needed to fully replicate their patterns. I also read Clean Architecture by Uncle Bob, which is typical Java-bro meat and potatoes. If it’s in a book, if micro-service Java devs use it, I thought, it must be The Way. I don’t like it, but I can admit it: PHP has a stink to it. It’s not taken seriously. Its execution model is alien to modern programming languages. Java is the backbone of so many big corps, from Netflix to Amazon. So I guess I wanted to write this project like it was in Spring Boot, so I could feel like I was a real developer…

But the project was in Laravel, so I had some real problems.

The framework is designed to work with models. It’s baked into its DNA. By leaning on this fact, everything is simpler, from auth to route binding.

Developers who worked at the company already knew Laravel, so onboarding them to a new style of writing code was a cost. And it would remain a cost for every developer who had to onboard to the project.

Laravel is quite good at rapid app development. The project was for a burgeoning market, and forcing the code to look like how I thought real code must be written for some unfounded belief I had meant going slower. Except I couldn’t go slower, due to the business demand… so I just worked longer hours for some kind of domain purity.

Domain purity in apps which are basically just CRUD is a silly thing to shoot for. The right tool for the right job. There are times where I:

  • Map an Eloquent Model into a domain object
  • Pass that domain object to a repository method
  • Use Eloquent’s Query Builder in order to insert the data
  • Construct a different domain object with hydrated data
  • Return that domain object to the caller

What am I doing? A lot of work for idealism.

I was missing the fact that active record forces the developer to think in terms of the database, which is actually incredibly valuable for CRUD apps. I liken it to mechanical sympathy. When I think about how the data is being stored and retrieved, I can find efficiencies and optimizations. I can retrieve exactly what is needed, rather than constructing a rich domain object which has data I don’t care about. My series of inserts, updates, or deletes can run inside of a small, very tight database transaction.

I have witnessed developers write code where they treat all abstractions as the same, whether it’s reading from a database or an external API. This is dangerous. Understanding what is actually happening in our code is absolutely critical. Abstractions are great, but they are hard to get right. As a developer, I cannot just wave my hands at an interface. I must know what is happening in my production code in order to make informed decisions.
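A concrete sketch of that mechanical sympathy (the table, columns, and $orders payload are hypothetical):

```php
use App\Models\User;
use Illuminate\Support\Facades\DB;

// Retrieve exactly what is needed, not a rich object full of unused data
$emails = User::query()
    ->where('active', true)
    ->pluck('email');

// Keep the writes inside a small, very tight database transaction
DB::transaction(function () use ($orders) {
    DB::table('orders')->insert($orders); // one bulk insert, not a loop of saves
});
```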

I am not here to make the claim that anyone else is doing it wrong. I do think the benefit of active record carries a stink similar to PHP, but memes and tropes are not a reality. As always nuance is critical, but not always something that our current climate values.

And what did I really bring with me from that Java team I briefly worked on? That testing is important. Good testing is made possible through good code composition. There’s nothing about using Eloquent Models that makes it impossible (or even harder) to compose code. There are some guard rails to put in place (requiring data to be preloaded or passed as a separate argument to a function; being careful about how data is mutated; never using Model Observers; using bulk inserts whenever possible, rather than single updates in a loop), but the code can lend itself to testing just as it would be if written in a different paradigm.

Testing

Big test suites are a liability. They cost time to develop and time to maintain. If a test pipeline takes too long, I see two outcomes:

One, developers get in the habit of not running the test suite locally. They just leave it for CI. Our laptops are much more powerful than the runners, so preferring the CI pipeline slows the feedback loop.

Or, the second option: I think to myself “I’ll just go and work on something else for nine minutes.” 30 minutes later, I remember to check the suite locally, see there was a failure, and switch context back. The time I spend waiting for the pipeline is usually not terribly productive: I’m either doing something non-work related (twitter, reddit, etc), or I’m half-reviewing someone else’s code.

Oh, and then once my suite looks clean on local, I push my code and run the pipeline there, potentially repeating the process of waiting, getting lost, and paying a context switch.

Slow CI pipelines slow down merges to trunk and releases to production. Slow dev feedback robs developer productivity.

Flaky tests cost money. Without fail, if a test is going to fail intermittently, it’s going to do so when I am in a hurry to merge something. I don’t know if there’s a way to retry just a single test in a CI pipeline, but I know I haven’t found it. That flaky test just cost me another full run through the test suite, which means I’m liable to fall into the trap of finding something small to focus on while I wait, and forgetting about the pipeline.

Mocking frameworks are problematic. Not because of how they work, but because of how they allow for poor code architecture, low-value tests, and misleading tests. I had a former colleague who told me “mocking frameworks are the devil,” and I thought he was being a bit extreme. I still don’t know if I would call them devilish, but I would say they’re worth actively avoiding.

Not to mention in PHP, classes cannot be marked final or readonly if you wish to use them with Mockery or PHPUnit. This isn’t the end of the world, but it feels like the tail wagging the dog.

AI Agents

As much as I hate to admit it, AI agents are pretty great. I don’t see them killing the career of software engineers just yet, but I do think they are here for the duration.

It is my belief that we are in a transitional state for software engineers. As software development enters a new frontier, we will be forced to change the way we work. Code reviews will be different. Upfront planning will be different. And of course, the practice of building software will be different. Clinging to the old way of doing things isn’t going to work, but neither is letting AI write and build everything.

I strongly dislike how much code I have to review now, either within my editor reviewing an LLM’s code, or the influx of pull requests from other devs. Agents, specifically Cursor’s BugBot, that automatically review code are a good first step. However, I see a problem with all of this: my brain turns off. I stop thinking about code the same way, or even really trusting my own instincts.

I also realize that anything over a few sentences becomes intolerable to read. My brain just starts seeking out context clues, and if I can’t find them, I have an LLM summarize it for me. The biggest scourge on all of business is the amount of doc-slop we create. Someone has two or three sentences that they want made into a Notion or Markdown document, which their LLM of choice expands into something that takes 30 minutes to read. No one reads it, and instead passes it to an LLM to summarize back into 2 or 3 sentences. Combine this with the tells of LLM writing: too many emojis, the word “blazing-fast,” em-dashes (if they haven’t spent a billion dollars training that out of new models), or just a writing style that doesn’t match the person who generated the doc. I immediately lose confidence in the document and get frustrated because I don’t believe they even read it.

It is a vulgar act to ask someone to read a document or review code that you yourself have not read. But our brains are atrophying and we’re losing confidence in our own ability to write, read, and think. I think that this is another place we need to change our practices as LLMs become ever-present.

Work-life balance

I have not figured out a way to suss out work-life balance in job interviews. I have seen work-life balance be bad in a couple of ways:

On-call rotations where something always goes wrong. If I have to be handcuffed to my laptop for an entire week every six weeks, I’m going to be unhappy. If I expect that I’m going to be paged at 2 AM at least once every rotation, I’m going to be resentful.

Continuous, unrealistic deadlines. Having to work long hours occasionally isn’t something that I mind; in fact, I think it’s pretty standard in the industry. Once it becomes the norm at a company, it’s unbearable.

Support rotations are tough. Being in the support role when I am brand new feels like drowning. I don’t even understand what the ticket is describing or what the expected behavior is. Hell, I don’t even know if the ticket is describing exactly what is supposed to be happening. I am forced to ask for help, which doesn’t always feel comfortable. Five hours spent looking at support tickets feels longer than eight hours of coding and reviewing.


2026 will probably be somewhere between good and great. It’s all in how we show up for it. I’m going to have to get comfortable with some of the things which at present make me uncomfortable. I’m going to have to open myself up to new ways of doing things. I know these are the paths to growth, and like all good things, they’re uphill.

Luke Kuzmish

Leveraging Promises and HTTP Pooling
2025-12-01 | https://cosmastech.com/2025/12/01/request-pooling-pattern

Laravel 8 first introduced HTTP request pooling, thanks to a contribution from Andrea Marco Sartori. This allows developers to write code which executes any number of HTTP requests concurrently. Under the hood, this is made possible by Guzzle’s async request support and cURL’s multi handle functionality.

Serially Executed Requests Vs Pooled Requests

Let’s imagine we are building a platform for travelers to get the best deals on travel. A traveler needs transportation, lodging, a rental vehicle, and recommendations for what to do when they are in town.

use Illuminate\Support\Facades\Http;

$flightsResponse = Http::post('https://travel-api.example.dev/flights/', [
    'from' => 'New York City, NY',
    'to' => 'San Francisco, CA',
    'departing' => '2026-02-13',
    'returning' => '2026-02-16',
]);

$hotelsResponse = Http::post('https://hotels-api.example.dev/hotels', [
    'location' => 'San Francisco',
    'check_in' => '2026-02-13',
    'check_out' => '2026-02-16',
    'adults' => 2,
    'amenities' => [
        'pool' => true,
        'breakfast' => false,
        'shuttle' => false,
    ],
]);

$carsResponse = Http::post('https://vehicles-api.example.dev/rental-vehicles', [
    'location' => 'San Francisco',
    'type' => 'sport',
    'options' => [
        'orange',
        'lamborghini',
    ],
]);

$activitiesResponse = Http::get('https://around-town-api.example.dev/SF-CA-USA', [
    'categories' => [
        'nightlife',
        'art',
        'music',
        'historical',
    ],
]);

The above example gathers all of this data, but it does so sequentially. If this data is all gathered during a web request, the caller has to wait for the cumulative time of all requests. For instance, imagine this is the response time for each:

Request       Time
----------    ----
Flights       1.1s
Hotels        1.9s
Cars          1.1s
Activities    0.4s
Total         4.5s

We can improve the wait time by leveraging HTTP pooling.

use Illuminate\Http\Client\Pool;
use Illuminate\Support\Facades\Http;

$responses = Http::pool(static function (Pool $pool) {
    $pool->as('flights')->post('https://travel-api.example.dev/flights/', [
        'from' => 'New York City, NY',
        'to' => 'San Francisco, CA',
        'departing' => '2026-02-13',
        'returning' => '2026-02-16',
    ]);

    $pool->as('hotels')->post('https://hotels-api.example.dev/hotels', [
        'location' => 'San Francisco',
        'check_in' => '2026-02-13',
        'check_out' => '2026-02-16',
        'adults' => 2,
        'amenities' => [
            'pool' => true,
            'breakfast' => false,
            'shuttle' => false,
        ],
    ]);

    // ...the other requests
}, concurrency: 4);

Now the response wait time is only as long as the slowest request, because the requests execute concurrently:

Request       Time
----------    ----
Flights       1.1s
Hotels        1.9s
Cars          1.1s
Activities    0.4s
Total         1.9s

Notes

The second parameter to Http::pool() is named concurrency and it informs how many requests should be in flight at any given time. If you have 10 requests with $concurrency set to 5, the sixth request will not start until the first is complete.
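To make that concrete, here is a sketch with ten hypothetical requests and a concurrency of 5; the sixth request only starts once one of the first five completes:

```php
use Illuminate\Http\Client\Pool;
use Illuminate\Support\Facades\Http;

$responses = Http::pool(static function (Pool $pool) {
    // ten hypothetical endpoints
    foreach (range(1, 10) as $i) {
        $pool->as("page-{$i}")->get("https://example.dev/pages/{$i}");
    }
}, concurrency: 5); // at most five requests in flight at a time
```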

Silver Bullet?

HTTP pooling doesn’t solve all your problems. In many applications, a response from one endpoint is used to inform another call in the chain. Use HTTP pooling where it fits, but recognize that some requests must still be executed in series.

What Does pool() Return?

Http::pool() returns an array with each value being one of:

  • Illuminate\Http\Client\ConnectionException meaning there was a timeout trying to connect to the server
  • Illuminate\Http\Client\Response a response object if the request received a response
  • Illuminate\Http\Client\RequestException if you marked that you want your request to throw() on failing status codes

Let’s take the last example and see how we might use the responses.

namespace App\Schema;

use Carbon\CarbonImmutable;

final readonly class AvailableFlight
{
    public function __construct(
        public string $airline,
        public string $airport,
        public CarbonImmutable $departure,
        public string $cost,
    ){
        // ...
    }
}

use App\Schema\AvailableFlight;
use App\Schema\ServiceUnavailableResponse;
use Carbon\CarbonImmutable;
use Illuminate\Http\Client\ConnectionException;
use Illuminate\Http\Client\Pool;
use Illuminate\Support\Facades\Http;

$responses = Http::pool(static function(Pool $http) { /* ... */ });

$apiResponse = [];

$flightResponse = $responses['flights'];
if ($flightResponse instanceof ConnectionException) {
    $apiResponse['flights'] = new ServiceUnavailableResponse;
} else {
    $flights = [];
    foreach($flightResponse->json()['results'] as $flightResult) {
        $flights[] = new AvailableFlight(
            airline: $flightResult['carrier'],
            airport: $flightResult['airport'],
            departure: CarbonImmutable::parse($flightResult['departing']),
            cost: (string) $flightResult['price'],
        );
    }

    $apiResponse['flights'] = $flights;
}

If you’re like me, there’s something off about putting all of this mapping logic into a single method of a service class. I want there to be single responsibility, not because Uncle Bob told me so, but because I want to be able to test my code at the component level. And of course, there’s that gnawing feeling that “maybe I’ll need to use this in another place,” but more on that later.

What if I had a class that was responsible for mapping Laravel’s response into my data object? I could then test this function in isolation, giving me confidence it behaves as desired given different scenarios, such as “what if a key is missing? what if the connection times out? what if the API returns a 500 response code?”

Promises To the Rescue

As mentioned above, the async nature of HTTP requests is made possible thanks to Guzzle’s Promises library. Because the PHP runtime is not async by nature, the Promises offered are closer to Laravel’s Pipeline than they are to Promises in JavaScript. While Guzzle can execute HTTP requests concurrently, Promises can be used to simply pipe the results of one function into another.
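Outside of the HTTP client, you can see this pipe-like behavior with Guzzle’s promise library directly (a sketch, assuming the guzzlehttp/promises package is installed):

```php
use GuzzleHttp\Promise\FulfilledPromise;

// Each then() pipes the previous result into the next callback;
// nothing runs until the promise chain is waited on.
$promise = (new FulfilledPromise('{"ok":true}'))
    ->then(fn (string $body) => json_decode($body, associative: true))
    ->then(fn (array $data) => $data['ok']);

var_dump($promise->wait()); // bool(true)
```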

The Promise interface allows us to chain mutations together and then wait for each link in the chain to be resolved. For instance, we can pipe our Response into a method and have it give us back a POPO (or Laravel Data object if you fancy).

namespace App\Requests;

use App\Schema\AvailableFlight;
use App\Schema\ServiceUnavailableResponse;
use Carbon\CarbonImmutable;
use Illuminate\Http\Client\ConnectionException;
use Illuminate\Http\Client\RequestException;
use Illuminate\Http\Client\Response;
use Throwable;

class GetFlights // This name will make more sense in a moment
{
    public function mapToAvailableFlight(array $flight): AvailableFlight
    {
        return new AvailableFlight(
            airline: $flight['carrier'],
            airport: $flight['airport'],
            departure: CarbonImmutable::parse($flight['departing']),
            cost: (string) $flight['price'],
        );
    }

    /**
     * @return  list<AvailableFlight>|ServiceUnavailableResponse
     */
    public function mapResponseToAvailableFlights(
        Response|RequestException|ConnectionException $flightsResponse
    ): array|ServiceUnavailableResponse {
        if ($flightsResponse instanceof Throwable) {
            return new ServiceUnavailableResponse;
        }

        $flights = [];

        foreach ($flightsResponse->json()['results'] as $flight) {
            $flights[] = $this->mapToAvailableFlight($flight);
        }

        return $flights;
    }
}

With the release of Laravel 12.41, we can leverage the then() method on our Promise. The then() method executes a callback against the Response.

$responses = Http::pool(static function (Pool $pool) {
    $pool->as('flights')
        ->post('https://travel-api.example.dev/flights/', [
            'from' => 'New York City, NY',
            'to' => 'San Francisco, CA',
            'departing' => '2026-02-13',
            'returning' => '2026-02-16',
        ])
        ->then(
            (new GetFlights)->mapResponseToAvailableFlights(...)
        );

    /* ... */
});

if ($responses['flights'] instanceof ServiceUnavailableResponse) {
    // ... handle the failure
} else {
    // now we have an array of AvailableFlight
}

Making It More Extensible Still

I conspicuously named the class above GetFlights because I want to highlight my favorite part of this pattern. It’s quite common at my work that I need to make a one-off request to fetch some data, but at other times, it’s more practical to do so in a pool. This can lead to code duplication, which can lead to drift: I updated a parameter in this service method, but forgot to do so in a different service method where maybe I am pooling the requests.

The HTTP facade allows us to mark a single request as async, even if it’s not being used in a pool. Then our terminal function (like post() or get()) returns a PromiseInterface, rather than a Response object.

$bodySize = Http::async()
    ->get('https://cosmastech.com')
    ->then(fn (Response $response) => strlen($response->body()))
    ->wait();

In the above, we are making a single request that will return the character count of a webpage.

Above I mentioned how we may want to use our request building and mapping logic in another place. So how can we use this as a lever for better devex and eliminating duplication? Let’s add a few more methods to our GetFlights class.

use GuzzleHttp\Promise\PromiseInterface;
use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;
use RuntimeException;

class GetFlights
{
    /**
     * @param  array<string, mixed>  $flightRequestBody
     */
    public function fromPendingRequest(
        array $flightRequestBody,
        ?PendingRequest $pendingRequest = null
    ): PromiseInterface {
        $pendingRequest ??= Http::createPendingRequest();

        return $pendingRequest
            ->async()
            ->post('https://travel-api.example.dev/flights/', $flightRequestBody)
            ->then($this->mapResponseToAvailableFlights(...));
    }

    /**
     * Retrieve available flights.
     *
     * @param  array<string, mixed>  $flightRequestBody
     * @return list<AvailableFlight>
     *
     * @throws RuntimeException when there is a request failure
     */
    public function fetch(array $flightRequestBody): array
    {
        $result = $this->fromPendingRequest(
            $flightRequestBody
        )->wait();

        if ($result instanceof ServiceUnavailableResponse) {
            throw new RuntimeException('Service unavailable');
        }

        return $result;
    }

    /* code below from previous example */

    public function mapToAvailableFlight(array $flight): AvailableFlight
    {
        return new AvailableFlight(
            airline: $flight['carrier'],
            airport: $flight['airport'],
            departure: CarbonImmutable::parse($flight['departing']),
            cost: (string) $flight['price'],
        );
    }

    /**
     * @return  list<AvailableFlight>|ServiceUnavailableResponse
     */
    public function mapResponseToAvailableFlights(
        Response|RequestException|ConnectionException $flightsResponse
    ): array|ServiceUnavailableResponse {
        if ($flightsResponse instanceof Throwable) {
            return new ServiceUnavailableResponse;
        }

        $flights = [];

        foreach ($flightsResponse->json()['results'] as $flight) {
            $flights[] = $this->mapToAvailableFlight($flight);
        }

        return $flights;
    }
}

With these simple additions, we can get our flight via pooling or as a one-off, and it will always be immediately mapped to our AvailableFlight data object.

$flightGetter = new GetFlights;

$requestPayload = [
    'from' => 'Erie, PA',
    'to' => 'Little Rock, AR',
    'departing' => '2026-01-11',
    'returning' => '2026-02-13',
];

// Get in a pool
$responses = Http::pool(function (Pool $pool) use ($flightGetter, $requestPayload) {
    $flightGetter->fromPendingRequest(
        $requestPayload,
        $pool->as('flights')
    );

    /* ... other pooled requests ... */
}, concurrency: 4);

// Or use it as a one-off
$availableFlights = $flightGetter->fetch($requestPayload);

Why Does This Work?

HTTP Pooling works by keeping an array of PendingRequest objects, all of which have their async property set to true by default. When the pool() method executes, it is just awaiting an array of Promises, for which we already chained a then() method to map them into the object we want. For our one-off request case (GetFlights@fetch()), we are creating a new PendingRequest and marking it as async via $pendingRequest->async(). We do this not because the request will be handled concurrently with other requests, but because we want to share the Promise chaining.
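As a conceptual sketch (this is not Laravel's exact implementation), "awaiting an array of Promises" looks like this in raw Guzzle terms:

```php
use GuzzleHttp\Promise\FulfilledPromise;
use GuzzleHttp\Promise\Utils;

$promises = [
    'flights' => (new FulfilledPromise(['JFK', 'SFO']))
        ->then(fn (array $airports) => count($airports)),
    'hotels' => new FulfilledPromise(3),
];

// Utils::all() resolves once every promise in the array has resolved,
// preserving the keys — much like Http::pool() keys its responses.
$results = Utils::all($promises)->wait();

// $results === ['flights' => 2, 'hotels' => 3]
```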

Why?

I came to this pattern as I was refactoring some endpoints which were very slow. The sluggishness was a result of sequential requests to an external API. When these methods were written initially, they worked fine, because we didn’t need to gather everything at once. The frontend of the web application called a separate application endpoint for flights, for hotels, for cars, etc. In that way, they were able to be called asynchronously.

As we move towards a single endpoint returning all of the gathered data, the series of requests becomes a bottleneck. But for a product which releases code multiple times per day, and for whom some service methods still needed to be used one-by-one, this feels like an elegant solution: we have one class which is responsible for building a request and mapping it to a data transfer object, but the request can be made in a batch or one-by-one.

The approach to refactoring is first to create each Get* class. Next we will move our existing service methods to call this class using the fetch() method. Finally, we seek out opportunities for pooling, and in those cases, refactor to the fromPendingRequest() method inside of an HTTP pool rather than calling the service method.

In Closing

When I was working towards this pattern, I felt like I had just discovered some kind of magic. PHP can be called anachronistic for its runtime model of one process per request. However, it still offers excellent developer ergonomics: we don’t have to think about threads, function coloring, or manually cleaning up the application at the end of a request. The ability to pool our HTTP requests to avoid sequential slowness is a big win for developers.


Are you using HTTP pooling in your application? Got big thoughts on Promises? Did I make a mistake in this post and you want to bring it to my attention? Drop a comment below or find me on X.

]]>
Luke Kuzmish
Improving unit test run time in a Laravel modular monolith2025-11-13T00:00:00+00:002025-11-13T00:00:00+00:00https://cosmastech.com/2025/11/13/improve-modular-test-suite-performanceI recently started a new job and was given my first exposure to a modular monolith. On the surface, they have a lot to love: all aspects of your app in one repo, one set of dependencies to keep updated, reduced code duplication, and allowing multiple engineering teams to work in their own isolated sections of the code are the first that spring to mind. Modular monoliths of course present some challenges, and the one that kind of struck me by surprise was test suite run time.

In previous projects, test suites usually took at most two minutes to run locally. Parallelization obviously helps a great deal, but it is not a silver bullet. I have spent a great deal of my focus on writing tests that don’t execute extraneous database queries:

  • don’t build more than you need
  • leverage bulk inserts versus many inserts in a loop
  • be mindful of model events that may trigger database queries
  • test a single unit of code when possible
  • leverage Factory::make() if writing to the database isn’t strictly necessary
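As a hedged illustration of a few of these points (the Payment model here is hypothetical):

```php
use App\Models\Payment;

// Slow: 100 INSERT statements, plus model events firing for each.
foreach (range(1, 100) as $i) {
    Payment::factory()->create();
}

// Faster: a single bulk INSERT (note: insert() bypasses model events
// and does not fill timestamps for you).
Payment::insert(
    Payment::factory()->count(100)->make()->toArray()
);

// Fastest: no database writes at all when persistence isn't needed.
$payments = Payment::factory()->count(100)->make();
```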

These are helpful guidelines, but an unfamiliar codebase with over 19,000 tests is not something that can be optimized quickly, if those opportunities even exist.

Wait, it takes how long?

In our CI pipeline, just the PHPUnit tests were taking between 8 and 10 minutes to execute fully. Heaven help you if you needed something merged quickly. The unit tests were going to be the gate. This means we were running about 1900 to 2300 tests per minute. Not slow per se, but long enough to lead to context loss. Any process that takes long enough for me to think “Oh, I’ll just do something else while I wait” leads to me losing context, and usually forgetting what I was waiting on in the first place.

The Bottleneck

I had theories. Slow database queries, I figured. A staff engineer reached out to ask me if I saw anything obvious that would be slowing down our application boot times. He had taken the effort to convert some of our integration tests to plain old PHPUnit tests and saw major improvements in the speed. He reported that boot times were fine in production. He’s a smart guy and I trusted his intuition, so I decided to go down a rabbit hole.

Leveraging Herd’s handy SPX profiler wrapper, I decided to take a look at where our time and memory consumption was getting eaten up. What I found was a bit surprising.

In a partial run of our test suite run in series, we were spending a great deal of time and memory preparing routes and loading the configuration. My initial suspicion about database queries wasn’t proving itself to be true.

The Design of a Modular Monolith

In order to allow for domain ownership, modules often mirror the default Laravel application structure.

src
---- Shipping
-------- config
------------ shipping.php
-------- Http
------------ Middleware
------------ Controllers
------------ routes.php
-------- Models
-------- Providers
------------ ShippingServiceProvider.php
---- Payments
-------- config
------------ payments.php
-------- Http
------------ Controllers
---------------- ProcessPaymentController.php
------------ routes.php
-------- Models
-------- Providers
------------ PaymentsServiceProvider.php

You’ll notice that each domain (Shipping and Payments in the above example) has its own config, routes file, and ServiceProvider. The service provider will register the module’s routes and merge in its own config.

<?php

declare(strict_types=1);

namespace Domain\Shipping\Providers;

use Illuminate\Support\ServiceProvider;

final class ShippingServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        $this->loadRoutesFrom(__DIR__ . '/../Http/routes.php');
        $this->mergeConfigFrom(__DIR__ . '/../config/shipping.php', 'shipping');
    }
}

When the application begins, such as the start of a request or an artisan console command, each service provider is instantiated and its boot() method is called. But Laravel’s got your back. You can cache routes, config, views, and events before deployment. That means your web requests can be served faster because the expense of gathering routes, events, configs, and views has been paid once and is stored on disk as a PHP array.

Back to the tests

For every “feature” test (those which extend from Laravel’s Illuminate\Foundation\Testing\TestCase), the application is built for each test method. Take the following trivial test class as an example.


use Illuminate\Foundation\Testing\TestCase;

final class MyTest extends TestCase
{
    public function test_collect_all_returns_empty_array(): void
    {
        // When
        $actual = collect()->all();

        // Then
        self::assertEquals([], $actual);
    }

    public function test_collection_can_return_count(): void
    {
        // Given
        $c = collect([1, 2, 5]);

        // Then
        self::assertEquals(3, $c->count());
    }
}

While these tests are pretty trivial, what I want to highlight is what happens before either one of these tests is executed. The entire application boot process has to happen, which includes gathering routes from every service provider, as well as configuration. Not just once for the entire class, but one time for each test method.

Options

I could have promoted leveraging route, event, configuration caching for tests. Simply run php artisan optimize --env=testing. However, the cached files do not indicate their environment, so unless you remember to remove these files, calling an endpoint from your browser will pull in the routes defined in the testing environment. In my experience, these cached files tend to be easy to forget about, and in frustration folks will waste time trying to figure out just why their newly added route doesn’t work. Or perhaps why their local environment is suddenly using test environment variables. Asking devs to remember to cache before tests and clear cached files after running tests seems ineffective.

The other option? Roll up my sleeves and figure out how this can be optimized at the framework level.

WithCachedRoutes and WithCachedConfig traits

With the release of Laravel 12.38.0, two new traits were added for optimizing this behavior. Illuminate\Foundation\Testing\WithCachedRoutes and Illuminate\Foundation\Testing\WithCachedConfig are traits which can be applied to a test (or maybe best of all, the base test case, usually Tests\TestCase) which memoize the routes and config, respectively.

<?php

declare(strict_types=1);

namespace Tests;

use Illuminate\Foundation\Testing\TestCase as BaseTestCase;
use Illuminate\Foundation\Testing\WithCachedConfig;
use Illuminate\Foundation\Testing\WithCachedRoutes;

abstract class TestCase extends BaseTestCase
{
    use WithCachedRoutes;
    use WithCachedConfig;
}

Or, if you prefer the Pest flavor:

<?php

use Illuminate\Foundation\Testing\WithCachedConfig;
use Illuminate\Foundation\Testing\WithCachedRoutes;

pest()->use(WithCachedConfig::class);
pest()->use(WithCachedRoutes::class);

This works in parallel runners too.

These traits work by grabbing the routes and configuration after they have been built, but before the test runs, and storing it statically. When the next test method is spinning up a fresh application instance, it will now use the statically memoized values instead of gathering every route and config file.
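Conceptually (this is a simplified sketch, not the framework's actual code), the memoization works because static properties outlive each test method's application instance:

```php
use Illuminate\Foundation\Testing\TestCase as BaseTestCase;

abstract class TestCase extends BaseTestCase
{
    /** @var array<string, mixed>|null */
    protected static ?array $cachedConfig = null;

    protected function setUp(): void
    {
        parent::setUp();

        if (static::$cachedConfig === null) {
            // The first test method pays the full cost of building config.
            static::$cachedConfig = $this->app['config']->all();
        } else {
            // Subsequent tests reuse the statically memoized array.
            $this->app['config']->set(static::$cachedConfig);
        }
    }
}
```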

Outcome

On my local environment, I was able to get parallel test run time down from 07:52 to 04:28. Being able to run my tests locally and in short order means that I can get feedback about my changes rather than pushing something up for review, waiting for the much slower CI pipelines to give me feedback. (The CI pipelines are down to about 5 minutes, a marked improvement from where we started.)

While this is a huge win, there are still individual tests to improve. Using a test which boots and tears down the application will always be slower than a plain old PHPUnit test. Anything that can be tested via a unit test should be, but for the high confidence that integration tests can offer, these traits are a huge win.

Let me know on X or comment below how these new traits are improving your test suite run time.

]]>
Luke Kuzmish
Cleaner middleware: static factory functions2025-10-01T00:00:00+00:002025-10-01T00:00:00+00:00https://cosmastech.com/2025/10/01/cleaner-middlewareRoute Middleware is a way to execute code before a request is handled by a controller. Some examples of middleware in apps that I have built or used: complex authentication logic 😢, loading and validating a child-relationship, setting contextual data, modifying headers, and caching responses which require heavy computation to produce.

Sometimes we want to be able to define a route’s middleware with some kind of property.

namespace App\Http\Middleware;

use Closure;
use Illuminate\Auth\Access\AuthorizationException;
use Illuminate\Container\Attributes\Singleton;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\Response;

#[Singleton]
class EnsureUserHasRoleMiddleware
{
    public function handle(Request $request, Closure $next, string $role): mixed
    {
        if (! $request->user()?->hasRole($role)) {
            throw new AuthorizationException();
        }

        return $next($request);
    }
}

In the above example, we are verifying that a user has a particular role. If they do not, then we throw an authorization exception.

When we define a route which uses this middleware, we must tell it which role to use.

use App\Http\AdminController;
use App\Http\Middleware\EnsureUserHasRoleMiddleware;

Route::post(
    'admin',
    [AdminController::class, 'create']
)->middleware(EnsureUserHasRoleMiddleware::class . ':super-admin');

This instructs Laravel that before entering into the POST /admin route, pass the request through our middleware to ensure they have the role of 'super-admin'.

Let’s Clean It Up

As our application grows, we may end up repeating this same class-string + role again and again. It’s not the nicest looking thing in the world and it has the problem of relying on a magic string. In this case, I like to add a static method to my middleware for building the route definition.

class EnsureUserHasRoleMiddleware
{
    public static function forSuperAdmin(): string
    {
        return self::class . ':super-admin';
    }

    public static function forAdmin(): string
    {
        return self::class . ':admin';
    }

    // ...
}

Then we can tidy up our route definition like this:

Route::post(
    'admin',
    [AdminController::class, 'create']
)->middleware(EnsureUserHasRoleMiddleware::forSuperAdmin());

If we are disciplined about using these static factory methods when defining middleware for our routes, our IDE can help us easily jump to usages of these functions to see which routes require these roles.
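A hypothetical variation on the same idea: back the factory methods with an enum so the role strings live in exactly one place.

```php
enum Role: string
{
    case Admin = 'admin';
    case SuperAdmin = 'super-admin';
}

class EnsureUserHasRoleMiddleware
{
    // PHP allows reserved words like "for" as method names since PHP 7.
    public static function for(Role $role): string
    {
        return self::class . ':' . $role->value;
    }

    // ...
}

// Usage: ->middleware(EnsureUserHasRoleMiddleware::for(Role::SuperAdmin))
```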


This isn’t ground-breaking, just something I feel makes for a cleaner codebase.

👋 Happy coding!

]]>
Luke Kuzmish
Creating type-safe configs in Laravel2025-08-23T00:00:00+00:002025-08-23T00:00:00+00:00https://cosmastech.com/2025/08/23/creating-type-safe-configs-in-laravelWhen I began building a new product in Laravel last year, I wanted a lot of things: good test coverage, type-safety, a high PHPStan level, and for some reason, to write the project like I was still working on Java Spring Boot microservices. I’ll leave that last one for another time, as it was a complete disaster, but I did learn a lot from it.

One of the things that feels yucky about Laravel is the use of magic strings. When PHP added enums, they immediately changed the way I write code. Recently, Laravel has accepted a number of PRs that allow passing enums to methods which previously only accepted primitives. For instance, to retrieve a value from a file defined in the config/ directory, you call config('github.api_token'), which returns a value of unknown type, and if the key isn’t found, it returns null. For additional type-safety, you can leverage Config::string('github.api_token'), functionality that was added in Laravel 11.x.
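The typed Config helpers look like this; each throws when the value is missing or not of the expected type, rather than silently handing you back mixed or null (the key names beyond api_token are illustrative):

```php
use Illuminate\Support\Facades\Config;

$token = Config::string('github.api_token');
$retries = Config::integer('github.retries');   // illustrative key
$verify = Config::boolean('github.verify_ssl'); // illustrative key
```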

One of the patterns that I landed on which feels elegant is creating a class for a service’s configuration. Here’s a little configuration for an AIM client (a little nostalgia for those who remember crafting the perfect away message).

namespace App\Config;

use Illuminate\Container\Attributes\Config;
use Illuminate\Container\Attributes\Singleton;

#[Singleton]
final readonly class AolInstantMessengerConfig
{
    /**
     * @param array<int, string> $scopes
     */
    public function __construct(
        #[Config('aim.api_token')]
        public string $apiToken,
        #[Config('aim.base_url')]
        public string $baseUrl,
        #[Config('aim.scopes')]
        public array $scopes,
        #[Config('aim.timeout', 60)]
        public int $timeout,
    ) {
    }
}

Let’s take a look at what’s going on in this example, and how we might use this configuration.

How we can use this

In our code, we will never call config('aim.api_token'), but instead we always refer to these values via the config class we have created. This gives us type-safety and reduces magic strings in our code.

Historically, I would write something like:

use Http;

Http::baseUrl(config('aim.base_url'))
    ->timeout(config('aim.timeout'))
    ->withToken(config('aim.api_token'))
    ->post('/login', [
        'scopes' => config('aim.scopes'),
    ]);

Now instead, I can just reference a configuration object.

use App\Config\AolInstantMessengerConfig;
use Http;

$config = app(AolInstantMessengerConfig::class);

Http::baseUrl($config->baseUrl)
    ->timeout($config->timeout)
    ->withToken($config->apiToken)
    ->post('/login', [
        'scopes' => $config->scopes,
    ]);
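Better still, because the container knows how to build the config class, it can be constructor-injected anywhere (the AimClient class here is illustrative):

```php
use App\Config\AolInstantMessengerConfig;
use Illuminate\Support\Facades\Http;

final readonly class AimClient
{
    public function __construct(
        private AolInstantMessengerConfig $config,
    ) {
    }

    public function login(): void
    {
        Http::baseUrl($this->config->baseUrl)
            ->timeout($this->config->timeout)
            ->withToken($this->config->apiToken)
            ->post('/login', ['scopes' => $this->config->scopes]);
    }
}
```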

The Config attribute

The Config attribute tells the Laravel container to inject the value from the config/aim.php file. Paired with constructor property promotion, this makes the class construction nice and clean.
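For reference, the backing config/aim.php file might look something like this (a hypothetical sketch; the keys mirror the attribute paths above):

```php
<?php

// config/aim.php
return [
    'api_token' => env('AIM_API_TOKEN'),
    'base_url' => env('AIM_BASE_URL', 'https://aim.example.dev/api'),
    'scopes' => ['read', 'send_message'],
    'timeout' => (int) env('AIM_TIMEOUT', 60),
];
```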

The Singleton attribute

This attribute, added by Rias and made available in the Laravel 12.21 release, will bind a class as a singleton in the container when it is first resolved. When there is a request to build this class subsequently, it will just reference the same instance of the AolInstantMessengerConfig.

For the toy example above, I wouldn’t expect a noticeable change in performance if it was bound as a singleton or not. That said, this attribute is a nice and convenient way to register a singleton.
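For comparison, the equivalent registration without the attribute would live in a service provider:

```php
// In a service provider's register() method:
$this->app->singleton(AolInstantMessengerConfig::class);
```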

readonly class

Marking this class as readonly ensures that these properties never get changed after instantiation. While I imagine (hope?) that most of you are not modifying config values in your production code, this signals to users that “this is what it is and you shall not change it.”

This pairs nicely with making the class a singleton: we know that the configuration at first instantiation will never change in the course of the app.

But what about …

There are tons of ways to skin this cat. You may prefer calling Config::string() or Config::boolean() for type-safety. You may not like having a separate class for configuration. You may not mind using magic strings. That’s fine! Nothing here is objectively better or worse: it’s entirely team and developer dependent. Just wanted to share something that I enjoy.

Have fun building!

]]>
Luke Kuzmish
FailOnException: Short-circuit Laravel job retries2025-06-17T00:00:00+00:002025-06-17T00:00:00+00:00https://cosmastech.com/2025/06/17/fail-on-exceptionThe Problem

We have a queued job that grabs data from the database and performs some checks. If all is well, it either inserts a new record or maybe calls out to an external API. We want the job to retry if the API request fails or there’s a hiccup with writing to the database, however, if any of the checks fail, we want to mark the job as failed and not bother retrying it.

Not seeing anything obvious in the Laravel docs, we reach for Claude to tell us how to do this. It hallucinates some methods that don’t exist, and now we have our original problem and we’re a little frustrated with AI’s shortcomings.

We’ve all been there.

But with the release of Laravel 12.19, there’s an easier way.

FailOnException Job Middleware

The solution is now as easy as adding a middleware that specifies which exceptions short-circuit the job’s retries.

For instance, imagine your application receives an SNS payload and has to fetch a user from another system and store a record in your local database. An example of this might look something like this:

use App\Actions\RetrieveUserFromApiAction;
use App\Exceptions\UserDoesNotExistException;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Http\Client\ConnectionException;
use Illuminate\Http\Client\RequestException;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class SyncUserJob implements ShouldQueue
{
    use Dispatchable;
    use InteractsWithQueue;
    use Queueable;
    use SerializesModels;

    public $tries = 10;

    public function __construct(private readonly array $snsPayload)
    {
    }

    /**
     * @throws UserDoesNotExistException
     * @throws ConnectionException
     * @throws RequestException
     */
    public function handle(RetrieveUserFromApiAction $retrieveUserAction): void
    {
        $user = $retrieveUserAction->handle($this->snsPayload['user_id']);

        // do some other stuff with the user based on the SNS event...
    }
}

You’ll notice that our docblocks indicate that the job’s handle() method can throw a host of exceptions. A RequestException or ConnectionException can occur due to some kind of transient network issue, or perhaps the external API is having an outage. However, if the UserDoesNotExistException is thrown, then further retries will return the exception again and again.

Instead of putting unnecessary strain on our infrastructure, we can inform the queue worker to mark this job as failed by leveraging the Illuminate\Queue\Middleware\FailOnException middleware. We simply add a middleware method to our job:

    public function middleware(): array
    {
        return [
            new FailOnException([UserDoesNotExistException::class]),
        ];
    }

When the exception is thrown, the job will be immediately marked as failed, no longer ticking down tries. This exception will be visible in Horizon (or Telescope, or your APM) as the failure reason.

]]>
Luke Kuzmish
This Week I Learned: About TypeScript2025-01-11T00:00:00+00:002025-01-11T00:00:00+00:00https://cosmastech.com/2025/01/11/til-jsWhy TypeScript? Aren’t you like a full-time PHP guy?

Professionally, yes, I write PHP almost exclusively. But JavaScript is the web standard. Almost all tech eventually ends up being accessed through the browser, so it makes sense to be at least familiar with JavaScript. All of my JavaScript knowledge was based on what I had learned in the early 2010s, and I wanted to rectify that.

Deno

Deno is a JavaScript runtime created to sort out a lot of the warts of node.js, led by Ryan Dahl, the same person who created node.js years ago. Ryan is also leading the effort to cancel Oracle’s trademark on the term “JavaScript,” so that people can freely title their course “Rust for JavaScript Devs” without having to worry about getting a cease and desist letter for using a trademark. Sign the petition at javascript.tm.

Deno focuses on a security model where you must authorize access to resources on your machine or remotely. Additionally, Deno runs TypeScript natively without the need for a transpilation step. This lowers the barrier for entry for a person like me. It also comes packaged with its own formatter, linter, and package manager. Fewer things to install and configure means I can start learning faster.

Deno offered a free sticker to anyone who completed at least one Advent of Code puzzle using Deno. I’m not the kind of person who wants to put stickers on things, but I thought it was fun, so I gave it a shot.

Learning Through Open-Source

I like the mission of Ryan Dahl and Deno. Hoping not to be too annoying to the maintainers, I also decided I would give back to the project and practice my TypeScript skills. I found an open GitHub issue that seemed pretty self-contained: the cli.promptSecret() function needed some love. promptSecret was designed to be used in a CLI where the user needed to enter sensitive details into the terminal (like a password) and have the characters obscured.

The problem was that if you wrote more characters than the terminal width, it would begin to duplicate the line.

Password: ********
Password: ********
*
Password: ********
**

The input still recorded properly, but the user-experience was confusing. I decided to take a stab at it.

The previous implementation

Capture the input, store it in a variable, but don’t write it to the terminal. Write the mask string (Password: ***) instead, but first… we use \r\u001b[K, a cursor control sequence that clears the current line and jumps the cursor to the first column in the line. That way, we have replaced the prompt with the new masked input.

Once you’ve reached the width of the console, the next character appears on the next line. Now “clear the current line” only clears that overflow line, while the code rewrites the full masked prompt. Hence the duplication.

Password: ********
*       <--- this is the current line now, so we only clear this,
             but the code is rewriting "Password: *********"

The solution

We need to know the character width of the console to start. Once our masked input has reached the limit, the next character added automatically gets added to the next line. For this reason, we only need to clear/rewrite characters exceeding the width.

This works great… but what about if the user starts to delete characters and we need to go back to the first line? We can use a different control sequence (\r\u001b[1F) to jump to the start of the previous line.

There were some other fun things, like realizing we don’t want to clear the current line for the first character on the new line. 😅

It was a fun opportunity to contribute, learn some modern JS, and understand console control sequences a bit better.

Bun

Bun is another JS runtime, similar to node or Deno. It’s receiving a lot of buzz in the tech world for its speed and its big standard library. Bun offers the ability to run TypeScript without a transpilation step. I also did a handful of Advent of Code exercises in Bun [though no sticker :(] and found it easy and fast to develop with.

Interested to give back, I saw an open issue on their GitHub that noted that @types/bun didn’t contain types for some of the recently added standard library functions.

DefinitelyTyped

DefinitelyTyped owns the @types namespace on npm. It is a community project that allows adding TypeScript definitions for popular libraries. Some projects are written in TypeScript, so this is less necessary, but for projects written in JavaScript, supplementing the library with type definitions helps give type-safety in userland code. Interestingly, anyone is free to submit type definitions for a package. For instance, jQuery may not be interested in adding type definitions, but a community member may create a jquery.d.ts file and submit it to DefinitelyTyped. Then a user can include @types/jquery in their project and not have to deal with every function accepting and returning any as its value.

To fix the Bun issue, all I had to do was open a PR that pointed to the latest release of bun. A maintainer of the Bun project was pinged by a bot to approve the PR, and it was available to the world.

Thoughts on TypeScript

I look forward to playing with TypeScript, Deno, and Bun more. A lot of my trepidation was about having to relearn all of the things I missed out on in the ~15 years I was away from the JavaScript world, but thanks to the wealth of free resources (and a very patient colleague), I found that getting back up to speed was pretty easy.

Admittedly, TypeScript has a type system with lots of intricacies, and I still have a lot to learn. Despite its reputation, there’s plenty that can be done without understanding the advanced topics. This makes the barrier to entry much lower than I would have expected.

]]>
Luke Kuzmish
TIL: Laravel’s Factory::forEachSequence
2025-01-04T00:00:00+00:00
https://cosmastech.com/2025/01/04/for-each-sequence

This title is a lie: I did not actually learn this today, but came across it a few weeks ago. Nonetheless, I wanted to share it, since I feel it’s incredibly valuable but is not included in the Laravel documentation.

Drake Knows

Factory::sequence()

When I need to create a series of Models of the same type, the tool to reach for is Laravel factories. Specifically, the sequence functionality.

For instance, say I have a Payment model. Here is an example of its factory:

/**
 * @extends Factory<Payment>
 */
class PaymentFactory extends Factory
{
    protected $model = Payment::class;

    public function definition(): array
    {
        return [
            'gateway' => $this->faker->randomElement(['stripe', 'ach']),
            'amount' => $this->faker->numberBetween(1_00, 25_000_00),
            'status' => $this->faker->randomElement(['paid', 'pending', 'refunded']),
            'user_id' => User::factory(),
            'created_at' => Carbon::now(),
            'updated_at' => Carbon::now(),
        ];
    }
}

Say I want to create 4 payments belonging to a user. I might reach for something like this in my test case.

$user = User::factory()->create([
    'id' => 102,
    'first_name' => 'Luke',
    'last_name' => 'Kuzmish'
]);

$payments = Payment::factory()
    ->for($user)
    ->sequence(
        ['id' => 5000, 'amount' => 100_00, 'status' => 'paid'],
        ['id' => 5002, 'amount' => 200_00, 'status' => 'refunded'],
        ['id' => 7000, 'amount' => 1_000_00, 'status' => 'pending'],
        ['id' => 7002, 'amount' => 1_050_00, 'status' => 'pending'],
    )
    ->create(['gateway' => 'stripe']);

What is the value of $payments? You may be frustrated to learn that it’s actually just a single Payment model, not a Collection of four Payments.

So what did I forget? I forgot to specify the count of models to create.

$payments = Payment::factory()
    ->for($user)
    ->sequence(
        ['id' => 5000, 'amount' => 100_00, 'status' => 'paid'],
        ['id' => 5002, 'amount' => 200_00, 'status' => 'refunded'],
        ['id' => 7000, 'amount' => 1_000_00, 'status' => 'pending'],
        ['id' => 7002, 'amount' => 1_050_00, 'status' => 'pending'],
    )
++  ->count(4)
    ->create(['gateway' => 'stripe']);

I’ve personally made this mistake countless times. Worse still, I may remember to chain count() but later modify the test to add a new sequence entry without bumping the count.

$payments = Payment::factory()
    ->for($user)
    ->sequence(
        ['id' => 5000, 'amount' => 100_00, 'status' => 'paid'],
        ['id' => 5002, 'amount' => 200_00, 'status' => 'refunded'],
        ['id' => 7000, 'amount' => 1_000_00, 'status' => 'pending'],
        ['id' => 7002, 'amount' => 1_050_00, 'status' => 'pending'],
++      ['id' => 9999, 'amount' => 2_999_99, 'status' => 'refunded'],
    )
    ->count(4)
    ->create(['gateway' => 'stripe']);

Here I end up with only four Payment models again; the fifth sequence entry is never applied.

A Better Solution

Enter forEachSequence(). With this method, the factory creates one Payment model for each sequence entry, no count() call required.

$payments = Payment::factory()
    ->for($user)
    ->forEachSequence(
        ['id' => 5000, 'amount' => 100_00, 'status' => 'paid'],
        ['id' => 5002, 'amount' => 200_00, 'status' => 'refunded'],
        ['id' => 7000, 'amount' => 1_000_00, 'status' => 'pending'],
        ['id' => 7002, 'amount' => 1_050_00, 'status' => 'pending'],
        ['id' => 9999, 'amount' => 2_999_99, 'status' => 'refunded'],
    )
    ->create(['gateway' => 'stripe']);

Now we have our five Payment models and never need to worry about keeping the count in sync with the number of sequence entries.

]]>
Luke Kuzmish