Our query builder takes care of all of that. It reads query parameters from the URL, translates them into the right Eloquent queries, and makes sure only the things you've explicitly allowed can be queried.
// GET /users?filter[name]=John&include=posts&sort=-created_at

$users = QueryBuilder::for(User::class)
    ->allowedFilters('name')
    ->allowedIncludes('posts')
    ->allowedSorts('created_at')
    ->get();

// select * from users where name = 'John' order by created_at desc
This major version requires PHP 8.3+ and Laravel 12 or higher, and brings a cleaner API along with some features we've been wanting to add for a while.
Let me walk you through how the package works and what's new.
The idea is simple: your API consumers pass query parameters in the URL, and the package translates those into the right Eloquent query. You just define what's allowed.
Say you have a User model and you want to let API consumers filter by name. Here's all you need:
use Spatie\QueryBuilder\QueryBuilder;

$users = QueryBuilder::for(User::class)
    ->allowedFilters('name')
    ->get();
Now when someone requests /users?filter[name]=John, the package adds the appropriate WHERE clause to the query:
select * from users where name = 'John'
Only the filters you've explicitly allowed will work. If someone tries /users?filter[secret_column]=something, the package throws an InvalidFilterQuery exception. Your database schema stays hidden from API consumers.
You can allow multiple filters at once and combine them with sorting:
$users = QueryBuilder::for(User::class)
    ->allowedFilters('name', 'email')
    ->allowedSorts('name', 'created_at')
    ->get();
A request to /users?filter[name]=John&sort=-created_at now filters by name and sorts by created_at descending (the - prefix means descending).
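Conceptually, parsing that sort parameter is just string handling. Here's a sketch in plain PHP of how a `-` prefix can be split off; this is illustrative only, not the package's actual internals:

```php
<?php

// Illustrative sketch only; the package's real sort parsing differs.
// Splits a JSON:API style sort string into column/direction pairs.
function parseSorts(string $sortParam): array
{
    return array_map(function (string $sort): array {
        $descending = str_starts_with($sort, '-');

        return [
            'column' => $descending ? substr($sort, 1) : $sort,
            'direction' => $descending ? 'desc' : 'asc',
        ];
    }, explode(',', $sortParam));
}

// '-created_at,name' => created_at descending, then name ascending
print_r(parseSorts('-created_at,name'));
```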
Including relationships works the same way. If you want consumers to be able to eager-load a user's posts:
$users = QueryBuilder::for(User::class)
    ->allowedFilters('name', 'email')
    ->allowedIncludes('posts', 'permissions')
    ->allowedSorts('name', 'created_at')
    ->get();
A request to /users?include=posts&filter[name]=John&sort=-created_at now returns users named John, sorted by creation date, with their posts eager-loaded.
You can also select specific fields to keep your responses lean:
$users = QueryBuilder::for(User::class)
    ->allowedFields('id', 'name', 'email')
    ->allowedIncludes('posts')
    ->get();
With /users?fields=id,email&include=posts, only the id and email columns are selected.
The QueryBuilder extends Laravel's default Eloquent builder, so all your favorite methods still work. You can combine it with existing queries:
$query = User::where('active', true);

$users = QueryBuilder::for($query)
    ->withTrashed()
    ->allowedFilters('name')
    ->allowedIncludes('posts', 'permissions')
    ->where('score', '>', 42)
    ->get();
The query parameter names follow the JSON API specification as closely as possible. This means you get a consistent, well-documented API surface without having to think about naming conventions.
All the allowed* methods now accept variadic arguments instead of arrays.
// Before (v6)
QueryBuilder::for(User::class)
    ->allowedFilters(['name', 'email'])
    ->allowedSorts(['name'])
    ->allowedIncludes(['posts']);

// After (v7)
QueryBuilder::for(User::class)
    ->allowedFilters('name', 'email')
    ->allowedSorts('name')
    ->allowedIncludes('posts');
If you have a dynamic list, use the spread operator:
$filters = ['name', 'email'];

QueryBuilder::for(User::class)->allowedFilters(...$filters);
This is the biggest new feature. You can now include aggregate values for related models using AllowedInclude::min(), AllowedInclude::max(), AllowedInclude::sum(), and AllowedInclude::avg(). Under the hood, these map to Laravel's withMin(), withMax(), withSum() and withAvg() methods.
use Spatie\QueryBuilder\AllowedInclude;

$users = QueryBuilder::for(User::class)
    ->allowedIncludes(
        'posts',
        AllowedInclude::count('postsCount'),
        AllowedInclude::sum('postsViewsSum', 'posts', 'views'),
        AllowedInclude::avg('postsViewsAvg', 'posts', 'views'),
    )
    ->get();
A request to /users?include=posts,postsCount,postsViewsSum now returns users with their posts, the post count, and the total views across all posts.
You can constrain these aggregates too. For example, to only count published posts:
use Spatie\QueryBuilder\AllowedInclude;
use Illuminate\Database\Eloquent\Builder;

$users = QueryBuilder::for(User::class)
    ->allowedIncludes(
        AllowedInclude::count(
            'publishedPostsCount',
            'posts',
            fn (Builder $query) => $query->where('published', true)
        ),
        AllowedInclude::sum(
            'publishedPostsViewsSum',
            'posts',
            'views',
            constraint: fn (Builder $query) => $query->where('published', true)
        ),
    )
    ->get();
All four aggregate types support these constraint closures, making it possible to build endpoints that return computed data alongside your models without writing custom query logic.
Laravel 13 is adding built-in support for JSON:API resources. These new JsonApiResource classes handle the serialization side: they produce responses compliant with the JSON:API specification.
You create one by adding the --json-api flag:
php artisan make:resource PostResource --json-api
This generates a resource class where you define attributes and relationships:
use Illuminate\Http\Resources\JsonApi\JsonApiResource;

class PostResource extends JsonApiResource
{
    public $attributes = [
        'title',
        'body',
        'created_at',
    ];

    public $relationships = [
        'author',
        'comments',
    ];
}
Return it from your controller, and Laravel produces a fully compliant JSON:API response:
{
    "data": {
        "id": "1",
        "type": "posts",
        "attributes": {
            "title": "Hello World",
            "body": "This is my first post."
        },
        "relationships": {
            "author": {
                "data": { "id": "1", "type": "users" }
            }
        }
    },
    "included": [
        {
            "id": "1",
            "type": "users",
            "attributes": {
                "name": "Taylor Otwell"
            }
        }
    ]
}
Clients can request specific fields and includes via query parameters like /api/posts?fields[posts]=title&include=author. Laravel's JSON:API resources handle all of that on the response side.
The Laravel docs explicitly mention our package as a companion:
Laravel's JSON:API resources handle the serialization of your responses. If you also need to parse incoming JSON:API query parameters such as filters and sorts, Spatie's Laravel Query Builder is a great companion package.
So while Laravel's new JSON:API resources take care of the output format, our query builder handles the input side: parsing filter, sort, include and fields parameters from the request and translating them into the right Eloquent queries. Together they give you a full JSON:API implementation with very little boilerplate.
To upgrade from v6, check the upgrade guide for the full list of changes. They are mostly mechanical.
You can find the full source code and documentation on GitHub. We also have extensive documentation on the Spatie website.
This is one of the many packages we've created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
Previous versions required Meilisearch as the search engine. That works well, but it means running a separate service.
With v3, your application's own database is all you need. It supports SQLite, MySQL, PostgreSQL, and MariaDB out of the box, and it's the new default.
Let me walk you through it.
First, install the package via composer:
composer require spatie/laravel-site-search
Next, you should publish and run the migrations:
php artisan vendor:publish --tag="site-search-migrations"
php artisan migrate
Then, create a search profile. The package provides an artisan command for this:
php artisan site-search:create-profile
This will ask you for the name and URL of the site you want to index. Once configured, crawl the site:
php artisan site-search:crawl
That's it. No Meilisearch to install, no external service to configure. The package uses your existing database. Under the hood, the crawling is powered by our own spatie/crawler package, which recently got a major update as well.
The Search class provides a fluent API for querying your index:
use Spatie\SiteSearch\Search;

$searchResults = Search::onIndex('my-site')
    ->query('laravel middleware')
    ->get();
You can limit the number of results:
$searchResults = Search::onIndex('my-site')
    ->query('laravel middleware')
    ->limit(10)
    ->get();
Or paginate them, which integrates with Laravel's built-in pagination:
$searchResults = Search::onIndex('my-site')
    ->query('laravel middleware')
    ->paginate(20);
Each result is a Hit object with properties like url, pageTitle, entry, and description. You can also get a highlighted snippet where matching terms are wrapped in <em> tags:
foreach ($searchResults->hits as $hit) {
    echo $hit->url;
    echo $hit->title();
    echo $hit->highlightedSnippet();
}
The biggest addition in v3 is the DatabaseDriver. Instead of relying on Meilisearch, it stores all indexed documents in a site_search_documents table in your own database and uses each database engine's native full-text search capabilities.
For SQLite, it creates FTS5 virtual tables with porter stemming and unicode support. BM25 is used for relevance ranking, and highlighted snippets are generated natively by FTS5.
For MySQL and MariaDB, it sets up FULLTEXT indexes and uses boolean mode search. Highlighting is handled in PHP after the query.
For PostgreSQL, the driver creates tsvector columns with GIN indexes. It uses ts_rank() for relevance scoring and ts_headline() for generating highlighted snippets. Content is weighted so that matches in headings rank higher than matches in the body text.
All supported databases provide stemming (so "running" matches "run"), prefix matching ("auth" matches "authentication"), relevance ranking, and highlighted search results with matching terms wrapped in <em> tags.
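If you want to see those native capabilities in action outside the package, here's a small standalone FTS5 experiment. It assumes the pdo_sqlite extension with FTS5 compiled in, and the table layout is illustrative, not the package's actual schema:

```php
<?php

// Standalone FTS5 demo; not the package's own schema.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// FTS5 virtual table with porter stemming and unicode support
$db->exec("CREATE VIRTUAL TABLE docs USING fts5(title, body, tokenize = 'porter unicode61')");
$db->exec("INSERT INTO docs (title, body) VALUES ('Middleware', 'Running middleware in Laravel')");

// Stemming makes a search for "run" match "Running"; bm25() ranks by
// relevance and highlight() wraps matched terms in <em> tags
$stmt = $db->query(
    "SELECT highlight(docs, 1, '<em>', '</em>') AS snippet
     FROM docs WHERE docs MATCH 'run' ORDER BY bm25(docs)"
);

// prints the body snippet with the matched word wrapped in <em> tags
echo $stmt->fetchColumn();
```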
Since the database driver is the new default, getting started requires no configuration beyond what you already have. Install the package, run the migration, set up a search profile, and you're ready to crawl.
Meilisearch is still fully supported. If you're already using it or need its advanced features like synonyms and custom ranking rules, nothing changes for you. You can switch between drivers by setting the default_driver in the config, or per search profile via the driver_class attribute.
// config/site-search.php
'default_driver' => \Spatie\SiteSearch\Drivers\MeiliSearchDriver::class,
We're using laravel-site-search in production on several of our own sites. The search on freek.dev, the documentation search on Oh Dear, and the documentation search on Mailcoach are all powered by this package.

The source code of freek.dev is open source, so you can see exactly how the package is integrated. The Livewire search component handles querying and displaying results. The search profile configures the crawler and determines which pages should be indexed. A custom indexer cleans up page titles before storing them. And the config file ties it all together, specifying which CSS selectors and URLs to ignore during indexing.
With v3, laravel-site-search no longer requires any external dependencies. Install it, point it at your site, and search. If you're using MySQL, PostgreSQL, SQLite, or MariaDB, you already have everything you need.
You can find the full documentation on the documentation site. The source code is available on GitHub.
This is one of the many packages we've created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
Our spatie/invade package provides a tiny invade function that lets you read, write, and call private members on any object.
You probably shouldn't reach for this package often. It's most useful in tests or when you're building a package that needs to integrate deeply with objects you don't control.
Let me walk you through how it works.
Imagine you have a class with private members:
class MyClass
{
    private string $privateProperty = 'private value';

    private function privateMethod(): string
    {
        return 'private return value';
    }
}
If you try to access that private property from outside the class, PHP will stop you:
$myClass = new MyClass();

$myClass->privateProperty;
// Error: Cannot access private property MyClass::$privateProperty
With invade, you can get around that. Install the package via composer:
composer require spatie/invade
Now you can read that private property:
// returns 'private value'
invade($myClass)->privateProperty;
You can set it too:
invade($myClass)->privateProperty = 'changed value';

// returns 'changed value'
invade($myClass)->privateProperty;
And you can call private methods:
// returns 'private return value'
invade($myClass)->privateMethod();
The API is clean and reads well. But the interesting part is what happens under the hood. Before we look at the package code, there's a PHP rule you need to know about first.
Let me walk you through how the package works internally. We'll first look at the old approach using reflection, and then the current solution that uses closure binding.
In v1 of the package, we used PHP's Reflection API to access private members. Here's what the Invader class looked like:
class Invader
{
    public object $obj;

    public ReflectionClass $reflected;

    public function __construct(object $obj)
    {
        $this->obj = $obj;
        $this->reflected = new ReflectionClass($obj);
    }

    public function __get(string $name): mixed
    {
        $property = $this->reflected->getProperty($name);

        $property->setAccessible(true);

        return $property->getValue($this->obj);
    }
}
When you create an Invader, it wraps your object and creates a ReflectionClass for it. When you try to access a property like invade($myClass)->privateProperty, PHP triggers the __get magic method. It uses the reflection instance to find the property by name, calls setAccessible(true) on it, and then reads the value from the original object. The setAccessible(true) call tells PHP to skip the visibility check for that reflected property. Without it, trying to read a private property through reflection would throw an error, just like accessing it directly.
This worked fine, but it required creating a ReflectionClass instance and calling setAccessible(true) on every property or method you wanted to access. In v2, we replaced all of this with a much simpler approach using closures. To understand how, we first need to look at a lesser-known PHP visibility rule.
In PHP, private visibility is scoped to the class, not to a specific object instance. Any code running inside a class can access the private properties and methods of any instance of that class.
Here's a concrete example:
class Wallet
{
    public function __construct(
        private int $balance
    ) {
    }

    public function hasMoreThan(Wallet $other): bool
    {
        // This works: we can read $other's private $balance
        // because we're inside the Wallet class scope
        return $this->balance > $other->balance;
    }
}

$mine = new Wallet(100);
$yours = new Wallet(50);

// returns true
$mine->hasMoreThan($yours);
Notice how hasMoreThan reads $other->balance directly, even though $balance is private. This compiles and runs without errors because the code is running inside the Wallet class. PHP doesn't care which instance the property belongs to. As long as you're in the right class scope, all private members of all instances of that class are accessible.
This is the foundation that makes v2 of the invade package work. If you can get your code to run inside the scope of the target class, you get access to its private members. PHP closures give us a way to do exactly that.
PHP closures carry the scope of the class they were defined in. But the Closure::call() method lets you change that. It temporarily rebinds $this inside the closure to a different object, and it also changes the scope to the class of that object.
$readBalance = fn () => $this->balance;

$wallet = new Wallet(100);

// returns 100
$readBalance->call($wallet);
Even though $balance is private, this works. The ->call($wallet) method binds the closure to the $wallet object and puts it in the Wallet class scope. When PHP evaluates $this->balance, it sees that the code is running in the scope of Wallet, so it allows the access.
This is the entire trick that invade v2 is built on. Now let's look at the actual code.
When you call invade($object), it returns an Invader instance that wraps your object. The current version of the Invader class is surprisingly small:
class Invader
{
    public function __construct(
        public object $obj
    ) {
    }

    public function __get(string $name): mixed
    {
        return (fn () => $this->{$name})->call($this->obj);
    }

    public function __set(string $name, mixed $value): void
    {
        (fn () => $this->{$name} = $value)->call($this->obj);
    }

    public function __call(string $name, array $params = []): mixed
    {
        return (fn () => $this->{$name}(...$params))->call($this->obj);
    }
}
That's the entire class. No reflection, no complex tricks. Just PHP magic methods and closures.
When you write invade($myClass)->privateProperty, the invade function creates a new Invader instance. PHP can't find privateProperty on the Invader class, so it triggers __get('privateProperty'). The __get method creates a short closure fn () => $this->{$name} and calls it with ->call($this->obj). As we just learned, this binds $this inside the closure to your original object and puts the closure in that object's class scope. PHP then evaluates $this->privateProperty inside the scope of MyClass, and the private access is allowed.
The __set method uses the same pattern, but assigns a value instead of reading one:
(fn () => $this->{$name} = $value)->call($this->obj);
The $value variable is captured from the enclosing scope of the __set method, so it's available inside the closure.
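That capture works like any other PHP arrow-function capture; a quick standalone illustration:

```php
<?php

$value = 'changed value';

// Arrow functions capture variables from the enclosing scope by value,
// which is why $value is available inside the closure used by __set()
$setter = fn (object $target) => $target->name = $value;

$obj = new stdClass();
$setter($obj);

echo $obj->name; // changed value
```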
For calling private methods, __call follows the same approach:
return (fn () => $this->{$name}(...$params))->call($this->obj);
The closure calls the method by name, spreading the parameters. Since ->call() binds the closure to the target object, PHP sees this as a call from within the class itself, and the private method becomes accessible.
The invade package is a fun example of how PHP closures and scope binding can bypass visibility restrictions in a clean way. It's a small trick, but understanding why it works teaches you something interesting about how PHP handles class scope and closure binding.
The original idea for the invade function came from Caleb Porzio, who first introduced it as a helper in Livewire to replace a more verbose ObjectPrybar class. We liked the concept so much that we turned it into its own package.
Just remember: use it sparingly. It works great in tests or when you're building a package that needs deep integration with objects you don't control. In your regular project code, you probably don't need invade.
You can find the package on GitHub. This is one of the many packages we've created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
We originally created this package after DigitalOcean lost one of our servers. That experience taught us the hard way that you should never rely solely on your hosting provider for backups. The package has been actively maintained ever since.
With the package installed, taking a backup is as simple as running:
php artisan backup:run
This creates a zip of your configured files and databases and stores it on your configured disks. You can also back up just the database or just the files:
php artisan backup:run --only-db
php artisan backup:run --only-files
Or target a specific disk:
php artisan backup:run --only-to-disk=s3
In most setups you'll want to schedule this. In your routes/console.php:
use Illuminate\Support\Facades\Schedule;

Schedule::command('backup:run')->daily()->at('01:00');
To see an overview of all your backups, run:
php artisan backup:list
This shows a table with the backup name, disk, date, and size for each backup.
The package also ships with a monitor that checks whether your backups are healthy. A backup is considered unhealthy when it's too old or when the total backup size exceeds a configured threshold.
php artisan backup:monitor
You'll typically schedule the monitor to run daily:
Schedule::command('backup:monitor')->daily()->at('03:00');
When the monitor detects a problem, it fires an event that triggers notifications. Out of the box the package supports mail, Slack, Discord, and (new in v10) a generic webhook channel.
Over time backups pile up. The package includes a cleanup command that removes old backups based on a configurable retention strategy:
php artisan backup:clean
The default strategy keeps all backups for a certain number of days, then keeps one daily backup, then one weekly backup, and so on. It will never delete the most recent backup. You'll want to schedule this alongside your backup command:
Schedule::command('backup:clean')->daily()->at('02:00');
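To make the retention idea concrete, here's a simplified sketch of such a strategy in plain PHP. The thresholds and logic are illustrative; the package's actual default strategy is configurable and more nuanced:

```php
<?php

// Simplified sketch of a grandfather-father-son retention strategy.
// Thresholds here are illustrative, not the package's defaults.
// $dates is expected newest-first.
function backupsToKeep(array $dates, DateTimeImmutable $now): array
{
    $keep = [];
    $seenDays = [];
    $seenWeeks = [];

    foreach ($dates as $date) {
        $ageInDays = $now->diff($date)->days;

        if ($ageInDays <= 7) {
            $keep[] = $date; // keep everything for a week
        } elseif ($ageInDays <= 30 && ! isset($seenDays[$date->format('Y-m-d')])) {
            $seenDays[$date->format('Y-m-d')] = true;
            $keep[] = $date; // then one backup per day for a month
        } elseif ($ageInDays <= 90 && ! isset($seenWeeks[$date->format('o-W')])) {
            $seenWeeks[$date->format('o-W')] = true;
            $keep[] = $date; // then one backup per week for three months
        }
    }

    return $keep;
}
```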
v10 is mostly about addressing long-standing community requests and cleaning up internals.
The biggest change is that all events now carry primitive data (string $diskName, string $backupName) instead of BackupDestination objects. This means events can now be used with queued listeners, which was previously impossible because those objects weren't serializable. If you have existing listeners, you'll need to update them to use $event->diskName instead of $event->backupDestination->diskName().
Events and notifications are now decoupled. Events always fire, even when --disable-notifications is used. This fixes an issue where BackupWasSuccessful never fired when notifications were disabled, which also broke encryption since it depends on the BackupZipWasCreated event.
There's a new continue_on_failure config option for multi-destination backups. When enabled, a failure on one destination won't abort the entire backup. It fires a failure event for that destination and continues with the rest.
Other additions include a verify_backup config option that validates the zip archive after creation, a generic webhook notification channel for Mattermost/Teams/custom integrations, new command options (--filename-suffix, --exclude, --destination-path), and improved health checks that now report all failures instead of stopping at the first one.
On the internals side, the ConsoleOutput singleton has been replaced by a backupLogger() helper, encryption config now uses a proper enum, and storage/framework is excluded from backups by default.
The full list of breaking changes and migration instructions can be found in the upgrade guide.
You can find the complete documentation at spatie.be/docs/laravel-backup and the source code on GitHub.
This is one of the many packages we've created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
Let me walk you through what the package can do and what's new in v8.
The simplest way to use the package is to point it at your site and let it crawl every page.
use Spatie\Sitemap\SitemapGenerator;

SitemapGenerator::create('https://example.com')->writeToFile($path);
That's it. The generator will follow all internal links and produce a complete sitemap.xml. You can filter which URLs end up in the sitemap using the shouldCrawl callback.
SitemapGenerator::create('https://example.com')
    ->shouldCrawl(function (string $url) {
        return ! str_contains(parse_url($url, PHP_URL_PATH) ?? '', '/admin');
    })
    ->writeToFile($path);
If you'd rather have full control, you can build the sitemap yourself.
use Carbon\Carbon;
use Spatie\Sitemap\Sitemap;
use Spatie\Sitemap\Tags\Url;

Sitemap::create()
    ->add(Url::create('/home')
        ->setLastModificationDate(Carbon::yesterday())
        ->setChangeFrequency(Url::CHANGE_FREQUENCY_YEARLY)
        ->setPriority(0.1))
    ->add(Url::create('/contact'))
    ->writeToFile($path);
You can also combine both approaches: let the crawler do the heavy lifting, then add extra URLs on top.
SitemapGenerator::create('https://example.com')
    ->getSitemap()
    ->add(Url::create('/extra-page'))
    ->writeToFile($path);
If your models implement the Sitemapable interface, you can add them to the sitemap directly.
use Spatie\Sitemap\Contracts\Sitemapable;
use Spatie\Sitemap\Tags\Url;

class Post extends Model implements Sitemapable
{
    public function toSitemapTag(): Url | string | array
    {
        return route('blog.post.show', $this);
    }
}
Now you can pass a single model or an entire collection.
Sitemap::create()
    ->add($post)
    ->add(Post::all())
    ->writeToFile($path);
Large sites can easily exceed the 50,000 URL limit that the sitemap protocol allows per file. New in v8, you can call maxTagsPerSitemap() on your sitemap, and the package will automatically split it into multiple files with a sitemap index.
Sitemap::create()
    ->maxTagsPerSitemap(10000)
    ->add($allUrls)
    ->writeToFile(public_path('sitemap.xml'));
If your sitemap contains more than 10,000 URLs, this will write sitemap_1.xml, sitemap_2.xml, etc., and a sitemap.xml index file that references them all. If your sitemap stays under the limit, it just writes a single file as usual.
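The splitting itself boils down to chunking the URL set. A rough sketch of the idea in plain PHP; the package handles this for you internally:

```php
<?php

// Illustrative only: how a URL set could be split into sitemap chunks.
$urls = array_map(fn (int $i) => "https://example.com/page-{$i}", range(1, 25000));

$chunks = array_chunk($urls, 10000);

foreach ($chunks as $index => $chunk) {
    $filename = sprintf('sitemap_%d.xml', $index + 1);

    echo $filename . ': ' . count($chunk) . " urls\n";
}
// sitemap_1.xml: 10000 urls
// sitemap_2.xml: 10000 urls
// sitemap_3.xml: 5000 urls
```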
Sitemaps are XML files, and they look pretty rough when opened in a browser. Also new in v8, you can attach an XSL stylesheet to make them human-readable.
Sitemap::create()
    ->setStylesheet('/sitemap.xsl')
    ->add(Post::all())
    ->writeToFile(public_path('sitemap.xml'));
This works on both Sitemap and SitemapIndex. When combined with maxTagsPerSitemap(), the stylesheet is automatically applied to all split files and the index.
Under the hood, we've upgraded the package to use spatie/crawler v9.
You'll find the complete documentation on our docs site. The package is available on GitHub.
This is one of the many packages we've created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
Throughout the years, the API had accumulated some rough edges. With v9, we cleaned all of that up and added a bunch of features we've wanted for a long time.
Let me walk you through all of it!
The simplest way to crawl a site is to pass a URL to Crawler::create() and attach a callback via onCrawled():
use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlResponse;

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        echo "{$url}: {$response->status()}\n";
    })
    ->start();
The callback gets a CrawlResponse object, which exposes these methods:
$response->status(); // int
$response->body(); // string
$response->header('some-header'); // ?string
$response->dom(); // Symfony DomCrawler instance
$response->isSuccessful(); // bool
$response->isRedirect(); // bool
$response->foundOnUrl(); // ?string
$response->linkText(); // ?string
$response->depth(); // int
The body is cached, so calling body() multiple times won't re-read the stream. And if you still need the raw PSR-7 response for some reason, toPsrResponse() has you covered.
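The caching behind body() is simple memoization. Here's a minimal sketch of the pattern; the real CrawlResponse wraps a PSR-7 response rather than a callable:

```php
<?php

// Sketch of stream-body memoization; simplified from whatever the
// package actually does internally.
class CachedBody
{
    private ?string $body = null;

    public function __construct(
        private \Closure $readStream,
    ) {
    }

    public function body(): string
    {
        // The stream is only read once; later calls reuse the cached string
        return $this->body ??= ($this->readStream)();
    }
}

$reads = 0;

$response = new CachedBody(function () use (&$reads): string {
    $reads++;

    return '<html>...</html>';
});

$response->body();
$response->body();

echo $reads; // 1
```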
You can control how many URLs are fetched at the same time with concurrency(), and set a hard cap with limit():
Crawler::create('https://example.com')
    ->concurrency(5)
    ->limit(200) // will stop after crawling this amount of pages
    ->onCrawled(function (string $url, CrawlResponse $response) {
        // ...
    })
    ->start();
There are a couple of other on* callbacks you can use:
Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response, CrawlProgress $progress) {
        echo "[{$progress->urlsProcessed}/{$progress->urlsFound}] {$url}\n";
    })
    ->onFailed(function (string $url, RequestException $e, CrawlProgress $progress) {
        echo "Failed: {$url}\n";
    })
    ->onFinished(function (FinishReason $reason, CrawlProgress $progress) {
        echo "Done: {$reason->name}\n";
    })
    ->start();
Every on* callback now receives a CrawlProgress object that tells you exactly where you are in the crawl:
$progress->urlsProcessed; // how many URLs have been crawled
$progress->urlsFailed; // how many failed
$progress->urlsFound; // total discovered so far
$progress->urlsPending; // still in the queue
The start() method now returns a FinishReason enum, so you know exactly why the crawler stopped:
$reason = Crawler::create('https://example.com')
    ->limit(100)
    ->start();

// $reason is one of: Completed, CrawlLimitReached, TimeLimitReached, Interrupted
Each CrawlResponse also carries a TransferStatistics object with detailed timing data for the request:
Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        $stats = $response->transferStats();

        echo "{$url}\n";
        echo "  Transfer time: {$stats->transferTimeInMs()}ms\n";
        echo "  DNS lookup: {$stats->dnsLookupTimeInMs()}ms\n";
        echo "  TLS handshake: {$stats->tlsHandshakeTimeInMs()}ms\n";
        echo "  Time to first byte: {$stats->timeToFirstByteInMs()}ms\n";
        echo "  Download speed: {$stats->downloadSpeedInBytesPerSecond()} B/s\n";
    })
    ->start();
All timing methods return values in milliseconds. They return null when the stat is unavailable, for example tlsHandshakeTimeInMs() will be null for plain HTTP requests.
I wanted the crawler to be a well-behaved piece of software. Running it at full speed with high concurrency could overload some servers. That's why throttling is a polished feature of the package.
We ship two throttling strategies. The first one is FixedDelayThrottle that can give a fixed delay between all requests.
// 200ms between requests
$crawler->throttle(new FixedDelayThrottle(200));
AdaptiveThrottle adjusts the delay based on how fast the server responds: when the server responds quickly, the delay stays close to the minimum; when responses slow down, crawling automatically slows down too.
$crawler->throttle(new AdaptiveThrottle(
    minDelayMs: 50,
    maxDelayMs: 5000,
));
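To illustrate the adaptive idea, here's a hypothetical sketch of how a delay could be derived from response times and clamped to configured bounds. AdaptiveThrottle's actual algorithm may differ:

```php
<?php

// Hypothetical sketch; not AdaptiveThrottle's real implementation.
function nextDelayMs(float $avgResponseTimeMs, int $minDelayMs, int $maxDelayMs): int
{
    // Wait roughly twice as long as the server takes to respond,
    // clamped to the configured bounds
    $delay = (int) round($avgResponseTimeMs * 2);

    return max($minDelayMs, min($maxDelayMs, $delay));
}

echo nextDelayMs(10.0, 50, 5000) . "\n";   // fast server: clamped up to 50
echo nextDelayMs(400.0, 50, 5000) . "\n";  // slower server: 800
echo nextDelayMs(4000.0, 50, 5000) . "\n"; // very slow: clamped down to 5000
```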
Like Laravel's HTTP client, the crawler now has a fake that lets you define which response should be returned for a request, without making the actual request.
Crawler::create('https://example.com')
    ->fake([
        'https://example.com' => '<html><a href="/about">About</a></html>',
        'https://example.com/about' => '<html>About page</html>',
    ])
    ->onCrawled(function (string $url, CrawlResponse $response) {
        // your assertions here
    })
    ->start();
Faking responses like this keeps your tests fast.
Like in our Laravel PDF, Laravel Screenshot, and Laravel OG Image packages, Browsershot is no longer a hard dependency. JavaScript rendering is now driver-based, so you can use Browsershot, a new Cloudflare renderer, or write your own:
$crawler->executeJavaScript(new CloudflareRenderer($endpoint));
I'm usually very humble, but I think in this case I can say that our crawler package is the best crawler available in the entire PHP ecosystem.
You can find the package on GitHub. The full documentation is available on our documentation site.
This is one of the many packages we've created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
I keep my Claude Code configuration in my dotfiles repo under config/claude/, so it's easy to sync across machines.
In this post I'll walk through my setup.
The first file worth mentioning is my CLAUDE.md. This contains global instructions that Claude Code reads at the start of every session, regardless of which project I'm working in. I've kept it fairly short. It tells Claude to be critical and not sycophantic, to follow our Spatie PHP guidelines, and to use gh for all GitHub operations.
That last one is more useful than you might think. Instead of Claude trying to hit the GitHub API directly with curl, it just uses the GitHub CLI, which is already authenticated and handles all the edge cases.
My settings.json gives Claude Code broad permissions to run commands and edit files. I know some people prefer to keep things locked down, but I find the constant approval prompts break my flow. I also have thinking mode set to always on, which I've found leads to noticeably better results on complex tasks.
The most fun part of my setup is a custom status line. Claude Code lets you configure a shell script that renders at the bottom of the terminal. Mine shows two things: the name of the repo I'm working in, and the current context window usage as a percentage.
The script reads JSON from stdin that contains workspace info and context window statistics. It extracts the repo basename and calculates how much of the context window has been consumed. Then it color-codes the percentage: green when it's below 40%, yellow between 40% and 59%, and red at 60% or above. This gives me a quick visual indicator of when I should consider starting a fresh conversation.
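The threshold logic is simple enough to sketch. Here's the same idea in plain PHP; the real statusline.sh is a bash script, so this is only an illustration of the color thresholds, not the actual script:

```php
<?php

// Illustration only: the real status line is a bash script (statusline.sh).
// This sketches the same color thresholds described above.
function contextColor(int $percent): string
{
    return match (true) {
        $percent < 40 => 'green',  // plenty of context left
        $percent < 60 => 'yellow', // getting close
        default => 'red',          // consider starting a fresh conversation
    };
}

echo contextColor(27); // green
```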
The output looks something like laravel-og-image | ctx: 27%. Here's a screenshot of it in action while I was working on one of our packages:

You can find the full script as statusline.sh in the dotfiles repo. It's a straightforward bash script, nothing fancy, but it's one of those small touches that makes the daily experience noticeably better.
Claude Code supports custom agents, which are essentially pre-configured personas with specific models and instructions. I have four of them.
The laravel-simplifier uses Opus and automatically refines code to be simpler and more readable. The laravel-debugger runs on Sonnet and is focused on tracking down bugs. The laravel-feature-builder uses Opus for building out new features. And the task-planner uses Opus to break down larger tasks into manageable steps.
Having these as separate agents means I can quickly switch context without re-explaining what I want Claude to focus on.
My config includes a laravel-php-guidelines.md file with our comprehensive Spatie coding standards. This ensures that any code Claude writes follows our conventions from the start. No more correcting formatting or naming conventions after the fact.
Beyond that, I have over 40 skills configured, covering everything from PHP guidelines to marketing and SEO. Skills in Claude Code are reference documents that Claude can pull in when relevant. They keep the context window clean by only loading when needed.
If you're using Claude Code, I'd encourage you to invest some time in your configuration. The defaults are fine for getting started, but a tailored setup makes a real difference in daily use. My entire configuration is public in my dotfiles repo under config/claude/, so feel free to take a look and borrow whatever is useful to you.
Let me walk you through what the package can do.
Install the package via Composer:
composer require spatie/laravel-og-image
The package uses spatie/laravel-screenshot under the hood, which requires Node.js and Chrome/Chromium on your server. If you prefer not to install those, you can use Cloudflare's Browser Rendering API instead (more on that later).
The package automatically registers middleware in the web group, so there's no manual configuration needed. Just drop the Blade component into your view:
<x-og-image>
    <div class="w-full h-full bg-blue-900 text-white flex items-center justify-center">
        <h1 class="text-6xl font-bold">{{ $post->title }}</h1>
    </div>
</x-og-image>
That's all you need. The component outputs a hidden <template> tag in the page body, and the middleware injects the og:image, twitter:image, and twitter:card meta tags into the <head>:
<head>
    <!-- your existing head content -->

    <meta property="og:image" content="https://yourapp.com/og-image/a1b2c3d4e5f6.jpeg">
    <meta name="twitter:image" content="https://yourapp.com/og-image/a1b2c3d4e5f6.jpeg">
    <meta name="twitter:card" content="summary_large_image">
</head>
The image URL contains a hash of the HTML content. When you change the template, the hash changes, so crawlers automatically pick up the new image.
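To illustrate the idea: any change to the template HTML produces a different hash, and therefore a different URL. The package's actual hashing implementation isn't shown here, so md5 and the helper name below are stand-ins for illustration.

```php
<?php

// Sketch of content-hash-based URLs. md5 and this helper name are stand-ins;
// the package's actual hashing implementation may differ.
function ogImageUrl(string $templateHtml): string
{
    return '/og-image/' . md5($templateHtml) . '.jpeg';
}

$before = ogImageUrl('<h1>Old title</h1>');
$after = ogImageUrl('<h1>New title</h1>');

// Different content, different URL, so crawlers see a fresh image.
var_dump($before === $after); // bool(false)
```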
The clever bit is that your OG image template lives on the actual page, so it inherits your page's existing CSS, fonts, and Vite assets. No separate stylesheet configuration needed.
Here's what happens when a crawler requests the image:
1. The crawler requests the image at /og-image/{hash}.jpeg
2. The package fetches the original page with ?ogimage appended
3. The middleware detects the ?ogimage parameter and replaces the response with a minimal HTML page: just the <head> (preserving all CSS and fonts) and the template content at 1200x630 pixels
4. A screenshot of that page is taken and stored on disk
5. The image is served with the appropriate Cache-Control headers

Subsequent requests serve the image directly from disk. The route runs without sessions, CSRF, or cookies, and the content-hashed URLs play nicely with CDNs like Cloudflare.
You can preview any OG image by appending ?ogimage to the page URL. This is really useful while designing your templates.
Instead of writing the HTML inline, you can reference a separate Blade view:
<x-og-image
    view="og-image.post"
    :data="['title' => $post->title, 'author' => $post->author->name]"
/>
The view receives the data array as variables:
{{-- resources/views/og-image/post.blade.php --}}
<div class="w-full h-full bg-blue-900 text-white flex items-center justify-center p-16">
    <div>
        <h1 class="text-6xl font-bold">{{ $title }}</h1>
        <p class="text-2xl mt-4">by {{ $author }}</p>
    </div>
</div>
This is handy when you reuse the same layout across multiple pages or when the template gets complex enough that you want it in its own file.
Pages that don't use the <x-og-image> component won't get any OG image meta tags by default. You can register a fallback in your AppServiceProvider:
use Illuminate\Http\Request;
use Spatie\OgImage\Facades\OgImage;

public function boot(): void
{
    OgImage::fallbackUsing(function (Request $request) {
        return view('og-image.fallback', [
            'title' => config('app.name'),
            'url' => $request->url(),
        ]);
    });
}
The closure receives the full Request object, so you can use route parameters and model bindings to customize the image. Return null to skip the fallback for specific requests. Pages that do have an explicit <x-og-image> component are never affected by the fallback.
You can configure the image size, format, quality, and storage disk via the OgImage facade in your AppServiceProvider:
use Spatie\OgImage\Facades\OgImage;

OgImage::format('webp')
    ->size(1200, 630)
    ->disk('s3', 'og-images');
By default, images are generated at 1200x630 with a device scale factor of 2, resulting in crisp 2400x1260 pixel images. You can also override the size per component:
<x-og-image :width="800" :height="400">
    <div>Custom size OG image</div>
</x-og-image>
If you don't want to install Node.js and Chrome on your server, you can use Cloudflare's Browser Rendering API instead:
OgImage::useCloudflare(
    apiToken: env('CLOUDFLARE_API_TOKEN'),
    accountId: env('CLOUDFLARE_ACCOUNT_ID'),
);
By default, images are generated lazily on the first crawler request. If you'd rather have them ready ahead of time, you can pre-generate them with an artisan command:
php artisan og-image:generate https://yourapp.com/page1 https://yourapp.com/page2
Or programmatically, which is useful for generating the image right after publishing content:
use Spatie\OgImage\Facades\OgImage;

class PublishPostAction
{
    public function execute(Post $post): void
    {
        // ... publish logic ...

        dispatch(function () use ($post) {
            OgImage::generateForUrl($post->url);
        });
    }
}
Our OG image package is already running on the blog you're reading right now. You can see the pull request that added it to freek.dev if you want a real-world example of how to integrate it. Try appending ?ogimage to the URL of any post on this blog to see which image would be generated for that post.
With this package, your OG images are just Blade views. You design them with the same Tailwind classes, fonts, and assets you already use in the rest of your app. No separate rendering setup, no external API, no manual meta tag management.
You can find the full documentation on our documentation site and the source code on GitHub.
The approach of using a <template> tag to define OG images inline with the page's own CSS is inspired by OGKit by Peter Suhm. If you'd rather not self-host the generation of OG images, definitely check out OGKit.
This is one of the many packages we have created at Spatie. If you want to support our open source work, consider picking up one of our paid products.
The CLI has dozens of commands and hundreds of options, yet we only wrote four commands by hand. Our laravel-openapi-cli package made this possible: point it at an OpenAPI spec, and it generates fully typed artisan commands for every endpoint automatically.
Here's how we put it all together.

The Flare CLI combines Laravel Zero for the application skeleton, our laravel-openapi-cli package for automatic command generation, and an agent skill to make everything accessible to AI. Let's look at each piece.
The Flare CLI is built with Laravel Zero, which lets you create standalone PHP CLI applications using the Laravel framework components you already know. Routes become commands, service providers wire everything together, and you get dependency injection, configuration, and caching out of the box.
But the really interesting part is what generates the commands.
The entire CLI is powered by our laravel-openapi-cli package. This package reads an OpenAPI spec and generates artisan commands automatically. Each API endpoint gets its own command with typed options for path parameters, query parameters, and request bodies.
The core of the Flare CLI is this single registration in the AppServiceProvider:
OpenApiCli::register(specPath: 'https://flareapp.io/downloads/flare-api.yaml')
    ->useOperationIds()
    ->cache(ttl: 60 * 60 * 24)
    ->auth(fn () => app(CredentialStore::class)->getToken())
    ->onError(function (Response $response, Command $command) {
        if ($response->status() === 401) {
            $command->error(
                'Your API token is invalid or expired. Run `flare login` to authenticate.',
            );

            return true;
        }

        return false;
    });
That's it. That one call registers the Flare OpenAPI spec and generates every single API command. The useOperationIds() method uses the operation IDs from the spec as command names, so listProjects becomes list-projects, resolveError becomes resolve-error, and so on. The spec is cached for 24 hours so the CLI doesn't need to fetch it on every invocation. Authentication is handled by pulling the token from the CredentialStore, and the onError callback provides a friendly message when the token is invalid.
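The camelCase-to-kebab-case mapping behaves like Laravel's Str::kebab(). Here's a plain-PHP sketch of that conversion; the helper name is mine for illustration, not the package's actual code:

```php
<?php

// Plain-PHP sketch of the operationId-to-command-name mapping (the package
// presumably uses something like Laravel's Str::kebab(); this helper name is
// illustrative, not the package's actual code).
function operationIdToCommandName(string $operationId): string
{
    // Put a hyphen before every uppercase letter (except a leading one),
    // then lowercase the whole string.
    return strtolower(preg_replace('/(?<!^)[A-Z]/', '-$0', $operationId));
}

echo operationIdToCommandName('listProjects') . PHP_EOL; // list-projects
echo operationIdToCommandName('resolveError') . PHP_EOL; // resolve-error
```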
If you browse the app/Commands directory, you'll find only four hand-written commands: LoginCommand, LogoutCommand, InstallSkillCommand, and ClearCacheCommand. Everything else, every single API command for errors, occurrences, projects, teams, and performance monitoring, is generated at runtime from the OpenAPI spec.
The CredentialStore is straightforward. It reads and writes a JSON file in the user's home directory:
class CredentialStore
{
    private string $configPath;

    public function __construct()
    {
        $home = $_SERVER['HOME'] ?? $_SERVER['USERPROFILE'] ?? '';

        $this->configPath = "{$home}/.flare/config.json";
    }

    public function getToken(): ?string
    {
        if (! file_exists($this->configPath)) {
            return null;
        }

        $data = json_decode(file_get_contents($this->configPath), true);

        return $data['token'] ?? null;
    }

    public function setToken(string $token): void
    {
        $this->ensureConfigDirectoryExists();

        $data = $this->readConfig();

        $data['token'] = $token;

        file_put_contents(
            $this->configPath,
            json_encode($data, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES),
        );
    }
}
No database, no keychain integration, just a plain JSON file at ~/.flare/config.json. Simple and portable.
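The round-trip is easy to demonstrate in isolation. This standalone sketch mimics the store's read/write logic against a temporary file instead of ~/.flare/config.json:

```php
<?php

// Standalone sketch of the CredentialStore's read/write round-trip, using a
// temp file instead of ~/.flare/config.json.
$configPath = tempnam(sys_get_temp_dir(), 'flare') . '.json';

// Write: merge the token into whatever the file already holds.
$data = file_exists($configPath)
    ? (json_decode(file_get_contents($configPath), true) ?? [])
    : [];
$data['token'] = 'demo-token';
file_put_contents(
    $configPath,
    json_encode($data, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES),
);

// Read: pull the token back out, defaulting to null if missing.
$stored = json_decode(file_get_contents($configPath), true);
echo $stored['token'] ?? '(none)'; // demo-token

unlink($configPath);
```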
A CLI with consistent, predictable commands is already a great interface for AI agents. But to make it even easier, the Flare CLI ships with an agent skill that teaches agents how to use it:
flare install-skill

The skill file gets added to your project directory and any compatible AI agent will automatically pick it up. It includes all available commands, their parameters, and step-by-step workflows for common tasks like error triage and performance investigation.

This is a pattern any API-driven service can follow: if you have an OpenAPI spec, you can use laravel-openapi-cli to generate a full CLI, add an agent skill file that describes how to use it, and your service instantly becomes accessible to both humans and AI agents.
The best part of this approach: when the Flare API evolves and new endpoints are added, the CLI picks them up automatically the next time it refreshes the spec. No code changes, no new releases needed for API additions.
We used the exact same technique to build the Oh Dear CLI. Oh Dear is our website monitoring service, and its CLI also uses laravel-openapi-cli to generate all commands from the Oh Dear OpenAPI spec. The result is a full-featured CLI for managing monitors, checking uptime, reviewing broken links, certificate health, and more.

If you have a service with an OpenAPI spec, this pattern works out of the box. Point laravel-openapi-cli at your spec and you get a complete CLI for free.
The combination of Laravel Zero for the application skeleton and laravel-openapi-cli for the command generation means the Flare CLI is mostly configuration and a handful of custom commands. If your service has an OpenAPI spec, you can build a similar CLI in an afternoon.
To see the CLI in action, check out the introduction to the Flare CLI for a full walkthrough of all available commands. We also wrote about letting your AI coding agent use the CLI to triage errors, investigate performance issues, and fix bugs for you.
The Flare CLI is currently in beta. My colleague Alex did an excellent job creating it. If you run into anything or have feedback, reach out to us at [email protected].
You can find the source code on GitHub and the full documentation on the Flare docs site. The laravel-openapi-cli package that powers the command generation has its own documentation as well.
Flare is one of our products at Spatie. We invest a lot of what we earn into creating open source packages. If you want to support that work, consider checking out our paid products.