Vercel News Blog 2026-03-17T03:49:39.459Z https://vercel.com/ https://vercel.com/changelog/litellm-server-now-supported-on-vercel LiteLLM server now supported on Vercel 2026-03-16T13:00:00.000Z

You can now deploy LiteLLM server on Vercel, giving developers an OpenAI-compatible gateway to any supported LLM provider, including Vercel AI Gateway.

To route a single model through Vercel AI Gateway, use the below configuration in litellm_config.yaml:
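A minimal sketch of such a configuration, assuming AI Gateway's OpenAI-compatible endpoint and an AI_GATEWAY_API_KEY environment variable (the model name is illustrative):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      # Route via AI Gateway's OpenAI-compatible endpoint
      model: openai/gpt-4o
      api_base: https://ai-gateway.vercel.sh/v1
      api_key: os.environ/AI_GATEWAY_API_KEY
```

LiteLLM's `os.environ/` syntax reads the key from the environment at runtime rather than hardcoding it in the config file.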

Deploy LiteLLM on Vercel or learn more in our documentation.


Elvis Pranskevichus Greg Schofield Ricardo Gonzalez Marcos Grappeggia Anthony Shew
https://vercel.com/changelog/next-forge-6 next-forge 6 is now available 2026-03-13T13:00:00.000Z

next-forge is a production-grade Turborepo template for Next.js apps, designed to be a comprehensive, opinionated starting point for new apps.

This major release comes with a number of DX improvements, an agent skill, and new guides for quickstart, Docker, and migration paths.

next-forge skill

You can now install a next-forge skill into your preferred agent, giving it structured knowledge of next-forge architecture, packages, and common tasks.

Bun by default

The default package manager is now Bun. The CLI init script detects your current package manager before prompting, and pnpm, npm, and yarn are still supported through the init flow.

Graceful degradation

Every optional integration now silently degrades when its environment variables are missing, rather than throwing an error. Stripe, PostHog, BaseHub, and feature flags all return safe defaults. The only required environment variable to boot the project is DATABASE_URL.

New guides

The quickstart guide gets you to a running dev server in a few minutes with just Clerk and a Postgres database.

There is also a new Docker deployment guide, and migration guides are available for Appwrite (auth, database, storage), Convex (database), and Novu (notifications).

Read the documentation to get started.


Hayden Bleasel Ben Sabic
https://vercel.com/blog/notion-workers-vercel-sandbox How Notion Workers run untrusted code at scale with Vercel Sandbox 2026-03-12T13:00:00.000Z

Notion Workers let you write and deploy code to give Custom Agents new powers: sync external data, trigger automations, call any API. With Workers, developers can build agents that sync CRM data on a schedule, open issues when error rates spike, and turn Slack threads into formatted content.

Under the hood, every Worker runs on Vercel Sandbox.

The problem: safely running code from any developer or agent

Notion wanted to let anyone extend their platform with custom code. That's a hard infrastructure problem, but an even bigger security problem. Every Notion Worker runs arbitrary code generated by a third-party developer or agent, on behalf of a Notion user, potentially inside an enterprise workspace.

Without proper isolation, a Worker would run in the same environment as the Custom Agent, with access to its secrets, permissions, and everything else in that execution context. A single prompt injection could exfiltrate credentials or access another user's data.

The requirements were clear:

  • Hard isolation: One Notion Worker can never access another's data or state

  • Credential security: Notion Workers need API keys to talk to external services, but those secrets can never be exposed to the code itself

  • Network controls: Enterprise customers need guarantees about the external services a Worker is allowed to reach

  • Scale: Workers need to support millions of users running concurrent executions without performance degradation

  • State preservation: Workers need fast cold starts, which require the ability to snapshot and restore filesystem state

  • Economics: A billing model that is built for agents with low CPU utilization rates

Why Vercel Sandbox

Vercel Sandbox runs each Notion Worker in an ephemeral Firecracker microVM. Every VM boots its own kernel, providing stronger isolation than containers. Each execution gets its own filesystem, its own network stack, and its own security boundary. When the Notion Worker finishes, the microVM is either destroyed or snapshotted for later retrieval.

To support workloads like Notion Workers at scale, Vercel Sandbox provides several critical capabilities:

Credential injection. Sandbox's firewall proxy can intercept and inject API keys into outbound requests at the network level, so credentials never enter the execution environment. For agent-driven workloads, this eliminates the most dangerous prompt injection vector: an agent being tricked into exfiltrating secrets. (We wrote about this architecture in depth in our post on security boundaries in agentic architectures.)

Network policies. Sandbox supports dynamic network policies that can be updated during runtime without restarting the process: start with internet access to install dependencies, then lock down egress before running untrusted code. Platform builders can pass these controls through to their own customers.

Snapshots. Install dependencies once, snapshot the filesystem state, and resume from that snapshot on subsequent invocations. Combined with active-CPU billing, where CPU costs only accrue when your code is actually executing, not waiting on I/O, this keeps costs predictable as usage scales.

The bigger picture: Notion as a developer platform

Notion Workers isn't a one-off feature. It's the beginning of Notion becoming a developer platform.

This shift requires infrastructure that Notion shouldn't have to build. Secure code execution, credential management, network isolation, filesystem-based snapshotting: these are hard problems that compound as the platform scales.

Vercel Sandbox handles the infrastructure complexity so Notion can focus on the developer experience.

What developers are building with Notion Workers

Notion Workers support three main patterns: third-party data syncing, custom automations, and AI agent tools.

Developers use them to sync external data, such as CRM records, analytics, and support tickets, into Notion on a schedule. A Worker can also be attached to a button, triggering arbitrary code with a single click. And when Notion's custom agents invoke Workers as tool calls, they become far more capable than agents limited to pre-built integrations.

Extend your platform with Vercel Sandbox

Notion Workers require the same capabilities as other agent platforms. Any platform that wants to let users or agents run custom code faces the same set of problems: isolation, credential security, network controls, and scale.

Vercel Sandbox provides these as capabilities out of the box. If you're building a platform that needs to run untrusted code, whether for AI agents, developer plugins, or workflow automation, then this is how you do it.


Karson Seeley Harpreet Arora
https://vercel.com/changelog/ai-elements-1-9 AI Elements 1.9 is now available 2026-03-12T13:00:00.000Z

AI Elements 1.9 adds new components, an agent skill, and a round of bug fixes across the library.

AI Elements skill

You can now install an AI Elements skill into your preferred agent, giving it a better understanding of how to build and use composable AI interfaces.

<JSXPreview />

The new <JSXPreview /> component renders JSX strings dynamically, supporting streaming scenarios where JSX may be incomplete. It automatically closes unclosed tags during streaming, making it a good fit for displaying AI-generated UI in real time.
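A minimal usage sketch, assuming AI Elements components are installed via the shadcn CLI and that the streamed JSX string is passed via a `jsx` prop (the import path and prop name are assumptions, not taken from the release notes):

```typescript
import { JSXPreview } from "@/components/ai-elements/jsx-preview";

// `partialJsx` may be mid-stream and contain unclosed tags;
// <JSXPreview /> closes them automatically so the UI renders incrementally.
export function StreamedUI({ partialJsx }: { partialJsx: string }) {
  return <JSXPreview jsx={partialJsx} />;
}
```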

<PromptInputActionAddScreenshot />

A new <PromptInput /> sub-component that captures a screenshot of the current page, useful for giving visual feedback to AI models.

Download conversations

The <Conversation /> component now includes an optional button that downloads the conversation as a markdown file.

Read the documentation to get started.


Hayden Bleasel Ben Sabic
https://vercel.com/changelog/deprecating-the-dhe-cipher-suite-for-tls-connections Deprecating the DHE cipher suite for TLS connections 2026-03-12T13:00:00.000Z

On June 30th, 2026, Vercel will remove support for the legacy DHE-RSA-AES256-GCM-SHA384 cipher suite.

This cipher may still be used by automated systems, security scanners, and HTTP clients with non-standard TLS configurations.

After this date, clients using TLS 1.2 will only be able to connect to the Vercel network with our six remaining cipher suites:

  • ECDHE-ECDSA-AES128-GCM-SHA256

  • ECDHE-RSA-AES128-GCM-SHA256

  • ECDHE-ECDSA-AES256-GCM-SHA384

  • ECDHE-RSA-AES256-GCM-SHA384

  • ECDHE-ECDSA-CHACHA20-POLY1305

  • ECDHE-RSA-CHACHA20-POLY1305

Modern clients and TLS 1.3 connections are unaffected.

If you operate integrations or automated systems that connect to a domain hosted on Vercel over TLS 1.2, verify that your TLS client supports at least one of the above cipher suites. Modern TLS libraries support these by default.
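One way to verify, assuming you have OpenSSL installed (the hostname is a placeholder for your own domain), is to force a TLS 1.2 handshake restricted to one of the supported suites:

```shell
# Attempt a TLS 1.2 handshake using only a supported ECDHE suite;
# a successful connection prints the negotiated cipher.
openssl s_client -connect your-project.vercel.app:443 \
  -tls1_2 -cipher ECDHE-RSA-AES256-GCM-SHA384 </dev/null | grep "Cipher"
```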


Matthew Stanciu
https://vercel.com/changelog/vercel-flags-are-now-optimized-for-agents Vercel Flags are now optimized for agents 2026-03-11T13:00:00.000Z

The Vercel CLI now supports programmatic flag management, giving teams a direct way to create and manage feature flags from the terminal without opening the dashboard.

Add the Flags SDK skill

Building on this foundation, the Flags SDK skill lets AI agents generate and manage flags through natural language prompts.

The skill leverages the CLI under the hood, enabling agents to implement server-side evaluation that prevents layout shifts and maintains confidentiality. Using the SDK's adapter pattern, agents can connect multiple providers and evaluate user segments without rewriting core flag logic.

Once added, try prompting your agent to create your first flag.

Start generating flags with the Flags SDK skill.


Vincent Derks Hayden Bleasel Chris Widmaier
https://vercel.com/changelog/subscribe-to-webhook-events-for-vercel-flags Subscribe to webhook events for Vercel Flags 2026-03-11T13:00:00.000Z

You can now subscribe to webhook events for deeper visibility into feature flag operations on Vercel.

New event categories include:

  • Flag management: Track when teams create, modify, or delete flags across your project.

  • Segment management: Receive alerts when segments are created, updated, or deleted.

These events help teams build monitoring directly into their workflows. You can track the complete lifecycle of your flags, monitor changes across projects, and integrate feature flag data with your external systems.

Read the documentation to start tracking feature flag events.


Luis Meyer
https://vercel.com/changelog/chat-sdk-adds-whatsapp-adapter-support Chat SDK adds WhatsApp adapter support 2026-03-11T13:00:00.000Z

Chat SDK now supports WhatsApp with a new WhatsApp adapter, extending the single-codebase approach that already covers Slack, Discord, GitHub, Teams, and Telegram.

Teams can build bots that support messages, reactions, auto-chunking, and read receipts. The adapter handles multi-media downloads (e.g., images, voice messages, stickers) and supports location sharing with Google Maps URLs.

Try the WhatsApp adapter today.

The adapter does not support message history, editing, or deletion. Cards render as interactive reply buttons with up to three options, and fall back to formatted text. Additionally, WhatsApp enforces a 24-hour messaging window, so bots can only respond within that period.

Read the documentation to get started or browse the adapters directory.

Special thanks to @ghellach, whose community contribution in PR #102 laid the groundwork for this adapter.


Malte Ubl Hayden Bleasel Ben Sabic
https://vercel.com/changelog/improved-data-collection-for-web-analytics-and-speed-insights-with-resilient Improved data collection for Web Analytics and Speed Insights with resilient intake 2026-03-11T13:00:00.000Z

Web Analytics and Speed Insights version 2 introduces resilient intake to improve data collection reliability. By dynamically discovering endpoints instead of relying on a single predictable path, the new packages ensure you capture more complete traffic and performance data.

To use resilient intake, update your packages and deploy your changes. No other configuration is required, and existing implementations continue working as before. It's available to all teams at no additional cost.

Install the latest versions

npm install @vercel/analytics@latest

npm install @vercel/speed-insights@latest

These packages include a license change from Apache-2.0 to MIT to align with other open source packages. Nuxt applications can leverage Nuxt modules for a one-line installation of Speed Insights and Web Analytics.

Update your packages to capture more data, or explore the Web Analytics documentation and Speed Insights documentation.


Damien Simonin Feugas
https://vercel.com/blog/how-we-run-vercels-cdn-in-front-of-discourse How we run Vercel's CDN in front of Discourse 2026-03-10T13:00:00.000Z

Vercel's CDN can front any application, not just those deployed natively on the platform, and it's simple to set up. This allows you to add firewall protection, DDoS mitigation, and observability to platforms like Discourse or WordPress without migrating them completely.

The Vercel Community is an example of this architecture. It is a Discourse application hosted elsewhere, but we proxy it ourselves via Vercel's CDN, which both protects the app and gives us access to useful features in Vercel's website stack:

  • Web Analytics gives us anonymized, cookie-free demographic and referrer data, so we can see where users are coming from and what they're looking for.

  • Firewall gives us DDoS protection and has automatically prevented several attacks in the last year.

  • Bot Management lets us block malicious scrapers while allowing trusted crawlers to index the forum, so community posts show up in ChatGPT searches.

Some parts of the community platform, like Vercel Community live sessions, run directly on Vercel with Next.js. We use Vercel Microfrontends to mount a Next.js app on the same domain as the Discourse app, for three reasons:

  • To create new pages that would be impractical to implement as CMS plugins.

  • To overwrite existing Discourse pages that we can't fully customize.

  • To keep users authenticated through Sign in with Vercel.

When the new pages are ready to launch, we add the path to our microfrontends configuration and users are rerouted seamlessly on the next deploy.

Vercel as a CDN

To set up Vercel as a CDN proxy like this, you need two domains:

  1. Inner host: The origin server where the site is actually hosted. This might look like your-site.discourse.com

  2. Outer host: The Vercel project domain that users interact with, such as community.vercel.com

Ensure that all links on the site and its canonical URLs use the outer domain.

Once those are in place, create a new project on Vercel that deploys to the outer host. You can then use vercel.ts (formerly vercel.json) to rewrite traffic to the inner domain.
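As a sketch, assuming vercel.ts mirrors the vercel.json rewrites schema and using the example hosts above:

```typescript
// vercel.ts (sketch): proxy all traffic on the outer host to the inner host
export default {
  rewrites: [
    {
      source: "/:path*",
      destination: "https://your-site.discourse.com/:path*",
    },
  ],
};
```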

Running multiple apps on a single domain with microfrontends

To extend the community forum beyond the limits of Discourse, we configured the outer host domain using a vertical microfrontend approach.

Vercel's microfrontends allow you to mount different Vercel projects to different route paths. We added a microfrontends.json file that directs traffic for specific routes to separate Vercel projects.
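A sketch of what such a file might contain (the application names and routing details here are illustrative, not taken from our actual configuration):

```json
{
  "$schema": "https://openapi.vercel.sh/microfrontends.json",
  "applications": {
    "discourse-proxy": {},
    "community-live": {
      "routing": [{ "group": "live", "paths": ["/live", "/live/:path*"] }]
    }
  }
}
```

The first application is the default that handles unmatched traffic (here, the proxy to Discourse); additional applications claim specific route paths.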

Additional pages can be added incrementally, route by route. We also added the .well-known/workflow route to use Workflow Development Kit for event creation and video processing.

While you could accomplish some of this by using negative matching in the proxy regex to avoid proxying certain routes, splitting the projects provides better isolation. This approach allows for independent environment variables and organization permissions, locking down the project that talks to the third-party host.

A modern CDN without a massive migration

At this point, you have Vercel's CDN standing between your users and your origin server. All traffic flows through Vercel's global network, giving you enterprise-grade security without touching your existing application.

You get even more flexibility when you combine this with microfrontends. You now have a path to modernize your application incrementally. Instead of a "big bang" refactor, you can create a Next.js application and turn on specific routes one by one, while your core application continues to run on Discourse, WordPress, or whatever platform it is built on.

This architecture unlocks a pragmatic path forward: secure your existing investment with Vercel's CDN today, then layer modern features on top tomorrow, all without the risk of a full platform migration.

Learn more by reading the Vercel microfrontends documentation or see it in action at community.vercel.com/live.


Jacob Paris
https://vercel.com/changelog/chat-sdk-adds-postgresql-state-adapter Chat SDK adds PostgreSQL state adapter 2026-03-10T13:00:00.000Z

Chat SDK now supports PostgreSQL as a state backend with the new PostgreSQL adapter, joining Redis and ioredis as production-ready options.

Teams that already run PostgreSQL can persist subscriptions, distributed locks, and key-value cache state without adding Redis to their infrastructure.

Try the PostgreSQL state adapter today.

The adapter uses pg (node-postgres) with raw SQL queries and automatically creates the required tables on first connect. It supports TTL-based caching, distributed locking across multiple instances, and namespaced state via a configurable key prefix.

Read the documentation to get started or browse the adapters directory.

Special thanks to @bai, whose community contribution in PR #154 laid the groundwork for this adapter.


Hayden Bleasel Ben Sabic
https://vercel.com/changelog/vercel-sandbox-now-supports-1-vcpu-2-gb-configurations Vercel Sandbox now supports 1 vCPU + 2 GB RAM configurations 2026-03-10T13:00:00.000Z

Vercel Sandbox now supports creating Sandboxes with only 1 vCPU and 2 GB of RAM. This is ideal for single-threaded or light workloads which don't benefit from additional system resources. When unspecified, the default is still 2 vCPUs and 4 GB of RAM.

Get started by setting the resources.vcpus option in the SDK:
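For example, with the @vercel/sandbox SDK (a sketch; the command run inside the sandbox is illustrative):

```typescript
import { Sandbox } from "@vercel/sandbox";

// Request the new smaller configuration: 1 vCPU with 2 GB of RAM.
const sandbox = await Sandbox.create({
  resources: { vcpus: 1 },
});

// Run a light, single-threaded task inside the sandbox.
const result = await sandbox.runCommand("node", ["--version"]);
console.log(await result.stdout());

await sandbox.stop();
```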

Or pass the --vcpus option in the Vercel CLI.

Learn more about Sandbox in the docs.


Rob Herley Tom Lienard
https://vercel.com/changelog/chat-sdk-adapter-directory Chat SDK now has an adapter directory 2026-03-10T13:00:00.000Z

Chat SDK now has an adapter directory, so you can search platform and state adapters from Vercel and the community.

These include:

  • Official adapters: maintained by the core Chat SDK team and published under @chat-adapter/*

  • Vendor-official adapters: built and maintained by the platform companies, like Resend and Beeper. These live in their GitHub org and are documented in their docs.

  • Community adapters: built by third-party developers, and anyone can publish one, following the same model as AI SDK community providers.

We encourage teams to build and submit adapters to be included in this new directory, like Resend's adapter that connects email to Chat SDK.

Browse the adapter directory or read the contributing guide to learn how to build, test, document, and publish your own adapter.


Hayden Bleasel Ben Sabic
https://vercel.com/changelog/ai-gateway-supports-openais-responses-api AI Gateway supports OpenAI's Responses API 2026-03-06T13:00:00.000Z

OpenAI's Responses API is now available through AI Gateway. The Responses API is a modern alternative to the Chat Completions API. Point your OpenAI SDK to AI Gateway's base URL and use the creator/model names to route requests. TypeScript and Python are both supported. All of the functionality in the Responses API was already accessible through AI Gateway via the AI SDK and Chat Completions API, but you can now use the Responses API directly.

What you can do

  • Text generation and streaming: Send prompts, get responses, stream tokens as they arrive

  • Tool calling: Define functions the model can invoke, then feed results back

  • Structured output: Constrain responses to a JSON schema

  • Reasoning: Control how much effort the model spends thinking with configurable effort levels

Getting started

Install the OpenAI SDK and point it at AI Gateway.

Basic example: text generation

Send a prompt and get a response from any supported model.
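A sketch of this, assuming AI Gateway's base URL https://ai-gateway.vercel.sh/v1 and an AI_GATEWAY_API_KEY environment variable (the model slug is illustrative):

```typescript
import OpenAI from "openai";

// Point the standard OpenAI SDK at AI Gateway instead of api.openai.com.
const client = new OpenAI({
  apiKey: process.env.AI_GATEWAY_API_KEY,
  baseURL: "https://ai-gateway.vercel.sh/v1",
});

// Responses API call routed through AI Gateway using a creator/model name.
const response = await client.responses.create({
  model: "openai/gpt-4o",
  input: "Write a one-sentence summary of what a CDN does.",
});

console.log(response.output_text);
```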

Structured output with reasoning

Combine reasoning levels with a JSON schema to get structured responses.
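A hedged sketch of this combination (the schema, effort level, and model slug are illustrative; the model must support reasoning):

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AI_GATEWAY_API_KEY,
  baseURL: "https://ai-gateway.vercel.sh/v1",
});

const response = await client.responses.create({
  model: "openai/gpt-5", // illustrative reasoning-capable model slug
  reasoning: { effort: "low" },
  input: "Extract the city and country from: 'She flew from Lisbon, Portugal.'",
  // Constrain the answer to a strict JSON schema.
  text: {
    format: {
      type: "json_schema",
      name: "location",
      strict: true,
      schema: {
        type: "object",
        properties: {
          city: { type: "string" },
          country: { type: "string" },
        },
        required: ["city", "country"],
        additionalProperties: false,
      },
    },
  },
});

console.log(JSON.parse(response.output_text));
```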

To learn more about the Responses API, read the documentation.


Rohan Taneja Walter Korman Jerilyn Zheng
https://vercel.com/changelog/chat-sdk-adds-table-rendering-and-streaming-markdown Chat SDK adds table rendering and streaming markdown 2026-03-06T13:00:00.000Z

Chat SDK now renders tables natively across all platform adapters and converts markdown to each platform's native format during streaming.

The Table() component is a new card element in Chat SDK that gives you a clean, composable API for rendering tables across every platform adapter. Pass in headers and rows, and Chat SDK handles the rest.

The adapter layer converts the table to the best format each platform supports.

Slack renders Block Kit table blocks, Teams and Discord use GFM markdown tables, Google Chat uses monospace text widgets, and Telegram converts tables to code blocks. GitHub and Linear already supported tables through their markdown pipelines and continue to work as before. Plain markdown tables (without Table()) are also converted through the same pipeline.

Streaming markdown has also improved across the board. Slack's native streaming path now renders bold, italic, lists, and other formatting in real time as the response arrives, rather than only once the message is complete. All other platforms use the fallback streaming path, so streamed text now passes through each adapter's markdown-to-native conversion pipeline at each intermediate edit. Previously, these adapters received raw markdown strings, so users saw literal **bold** syntax until the final message.

Adapters without platform-specific rendering now include improved defaults, so new formatting capabilities work across all platforms without requiring adapter-by-adapter updates.

Update to the latest Chat SDK to get started, and view the documentation.


Malte Ubl Hayden Bleasel
https://vercel.com/changelog/v0-api-now-supports-custom-mcp-servers v0 API now supports custom MCP servers 2026-03-06T13:00:00.000Z

The v0 API now supports connecting to any custom MCP server. Teams can configure new servers programmatically by providing the necessary endpoint and authentication details.

Once configured, you can make these custom servers available directly during a v0 chat session by referencing the server ID.

Visit the v0 API docs.


Max Leiter
https://vercel.com/changelog/skip-unaffected-builds-for-projects-in-bun-monorepos Skip unaffected builds for projects in Bun monorepos 2026-03-06T13:00:00.000Z

Skipping unaffected builds in monorepos now detects Bun lockfiles, extending the same compatibility already available for other package managers.

When Vercel evaluates which projects to build, it reads lockfile changes to determine whether dependencies have changed. Teams using Bun can now rely on this detection to skip builds for projects that haven't changed, reducing unnecessary build time across monorepos.

See the monorepo documentation to learn how skipping unaffected projects works.


Anthony Shew
https://vercel.com/changelog/deployment-step-now-15-percent-faster Deployment step now 15% faster 2026-03-06T13:00:00.000Z

Builds on Vercel now deploy 1.2 seconds faster on average, with more complex projects seeing the biggest gains (up to 3.7 seconds).

The improvement comes from optimizing how credentials are provisioned during the build process, eliminating a blocking step that previously added latency at the end of every build.

Learn more in the builds documentation.


Ali Smesseim Andrew Healey Janos Szathmary
https://vercel.com/blog/from-idea-to-secure-checkout-in-minutes-with-stripe From idea to secure checkout in minutes with Stripe 2026-03-05T13:00:00.000Z

Building commerce applications looks very different than it did even a few years ago.

Teams are no longer treating storefronts and billing systems as long-running integration projects that happen after the product is complete. They iterate quickly, deploy globally by default, and increasingly rely on AI tools to generate UI, checkout flows, and subscription logic.

Commerce is becoming more programmable and increasingly agent-driven. As AI systems begin to generate storefronts, assemble checkout flows, and optimize billing logic, the setup, integrations, and infrastructure need to be just as composable and automated.

With tools like v0, and coding agents working with the Vercel CLI and Vercel Marketplace, developers can move from idea to deployed product much faster than before. As that workflow becomes more automated and AI-native, the surrounding systems need to keep pace, and the developer's experience (which includes the agent's) needs as much focus as the end user's.

An improved developer experience

Historically, moving from a Stripe Sandbox to accepting live payments required retrieving API keys, copying them into environment variables, and verifying configuration across multiple environments. It worked, but it introduced unnecessary friction and risk at precisely the moment a team was ready to go live.

Stripe is now generally available on the Vercel Marketplace and in v0, with full support for connecting production accounts.

You can connect an existing live Stripe account directly to a Vercel project or import one into your environment and begin accepting real payments without rebuilding your integration. The connection flow provisions the required environment variables automatically, so moving from test mode to live transactions does not require manual key exchange or rewiring your application.

The beta release supported Stripe Sandbox account creation, and now general availability unlocks full production use cases, including live ecommerce storefronts, SaaS subscriptions, usage-based billing, and invoicing.

With this release, going to production with payments is a single integration between your project and Stripe, rather than a separate configuration step that happens outside of it.

Reducing setup friction while improving security

Making payments easier to connect is only useful if it is also secure by default.

To support production connections, Vercel partnered with Stripe to build a new set of key management APIs.

Credentials are now generated, exchanged, and stored programmatically, reducing the surface area for human error while maintaining correct separation across development, preview, and production environments.

Under the hood, the integration performs a cryptographic key exchange rather than requiring developers to manually retrieve and paste API credentials. The required Stripe keys are provisioned automatically and stored as environment variables within the appropriate Vercel environment. Stripe provides two types of API keys:

  • Secret keys (STRIPE_SECRET_KEY): These must only be used in server-side code, such as API routes or Server Actions. They should never be exposed in client-side code or committed to version control.

  • Publishable keys (NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY): These are safe for client-side use. They identify your Stripe account but cannot perform sensitive operations.

Get started

With Stripe connected through the Vercel Marketplace, moving from a working application to live revenue becomes part of the same workflow you use to build and deploy your product.

You can start with a simple example that creates a Checkout Session and deploy your first online store using Vercel and Stripe.

Create a Checkout Session:
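A sketch of a server-side route handler that creates a Checkout Session with stripe-node (the price ID and URLs are placeholders):

```typescript
import Stripe from "stripe";

// Runs server-side only: the secret key must never reach the client.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST() {
  const session = await stripe.checkout.sessions.create({
    mode: "payment",
    line_items: [{ price: "price_123", quantity: 1 }], // placeholder price ID
    success_url: "https://example.com/success",
    cancel_url: "https://example.com/cancel",
  });

  // Send the customer to the Stripe-hosted checkout page.
  return Response.json({ url: session.url });
}
```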

Install Stripe from the Vercel Marketplace or v0, connect your account, and your environment variables are ready before you write a line of payment code. See the changelog and documentation for more details.


Dima Voytenko Hedi Zandi
https://vercel.com/changelog/provider-level-custom-timeouts-for-faster-failover-on-ai-gateway Customize timeouts for faster automatic failover on Vercel AI Gateway 2026-03-05T13:00:00.000Z

AI Gateway now supports per-inference provider timeouts for faster failover than the provider default. If a provider doesn't start responding within your configured timeout, AI Gateway aborts the request and falls back to the next available provider.

Provider timeouts are available in beta for BYOK (Bring Your Own Key) credentials only, with support for system provider timeouts coming soon. Note that some providers don't support stream cancellation, so you may still be charged for timed-out requests depending on the provider.

Basic usage

Set timeouts per provider in milliseconds using providerTimeouts in providerOptions.gateway.
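A sketch with the AI SDK (the model slug and timeout value are illustrative):

```typescript
import { streamText } from "ai";

const result = streamText({
  model: "openai/gpt-4o",
  prompt: "Summarize the plot of Hamlet in two sentences.",
  providerOptions: {
    gateway: {
      // Abort and fail over if OpenAI hasn't started responding within 3 s.
      providerTimeouts: { openai: 3000 },
    },
  },
});
```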

Advanced usage with multiple providers and failover

Use with order to control both the provider sequence and failover speed.
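A sketch combining order with per-provider timeouts, so the gateway tries providers in sequence and moves on quickly from a slow one (provider slugs and timeouts are illustrative):

```typescript
import { streamText } from "ai";

const result = streamText({
  model: "anthropic/claude-sonnet-4",
  prompt: "Explain DNS in one paragraph.",
  providerOptions: {
    gateway: {
      // Try Vertex first, then fall back to Anthropic directly.
      order: ["vertex", "anthropic"],
      // Give Vertex 2 s to start responding before failing over.
      providerTimeouts: { vertex: 2000, anthropic: 10000 },
    },
  },
});
```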

For more information, read the custom provider timeouts documentation.


Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/vercels-cdn-gets-a-new-dashboard-experience Vercel CDN gets a new dashboard experience 2026-03-05T13:00:00.000Z

Vercel's CDN now has a new dashboard to give you a single place to track global traffic distribution and top-line CDN metrics, manage caching, and update routing rules. The new experience includes:

  • Overview: A live map of your project's global traffic distribution across Vercel Regions and top-level request volume and cache performance metrics.

  • Caches: A redesigned page for purging content across Vercel CDN's caching layers, which was previously under project settings.

  • Project-level Routing: A new project-level UI for updating routing rules, like setting response headers or rewriting to an external API, without triggering a new deployment.

Learn more about Vercel's CDN or visit the CDN tab for your project to see the updates.


Priyanka Jindal Mark Knichel Andrew Gadzik Yash Kothari Hannah Hearth
https://vercel.com/changelog/vercels-cdn-now-supports-updating-routing-rules-without-a-new-deployment Vercel's CDN now supports updating routing rules without a new deployment 2026-03-05T13:00:00.000Z

You can now create and update routing rules within a project, such as setting response headers or rewrites to an external API, without building a new deployment.

Project-level routing rules are available via the dashboard, API, CLI, and Vercel SDK and take effect instantly after you make and publish the change. Project-level routes run after bulk redirects and before your deployment config's routes.

With this addition, Vercel's CDN now supports three routing mechanisms:

  • Routes defined in your deployment configuration (via vercel.json, vercel.ts, or next.config.js)

  • Bulk redirects

  • Project-level routes

Project-level routes are available on all plans starting today. Read the documentation or go to the CDN tab in your project dashboard to get started.


Mark Knichel Andrew Gadzik Yash Kothari Hannah Hearth Priyanka Jindal
https://vercel.com/changelog/streamdown-2-4 Streamdown 2.4: More customization, accessibility and custom rendering 2026-03-05T13:00:00.000Z

Streamdown v2.4 introduces customization hooks, accessibility features, and user experience improvements for developers rendering markdown.

Teams can now customize the appearance of their markdown output using several new properties. You can override the built-in icons by passing a specific component map to the icons prop.

The createCodePlugin now accepts a themes option for light and dark Shiki themes, a startLine meta option for custom starting line numbers, and an inlineCode virtual component for styling inline code independently from blocks.

Streamdown now supports internationalization and text direction. The dir prop automatically applies left-to-right or right-to-left formatting based on the first strong Unicode character, and the translations prop supports custom languages.

Tables include a fullscreen overlay controlled via the controls prop, complete with scroll locking and Escape key support. Developers can hook into streaming events using the onAnimationStart and onAnimationEnd callbacks.

This release fixes empty lines collapsing in syntax-highlighted blocks and prevents ordered lists from retriggering animations during streaming.

For projects using Tailwind v4, the new prefix prop namespaces utility classes to avoid collisions.

To get started, learn more.

Read more

Hayden Bleasel
https://vercel.com/changelog/run-cron-jobs-from-deployment-summary Run cron jobs from deployment summary 2026-03-05T13:00:00.000Z

You can now run your application's cron jobs from the summary section of your deployments dashboard.

Try it out by deploying a Vercel cron job template. Once you deploy, Vercel automatically registers your cron jobs.

Learn more in the cron jobs documentation.

Read more

Tom Knickman Mehul Kar
https://vercel.com/changelog/stripe-is-now-generally-available-on-the-marketplace-and-v0 Stripe is now generally available on the Marketplace and v0 2026-03-05T13:00:00.000Z

You can now connect your production Stripe account to Vercel and start accepting real payments. The integration securely provisions your API keys as environment variables and supports both sandbox and live modes.

Test your payment flows in sandbox, then move to production without manually exchanging or managing keys. Built in collaboration with Stripe, the new key management APIs make it possible to reduce setup friction while strengthening security from day one.

This unlocks real production use cases like:

  • Live ecommerce: Accept real payments and manage checkout flows for production storefronts

  • Production SaaS billing: Charge customers for subscriptions, usage, and invoices from day one

  • Shipping to real users: Move from sandbox to production without re-wiring your integration

Get started today with this example to build your first simple online store using Vercel and Stripe. See the documentation to learn more.

Read more

Dima Voytenko Hedi Zandi Ismael Rumzan
https://vercel.com/changelog/create-private-blob-stores-with-a-single-click-in-v0 Create private blob stores with a single click in v0 2026-03-05T13:00:00.000Z

Teams can now create private and public blob stores with a single click in v0. When adding Vercel Blob to a chat, a dialog lets you select your preferred region and access type.

Private storage is selected by default and requires authentication to access sensitive files, while public storage allows direct reads for assets like media.

Once connected, the agent automatically understands your store's configuration. It writes the correct implementation for your choice, setting up authenticated delivery routes for private stores or direct URLs for public ones, without requiring you to write any code manually.

Learn more in the Vercel Blob documentation.

Read more

Vincent Voyer
https://vercel.com/changelog/gpt-5-4-is-now-on-ai-gateway GPT 5.4 is now on AI Gateway 2026-03-05T13:00:00.000Z

GPT-5.4 and GPT-5.4 Pro are now available on AI Gateway.

These models bring the agentic and reasoning leaps from GPT-5.3-Codex to all domains, covering knowledge work like reports, spreadsheets, presentations, and analysis in addition to coding. They handle complex multi-step workflows more reliably, including tasks that involve tools, research, and pulling from multiple sources. GPT-5.4 is faster and more token-efficient than previous iterations like GPT-5.2, while GPT-5.4 Pro is aimed at developers who need maximum performance on the most complex tasks.

To use this model, set model to openai/gpt-5.4 or openai/gpt-5.4-pro in the AI SDK.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/mcp-apps-support-on-vercel MCP Apps support on Vercel 2026-03-04T13:00:00.000Z

Teams can now build and deploy MCP Apps on Vercel with full support for Next.js.

MCP Apps are similar to ChatGPT apps, but are a provider-agnostic open standard for embedded UIs. They run inside iframes and communicate with any compatible host, such as Cursor, Claude.ai, and ChatGPT, using a shared bridge.

This architecture uses ui/* JSON-RPC over postMessage, enabling a single UI to function across any compatible host without platform-specific integrations.
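
To make the bridge concrete, here is a minimal sketch of what a ui/* JSON-RPC message sent over postMessage could look like. This is illustrative only: the envelope follows the JSON-RPC 2.0 spec, but the method name `ui/open-link` and the helper `makeUiRequest` are hypothetical, not taken from the MCP Apps standard.

```typescript
// Illustrative sketch, not the MCP Apps SDK: the shape of a ui/* JSON-RPC
// request an embedded app might send to its host window via postMessage.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

let nextId = 0;

// Wrap a ui/* method call in a JSON-RPC 2.0 envelope.
// "ui/open-link" is a placeholder method name, not necessarily in the spec.
function makeUiRequest(
  method: `ui/${string}`,
  params?: Record<string, unknown>,
): JsonRpcRequest {
  return { jsonrpc: "2.0", id: ++nextId, method, params };
}

// Inside the iframe, this would be delivered with something like:
// window.parent.postMessage(makeUiRequest("ui/open-link", { url }), "*");
const msg = makeUiRequest("ui/open-link", { url: "https://example.com" });
```

Because the envelope is plain JSON-RPC, any host that implements the shared bridge can dispatch it without knowing which app produced it.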

By combining this standard with Next.js on Vercel, developers can leverage Server-Side Rendering (SSR) and React Server Components to build portable, high-performance agent interfaces.

Deploy the template or learn more in the documentation.

Read more

Andrew Qu
https://vercel.com/blog/building-slack-agents-can-be-easy Building Slack agents can be easy 2026-03-03T13:00:00.000Z

Slack is already where teams work. It provides a natural interface for agents, with messages, threads, buttons, and events, so you don't need to invent a new UI or onboarding flow. Getting from "I want a Slack agent" to a running deployment, though, means coordinating across a lot of systems:

  • Creating an app in the Slack API console

  • Configuring OAuth scopes and event subscriptions

  • Writing webhook handlers and signature verification

  • Deploying to infrastructure that can handle Slack's 3-second response window

Each piece has its own docs, and they all need to work together.

Coding agents like Claude Code, OpenCode, Cursor, and GitHub Copilot are well suited for exactly this kind of coordination because they can read docs, reason through dependencies, and generate code in seconds. We built the Slack agent skill to take advantage of that. It builds on our Slack Agent Template, works with the coding agent of your choice, and takes you from idea to a deployed Slack agent on Vercel in a single session, automating steps when possible and showing you exactly what to click when it can't.

From idea to deployed agent with the skill wizard

Install the skill and run the wizard:

Then run the skill in your agent. For example, with Claude Code:

The wizard starts by asking what kind of agent you want to build. You might say "a support agent that answers questions from our internal docs," or "a standup bot that collects updates from the team every morning." Based on your answer, it generates a custom implementation plan tailored to your use case. You review and approve the plan before any code is written.

From there, you move through five stages:

  • Project setup: You choose your LLM provider, and the agent scaffolds your project from our Slack Agent Template.

  • Slack app creation: The agent customizes your manifest.json with your app name, description, and bot display settings, then opens Slack's console and walks you through creating the app and installing it to your workspace. OAuth scopes, event subscriptions, and slash commands come pre-configured from the template.

  • Environment configuration: The agent walks you through setting up your signing secret, bot token, and any API keys your project needs.

  • Local testing: The agent starts your dev server and connects it to Slack so you can message your bot and see it respond in real time before anything touches production.

  • Production deployment: The agent walks you through deploying to Vercel and setting up your environment variables. From this point, every git push triggers a new deployment.

What the skill gives you

The Slack agent skill gives you an agent that can:

  • Hold multi-turn conversations across messages and threads

  • Pause for human approval before taking sensitive actions

  • Stream responses to Slack in real time

  • Read channels and threads on its own

Your agent interacts with Slack and your systems through tools: functions it can call to take actions or retrieve information. The template ships with tools for:

  • Reading channel messages

  • Fetching thread context

  • Joining channels (with human approval)

  • Searching channels by name, topic, or purpose

You can also tell your coding agent to add custom tools that connect to your own systems. Want the agent to look up a customer record, create a support ticket, or query a database? Each of those becomes a tool the agent knows when and how to call.

Workflow DevKit is what makes the agent durable. A Slack agent often needs to hold a conversation across many messages, or wait hours for someone to approve a request. Workflow DevKit lets the agent suspend mid-conversation, wait for external input, and pick back up exactly where it left off. Tool calls are automatically retried on failure, and responses stream back to Slack in real time.

Human-in-the-loop is built in. When the agent needs to perform a sensitive action like joining a channel, it posts a message with Approve and Reject buttons and suspends. You're only billed for active CPU time, so waiting costs nothing, even if approval takes days. This pattern extends to any action requiring approval, from sending messages to modifying data to calling external APIs.

AI Gateway gives your agent access to hundreds of models from every major provider through a single API key. Switching models is a one-line change, and if a provider goes down, AI Gateway automatically routes to another so your agent stays up.

Going deeper

Once your agent is live, there are a few ways to extend it and understand it better.

Our Vercel Academy Slack Agents course covers the entire lifecycle, from creating and configuring a Slack app to handling events and interactive messages, building agents with the AI SDK, and deploying to production.

Vercel preview deployments let you test changes before they reach production. For Slack bots, this may require bypassing deployment protection so Slack's webhook verification can reach your endpoint. Our testing guide explains how to set this up.

Vercel Sandboxes let your agent execute code in isolated environments, so it can run user-provided scripts like analyzing a spreadsheet, generating a chart, or transforming data without risking your infrastructure.

Get started

The whole experience fits in one session with your coding agent.

Read more

Timothy Jordan
https://vercel.com/blog/scaling-redirects-to-infinity-on-vercel Scaling redirects to infinity on Vercel 2026-03-03T13:00:00.000Z

Redirects are trivial at a small scale, but at millions, latency and cost become real systems problems.

Previously on Vercel, redirects were handled by routing rules and middleware. Routing rules support up to 2,000 complex redirects with wildcards, and they function as an ordered list evaluated in sequence. Each rule may involve regex matching, meaning a single request could trigger many expensive evaluations. This is acceptable for a few thousand routing rules, but as counts grow, per-request work increases linearly.

Middleware offers more flexibility, but it adds latency by running extra code on every request. To serve millions of redirects with low latency, we needed a dedicated lookup path with near-constant or logarithmic time per request. Building on our previous work to make global routing faster with Bloom filters, we found a way to scale to millions of redirects.

What we optimized for

  • Scale:

    • Support millions of static redirects per project

  • Runtime behavior:

    • No additional latency cost for projects that don't configure redirects

    • A fast "no redirect" path, since most requests won't be redirected

    • Low process memory usage, relying on external storage and caching layers instead

  • Engineering values:

    • Simplicity and debuggability over premature optimization

    • Evolve iteratively rather than trying to get it perfect on the first try

With those goals in mind, we started with the simplest design we could think of, combining the redirects and Bloom filter in a single file. Since the redirect data was already JSON, and our Bloom filters already supported JSON exporting, we decided to use the JSONL file format to store this information.

JSON and Bloom filters versus napkin math

A Bloom filter is a probabilistic data structure that tests whether an element is a member of a set. Bloom filters can return false positives but never false negatives, so they answer "definitely not in the set" or "maybe in the set." By checking a small, cached Bloom filter first, we could skip the redirect lookup entirely for requests that don't match, keeping the common "no redirect" path extremely cheap. Only on a positive match would we parse the JSON file.
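
A minimal Bloom filter can be sketched in a few lines (this is an illustrative implementation, not Vercel's; the bit sizes and seeded FNV-1a hashing are arbitrary choices for the sketch):

```typescript
// Minimal Bloom filter sketch. Can return false positives, never false
// negatives: a key that was added will always report "maybe present".
class BloomFilter {
  private bits: Uint8Array;
  private sizeBits: number;
  private hashCount: number;

  constructor(sizeBits: number, hashCount: number) {
    this.sizeBits = sizeBits;
    this.hashCount = hashCount;
    this.bits = new Uint8Array(Math.ceil(sizeBits / 8));
  }

  // FNV-1a with a per-hash seed gives us k cheap, independent-ish hashes.
  private hash(key: string, seed: number): number {
    let h = (0x811c9dc5 ^ seed) >>> 0;
    for (let i = 0; i < key.length; i++) {
      h = (h ^ key.charCodeAt(i)) >>> 0;
      h = (h * 0x01000193) >>> 0;
    }
    return h % this.sizeBits;
  }

  add(key: string): void {
    for (let s = 0; s < this.hashCount; s++) {
      const bit = this.hash(key, s);
      this.bits[bit >> 3] |= 1 << (bit & 7);
    }
  }

  // false => definitely not in the set; true => maybe in the set.
  mightContain(key: string): boolean {
    for (let s = 0; s < this.hashCount; s++) {
      const bit = this.hash(key, s);
      if (!(this.bits[bit >> 3] & (1 << (bit & 7)))) return false;
    }
    return true;
  }
}

const filter = new BloomFilter(8192, 3);
filter.add("/old-blog/hello-world");
filter.add("/old-blog/goodbye");
```

A request for a path that was never added almost always fails the bit check on the first hash, which is what keeps the "no redirect" path cheap.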

Simple, but would it scale? The napkin math said no. A million redirects could easily produce a file in the hundreds of megabytes, and fetching and parsing something that large would blow our latency and memory budgets. We needed to avoid loading the entire dataset at once.

Sharding and Bloom filters keep memory low and lookups fast

The fix was sharding. Instead of one massive JSONL file, we hashed the redirect path to distribute entries across many small shards. This allows us to load a small slice of data for a specific request, which shifts the burden from process memory to external storage and the file system cache. The Bloom filter still sits in front, short-circuiting the lookup for the vast majority of traffic. But now, when a request does pass the Bloom filter, we only need to fetch and parse a single small shard rather than the entire redirect set.
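
The shard assignment itself is just a stable hash of the path modulo the shard count. A sketch, using FNV-1a as the hash (Vercel's actual hash function isn't stated, so this is only illustrative):

```typescript
// Hash-based sharding sketch: the same redirect path always lands in the
// same shard, so a lookup only ever touches one small file.
function fnv1a(key: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h = (h ^ key.charCodeAt(i)) >>> 0;
    h = (h * 0x01000193) >>> 0;
  }
  return h >>> 0;
}

function shardFor(path: string, shardCount: number): number {
  return fnv1a(path) % shardCount;
}

// e.g. map a path into one of 256 shards, like "shard-0042.jsonl"
const shard = shardFor("/old-blog/hello-world", 256);
```

Because the assignment is deterministic, both the build step (writing entries into shards) and the runtime (picking which shard to fetch) agree without any coordination.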

Shard structure

Each shard contains 3 parts:

  • A header line that encodes the properties of the Bloom filter

  • The base64 encoded Bloom filter

  • A JSON object of redirects, keyed by src path

Here is a sample:
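
A hypothetical shard following that three-part layout might look like this (the header fields, the base64 payload, and the redirect entries are all invented for illustration, not Vercel's actual format):

```jsonl
{"version":1,"bits":8192,"hashes":3}
AAECAwQFBgcICQoLDA0ODw==
{"/old-blog/hello-world":{"dest":"/blog/hello-world","status":308},"/old-docs/setup":{"dest":"/docs/getting-started","status":307}}
```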

At build time, we generate all of the shards and their Bloom filters and upload them to external storage. At runtime, the server only needs to know which dataset and shard count apply to a given project or deployment when it receives a request.

The lookup path checks the Bloom filter before parsing JSON

At request time, the bulk redirect lookup works like this:

  • Check whether the project or deployment has bulk redirects configured. If not, skip everything and proceed as usual.

  • Compute the redirect key from the incoming request and hash it to determine the shard.

  • Retrieve the shard from the cache or origin, and check the Bloom filter.

    • If the key is not present in the Bloom filter, we do not parse the JSON body of the shard.

    • If the key is maybe present in the Bloom filter, we load the JSON body of the shard and look up the exact redirect inside that object.
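
The steps above can be sketched end to end. This is a simplified model, not Vercel's code: an exact-membership predicate stands in for the real Bloom filter, shards live in memory instead of external storage, and the hash is a trivial one.

```typescript
// End-to-end sketch of the lookup path. The shard body stays an unparsed
// JSON string until the membership filter says the key might be present.
type Shard = {
  mightContain: (key: string) => boolean; // stand-in for the real Bloom filter
  body: string;                           // JSON object of redirects, keyed by src path
};

let parses = 0; // instrument how often we actually pay for JSON.parse

function lookupRedirect(path: string, shards: Shard[] | null): string | null {
  // 1. No bulk redirects configured: skip everything.
  if (!shards || shards.length === 0) return null;
  // 2. Hash the path to pick a shard (trivial hash, for the sketch only).
  let h = 0;
  for (let i = 0; i < path.length; i++) h = (h * 31 + path.charCodeAt(i)) >>> 0;
  const shard = shards[h % shards.length];
  // 3. Definitely not present: the common case returns without parsing.
  if (!shard.mightContain(path)) return null;
  // 4. Maybe present: parse the shard body and do the exact lookup.
  parses++;
  const redirects: Record<string, string> = JSON.parse(shard.body);
  return redirects[path] ?? null;
}

// One in-memory shard with an exact set standing in for the Bloom filter.
const keys = new Set(["/old/a"]);
const shards: Shard[] = [{
  mightContain: (k) => keys.has(k),
  body: JSON.stringify({ "/old/a": "/new/a" }),
}];
```

With a real Bloom filter, step 3 occasionally lets a non-existent key through to step 4, which still returns null after the exact lookup; correctness never depends on the filter.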

This design has some nice properties:

  • Fast negative lookups: Bloom filters are very fast and can be tuned to have a very low false positive rate

  • Human‑readable shards: Shards are just JSONL files. If something goes wrong, it's easy to dump a shard and see exactly what it contains

  • Low implementation risk: JSON parsing and Bloom filters are simple, so this can ship quickly, allowing us to gather real‑world data

JSON parsing became a bottleneck on positive lookups

We suspected JSON parsing might become a bottleneck, and our dogfooding confirmed it. When the Bloom filter indicated a redirect might exist, parsing the full JSON body for the relevant shard took considerable time. We also saw massive latency spikes under high CPU load, since JSON parsing is CPU-intensive and competes for resources with everything else on the node.

Reducing shard size would help with parsing speed, but smaller shards increase cardinality (the number of shards to manage) and cache miss rates. This created a trade-off. Large shards meant higher CPU overhead from parsing, while small shards meant more I/O latency from cache misses. We needed a data format that could retrieve a single value without parsing the entire shard.

Binary search over sorted keys to avoid parsing the entire shard

Instead of storing redirects in a JSON blob, we implemented a binary search keyed by the redirect path. Each shard stores its redirect keys in sorted order, so we can perform a logarithmic-time search over those keys. Once we find the key, we only need to parse the JSON for that specific redirect. This sidesteps the shard size problem entirely. Lookup cost no longer scales with the total amount of data in the shard, so we can keep shards large enough for good cache hit rates without paying for full JSON parsing.
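
The core of that change can be sketched as follows (an illustrative layout, not Vercel's storage format): keys are kept sorted, the search is logarithmic, and only the matched entry's JSON is ever parsed.

```typescript
// Binary search over sorted redirect keys: O(log n) comparisons, and
// JSON.parse runs on a single entry instead of the whole shard.
type Entry = { key: string; json: string }; // json holds the still-unparsed redirect

function findRedirect(entries: Entry[], path: string): unknown | null {
  let lo = 0;
  let hi = entries.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const k = entries[mid].key;
    if (k === path) return JSON.parse(entries[mid].json); // parse just this entry
    if (k < path) lo = mid + 1;
    else hi = mid - 1;
  }
  return null; // passed the Bloom filter but not actually present
}

// Entries must be pre-sorted by key at build time for the search to be valid.
const entries: Entry[] = [
  { key: "/a", json: '{"dest":"/new-a"}' },
  { key: "/b", json: '{"dest":"/new-b"}' },
  { key: "/c", json: '{"dest":"/new-c"}' },
];
```

This is why shard size stops mattering for lookup cost: doubling the entries in a shard adds one comparison, not double the parsing work.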

Latency dropped and the spikes disappeared

With JSON parsing out of the hot path for positive lookups, requests for redirects that actually exist became both faster and more predictable.

The most visible improvement was the elimination of the latency spikes we had seen under high CPU load. When parsing a full JSON shard, redirect lookups competed for CPU time with everything else running on the node. With binary search, the per-request CPU cost dropped low enough that resource contention stopped being a factor.

Designing for the common case

Redirects themselves are simple. The challenge comes from combining that simple abstraction with large, mostly cold datasets and strict latency expectations at the edge. Routing rules were the wrong tool for this job.

Instead, we built a dedicated path for bulk redirects:

  • Shard redirect data so each piece stays small

  • Use Bloom filters so the common "no redirect" case stays cheap

  • Store redirects in a layout that supports binary search over keys

This development cycle reinforced a principle we keep coming back to. Avoid premature optimization. By starting with a simple, debuggable implementation and instrumenting it, we let production data dictate where complexity was actually needed.

Get started with bulk redirects

Bulk redirects are available for Pro and Enterprise customers, configurable via project configuration, the dashboard, API, or CLI. The current limit is 1 million redirects per project. If you need more capacity, reach out to us.

  • Pro: 1,000 redirects included per project, with additional capacity at $50/month per 25,000

  • Enterprise: 10,000 redirects included per project, with additional capacity at $50/month per 25,000

Use bulk redirects to manage large-scale migrations, fix broken links, handle expired pages, and more. See our bulk redirects documentation or the getting started guide.

Read more

Ben Roberts Tim Caswell Sudais Moorad
https://vercel.com/changelog/vercel-sandbox-now-accepts-environment-variables-at-creation Vercel Sandbox now accepts environment variables at creation 2026-03-03T13:00:00.000Z

The Vercel Sandbox SDK and CLI now support setting environment variables at sandbox creation that are automatically available to every command.

When running multi-step processes in a Vercel Sandbox, like installing dependencies, building a project, or starting a dev server, each step often needs the same environment variables. Now, these are available with every runCommand call.

Environment variables passed to Sandbox.create() are inherited by all commands automatically. Per-command env in runCommand can still override individual values when needed.
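
The inheritance rule described here can be expressed as a simple merge. This sketch models the semantics only; it is not the @vercel/sandbox internals, and `effectiveEnv` is a hypothetical helper:

```typescript
// Sketch of the described env semantics: variables from Sandbox.create()
// form the base for every command, and a per-command env passed to
// runCommand overrides individual values on conflict.
type Env = Record<string, string>;

function effectiveEnv(creationEnv: Env, commandEnv: Env = {}): Env {
  return { ...creationEnv, ...commandEnv }; // per-command values win
}

const creationEnv = { DATABASE_URL: "postgres://db", NODE_ENV: "test" };
// A command that overrides NODE_ENV still inherits DATABASE_URL.
const env = effectiveEnv(creationEnv, { NODE_ENV: "production" });
```

Commands that pass no env of their own simply see the creation-time variables unchanged.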

Update to the latest Sandbox CLI and SDK, run npm i @vercel/sandbox to get started.

Read more

Gal Schlezinger
https://vercel.com/changelog/vercel-workflow-is-now-twice-as-fast Vercel Workflow is now twice as fast 2026-03-03T13:00:00.000Z

Server-side performance for Vercel Workflow, the fully managed platform built on top of the open-source Workflow Development Kit (WDK), is now twice as fast, delivering a 54% median improvement across the board.

Over the last two weeks, the median API response time has been reduced from 37ms to 17ms, with queue latency, Time to First Byte (TTFB), and per-step overhead all reduced.

Workflows that coordinate multiple steps benefit the most, as lower overhead compounds across each step in a run.

To get these and future speedups, update to the latest version of the Workflow DevKit ([email protected] or newer) or view the documentation.

Read more

John Lindquist Peter Wielander Pranay Prakash Nate Rajlich
https://vercel.com/changelog/gpt-5-3-chat-is-now-on-ai-gateway GPT 5.3 Chat is now on AI Gateway 2026-03-03T13:00:00.000Z

GPT-5.3 Chat (GPT 5.3 Instant) is now available on AI Gateway.

This update focuses on tone, relevance, and conversational flow for more accurate answers, better-contextualized web results, and fewer unnecessary refusals and caveats. It also reduces hallucination rates and produces smoother and more direct responses.

To use this model, set model to openai/gpt-5.3-chat in the AI SDK.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/gemini-3-1-flash-lite-is-now-on-ai-gateway Gemini 3.1 Flash Lite is now on AI Gateway 2026-03-03T13:00:00.000Z

Gemini 3.1 Flash Lite from Google is now available on AI Gateway.

This model outperforms 2.5 Flash Lite on overall quality, with notable improvements in translation, data extraction, and code completion. Gemini 3.1 Flash Lite is best suited for high-volume agentic tasks, data extraction, and applications where budget and latency are the primary evaluation constraints.

To use this model, set model to google/gemini-3.1-flash-lite-preview in the AI SDK. This model supports four thinking levels: minimal, low, medium, and high.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Matt Lenhard Jeremy Philemon Jerilyn Zheng
https://vercel.com/blog/advancing-python-typing Advancing Python typing 2026-03-02T13:00:00.000Z

We’re excited to share a year-long research effort aimed at making Python’s type system more expressive and composable, something closer in spirit to the programmable types in TypeScript, but carefully crafted for Python’s runtime model. The result is PEP 827: Type Manipulation.

Python’s runtime is incredibly powerful: classes, methods, and even whole APIs can be generated on the fly from a few lines of code. Metaclasses can transform class declarations, decorators can give functions and methods additional behaviors, and those are just a few examples.

But Python's static typing often can’t “follow along” without typechecker plugins or boilerplate code. PEP 827 proposes a set of standard, type-level building blocks for introspecting existing types and constructing new ones, designed to help both type checkers and runtime tooling.

FastAPI creator Sebastián Ramírez summed up the potential impact well on our post in the Python Discourse:

Quick taste

One concrete example is the familiar TypeScript utility types, like Pick and Omit. Here's Pick implemented in TypeScript and Python side by side:

We can immediately see that the TypeScript dedicated typing syntax is short and to the point, albeit quite different from the rest of the language. Python, on the other hand, relies on the standard Python imperative syntax combined with type-level APIs.

Now let's look at how Omit can be implemented:

Interestingly enough, Python's version is more in line with the Pick implementation; the only difference is an inverted condition. TypeScript, on the other hand, composes quite differently and requires a deeper rewrite.

This illustrates that the big idea isn't "make Python look like TypeScript." It’s to give Python typing a programmable core that matches Python’s semantics and stays introspectable at runtime, so frameworks like Pydantic can benefit too.

What's next

PEPs are debated, revised, and sometimes rejected. We’re excited to be part of that process, and we invested in this research because we build across TypeScript and Python and want both ecosystems to thrive.

One might ask: in an age where agents are writing an increasing share of source code, should we even care about programming language syntax, tooling, or type system capabilities?

We argue the answer is, more than ever, "yes". We want type checkers to be more thorough and frameworks to be more expressive, so that we can safely ship more reviewable, succinct code. The less boilerplate we have to maintain, the better, and we don’t see that changing anytime soon.

So yes, agents will care. And so will we.

Read more

Yury Selivanov Michael J. Sullivan
https://vercel.com/blog/how-waldium-made-a-blog-platform-work-for-humans-and-ai-alike How Waldium made a blog platform work for humans and AI alike 2026-03-02T13:00:00.000Z

Amrutha Gujjar has been writing software since she was young. What drew her in was always the act of building: shaping product logic, making architecture decisions, and turning ideas into real software. The frustrating part was always the surrounding infrastructure: the ports, deployment configs, and cloud issues that had a way of interrupting momentum at exactly the wrong time.

“I was never stuck on the part I was excited to build,” she says. “It was always the infrastructure friction around it.”

Impact at a glance

  • 500+ customer blogs served from a single Vercel deployment

  • 5 minutes: How long it takes new customers to get MCP endpoints live

  • AI query response times consistently under 50ms globally

  • 45% lower infrastructure cost vs. per-customer deployment model

That frustration became a design principle when she co-founded Waldium with CTO Shivam. The product, an agentic CMS that automates content research and creation for businesses, needed to stay focused on what made it interesting. The infrastructure needed to disappear.

It mostly has. But the reason why took a year and a new mental model of what a blog actually is.

When agents became part of the audience

Waldium started the way most content platforms do: building blogs for humans to read. But something Amrutha kept noticing was quietly changing who, and what, showed up to read them.

"The people consuming content are not even necessarily people these days," she says. "It's oftentimes agents consuming content."

Developers and technical teams were pulling blog content into their coding environments, into Claude Desktop, into ChatGPT, using it as live context in the tools where they actually worked. The browser tab was still part of the picture, but it wasn't the whole story anymore. Content that couldn't travel into those environments was leaving reach on the table.

That insight pointed Waldium toward MCP (Model Context Protocol) as the right primitive. If every customer blog had its own MCP server endpoint, an AI assistant could search, retrieve, and interact with that content directly, without anyone leaving their workflow. A developer could ask an agent to find a specific code snippet across an entire blog archive and get an answer in the same window where they were building.

For technical founders who don't think of themselves as marketers, it opened up something new entirely. "Being able to use MCP," Amrutha says, "allows you to create a blog post from the same place you're building a feature."

The vision was clear. The infrastructure question was harder.

A thousand front pages, one codebase

Supporting hundreds of customer blogs across custom domains is already a meaningful engineering challenge. Give every one of those blogs its own branded MCP server, with subdomain generation, SSL certificates, and a live install page, and the traditional approach collapses under its own weight.

Waldium had started on AWS Amplify, and the friction was constant: GitHub Actions that needed manual wiring, deployment pipelines that ballooned in complexity, and an overall model that kept pulling attention toward infrastructure. They moved to Vercel early. The difference was immediate.

"The developer experience is so seamless," Amrutha says. "It allows our team to focus on what we do best."

The unlock for Waldium's MCP infrastructure specifically was Vercel Platforms. With a single Next.js application, the team now serves every customer blog, every MCP endpoint, and every custom domain from one unified deployment. Vercel Middleware handles routing dynamically so that when an AI agent sends a query, the request reaches the right tenant automatically.

The Vercel Domains API provisions custom domains in seconds, with SSL certificates issued and renewed without any manual work. When Waldium evaluated MCP-specific hosting tools, the subdomain limits weren't close: competitors capped out in the dozens. Vercel's ceiling is in the tens of thousands.

"It allowed us to think at scale without having to worry about scale," Amrutha says.

Shivam points to the overall mental model as the thing that's proved stickiest. "It's very simple now: push to a particular git branch, get a preview deployment, get a production deployment. It works cleanly with authentication, with our databases. It just works." The integrated storage layer, with Neon and Upstash sitting alongside the application rather than off in a separate console, gave the team what he calls "a single pane of glass into all the parts of our system."

The result: a new customer signs up, gets a unique subdomain generated during onboarding, and walks away with a sitemap, LLMs.txt, robots.txt, an MCP install page, and a live MCP endpoint—all in under five minutes.

From weeks of research to a thousand posts

The real test of any infrastructure decision is what it enables downstream. For Waldium's customers, MCP endpoints opened a distribution channel that simply didn't exist before: their content became queryable directly inside AI assistants, not just discoverable through search.

Take Sapra AI, a safety compliance company that publishes highly technical educational content. Previously, their team faced weeks of manual research for every content push, with someone reading through thousands of pages of ISO standards to identify what would actually be useful to customers.

With Waldium, a team of research agents handles that work continuously, building company and industry profiles, flagging newsworthy developments, and generating content at a volume no traditional content team could match. Sapra AI produced over 1,000 posts in a single month. The research timeline went from weeks to hours.

Amrutha sees this as the new table stakes. "Before, if you were starting a business, you had a landing page. Today, if you're starting a business, you have to have a corpus of data about your business that an LLM can consume." Waldium is building the fastest path to that corpus, and Vercel Platforms is what makes it possible to hand that path to hundreds of customers at once, without rebuilding it for each one.

"Once you understand how good things can be," Amrutha says, "it's really hard to go back to a product that hasn't given as much intentional thought toward good design." Her team isn't managing servers. They're shipping.


About Waldium: Waldium is an agentic CMS and blog hosting platform that automates content research and creation for businesses, and provides every customer with dedicated MCP endpoints so their content is accessible directly within AI assistants.

Read more

Nic Vargus
https://vercel.com/changelog/vercel-cli-for-marketplace-integrations-optimized-for-agents Vercel CLI for Marketplace integrations optimized for agents 2026-03-02T13:00:00.000Z

AI agents can now autonomously discover, install, and retrieve setup instructions for Vercel Marketplace integrations using the Vercel CLI. This lets agents configure databases, auth, logging, and other services end-to-end in one workflow.

These capabilities are powered by the new discover and guide commands in the Vercel CLI.

With the --format=json flag, the discover command produces non-interactive JSON output that benefits developers as well, making it easier to automate infrastructure, write custom scripts, and manage CI/CD pipelines.

When building an application, agents begin by exploring available integrations using the discover command.

After exploring the options, the agent can add an integration and then fetch getting started guides and code snippets for a specific integration using the guide command.

The Vercel CLI returns this setup documentation in an agent-friendly markdown format. This allows the agent to easily parse the instructions, write the necessary integration code, and configure the project autonomously.

For integrations with required metadata fields, agents can use the help command to determine the required inputs and pass them as options to the add command.
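The workflow above can be sketched as a short command sequence. The changelog confirms the discover, guide, help, and add commands and the --format=json flag, but the exact subcommand placement and placeholders below are assumptions; check `vercel --help` for the real syntax.

```bash
# 1. Explore available integrations (JSON output for scripts and agents)
vercel discover --format=json

# 2. Fetch the getting-started guide for a chosen integration
vercel guide <integration-name>

# 3. Check required metadata fields, then install with options
vercel help add
vercel add <integration-name>
```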

The CLI also makes it easy to pause this process for human decisions, like terms of service acceptance. Agents can prompt developers for confirmation, enabling hybrid workflows that require human oversight of certain integration decisions.

These commands are continuously tested against agent evaluations to ensure reliable autonomous behavior.

Update to the latest version of the Vercel CLI to try it out, or read the documentation.

Read more

Tony Pan Bhrigu Srivastava
https://vercel.com/blog/gamma-builds-design-first-agents-with-vercel Gamma builds design-first agents with Vercel 2026-02-28T13:00:00.000Z

Gamma began with a simple idea: what if your presentation could design itself?

With a single sentence, users can generate a complete presentation that respects layout, spacing, and hierarchy. Columns reflow automatically. Diagrams adjust when new layers are added. The product handles the formatting so teams can stay focused on the ideas.

That philosophy reflects the company's DNA. Of Gamma's first ten hires, three were designers. "The attention to detail and value placed on design has been baked into the culture from the very, very beginning," says Sherwin Yu, Head of AI and Product Engineering. "Our designers at Gamma are fantastic. They ship code, they're technical. They'll push to production."

"There's a lot of discussion about how do we, whenever possible, elevate the user experience," Sherwin says.

As adoption grew, the team realized generation was only the beginning. Real presentation work happens in iteration: teams outline, restructure, refine tone, and polish visuals. In October 2025, Gamma launched Gamma Agent, a conversational editing experience that dramatically expanded the product's AI capabilities.

Evolving complex agent architectures with AI SDK

The first version of Gamma generated decks from a prompt. Gamma Agent introduced dialogue, and with it, a new relationship between the user and the product.

As the team started prototyping more powerful agents, that simplicity broke down. They needed finer-grained control over conversation state and a way to persist it, the ability to pass context from one agent to another, manage message history across sessions, and orchestrate more complex multi-step interactions than a simple request-response loop.

The decisions a user made early in a workflow, the reasoning behind the structure, the tone they'd settled on… all of that was valuable context that couldn't just live in a disposable chat window.

By building on the AI SDK rather than custom orchestration code, Gamma can evolve agent behavior without re-architecting its backend.

Gamma's investment in composable, model-agnostic architecture extends beyond text. The company's image pipeline, which has generated more than 1.5 billion images across 60 models and 20 providers, has gone through its own architectural reckoning.

Image generation

Staying on the frontier of image generation means integrating new models fast… sometimes within days of launch. When the Vercel AI SDK introduced ImageModelV3, a standard interface for image generation with a composable middleware layer, Gamma's team saw it as yet another opportunity.

Today, adding a new image model to Gamma is about 30 lines of code: just a model ID, cost formula, supported sizes, and capability flags. Tracing, cost tracking, and image preprocessing are handled automatically by shared middleware that wraps every model. Engineers never think about that plumbing; they just declare what a model can do. This pays off in the product.
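Gamma's internal code isn't public, so the following is a hypothetical sketch of what such a declarative model entry might look like. The type and function names (ImageModelEntry, registerModel) are illustrative, not Gamma's real API.

```typescript
// Hypothetical shape for a declarative image-model registry entry.
type ImageModelEntry = {
  id: string;                                      // provider/model identifier
  cost: (images: number, size: string) => number;  // cost formula in USD
  sizes: string[];                                 // supported output sizes
  capabilities: { styleReference: boolean; transparency: boolean };
};

const registry = new Map<string, ImageModelEntry>();

function registerModel(entry: ImageModelEntry): void {
  // In a system like the one described, shared middleware (tracing, cost
  // tracking, preprocessing) would wrap every model registered here;
  // engineers only declare what the model can do.
  registry.set(entry.id, entry);
}

registerModel({
  id: "example/flux-pro", // illustrative model ID
  cost: (images, size) => images * (size === "1024x1024" ? 0.04 : 0.02),
  sizes: ["512x512", "1024x1024"],
  capabilities: { styleReference: false, transparency: true },
});
```

The point of the pattern is that per-model knowledge lives in one small data structure, so "adding a model" is adding an entry, not touching the pipeline.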

Infographics

When the team shipped AI infographics, Gemini needed multimodal style references (actual images showing the target aesthetic), while Flux worked best with concise, text-only prompts. Because the model layer is just configuration, those per-model strategies live in the feature code, not buried in infrastructure. New model, new capability, new feature—each independent.

The result: Gamma ships new models in hours, not weeks, and every model automatically gets production-grade observability from its first request.

Shipping continuously with preview deployments

Gamma applies the same philosophy to its deployment workflow: pick stable foundations, then move fast on top of them. Instead of building its own release system, the team relies on Vercel's Preview Deployments, production deployments, and Instant Rollbacks.

"We try not to reinvent infrastructure we don't have to," Sherwin says. "We'd rather spend that engineering energy on the product."

With a team of just 20 or so engineers, Gamma averages more than 250 deployments per day across preview and production. Deploys complete in just over seven minutes at the median, with a 99 percent success rate.

Preview deployments make it safe to experiment with agent behavior on every pull request. Instant Rollbacks provide confidence when shipping changes that affect model logic or orchestration.

Scaling the AI content pipeline on Vercel

Gamma's AI outputs raw HTML, but a presentation is more than markup: it's a structured document with layout rules, resolved images, live charts, and editable diagrams. Every generated card passes through a conversion layer that bridges that gap in real time.

Gamma runs this critical translation layer as Vercel Functions. Every AI-generated card passes through a serverless endpoint that instantiates the complete Tiptap editor schema inside JSDOM, parses the LLM's HTML output into structured editor content, and resolves async assets.

Other serverless functions handle the reverse direction (serializing editor content into AI-readable HTML) and generating theme preview images on the fly.

Altogether, Gamma's use of serverless functions ensures presentations load quickly and AI-powered editing stays responsive for users worldwide.

Designing for what’s next

As agents across the industry get more capable, the limiting factor shifts from intelligence to information.

"An agent that knows your brand guidelines, your previous presentations, and your company's tone of voice is infinitely more valuable than a generic model," Sherwin says. "Right now, context is what separates a useful agent from a generic chat bot."

He sees context operating at three levels: the immediate session, the user's history across projects, and the organizational layer (meaning things like brand assets, templates, knowledge base). Getting all three into the model's window, efficiently and at the right moment, is the architectural challenge every company building agents is wrestling with.

It's the same vision Gamma has been building toward from day one, making it effortless to turn ideas into polished, compelling communication. First through intelligent layout and design. Then through conversational editing. And now, through a context layer that understands what you're building and why.

What hasn't changed is how Gamma builds: pick the right abstractions, stay model-agnostic, keep enough flexibility to rebuild when the landscape moves, and ship before the window closes.

In a space that reinvents itself every six months, that adaptability is the real moat.

Read more

Madison McIlwain
https://vercel.com/blog/How-avalara-turns-pipedreams-into-patent-pending-with-v0 How Avalara turns pipe dreams into patent-pending with v0 2026-02-28T13:00:00.000Z

Avalara connects businesses to more than 1,400 systems to automate tax compliance around the world. It’s a massively complex ecosystem that spans ERP systems, finance platforms, and compliance tools, all talking to each other.

For Chief Strategy and Product Officer Jayme Fishman, the path forward is modernizing how Avalara builds. His mandate is to drive digital transformation, with a sharp focus on AI and innovation.

Enter Vercel’s v0, which translates plain language into working prototypes. Within months, the team built two new patent-pending products—and along the way, changed how the company builds.

Seeing is believing

Before v0, bringing an idea to life required a mountain of slides, careful specs, and ample interpretation. Fishman might have a strong vision, but getting started meant writing everything down, then waiting for designers and engineers to bring it to life. 

“It could be a significant delay before we even had a conceptual mock-up.”

That changed overnight.

One of Avalara's biggest challenges was supporting customers who could be plugging into more than a thousand different systems. "We could provide technical documentation and show customers what to do," Fishman said, "but we couldn't see what they were doing. Once they left our system, we lost visibility… and the ability to help."

Fishman imagined a solution that could meet customers where they were. What if Avalara built a Chrome extension that could live alongside a user's workflow, walk them through each step of an integration specific to the systems they were using, and stay behind to answer any questions? He described it to a teammate, who went straight into v0.

"The next morning, there's a video in my Slack. It shows exactly what I described the night before," Fishman recalled. "I showed it to my exec team, and all the light bulbs lit up."

“I can describe what I want and wake up to a working demo. It’s tectonically shifting how we build.”

That demo—built in v0—became the basis for a new patent, a production build, and a press release, all within about 60 days. “It was one of those moments,” he said, “where you realize you don’t need to talk people into an idea if they can see it.”

Driving alignment with product design

Like many SaaS organizations, Avalara’s product and design process used to depend on long handoffs. Product managers wrote PRDs. Designers translated them into Figma files. Engineers reviewed and rebuilt. “There’s desire and intent,” Fishman said, “and then there’s what actually happens—where everyone gets tagged in late and we lose momentum.”

With v0, that flow changed completely. Product leads now start directly in the tool, describing what they want in plain language and watching v0 translate intent into a functioning interface. “It’s like you can will it into existence,” Fishman said. “You describe the problem, and five minutes later, you’re looking at a solution.”

For designers, the shift has been equally dramatic. “You can just grab someone, show them what you mean, and start iterating,” Fishman explained. “It takes something that used to be async and turns it into a real conversation.”

A new way of building

Across Avalara, prototypes have replaced concepts. Fishman calls it “a cultural accelerant.”

The results speak for themselves: two patent-pending products created in roughly 60 days, faster design and validation cycles, and a company-wide shift toward building through iteration, not interpretation.


About Avalara: Avalara connects businesses to more than 1,400 systems to automate tax compliance around the world.

Read more

Nic Vargus
https://vercel.com/blog/keeping-community-human-while-scaling-with-agents Keeping community human while scaling with agents 2026-02-27T13:00:00.000Z

At Vercel, our developer community is at the heart of everything we do. It's how we stay closest to the people using what we build.

As our community grew, automation helped us scale. But questions still got lost, routing took time, and context switching pulled us away from the work that actually required our expertise. And automation could never help with the things that mattered most: the moments where you really connect with someone and help them. You can't use AI to replicate the feeling of talking to a person who really cares.

So we built agents to take over the routing, triage, and follow-ups that don't need a human. We call this the Community Guardian. Let's talk about what it does, how we built it, and how anyone, including non-engineers, can ship agents too.

The Community Guardian operations layer

When a new post comes in, the Guardian analyzes it, checks for duplicates, and assigns it to the team member with the right specialty and bandwidth. Each person handles up to 10 questions before new ones go to someone else, keeping the workload balanced across time zones.

Nothing gets overlooked. If a question goes unanswered for 48 hours, the Guardian reassigns it. It sends reminders when we're waiting for more information and detects when conversations are resolved.

Under the hood, the Guardian uses Claude through AI Gateway and runs on Vercel Workflows, which lets it check in every 10 minutes and sleep between cycles without consuming resources.
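The assignment and reassignment rules above can be sketched as plain functions. This is a simplified illustration of the described behavior, not the Guardian's actual code.

```typescript
// Simplified sketch of the Guardian's routing rules: specialty match,
// a cap of 10 open questions per person, and 48-hour staleness checks.
type Member = { name: string; specialties: string[]; open: number };

const CAP = 10;                        // max open questions per member
const STALE_MS = 48 * 60 * 60 * 1000;  // reassign after 48 hours

// Pick the least-loaded member with the right specialty and spare bandwidth.
function assign(members: Member[], topic: string): Member | undefined {
  return members
    .filter((m) => m.specialties.includes(topic) && m.open < CAP)
    .sort((a, b) => a.open - b.open)[0];
}

// A thread with no answer for 48 hours goes back through assignment.
function needsReassignment(lastActivity: Date, now: Date): boolean {
  return now.getTime() - lastActivity.getTime() >= STALE_MS;
}
```

In the real system, the judgment of which specialty a post belongs to comes from a model call; the bookkeeping around it stays this simple.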

That handles the operations side, but our team still needed better context to respond well.

The intelligence layer: c0, the research assistant

While the Guardian manages logistics, c0 is the agent that goes deep on research. It lives in Slack, where our team already works.

When a team member needs context on a thread, c0 searches our knowledge base, documentation, GitHub issues, and past discussions to put together a context package. The context package helps our team respond faster and more accurately instead of relying on their own memory.

Beyond individual threads, c0 helps us close the loop with our product teams. It tracks community sentiment and recurring technical hurdles, so rather than someone spending hours auditing a week's worth of posts, we can ask c0 for the "top product feedback" and bring real data to our product conversations.

Reclaiming human focus

In its first 23 days, the system helped 281 unique users:

  • Initial context gathering: 4,716 first responses triaging issues and gathering logs before a team member arrives

  • Thread revival: 1 in 8 "ghosted" threads brought back to life, resulting in 23 confirmed solutions

  • Operational scale: over 1,400 agent runs in a recent two-week period, from stale-checks to auto-solving

  • Duplicate detection: 4 duplicate threads detected via vector similarity, with 3 auto-closed at 95%+ confidence

Every substantial answer still comes from our team. Agents handle everything else around those answers. Without the repetitive parts of triage and tracking, our team can spend time on complex pair-debugging and relationship building, creating content for the broader community, or just having fun with the developers they care about.

Build your own

You don't have to be a developer to build something like this. You just need an idea. I'm not an engineer. I manage community and talk to developers. Sure, I understand the problems we're solving, but I'm not writing production code every day.

My idea started at a talk in Zurich where I showed how we were automating community workflows. But that was traditional automation, scripts and rules and if-this-then-that logic. It worked, but it was brittle. Every edge case needed a new rule.

I wanted something smarter, so I started experimenting with my coding agent to add a thinking layer, the step between "new post arrives" and "take action." Instead of "if post contains 'billing' then route to billing team," it became "read this post, understand what the person actually needs, then decide."

The thinking layer is like another DX engineer looking at each post who can read between the lines when a user says "it's not working," connect dots to a GitHub issue from three months ago, understand when someone's frustrated vs. just confused, and know when to escalate vs. when to gather more context. Building this way meant I could describe what I wanted in plain English, get working code back, test it against real community threads, and iterate.
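The shift from rule to judgment can be sketched with a stubbed classifier standing in for the model. This is illustrative only: in the real system the judgment comes from an LLM through AI Gateway (and is asynchronous), not from regexes.

```typescript
// The brittle, rule-based approach this replaced: one keyword, one rule.
function routeByKeyword(post: string): string {
  return post.includes("billing") ? "billing-team" : "general";
}

// The thinking layer adds a judgment step between "post arrives" and
// "take action". The classifier below is a stub for a model call.
type Judgment = { need: "billing" | "bug" | "question"; escalate: boolean };

function think(post: string): Judgment {
  if (/charged twice|refund|invoice/i.test(post)) return { need: "billing", escalate: true };
  if (/error|crash|broken/i.test(post)) return { need: "bug", escalate: true };
  return { need: "question", escalate: false };
}

function route(post: string): string {
  const j = think(post);
  return j.escalate ? `${j.need}-team` : "community-queue";
}
```

A post like "I was charged twice this month" never contains the word "billing", so the keyword rule misses it, while the judgment step reads the intent and routes it correctly.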

I wanted to use different models for different tasks, give our agent access to read our docs and community, and allow it to suspend, resume, and recover if something failed. Instead of building all of that from scratch, I described what I needed to my coding agent and landed on AI Gateway, AI SDK, and Vercel Workflows, which already handle those complexities.

The prompts that built it

The first prompt was the core idea: "Build me an agent that helps me with the community, day-to-day operations like assigning posts and formatting. I don't know which model will work best yet but make it easy to switch without needing new API keys. Use the AI SDK for the agent."

From there, the prompts got more specific as I understood more about what I was building. "And triggers every 10 minutes, I want to check for the latest threads." I'd started with cron jobs, but switched to Vercel Workflows for this. The durable execution meant the agent could suspend between checks and resume exactly where it left off.

"Make sure we're rotating assignments every 4 hours." Every prompt unlocked the next question. I wasn't following a tutorial or docs. I was having a conversation, and the system grew from that conversation.

You don't need to know the right terminology or how to code. You just need to know your problem well enough to describe it and be willing to iterate when something doesn't work the way you expected. The thinking layer turns automation from "follow these exact rules" into "understand the situation and make a judgment call."

Build with heart

Community is about people, and we want our people to have the time and energy to show up fully, building with and for the developers in our community.

If you want to build something similar, we built c0 with the Chat SDK, a unified TypeScript SDK for building agents across Slack, Teams, Discord, and more. The Guardian uses Vercel Workflows for durable execution. Come share what you build in the community. We're always happy to talk through what we've learned.

Read more

Pauline P. Narvas
https://vercel.com/changelog/vercel-queues-now-in-public-beta Vercel Queues now in public beta 2026-02-27T13:00:00.000Z

Vercel Queues is a durable event streaming system built on Fluid compute, now available in public beta for all teams. Vercel Queues also powers Workflow: use Queues for direct message publishing and consumption, and Workflow for ergonomic multi-step orchestration.

Functions need a reliable way to defer expensive work and guarantee that tasks complete even when functions crash or new deployments roll out. Queues makes it simple to process messages asynchronously with automatic retries and delivery guarantees, providing at-least-once delivery semantics.

How it works:

  • Messages are sent to a durable topic.

  • The queue fans messages out to subscribed consumer groups.

  • Each consumer group processes messages independently.

  • The queue redelivers messages to consumer groups until successfully processed or expired.
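These delivery rules can be modeled in a few lines. The sketch below is a minimal in-memory illustration of fan-out and at-least-once redelivery, not the real Queues API.

```typescript
// Minimal in-memory model of the delivery semantics above: one durable
// topic, fan-out to consumer groups, redelivery until acked.
type Handler = (msg: string) => boolean; // true = processed (ack), false = retry

class Topic {
  private groups = new Map<string, { handler: Handler; pending: string[] }>();

  subscribe(group: string, handler: Handler): void {
    this.groups.set(group, { handler, pending: [] });
  }

  publish(msg: string): void {
    // Fan out: every consumer group gets its own copy of the message.
    for (const g of this.groups.values()) g.pending.push(msg);
  }

  // One delivery pass; unacked messages stay pending for redelivery,
  // which is what gives at-least-once semantics per group.
  deliverOnce(): void {
    for (const g of this.groups.values()) {
      g.pending = g.pending.filter((msg) => !g.handler(msg));
    }
  }

  pendingCount(group: string): number {
    return this.groups.get(group)?.pending.length ?? 0;
  }
}
```

Because each group tracks its own pending messages, a failure in one consumer group never blocks delivery to the others.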

Publish messages from any route handler:

Create a consumer:

Configure the consumer group:
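A hypothetical sketch of these three pieces follows. The module path, the send helper, and the trigger config shape are all assumptions for illustration, not the documented Vercel Queues API; see the Queues documentation for the real signatures.

```typescript
// HYPOTHETICAL sketch only: "@vercel/queue", send(), and the trigger
// export below are assumed names, not the documented API.

// 1) Publish to a durable topic from any route handler
import { send } from "@vercel/queue"; // assumed helper

export async function POST(request: Request) {
  await send("user-signups", await request.json()); // topic name, payload
  return Response.json({ queued: true });
}

// 2) A consumer: a handler the queue invokes once per message
export async function consumer(message: { body: unknown }) {
  // process the message; throwing would trigger redelivery
}

// 3) Consumer group config: attaching a trigger makes the route private,
// so only Vercel's queue infrastructure can invoke it
export const trigger = {
  topic: "user-signups",
  consumerGroup: "welcome-emails",
};
```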

Adding a trigger makes the route private: it has no public URL and only Vercel's queue infrastructure can invoke it.

Vercel Queues is billed per API operation, starting at $0.60 per 1M operations, and includes:

  • Multiple AZ synchronous replication

  • At-least-once delivery

  • Customizable visibility timeout

  • Delayed delivery

  • Idempotency keys

  • Concurrency control

  • Per-deployment topic partitioning

Functions invoked by Queues in push mode are charged at existing Fluid compute rates.

Get started with the Queues documentation.

Read more

Joe Haddad Casey Gowrie Harpreet Arora
https://vercel.com/changelog/chat-sdk-adds-telegram-adapter-support Chat SDK adds Telegram adapter support 2026-02-27T13:00:00.000Z

Chat SDK now supports Telegram. The new adapter extends the SDK's single-codebase approach, which already covers Slack, Discord, GitHub, and Teams.

Teams can build bots that support mentions, message reactions, direct messages, and typing indicators.

The adapter handles single file uploads and renders basic text cards, with buttons and link buttons that display as inline keyboard elements, allowing developers to create interactive workflows directly within Telegram chats.


Telegram does not expose full historical message APIs to bots, so message history relies on adapter-level caching. Additionally, callback data is limited to 64 bytes, and the platform does not currently support modals or ephemeral messages.
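Because callback data caps out at 64 bytes, a bot should validate payloads before attaching them to buttons. Here is a small standalone guard (not part of the Chat SDK) that enforces the limit:

```typescript
// Telegram caps callback_data at 64 bytes. Reject oversized payloads
// before a button is created, measuring bytes (not characters).
function callbackData(payload: string): string {
  const bytes = new TextEncoder().encode(payload).length;
  if (bytes > 64) {
    // Common workaround: store the payload server-side and send a short ID.
    throw new Error(`callback_data is ${bytes} bytes; Telegram allows at most 64`);
  }
  return payload;
}
```

Measuring encoded bytes matters because non-ASCII characters take more than one byte each, so a 64-character string can still exceed the limit.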

Read the documentation to get started.

Read more

Tobias Lins Hayden Bleasel
https://vercel.com/changelog/developer-role-now-available-for-pro-teams Developer role now available for Pro teams 2026-02-26T13:00:00.000Z

Pro teams can now assign the Developer role to their members. Previously only available for Enterprise teams, the Developer role gives Pro teams more granular access control.

Developers can safely deploy to projects on a team, with more limited control over team-wide configuration and environment variable visibility.

Owners can assign the Developer role to any existing seat or invite new members from the team members settings.

Learn more about team level roles.

Read more

Jeremy Dopkin Michael Wenzel
https://vercel.com/changelog/dashboard-navigation-redesign-rollout New dashboard redesign is now the default 2026-02-26T13:00:00.000Z

The new dashboard navigation is now the default experience for all Vercel users.

Following a successful opt-in beta release in January, it has rolled out fully as of February 26, 2026, with several improvements based on feedback.

The redesigned navigation includes:

  • New sidebar: horizontal tabs have moved to a resizable sidebar that can be hidden when not needed

  • Consistent tabs: unified navigation across both team and project levels

  • Improved order: navigation items are prioritized for the most common developer workflows

  • Projects as filters: switch between team and project versions of the same page in one click

  • Optimized for mobile: a floating bottom bar designed for one-handed use

No action is required. The new navigation is available to all users automatically.

Open your dashboard to see the updated experience.

Read more

wits Timo Lins Christopher Skillicorn Andrew Gadzik Mery Kaftar
https://vercel.com/changelog/nano-banana-2-is-live-on-ai-gateway Nano Banana 2 is live on AI Gateway 2026-02-26T13:00:00.000Z

Gemini 3.1 Flash Image Preview (Nano Banana 2) is now available on AI Gateway.

This release improves visual quality while maintaining the generation speed and cost of flash-tier models.

Nano Banana 2 can use Google Image Search to ground outputs in real-world imagery. This helps with rendering lesser-known landmarks and objects by retrieving live visual data. The model also introduces configurable thinking levels (Minimal and High) that let it reason through complex prompts before rendering. New resolutions and aspect ratios (512p, 1:4, and 1:8) are available alongside the existing options, expanding support for more types of creative assets.

To use this model, set `model` to `google/gemini-3.1-flash-image-preview` in the AI SDK. Nano Banana 2 is a multimodal model: use `streamText` or `generateText` to generate images alongside text responses. The model can also use web search to find live data before rendering.

You can also raise the thinking level to High when you want a more thorough response.
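A sketch of calling the model through the AI SDK with an AI Gateway model string follows. The `generateText` call and `result.files` output are standard AI SDK usage, but the providerOptions key and value for the thinking level are assumptions; check the model documentation for the exact option name.

```typescript
import { generateText } from "ai";

const result = await generateText({
  model: "google/gemini-3.1-flash-image-preview",
  prompt: "An infographic of this week's weather in Amsterdam",
  providerOptions: {
    google: { thinkingLevel: "high" }, // assumed option name
  },
});

// Multimodal output: images arrive alongside any text response
for (const file of result.files) {
  if (file.mediaType.startsWith("image/")) {
    // save or display file.uint8Array
  }
}
```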

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/blog/how-openevidence-built-a-healthcare-ai-that-physicians-can-trust How OpenEvidence built a healthcare AI that physicians actually trust 2026-02-25T13:00:00.000Z

Andy Yoon was scrolling through Slack when he saw the message: OpenEvidence had gone viral on TikTok.

Not "gaining traction." Actually viral, reaching around two million views in less than a week.

This is usually when you rally the troops, spin up emergency capacity, and start making phone calls you really didn't want to make.

Andy, Lead Frontend Engineer, did none of those things.

Instead, he watched the numbers climb. He checked the logs—everything green. Response times: still fast. Error rates: still near zero. Then he went back to whatever he was doing before, because there was nothing to fix.

"Vercel has just completely scaled with that usage," he says. "We've never had it fall over due to capacity or had to provision anything extra. Just being able to trust that it's there, to the point where we don't really even think about it, is amazing."

It was proof that they'd solved a problem most healthcare tech companies haven't figured out yet: how to move at startup speed while meeting hospital-grade reliability standards.

When failure isn't an option

The stakes are different for companies like OpenEvidence. If their product fails, it could result in someone making a bad medical decision. 

OpenEvidence is the most widely used clinical decision support platform among U.S. clinicians, supporting over 20 million clinical consultations in January 2026. Over 100 million Americans were treated by a doctor using OpenEvidence last year.

A general-purpose model can afford to be wrong, but a clinical tool cannot. Physicians expect speed, but they also expect stability, clarity, and trust.

This pressure sits on top of every technical decision at OpenEvidence: it has to work, every time.

A frontend engineer and a team of Python developers

When Andy joined OpenEvidence about three years ago, he discovered something that would make most frontend engineers nervous: he was basically the only one.

"I was pretty much the only engineer on our team coming from an actual frontend background," he says. "Most of our team works in Python and machine learning."

They couldn't afford infrastructure that needed constant babysitting. They needed something that would just work. Deploy code, it goes live. Traffic increases, it scales.

So OpenEvidence uses a hybrid architecture. The backend is built in Python and runs on Google Cloud Platform. It handles data ingestion, model orchestration, and core business logic, while the frontend is built with Next.js and deployed on Vercel.

"Given the makeup of our engineering team, Vercel has really scaled with our frontend so well," Andy notes. 

Each commit deploys automatically. Production deploys take five minutes. Preview URLs appear for every branch. For a small team supporting millions of medical consultations daily for almost half of all physicians in the US, it’s been indispensable.

Prototyping at speed

Before OpenEvidence became what it is today, it was dozens of other things first. Each proof of concept was deployed on Vercel as its own project with a custom domain. 

Vercel made it simple. Spin up a new project, connect a custom domain, push code, and you have what looks like a production environment. Stakeholders could click around and test workflows.

This ability to spin up projects in minutes helped the team find product-market fit. It also made it easier to win early enterprise partnerships.

When building out new features, preview deployments give them shareable links for live demos. Changes can be rolled out safely, because they can be reverted instantly if needed.

The 90% surprise

As OpenEvidence scaled to 1000x growth, the lead infrastructure engineer, Micah Smith, kept a close eye on compute costs. When Vercel introduced Fluid compute, it changed how serverless workloads run—combining on-demand execution with server-like efficiency, lower latency, and better performance under load.

The team enabled Fluid compute to see what would happen, and their serverless spend dropped by 90%. Same reliability. Faster speed. Fewer cold starts. 

"We reduced our serverless spend by 90% while maintaining the same performance, and even as we've scaled up to 1000x growth, Vercel is less than 5% of our overall infra spend." —Micah Smith, VP Engineering

The infrastructure is almost invisible, meaning more time spent on product experience and less time debugging tools or provisioning servers.

Threading the needle

"A lot of doctors and medical professionals are used to really outdated software," Andy says.

He's not wrong. Hospital software often looks like it was designed in the '90s, but those tools are reliable. OpenEvidence has to thread the needle, building a modern solution that upholds the reliability bar. 

Their viral moment proved the platform could handle a sudden influx while maintaining hospital-grade reliability.

It did.

Since launching, OpenEvidence has grown to serve over 40% of physicians in the United States. The frontend team is still small. The infrastructure still just works.


About OpenEvidence: OpenEvidence is the fastest-growing clinical decision support platform in the United States, and the most widely used medical search engine among U.S. clinicians. OpenEvidence is trusted by hundreds of thousands of verified healthcare professionals to make high-stakes clinical decisions at the point of care that are sourced, cited, and grounded in peer-reviewed medical literature. Founded with the mission to help doctors save lives and improve patient care, OpenEvidence is actively used daily, on average, by over 40% of physicians in the United States, spanning more than 10,000 hospitals and medical centers nationwide. Learn more at openevidence.com.

Read more

Nic Vargus
https://vercel.com/changelog/activity-log-now-tracks-100-of-team-and-project-changes Activity log now tracks 100% of team and project changes 2026-02-25T13:00:00.000Z

The activity log now captures every change made to your team and project settings, giving you complete visibility into who changed what and when.

Previously, some settings changes went untracked. With 88 new events added, activity log coverage is now 100%, so no action goes unrecorded.

Try it out or learn more about the activity log.

Read more

Luka Hartwig Darpan Kakadia
https://vercel.com/blog/security-boundaries-in-agentic-architectures Security boundaries in agentic architectures 2026-02-24T13:00:00.000Z

Most agents today run generated code with full access to your secrets.

As more agents adopt coding agent patterns, where they read filesystems, run shell commands, and generate code, they're becoming multi-component systems that each need a different level of trust.

Most teams run all of these components in a single security context because that's how the default tooling works. We recommend drawing these security boundaries differently.

Below we walk through:

  • The actors in agentic systems

  • Where security boundaries should go between them

  • An architecture for running agent and generated code in separate contexts

All agents are starting to look like coding agents

More agents are adopting the coding agent architecture. These agents read and write to a filesystem. They run bash, Python, or similar programs to explore their environment. And increasingly, agents generate code to solve particular problems.

Even agents that aren't marketed as "coding agents" use code generation as their most flexible tool. A customer support agent that generates and runs SQL to look up account data is using the same pattern, just pointed at a database instead of a filesystem. An agent that can write and execute a script can solve a broader class of problems than one limited to a fixed set of tool calls.

What goes wrong without boundaries

Consider an agent debugging a production issue. The agent reads a log file containing a crafted prompt injection.

The injection tells the agent to write a script that sends the contents of ~/.ssh and ~/.aws/credentials to an external server. The agent generates the script, executes it, and the credentials are gone.

This is the core risk of the coding agent pattern. Prompt injection gives attackers influence over the agent, and code execution turns that influence into arbitrary actions on your infrastructure. The agent can be tricked into exfiltrating data from the agent's own context, generating malicious software, or both. That malicious software can steal credentials, delete data, or compromise any service reachable from the machine the agent runs on.

The attack works because the agent, the code the agent generates, and the infrastructure all share the same level of access. To draw boundaries in the right places, you need to understand what these components are and what level of trust each one deserves.

Four actors in an agentic system

An agentic system has four distinct actors, each with a different trust level.

Agent

The agent is the LLM-driven runtime defined by its context, tools, and model. The agent runs inside an agent harness, which is the orchestration software, tools, and connections to external services that you build and deploy through a standard SDLC. You can trust the harness the same way you'd trust any backend service, but the agent itself is subject to prompt injection and unpredictable behavior. Information should be revealed on a need-to-know basis; for example, an agent doesn't need to see database credentials to use a tool that executes SQL.

Agent secrets

Agent secrets are the credentials the system needs to function, including API tokens, database credentials, and SSH keys. The harness manages these responsibly, but they become dangerous when other components can access them directly. The entire architecture discussion below comes down to which components have a path to these secrets.

Generated code execution

The programs the agent creates and executes are the wildcard. Generated code can do anything the language runtime allows, which makes it the hardest actor to reason about. These programs may need credentials to talk to outside services, but giving generated code direct access to secrets means any prompt injection or model error can lead to credential theft.

Filesystem

The filesystem and broader environment are whatever the system runs on, whether a laptop, a VM, or a Kubernetes cluster. The environment can trust the harness, but it cannot trust the agent to have full access or run arbitrary programs without a security boundary.

These four actors exist in every agentic system. The question is whether you draw security boundaries between them or let them all run in the same trust domain.

A few design principles follow from these trust levels:

  • The harness should never expose its own credentials to the agent directly

  • The agent should access capabilities through scoped tool invocations, and those tools should be as narrow as possible. An agent performing support duties for a specific customer should receive a tool scoped to that customer's data, not a tool that accepts a customer ID parameter, since that parameter is subject to prompt injection.

  • Generated programs that need their own credentials are a separate concern, which the architectures below address
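One concrete way to apply the second principle: the harness binds the customer ID when it constructs the tool, so the agent never supplies it. The sketch below is hypothetical TypeScript, not a Vercel API.

```typescript
// Hypothetical example: scoping a support tool to one customer at harness
// build time, so the customer ID is never an agent-controlled parameter.
type Order = { customerId: string; total: number };

const orders: Order[] = [
  { customerId: 'cus_123', total: 42 },
  { customerId: 'cus_456', total: 99 },
];

// Unsafe shape: the agent passes customerId, which prompt injection can control.
const lookupOrdersUnsafe = (customerId: string) =>
  orders.filter((o) => o.customerId === customerId);

// Safer shape: the harness closes over the customer before handing the
// tool to the agent, so the agent can only ever see this customer's data.
const makeScopedLookup = (customerId: string) => () =>
  orders.filter((o) => o.customerId === customerId);

const lookupOrders = makeScopedLookup('cus_123');
console.log(lookupOrders().length); // 1
```

Because the scope is fixed at construction time, a prompt-injected request for another customer's data has no parameter left to manipulate.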

With these actors and principles in mind, here are the architectures we see in practice, ordered from least to most secure.

Zero boundaries: today's default

Coding agents like Claude Code and Cursor ship with sandboxes, but these are often off by default. In practice, many developers run agents with no security boundaries.

In this architecture, there are no boundaries between any of the four actors. The agent, the agent's secrets, the filesystem, and generated code execution all share a single security context. On a developer's laptop, that means the agent can read .env files and SSH keys. On a server, it means access to environment variables, database credentials, and API tokens. Generated code can steal any of these, delete data, and reach any service the environment can reach. The harness may prompt the user for confirmation before certain actions, but there is no enforced boundary once a tool runs.

Secret injection without sandboxing

A secret injection proxy sits outside the main security boundary and intercepts outbound network traffic, injecting credentials only as requests travel to their intended endpoint. The harness configures the proxy with the credentials and the domain rules, but the generated code never sees the raw secret values.

The proxy prevents exfiltration. Secrets can't be copied out of the execution context and reused elsewhere. But the proxy doesn't prevent misuse during active runtime. Generated software can still make unexpected API calls using the injected credentials while the system is running.

Secret injection is a backward-compatible path from a zero-boundaries architecture. You can add the proxy without restructuring how components run. The tradeoff is that the agent and generated code still share the same security context for everything except the secrets themselves.
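The proxy's core behavior can be pictured as a small pure function. This is a hypothetical sketch of the idea, not the proxy's actual implementation:

```typescript
// Hypothetical sketch of what an injection proxy does to outbound requests:
// merge configured headers in, overwriting any same-named headers set by
// the generated code, so it cannot substitute its own credentials.
type HeaderMap = Record<string, string>;

function injectHeaders(requestHeaders: HeaderMap, injected: HeaderMap): HeaderMap {
  // Injected values win on conflict; the raw secret only ever exists at
  // the proxy, never inside the execution context.
  return { ...requestHeaders, ...injected };
}

const fromSandbox = {
  accept: 'application/json',
  authorization: 'Bearer attacker-token', // set by untrusted generated code
};
const result = injectHeaders(fromSandbox, { authorization: 'Bearer real-secret' });
console.log(result.authorization); // Bearer real-secret
```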

Why sandboxing everything together isn't enough

A natural instinct is to wrap the agent harness and the generated code in a shared VM or sandbox. A shared sandbox isolates both from the broader environment, and that's genuinely useful. Generated programs can't infiltrate the wider infrastructure.

But in a shared sandbox, the agent and generated program still share the same security context. The generated code can still steal the harness's credentials or, if a secret injection proxy is in place, misuse credentials through the proxy. The sandbox protects the environment from the agent, but doesn't protect the agent from the agent's own generated code.

Separating agent compute from sandbox compute

The missing piece is running the agent harness and the programs the agent generates on independent compute, in separate VMs or sandboxes with distinct security contexts. The harness and the harness's secrets live in one context. The filesystem and generated code execution live in another, with no access to the agent's secrets.

Both Claude Code and Cursor offer sandboxed execution modes today, but adoption in desktop environments has been low because sandboxing can cause compatibility issues. In the cloud, this separation is more practical. You can give the generated code a VM tailored for the type of software the agent needs to run, which can actually improve compatibility.

In practice, this separation is a straightforward change. Agents perform tool invocations through an abstraction layer, and that abstraction makes it natural to route code execution to a separate environment without rewriting the agent itself.

These two workloads have very different compute profiles, which means separating them lets you optimize each one independently. The agent harness spends most of its time waiting on LLM API responses. On Vercel, Fluid compute is a natural fit for this workload because billing pauses during I/O and only counts active CPU time, which keeps costs proportional to actual work rather than billing idle time.

Generated code has the opposite profile. Agent-created programs are short-lived, unpredictable, and untrusted. Each execution needs a clean, isolated environment so that one program can't access secrets or state left behind by another. Sandbox products like Vercel Sandbox provide this through ephemeral Linux VMs that spin up per execution and are destroyed afterward. The VM boundary is what enforces the security context separation. Generated code inside the sandbox has no network path to the harness's secrets and no access to the host environment.

The sandbox works in both directions. The sandbox shields the agent's secrets from generated code, and shields the broader environment from whatever the generated code does.

Application sandbox with secret injection

The strongest architecture combines the application sandbox with secret injection. The combination gives you two properties that neither achieves alone:

  • Full isolation between the agent harness and generated programs, each running in their own security context

  • No direct access to credentials for the generated code, which can use secrets through the injection proxy while running but can't read or exfiltrate them. Injected headers overwrite any headers the sandbox code sets with the same name, preventing credential substitution attacks.

For production agentic systems, we recommend combining both. The agent harness runs as trusted software on standard compute. Generated code runs in an isolated sandbox. Secrets are injected at the network level, never exposed where generated code could access the secrets directly.

This separation of agent compute from sandbox compute will become the standard architecture for agentic systems. Most teams haven't made this shift yet because the default tooling doesn't enforce it. The teams that draw these boundaries now will have a meaningful security advantage as agents take on more sensitive workloads.

Safe secret injection is now available on Vercel Sandbox; read more in the documentation.

Read more

Malte Ubl Harpreet Arora
https://vercel.com/changelog/python-vercel-functions-bundle-size-limit-increased-to-500mb Python Vercel Functions bundle size limit increased to 500MB 2026-02-24T13:00:00.000Z

The bundle size limit for Vercel Functions using the Python runtime is now 500MB, increasing the maximum uncompressed deployment bundle size from 250MB.

Learn more in the functions limitations documentation, or deploy FastAPI or Flask on Vercel to get started.

Read more

Elvis Pranskevichus Greg Schofield Yury Selivanov Marcos Grappeggia
https://vercel.com/changelog/gpt-5-3-codex-is-now-on-ai-gateway GPT 5.3 Codex is now on AI Gateway 2026-02-24T13:00:00.000Z

GPT 5.3 Codex is now available on AI Gateway. GPT 5.3 Codex brings together the coding strengths of GPT-5.2-Codex and the reasoning depth of GPT-5.2 in a single model that's 25% faster and more token-efficient.

Built for long-running agentic work, the model handles research, tool use, and multi-step execution across the full software lifecycle, from debugging and deployment to product documents and data analysis. Additionally, you can steer it mid-task without losing context. For web development, it better understands underspecified prompts and defaults to more functional, production-ready output.

To use this model, set model to openai/gpt-5.3-codex in the AI SDK.
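A minimal sketch of the request shape; the live generateText call is left commented out because it requires an AI Gateway key:

```typescript
// Sketch: routing a prompt to GPT 5.3 Codex through AI Gateway.
const request = {
  model: 'openai/gpt-5.3-codex', // AI Gateway model id
  prompt: 'Find the race condition in this function and propose a fix.',
};

// import { generateText } from 'ai';
// const { text } = await generateText(request);
console.log(request.model); // openai/gpt-5.3-codex
```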

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/slack-agent-skill-simplifies-building-slack-agents-with-coding-assistants Slack Agent Skill simplifies building Slack agents with coding assistants 2026-02-24T13:00:00.000Z

The Slack Agent Skill is now available, enabling developers to build and deploy Slack agents in a single session with their coding agent of choice.

The skill handles the complexity of OAuth configuration, webhook handlers, event subscriptions, and deployment so you can focus on what your agent should do rather than on infrastructure setup.

The wizard walks through five stages:

  1. Project setup: Choose your LLM provider and initialize from the Slack Agent Template

  2. Slack app creation: Generate a customized app manifest and create the app in Slack's console

  3. Environment configuration: Set up signing secrets, bot tokens, and API keys with validation

  4. Local testing: Run locally with ngrok and verify the integration

  5. Production deployment: Deploy to Vercel with environment variables configured automatically

Install the skill and run the wizard by invoking it in your coding agent (for example, /slack-agent new in Claude Code).

Try the skill to make your custom agent or use the Slack Agent Template to deploy right away and customize later.

Read more

Timothy Jordan
https://vercel.com/changelog/chat-sdk Introducing npm i chat – One codebase, every chat platform 2026-02-23T13:00:00.000Z

Building chatbots across multiple platforms traditionally requires maintaining separate codebases and handling individual platform APIs.

Today, we're open sourcing the new Chat SDK in public beta. It's a unified TypeScript library that lets teams write bot logic once and deploy it to Slack, Microsoft Teams, Google Chat, Discord, GitHub, and Linear.

The event-driven architecture includes type-safe handlers for mentions, messages, reactions, button clicks, and slash commands. Teams can build user interfaces using JSX cards and modals that render natively on each platform.

The SDK handles distributed state management using pluggable adapters for Redis, ioredis, and in-memory storage.

You can post messages to any provider with strings, objects, ASTs and even JSX!

Chat SDK post() functions accept an AI SDK text stream, enabling real-time streaming of AI responses and other incremental content to chat platforms.

The framework starts with the core chat package and scales through modular platform adapters. Guides are available for building a Slack bot with Next.js and Redis, a Discord support bot with Nuxt, a GitHub bot with Hono, and automated code review bots.

Explore the documentation to learn more.

Read more

Hayden Bleasel Malte Ubl Fernando Rojo John Phamous Nicolás Montone Vishal Yathish
https://vercel.com/changelog/safely-inject-credentials-in-http-headers-with-vercel-sandbox Safely inject credentials in HTTP headers with Vercel Sandbox 2026-02-23T13:00:00.000Z

Vercel Sandbox can now automatically inject HTTP headers into outbound requests from sandboxed code. This keeps API keys and tokens safely outside the sandbox VM boundary, so apps running inside the sandbox can call authenticated services without ever accessing the credentials. Header injection is configured as part of the network policy using transform. When the sandbox makes an HTTPS request to a matching domain, the firewall adds or replaces the specified headers before forwarding the request.

This is designed for AI agent workflows where prompt injection is a real threat. Even if an agent is compromised, there's nothing to exfiltrate, as the credentials only exist in a layer outside the VM.

Injection rules work with all egress network policy configurations, including open internet access. To allow general traffic while injecting credentials for specific services:
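Here is a rough sketch of what such a policy could look like; the field names (egress, domain, headers) are illustrative assumptions around the documented transform setting, not the exact Vercel Sandbox schema:

```typescript
// Illustrative network policy: allow all egress, but inject an auth header
// for HTTPS requests to one API. Field names are assumptions, not the
// documented Vercel Sandbox schema.
const networkPolicy = {
  egress: 'allow-all',
  transform: [
    {
      domain: 'api.example.com',
      headers: {
        // The real value is supplied by the harness; code running inside
        // the sandbox never sees it.
        authorization: 'Bearer <injected-by-harness>',
      },
    },
  ],
};
console.log(networkPolicy.transform.length); // 1
```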

Live updates

Like all network policy settings, injection rules can be updated on a running sandbox without restarting it. This enables multi-phase workflows: inject credentials during setup, then remove them before running untrusted code.
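As an illustrative two-phase flow; the policy schema and the updateNetworkPolicy call are assumptions, not the documented API:

```typescript
// Illustrative two-phase flow: credentials exist only while trusted setup
// runs, then the injection rule is dropped before untrusted code starts.
const setupPolicy = {
  transform: [
    { domain: 'registry.example.com', headers: { authorization: 'Bearer <setup-token>' } },
  ],
};

// Phase 2: no injection rules, so no credentials are reachable by
// agent-generated code.
const untrustedPolicy = { transform: [] };

// await sandbox.updateNetworkPolicy(setupPolicy);     // install dependencies
// await sandbox.updateNetworkPolicy(untrustedPolicy); // run generated code
console.log(setupPolicy.transform.length, untrustedPolicy.transform.length); // 1 0
```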

Key highlights

  • Header overwrite: Injection applies to HTTP headers on outbound requests.

  • Full replacement: Injected headers overwrite any existing headers with the same name set by sandbox code, preventing the sandbox from substituting its own credentials.

  • Domain matching: Supports exact domains and wildcards (e.g., *.github.com). Injection only triggers when the outbound request matches.

  • Works with all policies: Combine injection rules with allow-all, or domain-specific allow lists.
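Wildcard matching can be pictured with a small helper; this is a hypothetical sketch of the rule, not the firewall's actual code:

```typescript
// Hypothetical sketch of wildcard domain matching like "*.github.com":
// a leading "*." matches subdomains; otherwise the rule must match the
// host exactly.
function domainMatches(rule: string, host: string): boolean {
  if (rule.startsWith('*.')) {
    return host.endsWith(rule.slice(1)); // suffix match on ".github.com"
  }
  return rule === host;
}

console.log(domainMatches('*.github.com', 'api.github.com'));     // true
console.log(domainMatches('*.github.com', 'github.com'));         // false
console.log(domainMatches('api.example.com', 'api.example.com')); // true
```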

Available to all Pro and Enterprise customers. Learn more in the documentation.

Read more

Valerian Roche Rob Herley
https://vercel.com/changelog/support-for-now-json-will-be-removed-on-march-31-2026 Support for now.json will be removed on March 31, 2026 2026-02-23T13:00:00.000Z

Support for the legacy now.json config file will be officially removed on March 31, 2026. Migrate existing now.json files by renaming them to vercel.json; no other content changes are required.

For more advanced use cases, try vercel.ts for programmatic project configuration.

Learn more about configuring projects with vercel.json in the documentation.

Read more

Tom Knickman
https://vercel.com/blog/skills-night-69000-ways-agents-are-getting-smarter Skills Night: 69,000+ ways agents are getting smarter 2026-02-20T13:00:00.000Z

The room was full of people who had already used skills.

Tuesday night we hosted Skills Night in San Francisco, an event for developers building on and around skills.sh, the open skills ecosystem we've been growing since the idea started as a single weekend of writing. What began as Shu Ding sitting down to document everything he knows about React has grown into over 69,000 skills, 2 million skill CLI installs, and a community moving incredibly fast.

Here is what we learned.

Where this came from

The origin story is worth retelling because it shapes how we think about the project.

Shu Ding is one of the most talented web engineers I've ever worked with. He knows things about React and the browser that most people will never discover. Last year, he sat down on a weekend and wrote it all down. A kind of web bible. We wanted to figure out how to ship it. We considered a blog post or documentation that the next generation of models might eventually learn from, but we wouldn't see the results until Claude Sonnet 8 or GPT-9. On the other hand, an MCP server felt too heavy for what was essentially a collection of markdown documents.

Skills made sense as the quickest way to deliver on-demand knowledge. While writing the installation instructions for the React best-practices skill, I ended up copying and pasting the same steps for Cursor, Claude Code, Codex, and the 10+ other coding agents, each with a slightly different installation directory.

So I built a CLI to install it into every major coding agent at once. That became npx skills. We added telemetry to surface new skills as they got installed, which became the data that powers the leaderboard at skills.sh. The whole thing went from idea to production on Vercel in days. Malte Ubl, Vercel CTO, framed it perfectly: it's a package manager for agent context.

Now we are tracking 69,000 of them, and making them not just easy to discover but easy to install with a single npx skills command.

The security problem we needed to solve

Growth creates attack surface, and fast growth creates it even faster.

As soon as skills took off, quality variance followed. Ryan from Socket showed us a concrete example: a skill that looked completely clean at the markdown level but included a Python file that opened a remote shell on install. You would never catch that without looking at every file in the directory.

That is why we announced security partnerships with Gen, Socket, and Snyk to run audits across all skills and every new one that comes in.

  • Socket is doing cross-ecosystem static analysis combined with LLM-based noise reduction, reporting 95% precision, 98% recall, and 97% F1 across their benchmarks.

  • Gen is building a real-time agent trust layer called Sage that monitors every connection in and out of your agents, allowing them to run freely without risk of data exfiltration or prompt injection.

  • Snyk is bringing their package security background to the skills context.

We are building an Audits leaderboard to provide per-skill assessments and recommendations. The goal is not to lock things down. The goal is to let you go fast with confidence. We're always looking for new security partners who can bring unique perspectives to auditing skills and provide more trust signals for skills.

What the demos showed us

Eight partners showed demos on Tuesday, and a few themes kept coming up.

Skills close the training cutoff gap. Ben Davis ran a controlled experiment to demonstrate this.

He tried to get coding agents to implement Svelte remote functions, a relatively new API, four different ways: no context, a skills file with documentation, a skill pointing to the MCP, and a code example in the project.

Every approach with context worked.

The no-context run, which he had to force through a stripped-down model to prevent it from inferring solutions, produced completely wrong output. Models are smart enough to use patterns correctly when you give them the patterns. Without context, they fall back to stale training data.

The medium matters less than the content. The interesting takeaway from Ben's experiment was not that skills are the only way. It is that getting the right context in is what matters, and skills are the fastest starting point if you do not already have a baseline. Existing code examples, inline documentation, and MCP hints all work.

Skills are just the easiest way to distribute that context to anyone.

Agents can now drive the whole stack. Evan Bacon from Expo showed native iOS feature upgrades driven entirely by Claude Code using Expo skills.

New SwiftUI components, gesture-driven transitions, and tab bar updates were all applied automatically. They are also using LLDB integration in a work-in-progress skill that lets agents read the native iOS view hierarchy and fix notoriously hard keyboard handling bugs automatically.

Their production app, Expo Go, now auto-fixes every crash as it occurs. For anyone who has spent time wrestling with Xcode, that is a significant statement.

Skills are becoming infrastructure. Nick Khami showed how Mintlify auto-generates a skill for every documentation site they host, including Claude Code's own docs, Coinbase, Perplexity, and Lovable.

Traffic to these sites is now 50% coding agents, up from 10% a year ago. The skill is not something the docs team writes anymore; it is a byproduct of having well-structured documentation. Sentry's David Cramer built Warden, a harness that runs skills as linters on pull requests via GitHub Actions, treating agents as a static analysis layer.

What we're building toward

Guillermo Rauch, Vercel CEO, said something Tuesday night that I keep thinking about: agents make mistakes.

They sometimes tell you you are absolutely right and proceed to do the wrong thing. Shipping quality in the AI era means not just celebrating how many tokens you are burning. It means raising the bar on what those tokens actually produce.

Skills are one answer to that problem. They are how we influence what agents create, keep them up to date with framework changes, and make them more token-efficient by giving them a straight path to the right answer instead of letting them stumble around.

Two million installs is real signal. The security partnerships make it something teams can rely on. And the demos showed that the most interesting skills work is not at the CLI level. It is in the agents and tools that are now treating skills as a first-class primitive for distributing knowledge at scale.

We will keep building. Come find us at skills.sh.

Read more

Andrew Qu
https://vercel.com/blog/video-generation-with-ai-gateway Video Generation with AI Gateway 2026-02-19T13:00:00.000Z

AI Gateway now supports video generation, so you can create cinematic videos with photorealistic quality and synchronized audio, and generate personalized content with consistent identity, all through AI SDK 6.

Two ways to get started

Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.

  • AI SDK 6: Generate videos programmatically with the same interface you use for text and images. One API, one authentication flow, one observability dashboard across your entire AI pipeline.

  • AI Gateway Playground: Experiment with video models in the configurable AI Gateway playground that's embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access, click any video gen model in the model list.

Four initial video models; 17 variations

  • Grok Imagine from xAI is fast and great at instruction following. Create and edit videos with style transfer, all in seconds.

  • Wan from Alibaba specializes in reference-based generation and multi-shot storytelling, with the ability to preserve identity across scenes.

  • Kling excels at image to video and native audio. The new 3.0 models support multishot video with automatic scene transitions.

  • Veo from Google delivers high visual fidelity and physics realism. Native audio generation with cinematic lighting and physics.

Understanding video requests

Video models require more than just describing what you want. Unlike image generation, video prompts can include motion cues (camera movement, object actions, timing) and optionally audio direction. Each provider exposes different capabilities through providerOptions that unlock fundamentally different generation modes. See the documentation for model-specific options.

Generation types

AI Gateway initially supports 4 types of video generation:

  • Text-to-video. Inputs: text prompt. Describe a scene, get a video. Use cases: ad creative, explainer videos, social content

  • Image-to-video. Inputs: image, optional text prompt. Animate a still image with motion. Use cases: product showcases, logo reveals, photo animation

  • First and last frame. Inputs: 2 images, optional text prompt. Define start and end states, and the model fills in between. Use cases: before/after reveals, time-lapse, transitions

  • Reference-to-video. Inputs: images or videos. Extract a character from reference images or videos and place them in new scenes. Use cases: spokesperson content, consistent brand characters

Current capabilities for each model creator's models on AI Gateway:

  • xAI: Text-to-video, image-to-video, video editing, audio

  • Wan: Text-to-video, image-to-video, reference-to-video, audio

  • Kling: Text-to-video, image-to-video, first and last frame, audio

  • Veo: Text-to-video, image-to-video, audio

Text-to-video

Describe what you want, get a video. The model handles visuals, motion, and optionally audio. Great for hyperrealistic, production-quality footage with just a simple text prompt.

Example: Programmatic video at scale. Generate videos on demand for your app, platform, or content pipeline. No licensing fees or production required, just prompts and outputs.

This example uses klingai/kling-v2.6-t2v to generate video from a text prompt with a specified aspect ratio and duration.
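A sketch of what that request could look like; the aspectRatio and duration option names are assumptions about AI SDK 6's generateVideo, and the live call is left commented out since it needs AI Gateway credentials:

```typescript
// Sketch of a text-to-video request for klingai/kling-v2.6-t2v.
// Option names (aspectRatio, duration) are assumptions, not confirmed
// generateVideo parameters.
const request = {
  model: 'klingai/kling-v2.6-t2v',
  prompt: 'A slow dolly shot down a rain-soaked neon street at night',
  aspectRatio: '16:9',
  duration: 5, // seconds
};

// const { video } = await generateVideo(request);
console.log(request.model);
```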

Example: Creative content generation. Turn a simple prompt into polished video clips for social media, ads, or storytelling with natural motion and cinematic quality.

By setting a very specific and descriptive prompt, google/veo-3.1-generate-001 generates video with immense detail and the exact desired motion.

Image-to-video

Provide a starting image and animate it. Control the initial composition, then let the model generate motion.

Example: Animate product images. Turn existing product photos into interactive videos.

The klingai/kling-v2.6-i2v model animates a product image after you pass an image URL and motion description in the prompt.

Example: Animated illustrations. Bring static artwork to life with subtle motion. Perfect for thematic content or marketing at scale.

Example: Lifestyle and product photography. Add subtle motion to food, beverage, or lifestyle shots for social content.

Here, a photo of coffee is animated into a more engaging video, with lighting direction and minute details specified in the prompt.

First and last frame

Define the start and end states, and the model generates a seamless transition between them.

Example: Before/after reveals. Outfit swaps, product comparisons, changes over time. Upload two images, get a seamless transition.

The start and end states are defined here with two images that are used in the prompt and provider options.

In this example, klingai/kling-v3.0-i2v lets you define the start frame in image and the end frame in lastFrameImage. The model generates the transition between them.
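A sketch of the request shape; placing lastFrameImage under providerOptions is an assumption, and the live generateVideo call is omitted since it needs Gateway credentials:

```typescript
// Sketch of a first-and-last-frame request for klingai/kling-v3.0-i2v.
// The image/lastFrameImage fields follow the description above; their
// exact placement in the SDK options is an assumption.
const request = {
  model: 'klingai/kling-v3.0-i2v',
  prompt: 'A smooth transition between the two outfits',
  image: 'https://example.com/before.jpg', // start frame
  providerOptions: {
    klingai: { lastFrameImage: 'https://example.com/after.jpg' }, // end frame
  },
};

// const { video } = await generateVideo(request);
console.log(request.model);
```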

Reference-to-video

Provide reference videos or images of a person/character, and the model extracts their appearance and voice to generate new scenes starring them with consistent identity.

In this example, 2 reference images of dogs are used to generate the final video.

Using alibaba/wan-v2.6-r2v-flash here, you can instruct the model to use the referenced people or characters within the prompt. Wan suggests writing character1, character2, etc. in the prompt for multi-reference-to-video to get the best results.
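A sketch of a comparable request; the referenceImages option name and its placement under providerOptions are assumptions, and the live call is omitted:

```typescript
// Sketch of a reference-to-video request for alibaba/wan-v2.6-r2v-flash,
// following Wan's character1/character2 prompt convention. Field names
// for the reference images are assumptions about the SDK options.
const request = {
  model: 'alibaba/wan-v2.6-r2v-flash',
  prompt: 'character1 and character2 chase a ball across a sunny park',
  providerOptions: {
    alibaba: {
      referenceImages: [
        'https://example.com/dog1.jpg', // character1
        'https://example.com/dog2.jpg', // character2
      ],
    },
  },
};

// const { video } = await generateVideo(request);
console.log(request.model);
```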

Video Editing

Transform existing videos with style transfer. Provide a video URL and describe the transformation you want. The model applies the new style while preserving the original motion.

Here, xai/grok-imagine-video utilizes a source video from a previous generation to edit into a watercolor style.

Get started

For more examples and detailed configuration options for video models, check out the Video Generation Documentation. You can also find simple getting started scripts with the Video Generation Quick Start.

Check out the changelogs for these video models for more detailed examples and prompts.

Read more

Jerilyn Zheng
https://vercel.com/changelog/grok-imagine-video-on-ai-gateway Grok Imagine Video on AI Gateway 2026-02-19T13:00:00.000Z

Generate high-quality videos with natural motion and audio using xAI's Grok Imagine Video, now in AI Gateway. Try it out now via the v0 Grok Creative Studio, AI SDK 6 or by selecting the model in the AI Gateway playground.

Grok Imagine is known for realistic motion and strong instruction following:

  • Fast Generation: Generates clips in seconds rather than minutes

  • Instruction Following: Understands complex prompts and follow-up instructions to tweak scenes

  • Video Editing: Transform existing videos by changing style, swapping objects, or altering scenes

  • Audio & Dialogue: Native audio generation with natural, expressive voices and accurate lip-sync

Three ways to get started

Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.

  • v0 Grok Creative Studio: The v0 team built a template, powered by AI Gateway, for creating and showcasing Grok video and image generations.

  • AI SDK 6: Generate videos programmatically with AI SDK 6's generateVideo.

  • Gateway Playground: Experiment with video models in the configurable AI Gateway playground that's embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access, click any video gen model in the model list.

Available Model

  • xai/grok-imagine-video: Text-to-video, image-to-video, and video editing

Simple: Text-to-Video

Generate a video from a text description.

In this example, xai/grok-imagine-video is used to generate a video of 2 swans. Note that you can also specify the duration of the output.

Advanced: Video Editing

Transform an existing video into a new style:

In this example, using a previous generation from Grok Imagine Video, the output was transformed into an animated watercolor style.

The source video is used and edited, which is useful for style transfer, object swapping, and scene transformations.

Learn More

For more examples and detailed configuration options for Grok Imagine Video, check out the Video Generation Documentation. You can also find simple getting started scripts with the Video Generation Quick Start.

Read more

Walter Korman Jeremy Philemon Matt Lenhard Jerilyn Zheng
https://vercel.com/changelog/wan-models-on-ai-gateway Wan models on AI Gateway 2026-02-19T13:00:00.000Z

Generate stylized videos and transform existing footage with Alibaba's Wan models, now available through AI Gateway. Try them out now via AI SDK 6 or by selecting the models in the AI Gateway playground.

Wan produces artistic videos with smooth motion and can use existing content to keep videos consistent:

  • Character Reference (R2V): Extract character appearance and voice from reference videos/images to generate new scenes

  • Flash Variants: Faster generation times for quick iterations

  • Flexible Resolutions: Support for 480p, 720p, and 1080p output

Two ways to get started

Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.

  • AI SDK 6: Generate videos programmatically with AI SDK 6's generateVideo.

  • Gateway Playground: Experiment with video models in the configurable AI Gateway playground embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access it, click any video generation model in the model list.

Available Models

| Model | Type | Description |
| --- | --- | --- |
| alibaba/wan-v2.6-t2v | Text-to-Video | Generate videos from text prompts |
| alibaba/wan-v2.6-i2v | Image-to-Video | Animate still images |
| alibaba/wan-v2.6-i2v-flash | Image-to-Video | Fast image animation |
| alibaba/wan-v2.6-r2v | Reference-to-Video | Character transfer from references |
| alibaba/wan-v2.6-r2v-flash | Reference-to-Video | Fast style transfer |
| alibaba/wan-v2.5-t2v-preview | Text-to-Video | Previous version |

Simple: Text-to-Video with Audio

Generate a stylized video from a text description.

You can use detailed prompts and specify styles with the Wan models to achieve the desired output generation. The example here uses alibaba/wan-v2.6-t2v:

Advanced: Reference-to-Video

Generate new scenes using characters extracted from reference images or videos.

In this example, 2 reference images of dogs are used to generate the final video.

Using alibaba/wan-v2.6-r2v-flash here, you can instruct the model to use the people or characters referenced in the prompt. Wan suggests writing character1, character2, and so on in multi-reference-to-video prompts to get the best results.

Learn More

For more examples and detailed configuration options for Wan models, check out the Video Generation Documentation. You can also find simple getting started scripts with the Video Generation Quick Start.

Read more

Walter Korman Jeremy Philemon Matt Lenhard Sylvie Zhang Jerilyn Zheng
https://vercel.com/changelog/kling-video-models-on-ai-gateway Kling video models on AI Gateway 2026-02-19T13:00:00.000Z

Kling video models are now available in AI Gateway, including the newest Kling 3.0 models. Generate cinematic videos from text, images, or motion references with Kling's state-of-the-art video models, now available through AI Gateway and AI SDK.

Kling models are known for their image-to-video strength and multishot capabilities:

  • Image-to-Video Capabilities: Strong at animating still images into video clips

  • Realistic Motion and Physics: Known for coherent motion, facial expressions, and physical interactions

  • High Resolution Output: Supports up to 1080p generation (pro mode)

  • Multishot Narratives: Kling 3.0 can generate multi-scene videos from a single narrative prompt

  • Audio Generation: Create synchronized sound effects and ambient audio alongside your video

  • First & Last Frame Control: Specify both start and end frames for precise scene transitions

Two ways to get started

Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.

  • AI SDK 6: Generate videos programmatically with AI SDK 6's generateVideo.

  • Gateway Playground: Experiment with video models in the configurable AI Gateway playground embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access it, click any video generation model in the model list.

Available Models

| Model | Type | Description |
| --- | --- | --- |
| klingai/kling-v3.0-t2v | Text-to-Video | Latest generation, highest quality with multishot support |
| klingai/kling-v3.0-i2v | Image-to-Video, First-and-Last-Frame | Animate images with v3 quality and multiple frames |
| klingai/kling-v2.6-t2v | Text-to-Video | Audio generation support |
| klingai/kling-v2.6-i2v | Image-to-Video, First-and-Last-Frame | Use images as reference |
| klingai/kling-v2.5-turbo-t2v | Text-to-Video | Faster generation |
| klingai/kling-v2.5-turbo-i2v | Image-to-Video, First-and-Last-Frame | Faster generation |

Simple: Text-to-Video with Audio

Generate a video from a text description.

In this example, model klingai/kling-v3.0-t2v is used to generate a video of a cherry blossom tree with no inputs other than a simple text prompt.

Advanced: Multishot Video

Generate a narrative video with multiple scenes with only a single prompt. Using Kling 3.0's multishot feature, the model intelligently cuts between shots to tell a complete story:

The prompt is written as a narrative with multiple distinct scenes for the best results. shotType: 'intelligence' lets the model decide optimal shot composition, and sound: 'on' generates synchronized audio for the entire video. The prompt lives in providerOptions because multishot is specific to Kling; the Kling 3.0 models support it, and klingai/kling-v3.0-t2v is used here.

Advanced: First & Last Frame Control

Control exactly how your video starts and ends by providing both a first frame and last frame image. This is perfect for time-lapse effects or precise scene transitions:

These 2 images were provided as start and end frames.

Using AI SDK 6, you can set image and lastFrameImage with your start and end frames. In this example, klingai/kling-v3.0-i2v is used for the model.

Learn More

For more examples and detailed configuration options for Kling models, check out the Video Generation Documentation. You can also find simple getting started scripts with the Video Generation Quick Start.

Read more

Walter Korman Jeremy Philemon Matt Lenhard Jerilyn Zheng
https://vercel.com/changelog/veo-video-models-on-ai-gateway Veo video models on AI Gateway 2026-02-19T13:00:00.000Z

Generate photorealistic videos with synchronized audio using Google's Veo models, now available through AI Gateway. Try them out now via AI SDK 6 or by selecting the models in the AI Gateway playground.

Veo models are known for their cinematic quality and audio generation:

  • Native Audio Generation: Automatically generate realistic sound effects, ambient audio, and even dialogue that matches your video

  • Up to 1080p Resolution: Generate videos at 720p and 1080p

  • Photorealistic Quality: Realism for nature, wildlife, and cinematic scenes

  • Image-to-Video: Animate still photos with natural motion

  • Fast Mode: Quicker generation when you need rapid iterations

Two ways to get started

Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.

  • AI SDK 6: Generate videos programmatically with AI SDK 6's generateVideo.

  • Gateway Playground: Experiment with video models in the configurable AI Gateway playground embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access it, click any video generation model in the model list.

Available Models

| Model | Description |
| --- | --- |
| google/veo-3.1-generate-001 | Latest generation, highest quality |
| google/veo-3.1-fast-generate-001 | Fast mode for quicker iterations |
| google/veo-3.0-generate-001 | Full quality generation |
| google/veo-3.0-fast-generate-001 | Fast mode generation |

Simple: Text-to-Video with Audio

Describe a scene and get a video.

Generate a cinematic wildlife video with natural sound: here google/veo-3.1-generate-001 is used with generateAudio: true.

Advanced: Image-to-Video with Dialog

A common workflow to ensure quality is generating a custom image with Gemini 3 Pro Image (Nano Banana Pro), then bringing it to life with Veo, complete with motion and spoken dialog.

Starting image from Nano Banana Pro:

Combine a text prompt with image input in the Veo models for more control over the output. This example uses google/veo-3.1-generate-001, which supports image-to-video.

Learn More

For more examples and detailed configuration options for Veo models, check out the Video Generation Documentation. You can also find simple getting started scripts with the Video Generation Quick Start.

Read more

Walter Korman Jeremy Philemon Matt Lenhard Jerilyn Zheng
https://vercel.com/changelog/access-billing-usage-cost-data-api Access billing usage and cost data via API 2026-02-19T13:00:00.000Z

Vercel now supports programmatic access to billing usage and cost data through the API and CLI. The new /billing/charges endpoint returns data in the FOCUS v1.3 open-standard format, allowing teams to ingest cost data into FinOps tools without custom transformation logic.

The endpoint supports 1-day granularity with a maximum date range of one year. Responses are streamed as newline-delimited JSON (JSONL) to handle large datasets efficiently.
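Since the response is newline-delimited JSON, each line is an independent record. A minimal consumer can look like the following; the column names (ServiceName, BilledCost) are FOCUS-style examples, not a guarantee of the exact fields the endpoint returns:

```typescript
// Parse a JSONL body into an array of records, skipping blank lines.
export function parseJsonl(body: string): Array<Record<string, unknown>> {
  return body
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Illustrative sample with FOCUS-style column names.
const sample = [
  '{"ServiceName":"Edge Requests","BilledCost":2}',
  '{"ServiceName":"Function Duration","BilledCost":3}',
].join("\n");

const rows = parseJsonl(sample);
const total = rows.reduce((sum, row) => sum + (row.BilledCost as number), 0);
console.log(total); // 5
```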

SDK usage

curl usage

CLI usage

For quick introspection, the vercel usage command displays billing usage for the current period or a custom date range, including credit usage and costs for each service.

View usage for the current billing period

View usage for a custom date range

Vantage has also released a native integration that connects Vercel teams to Vantage accounts. This automatically syncs usage and cost data alongside other tools, simplifying cost observability. Read the Vantage announcement blog for details.

Learn more in the API documentation and CLI reference.

Read more

Shar Dara Mingchung Xia
https://vercel.com/changelog/streamdown-2-3 Streamdown 2.3 — Refreshed design and interactive playground 2026-02-19T13:00:00.000Z

Streamdown 2.3 enhances design consistency by applying a unified nested-card design to tables, code blocks, and Mermaid diagrams. Action buttons now remain sticky during scroll, and code blocks render plain text immediately to reduce perceived latency before syntax highlighting loads.

To accelerate testing, the new interactive playground supports real-time execution with custom markdown and editable props. This enables faster experimentation with configuration changes without spinning up a local project.

New hooks and utilities provide improved control over rendering. The useIsCodeFenceIncomplete hook detects in-progress fenced code blocks during streaming. Tables now support copying as Markdown, and a new HTML indentation normalization property handles inconsistent whitespace in raw input. Image rendering also includes improved error handling with custom messaging.

Documentation has been reorganized for easier reference. Plugin documentation for CJK, Math, and Mermaid is now consolidated into dedicated pages, and the redesigned homepage links directly to templates for faster onboarding.

This release also resolves issues with nested HTML block parsing, custom tag handling, Mermaid diagram artifacts, and Shiki syntax engine inconsistencies. Streamdown 2.3 ships with a fully cleared bug backlog.

Read the documentation for more information.

Read more

Hayden Bleasel
https://vercel.com/changelog/gemini-3-1-pro-is-live-on-ai-gateway Gemini 3.1 Pro is live on AI Gateway 2026-02-19T13:00:00.000Z

Gemini 3.1 Pro Preview from Google is now available on AI Gateway.

This model release brings quality improvements across software engineering and agentic workflows, with enhanced usability for real-world tasks in finance and spreadsheet applications. Gemini 3.1 Pro Preview introduces more efficient thinking across use cases, reducing token consumption while maintaining performance.

To use this model, set model to google/gemini-3.1-pro-preview in the AI SDK. This model supports the medium thinking level for finer control over the trade-offs between cost, performance, and speed.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/private-storage-for-vercel-blob-now-available-in-public-beta Private storage for Vercel Blob, now available in public beta 2026-02-19T13:00:00.000Z

Vercel Blob now supports private storage for sensitive files like contracts, invoices, and internal reports. Private storage requires authentication for all operations, preventing exposure via public URLs.

Public storage allows public reads for media assets, while private storage requires authentication.

Create a private store via the Storage dashboard or with the CLI:

CLI command

When created inside a linked Vercel project, the CLI prompts you to connect the store, automatically adding the BLOB_READ_WRITE_TOKEN environment variable. The SDK uses this variable to authenticate operations in your deployments.

SDK installation

To upload, use put or upload with the access: 'private' option.

Upload example

To download, use the get method to stream files.

Retrieval example

Private storage is in beta on all plans with standard Vercel Blob pricing.

Learn more about private storage.

Read more

Agustin Falco Vincent Voyer Priyanka Jindal
https://vercel.com/blog/we-ralph-wiggumed-webstreams-to-make-them-10x-faster We Ralph Wiggumed WebStreams to make them 10x faster 2026-02-18T13:00:00.000Z

When we started profiling Next.js server rendering earlier this year, one thing kept showing up in the flamegraphs: WebStreams. Not the application code running inside them, but the streams themselves. The Promise chains, the per-chunk object allocations, the microtask queue hops. After Theo Browne's server rendering benchmarks highlighted how much compute time goes into framework overhead, we started looking at where that time actually goes. A lot of it was in streams.

Turns out that WebStreams have an incredibly complete test suite, and that makes them a great candidate for doing an AI-based re-implementation in a purely test-driven and benchmark-driven fashion. This post is about the performance work we did, what we learned, and how this work is already making its way into Node.js itself through Matteo Collina's upstream PR.

The problem

Node.js has two streaming APIs. The older one (stream.Readable, stream.Writable, stream.Transform) has been around for over a decade and is heavily optimized. Data moves through C++ internals. Backpressure is a boolean. Piping is a single function call.

The newer one is the WHATWG Streams API: ReadableStream, WritableStream, TransformStream. This is the web standard. It powers fetch() response bodies, CompressionStream, TextDecoderStream, and increasingly, server-side rendering in frameworks like Next.js and React.

The web standard is the right API to converge on. But on the server, it is slower than it needs to be.

To understand why, consider what happens when you call reader.read() on a native WebStream in Node.js. Even if data is already sitting in the buffer:

  1. A ReadableStreamDefaultReadRequest object is allocated with three callback slots

  2. The request is enqueued into the stream's internal queue

  3. A new Promise is allocated and returned

  4. Resolution goes through the microtask queue

That is four allocations and a microtask hop to return data that was already there. Now multiply that by every chunk flowing through every transform in a rendering pipeline.

Or consider pipeTo(). Each chunk passes through a full Promise chain: read, write, check backpressure, repeat. A {value, done} result object is allocated per read. Error propagation creates additional Promise branches.

None of this is wrong. These guarantees matter in the browser where streams cross security boundaries, where cancellation semantics need to be airtight, where you do not control both ends of a pipe. But on the server, when you are piping React Server Components through three transforms at 1KB chunks, the cost adds up.

We benchmarked native WebStream pipeThrough at 630 MB/s for 1KB chunks. Node.js pipeline() with the same passthrough transform: ~7,900 MB/s. That is a 12x gap, and the difference is almost entirely Promise and object allocation overhead.

What we built

We have been working on a library called fast-webstreams that implements the WHATWG ReadableStream, WritableStream, and TransformStream APIs backed by Node.js streams internally. Same API, same error propagation, same spec compliance. The overhead is removed for the common cases.

The core idea is to route operations through different fast paths depending on what you are actually doing:

When you pipe between fast streams: zero Promises

This is the biggest win. When you chain pipeThrough and pipeTo between fast streams, the library does not start piping immediately. Instead, it records upstream links:

source → transform1 → transform2 → ...

When pipeTo() is called at the end of the chain, it walks upstream, collects the underlying Node.js stream objects, and issues a single pipeline() call. One function call. Zero Promises per chunk. Data flows through Node's optimized C++ path.
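The deferred-pipe idea can be reduced to a toy: record each hop instead of piping immediately, then drive the whole chain with one pipeline() call so no per-chunk Promises are created in userland. This is a simplified sketch, not the fast-webstreams implementation:

```typescript
import { pipeline } from "node:stream/promises";
import { Readable, Transform, Writable } from "node:stream";

// Record the links of a pipe chain lazily; resolve them all at once.
class LazyChain {
  private stages: (Readable | Transform)[];
  constructor(source: Readable) {
    this.stages = [source];
  }
  through(transform: Transform): this {
    this.stages.push(transform); // just record the link, no piping yet
    return this;
  }
  async to(destination: Writable): Promise<void> {
    // One pipeline() call drives the entire chain through Node's
    // optimized internals.
    await pipeline([...this.stages, destination]);
  }
}

// Usage: an uppercase transform between a source and a collecting sink.
const upper = new Transform({
  transform(chunk, _enc, done) {
    done(null, chunk.toString().toUpperCase());
  },
});
let collected = "";
const sink = new Writable({
  write(chunk, _enc, done) {
    collected += chunk.toString();
    done();
  },
});
await new LazyChain(Readable.from(["hel", "lo"])).through(upper).to(sink);
console.log(collected); // "HELLO"
```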

The result: ~6,200 MB/s. That is ~10x faster than native WebStreams and close to raw Node.js pipeline performance.

If any stream in the chain is not a fast stream (say, a native CompressionStream), the library falls back to either native pipeThrough or a spec-compliant pipeTo implementation.

When you read chunk by chunk: synchronous resolution

When you call reader.read(), the library tries nodeReadable.read() synchronously. If data is there, you get Promise.resolve({value, done}). No event loop round-trip. No request object allocation. Only when the buffer is empty does it register a listener and return a pending Promise.

The result: ~12,400 MB/s, or 3.7x faster than native.
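The same fast path can be modeled in a few lines: read() resolves immediately from an internal buffer, and only parks a pending Promise when the buffer is empty. A toy sketch, not the library's actual code:

```typescript
type ReadResult<T> = { value: T | undefined; done: boolean };

class FastReader<T> {
  private buffer: T[] = [];
  private waiter: ((r: ReadResult<T>) => void) | null = null;
  private closed = false;

  enqueue(chunk: T): void {
    if (this.waiter) {
      // A reader is already waiting: hand the chunk over directly.
      const resolve = this.waiter;
      this.waiter = null;
      resolve({ value: chunk, done: false });
    } else {
      this.buffer.push(chunk);
    }
  }

  close(): void {
    this.closed = true;
    if (this.waiter) {
      const resolve = this.waiter;
      this.waiter = null;
      resolve({ value: undefined, done: true });
    }
  }

  read(): Promise<ReadResult<T>> {
    if (this.buffer.length > 0) {
      // Fast path: no request object, no extra event loop round-trip.
      return Promise.resolve({ value: this.buffer.shift(), done: false });
    }
    if (this.closed) {
      return Promise.resolve({ value: undefined, done: true });
    }
    // Slow path: park a single waiter until enqueue() or close() fires.
    return new Promise((resolve) => (this.waiter = resolve));
  }
}

// Buffered data resolves without waiting on producer events.
const reader = new FastReader<string>();
reader.enqueue("chunk-1");
console.log(await reader.read()); // resolves to value "chunk-1", done false
```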

The React Flight pattern: where the gap is largest

This is the one that matters most for Next.js. React Server Components use a specific byte stream pattern: create a ReadableStream with type: 'bytes', capture the controller in start(), enqueue chunks externally as the render produces them.

Native WebStreams: ~110 MB/s. fast-webstreams: ~1,600 MB/s. That is 14.6x faster for the exact pattern used in production server rendering.

The speed comes from LiteReadable, a minimal array-based buffer we wrote to replace Node.js's Readable for byte streams. It uses direct callback dispatch instead of EventEmitter, supports pull-based demand and BYOB readers, and costs about 5 microseconds less per construction. That matters when React Flight creates hundreds of byte streams per request.

Fetch response bodies: streams you don't construct yourself

The examples above all start with new ReadableStream(...). But on the server, most streams do not start that way. They start from fetch(). The response body is a native byte stream owned by Node.js's HTTP layer. You cannot swap it out.

This is a common pattern in server-side rendering: fetch data from an upstream service, pipe the response through one or more transforms, and forward the result to the client.

With native WebStreams, each hop in this chain pays the full Promise-per-chunk cost. Three transforms means roughly 6-9 Promises per chunk. At 1KB chunks, that gets you ~260 MB/s.

The library handles this through deferred resolution. When patchGlobalWebStreams() is active, Response.prototype.body returns a lightweight fast shell wrapping the native byte stream. Calling pipeThrough() does not start piping immediately. It just records the link. Only when pipeTo() or getReader() is called at the end does the library resolve the full chain: it creates a single bridge from the native reader into Node.js pipeline() for the transform hops, then serves reads from the buffered output synchronously.

The cost model: one Promise at the native boundary to pull data in. Zero Promises through the transform chain. Sync reads at the output.

The result: ~830 MB/s, or 3.2x faster than native for the three-transform fetch pattern. For simple response forwarding without transforms, it is 2.0x faster (850 vs 430 MB/s).

Benchmarks

All numbers are throughput in MB/s at 1KB chunks on Node.js v22. Higher is better.

Core operations

| Operation | Node.js streams | fast | native | fast vs native |
| --- | --- | --- | --- | --- |
| read loop | 26,400 | 12,400 | 3,300 | 3.7x |
| write loop | 26,500 | 5,500 | 2,300 | 2.4x |
| pipeThrough | 7,900 | 6,200 | 630 | 9.8x |
| pipeTo | 14,000 | 2,500 | 1,400 | 1.8x |
| for-await-of | n/a | 4,100 | 3,000 | 1.4x |

Transform chains

The Promise-per-chunk overhead compounds with chain depth:

| Depth | fast | native | fast vs native |
| --- | --- | --- | --- |
| 3 transforms | 2,900 | 300 | 9.7x |
| 8 transforms | 1,000 | 115 | 8.7x |

Byte streams

| Pattern | fast | native | fast vs native |
| --- | --- | --- | --- |
| start + enqueue (React Flight) | 1,600 | 110 | 14.6x |
| byte read loop | 1,400 | 1,400 | 1.0x |
| byte tee | 1,200 | 750 | 1.6x |

Response body patterns

| Pattern | fast | native | fast vs native |
| --- | --- | --- | --- |
| Response.text() | 900 | 910 | 1.0x |
| Response forwarding | 850 | 430 | 2.0x |
| fetch → 3 transforms | 830 | 260 | 3.2x |

Stream construction

Creating streams is also faster, which matters for short-lived streams:

| Type | fast | native | fast vs native |
| --- | --- | --- | --- |
| ReadableStream | 2,100 | 980 | 2.1x |
| WritableStream | 1,300 | 440 | 3.0x |
| TransformStream | 470 | 220 | 2.1x |

Spec compliance

fast-webstreams passes 1,100 out of 1,116 Web Platform Tests. Node.js's native implementation passes 1,099. The 16 failures that remain are either shared with native (like the unimplemented type: 'owning' transfer mode) or are architectural differences that do not affect real applications.

How we are deploying this

The library can patch the global ReadableStream, WritableStream, and TransformStream constructors:
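Opting in is a one-time bootstrap step. The package and function names below come from this post (experimental-fast-webstreams, patchGlobalWebStreams); check the package README for the exact entry point:

```typescript
// Bootstrap fragment: must run before any code captures references
// to the global stream constructors.
import { patchGlobalWebStreams } from "experimental-fast-webstreams";

patchGlobalWebStreams();

// From here on, ReadableStream, WritableStream, TransformStream, and
// Response.prototype.body route through the fast implementations.
```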

The patch also intercepts Response.prototype.body to wrap native fetch response bodies in fast stream shells, so fetch() → pipeThrough() → pipeTo() chains hit the pipeline fast path automatically.

At Vercel, we are looking at rolling this out across our fleet. We will do so carefully and incrementally. Streaming primitives sit at the foundation of request handling, response rendering, and compression. We are starting with the patterns where the gap is largest: React Server Component streaming, response body forwarding, and multi-transform chains. We will measure in production before expanding further.

The right fix is upstream

A userland library should not be the long-term answer here. The right fix is in Node.js itself.

Work is already happening. After a conversation on X, Matteo Collina submitted nodejs/node#61807, "stream: add fast paths for webstreams read and pipeTo." The PR applies two ideas from this project directly to Node.js's native WebStreams:

  1. read() fast path: When data is already buffered, return a resolved Promise directly without creating a ReadableStreamDefaultReadRequest object. This is spec-compliant because read() returns a Promise either way, and resolved promises still run callbacks in the microtask queue.

  2. pipeTo() batch reads: When data is buffered, batch multiple reads from the controller queue without creating per-chunk request objects. Backpressure is respected by checking desiredSize after each write.

The PR shows ~17-20% faster buffered reads and ~11% faster pipeTo. These improvements benefit every Node.js user for free. No library to install, no patching, no risk.

James Snell's Node.js performance issue #134 outlines several additional opportunities: C++-level piping for internally-sourced streams, lazy buffering, eliminating double-buffering in WritableStream adapters. Each of these could close the gap further.

We will keep contributing ideas upstream. The goal is not for fast-webstreams to exist forever. The goal is for WebStreams to be fast enough that it does not need to.

What we learned the hard way

The spec is smarter than it looks. We tried many shortcuts. Almost every one of them broke a Web Platform Test, and the test was usually right. The ReadableStreamDefaultReadRequest pattern, the Promise-per-read design, the careful error propagation: they exist because cancellation during reads, error identity through locked streams, and thenable interception are real edge cases that real code hits.

Promise.resolve(obj) always checks for thenables. This is a language-level behavior you cannot avoid. If the object you resolve with has a .then property, the Promise machinery will call it. Some WPT tests deliberately put .then on read results and verify that the stream handles it correctly. We had to be very careful about where {value, done} objects get created in hot paths.
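The behavior is easy to demonstrate: Promise.resolve() unwraps anything with a callable .then, so a read result that happens to carry one gets intercepted by the Promise machinery instead of being delivered as-is.

```typescript
// A {value, done}-shaped object with a user-defined .then method.
let intercepted = false;
const readResult = {
  value: "chunk",
  done: false,
  then(resolve: (v: string) => void) {
    intercepted = true;
    resolve("hijacked");
  },
};

// Promise.resolve() calls readResult.then instead of fulfilling with
// the object itself.
const settled = await Promise.resolve(readResult);
console.log(intercepted, settled); // intercepted is true, settled is "hijacked"
```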

Node.js pipeline() cannot replace WHATWG pipeTo. We hoped to use pipeline() for all piping. It causes 72 WPT failures. The error propagation, stream locking, and cancellation semantics are fundamentally different. pipeline() is only safe when we control the entire chain, which is why we collect upstream links and only use it for full fast-stream chains.

Reflect.apply, not .call(). The WPT suite monkey-patches Function.prototype.call and verifies that implementations do not use it to invoke user-provided callbacks. Reflect.apply is the only safe way. This is a real spec requirement.
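A WPT-style reproduction makes the requirement concrete: monkey-patch Function.prototype.call, and confirm that Reflect.apply still invokes a user callback safely.

```typescript
const originalCall = Function.prototype.call;
// Simulate the WPT trap: any use of .call() on a callback now throws.
(Function.prototype as any).call = () => {
  throw new Error("implementation used Function.prototype.call");
};

const userCallback = (x: number) => x * 2;
let result = 0;
try {
  // userCallback.call(undefined, 21) would throw here; Reflect.apply
  // never consults the (patched) Function.prototype.
  result = Reflect.apply(userCallback, undefined, [21]);
} finally {
  (Function.prototype as any).call = originalCall; // always restore
}
console.log(result); // 42
```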

We built most of fast-webstreams with AI

Two things made that viable:

The amazing Web Platform Tests gave us 1,116 tests as an immediate, machine-checkable answer to "did we break anything?" And we built a benchmark suite early on so we could measure whether each change actually moved throughput. The development loop was: implement an optimization, run the WPT suite, run benchmarks. When tests broke, we knew which spec invariant we had violated. When benchmarks did not move, we reverted.

The WHATWG Streams spec is long and dense. The interesting optimization opportunities sit in the gap between what the spec requires and what current implementations do. read() must return a Promise, but nothing says that Promise cannot already be resolved when data is buffered. That kind of observation is straightforward when you can ask an AI to analyze algorithm steps for places where the observable behavior can be preserved with fewer allocations.

Try it

fast-webstreams is available on npm as experimental-fast-webstreams. The "experimental" prefix is intentional. We are confident in correctness, but this is an area of active development.

If you are building a server-side JavaScript framework or runtime and hitting WebStreams performance limits, we would love to hear from you. And if you are interested in improving WebStreams in Node.js itself, Matteo's PR is a great place to start.

Read more

Malte Ubl
https://vercel.com/changelog/redesigned-search-and-filtering-for-runtime-logs Redesigned search and filtering for runtime logs 2026-02-18T13:00:00.000Z

The Runtime Logs search bar in your project dashboard has been redesigned to make filtering and exploring your logs faster and more intuitive.

  • Structured filters. When you type a filter like level:error or status:500, the search bar parses it into a visual pill you can read at a glance and remove with a click. Complex queries with multiple filters become easy to scan and edit without retyping anything

  • Smarter suggestions. As you type, the search bar suggests filter values based on your actual log data. Recent queries are saved per-project and appear at the top, so you can rerun common searches without retyping them

  • Better input handling. The search bar validates your filters as you type and flags errors with a tooltip so you can fix typos before running a search. Pasting a Vercel Request ID automatically converts it into a filter

These improvements are available now in your project dashboard. Learn more about runtime logs.

Read more

Luc Leray Vincent Voyer Timo Lins
https://vercel.com/blog/how-stably-ships-AI-testing-agents-in-hours-not-weeks How Stably ships AI testing agents in hours, not weeks 2026-02-17T13:00:00.000Z

How Stably's six-person team ships AI testing agents faster with Vercel, going from weeks to hours. Their shift shows how the platform removes infrastructure anxiety, accelerating autonomous testing and enabling rapid enterprise growth.

Read more

Alli Pope
https://vercel.com/changelog/automatic-build-fix-suggestions-with-vercel-agent Automatic build fix suggestions with Vercel Agent 2026-02-17T13:00:00.000Z

You can now get automatic code-fix suggestions for broken builds from the Vercel Agent, directly in GitHub pull request reviews or in the Vercel Dashboard.

When the Vercel Agent reviews your pull request, it now scans your deployments for build errors, and when it detects failures it automatically suggests a code fix based on your code and build logs.

In addition, Vercel Agent can automatically suggest code fixes inside the Vercel dashboard whenever a build error is detected, and can propose the change as a GitHub pull request for review before you merge.

Get started with Vercel Agent code review in the Agent dashboard, or learn more in the documentation.

Read more

Dan Fox John Phamous Marcos Grappeggia Tom Dale Julian Benegas
https://vercel.com/changelog/automated-security-audits-now-available-for-skills-sh Automated security audits now available for skills.sh 2026-02-17T13:00:00.000Z

Skills on skills.sh now have automated security audits to help developers use skills with confidence.

Working with our partners Gen, Socket, and Snyk, we produce independent security reports that let us rapidly audit over 60,000 skills and counting.

Skills.sh provides greater ecosystem support with:

  • Transparent results: Security audits appear publicly on each skill's detail page.

  • Leaderboard protection: Skills flagged as malicious are automatically hidden from the leaderboard and search results. If you navigate directly to a flagged skill, a warning note appears before installation.

  • Security validation: As of [email protected], adding skills clearly displays audit results and risk levels before installation.

Learn more at skills.sh.

Read more

Andrew Qu Liz Hurder
https://vercel.com/changelog/recraft-v4-on-ai-gateway Recraft V4 on AI Gateway 2026-02-17T13:00:00.000Z

Recraft V4 is now available on AI Gateway.

A text-to-image model built for professional design and marketing use cases, V4 was developed with input from working designers. The model improves photorealism, with realistic skin, natural textures, and fewer synthetic artifacts. It also produces images with clean lighting and varied composition. For illustration, the model can generate original characters with less predictable color palettes.

There are 2 versions:

  • V4: Faster and more cost-efficient, suited for everyday work and iteration

  • V4 Pro: Generates higher-resolution images for print-ready assets and large-scale use

To use this model, set model to recraft/recraft-v4-pro or recraft/recraft-v4 in the AI SDK:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/vercel-sandbox-snapshots-now-allow-custom-retention-periods Vercel Sandbox snapshots now allow custom retention periods 2026-02-17T13:00:00.000Z

Snapshots created with Vercel Sandbox now have configurable expiration, instead of the previous 7-day limit, along with higher defaults.

The expiration can be set anywhere from 1 day to no expiration at all. If not provided, the default snapshot expiration is 30 days.
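A sketch of what that could look like in the SDK; the `snapshot()` method name and `expiration` option are assumptions here, so check the snapshot docs for the real shape:

```typescript
// 90 days, in milliseconds.
export const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

// `snapshot()` and its `expiration` option are assumed names, shown via a
// structural type rather than the real Sandbox class.
export async function snapshotWithRetention(sandbox: {
  snapshot: (opts: { expiration: number }) => Promise<unknown>;
}) {
  // Anything from 1 day up to no expiration is allowed; unset defaults to 30 days.
  return sandbox.snapshot({ expiration: NINETY_DAYS_MS });
}
```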

You can also configure this in the CLI.

Read the documentation to learn more about snapshots.

Read more

Tom Lienard Harpreet Arora Luke Phillips-Sheard
https://vercel.com/changelog/claude-sonnet-4-6-is-live-on-ai-gateway Claude Sonnet 4.6 is live on AI Gateway 2026-02-17T13:00:00.000Z

Claude Sonnet 4.6 from Anthropic is now available on AI Gateway with a 1M-token context window.

Sonnet 4.6 approaches Opus-level intelligence with strong improvements in agentic coding, code review, frontend UI quality, and computer use accuracy. The model proactively executes tasks, delegates to subagents, and parallelizes tool calls, with MCP support for scaled tool use. As a hybrid reasoning model, Sonnet 4.6 delivers both near-instant responses and extended thinking within the same model.

To use this model, set model to anthropic/claude-sonnet-4.6 in the AI SDK. The model supports the effort parameter and the adaptive thinking type:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/improved-streaming-runtime-logs-exports Improved streaming runtime logs exports 2026-02-17T13:00:00.000Z

With runtime logs, you can view and export your logs. Exports now stream directly to the browser: your download starts immediately, and you can continue to use the Vercel dashboard while the export runs in the background. This eliminates the need to wait for large files to buffer.

Additionally, we've added two new options: you can now export exactly what's on your screen, or all requests matching your current search.

All plans can export up to 10,000 requests per export, and Observability Plus subscribers can export up to 100,000 requests.

Exported log data is now indexed and limited by request, so exports stay consistent with the Runtime Logs dashboard and match the filtered requests shown there.

Learn more about runtime logs.

Read more

Vincent Voyer
https://vercel.com/changelog/qwen-3-5-plus-is-on-ai-gateway Qwen 3.5 Plus is on AI Gateway 2026-02-16T13:00:00.000Z

Qwen 3.5 Plus is now available on AI Gateway.

The model comes with a 1M context window and built-in adaptive tool use. Qwen 3.5 Plus excels at agentic workflows, thinking, searching, and using tools across multimodal contexts, making it well-suited for web development, frontend tasks, and turning instructions into working code. Compared to Qwen 3 VL, it delivers stronger performance in scientific problem solving and visual reasoning tasks.

To use this model, set model to alibaba/qwen3.5-plus in the AI SDK:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/stale-if-error-cache-control-header-is-now-supported Stale-if-error cache-control directive now supported for all responses 2026-02-13T13:00:00.000Z

Vercel CDN now supports the stale-if-error directive in Cache-Control headers, enabling more resilient caching behavior during origin failures.

You can now use the stale-if-error directive to specify how long (in seconds) a stale cached response can still be served if a request to the origin fails. When this directive is present and the origin returns an error, the CDN may serve a previously cached response instead of returning the error to the client. Stale responses may be served for errors like 500 Internal Server Errors, network failures, or DNS errors.
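For example, a route handler might opt in like this sketch (directive values are illustrative):

```typescript
// Cache for 60s at the edge; if the origin later errors, keep serving
// the stale copy for up to a day instead of surfacing the failure.
export const CACHE_CONTROL = 's-maxage=60, stale-if-error=86400';

export function GET(): Response {
  return new Response(JSON.stringify({ status: 'ok' }), {
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': CACHE_CONTROL,
    },
  });
}
```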

This allows applications to remain available and respond gracefully when upstream services are temporarily unavailable.

Read the stale-if-error documentation to learn more.

Read more

Shraddha Agarwal Luba Kravchenko
https://vercel.com/changelog/browserbase-joins-the-vercel-agent-marketplace Browserbase joins the Vercel Agent Marketplace 2026-02-12T13:00:00.000Z

Browserbase is now available on the Vercel Marketplace, allowing teams to run browser automation for AI agents without managing infrastructure.

This integration connects agents to remote browsers over the Chrome DevTools Protocol (CDP), enabling workflows that require interacting with real websites, such as signing in to dashboards, filling out forms, or navigating dynamic pages.

With this one-click integration, teams benefit from unified billing and infrastructure designed for long-lived, stateful sessions. Key capabilities include:

  • Install and connect with a single API key

  • Connect agents to remote browsers over CDP

  • Reduce operational complexity for browser-based agent workflows

  • Work with Vercel Sandbox and AI Gateway

Also available today is support for Web Bot Auth for Browserbase, enabling agents to reliably browse Vercel-hosted deployments without interruption from security layers.

Get started with Browserbase on the Vercel Marketplace or try this example to see it in action.

Read more

Tony Pan Zack Balda Hedi Zandi
https://vercel.com/changelog/use-minimax-m2-5-on-ai-gateway Use MiniMax M2.5 on AI Gateway 2026-02-12T13:00:00.000Z

MiniMax M2.5 is now available on AI Gateway.

M2.5 plans before it builds, breaking down functions, structure, and UI design before writing code. It handles full-stack projects across Web, Android, iOS, Windows, and Mac, covering the entire development lifecycle from initial system design through code review. Compared to M2.1, it adapts better to unfamiliar codebases and uses fewer search rounds to solve problems.

To use this model, set model to minimax/minimax-m2.5 in the AI SDK:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/new-deployments-with-vulnerable-versions-of-next-mdx-remote-are-now-blocked-by-default New deployments with vulnerable versions of the third-party package next-mdx-remote are now blocked by default 2026-02-12T13:00:00.000Z

Any new deployment containing a version of the third-party package next-mdx-remote that is vulnerable to CVE-2026-0969 will now automatically fail to deploy on Vercel.

We strongly recommend upgrading to a patched version regardless of your hosting provider.

This automatic protection can be disabled by setting the DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2026_0969=1 environment variable on your Vercel project. Learn more

Read more

Tom Knickman
https://vercel.com/changelog/glm-5-is-live-on-ai-gateway GLM-5 is live on AI Gateway 2026-02-11T13:00:00.000Z

You can now access GLM-5 from Z.AI via AI Gateway with no other provider accounts required.

Compared to GLM-4.7, GLM-5 adds multiple thinking modes, improved long-range planning and memory, and better handling of complex multi-step agent tasks. It's particularly strong at agentic coding, autonomous tool use, and extracting structured data from documents like contracts and financial reports.

To use this model, set model to zai/glm-5 in the AI SDK:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/advanced-egress-firewall-filtering-for-vercel-sandbox Advanced egress firewall filtering for Vercel Sandbox 2026-02-11T13:00:00.000Z

Vercel Sandbox can now enforce egress network policies through Server Name Indication (SNI) filtering and CIDR blocks, giving you control over which hosts a sandbox can reach. Outbound TLS connections are matched against your policy at the handshake; unauthorized destinations are rejected before any data is transmitted.

By default, sandboxes have unrestricted internet access. When running untrusted or AI-generated code, you can lock down the network to only the services your workload actually needs. A compromised or hallucinated code snippet cannot exfiltrate data or make unintended API calls; traffic to any domain not on your allowlist is blocked.

Going beyond IP-based rules to host-based

The modern internet runs on hostnames, not IP addresses: a handful of addresses can serve thousands of domains. Traditional IP-based firewall rules can't precisely distinguish between them.

Host-based egress control typically requires an HTTP proxy, but that breaks non-HTTP protocols like Redis and Postgres. Instead, we built an SNI-peeking firewall that inspects the initial unencrypted bytes of a TLS handshake to extract the target hostname. Since nearly all internet traffic is TLS-encrypted today, this covers all relevant cases. For legacy or non-TLS systems, we do also support IP/CIDR-based rules as a fallback.

Restrict to specific hosts at creation

Define which domains the sandbox can reach. Everything else is denied by default. Wildcard support makes it easy to allowlist services behind CDNs:
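A sketch of a deny-by-default policy at creation time; the option names (`networkPolicy`, `allowedHosts`, `allowedCidrs`) are assumptions, so see the docs for the exact shape:

```typescript
// Deny-by-default egress policy; everything not listed is blocked.
export const policy = {
  allowedHosts: [
    'api.openai.com',
    '*.githubusercontent.com', // wildcard for services behind CDNs
  ],
  // IP/CIDR fallback for legacy or non-TLS destinations:
  allowedCidrs: ['203.0.113.0/24'],
};

// Hypothetical usage at creation:
// const sandbox = await Sandbox.create({ networkPolicy: policy });
```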

Adjust after initial setup

Policies can be updated dynamically on a running sandbox without restarting the process. Start with full internet access to install dependencies, lock it down before executing untrusted code, reopen to stream results after user approval, and then air gap again with deny-all, fully in one session:
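The open, lock down, reopen, air-gap flow might be sketched like this (the `updateNetworkPolicy` method and policy fields are assumptions, modeled here with a structural type):

```typescript
type PolicySandbox = {
  updateNetworkPolicy: (policy: Record<string, unknown>) => Promise<void>;
};

export async function runUntrustedSafely(
  sandbox: PolicySandbox,
  installDeps: () => Promise<void>,
  runCode: () => Promise<void>,
  streamResults: () => Promise<void>,
) {
  await sandbox.updateNetworkPolicy({ allowAll: true }); // open: install dependencies
  await installDeps();
  await sandbox.updateNetworkPolicy({ allowedHosts: [] }); // lock down before untrusted code
  await runCode();
  await sandbox.updateNetworkPolicy({ allowedHosts: ['api.example.com'] }); // reopen after approval
  await streamResults();
  await sandbox.updateNetworkPolicy({ denyAll: true }); // air-gap again
}
```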

Read the documentation to learn more about network egress firewall policies, available on all plans.

Read more

Valerian Roche Rob Herley
https://vercel.com/changelog/vercel-flags-is-now-in-public-beta Vercel Flags is now in public beta 2026-02-11T13:00:00.000Z

Vercel Flags is a feature flag provider built into the Vercel platform. It lets you create and manage feature flags with targeting rules, user segments, and environment controls directly in the Vercel Dashboard.

The Flags SDK provides a framework-native way to define and use these flags within Next.js and SvelteKit applications, integrating directly with your existing codebase:

And you can use them within your pages like:

For teams using other frameworks or custom backends, the Vercel Flags adapter supports the OpenFeature standard, allowing you to combine feature flags across various systems and maintain consistency in your flag management approach:

Vercel Flags is priced at $30 per 1 million flag requests ($0.00003 per event), where a flag request is any request to your application that reads the underlying flags configuration. A single request evaluating multiple feature flags of the same source project still counts as one flag request.

Vercel Flags is now in beta and available to teams on all plans.

Learn more about Vercel Flags to get started with feature flag management.

Read more

Dominik Ferber Luis Meyer Andy Schneider Vincent Derks William Bout Chris Widmaier
https://vercel.com/changelog/sign-in-with-apple-support Support for Sign in with Apple 2026-02-10T13:00:00.000Z

The login experience now supports Sign in with Apple, enabling faster access for users with Apple accounts.

If your Apple account uses an Apple email (@icloud.com, @mac.com, @me.com, etc.) that matches your Vercel account's email, you can use the Apple button from the login screen and your accounts will be automatically linked.

If the emails don't match, you can manually connect your Apple account from your account settings once logged in.

Read more

Mark Roberts Javier Bórquez
https://vercel.com/changelog/vercel-logs-cli-command-now-optimized-for-agents-with-historical-log vercel logs CLI command now optimized for agents with historical log querying 2026-02-10T13:00:00.000Z

The vercel logs command has been rebuilt with more powerful querying capabilities, designed with agent workflows in mind. You can now query historical logs across your projects and filter by specific criteria, such as project, deploymentID, requestID, and arbitrary strings, to find exactly what you need.

The updated command uses git context by default, automatically scoping logs to your current repository when run from a project directory. This makes it easy to debug issues during development without manually specifying project details.

Whether you're debugging a production issue or building automated monitoring workflows, the enhanced filtering gives you precise control over log retrieval across your Vercel projects.

Learn about Vercel CLI and vercel logs command.

Read more

Adrian Cooney
https://vercel.com/changelog/agents-can-now-access-runtime-logs-with-vercels-mcp-server Agents can now access runtime logs with Vercel's MCP server 2026-02-10T13:00:00.000Z

Agents can now access runtime logs through Vercel's MCP server.

The get_runtime_logs tool lets agents retrieve Runtime Logs for a project or deployment. Runtime logs include logs generated by Vercel Functions invocations in preview and production deployments, including function output and console.log messages.

This enables agents to:

  • debug failing requests

  • inspect function output

  • search logs for specific errors or request IDs

  • investigate runtime behavior across deployments

Get started with the Vercel MCP server.

Read more

Allen Zhou Adrian Cooney Marcos Grappeggia
https://vercel.com/changelog/posthog-joins-the-vercel-marketplace PostHog joins the Vercel Marketplace 2026-02-10T13:00:00.000Z

PostHog is now available in the Vercel Marketplace as a feature flags, experimentation, and analytics provider.

With this integration, you can now:

  • Declare flags in code using Flags SDK and the @flags-sdk/posthog adapter

  • Toggle features in real time for specific users or cohorts

  • Roll out changes gradually using percentage-based rollouts

  • Run A/B tests to validate impact before a full release

This integration helps teams building on Vercel ship with more confidence. You can test in production, reduce release risk, and make data-driven decisions based on real user behavior, all within your existing Vercel workflows.

Create a flags.ts file with an identify function and a flag check:

Check out the PostHog template to learn more about this integration.

Read more

Marketplace Team
https://vercel.com/blog/how-we-built-aeo-tracking-for-coding-agents How we built AEO tracking for coding agents 2026-02-09T13:00:00.000Z

AI has changed the way that people find information. For businesses, this means it's critical to understand how LLMs search for and summarize their web content.

We're building an AI Engine Optimization (AEO) system to track how models discover, interpret, and reference Vercel and our sites.

This started as a prototype focused only on standard chat models, but we quickly realized that wasn’t enough. To get a complete picture of visibility, we needed to track coding agents.

For standard models, tracking is relatively straightforward. We use AI Gateway to send prompts to dozens of popular models (e.g. GPT, Gemini, and Claude) and analyze their responses, search behavior, and cited sources.

Coding agents, however, behave very differently. Many Vercel users interact with AI through their terminal or IDE while actively working on projects. In early sampling, we found that coding agents perform web searches in roughly 20% of prompts. Because these searches happen inline with real development workflows, it’s especially important to evaluate both response quality and source accuracy.

Measuring AEO for coding agents requires a different approach than model-only testing. Coding agents aren’t designed to answer a single API call. They’re built to operate inside a project and expect a full development environment, including a filesystem, shell access, and package managers.

That creates a new set of challenges:

  1. Execution isolation: How do you safely run an autonomous agent that can execute arbitrary code?

  2. Observability: How do you capture what the agent did when each agent has its own transcript format, tool-calling conventions, and output structure?

The coding agent AEO lifecycle

Coding agents are typically accessed at some level through CLIs rather than APIs. Even if you’re only sending prompts and capturing responses, the CLI still needs to be installed and executed in a full runtime environment.

Vercel Sandbox solves this by providing ephemeral Linux MicroVMs that spin up in seconds. Each agent run gets its own sandbox and follows the same six-step lifecycle, regardless of the CLI it uses.

  1. Create the sandbox. Spin up a fresh MicroVM with the right runtime (Node 24, Python 3.13, etc.) and a timeout. The timeout is a hard ceiling, so if the agent hangs or loops, the sandbox kills it.

  2. Install the agent CLI. Each agent ships as an npm package (e.g., @anthropic-ai/claude-code, @openai/codex). The sandbox installs it globally so it's available as a shell command.

  3. Inject credentials. Instead of giving each agent a direct provider API key, we set environment variables that route all LLM calls through Vercel AI Gateway. This gives us unified logging, rate limiting, and cost tracking across every agent, even though each agent uses a different underlying provider (though the system allows direct provider keys as well).

  4. Run the agent with the prompt. This is the only step that differs per agent. Each CLI has its own invocation pattern, flags, and config format. But from the sandbox's perspective, it's just a shell command.

  5. Capture the transcript. After the agent finishes, we extract a record of what it did, including which tools it called, whether it searched the web, and what it recommended in the response. This is agent-specific (covered below).

  6. Tear down. Stop the sandbox. If anything went wrong, the catch block ensures the sandbox is stopped anyway so we don't leak resources.

In the code, the lifecycle looks like this.

Agents as config

Because the lifecycle is uniform, each agent can be defined as a simple config object. Adding a new agent to the system means adding a new entry, and the sandbox orchestration handles everything else.
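A config entry might look like this sketch (the Claude Code flags shown are illustrative):

```typescript
// Shape of an agent definition; the field names mirror the descriptions below.
type AgentConfig = {
  runtime: 'node22' | 'python3.13';
  setupCommands: string[];
  buildCommand: (prompt: string) => string;
};

export const claudeCode: AgentConfig = {
  runtime: 'node22',
  setupCommands: ['npm install -g @anthropic-ai/claude-code'],
  // CLI flags are illustrative, not the CLI's exact interface.
  buildCommand: (prompt) =>
    `claude -p ${JSON.stringify(prompt)} --output-format stream-json`,
};
```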

runtime determines the base image for the MicroVM. Most agents run on Node, but the system supports Python runtimes too.

setupCommands is an array because some agents need more than a global install. For example, Codex also needs a TOML config file written to ~/.codex/config.toml.

buildCommand is a function that takes the prompt and returns the shell command to run. Each agent's CLI has its own flags and invocation style.

Using the AI Gateway for routing

We wanted to use the AI Gateway to centralize management of cost and logs. This required overriding the provider’s base URLs via environment variables inside the sandbox. The agents themselves don’t know this is happening and operate as if they are talking directly to their provider.

Here’s what this looks like for Claude Code:
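A sketch of those environment variables; the gateway base URL and the auth token variable name are assumptions, so use the values from your AI Gateway setup:

```typescript
// Env vars injected into the sandbox for Claude Code.
export const claudeEnv = {
  // Illustrative gateway endpoint, instead of api.anthropic.com:
  ANTHROPIC_BASE_URL: 'https://ai-gateway.vercel.sh',
  ANTHROPIC_API_KEY: '', // intentionally empty; Gateway authenticates instead
  // Assumed variable for passing the Gateway token through:
  ANTHROPIC_AUTH_TOKEN: process.env.AI_GATEWAY_API_KEY ?? '',
};
```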

ANTHROPIC_BASE_URL points to AI Gateway instead of api.anthropic.com. The agent's HTTP calls go to Gateway, which proxies them to Anthropic.

ANTHROPIC_API_KEY is set to empty string on purpose — Gateway authenticates via its own token, so the agent doesn't need (or have) a direct provider key.

This same pattern works for Codex (override OPENAI_BASE_URL) and any other agent that respects a base URL environment variable. Provider API credentials can also be used directly.

The transcript format problem

Once an agent finishes running in its sandbox, we have a raw transcript, which is a record of everything it did.

The problem is that each agent produces its transcript in a different format. Claude Code writes JSONL files to disk. Codex streams JSON to stdout. OpenCode also uses stdout, but with a different schema. They use different names for the same tools, different nesting structures for messages, and different conventions.

We needed all of this to feed into a single brand pipeline, so we built a four-stage normalization layer:

  1. Transcript capture: Each agent stores its transcript differently, so this step is agent-specific.

  2. Parsing: Each agent has its own parser that normalizes tool names and flattens agent-specific message structures into a single unified event type.

  3. Enrichment: Shared post-processing that extracts structured metadata (URLs, commands) from tool arguments, normalizing differences in how each agent names its args.

  4. Summary and brand extraction: Aggregate the unified events into stats, then feed into the same brand extraction pipeline used for standard model responses.

Stage 1: Transcript capture

This happens while the sandbox is still running (step 5 in the lifecycle from the previous section).

Claude Code writes its transcript as a JSONL file on the sandbox filesystem. We have to find and read it out after the agent finishes:
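A sketch of that read-out; the transcript path and the runCommand/stdout shapes reflect our understanding and may differ, so the Sandbox is modeled with a structural type:

```typescript
export async function readClaudeTranscript(sandbox: {
  runCommand: (cmd: string, args: string[]) => Promise<{ stdout(): Promise<string> }>;
}) {
  // Transcripts typically land under ~/.claude/projects/<project>/<session>.jsonl
  const result = await sandbox.runCommand('sh', [
    '-c',
    'cat ~/.claude/projects/*/*.jsonl',
  ]);
  return result.stdout(); // raw JSONL, one event per line
}
```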

Codex and OpenCode both output their transcripts to stdout, so capture is simpler — filter the output for JSON lines:
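For example, a filter that keeps only lines that parse as JSON:

```typescript
// Codex and OpenCode print events to stdout mixed with other output;
// keep only the lines that are valid JSON objects.
export function extractJsonLines(stdout: string): string[] {
  return stdout.split('\n').filter((line) => {
    const trimmed = line.trim();
    if (!trimmed.startsWith('{')) return false;
    try {
      JSON.parse(trimmed);
      return true;
    } catch {
      return false;
    }
  });
}
```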

The output of this stage is the same for all agents: a string of raw JSONL. But the structure of each JSON line is still completely different per agent, and that's what the next stage handles.

Stage 2: Parsing tool names and message shapes

We built a dedicated parser for each agent that does two things at once: normalizes tool names and flattens agent-specific message structures into a single formatted event type.

Tool name normalization

The same operation has different names across agents:

| Operation | Claude Code | Codex | OpenCode |
| --- | --- | --- | --- |
| Read a file | Read | read_file | read |
| Write a file | Write | write_file | write |
| Edit a file | StrReplace | patch_file | patch |
| Run a command | Bash | shell | bash |
| Search the web | WebFetch | (varies) | (varies) |

Each parser maintains a lookup table that maps agent-specific names to ~10 canonical names:
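For instance, for Codex (the canonical names on the right are illustrative):

```typescript
// One lookup table per agent, mapping native tool names to canonical ones.
export const CODEX_TOOL_NAMES: Record<string, string> = {
  read_file: 'read',
  write_file: 'write',
  patch_file: 'edit',
  shell: 'bash',
};

export function canonicalToolName(
  agentTable: Record<string, string>,
  raw: string,
): string {
  // Fall back to the raw name so unknown tools still flow through.
  return agentTable[raw] ?? raw;
}
```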

Message shape flattening

Beyond naming, the structure of events varies across agents:

  • Claude Code nests messages inside a message property and mixes tool_use blocks into content arrays.

  • Codex has Responses API lifecycle events (thread.started, turn.completed, output_text.delta) alongside tool events.

  • OpenCode bundles tool call + result in the same event via part.tool and part.state.

The parser for each agent handles these structural differences and collapses everything into a single TranscriptEvent type:
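An illustrative shape for that type (the field names are assumptions):

```typescript
export type TranscriptEvent = {
  type: 'tool_call' | 'tool_result' | 'assistant_text' | 'error';
  tool?: string;                  // canonical tool name, e.g. 'bash'
  args?: Record<string, unknown>; // raw, agent-specific arguments
  text?: string;
  timestamp?: number;
};

export const example: TranscriptEvent = {
  type: 'tool_call',
  tool: 'web_fetch',
  args: { url: 'https://vercel.com/docs' },
};
```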

The output of this stage is a flat array of TranscriptEvent[], which is the same shape regardless of which agent produced it.

Stage 3: Enrichment

After parsing, a shared post-processing step runs across all events. This extracts structured metadata from tool arguments so that downstream code doesn't need to know that Claude Code puts file paths in args.path while Codex uses args.file:
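A sketch of one such extractor (args.file_path is an extra guess beyond the two names the text mentions):

```typescript
// Normalize file-path arguments: Claude Code uses args.path, Codex args.file.
export function extractFilePath(
  args: Record<string, unknown>,
): string | undefined {
  const candidate = args.path ?? args.file ?? args.file_path;
  return typeof candidate === 'string' ? candidate : undefined;
}
```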

Stage 4: Summary and brand extraction

The enriched TranscriptEvent[] array gets summarized into aggregate stats (total tool calls by type, web fetches, errors) and then fed into the same brand extraction pipeline used for standard model responses. From this point forward, the system doesn't know or care whether the data came from a coding agent or a model API call.

Orchestration with Vercel Workflow

This entire pipeline runs as a Vercel Workflow. When a prompt is tagged as "agents" type, the workflow fans out across all configured agents in parallel and each gets its own sandbox:
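Simplified, the fan-out looks like this; in production each branch runs as a durable Vercel Workflow step with retries, not a bare Promise.allSettled:

```typescript
type Agent = { name: string; run: (prompt: string) => Promise<string> };

export async function fanOut(agents: Agent[], prompt: string) {
  // Each agent runs in parallel and gets its own sandbox inside run().
  const results = await Promise.allSettled(
    agents.map((agent) => agent.run(prompt)),
  );
  return results.map((r, i) => ({
    agent: agents[i].name,
    ok: r.status === 'fulfilled',
  }));
}
```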

What we’ve learned

  • Coding agents contribute a meaningful amount of traffic from web search. Early tests on a random sample of prompts showed that coding agents execute search around 20% of the time. As we collect more data we will build a more comprehensive view of agent search behavior, but these results made it clear that optimizing content for coding agents was important.

  • Agent recommendations have a different shape than model responses. When a coding agent suggests a tool, it tends to produce working code with that tool, like an import statement, a config file, or a deployment script. The recommendation is embedded in the output, not just mentioned in prose.

  • Transcript formats are a mess. And they are getting messier as agent CLI tools ship rapid updates. Building a normalization layer early saved us from constant breakage.

  • The same brand extraction pipeline works for both models and agents. The hard part is everything upstream: getting the agent to run, capturing what it did, and normalizing it into a structure you can grade.

What’s next

  • Open sourcing the tool. We're planning to release an OSS version of our system so other teams can track their own AEO evals, both for standard models and coding agents.

  • Deep dive on methodology. We are working on a follow-up post covering the full AEO eval methodology: prompt design, dual-mode testing (web search vs. training data), query-as-first-class-entity architecture, and Share of Voice metrics.

  • Scaling agent coverage. Adding more agents as the ecosystem grows and expanding the types of prompts we test (not just "recommend a tool" but full project scaffolding, debugging, etc.).

Read more

Eric Dodds Allen Zhou
https://vercel.com/blog/anyone-can-build-agents-but-it-takes-a-platform-to-run-them Anyone can build agents, but it takes a platform to run them 2026-02-09T13:00:00.000Z

Prototyping is democratized, but production deployment isn't.

AI models have commoditized code and agent generation, making it possible for anyone to build sophisticated software in minutes. Claude can scaffold a fully functional agent before your morning coffee gets cold. But that same AI will happily architect a $5,000/month DevOps setup when the system could run efficiently at $500/month.

In a world where anyone can build internal tools and agents, the build vs. buy equation has fundamentally changed. Competitive advantage no longer comes from whether you can build. It comes from rapid iteration on AI that solves real problems for your business and, more importantly, reliably operating those systems at scale.

To do that, companies need an internal AI stack as robust as their external product infrastructure. That's exactly what Vercel's agent orchestration platform provides.

Build vs. buy ROI has fundamentally changed

For decades, the economics of custom internal tools only made sense at large-scale companies. The upfront engineering investment was high, but the real cost was long-term operation with high SLAs and measurable ROI. For everyone else, buying off-the-shelf software was the practical option.

AI has fundamentally changed this equation. Companies of any size can now create agents quickly, and customization delivers immediate ROI for specialized workflows.

Today the question isn’t build vs. buy. The answer is build and run. Instead of separating internal systems and vendors, companies need a single platform that can handle the unique demands of agent workloads.

Every company needs an internal AI stack

The number of use cases for internal apps and agents is exploding, but here's the problem: production is still hard.

Vibe coding has created one of the largest shadow IT problems in history, and understanding production operations requires expertise in security, observability, reliability, and cost optimization. These skills remain rare even as building becomes easier.

The ultimate challenge for agents isn't building them, it's the platform they run on.

The platform is the product: how our data agent runs on Vercel

Like OpenAI, we built our own internal data agent named d0 (OSS template here). At its core, d0 is a text-to-SQL engine, which is not a new concept. What made it a successful product was the platform underneath.

Using Vercel’s built-in primitives and deployment infrastructure, one person built d0 in a few weeks using 20% of their time.

This was only possible because Sandboxes, Fluid compute and AI Gateway automatically handled the operational complexity that would have normally taken months of engineering effort to scaffold and secure.

Today, d0 has completely democratized data access that was previously limited to professional analysts. Engineers, marketers, and executives can all ask questions in natural language and get immediate, accurate answers from our data warehouse.

Here’s how it works:

  • A user asks a question in Slack: "What was our Enterprise ARR last quarter?" d0 receives the message, determines the right level of data access based on the permissions of the user, and starts the agent workflow.

  • The agent explores a semantic layer: The semantic layer is a file system of 5 layers of YAML-based configs that describe our data warehouse, our metrics, our products, and our operations.

  • AI SDK handles the model calls: Streaming responses, tool use, and structured outputs all work out of the box. We didn't build custom LLM plumbing, we used the same abstractions any Vercel developer can use.

  • Agent steps are orchestrated durably: If a step fails (Snowflake timeout, model hiccup), Vercel Workflows handles retries and state recovery automatically.

  • Automated actions are executed in isolation: File exploration, SQL generation, and query execution all happen in a secure Vercel Sandbox. Runaway operations can't escape, and the agent can execute arbitrary Python for advanced analysis.

  • Multiple models are used to balance cost and accuracy: AI Gateway routes simple requests to fast models and complex analysis to Claude Opus, all in one code base.

  • The answer arrives in Slack: Formatted results, often with a chart or Google Sheet link, are delivered back to Slack using the AI SDK Chatbot primitive.

Vercel is the platform for agents

Vercel provides the infrastructure primitives purpose-built for agent workloads, both internal and customer-facing. You build the agent, Vercel runs it. And it just works.

Using our own agent orchestration platform has enabled us to build and manage an increasing number of custom agents.

Internally, we run:

  • A lead qualification agent

  • d0, our analytics agent

  • A customer support agent (handles 87% of initial questions)

  • An abuse detection agent that flags risky content

  • A content agent that turns Slack threads into draft blog posts

On the product side:

  • v0 is a code generation agent, and

  • Vercel Agent can review pull requests, analyze incidents, and recommend actions.

Both products run on the same primitives as our internal tools.

Sandboxes give agents a secure, isolated environment for executing sensitive autonomous actions. This is critical for protecting your core systems. When agents generate and run untested code or face prompt injection attacks, sandboxes contain the damage within isolated Linux VMs. When agents need filesystem access for information discovery, sandboxes can dynamically mount VMs with secure access to the right resources.

Fluid compute automatically handles the unpredictable, long-running compute patterns that agents create. It’s easy to ignore compute when agents are processing text, but when usage scales and you add data-heavy workloads for files, images, and video, cost becomes an issue quickly. Fluid compute automatically scales up and down based on demand, and you're only charged for compute time, keeping costs low and predictable.

AI Gateway gives you unified access to hundreds of models with built-in budget control, usage monitoring, and load balancing across providers. This is important for avoiding vendor lock-in and getting instant access to the latest models. When your agent needs to handle different types of queries, AI Gateway can route simple requests to fast, inexpensive models while sending complex analysis to more capable ones. If your primary provider hits rate limits or goes down, traffic automatically fails over to backup providers.
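The routing pattern described above can be sketched in a few lines. This is purely illustrative, not Vercel's actual routing code: the fast-model id is a hypothetical placeholder, and the complexity heuristic is invented for the example.

```typescript
// Illustrative sketch: choose an AI Gateway model id based on a rough
// query-complexity heuristic, sending hard queries to a more capable model.
const FAST_MODEL = 'hypothetical/fast-model'; // placeholder model id
const CAPABLE_MODEL = 'anthropic/claude-opus-4.6';

function pickModel(query: string): string {
  // Treat long or analytical queries as "complex"; everything else is simple.
  const looksComplex =
    query.length > 200 || /\b(compare|trend|why|forecast)\b/i.test(query);
  return looksComplex ? CAPABLE_MODEL : FAST_MODEL;
}
```

The returned id can be passed as the model string in an AI SDK call; AI Gateway resolves the provider and handles failover behind that single string.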

Workflows give agents the ability to perform complex, multi-step operations reliably. When agents are used for critical business processes, failures are costly. Durable orchestration provides retry logic and error handling at every step so that interruptions don't require manual intervention or restart the entire operation.

Observability reveals what agents are actually doing beyond basic system metrics. This data is essential for debugging unexpected behavior and optimizing agent performance. When your agent makes unexpected decisions, consumes more tokens than expected, or underperforms, observability shows you the exact prompts, model responses, and decision paths, letting you trace issues back to specific model calls or data sources.

Build your agents, Vercel will run them

In the future, every enterprise will build their version of d0. And their internal code review agent. And their customer support routing agent. And hundreds of other specialized tools.

The success of these agents depends on the platform that runs them. Companies who invest in their internal AI stack now will not only move faster, they'll experience far higher ROI as their advantages compound over time.

Read more

Eric Dodds Jeanne Grosser
https://vercel.com/changelog/new-token-formats-and-secret-scanning Introducing new token formats and secret scanning 2026-02-09T13:00:00.000Z

When Vercel API credentials are accidentally committed to public GitHub repositories, gists and npm packages, Vercel now automatically revokes them to protect your account from unauthorized access.

When exposed credentials are detected, you'll receive notifications and can review any discovered tokens and API keys in your dashboard. This detection is powered by GitHub secret scanning and brings an extra layer of security to all Vercel and v0 users.

As part of this change, we've also updated token and API key formats to make them visually identifiable. Each credential type now includes a prefix:

We recommend reviewing your tokens and API keys regularly, rotating long-lived credentials, and revoking unused ones.

Learn more about account security.

Read more

Mark Roberts Aaron Morris Mery Kaftar Bel Curcio
https://vercel.com/blog/introducing-geist-pixel Introducing Geist Pixel 2026-02-06T13:00:00.000Z

Today, we're expanding the Geist font family with Geist Pixel.

Geist Pixel is a bitmap-inspired typeface built on the same foundations as Geist Sans and Geist Mono, reinterpreted through a strict pixel grid. It's precise, intentional, and unapologetically digital.

Same system, new texture

Geist Pixel isn't a novelty font. It's a system extension.

Just like Geist Mono was created for developers, Geist Pixel was designed with real usage in mind, not as a visual gimmick, but as a functional tool within a broader typographic system.

It includes five distinct variants, each exported separately:

  • Geist Pixel Square

  • Geist Pixel Grid

  • Geist Pixel Circle

  • Geist Pixel Triangle

  • Geist Pixel Line

Every glyph is constructed on a consistent pixel grid, carefully tuned to preserve rhythm, spacing, and legibility. The result feels both nostalgic and contemporary, rooted in early screen typography, but designed for modern products that ship to real users.

This matters because pixel fonts often break in production. They don't scale properly across viewports, their metrics conflict with existing typography, or they're purely decorative. Geist Pixel was built to solve these problems, maintaining the visual texture teams want while preserving the typographic rigor products require.

It shares the same core principles as the rest of the Geist family:

  • Clear structure

  • Predictable metrics

  • Strong alignment across layouts

  • Designed to scale across platforms and use cases

Getting started is easy

Get started with Geist Pixel and start building. Install it directly:

Exports and CSS variables:

  • GeistPixelSquare: --font-geist-pixel-square

  • GeistPixelGrid: --font-geist-pixel-grid

  • GeistPixelCircle: --font-geist-pixel-circle

  • GeistPixelTriangle: --font-geist-pixel-triangle

  • GeistPixelLine: --font-geist-pixel-line

And use it in layout.tsx, e.g. for GeistPixelSquare:
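A sketch of that layout.tsx wiring follows. The `geist/font/pixel` import path is an assumption modeled on how the package exposes `GeistSans` from `geist/font/sans`; check the README for the canonical path.

```typescript
// app/layout.tsx (sketch) -- import path assumed, see the Geist README
import { GeistPixelSquare } from 'geist/font/pixel';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    // Exposes the --font-geist-pixel-square CSS variable to the whole app
    <html lang="en" className={GeistPixelSquare.variable}>
      <body>{children}</body>
    </html>
  );
}
```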

Learn more in the README.

Designed for the web and for modern products

While many pixel fonts are purely expressive, Geist Pixel is meant to ship. It works in real UI contexts: banners, dashboards, experimental layouts, product moments, and systems where typography becomes part of the interface language.

Special care was put into:

  • Vertical metrics aligned with Geist and Geist Mono

  • Consistent cap height and x-height behavior

  • Multiple variants for different densities and use cases

  • Seamless mixing with the rest of the Geist family

It's designed for the web, for modern products, and for an era where interfaces are increasingly shaped by AI-driven workflows.

Crafted on a grid, refined by hand

Although Geist Pixel is grid-based, it wasn't generated mechanically.

Each glyph was manually refined to avoid visual noise, uneven weight distribution, and awkward diagonals. Corners, curves, and transitions were adjusted pixel by pixel to maintain clarity at small sizes and personality at larger scales. Horizontal metrics use a semi-mono approach, and letterforms take inspiration from both its Mono and Sans counterparts. Constraints weren't a limitation, they were the design tool.

Geist Pixel ships with:

  • 5 variants

  • 480 glyphs

  • 7 stylistic sets

  • 32 supported languages

Built with the same system mindset as Geist and Geist Mono, it's easy to adopt without breaking layout or rhythm.

Already shaping what's next

Even before its public release, Geist Pixel has already started influencing the visual language of Vercel. Since being shared internally a few weeks ago, it's found its way into explorations, experiments, and early redesign work, shaping tone, texture, and expression across the product. In many ways, it's already part of the system.

One family, expanding

With Geist, Geist Mono, and now Geist Pixel, the family spans a broader range, from highly functional UI text to expressive, system-driven display moments.

And we're not stopping here. Geist Serif is already in progress. Same system thinking. A new voice.

Download Geist Pixel and start building.


None of this would have been possible without an incredible group of people behind the scenes. Huge thanks to Andrés Briganti for the obsessive level of craft and care poured into the design of the font itself, and to Guido Ferreyra for his support refining and tuning the font along the way; to Luis Gutierrez Rico for bringing Geist Pixel to life through motion and subtle magic; to Christopher Kindl for helping us put together the landing page and obsessing over those small details that make everything feel just right; to Marijana Pavlinić for constantly pushing us with bold, unexpected, and wildly creative ideas; and to Zahra Jabini for the coordination, technical support, and for making sure all the pieces actually came together. This was a true team effort, and I'm incredibly grateful to have built this with all of you.

Read more

Evil Rabbit
https://vercel.com/changelog/sanity-vercel-marketplace Sanity is now available on the Vercel Marketplace 2026-02-06T13:00:00.000Z

Sanity is now available on the Vercel Marketplace as a native CMS integration. Teams can now install, configure, and manage Sanity directly from the Vercel dashboard, eliminating manual API token setup and environment variable configuration.

This integration keeps CMS setup inside your existing Vercel workflow instead of requiring a separate dashboard for provisioning and account management.

Get started with the integration

Define your content schema, set up the client, and start fetching content. Schemas define your content structure in code, specifying document types and their fields.

Register your schema types in an index file so Sanity can load them.

The Sanity client connects your application to your content. The Marketplace integration provisions the project ID as an environment variable automatically.

With the client configured, you can fetch content using GROQ (Graph-Relational Object Queries), Sanity's query language for requesting exactly the fields you need.
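As an illustration, here is a minimal GROQ query. The `post` document type and its field names are assumptions for the example, not a prescribed schema.

```typescript
// A GROQ query that requests only the fields you need; the document type
// and field names here are illustrative.
const postsQuery = `*[_type == "post"] | order(publishedAt desc) {
  title,
  "slug": slug.current,
  publishedAt
}`;

// With a configured Sanity client, run it as:
//   const posts = await client.fetch(postsQuery);
```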

That's all you need to go from install to fetching content. Install Sanity from the Vercel Marketplace to get started, or deploy the Next.js + Sanity Personal Website template to start from a working example.

Read more

Marketplace Team
https://vercel.com/changelog/simplified-file-retrieval-from-vercel-sandbox-environments Simplified file retrieval from Vercel Sandbox environments 2026-02-06T13:00:00.000Z

The Vercel Sandbox SDK now includes two new methods that make file retrieval simple.

When you run code in a Vercel Sandbox, that code can generate files like a CSV report, a processed image, or a PDF invoice. These files are created inside isolated VMs, so they need to be retrieved across a network boundary. Until now, this required manual stream handling with custom piping.

Download a file

If you want to download a generated report from your sandbox to your local machine, you can use downloadFile() to seamlessly stream the contents.

Read file contents to buffer

Both methods handle the underlying stream operations automatically. For example, if your sandbox runs a script that generates a chart as a PNG, you can pull it out with a single call to readFileToBuffer(), no manual stream wiring needed.
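Putting both together, a sketch of the flow: the two method names come from this announcement, but the sandbox setup calls and the exact signatures are assumptions, so check the SDK documentation before relying on them.

```typescript
import { Sandbox } from '@vercel/sandbox';

// Create a sandbox and run a script that writes files (assumed setup calls).
const sandbox = await Sandbox.create();
await sandbox.runCommand('python', ['generate_report.py']);

// Stream a generated file to the local filesystem (assumed signature).
await sandbox.downloadFile('/vercel/sandbox/report.csv', './report.csv');

// Or read a file's contents directly into memory (assumed signature).
const chart = await sandbox.readFileToBuffer('/vercel/sandbox/chart.png');

await sandbox.stop();
```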

Learn more about the Sandbox SDK or explore the updated documentation.

Read more

Laurens Duijvesteijn Rob Herley
https://vercel.com/blog/the-vercel-ai-accelerator-is-back-with-6-million-in-credits The Vercel AI Accelerator is back with $6m in credits 2026-02-05T13:00:00.000Z

Building an AI business is no small feat. Delivering a great agentic product requires infrastructure that handles deployment, security, and scale automatically, but that's table stakes. Startups also need community support, mentorship, investor connections, platform credits, and visibility.

That's why we created the Vercel AI Accelerator. Last year, we hosted our second cohort of 40 early-stage teams from across the globe. They joined us for six weeks of learning, building, and shipping, hearing from speakers in leadership at AWS, Anthropic, Cursor, Braintrust, MongoDB, HubSpot, Vercel, and more. The program culminated with a demo day in San Francisco that drew hundreds from across the industry.

This year, the Accelerator is back with another cohort of 40 teams building the future of AI. Applications are open now until February 16th.

Program benefits

The AI Accelerator provides access to thousands of dollars in credits from Vercel, v0, AWS, and a variety of AI platforms. Participants also get to join an exclusive group of AI builders within the Vercel Community. Here are the full details:

  • Credits from Vercel, v0, AWS, and leading AI platforms, including Anthropic, Cursor, ElevenLabs, Hugging Face, Cartesia, Roboflow, Modal, Julius.ai, Sentry, Vanta, Auth0, Browserbase, WorkOS, Supabase, Autonoma, and Neon

  • Join an exclusive group of AI builders within the Vercel Community to share progress and exchange ideas during the program

  • Participate in weekly sessions with industry leaders, connecting directly with AI startup founders, investors, and technical leaders through fireside chats and office hours

  • Access production-focused guides, templates, and videos designed to accelerate development cycles and help ship faster

  • Present your product to industry leaders and VCs at demo day, creating visibility and potential fundraising opportunities

Platform credits and prizes

Every company accepted into the Accelerator receives thousands of dollars in credits from partner platforms. Finalists earn over $100K each in additional credit prizes, providing the compute and infrastructure resources needed to build and scale AI applications without operational overhead.

Six weeks of focused development

The 40 selected teams will join us from March 2nd to April 16th for six weeks of building and networking.

The program includes:

  • Direct access to leading AI builders

  • Exclusive AI content

  • Community connections

  • Optional IRL meetups

  • Welcome and mid-point goal check-ins

  • Mentorship from VC partners

The program ends with a demo day in San Francisco on April 16th:

  • The audience will include leaders and investors

  • Judges will select 3 winners to receive over $100k in resources

  • The first-place winner will receive an investment from Vercel Ventures

Our previous demo day featured product launches from 26 teams and attracted hundreds of industry professionals. Judges from AWS, Vercel, Cursor, Modal, OpenAI, and Roboflow selected the winners.

Since that event, several teams have raised venture funding rounds, secured enterprise customers, and established partnerships through connections made during the accelerator.

Infrastructure for AI development

We continue investing across the AI stack, from SDKs to templates, supporting how developers build modern apps and agents. Recent AI releases include Workflow DevKit, Sandboxes, Skills.sh, and git support in v0.

These tools are designed to handle infrastructure automatically so teams can focus on building AI products. Companies like Sensay, Chatbase, and Leonardo.ai are built with Next.js and deployed on Vercel.

Focus on what matters

AI agents can now operate autonomously on code, making decisions and taking actions without constant human oversight. This shift requires infrastructure that scales automatically and handles operational complexity behind the scenes. Vercel provides that foundation, giving startups the freedom to focus on building AI applications that solve real problems.

In our last cohort we backed early-stage teams like Stably AI, Cervo, Bear AI, and General Translation.

Apply to the Vercel AI Accelerator and join a cohort of developers building AI applications that operate independently and scale automatically. Applications close February 16th.

Applicants must be Vercel customers at or above the age of majority in their jurisdiction. Applicants must be able to commit to the full 6 weeks of the program. Applicants must not be located in, or otherwise subject to restrictions imposed by, U.S. sanctions laws. We are looking for pre-seed ideas. Applications will be judged based on quality of submission, founder background, and overall potential for impact and scalability.

Read more

Alli Pope
https://vercel.com/changelog/claude-opus-4.6-on-ai-gateway Use Claude Opus 4.6 on AI Gateway 2026-02-05T13:00:00.000Z

Anthropic's latest flagship model, Claude Opus 4.6, is now available on AI Gateway. Built to power agents that handle real-world work, Opus 4.6 excels across the entire development lifecycle. Opus 4.6 is also the first Opus model to support the extended 1M token context window.

The model introduces adaptive thinking, a new parameter that lets the model decide when and how much to reason. This approach enables more efficient responses while maintaining quality across programming, analysis, and creative tasks, delivering performance equal to or better than extended thinking. Opus 4.6 can interleave thinking and tool calls within a single response.

To use the model, set model to anthropic/claude-opus-4.6. The following example also uses adaptive thinking and the effort parameter.
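The original code sample isn't reproduced here; the sketch below shows the shape of such a call. The providerOptions keys for adaptive thinking and effort are assumptions, so consult the AI Gateway documentation for the canonical parameter names.

```typescript
// Request options for an AI SDK generateText() call routed through
// AI Gateway. The providerOptions keys below are assumed, not canonical.
const request = {
  model: 'anthropic/claude-opus-4.6',
  prompt: 'Review this incident timeline and suggest remediations.',
  providerOptions: {
    anthropic: {
      thinking: { type: 'adaptive' }, // assumed: let the model decide when to reason
      effort: 'high',                 // assumed: the effort parameter mentioned above
    },
  },
};
// Run with: const { text } = await generateText(request);
```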

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/build-logs-now-support-interactive-links Build logs now support interactive links 2026-02-04T13:00:00.000Z

URLs in build logs are now interactive. Navigate directly to internal and external resources without manually copying and pasting. External links open in a new tab.

This eliminates any extra steps you may encounter when investigating build issues or following documentation links.

Learn more about accessing build logs.

Read more

Mitul Shah
https://vercel.com/changelog/parallel-web-search-is-now-on-ai-gateway Parallel's Web Search and tools are live on Vercel 2026-02-04T13:00:00.000Z

You can now use Parallel's LLM-optimized web search and other tools across Vercel.

AI Gateway

Unlike provider-specific web search tools that only work with certain models, Parallel's web search tool works universally across all providers. This means you can add web search capabilities to any model without changing your implementation.

To use through AI SDK, set parallel_search: gateway.tools.parallelSearch() in tools.

Parallel web search extracts relevant excerpts from web pages, making it ideal for agentic tasks and real-time information retrieval. For more control, you can also configure the tool to use specific parameters.

For agentic workflows, use mode: 'agentic' to get concise, token-efficient search results that work well in multi-step reasoning.

Time-sensitive queries can control cache freshness with maxAgeSeconds, while domain-specific search lets you restrict results to trusted sources or exclude noisy domains.
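Combining these parameters, a hedged sketch: `mode` and `maxAgeSeconds` are the parameter names given above, but the option-object shape accepted by `gateway.tools.parallelSearch()` is an assumption.

```typescript
// Options for Parallel web search via AI Gateway tools; the exact option
// shape accepted by parallelSearch() is assumed.
const searchOptions = {
  mode: 'agentic',     // concise, token-efficient results for multi-step reasoning
  maxAgeSeconds: 3600, // accept cached results up to one hour old
};

// In an AI SDK call:
//   tools: { parallel_search: gateway.tools.parallelSearch(searchOptions) }
```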

Parallel web search requests are charged at the same rate as the Parallel API: $5 per 1,000 requests (each including up to 10 results), with results beyond 10 charged at $1 per 1,000 additional results. Read the docs for more information and details on how to use the tool.

AI SDK

AI SDK supports Parallel as a tool for both web search and extraction. To use, simply install the parallel-web tool package.

View the docs for more details on how to utilize the tools.

Vercel Marketplace

You can use all Parallel products (Search, Extract, Task, FindAll, and Monitoring) in the Vercel Agent Marketplace with centralized billing through Vercel and a single API key. To get started, go to the Parallel integration and connect your account, or deploy the Next.js template to see Parallel's web research APIs integrated with Vercel in action.

Get started with Parallel for your AI applications through AI Gateway, the AI SDK tool package, or Vercel Marketplace.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/parallel-joins-the-vercel-agent-marketplace Parallel joins the Vercel Agent Marketplace 2026-02-04T13:00:00.000Z

Parallel is now available on the Vercel Agent Marketplace with native integration support.

Parallel provides web tools and agents designed for LLM-powered applications, including Search, Extract, Tasks, FindAll, and Monitoring capabilities. The Vercel integration provides a single API key that works across all Parallel products, with billing handled directly through your Vercel account.

For developers building AI features on Vercel, Parallel enables agents to access the open web for tasks like answering questions, monitoring changes, and extracting structured data. Since Parallel returns results optimized for LLM consumption, your agents can resolve tasks with fewer round trips and reduced cost and latency.

Install Parallel from the Marketplace or deploy the Next.js template to see Parallel's web research APIs integrated with Vercel in action.

Read more

Marketplace Team
https://vercel.com/blog/making-agent-friendly-pages-with-content-negotiation Making agent-friendly pages with content negotiation 2026-02-03T13:00:00.000Z

Agents fetch web pages to answer questions, write code, and complete tasks. When an agent requests a page, it gets everything your browser gets, including navigation menus, stylesheets, JavaScript bundles, tracking scripts, and footer links, when all it needs is the structured text on the page. That extra markup confuses the agent, consumes its context window, and makes every request more expensive.

What is content negotiation

What agents need is a way to request just the text content of a page, without the browser-specific markup. Content negotiation solves this. It's a standard HTTP mechanism where the client specifies its preferred format via the Accept header, and the server returns the matching representation. Many agents already send Accept: text/markdown when fetching pages, and a server that supports content negotiation can return clean, structured text from the same URL that serves HTML to a browser.

We've updated many of our pages, including our blog and changelog, to support content negotiation. This post walks through how it works, how we implemented it in Next.js, and how to add markdown sitemaps so agents can discover your content.

How agents request markdown

When an agent fetches a page, it includes an Accept header with its format preferences:

Accept: text/markdown, text/html, */*

By listing text/markdown first, the agent signals that markdown is preferred over HTML when available. This works better than hosting separate .md URLs because content negotiation requires no site-specific knowledge. Any agent that sends the right header gets markdown automatically, from any site that supports it.

Try it yourself:

curl https://vercel.com/blog/self-driving-infrastructure -H "accept: text/markdown"

Implementing content negotiation in Next.js

The implementation has two parts: a rewrite rule in next.config.ts that detects the header, and a route handler that returns markdown.

The rewrite checks the Accept header on every incoming request. When it contains text/markdown, the request gets routed to a dedicated markdown endpoint instead of the default HTML page:
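A minimal version of that rewrite is sketched below. Next.js supports header matching in rewrites via the `has` field with a regex value; the `/markdown` destination path here is illustrative.

```typescript
// next.config.ts (sketch): requests whose Accept header mentions
// text/markdown are rewritten to a dedicated markdown route handler.
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      {
        source: '/blog/:slug',
        has: [{ type: 'header', key: 'accept', value: '(.*text/markdown.*)' }],
        destination: '/blog/:slug/markdown', // illustrative endpoint path
      },
    ];
  },
};

export default nextConfig;
```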

The route handler serves the markdown. Our blog content lives in our CMS as rich text, so the route handler converts it to markdown on the fly. If your content is already authored in markdown, you can serve it directly without a conversion step.
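A sketch of such a route handler: the file path and the `getPostMarkdown` helper are hypothetical stand-ins for the real CMS rich-text conversion.

```typescript
// app/blog/[slug]/markdown/route.ts (sketch)

// Hypothetical lookup standing in for the CMS rich-text-to-markdown conversion.
async function getPostMarkdown(slug: string): Promise<string> {
  return `# ${slug}\n\nPost body as markdown`;
}

// In a real route file this function would be exported as GET.
async function GET(_req: Request, ctx: { params: Promise<{ slug: string }> }) {
  const { slug } = await ctx.params;
  const markdown = await getPostMarkdown(slug);
  // Serve the markdown with an explicit content type.
  return new Response(markdown, {
    headers: { 'content-type': 'text/markdown; charset=utf-8' },
  });
}
```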

The rich-text-to-markdown conversion preserves the content's structure. Code blocks keep their syntax highlighting markers, headings maintain their hierarchy, and links remain functional. The agent receives the same information as the HTML version, just in a format optimized for token efficiency.

Performance benefits

The HTML version of this page is around 500KB. The markdown version is 3KB, a 99.37% reduction in payload size. For agents operating under token limits, smaller payloads mean they can consume more content per request and spend their budget on actual information instead of markup.

We keep the HTML and markdown versions synchronized using Next.js 16 remote cache and shared slugs, so when content updates in our CMS, both versions refresh simultaneously.

Markdown sitemaps for agent discovery

Content negotiation also works for sitemaps. XML sitemaps are flat lists of URLs with no titles, no hierarchy, and no indication of what each page is about. A markdown sitemap gives agents a structured table of contents with human-readable titles and parent-child relationships, so they can understand what content exists on your site and navigate to what they need.

We serve markdown sitemaps for both our blog and documentation.

Here's the route handler we use to generate a markdown sitemap for blog posts:
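A simplified sketch of that handler's core (the post fields and formatting are illustrative, not the production code):

```typescript
// Sketch: render a flat list of posts as a markdown table of contents.
type PostEntry = { title: string; slug: string; publishedAt: string };

function renderBlogSitemap(posts: PostEntry[]): string {
  const lines = posts.map(
    (p) => `- [${p.title}](/blog/${p.slug}) (${p.publishedAt})`,
  );
  return ['# Blog', '', ...lines, ''].join('\n');
}

// Served from a route handler as:
//   new Response(renderBlogSitemap(posts), {
//     headers: { 'content-type': 'text/markdown; charset=utf-8' },
//   });
```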

For documentation or other content with nested sections, a recursive renderer preserves the hierarchy so agents understand which pages are children of which topics:
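A sketch of that recursive renderer (the node shape is illustrative):

```typescript
// Sketch: render a nested page tree as an indented markdown list so
// parent-child relationships survive in the output.
type PageNode = { title: string; path: string; children?: PageNode[] };

function renderTree(nodes: PageNode[], depth = 0): string {
  return nodes
    .map((node) => {
      const line = `${'  '.repeat(depth)}- [${node.title}](${node.path})`;
      const children = node.children?.length
        ? '\n' + renderTree(node.children, depth + 1)
        : '';
      return line + children;
    })
    .join('\n');
}
```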

You can see this in action with the Vercel documentation sitemap.

For agents that don't send the Accept header, a link tag in your HTML <head> provides an alternative discovery path:

<link rel="alternate" type="text/markdown" title="LLM-friendly version" href="/llms.txt" />

Making your site agent-friendly

Content negotiation, markdown sitemaps, and link rel="alternate" tags give agents three ways to find and consume your content efficiently. You can read this page as markdown to see the full output, or append .md to any blog or changelog URL on vercel.com.

For an implementation reference, see how to serve documentation for agents in our knowledge base.

Read more

Zach Cowan Mitul Shah
https://vercel.com/blog/the-vercel-oss-bug-bounty-program-is-now-available The Vercel OSS Bug Bounty program is now available 2026-02-03T13:00:00.000Z

Security is foundational to everything we build at Vercel. Our open source projects power millions of applications across the web, from small side projects to demanding production workloads at Fortune 500 companies. That responsibility drives us to keep investing in security for the platform and the broader ecosystem.

Today, we're opening the Vercel Open Source Software (OSS) bug bounty program to the public on HackerOne. We're inviting security researchers everywhere to find vulnerabilities, challenge assumptions, and help us reduce risk for everyone building with these tools.

Since August 2025, we've run a private bug bounty for our open source software with a small group of researchers. That program produced multiple high-severity reports across our Tier 1 projects and helped us refine our processes for triage, fixes, coordinated disclosure, and CVE publication. Now we're ready to expand.

Building on our foundation of security investment

Last fall, we opened a bug bounty program focused on Web Application Firewall and the React2Shell vulnerability class. Rather than wait for bypasses to surface in the wild, we took a proactive approach: pay security researchers to find them first.

That program paid out over $1M across dozens of researchers who helped us find and fix vulnerabilities before attackers could. The lesson was clear. Good incentives and clear communication turn researchers into partners, not adversaries.

Opening our private OSS bug bounty program to the public is the natural next step. Security vulnerabilities in these projects don't just affect Vercel; they affect everyone who builds with these tools. Finding and fixing them protects millions of end-users.

Which projects are covered

All Vercel open source projects are in scope. The projects listed below represent the core of the Vercel open source ecosystem. These are the frameworks, libraries, and tools that millions of developers rely on daily.

Core projects included in the HackerOne program

  • Next.js: React framework for production web applications

  • Nuxt: Vue.js framework for modern web development

  • SWR: React Hooks library for data fetching

  • Svelte: Framework for building user interfaces

  • Turborepo: High-performance build system for monorepos

  • AI SDK: TypeScript toolkit for AI applications

  • vercel (CLI): Command-line interface for the Vercel platform

  • workflow: Durable workflow execution engine

  • flags: Feature flags SDK

  • ms: Tiny millisecond conversion utility

  • nitrojs: Universal server engine

  • async-sema: Semaphore for async operations

  • skills: The open agent skills tool (npx skills)

These are the projects where vulnerabilities have the highest potential impact, and where we prioritize incident response, vulnerability management, and CVE publication.

How to participate

If you're a security researcher ready to start hunting, visit HackerOne to find everything you need: scope details, reward ranges, and submission guidelines.

When you find a vulnerability, submit it through HackerOne with clear reproduction steps. Our security team reviews every submission and works directly with researchers through the disclosure process. We're committed to fast response times and transparent communication.

We appreciate the researchers who take the time to dig into our code and report issues responsibly. Your work helps keep these projects safer for everyone.

Join our bug bounty program or learn more about security at Vercel.

Read more

Andy Riancho
https://vercel.com/blog/introducing-the-new-v0 Introducing the new v0 2026-02-03T13:00:00.000Z

Since v0 became generally available in 2024, more than 4 million people have used it to turn their ideas into apps in minutes. v0 has helped people get promotions, win more clients, and work more closely with developers.

AI lowered the barrier to writing code. Now we're raising the bar for shipping it.

Today, v0 evolves vibe coding from novelty to business critical. Built for production apps and agents, this release includes enterprise-grade security and integrations teams can use to ship real software, not just spin up demos.

The limitations of vibe coding

We're at an inflection point where anyone can create software. But this freedom has created three problems for the enterprise.

Vibe coding is now the world's largest shadow IT problem. AI-enabled software creation is already happening inside every enterprise, and employees are shipping security flaws alongside features: credentials copied into prompts, company data published to the public internet, and databases deleted, all with no audit trail.

Demos are easy to generate, but production features aren't. Prototyping is one of the most popular use cases for marketers and PMs, but the majority of real software work happens on existing apps, not one-off creations. Prototypes fail because they live outside real codebases, require rewrites before production, and create handoffs between tools and teams.

The old Software Development Life Cycle is overloaded with dead-ends. The legacy SDLC relies on countless PRDs, tickets, and review meetings. Feedback cycles take weeks or months. Vibe coding has overloaded these outdated processes with thousands of good ideas that will never see the light of day, frustrating engineers and their stakeholders.

We took these problems to heart and rebuilt v0 from the ground up.

From 0 to shipped: What's new

Work on existing codebases

Instead of engineers spending weeks on rewrites for production, v0’s new sandbox-based runtime can import any GitHub repo and automatically pull environment variables and configuration from Vercel.

Every prompt generates production-ready code in a real environment, and it lives in your repo. No more copying code back and forth.

Bring git to your entire team

Historically, marketers and PMs weren’t comfortable setting up and troubleshooting a local dev environment. With v0, they don’t have to.

A new Git panel lets you create a new branch for each chat, open PRs against main, and deploy on merge. Pull requests are first-class and previews map to real deployments. For the first time, anyone on a team, not just engineers, can ship production code through proper git workflows.

Democratize data, safely

Building internal reports and data apps typically requires painful setup of ETL pipelines and scheduled jobs. With v0, you can connect your app directly to the tables you need.

Secure integrations with Snowflake and AWS databases mean anyone can build custom reporting, add rich context to their internal tools, and automate data-triggered processes.

Stay secure by default

Vibe coding tools optimize for speed and novelty, discarding decades of software engineering best practices.

v0 is built on Vercel, where security is built-in by default and configurable for common compliance needs. Set deployment protection requirements, connect securely to enterprise systems, and set proper access controls for every app.

How our customers use the new v0

  • Product leaders turn PRDs into prototypes, and prototypes into PRs, shipping the right features, fast. They go from "tell sales there's another delay" to "it's shipped."

  • Designers work against real code, refining layouts, tweaking components, and previewing production with each update. They go from "another ticket for frontend" to "it's shipped."

  • Marketers turn ideas into site updates immediately, editing landing pages, changing images, fixing copy, and publishing, all without opening a ticket. They go from "please, it's a quick change" to "it's shipped."

  • Engineers unblock stakeholders without breaking prod, making quick fixes, importing repos, and letting business users open PRs, all in a single tab. They go from "I can't keep up with the backlog" to "it's shipped."

  • Data teams ship dashboards the business actually uses, building custom reports and analytics on top of real data with just a few prompts. They go from "that's buried in a notebook" to "it's shipped."

  • GTM teams close deals with the demo customers actually asked for, creating live previews, mock data, and branded experiences in minutes. They go from "let's show the standard deck" to "it's shipped."

What's next

Today, you can use v0 to ship production apps and websites. 2026 will be the year of agents.

Soon, you’ll be able to build end-to-end agentic workflows in v0, AI models included, and deploy them on Vercel’s self-driving infrastructure.

Welcome to the new v0. We can’t wait to see what you build.

Sign up or log in to try the new v0 today.

Snowflake, GitHub, AWS are trademarks of their respective owners.

Read more

Zeb Hermann
https://vercel.com/changelog/ai-gateway-and-one-click-deploys-now-available-on-trae AI Gateway and one-click deploys now available on TRAE 2026-02-03T13:00:00.000Z

ByteDance's coding agent TRAE now integrates both AI Gateway and direct Vercel deployments, bringing unified AI access and instant production shipping to over 1.6 million monthly active developers. Teams can now access hundreds of models through a single API key and deploy applications directly to Vercel from the TRAE interface.

AI Gateway provides unified access to models from Anthropic, OpenAI, Google, xAI, DeepSeek, Z.AI, MiniMax, Moonshot AI, and more without managing multiple provider accounts.

The integration includes automatic failover that routes around provider outages, zero markup on AI tokens, and unified observability to monitor both deployments and AI usage. Meanwhile, the Vercel deployment integration handles authorization automatically and returns live URLs immediately after clicking Deploy.

SOLO Mode

Setting up Vercel deployment

In SOLO mode, click the + tab and choose Integrations to connect your Vercel account. When your project is ready, click Deploy in the chat panel to ship directly to production.

Once linked, all projects can immediately deploy to Vercel and are also visible in your Vercel dashboard.

Setting up AI Gateway

In Integrations, choose Vercel AI Gateway as your AI Service and add your API key from the Vercel AI Gateway dashboard. Select any model and start coding with automatic failover, low latency, and full observability.

IDE Mode

TRAE's IDE mode supports AI Gateway as a model provider with access to the full range of available models alongside direct deployment capabilities.

Configuration

You can switch models with a single configuration change while maintaining unified billing through Vercel. This creates a complete development experience where teams write code with any AI model, then ship to production with one click from the same interface.

Get started with AI Gateway or explore the documentation to learn more.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/turbo-build-machines-by-default-for-new-pro-projects Turbo build machines by default for new Pro projects 2026-02-03T13:00:00.000Z

Turbo build machines are now the default for all new Pro projects and projects upgrading from Hobby to the Pro plan.

Turbo build machines were introduced in October for all paid plans, delivering 30 vCPUs and 60 GB of memory for faster build performance.

Teams adopting Turbo build machines have seen significant build time improvements:

  • up to 30% faster for builds under 2 minutes

  • up to 50% faster for builds that take 2-10 minutes

  • up to 70% faster for builds over 10 minutes

Learn more in the documentation or customize your build machine in settings.

Read more

Mehul Kar Marcos Grappeggia Cody Wong Jon Vincent
https://vercel.com/changelog/copy-visual-context-to-agents Copy visual context to agents from Vercel Toolbar 2026-02-03T13:00:00.000Z

Vercel Toolbar now includes "Copy for Agents" functionality that captures complete visual context from comments, providing coding agents with the technical details they need to understand deployment feedback across your application.

When teams copy comments using this feature, agents receive structured context including page URL and viewport dimensions, selected text and node path information, React component tree details, and the original comment text. This helps agents understand exactly where issues occur in your deployed application and what changes are needed.

Sample context output:
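The sample itself isn't reproduced in this feed; the sketch below illustrates the kind of structured context described above, with hypothetical field names:

```json
{
  "comment": "The CTA button overflows on mobile",
  "page": {
    "url": "https://my-app-git-fix-cta.vercel.app/pricing",
    "viewport": { "width": 390, "height": 844 }
  },
  "selection": {
    "text": "Start free trial",
    "nodePath": "main > section:nth-of-type(2) > div > button"
  },
  "componentTree": ["PricingPage", "PlanCard", "CtaButton"]
}
```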

This structured format eliminates the need to manually explain deployment context to agents, enabling direct copying from the toolbar with complete technical details for component location and implementation.

The feature is available to all Vercel users immediately.

Learn more about Vercel Toolbar or get started with Agents.

Read more

George Karagkiaouris
https://vercel.com/changelog/workflow-event-sourcing Workflow 4.1 Beta: Event-sourced architecture 2026-02-03T13:00:00.000Z

Workflow 4.1 Beta changes how workflows track state internally. Instead of updating records in place, every state change is now stored as an event, and current state is reconstructed by replaying the log. This release also adds support for provider-executed tools and higher throughput.

What event sourcing means for workflows

Event sourcing is a persistence pattern where state changes are stored as a sequence of events rather than by updating records in place. Instead of storing "this run is completed," the system stores "run_created, then run_started, then run_completed" and reconstructs the current state by replaying those events.

In Workflow 4.1, runs, steps, and hooks are no longer mutable database records. They're materializations of an append-only event log. Each event captures a timestamp and context, and the runtime derives current state by processing events in order.
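The derive-by-replay idea can be sketched in a few lines (illustrative only, not Workflow's internal schema or event names):

```javascript
// Illustrative sketch of event sourcing. State is never updated in
// place; it is derived by replaying an append-only log in order.
const events = [
  { type: 'run_created', at: 1 },
  { type: 'run_started', at: 2 },
  { type: 'run_completed', at: 3 },
];

function currentState(log) {
  // Reduce over the log; each event moves the run through its lifecycle.
  return log.reduce((state, event) => {
    switch (event.type) {
      case 'run_created':
        return { ...state, status: 'created' };
      case 'run_started':
        return { ...state, status: 'running' };
      case 'run_completed':
        return { ...state, status: 'completed' };
      default:
        return state; // unknown events are ignored
    }
  }, { status: 'none' });
}

currentState(events); // derives the latest state from the full history
```

Because the log is the source of truth, replaying a prefix of it reconstructs any intermediate state, which is what enables the audit trail and self-healing behavior described below.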

This architecture makes workflows more reliable in three ways:

  • Self-healing: If a queue message is lost or a race condition occurs, replaying the workflow route detects missing state and re-enqueues the necessary messages. Old runs required manual intervention to recover from queue downtime; new runs recover automatically.

  • Complete audit trail: The event log lets you replay the exact sequence that led to any state, which makes debugging distributed workflows much easier.

  • Consistency: Events are append-only, so partial failures during a write can't leave entities in an inconsistent state. The event log is the single source of truth.

For a deeper look at the event model, including state machine diagrams for run, step, and hook lifecycles, see the Event Sourcing documentation.

Other updates

  • Improved throughput: The workflow queue system now processes many thousands of steps per second. When dependencies allow, multiple steps execute in parallel.

  • Provider-executed tools: @workflow/ai now supports provider-executed tools like Google Search and WebSearch, which run on the model provider's infrastructure rather than in your workflow.

  • NestJS support: The new @workflow/nest package adds build support for NestJS applications, handling dependency injection patterns so workflows integrate with existing NestJS services.

  • Top-level using declarations: The SWC plugin now supports the TC39 Explicit Resource Management proposal inside step and workflow functions, enabling automatic resource cleanup.

  • Custom class serialization: Client mode now supports custom class serialization, with a classes object in manifest.json that declares serializable types.

  • Fixed double-serialization of tool output in @workflow/ai

Learn more about Workflow or get started with your first workflow.

Read more

Pranay Prakash Karthik Kalyanaraman Peter Wielander John Lindquist
https://vercel.com/changelog/zero-configuration-support-for-koa Zero-configuration support for Koa 2026-02-03T13:00:00.000Z

Vercel now supports Koa applications with zero configuration. Koa is an expressive HTTP middleware framework that makes web applications and APIs more enjoyable to write.

Backends on Vercel use Fluid compute with Active CPU pricing by default. This means your Koa app will automatically scale up and down based on traffic, and you only pay for what you use.

Visit the Koa on Vercel documentation for more details.

Read more

Jeff See
https://vercel.com/changelog/python-3-13-and-3-14-are-now-available Python 3.13 and 3.14 are now available 2026-02-02T13:00:00.000Z

Builds and Functions now support Python 3.13 and Python 3.14 alongside the previously supported Python 3.12. Projects without a specified Python version continue using Python 3.12 by default.

The default will switch to Python 3.14 in the coming months. To continue using Python 3.12, specify an upper bound in your project manifest (pyproject.toml or Pipfile) as shown in the examples below.
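For example, assuming Vercel reads the standard requires-python field, a pyproject.toml bound keeping a project on 3.12 might look like:

```toml
# pyproject.toml: cap the Python version so the default bump to 3.14
# does not affect this project. The exact bounds are up to you.
[project]
requires-python = ">=3.12,<3.13"
```

The Pipfile equivalent is a `python_version = "3.12"` entry under the `[requires]` section.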

See the Python documentation to learn more about Python support on Vercel.

Read more

Elvis Pranskevichus Ricardo Gonzalez Greg Schofield
https://vercel.com/blog/vercel-sandbox-is-now-generally-available Run untrusted code with Vercel Sandbox, now generally available 2026-01-30T13:00:00.000Z

AI agents are changing how software gets built. They clone repos, install dependencies, run tests, and iterate in seconds.

Despite the change in software, most infrastructure was built for humans, not agents.

Traditional compute assumes someone is in the loop, with minutes to provision and configure environments. Agents need secure, isolated environments that start fast, run untrusted code, and disappear when the task is done.

Today, Vercel Sandbox is generally available, the execution layer for agents, and we're open-sourcing the Vercel Sandbox CLI and SDK for the community to build on this infrastructure.

Built on our compute platform

Vercel processes over 2.7 million deployments per day. Each one spins up an isolated microVM, runs user code, and disappears, often in seconds.

To do that at scale, we built our own compute platform.

Internally code-named Hive, it’s powered by Firecracker and orchestrates microVM clusters across multiple regions. When you click Deploy in v0, import a repo, clone a template, or run vercel in the CLI, Hive is what makes it feel quick.

Sandbox brings that same infrastructure to agents.

Why agents need different infrastructure

Agents don’t work like humans. They spin up environments, execute code, tear them down, and repeat the cycle continuously.

That shifts the constraints toward isolation, security, and ephemeral operation, not persistent, long-running compute.

Agents need:

  • Sub-second starts for thousands of sandboxes per task

  • Full isolation when running untrusted code from repositories and user input

  • Ephemeral environments that exist only as long as needed

  • Snapshots to restore complex environments instantly instead of rebuilding

  • Fluid compute with Active CPU pricing for cost and performance efficiency

We’ve spent years solving these problems for deployments. Sandbox applies the same approach to agent compute.

What is Vercel Sandbox?

Vercel Sandbox provides on-demand Linux microVMs. Each sandbox is isolated, with its own filesystem, network, and process space.

You get sudo access, package managers, and the ability to run the same commands you’d run on a Linux machine.

Sandboxes are ephemeral by design. They run for as long as you need, then shut down automatically, and you only pay for active CPU time, not idle time.

This matches how agents work. A single task can involve dozens of start, run, and teardown cycles, and the infrastructure needs to keep up.

How teams are using Sandbox

Roo Code

Roo Code builds AI coding agents that work across Slack, Linear, GitHub, and their web interface. When you trigger an agent, you get a running application to interact with, not just a patch.

Snapshots changed their architecture. They snapshot the environment so later runs can restore a known state instead of starting from scratch, skipping repo cloning, dependency installs, and service boot time.

Blackbox AI

Blackbox AI built Agents HQ, a unified orchestration platform that integrates multiple AI coding agents through a single API. It runs tasks inside Vercel Sandboxes.

This supports horizontal scaling for high-volume concurrent execution. Blackbox can dispatch tasks to multiple agents in parallel, each in an isolated sandbox, without resource contention.

Create your first sandbox with one command in the CLI

Explore the documentation to get started, and check out the open-source SDK.

Read more

Harpreet Arora Dan Fein
https://vercel.com/changelog/vercel-sandboxes-ga Vercel Sandboxes are now generally available 2026-01-30T13:00:00.000Z

Vercel Sandboxes are now generally available, providing an ephemeral compute primitive for safely executing untrusted code.

Sandboxes let teams run AI agent-generated outputs, unverified user uploads, and third-party code without exposing production systems.

Each sandbox runs inside Firecracker microVMs, isolated from your infrastructure, so code running in a sandbox is blocked from accessing environment variables, database connections, and cloud resources.

Sandboxes are in production use by teams including v0, Blackbox AI, and Roo Code.

To bootstrap a simple Node.js application that creates a Vercel sandbox, use the code below:
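The snippet isn't reproduced in this feed; below is a hedged sketch against the open-source @vercel/sandbox SDK. Method names follow the SDK's docs, but check the reference for exact signatures, and note that running it requires Vercel credentials in your environment:

```javascript
// Sketch using the open-source @vercel/sandbox SDK. Argument shapes may
// differ from the SDK's current API; treat this as a starting point.
async function main() {
  const { Sandbox } = await import('@vercel/sandbox');

  // Boot an isolated Firecracker microVM.
  const sandbox = await Sandbox.create();

  // Run a command inside the sandbox, just like on any Linux box.
  const result = await sandbox.runCommand('echo', ['hello from a sandbox']);
  console.log(await result.stdout());

  // Sandboxes are ephemeral: shut down when the task is done.
  await sandbox.stop();
}

main().catch((err) => console.error('sandbox run failed:', err));
```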

Or get started with the CLI by opening an interactive shell:

Explore the documentation to get started, and check out the open-source SDK and CLI.

Read more

Guðmundur Bjarni Ólafsson Laurens Duijvesteijn Tom Lienard Gal Schlezinger Andy Waller Tiago Ventura Loureiro Amy Burns Luke Phillips-Sheard
https://vercel.com/changelog/cubic-joins-the-vercel-agents-marketplace cubic joins the Vercel Agents Marketplace 2026-01-30T13:00:00.000Z

The Vercel Agents Marketplace now includes cubic, an AI code reviewer that deploys thousands of AI agents to find and fix bugs in your PRs and codebase.

Most code review tools only see what changed. cubic sees how those changes connect to everything else. It learns from your team’s past reviews and gets better over time.

Key capabilities include:

  • Catching bugs, regressions, and security vulnerabilities in PRs and existing codebases by continuously running thousands of agents

  • Identifying senior engineers on your team and learning from their comment history

  • Applying fixes automatically through background agents

With cubic handling the first pass, teams spend less time on manual review and more time merging changes. Custom coding standards get enforced across repositories, helping keep code consistent as teams scale.

Get started with cubic or explore the Vercel Agents Marketplace to discover more tools.

Read more

Marketplace Team
https://vercel.com/changelog/assistloop-joins-the-vercel-agents-marketplace AssistLoop joins the Vercel Agents Marketplace 2026-01-30T13:00:00.000Z

AssistLoop is now available in the Vercel Marketplace as an AI-powered customer support integration.

The integration connects natively with Vercel, so adding AI-driven customer support takes minutes. With AssistLoop, teams can:

  • Install AssistLoop with minimal setup using an Agent ID

  • Add AI-powered support directly to Next.js apps

  • Train agents on internal docs, FAQs, or knowledge bases

  • Customize the assistant to match your brand

  • Review conversations and hand off to human support when needed

This integration fits naturally into existing Vercel workflows, with unified billing, automatic environment variables, and no manual configuration. Teams can ship AI-powered support faster without managing separate dashboards or complex setup.

AssistLoop automatically injects NEXT_PUBLIC_ASSISTLOOP_AGENT_ID into your project environment. Add the widget script to your site:
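The snippet isn't reproduced in this feed; below is a hypothetical sketch of wiring the widget up in the browser. The script URL is a placeholder (take the real one from AssistLoop's setup guide); only the NEXT_PUBLIC_ASSISTLOOP_AGENT_ID variable comes from the integration:

```javascript
// HYPOTHETICAL: the real widget URL comes from AssistLoop's docs.
// NEXT_PUBLIC_ASSISTLOOP_AGENT_ID is injected by the Vercel integration.
function assistLoopScriptSrc(agentId) {
  // Placeholder host; swap in the URL from AssistLoop's setup guide.
  return `https://widget.assistloop.example/widget.js?agentId=${encodeURIComponent(agentId)}`;
}

// In the browser, append the script tag once on page load.
function mountAssistLoop() {
  const script = document.createElement('script');
  script.src = assistLoopScriptSrc(process.env.NEXT_PUBLIC_ASSISTLOOP_AGENT_ID);
  script.async = true;
  document.body.appendChild(script);
}
```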

Get started

Deploy the AssistLoop Next.js template from the Marketplace to see it in action.

Read more

Marketplace Team
https://vercel.com/blog/how-stripe-built-a-game-changing-app-in-a-single-flight-with-v0 How Stripe built a game-changing app in a single flight with v0 2026-01-28T13:00:00.000Z

What would traditionally require months of product-development coordination and building across multiple teams was achieved by one person in a single flight.

Read more

Nic Vargus
https://vercel.com/changelog/skew-protection-now-supports-prebuilt-deployments Skew Protection now supports prebuilt deployments 2026-01-28T13:00:00.000Z

Skew Protection can now be used with vercel deploy --prebuilt deployments.

For teams building locally and uploading with --prebuilt, you can now set a custom deploymentId in your next.config.js:
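The config example isn't included in this feed; a minimal sketch, using the deploymentId option named above (the environment variable name and fallback value here are your choice):

```javascript
// next.config.js: set a stable deployment ID for Skew Protection with
// prebuilt deployments. CUSTOM_DEPLOYMENT_ID is an illustrative name.
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Reuse the same ID across prebuilt deploys of one build; bump it
  // when you ship a new version.
  deploymentId: process.env.CUSTOM_DEPLOYMENT_ID ?? 'release-2026-01-28',
};

module.exports = nextConfig;
```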

This ID is written to routes-manifest.json and used by Vercel for skew protection routing. You control the ID lifecycle, using the same ID across multiple prebuilt deployments or updating it when deploying new versions.

This feature enables Skew Protection support for the specific workflow of building applications locally and then uploading them to Vercel.

Learn more about Skew Protection.

Read more

Brooke Mosby
https://vercel.com/changelog/vercel-agent-investigations-now-available-in-slack Vercel Agent investigations now available in Slack 2026-01-28T13:00:00.000Z

Anomaly alerts proactively monitor your application for usage or error anomalies. When we detect an issue, we send an alert by email, Slack or webhook. Vercel Agent investigates anomaly alerts to find out what's happening in your logs and metrics to help you identify the root cause.

With our updated Slack integration, investigations now appear directly in Slack alert messages as a threaded response. This eliminates the need to click into the Vercel dashboard and gives you context to triage the alert directly in Slack.

This feature is available for teams using Observability Plus. 10 investigations are included at no additional cost for Observability Plus subscribers.

Learn more about Vercel Agent investigations.

Read more

Julia Shi Fabio Benedetti Timo Lins Malavika Tadeusz
https://vercel.com/changelog/tag-based-cache-invalidation-now-available-for-all-responses Tag-based cache invalidation now available for all responses 2026-01-28T13:00:00.000Z

Vercel's CDN now supports tag-based cache invalidation, giving you granular control over cached content across all frameworks and backends.

Responses can now be tagged using the Vercel-Cache-Tag header with a comma-separated list of tags. Tags are a new cache organization mechanism that groups related content so you can invalidate it together, rather than purging your entire cache when content changes.
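As a sketch, a Fetch-API route handler might tag its response like this (the Vercel-Cache-Tag header and comma-separated format come from this announcement; the handler shape and tag names are illustrative):

```javascript
// Illustrative route handler that tags its response for later
// invalidation by tag instead of a full cache purge.
function GET() {
  return new Response(JSON.stringify({ products: [] }), {
    headers: {
      'Content-Type': 'application/json',
      // Group this response under two tags; purging either tag
      // invalidates it on the CDN.
      'Vercel-Cache-Tag': 'products,product-42',
      // Standard CDN caching so there is something to invalidate.
      'Cache-Control': 's-maxage=3600',
    },
  });
}
```

The CDN strips the Vercel-Cache-Tag header before the response reaches the client, so tags never leak to browsers.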

This complements existing headers that cache responses on Vercel's CDN, like Cache-Control, CDN-Cache-Control, and Vercel-CDN-Cache-Control, and exposes the same underlying technology that powers Next.js Incremental Static Regeneration (ISR) to any framework or backend.

We recommend Next.js applications continue using Incremental Static Regeneration (ISR) for built-in cache tagging and invalidation without managing cache headers manually.

How it works

After a response has a cache tag, you can invalidate it through dashboard settings, the Vercel CLI, the Function API, or the REST API.

Vercel's CDN reads Vercel-Cache-Tag and strips it before sending the response to the client. If you apply cache tags via rewrites from a parent to a child project, and both projects belong to the same team, cached responses on the parent project also include the corresponding tags from the child project.

This is available starting today on all plans at no additional cost. Read the cache invalidation documentation to learn more.

Read more

Luba Kravchenko Steven Salat
https://vercel.com/blog/how-sensay-went-from-zero-to-product-in-six-weeks How Sensay went from zero to product in six weeks 2026-01-27T13:00:00.000Z

Sensay went from zero to an MVP launch in six weeks by leaning on Vercel previews, feature flags, and instant rollbacks. The team kept one codebase, moved fast through pivots, and shipped without a DevOps team.

Read more

Eric Dodds
https://vercel.com/blog/agents-md-outperforms-skills-in-our-agent-evals AGENTS.md outperforms skills in our agent evals 2026-01-27T13:00:00.000Z

We expected skills to be the solution for teaching coding agents framework-specific knowledge. After building evals focused on Next.js 16 APIs, we found something unexpected.

A compressed 8KB docs index embedded directly in AGENTS.md achieved a 100% pass rate, while skills maxed out at 79% even with explicit instructions telling the agent to use them. Without those instructions, skills performed no better than having no documentation at all.

Here's what we tried, what we learned, and how you can set this up for your own Next.js projects.

The problem we were trying to solve

AI coding agents rely on training data that becomes outdated. Next.js 16 introduces APIs like 'use cache', connection(), and forbidden() that aren't in current model training data. When agents don't know these APIs, they generate incorrect code or fall back to older patterns.

The reverse can also be true, where you're running an older Next.js version and the model suggests newer APIs that don't exist in your project yet. We wanted to fix this by giving agents access to version-matched documentation.

Two approaches for teaching agents framework knowledge

Before diving into results, a quick explanation of the two approaches we tested:

  • Skills are an open standard for packaging domain knowledge that coding agents can use. A skill bundles prompts, tools, and documentation that an agent can invoke on demand. The idea is that the agent recognizes when it needs framework-specific help, invokes the skill, and gets access to relevant docs.

  • AGENTS.md is a markdown file in your project root that provides persistent context to coding agents. Whatever you put in AGENTS.md is available to the agent on every turn, without the agent needing to decide to load it. Claude Code uses CLAUDE.md for the same purpose.

We built a Next.js docs skill and an AGENTS.md docs index, then ran them through our eval suite to see which performed better.

We started by betting on skills

Skills seemed like the right abstraction. You package your framework docs into a skill, the agent invokes it when working on Next.js tasks, and you get correct code. Clean separation of concerns, minimal context overhead, and the agent only loads what it needs. There's even a growing directory of reusable skills at skills.sh.

We expected the agent to encounter a Next.js task, invoke the skill, read version-matched docs, and generate correct code.

Then we ran the evals.

Skills weren't being triggered reliably

In 56% of eval cases, the skill was never invoked. The agent had access to the documentation but didn't use it. Adding the skill produced no improvement over baseline:

| Configuration | Pass Rate | vs Baseline |
| --- | --- | --- |
| Baseline (no docs) | 53% | |
| Skill (default behavior) | 53% | +0pp |

Zero improvement. The skill existed, the agent could use it, and the agent chose not to. On the detailed Build/Lint/Test breakdown, the skill actually performed worse than baseline on some metrics (58% vs 63% on tests), suggesting that an unused skill in the environment may introduce noise or distraction.

This isn't unique to our setup. Agents not reliably using available tools is a known limitation of current models.

Explicit instructions helped, but wording was fragile

We tried adding explicit instructions to AGENTS.md telling the agent to use the skill.

This improved the trigger rate to 95%+ and boosted the pass rate to 79%.

| Configuration | Pass Rate | vs Baseline |
| --- | --- | --- |
| Baseline (no docs) | 53% | |
| Skill (default behavior) | 53% | +0pp |
| Skill with explicit instructions | 79% | +26pp |

A solid improvement. But we discovered something unexpected about how the instruction wording affected agent behavior.

Different wordings produced dramatically different results:

| Instruction | Behavior | Outcome |
| --- | --- | --- |
| "You MUST invoke the skill" | Reads docs first, anchors on doc patterns | Misses project context |
| "Explore project first, then invoke skill" | Builds mental model first, uses docs as reference | Better results |

Same skill. Same docs. Different outcomes based on subtle wording changes.

In one eval (the 'use cache' directive test), the "invoke first" approach wrote correct page.tsx but completely missed the required next.config.ts changes. The "explore first" approach got both.

This fragility concerned us. If small wording tweaks produce large behavioral swings, the approach feels brittle for production use.

Building evals we could trust

Before drawing conclusions, we needed evals we could trust. Our initial test suite had ambiguous prompts, tests that validated implementation details rather than observable behavior, and a focus on APIs already in model training data. We weren't measuring what we actually cared about.

We hardened the eval suite by removing test leakage, resolving contradictions, and shifting to behavior-based assertions. Most importantly, we added tests targeting Next.js 16 APIs that aren't in model training data.

APIs in our focused eval suite:

  • connection() for dynamic rendering

  • 'use cache' directive

  • cacheLife() and cacheTag()

  • forbidden() and unauthorized()

  • proxy.ts for API proxying

  • Async cookies() and headers()

  • after(), updateTag(), refresh()

All the results that follow come from this hardened eval suite. Every configuration was judged against the same tests, with retries to rule out model variance.

The hunch that paid off

What if we removed the decision entirely? Instead of hoping agents would invoke a skill, we could embed a docs index directly in AGENTS.md. Not the full documentation, just an index that tells the agent where to find specific doc files that match your project's Next.js version. The agent can then read those files as needed, getting version-accurate information whether you're on the latest release or maintaining an older project.

We added a key instruction to the injected content.

This tells the agent to consult the docs rather than rely on potentially outdated training data.

The results surprised us

We ran the hardened eval suite across all four configurations:

Final pass rates:

| Configuration | Pass Rate | vs Baseline |
| --- | --- | --- |
| Baseline (no docs) | 53% | |
| Skill (default behavior) | 53% | +0pp |
| Skill with explicit instructions | 79% | +26pp |
| AGENTS.md docs index | 100% | +47pp |

On the detailed breakdown, AGENTS.md achieved perfect scores across Build, Lint, and Test.

| Configuration | Build | Lint | Test |
| --- | --- | --- | --- |
| Baseline | 84% | 95% | 63% |
| Skill (default behavior) | 84% | 89% | 58% |
| Skill with explicit instructions | 95% | 100% | 84% |
| AGENTS.md docs index | 100% | 100% | 100% |

This wasn't what we expected. The "dumb" approach (a static markdown file) outperformed the more sophisticated skill-based retrieval, even when we fine-tuned the skill triggers.

Why does passive context beat active retrieval?

Our working theory comes down to three factors.

  1. No decision point. With AGENTS.md, there's no moment where the agent must decide "should I look this up?" The information is already present.

  2. Consistent availability. Skills load asynchronously and only when invoked. AGENTS.md content is in the system prompt for every turn.

  3. No ordering issues. Skills create sequencing decisions (read docs first vs. explore project first). Passive context avoids this entirely.

Addressing the context bloat concern

Embedding docs in AGENTS.md risks bloating the context window. We addressed this with compression.

The initial docs injection was around 40KB. We compressed it down to 8KB (an 80% reduction) while maintaining the 100% pass rate. The compressed format uses a pipe-delimited structure that packs the docs index into minimal space:
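The actual index isn't reproduced here; an illustrative sketch of a pipe-delimited index in this spirit, with hypothetical keys and file paths:

```text
caching|use-cache:.next-docs/caching/use-cache.md|cacheLife:.next-docs/caching/cache-life.md
rendering|connection:.next-docs/rendering/connection.md
auth|forbidden:.next-docs/auth/forbidden.md|unauthorized:.next-docs/auth/unauthorized.md
```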

The full index covers every section of the Next.js documentation:

The agent knows where to find docs without having full content in context. When it needs specific information, it reads the relevant file from the .next-docs/ directory.

Try it yourself

One command sets this up for your Next.js project:

npx @next/codemod@canary agents-md

This functionality is part of the official @next/codemod package.

This command does three things:

  1. Detects your Next.js version

  2. Downloads matching documentation to .next-docs/

  3. Injects the compressed index into your AGENTS.md

If you're using an agent that respects AGENTS.md (like Cursor or other tools), the same approach works.

What this means for framework authors

Skills aren't useless. The AGENTS.md approach provides broad, horizontal improvements to how agents work with Next.js across all tasks. Skills work better for vertical, action-specific workflows that users explicitly trigger, like "upgrade my Next.js version," "migrate to the App Router," or applying framework best practices. The two approaches complement each other.

That said, for general framework knowledge, passive context currently outperforms on-demand retrieval. If you maintain a framework and want coding agents to generate correct code, consider providing an AGENTS.md snippet that users can add to their projects.

Practical recommendations:

  • Don't wait for skills to improve. The gap may close as models get better at tool use, but results matter now.

  • Compress aggressively. You don't need full docs in context. An index pointing to retrievable files works just as well.

  • Test with evals. Build evals targeting APIs not in training data. That's where doc access matters most.

  • Design for retrieval. Structure your docs so agents can find and read specific files rather than needing everything upfront.

The goal is to shift agents from pre-training-led reasoning to retrieval-led reasoning. AGENTS.md turns out to be the most reliable way to make that happen.


Research and evals by Jude Gao. CLI available at npx @next/codemod@canary agents-md

Read more

Jude Gao
https://vercel.com/changelog/introducing-the-vercel-api-cli-command Introducing the vercel api CLI command 2026-01-27T13:00:00.000Z

[email protected] adds a new api command, giving direct access to the full suite of Vercel APIs from your terminal.

The api command provides a direct access point for AI agents to interact with Vercel through the CLI. Agents like Claude Code can access Vercel directly with no additional configuration required. If an agent has access to the environment and the Vercel CLI, it inherits the user's access permissions automatically.

List available APIs with vercel api ls, build requests interactively with vercel api, or send requests directly with vercel api [endpoint] [options].

Get started with npx vercel@latest api --help.

Read more

Tom Knickman
https://vercel.com/blog/agent-skills-explained-an-faq Agent skills explained: An FAQ 2026-01-26T13:00:00.000Z

Learn what agent skills are, how to install them, how agents use them, and best practices for implementation.

Read more

Eric Dodds Andrew Qu
https://vercel.com/changelog/trinity-large-preview-is-on-ai-gateway Trinity Large Preview is on AI Gateway 2026-01-26T13:00:00.000Z

You can now access Trinity Large Preview via AI Gateway with no other provider accounts required.

Trinity Large Preview is optimized for reasoning-intensive workloads, including math, coding tasks, and complex multi-step agent workflows. It is designed to handle extended multi-turn interactions efficiently while maintaining high inference throughput.

To use this model, set model to arcee-ai/trinity-large-preview in the AI SDK:
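A minimal sketch with the AI SDK's `generateText` (the prompt is illustrative, and this assumes an `AI_GATEWAY_API_KEY` is configured in the environment):

```typescript
import { generateText } from 'ai';

// Plain model strings route through Vercel AI Gateway,
// assuming AI_GATEWAY_API_KEY is set in the environment.
const { text } = await generateText({
  model: 'arcee-ai/trinity-large-preview',
  prompt: 'Prove that the sum of two odd integers is even.',
});

console.log(text);
```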

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/kimi-k2-5-on-ai-gateway Kimi K2.5 is live on AI Gateway 2026-01-26T13:00:00.000Z

You can now access Kimi K2.5 via AI Gateway with no other provider accounts required.

Kimi K2.5 is Moonshot AI's most intelligent and versatile model yet, achieving open-source state-of-the-art performance across agent tasks, coding, visual understanding, and general intelligence. It has more advanced coding abilities compared to previous iterations, especially with frontend code quality and design expressiveness. This enables the creation of fully functional interactive user interfaces with dynamic layouts and animations.

To use this model, set model to moonshotai/kimi-k2.5 in the AI SDK:
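A minimal streaming sketch with the AI SDK's `streamText` (prompt is illustrative; assumes an `AI_GATEWAY_API_KEY` is configured in the environment):

```typescript
import { streamText } from 'ai';

// Stream tokens from Kimi K2.5 through AI Gateway.
const result = streamText({
  model: 'moonshotai/kimi-k2.5',
  prompt: 'Build a responsive pricing table component in React.',
});

for await (const part of result.textStream) {
  process.stdout.write(part);
}
```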

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/qwen-3-max-thinking-now-available-on-ai-gateway Qwen 3 Max Thinking now available on AI Gateway 2026-01-26T13:00:00.000Z

You can now access Qwen 3 Max Thinking via AI Gateway with no other provider accounts required.

Qwen 3 Max Thinking integrates thinking and non-thinking modes for improved performance on complex reasoning tasks. The model autonomously selects and uses its built-in search, memory, and code interpreter tools during conversations without requiring manual tool selection. The tools reduce hallucinations and provide real-time information.

To use this model, set model to alibaba/qwen3-max-thinking in the AI SDK:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/claude-code-max-via-ai-gateway-available-now-for-claude-code Claude Code Max via AI Gateway, available now for Claude Code 2026-01-26T13:00:00.000Z

AI Gateway now supports the Claude Code Max subscription for the Claude Code CLI. This allows developers to use their existing subscription to Anthropic models at no additional cost while getting unified observability, usage tracking, and monitoring through Vercel’s platform.

Setup

Set up your environment variables in your shell configuration file (~/.zshrc or ~/.bashrc)
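A sketch of the exports (the base URL is an assumption based on AI Gateway's Anthropic-compatible endpoint; `ANTHROPIC_BASE_URL` and `ANTHROPIC_CUSTOM_HEADERS` are standard Claude Code settings):

```shell
# Route Claude Code's Anthropic traffic through AI Gateway.
export ANTHROPIC_BASE_URL="https://ai-gateway.vercel.sh"
# Pass the Gateway key in its own header, separate from the
# Authorization header that carries your Claude subscription credentials.
export ANTHROPIC_CUSTOM_HEADERS="x-ai-gateway-api-key: your-ai-gateway-api-key"
```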

Replace your-ai-gateway-api-key with your actual AI Gateway API key.

Start Claude Code

Log in with your Claude subscription

If you're not already logged in, Claude Code will prompt you to authenticate. Choose Option 1 - Claude account with subscription and log in with your Anthropic account.

If you encounter issues, try logging out with claude /logout and logging in again.

Your Claude Code requests now route through AI Gateway, giving you full visibility into usage patterns and costs while using your Max subscription.

How it works

When you configure Claude Code to use AI Gateway, Claude Code continues to authenticate with Anthropic. It sends its Authorization header and AI Gateway acts as either a passthrough proxy to Anthropic or, when it needs to fall back, a router to other providers.

Since the Authorization header is reserved for Claude subscription credentials, AI Gateway uses a separate header x-ai-gateway-api-key for its own authentication. This allows both auth mechanisms to coexist.

Read more about how to configure Claude Code Max with AI Gateway in the docs.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/live-model-performance-metrics-accessible-via-ai-gateway Live model performance metrics accessible via AI Gateway 2026-01-26T13:00:00.000Z

AI Gateway now displays throughput and latency metrics across hundreds of models, helping you choose the right model based on live performance data.

Metrics appear in three places and are updated every hour:

  • Model list: Best performance per model (P50 latency and throughput)

  • Model detail pages: Provider-level performance breakdown

  • REST API: Rolling endpoint performance aggregates (latency and throughput, P50/P95)

Model list

The AI Gateway model list now includes sortable columns for latency and throughput. Each row displays the best P50 metrics (lowest latency, highest throughput) for that model across all its available providers. Metrics are updated every hour and based on live AI Gateway customer requests.

Sort by throughput to find the fastest token generation, or by latency to find models with the quickest time-to-first-token.

Model detail pages

On the individual model pages, you can see P50 latency and throughput for each provider that has recorded usage. This helps you compare provider performance for the same model and choose the best option for your use case.

To access these pages, click on any model in the model list to get a more detailed view of the breakdown across all the providers that carry the model in AI Gateway. Metrics are refreshed hourly and only appear for providers with sufficient traffic.

Here is an example for openai/gpt-oss-120b:

Similar to the overall model list, you can sort by latency and throughput across providers on the model detail pages.

REST API

These metrics are also available programmatically via the endpoints REST API. To use this, replace [ai-gateway-string] with the creator/model-name for the model of interest.
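For example (the exact endpoint path is an assumption following the `/v1/models` base; substitute your model of interest):

```shell
# Hypothetical request shape: per-provider metrics for zai/glm-4.7.
curl -s "https://ai-gateway.vercel.sh/v1/models/zai/glm-4.7/endpoints" \
  -H "Authorization: Bearer $AI_GATEWAY_API_KEY"
```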

This returns live hourly P50 and P95 latency (ms TTFT) and throughput (T/s) for the specified model, by provider. Here is an example output from the endpoint for the Cerebras provider for zai/glm-4.7.

If you want to query the full list of models, you can also use the model metrics endpoint in conjunction with https://ai-gateway.vercel.sh/v1/models.

Read more

Jeremy Philemon Walter Korman Jerilyn Zheng
https://vercel.com/changelog/skills-v1-1-1-interactive-discovery-open-source-release-and-agent-support Skills v1.1.1: Interactive discovery, open source release, and agent support 2026-01-26T13:00:00.000Z

[email protected] adds interactive skill discovery and is now fully open source.

The new interactive discovery replaces the deprecated npx add-skill command with the updated npx skills interface, keeping the workflow simple for developers while giving agents a clear path to discover skills programmatically.

You can now use npx skills find to search as you type and discover skills interactively. For AI agents, Skills includes a meta "find-skills" skill, along with a non-interactive mode designed for automated workflows, and support for 27 coding agents.

Skills maintenance is also simpler with the new npx skills update command, which refreshes your local skills without manual steps.

The full codebase is available on GitHub at Skills.

Migration

The previous npx add-skill command is deprecated. Use npx skills find for interactive discovery, and use npx skills update to refresh existing skills.

Get started with npx skills@latest or explore the Skills repository.

Read more

Andrew Qu
https://vercel.com/changelog/summaries-of-cve-2025-59471-and-cve-2025-59472 Summaries of CVE-2025-59471 and CVE-2025-59472 2026-01-26T13:00:00.000Z

Two medium-severity denial-of-service vulnerabilities were discovered in self-hosted Next.js applications. Both issues can cause server crashes through memory exhaustion under specific configurations. No data exposure or privilege escalation is possible. 

Applications hosted on Vercel’s platform are not affected by these issues, and require no customer action.

Summary

CVE-2025-59471 (CVSS 5.9) affects the Image Optimizer when external image optimization is enabled via remotePatterns. The /_next/image endpoint loads remote images fully into memory without enforcing a maximum size, allowing an attacker to trigger out-of-memory conditions using very large images hosted on an allowed domain.

CVE-2025-59472 (CVSS 5.9) affects applications with Partial Pre-Rendering (PPR) enabled in minimal mode. The PPR resume endpoint accepts unauthenticated POST requests and processes attacker-controlled data, allowing memory exhaustion through unbounded request buffering or decompression.

Affected Versions

CVE-2025-59471

  • Next.js versions >=10 through <15.5.10

  • Next.js versions >=16 through <16.1.5

CVE-2025-59472

  • Next.js versions >=15 through <15.6.0-canary.61

  • Next.js versions >=16 through <16.1.5

Impact

Both vulnerabilities can cause the Node.js process to terminate due to memory exhaustion, resulting in application downtime.

CVE-2025-59471 requires external image optimization to be enabled and the attacker to control a large image hosted on an allowed domain.

CVE-2025-59472 only affects applications running with the experimental.ppr: true or cacheComponents: true configuration options and NEXT_PRIVATE_MINIMAL_MODE=1 as an environment variable.

Resolution

Fixed in:

  • 15.5.10

  • 15.6.0-canary.61

  • 16.1.5

  • 16.2.0-canary.9

Workaround:

For self-hosted deployments unable to upgrade immediately:

  • Restrict or remove untrusted remotePatterns

  • Disable Partial Pre-Rendering or minimal mode

  • Apply strict request size limits at the reverse proxy layer
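For Next.js self-hosters, the first two workarounds can be sketched in next.config.ts (hostnames and option values are illustrative, not the documented fix):

```typescript
// next.config.ts — illustrative workaround sketch for self-hosted apps
// that cannot upgrade yet.
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  images: {
    // Only optimize remote images from hosts you control.
    remotePatterns: [{ protocol: 'https', hostname: 'images.example.com' }],
  },
  experimental: {
    // Leave Partial Pre-Rendering off until a patched version is deployed.
    ppr: false,
  },
};

export default nextConfig;
```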

Credit

We thank Andrew MacPherson for their responsible disclosure through our bug bounty program.

References

Read more

Josh Story Andy Riancho Jimmy Lai
https://vercel.com/changelog/summary-of-cve-2026-23864 Summary of CVE-2026-23864 2026-01-26T13:00:00.000Z

Summary

Multiple high-severity vulnerabilities in React Server Components were responsibly disclosed. Importantly, these vulnerabilities do not allow for Remote Code Execution.

We created new rules to address these vulnerabilities and deployed them to the Vercel WAF to automatically protect all projects hosted on Vercel at no cost. However, do not rely on the WAF for full protection. Immediate upgrades to a patched version are required.

Impact

React CVE-2026-23864 (CVSS 7.5)

CVE-2026-23864 addresses multiple denial of service vulnerabilities in React Server Components. The vulnerabilities are triggered by sending specially crafted HTTP requests to Server Function endpoints, and could lead to server crashes, out-of-memory exceptions, or excessive CPU usage, depending on the vulnerable code path being exercised, the application configuration, and the application code.

These vulnerabilities are present in versions 19.0.x, 19.1.x, and 19.2.x of the following packages:

  • react-server-dom-parcel

  • react-server-dom-webpack

  • react-server-dom-turbopack

These packages are included in the following frameworks and bundlers:

  • Next.js: 13.x, 14.x, 15.x, and 16.x.

  • Other frameworks and plugins that embed or depend on React Server Components implementation (e.g., Vite, Parcel, React Router, RedwoodSDK, Waku)

Resolution

After creating mitigations to address this vulnerability, we deployed them across our globally-distributed platform to protect our customers. We still recommend upgrading to the latest patched version.

Updated releases of React and affected downstream frameworks include fixes to prevent this issue. All users should upgrade to a patched version as soon as possible.

Fixed in

  • React: 19.0.4, 19.1.5, 19.2.4.

  • Next.js: 15.0.8, 15.1.12, 15.2.9, 15.3.9, 15.4.11, 15.5.10, 15.6.0-canary.61, 16.0.11, 16.1.5, 16.2.0-canary.9

Frameworks and bundlers using the aforementioned packages should install the latest versions provided by their respective maintainers.

Credit

We thank Mufeed VH from Winfunc Research, Joachim Viide, RyotaK from GMO Flatt Security and Xiangwei Zhang of Tencent Security YUNDING LAB for their responsible disclosure.

References

Read more

Josh Story Jimmy Lai Andy Riancho
https://vercel.com/changelog/use-ai-gateway-with-clawdbot Use AI Gateway with Clawdbot 2026-01-24T13:00:00.000Z

Clawdbot is a personal AI assistant powered by Claude with persistent memory. It can browse the web, run shell commands, and manage files across any operating system.

You can use Clawdbot with Vercel AI Gateway to access hundreds of models from multiple providers through a single endpoint. AI Gateway provides unified API access across models without managing separate API keys.

Create an API key in the AI Gateway dashboard, then install Clawdbot:

Run the onboarding wizard:

Select Vercel AI Gateway as your provider and enter your AI Gateway API key.

You can then choose from hundreds of available models. Your AI assistant is now running and ready to help with tasks across your system.

See the AI Gateway docs for more details on Clawdbot and more integrations.

Read more

Timo Lins Jerilyn Zheng
https://vercel.com/changelog/vercel-now-supports-customizing-platform-error-pages Vercel now supports customizing platform error pages 2026-01-23T13:00:00.000Z

You can now customize error pages for platform errors on Vercel, replacing generic error pages with your own branded experiences. Custom error pages display when Vercel encounters uncaught errors like function invocation timeouts or other platform errors.

How it works

You can implement custom error pages using your framework’s conventions, and Vercel will automatically locate them. With Next.js, for example, you can place a 500/page.tsx route in your app or a static 500.html page in the public directory.

To enrich error pages with request-specific context, you can use the following metadata tokens:

  • ::vercel:REQUEST_ID:: - Contains the Vercel request ID

  • ::vercel:ERROR_CODE:: - The specific error code e.g. FUNCTION_INVOCATION_TIMEOUT
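For instance, a static public/500.html could embed both tokens (a hypothetical minimal page; Vercel substitutes the tokens when serving the error):

```html
<!doctype html>
<html>
  <body>
    <h1>Something went wrong</h1>
    <p>Error code: ::vercel:ERROR_CODE::</p>
    <p>Request ID: ::vercel:REQUEST_ID::</p>
  </body>
</html>
```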

This feature is available for Enterprise teams and enabled automatically across all projects. No additional configuration required.

See the documentation to get started or reference the following implementations: Custom error pages with App Router or Custom error pages with public directory.

Read more

Chandan Rao Jas Garcha Sudais Moorad Priyanka Jindal
https://vercel.com/changelog/configure-build-machine-settings-across-all-projects Configure build machine settings across all projects 2026-01-23T13:00:00.000Z

Build and deployment settings can now be configured at the team level and applied across all projects, compared to the previous project-by-project setup.

Build Machines let you choose the compute resources for each build to optimize build times:

  • Standard build machines with 4 vCPUs and 8 GB of memory

  • Enhanced build machines with 8 vCPUs and 16 GB of memory

  • Turbo build machines with 30 vCPUs and 60 GB of memory

On-Demand Concurrent Builds control how many builds can run in parallel and whether builds skip the queue.

You can now apply configurations to all projects at once, or make targeted changes across multiple projects from a single interface.

Get started with team-level settings.

Read more

Cody Wong Marcos Grappeggia Mitul Shah Mehul Kar
https://vercel.com/changelog/faster-deploys-with-improved-function-caching Faster deploys with improved function caching 2026-01-23T13:00:00.000Z

Function uploads are now skipped when code hasn't changed, reducing build times by 400-600ms on average and up to 5 seconds for larger builds.

Previously, deployment-specific environment variables like VERCEL_DEPLOYMENT_ID were included in the function payload, making every deployment unique even with identical code. These variables are now injected at runtime, allowing Vercel to recognize unchanged functions and skip redundant uploads.

This optimization applies to Vercel Functions without a framework, and projects using Python, Go, Ruby, and Rust. Next.js projects will receive the same improvement soon.

The optimization is applied automatically to all deployments with no configuration required.

Learn more about functions and builds in our documentation.

Read more

Andrew Healey Janos Szathmary Javi Velasco Felix Haus
https://vercel.com/blog/testing-if-bash-is-all-you-need Testing if "bash is all you need" 2026-01-22T13:00:00.000Z

We invited Ankur Goyal from Braintrust to share how they tested the "bash is all you need" hypothesis for AI agents.

There's a growing conviction in the AI community that filesystems and bash are the optimal abstraction for AI agents. The logic makes sense: LLMs have been extensively trained on code, terminals, and file navigation, so you should be able to give your agent a shell and let it work.

Even non-coding agents may benefit from this approach. Vercel's recent post on building agents with filesystems and bash showed this by mapping sales calls, support tickets, and other structured data onto the filesystem. The agent greps for relevant sections, pulls what it needs, and builds context on demand.

But there's an alternative view worth testing. Filesystems may be the right abstraction for exploring and retrieving context, but what about querying structured data? We built an eval harness to find out.

Setting up the eval

We tasked agents with querying a dataset of GitHub issues and pull requests. This type of semi-structured data mirrors real-world use cases like customer support tickets or sales call transcripts.

Question complexity ranged from:

  • Simple queries: "How many open issues mention 'security'?"

  • Complex queries: "Find issues where someone reported a bug and later someone submitted a pull request claiming to fix it"

Three agent approaches competed:

  1. SQL agent: Direct database queries against a SQLite database containing the same data

  2. Bash agent: Using just-bash to navigate and query JSON files on the filesystem

  3. Filesystem agent: Basic file tools (search, read) without full shell access

Each agent received the same questions and was scored on accuracy.

Initial results

| Agent | Accuracy | Avg Tokens | Cost | Duration |
| --- | --- | --- | --- | --- |
| SQL | 100% | 155,531 | $0.51 | 45s |
| Bash | 52.7% | 1,062,031 | $3.34 | 401s |
| Filesystem | 63.0% | 1,275,871 | $3.89 | 126s |

SQL dominated. It hit 100% accuracy while bash achieved just 53%. Bash also used 7x more tokens and cost 6.5x more, while taking 9x longer to run. Even basic filesystem tools (search, read) outperformed full bash access, hitting 63% accuracy.

You can explore the SQL experiment, bash experiment, and filesystem experiment results directly.

One surprising finding was that the bash agent generated highly sophisticated shell commands, chaining find, grep, jq, awk, and xargs in ways that rarely appear in typical agent workflows. The model clearly has deep knowledge of shell scripting, but that knowledge didn't translate to better task performance.

Debugging the results

The eval revealed substantive issues requiring attention.

Performance bottlenecks. Commands that should run in milliseconds were timing out at 10 seconds. stat() calls across 68,000 files were the culprit. The just-bash tool received optimizations addressing this.

Missing schema context. The bash agent didn't know the structure of the JSON files it was querying. Adding schema information and example commands to the system prompt helped, but not enough to close the gap.

Eval scoring issues. Hand-checking failed cases revealed several questions where the "expected" answer was actually wrong, or where the agent found additional valid results that the scorer penalized. Five questions received corrections addressing ambiguities or dataset mismatches.

  • "Which repositories have the most unique issue reporters" was ambiguous between org-level and repo-level grouping

  • Several questions had expected outputs that didn't match the actual dataset

  • The bash agent sometimes found more valid results than the reference answers included

The Vercel team submitted a PR with the corrections.

After fixes to both just-bash and the eval itself, the performance gap narrowed considerably.

The hybrid approach

Then we tried a different idea. Instead of choosing one abstraction, give the agent both:

  • Let it use bash to explore and manipulate files

  • Also provide access to a SQLite database when that's the right tool

The hybrid agent developed an interesting behavior. It would run SQL queries, then verify results by grepping through the filesystem. This double-checking is why the hybrid approach consistently hits 100% accuracy, while pure SQL occasionally gets things wrong.

You can explore the hybrid experiment results directly.

The tradeoff is cost. The hybrid approach uses roughly two times as many tokens as pure SQL, since it reasons about tool choice and verifies its work.

Key learnings

After all the fixes to just-bash, the eval dataset, and data loading issues, the hybrid bash + SQLite agent emerged as the most reliable approach. The "winner" wasn't raw accuracy on a single run, but consistent accuracy through self-verification.

Over 200 messages and hundreds of traces later, we had:

  • Fixed performance bottlenecks in just-bash

  • Corrected five ambiguous or wrong expected answers in the eval

  • Found a data loading bug that caused off-by-one errors

  • Watched agents develop sophisticated verification strategies

The bash agent's tendency to check its own work turned out to be valuable not just for accuracy, but also for surfacing problems that would have gone unnoticed with a pure SQL approach.

What this means for agent design

For structured data with clear schemas, SQL remains the most direct path. It's fast, well-understood, and uses fewer tokens.

For exploration and verification, bash provides flexibility that SQL can't match. Agents can inspect files, spot-check results, and catch edge cases through filesystem access.

But the bigger lesson is about evals themselves. The back-and-forth between Braintrust and the Vercel team, with detailed traces at every step, is what actually improved the tools and the benchmark. Without that visibility, we'd still be debating which abstraction "won" based on flawed data.

Run your own benchmarks

The eval harness is open source.

You can swap in your own:

  • Dataset (customer tickets, sales calls, logs, whatever you're working with)

  • Agent implementations

  • Questions that matter to your use case


This post was written by Ankur Goyal and the team at Braintrust, who build evaluation infrastructure for AI applications. The eval harness is open source and integrates with just-bash from Vercel.

Read more

Ankur Goyal Andrew Qu
https://vercel.com/changelog/new-dashboard-navigation-available New dashboard navigation available 2026-01-22T13:00:00.000Z

A redesign of the navigation in the dashboard is now available as an opt-in experience. This new navigation maintains full functionality while streamlining access to your most-used features.

  • New Sidebar — Moved horizontal tabs to a resizable sidebar that can be hidden when not needed

  • Consistent Tabs — Unified sidebar navigation with consistent links across team and project levels

  • Improved Order — Reordered navigation items to prioritize the most common developer workflows

  • Projects as Filters — Switch between team and project versions of the same page in one click

  • Optimized for Mobile — New mobile navigation featuring a floating bottom bar optimized for one-handed use

Try the new navigation today before it rolls out to all users.

Read more

wits Timo Lins Christopher Skillicorn Andrew Gadzik Mery Kaftar
https://vercel.com/changelog/filesystem-snapshots-supported-on-vercel-sandboxes Filesystem snapshots supported on Vercel Sandboxes 2026-01-22T13:00:00.000Z

Vercel Sandbox now supports filesystem snapshots to capture your state. You can capture a Sandbox's complete filesystem state as a snapshot and launch new Sandboxes from that snapshot using the Sandbox API.

This eliminates repeated setup when working with expensive operations like dependency installation, builds, or fixture creation. Create the environment once, snapshot it, then reuse that exact filesystem state across multiple isolated runs.

How snapshots work

Snapshots capture the entire filesystem of a running Sandbox. New Sandboxes can launch from that snapshot, providing immediate access to pre-installed dependencies and configured environments.

Key capabilities

  • Create a snapshot from any running Sandbox with sandbox.snapshot()

  • Launch new Sandboxes from snapshots via source: { type: 'snapshot', snapshotId }

  • Reuse the same snapshot with multiple Sandboxes for parallel testing and experimentation
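The capabilities above can be sketched with the Sandbox SDK (beyond sandbox.snapshot() and the source option shown in the docs, details such as runCommand and snapshot.id are assumptions about the API surface):

```typescript
import { Sandbox } from '@vercel/sandbox';

// One-time setup: create a sandbox and install dependencies.
const sandbox = await Sandbox.create();
await sandbox.runCommand('npm', ['install']);

// Capture the complete filesystem state.
const snapshot = await sandbox.snapshot();

// Later runs start from the snapshot with dependencies pre-installed.
const warm = await Sandbox.create({
  source: { type: 'snapshot', snapshotId: snapshot.id },
});
```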

See the documentation to get started with snapshots.

Read more

Guðmundur Bjarni Ólafsson Laurens Duijvesteijn Tom Lienard Andy Waller Tiago Ventura Loureiro
https://vercel.com/changelog/cron-jobs-now-visible-in-deployment-summary Cron Jobs now visible in Deployment Summary 2026-01-21T13:00:00.000Z

You can now view the Cron Jobs from your application in the Deployment Summary section of Deployments.

Try it out by deploying a Vercel Cron Job template. Once you deploy, Vercel automatically registers your cron jobs. Learn more in the Cron Jobs documentation.

Read more

Mehul Kar Andy Schneider
https://vercel.com/changelog/ai-code-elements AI Code Elements 2026-01-21T13:00:00.000Z

Today we're releasing a brand new set of components designed to help you build the next generation of IDEs, coding apps and background agents.

<Agent />

A composable component for displaying an AI SDK ToolLoopAgent configuration with model, instructions, tools, and output schema.

<CodeBlock />

Building on what we've learned from Streamdown, we massively improved the code block component with support for a header, icon, filename, multiple languages and a more performant renderer.

<Commit />

The Commit component displays commit details including hash, message, author, timestamp, and changed files.

<EnvironmentVariables />

The EnvironmentVariables component displays environment variables with value masking, visibility toggle, and copy functionality.

<FileTree />

The FileTree component displays a hierarchical file system structure with expandable folders and file selection.

<PackageInfo />

The PackageInfo component displays package dependency information including version changes and change type badges.

<Sandbox />

The Sandbox component provides a structured way to display AI-generated code alongside its execution output in chat conversations. It features a collapsible container with status indicators and tabbed navigation between code and output views.

<SchemaDisplay />

The SchemaDisplay component visualizes REST API endpoints with HTTP methods, paths, parameters, and request/response schemas.

<Snippet />

The Snippet component provides a lightweight way to display terminal commands and short code snippets with copy functionality. Built on top of shadcn/ui InputGroup, it's designed for brief code references in text.

<StackTrace />

The StackTrace component displays formatted JavaScript/Node.js error stack traces with clickable file paths, internal frame dimming, and collapsible content.

<Terminal />

The Terminal component displays console output with ANSI color support, streaming indicators, and auto-scroll functionality.

<TestResults />

The TestResults component displays test suite results (like Vitest) including summary statistics, progress, individual tests, and error details.

Bonus: <Attachments />

Not code related, but since attachments were being used in Message, PromptInput, and more, we broke them out into their own component: a flexible, composable attachment component for displaying files, images, videos, audio, and source documents.

Read more

Hayden Bleasel
https://vercel.com/changelog/use-skills-in-your-ai-sdk-agents-via-bash-tool Use skills in your AI SDK agents via bash-tool 2026-01-21T13:00:00.000Z

Skills support is now available in bash-tool, so your AI SDK agents can use the skills pattern with filesystem context, Bash execution, and sandboxed runtime access.

This gives your agent a consistent way to pull in the right context for a task, using the same isolated execution model that powers filesystem-based context retrieval.

This lets you give your agent access to the wide variety of publicly available skills, or write your own proprietary skills and use them privately in your agent.

Read the bash-tool changelog for background and check out createSkillTool documentation.

Read more

Malte Ubl
https://vercel.com/changelog/apply-code-suggestions-from-vercel-agent-with-one-click Apply code suggestions from Vercel Agent with one click 2026-01-21T13:00:00.000Z

You can now apply suggested code fixes from the Vercel Agent directly in the Vercel Dashboard.

When the Vercel Agent reviews your pull request, suggestions include a View suggestion button that lets you commit the fix to your PR branch, including changes that touch multiple files.

Suggestions open in the dashboard, where you can accept them in bulk or apply them one by one.

After you apply a suggestion, the review thread is automatically resolved. You can also track multiple concurrent Vercel Agent jobs from the Tasks page.

Get started with Vercel Agent code review in the Agent dashboard, or learn more in the documentation.

Read more

Dan Fox Julian Benegas Marcos Grappeggia Tom Dale John Phamous
https://vercel.com/changelog/introducing-the-montreal-canada-vercel-region-yul1 Introducing the Montréal, Canada region (yul1) 2026-01-20T13:00:00.000Z

Montréal, Canada (yul1) is now part of Vercel’s global delivery network, expanding our footprint to deliver lower latency and improved performance for users in Central Canada.

The new Montréal region extends our globally distributed CDN’s caching and compute closer to end users, reducing latency without any changes required from developers. Montréal is generally available and handling production traffic.

Teams can configure Montréal as an execution region for Vercel Functions, powered by Fluid compute to enhance resource efficiency, minimize cold starts, and scale automatically with demand.

Teams with Canadian data residency requirements can also use Montréal to keep execution in Canada.

Learn more about Vercel Regions and Montréal regional pricing.

Read more

Matheus Fernandes
https://vercel.com/changelog/introducing-skills-the-open-agent-skills-ecosystem Introducing skills, the open agent skills ecosystem 2026-01-20T13:00:00.000Z

We released skills, a CLI for installing and managing skill packages for agents.

Install a skill package with npx skills add <package>.

So far, skills has been used to install skills on: amp, antigravity, claude-code, clawdbot, codex, cursor, droid, gemini, gemini-cli, github-copilot, goose, kilo, kiro-cli, opencode, roo, trae, and windsurf.

Today we’re also introducing skills.sh, a directory and leaderboard for skill packages.

Use it to:

  • discover new skills to enhance your agents

  • browse skills by category and popularity

  • track usage stats and installs across the ecosystem

Get started with npx skills add vercel-labs/agent-skills and explore skills.sh.

Read more

Andrew Qu
https://vercel.com/changelog/cron-jobs-now-support-100-per-project-on-every-plan Cron jobs now support 100 per project on every plan 2026-01-20T13:00:00.000Z

Cron jobs on Vercel no longer have per-team limits, and per-project limits have been raised to 100 on all plans.

Previously, all plans had a cap of 20 cron jobs per project, with per-team limits of 2 for Hobby, 40 for Pro, and 100 for Enterprise.

To get started, add cron entries to vercel.json:
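A minimal sketch of the configuration (the path and schedule here are illustrative):

```json
{
  "crons": [
    {
      "path": "/api/refresh-cache",
      "schedule": "0 5 * * *"
    }
  ]
}
```

Each entry maps an HTTP path in your project to a standard cron expression, and Vercel invokes that path on the given schedule.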

You can also deploy the Vercel Cron Job template.

Once you deploy, Vercel automatically registers your cron jobs. Learn more in the Cron Jobs documentation.

Read more

Andy Schneider Malte Ubl Marcos Grappeggia
https://vercel.com/changelog/recraft-image-models-now-on-ai-gateway Recraft image models now on AI Gateway 2026-01-19T13:00:00.000Z

Recraft models are now available via Vercel's AI Gateway with no other provider accounts required. You can access Recraft's image models, V3 and V2.

These image models excel at photorealism, accurate text rendering, and complex prompt following. V3 supports long multi-word text generation with precise positioning, anatomical correctness, and native vector output. It includes 20+ specialized styles from realistic portraits to pixel art.

To use this model, set model to recraft/recraft-v3 in the AI SDK. This model supports generateImage.
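As a sketch with the AI SDK (the prompt is illustrative, and an AI Gateway API key is assumed to be configured in the environment):

```typescript
import { experimental_generateImage as generateImage } from 'ai';

// The model string routes through AI Gateway; no Recraft account needed.
const { image } = await generateImage({
  model: 'recraft/recraft-v3',
  prompt: 'A bakery storefront sign reading "FRESH BREAD", photorealistic',
});
```

The same call with recraft/recraft-v2 targets the V2 model instead.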

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/improved-environment-variables-ui Improved environment variables UI 2026-01-16T13:00:00.000Z

The environment variables UI is now easier to manage across shared and project environment variables.

You spend less time scrolling, get larger hit targets, and see details only when you need them.

Learn more in the environment variables documentation.

Read more

John Phamous Manuel Muñoz Solera Mery Kaftar
https://vercel.com/blog/aws-databases-are-now-live-on-the-vercel-marketplace-and-v0 AWS databases are now live on the Vercel Marketplace and v0 2026-01-15T13:00:00.000Z

AWS databases are now available in the Vercel Marketplace and v0.

Read more

Tom Occhino Hedi Zandi
https://vercel.com/changelog/ssh-into-running-sandboxes-with-the-sandbox-cli SSH into running Vercel Sandboxes with the CLI 2026-01-15T13:00:00.000Z

You can now open secure, interactive shell sessions to running Sandboxes with the Vercel Sandbox CLI.

Note: While you’re connected, the Sandbox timeout is automatically extended in 5-minute increments to help avoid unexpected disconnections, for up to 5 hours.

Learn more in the Sandbox CLI docs.

Read more

Gal Schlezinger Tom Lienard
https://vercel.com/changelog/experimental-build-mode-hono-express Experimental build mode for Hono and Express projects 2026-01-15T13:00:00.000Z

Users can opt in to an experimental build mode for Hono and Express projects, which lets you filter logs by route, similar to Next.js.

It also updates the build pipeline with better module resolution:

  • Relative imports no longer require file extensions

  • TypeScript path aliases are supported

  • Improved ESM and CommonJS interoperability

To enable it, set VERCEL_EXPERIMENTAL_BACKENDS=1 in your project's environment variables.

Read more

Jeff See
https://vercel.com/changelog/openresponses-api-now-supported-on-vercel-ai-gateway OpenResponses API now supported on Vercel AI Gateway 2026-01-15T13:00:00.000Z

Vercel AI Gateway is a day 0 launch partner for the OpenResponses API, an open-source specification from OpenAI for multi-provider AI interactions.

OpenResponses provides a unified interface for text generation, streaming, tool calling, image input, and reasoning across providers.

AI Gateway supports OpenResponses for:

  • Text generation: Send messages and receive responses from any supported model.

  • Streaming: Receive tokens as they're generated via server-sent events.

  • Tool calling: Define functions that models can invoke with structured arguments.

  • Image input: Send images alongside text for vision-capable models.

  • Reasoning: Enable extended thinking with configurable effort levels.

  • Provider fallbacks: Configure automatic fallback chains across models and providers.

Use OpenResponses with your AI Gateway key, and switch models across providers by changing the model string.
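As an illustrative sketch (the endpoint path follows the OpenAI Responses shape and the model string is only an example; confirm both against the AI Gateway documentation):

```typescript
// Send an OpenResponses request through AI Gateway. Switching providers
// only requires changing the model string.
const res = await fetch('https://ai-gateway.vercel.sh/v1/responses', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'anthropic/claude-sonnet-4.5', // example model string
    input: 'Explain server-sent events in one paragraph.',
  }),
});
const data = await res.json();
```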

You can also use OpenResponses for more complex cases, like tool calling.

Read the OpenResponses API documentation or view the specification.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/on-demand-vercel-agent-code-reviews On-demand Vercel Agent code reviews 2026-01-15T13:00:00.000Z

You can now trigger a Vercel Agent code review on demand.

When Vercel posts comments on your GitHub pull request, you can now click the Review with Vercel Agent button from the deployment table to trigger a code review.

To run reviews on every pull request without manual triggering, enable automatic code reviews in Team Settings → Agent → Review PRs Automatically.

Get started by installing the Vercel GitHub App or read the documentation.

Read more

Marcos Grappeggia Tom Dale Dan Fox
https://vercel.com/blog/use-perplexity-web-search-with-vercel-ai-gateway Use Perplexity Web Search with Vercel AI Gateway 2026-01-14T13:00:00.000Z

Models are powerful, but they're limited to their training data and knowledge cutoff date. When users ask about today's news, current prices, or the latest API changes, models can offer outdated information or admit they don't know.

Provider-agnostic web search on AI Gateway changes this. With a single line of code, you can give any model the ability to search the web in real-time. It works with OpenAI, Anthropic, Google, and every other provider available through AI Gateway.

Read more

Dan Fein Jerilyn Zheng
https://vercel.com/blog/introducing-react-best-practices Introducing: React Best Practices 2026-01-14T13:00:00.000Z

We've encapsulated 10+ years of React and Next.js optimization knowledge into react-best-practices, a structured repository optimized for AI agents and LLMs.

Read more

Shu Ding Andrew Qu
https://vercel.com/changelog/node-js-runtime-now-defaults-to-version-24-for-vercel-sandbox Node.js runtime now defaults to version 24 for Vercel Sandbox 2026-01-14T13:00:00.000Z

Vercel Sandbox for Node.js now uses Node.js 24 by default. This keeps the Node.js runtime aligned with the latest Node.js features and performance improvements.

If you don’t explicitly configure a runtime, Sandbox will use Node.js 24 (as shown below).
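A sketch with the Sandbox SDK (assumes a configured Vercel environment; the command run inside the sandbox is illustrative):

```typescript
import { Sandbox } from '@vercel/sandbox';

// No runtime specified, so the sandbox boots with the default Node.js 24.
const sandbox = await Sandbox.create();

// Should report a v24.x Node.js version.
const result = await sandbox.runCommand('node', ['--version']);

await sandbox.stop();
```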

Read the Sandbox documentation to learn more.

Read more

Andy Waller
https://vercel.com/changelog/access-perplexity-web-search-on-vercel-ai-gateway-with-any-model Access Perplexity Web Search on Vercel AI Gateway with any model 2026-01-14T13:00:00.000Z

You can now give any model the ability to search the web using Perplexity through Vercel's AI Gateway.

AI Gateway supports Perplexity Search as a universal web search tool that works with all models, regardless of provider. Unlike native search tools that are exclusive to specific providers, Perplexity Search can be added to all models.

To use Perplexity Search with the AI SDK, import gateway.tools.perplexitySearch() from @ai-sdk/gateway and pass it in the tools parameter as perplexity_search to any model.
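Putting that together (the model choice and prompt are illustrative; an AI Gateway key is assumed in the environment):

```typescript
import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';

// perplexity_search attaches to any model, including ones
// without a native web search tool.
const { text } = await generateText({
  model: 'zai/glm-4.7',
  tools: {
    perplexity_search: gateway.tools.perplexitySearch(),
  },
  prompt: 'What is the latest stable Node.js release?',
});
```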

Some example use cases include:

Models without native search: Enable web search on models like zai/glm-4.7, or on models from other providers that don't expose a built-in search tool.

Developer tooling and CI assistants: Get current package versions, recently merged PRs, release notes, or docs updates.

Consistency with fallbacks: Maintain search behavior across multiple providers without rewriting search logic.

For more information, see the AI Gateway Perplexity Web Search docs.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/gpt-5-2-codex-now-available-on-vercel-ai-gateway GPT 5.2 Codex now available on Vercel AI Gateway 2026-01-14T13:00:00.000Z

You can now access GPT 5.2 Codex with Vercel's AI Gateway, with no other provider accounts required. GPT 5.2 Codex combines GPT 5.2's strength in professional knowledge work with GPT 5.1 Codex Max's agentic coding capabilities.

GPT 5.2 Codex is better at long-running coding tasks than its predecessors and handles complex work like large refactors and migrations more reliably. The model has stronger vision performance for more accurate processing of screenshots and charts shared while coding. GPT 5.2 Codex also surpasses GPT 5.1 Codex Max in cyber capabilities, outperforming the previous model in OpenAI's Professional Capture-the-Flag (CTF) cybersecurity eval.

To use GPT 5.2 Codex with the AI SDK, set the model to openai/gpt-5.2-codex:
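For example (the prompt is illustrative; an AI Gateway key is assumed in the environment):

```typescript
import { generateText } from 'ai';

// The model string routes through AI Gateway; no OpenAI account needed.
const { text } = await generateText({
  model: 'openai/gpt-5.2-codex',
  prompt: 'Review this diff for bugs and suggest a safer refactor: ...',
});
```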

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/ai-voice-elements AI Voice Elements 2026-01-14T13:00:00.000Z

Today we're releasing a brand new set of components for AI Elements designed to work with the Transcription and Speech functions of the AI SDK, helping you build the next generation of voice agents, transcription services and apps powered by natural language.

Persona

The Persona component displays an animated AI visual that responds to different conversational states. Built with Rive WebGL2, it provides smooth, high-performance animations for various AI interaction states including idle, listening, thinking, speaking, and asleep. The component supports multiple visual variants to match different design aesthetics.

Speech Input

The SpeechInput component provides an easy-to-use interface for capturing voice input in your application. It uses the Web Speech API for real-time transcription in supported browsers (Chrome, Edge), and falls back to MediaRecorder with an external transcription service for browsers that don't support Web Speech API (Firefox, Safari).

Transcription

The Transcription component provides a flexible render props interface for displaying audio transcripts with synchronized playback. It automatically highlights the current segment based on playback time and supports click-to-seek functionality for interactive navigation.

Audio Player

The AudioPlayer component provides a flexible and customizable audio playback interface built on top of media-chrome. It features a composable architecture that allows you to build audio experiences with custom controls, metadata display, and seamless integration with AI-generated audio content.

Microphone Selector

The MicSelector component provides a flexible and composable interface for selecting microphone input devices. Built on shadcn/ui's Command and Popover components, it features automatic device detection, permission handling, dynamic device list updates, and intelligent device name parsing.

Voice Selector

The VoiceSelector component provides a flexible and composable interface for selecting AI voices. Built on shadcn/ui's Dialog and Command components, it features a searchable voice list with support for metadata display (gender, accent, age), grouping, and customizable layouts. The component includes a context provider for accessing voice selection state from any nested component.

Read more

Hayden Bleasel
https://vercel.com/changelog/reduced-build-times-for-large-projects Reduced build times for large projects 2026-01-14T13:00:00.000Z

We shipped build system optimizations that reduce overhead for projects with many input files, large node_modules, or large build outputs.

Expensive disk operations (large file detection and folder size calculations) are no longer on the critical path for successful builds. These calculations now only run when a build fails, or when you enable the VERCEL_BUILD_SYSTEM_REPORT environment variable.

Builds complete 2.8 seconds faster on average, with larger builds seeing improvements of up to 12 seconds.

See the builds documentation for details.

Read more

Janos Szathmary Andrew Healey
https://vercel.com/blog/nick-bogaty-joins-vercel-as-chief-revenue-officer Nick Bogaty joins Vercel as Chief Revenue Officer 2026-01-13T13:00:00.000Z

It's a thrilling time to work in Sales at Vercel. The web is transitioning from pages to agents, and Vercel is building the self-driving infrastructure to power it. We've assembled a Sales organization that equally understands the continually shifting technical landscape and pressing business needs to stay flexible, move fast, and be secure in the AI era.

We're rethinking how Sales operates, and we're building the most AI-forward go-to-market organization in the industry. To lead this charge, we're welcoming Nick Bogaty as our Chief Revenue Officer.

Read more

Jeanne Grosser
https://vercel.com/changelog/docs-pages-support-markdown-responses Docs pages support Markdown responses 2026-01-13T13:00:00.000Z

You can now request Vercel documentation as Markdown by sending the Accept header with the value text/markdown.
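For example, fetching a docs page as Markdown (the URL shown is the docs root; any docs page works the same way):

```typescript
// The Accept header switches the response from HTML to Markdown.
const res = await fetch('https://vercel.com/docs', {
  headers: { Accept: 'text/markdown' },
});
const markdown = await res.text();
```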

This makes it easier to use docs content in agentic and CLI workflows, indexing pipelines, and tooling that expects text/markdown.

Markdown responses include a sitemap.md link at the end. You and your agent can use it to discover additional docs pages programmatically.

Read more

Anthony Shew
https://vercel.com/changelog/project-level-deployment-suffixes Project-level deployment suffixes 2026-01-13T13:00:00.000Z

You can now set deployment suffixes at the project level, instead of using a single suffix across your team.

Project-level suffixes follow the same requirements as team-level suffixes. Your team must own the domain, and the domain must use Vercel nameservers.

Configure suffixes in your project settings.

Read more

Rhys Sullivan
https://vercel.com/changelog/set-team-wide-defaults-for-deployment-protection Set team-wide defaults for Deployment Protection 2026-01-13T13:00:00.000Z


You can now set a team-wide default for Deployment Protection.

New projects start with Deployment Protection set to Standard Protection, which protects Preview Deployments by default.

Choose the default protection level for new projects: All Deployments, Standard Protection, or None.

If you set All Deployments as the default, every new project is protected as soon as it’s created.

See the Deployment Protection documentation.

Read more

Javier Bórquez Kit Foster
https://vercel.com/changelog/protection-bypass-for-automation-multiple-secrets Protection bypass for automation now supports multiple secrets 2026-01-13T13:00:00.000Z

Vercel projects now support multiple Protection Bypass for Automation secrets.

This makes it easier to rotate secrets and use different secrets for different workflows. Each bypass can also include a note, so it’s easier to track what it’s used for.

Learn more in the Deployment Protection documentation.

Read more

Kit Foster
https://vercel.com/blog/how-mux-shipped-durable-video-workflows-with-their-mux-ai-sdk How Mux shipped durable video workflows with their @mux/ai SDK 2026-01-12T13:00:00.000Z

We invited Dylan Jhaveri from Mux to share how they shipped durable workflows with their @mux/ai SDK.

AI workflows have a frustrating habit of failing halfway through. Your content moderation check passes, you're generating video chapters, and then you hit a network timeout, a rate limit, or a random 500 from a provider having a bad day. Now you're stuck. Do you restart from scratch and pay for that moderation check again? Or do you write a bunch of state management code to remember where you left off?

This is where durable execution changes everything.

When we set out to build @mux/ai, an open-source SDK to help our customers build AI features on top of Mux's video infrastructure, we faced a fundamental question: how do we ship durable workflows in a way that's easy for developers to adopt, without forcing them into complex infrastructure decisions?

The answer was Vercel's Workflow DevKit.

Read more

Dylan Jhaveri
https://vercel.com/changelog/web-interface-guidelines-now-available-as-an-agent-command Web Interface Guidelines now available as an agent command 2026-01-12T13:00:00.000Z

You can now install Vercel's Web Interface Guidelines as a skill/command for your agent.

Run /web-interface-guidelines to review UI code for accessibility, keyboard support, form behavior, animation, performance, and more.

Install with a single command:

curl -fsSL https://vercel.com/design/guidelines/install | bash

Supports Claude Code, Cursor, OpenCode, Windsurf, and Gemini CLI. For other agents, use the command prompt directly or add AGENTS.md to your project.

Learn more about using the Web Interface Guidelines with Agents.

Read more

John Phamous
https://vercel.com/changelog/build-cache-storage-increased-for-larger-build-machines Build cache storage increased for larger build machines 2026-01-12T13:00:00.000Z

Projects that use larger build machines now have increased limits for build caches.

  • Enhanced build machines have a 3 GB limit

  • Turbo build machines have a 7 GB limit

These limit increases come at no additional charge.

Learn more about Vercel's build cache.

Read more

Luke Phillips-Sheard
https://vercel.com/changelog/streamdown-v2 Streamdown v2: Smaller bundle size and new Remend options 2026-01-12T13:00:00.000Z

Today, we're releasing a major update to Streamdown, our drop-in replacement for react-markdown, designed for AI-powered streaming.

Streamdown Plugins

The most requested feature since launch has been a reduced bundle size. Streamdown v2 now ships a much smaller bundle and uses a plugin-based architecture.

Carets

Streamdown now includes built-in caret (cursor) indicators that display at the end of streaming content. Carets provide a visual cue to users that content is actively being generated, similar to a blinking cursor in a text editor.

Configurable Remend

Our underlying markdown-healing library, Remend, is now configurable. You can choose how much healing to apply as markdown streams in.

Read more

Hayden Bleasel
https://vercel.com/blog/how-to-build-agents-with-filesystems-and-bash How to build agents with filesystems and bash 2026-01-09T13:00:00.000Z

Many of us have built complex tooling to feed our agents the right information. It's brittle because we're guessing what the model needs instead of letting it find what it needs. We've found a simpler approach. We replaced most of the custom tooling in our internal agents with a filesystem tool and a bash tool. Our sales call summarization agent went from ~$1.00 to ~$0.25 per call on Claude Opus 4.5, and the output quality improved. We used the same approach for d0, our text-to-SQL agent.

The idea behind this is that LLMs have been trained on massive amounts of code. They've spent countless hours navigating directories, grepping through files, and managing state across complex codebases. If agents excel at filesystem operations for code, they'll excel at filesystem operations for anything. Agents already understand filesystems.

Customer support tickets, sales call transcripts, CRM data, conversation history. Structure it as files, give the agent bash, and the model brings the same capabilities it uses for code navigation.

Read more

Ashka Stephen
https://vercel.com/changelog/limit-on-demand-concurrent-builds-to-one-build-per-branch Limit on-demand concurrent builds to one build per branch 2026-01-09T13:00:00.000Z

On-Demand Concurrent Builds let builds start immediately, without waiting for other deployments to finish.

You can now configure this feature to run one active build per branch. When enabled, deployments to the same branch are queued. After the active build finishes, only the most recent queued deployment starts building. Older queued deployments are skipped. Deployments on different branches can still build concurrently.

Enable this in your project settings or learn more in the documentation.

Read more

Cody Wong Ali Smesseim Andrew Healey Luke Phillips-Sheard Marcos Grappeggia
https://vercel.com/changelog/bookmark-domains-on-vercel-domains Bookmark domains on Vercel Domains 2026-01-08T13:00:00.000Z

You can now bookmark domains on Vercel Domains for purchasing at a later date.

To save a domain, either:

  • Click on a search result, and select "Save for later"

  • Select the bookmark icon on a domain in your cart

You can then view your saved domains and add them to your cart from the "Saved" tab.

Try it now at vercel.com/domains

Read more

Elliot Dauber Ethan Niser
https://vercel.com/blog/how-we-made-v0-an-effective-coding-agent How we made v0 an effective coding agent 2026-01-07T13:00:00.000Z

Last year we introduced the v0 Composite Model Family, and described how the v0 models operate inside a multi-step agentic pipeline. Three parts of that pipeline have had the greatest impact on reliability. These are the dynamic system prompt, a streaming manipulation layer that we call “LLM Suspense”, and a set of deterministic and model-driven autofixers that run after (or while!) the model finishes streaming its response.

What we optimize for

The primary metric we optimize for is the percentage of successful generations. A successful generation is one that produces a working website in v0’s preview instead of an error or blank screen. But the problem is that LLMs running in isolation encounter various issues when generating code at scale.

In our experience, code generated by LLMs can have errors as often as 10% of the time. Our composite pipeline is able to detect and fix many of these errors in real time as the LLM streams the output. This can lead to a double-digit increase in success rates.

Read more

Max Leiter
https://vercel.com/changelog/introducing-bash-tool-for-filesystem-based-context-retrieval Introducing bash-tool for filesystem-based context retrieval 2026-01-07T13:00:00.000Z

We open-sourced bash-tool, the Bash execution engine behind our text-to-SQL agent, which we recently re-architected to reduce token usage and improve the accuracy and overall performance of the agent's responses.

bash-tool gives your agent a way to find the right context by running bash-like commands over files, then returning only the results of those tool calls to the model.

Context windows fill up quickly if you include large amounts of text in a prompt. Agents tend to do well with Unix-style workflows like find, grep, jq, and pipes, so with bash-tool you can keep large context local, in a filesystem, and let the agent use those commands to retrieve smaller slices of context on demand.

bash-tool provides bash, readFile, and writeFile tools for AI SDK agents, works with both in-memory and sandboxed environments, and:

  • runs on top of just-bash, which interprets bash scripts directly in TypeScript without a shell process or arbitrary binary execution

  • lets you preload the filesystem with your files at startup, so your agent can search them when needed without pasting everything into the prompt

  • supports running in-memory or in a custom isolated VM

If you need a real shell, a real filesystem, or custom binaries, you can run the same tool against a Sandbox-compatible API for full VM isolation.

Read more

Malte Ubl Andrew Qu
https://vercel.com/changelog/secure-compute-is-now-self-serve Secure Compute is now self-serve 2026-01-07T13:00:00.000Z

Teams can now create, update and delete Secure Compute networks directly from the Vercel dashboard, the API, and Terraform.

Secure Compute networks provide private connectivity between your Vercel Functions and backend infrastructure and let you control regional placement, addressing, egress and failover of your projects.

Now you can:

  • Manage networks self-serve, with no contract amendment or manual provisioning required

  • Control existing Secure Compute capabilities directly, including Region and Availability Zone selection, active/passive failover, private CIDR selection, and NAT/egress behavior

  • Automate and integrate with full network lifecycle support through the Dashboard, public API, and Terraform, so teams can manage networks interactively or declaratively

  • Coming soon: self-serve Site-to-Site VPN connections via the Dashboard, API, and Terraform; Secure Compute for Pro customers; and PrivateLink connectivity

This is available today for Enterprise teams.

Check out the documentation to get started.

Read more

Yanick Bélanger Lakshay Bhushan Shilpa Apte
https://vercel.com/changelog/vercel-agent-code-reviews-now-follow-your-code-guidelines Vercel Agent code reviews now follow your code guidelines 2026-01-06T13:00:00.000Z

Vercel Agent now applies your repository’s coding guidelines during code reviews.

Add an AGENTS.md file to your repository, or use existing formats like CLAUDE.md, .cursorrules, or .github/copilot-instructions.md.

Agent automatically detects and applies these guidelines to provide context-specific feedback for your codebase.

No configuration required. Learn more about code guidelines.

Read more

Julian Benegas John Phamous Marcos Grappeggia
https://vercel.com/changelog/ai-gateway-support-for-claude-code AI Gateway support for Claude Code 2026-01-05T13:00:00.000Z

You can now use Claude Code through Vercel AI Gateway via its Anthropic-compatible API endpoint.

Route Claude Code requests through AI Gateway to centralize usage and spend, view traces in observability, and benefit from failover between providers for your model of choice.

Log out if you're already logged in, then set these environment variables to configure Claude Code to use AI Gateway:
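A sketch of that environment setup (the base URL is illustrative; confirm the Anthropic-compatible endpoint in the AI Gateway docs):

```shell
# Route Claude Code through AI Gateway (base URL illustrative).
export ANTHROPIC_BASE_URL="https://ai-gateway.vercel.sh"
# Authenticate with your AI Gateway API key.
export ANTHROPIC_AUTH_TOKEN="$AI_GATEWAY_API_KEY"
# Must be empty: Claude Code checks this variable first.
export ANTHROPIC_API_KEY=""
```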

Setting ANTHROPIC_API_KEY to an empty string is required. Claude Code checks this variable first, and if it's set to a non-empty value, it will use that instead of ANTHROPIC_AUTH_TOKEN.

Start Claude Code. Requests will route through AI Gateway:

See the Claude Code documentation for details.

Read more

Walter Korman Harpreet Arora Casey Gowrie Matt Lenhard
https://vercel.com/blog/stopping-the-slow-death-of-internal-tools Stopping the slow death of internal tools 2025-12-27T13:00:00.000Z

Companies spend millions of dollars in time and money trying to build internal tools. These range from lightweight automations and dashboards to fully custom systems with dedicated engineering teams.

Most businesses can’t justify focusing developers on bespoke operational tools, so non-technical teams resort to brittle and insecure workarounds: custom Salesforce formulas and fields, complex workflow automations, spreadsheets, and spiderwebs of integrations across platforms. They are trying to build software without actually building software, and most of the tools end up collecting dust.

v0’s AI agent changes this equation. Business users can build and publish real code and apps on the same platform that their developers use, safely integrate with internal and external systems, and secure everything behind existing SSO authentication.

Read more

Zeb Hermann Eric Dodds
https://vercel.com/blog/pixel-portraits-ai-generated-trading-cards Pixel Portraits: AI generated trading cards 2025-12-23T13:00:00.000Z

At our recent Next.js Conf and Ship AI events, we introduced an activation that blended technical experimentation with playful nostalgia. The idea started long before anyone stepped into the venue. As part of the online registration experience for both events, attendees could prompt and generate their own trading cards, giving them an early taste of the format and creating the foundation for what we wanted to bring into the real world.

Read more

Daniel Linthwaite
https://vercel.com/blog/we-removed-80-percent-of-our-agents-tools We removed 80% of our agent’s tools 2025-12-22T13:00:00.000Z

It got better.

We spent months building a sophisticated internal text-to-SQL agent, d0, with specialized tools, heavy prompt engineering, and careful context management. It worked… kind of. But it was fragile, slow, and required constant maintenance.

So we tried something different. We deleted most of it and stripped the agent down to a single tool: execute arbitrary bash commands. We call this a file system agent. Claude gets direct access to your files and figures things out using grep, cat, and ls.

The agent got simpler and better at the same time. 100% success rate instead of 80%. Fewer steps, fewer tokens, faster responses. All by doing less.

Read more

Andrew Qu
https://vercel.com/blog/ai-sdk-6 AI SDK 6 2025-12-22T13:00:00.000Z

With over 20 million monthly downloads and adoption by teams ranging from startups to Fortune 500 companies, the AI SDK is the leading TypeScript toolkit for building AI applications. It provides a unified API, allowing you to integrate with any AI provider, and seamlessly integrates with Next.js, React, Svelte, Vue, and Node.js. The AI SDK enables you to build everything from chatbots to complex background agents.

Read more

Gregor Martynus Lars Grammel Aayush Kapoor Josh Singh Nico Albanese
https://vercel.com/changelog/minimax-m2-1-now-live-on-vercel-ai-gateway MiniMax M2.1 now live on Vercel AI Gateway 2025-12-22T13:00:00.000Z

You can now access MiniMax's latest model, M2.1, with Vercel's AI Gateway and no other provider accounts required.

MiniMax M2.1 is faster than its predecessor, M2, with clear improvements in coding use cases and complicated multi-step tasks with tool calls. M2.1 writes higher quality code, is better at following instructions for difficult tasks, and has a cleaner reasoning process. The model has breadth in addition to depth, with improved performance across multiple programming languages (Go, C++, JS, C#, TS, etc.) and across refactoring, feature additions, bug fixes, and code review.

To start building with MiniMax M2.1 via AI SDK, set the model to minimax/minimax-m2.1:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/glm-4-7-available-on-vercel-ai-gateway GLM-4.7 available on Vercel AI Gateway 2025-12-22T13:00:00.000Z

You can now access Z.ai's latest model, GLM-4.7, with Vercel's AI Gateway and no other provider accounts required.

GLM-4.7 comes with major improvements in coding, tool usage, and multi-step reasoning, especially on complex agentic tasks. The model also has a more natural tone for a better conversational experience and can produce a more refined aesthetic for front-end work.

To start building with GLM-4.7 via AI SDK, set the model to zai/glm-4.7:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/function-start-type-now-available-in-runtime-logs Function start type now available in Runtime Logs 2025-12-22T13:00:00.000Z

For any request involving a Vercel Function invocation, you can now view the function start type in the right-hand details panel of Runtime Logs.

A Function invocation can be Hot, Hot (prewarmed), or Cold. For Cold starts, we also display the cold start duration, for example: Cold (280ms).

Try it out or learn more about Runtime Logs.

Read more

Vincent Voyer Tom Lienard
https://vercel.com/blog/our-million-dollar-hacker-challenge-for-react2shell Our $1 million hacker challenge for React2Shell 2025-12-19T13:00:00.000Z

In the weeks following React2Shell's disclosure, our firewall blocked over 6 million exploit attempts targeting deployments running vulnerable versions of Next.js, with 2.3 million in a single 24-hour period at peak.

This was possible thanks to Seawall, the deep request inspection layer of the Vercel Web Application Firewall (WAF). We worked with 116 security researchers to find every WAF bypass they could, paying out over $1 million and shipping 20 unique updates to our WAF in 48 hours as new techniques were reported. The bypass techniques they discovered are now permanent additions to our firewall, protecting every deployment on the platform.

But WAF rules are only the first line of defense. We are now disclosing for the first time an additional defense-in-depth against RCE on the Vercel platform that operates directly on the compute layer. Data from this defense-in-depth allows us to state with high confidence that the WAF was extraordinarily effective against exploitation of React2Shell.

This post is about what we built to protect our customers and what it means for security on Vercel going forward.

Read more

Malte Ubl
https://vercel.com/changelog/vercel-ts Introducing vercel.ts: Programmatic project configuration 2025-12-19T13:00:00.000Z

Vercel now supports vercel.ts, a new TypeScript-based configuration file that brings type safety, dynamic logic, and better developer experience to project configuration.

vercel.ts lets you express configuration as code by defining advanced routing, request transforms, caching rules, and cron jobs, going beyond what static JSON can express. In addition to full type safety, this also allows access to environment variables, shared logic, and conditional behavior.

All projects can now use vercel.ts (or .js, .mjs, .cjs, .mts) for project configuration. Properties are defined identically to vercel.json and can be enhanced using the new @vercel/config package.

Try the playground to explore vercel.ts, learn how to migrate from an existing vercel.json, or read the documentation and the @vercel/config package.

Read more

Pranav Karthik Matthew Stanciu Mark Knichel
https://vercel.com/changelog/chat-with-vercel-marketplace-integrations-using-vercel-agent Chat with Vercel Marketplace integrations using Vercel Agent 2025-12-18T13:00:00.000Z

You can now interact with installed Marketplace integrations using Vercel Agent in the Dashboard. This feature launches with support from Marketplace providers including Neon, Supabase, Dash0, Stripe, Prisma and Mux, with more coming soon.

You can use Vercel Agent, a chat-based interface, to talk to Marketplace providers' MCP (Model Context Protocol) servers, allowing you to query, debug, and manage connected services directly from Vercel. Tools exposed by providers are available automatically, with authentication and configuration handled by Vercel.

Available free for Vercel Pro and Enterprise customers, with an optional Read-Only mode for safe exploration and debugging.

How to get started

  • Install or visit a supported Marketplace integration

  • Click on Agent Tools in the left navigation to open the chat interface.

  • Your installed integration's tools load automatically and are ready to use.

Learn more and get started in the documentation.

Read more

Tony Pan Dima Voytenko Hedi Zandi Justin Kropp Ismael Rumzan
https://vercel.com/changelog/reduced-prices-for-tlds-site-space-website-fun-online-store-tech Reduced prices for TLDs .site, .space, .website, .fun, .online, .store, .tech 2025-12-18T13:00:00.000Z

Vercel Domains now offers reduced prices for the following TLDs:

  • .site: Now $1.99, down from $2.99

  • .space: Now $1.99, down from $4.99

  • .website: Now $1.99, down from $4.99

  • .fun: Now $1.99, down from $4.99

  • .online: Now $1.99, down from $2.99

  • .store: Now $1.99, down from $2.99

  • .tech: Now $7.99, down from $13.99

Prices for premium domains are not affected by this pricing change.

Get your domain today at vercel.com/domains.

Read more

Elliot Dauber Ethan Niser Rhys Sullivan Mark Glagola
https://vercel.com/changelog/bulk-redirects-ui-api-and-cli-now-generally-available Bulk redirects UI, API, and CLI now generally available 2025-12-18T13:00:00.000Z

Vercel users can now configure bulk redirects using the UI, API, or CLI, without a new deployment.

Vercel's bulk redirects allow up to one million static URL redirects per project. In addition to bulk redirects support via vercel.json, these new changes simplify how teams can manage large-scale migrations, quickly fix broken links, handle expired pages, and more.

You can modify redirects individually, or in bulk by uploading CSV files. Redirect changes are staged for testing before being published to production, and version history lets you view and restore previous versions.

This feature is available for Pro and Enterprise customers, with rates for additional capacity:

  • Pro: 1,000 bulk redirects included per project

  • Enterprise: 10,000 bulk redirects included per project

  • Additional capacity: starts at $50/month per 25,000 redirects

Get started with bulk redirects, or learn more.

Read more

Ben Roberts Mark Knichel Andrew Gadzik Tim Caswell Matthew Stanciu Sudais Moorad
https://vercel.com/changelog/preview-urls-optimized-for-multi-tenant-platforms Preview URLs optimized for multi-tenant platforms 2025-12-17T13:00:00.000Z

Vercel helps you create multi-tenant platforms, where a single project can be backed by tens of thousands of domains, like vibe coding platforms, website builders, e-commerce storefronts and more. We're making it even easier to build those styles of apps today by introducing dynamic URL prefixes.

Dynamic URL prefixes allow you to prefix your existing deployment URLs with {data}---, for example tenant-123---project-name-git-branch.yourdomain.dev.

Vercel routes the traffic to project-name-git-branch.yourdomain.dev while keeping tenant-123--- in the URL, which your app can extract and use for routing.

Previously, preview URLs were designed to match a specific preview deployment exactly, so Vercel didn't have enough information to route these dynamic domains to the right deployment.

Now you can:

  • Create unique preview URLs for each tenant

  • Encode metadata, routing context, or automation signals directly in the URL

  • Use flexible URL structures such as: tenant-123---project-name-git-branch.yourdomain.dev
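The prefix extraction your app performs can be sketched as a small helper (a hypothetical function, not a Vercel API; the `---` delimiter comes from this changelog entry):

```typescript
// Extract the dynamic prefix (e.g. "tenant-123") from a hostname like
// "tenant-123---project-name-git-branch.yourdomain.dev".
// Returns null when no prefix is present.
export function parseTenantPrefix(hostname: string): string | null {
  const separator = '---';
  const index = hostname.indexOf(separator);
  return index > 0 ? hostname.slice(0, index) : null;
}
```

Your middleware or server code could then use the returned prefix to look up tenant configuration before rendering.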

Preview URLs for multi-tenant platforms are available for Pro and Enterprise teams, and require a Preview Deployment Suffix (a Pro add-on).

Try the demo, or to get started, go to your team's settings to set your Preview Deployment Suffix. Then follow our guide on configuring multi-tenant preview URLs.

Read more

Rhys Sullivan Kim Neuwirth
https://vercel.com/changelog/aws-databases-now-available-on-the-vercel-marketplace AWS databases now available on the Vercel Marketplace 2025-12-17T13:00:00.000Z

Today we’re introducing native support for AWS databases including Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB on the Vercel Marketplace.

This gives developers a direct path to provision and manage scalable, production-ready AWS databases from within the Vercel dashboard with no manual setup required, and:

  • One-click support for creating a new AWS account, provisioning new AWS databases and linking them to your Vercel projects.

  • Improved developer experience with simplified region selection, secure credential handling, and unified monitoring of AWS database resources from Vercel.

  • Automatic environment variables for connection strings and credentials, securely stored within your Vercel project.

  • Free starter plan for new AWS customers, including $100 in credits, with deep links to manage or upgrade plans in the AWS console.

  • And coming soon: Provision databases into your existing AWS account, attach them to your projects, and access AWS databases directly inside v0.

Getting started

  1. Navigate to the Vercel Marketplace and select AWS

  2. Choose Create new account to provision a database

  3. Select your database type, region, and plan (including a free starter plan with $100 in credits for new AWS customers) and hit create

  4. Connect it to your project. Vercel automatically handles credentials and configuration

You can also try a working example by deploying the Movie Fetching Database template to see the integration end-to-end.

Read more

Michael Toth Dima Voytenko Marc Greenstock Hedi Zandi Marc Brakken Yasoob Rasheed
https://vercel.com/changelog/gemini-3-flash-is-now-available-on-the-vercel-ai-gateway Gemini 3 Flash is now available on the Vercel AI Gateway 2025-12-17T13:00:00.000Z

You can now access Google's latest Gemini model, Gemini 3 Flash, with Vercel's AI Gateway and no other provider accounts required.

It is Google's most intelligent model optimized for speed, pairing Gemini 3 Pro-grade reasoning with Flash-level latency, efficiency, and cost. Gemini 3 Flash significantly outperforms the previous Gemini 2.5 models, beating Gemini 2.5 Pro across most benchmarks while using 30% fewer tokens, running 3x faster, and costing a fraction as much.

To use Gemini 3 Flash with the AI SDK, set the model to google/gemini-3-flash:

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Matt Lenhard Jerilyn Zheng
https://vercel.com/blog/cline-on-ai-gateway Cline now runs on Vercel AI Gateway 2025-12-16T13:00:00.000Z

Cline, the leading open-source coding agent built for developers and teams, now runs on the Vercel AI Gateway.

With more than 1 million developers and 4 million installations, Cline brings an AI coding partner directly into the development environment, grounded in the values of openness and transparency.

To support that mission at scale, the team needed infrastructure that matched those principles: fast, reliable, and built on open standards.

Read more

Harpreet Arora Dan Fein
https://vercel.com/changelog/vercel-knowledge-base Vercel Knowledge Base 2025-12-16T13:00:00.000Z

Vercel Knowledge Base is a new home for guides, tutorials, and best practices for developers building on Vercel.

You can use the Knowledge Base to find and explore guides for specific use cases with:

  • Semantic AI search: describe what you're trying to achieve

  • AI chat: ask our agent about a guide

  • Filters: search guides by Vercel product or feature

Read more

Delba de Oliveira Ismael Rumzan Skully Paoli
https://vercel.com/changelog/export-observability-query-results-to-csv-or-json Export Observability query results to CSV or JSON 2025-12-16T13:00:00.000Z

You can now export the results from your Observability queries as CSV or JSON files. This allows you to analyze, share, and process your Vercel observability data outside of the Vercel dashboard.

Click the download icon on any query to export your query results instantly.

This feature is available for all teams with Observability Plus.

Try it out or learn more about Query.

Read more

Timo Lins Damien Simonin Feugas
https://vercel.com/blog/how-to-prompt-v0 How to prompt v0 2025-12-15T13:00:00.000Z

Working with v0 is like working with a highly skilled teammate who can build anything you need. v0 is more than just a tool, it’s your building partner. And like with any great collaborator, the quality of what you get depends on how clearly you communicate.

Read more

Esteban Suárez
https://vercel.com/blog/build-smarter-workflows-with-notion-and-v0 Build smarter workflows with Notion and v0 2025-12-15T13:00:00.000Z

Notion has become the trusted, connected workspace for teams. It's where your PRDs, specs, and project context live. v0 helps those teams turn ideas into dashboards, apps, and prototypes. Today, those workflows connect.

You can now securely connect v0 to your Notion workspace, so everything it builds is grounded in your existing docs and databases.

Wherever your team's knowledge lives in Notion, v0 can now build on top of it.

Read more

Caroline Ciaramitaro
https://vercel.com/changelog/split-web-analytics-data-by-any-dimension Split Web Analytics data by any dimension 2025-12-12T13:00:00.000Z

Web Analytics now allows you to split data across any dimension.

You can now break down your Web Analytics data across any dimension, not just Flags and Flag Values. This update expands support to 11 dimensions, which are:

  • Paths

  • Routes

  • Host names

  • Countries

  • Devices

  • OS

  • Referrers

  • Flags

  • Flag values

  • Event names

  • Event properties

With dimension splits and filters, you can dig deeper into user activity and better understand how different segments are using your application.

This feature is available to all Vercel users with the Web Analytics package installed. Web Analytics Plus subscribers also gain the enhanced capability of splitting data by UTM parameters.

Try it out or learn more about Web Analytics.

Read more

Damien Simonin Feugas Timo Lins
https://vercel.com/changelog/add-cache-tags-from-function-responses-regardless-of-framework Add cache tags from Function responses, regardless of framework 2025-12-12T13:00:00.000Z

You can now add one or more cache tags to your Function response by importing the addCacheTag function from @vercel/functions npm package.

import { addCacheTag } from '@vercel/functions'

Once the cached response has a tag associated with it, you can later invalidate the cache in one of several ways:

Available on all plans and all frameworks.

Learn more about cache invalidation.

Read more

Steven Salat Shraddha Agarwal Kelly Davis
https://vercel.com/changelog/push-notifications-support-on-desktop-and-mobile Push notifications support on desktop and mobile 2025-12-12T13:00:00.000Z

Push notifications are now available on both desktop and mobile, with support for all notification types.

To start receiving push notifications from Vercel:

  • Go to Notification Settings in the Vercel dashboard

  • Enable the push notification channel for any notification type

To allow mobile notifications on your phone:

  • Open the Vercel Dashboard in your mobile browser

  • Opt in to push notifications when prompted

Try it out or learn more about notifications.

Read more

Michael Wenzel Christopher Skillicorn
https://vercel.com/changelog/referer-now-available-in-runtime-logs Referer now available in runtime logs 2025-12-12T13:00:00.000Z

For any request displayed in runtime logs in the Vercel dashboard, you can now view the referer (if any) in the right-hand details panel.

This allows you to understand the source of that request and more easily debug issues.

Try it out or learn more about Runtime Logs.

Read more

Vincent Voyer
https://vercel.com/changelog/react-server-components-security-update-dos-and-source-code-exposure React Server Components security update: DoS and Source Code Exposure 2025-12-11T13:00:00.000Z

See the Security Bulletin for the latest updates.

Summary

Two additional vulnerabilities in React Server Components have been identified: a high-severity Denial of Service (CVE-2025-55184) and a medium-severity Source Code Exposure (CVE-2025-55183). These issues were discovered while security researchers examined the patches for the original React2Shell vulnerability. The initial fix was incomplete and did not fully prevent denial-of-service attacks for all payload types, resulting in CVE-2025-67779.

Importantly, none of these new issues allow for Remote Code Execution.

We created new rules to address these vulnerabilities and deployed them to the Vercel WAF to automatically protect all projects hosted on Vercel at no cost. However, do not rely on the WAF for full protection. Immediate upgrades to a patched version are required.

Impact

Denial of Service (CVE-2025-55184)

A malicious HTTP request can be crafted and sent to any App Router endpoint that, when deserialized, can cause the server process to hang and consume CPU.

Source Code Exposure (CVE-2025-55183)

A malicious HTTP request can be crafted and sent to any App Router endpoint that can return the compiled source code of Server Actions. This could reveal business logic, but would not expose secrets unless they were hardcoded directly into the Server Action's code.

These vulnerabilities are present in versions 19.0.0, 19.0.1, 19.1.0, 19.1.1, 19.1.2, 19.2.0, and 19.2.1 of the following packages:

  • react-server-dom-parcel

  • react-server-dom-webpack

  • react-server-dom-turbopack

These packages are included in the following frameworks and bundlers:

  • Next.js: 13.x, 14.x, 15.x, and 16.x.

  • Other frameworks and plugins that embed or depend on React Server Components implementation (e.g., Vite, Parcel, React Router, RedwoodSDK, Waku)

Resolution

After creating mitigations to address these vulnerabilities, we deployed them across our globally-distributed platform to protect our customers. We still recommend upgrading to the latest patched version.

Updated releases of React and affected downstream frameworks include fixes to prevent these issues. All users should upgrade to a patched version as soon as possible.

Fixed in

  • React: 19.0.2, 19.1.3, 19.2.2.

  • Next.js: 14.2.35, 15.0.7, 15.1.11, 15.2.8, 15.3.8, 15.4.10, 15.5.9, 15.6.0-canary.60, 16.0.10, 16.1.0-canary.19.

Frameworks and bundlers using the aforementioned packages should install the latest versions provided by their respective maintainers.

Credit

Thanks to RyotaK from GMO Flatt Security Inc. and Andrew MacPherson for identifying and responsibly reporting these vulnerabilities, and the Meta Security and React teams for their partnership.

Read more

Liz Hurder
https://vercel.com/changelog/gpt-5-2-models-now-available-on-vercel-ai-gateway GPT 5.2 models now available on Vercel AI Gateway 2025-12-11T13:00:00.000Z

You can now access OpenAI's latest GPT-5.2 models with Vercel's AI Gateway and no other provider accounts required.

These models perform better than the GPT-5.1 model series, with noted improvements in professional knowledge work, coding, and long-context reasoning. Other highlights include fewer hallucinations, more accurate vision to interpret graphs and visualizations, strong complex front-end work capabilities, and better information retention working with long documents.

There are 3 models available on AI Gateway:

  • GPT-5.2 Chat (openai/gpt-5.2-chat) is the model used in ChatGPT, best suited for everyday work and learning.

  • GPT-5.2 (openai/gpt-5.2) is for deeper work and complex tasks involving coding or long documents.

  • GPT-5.2 Pro (openai/gpt-5.2-pro) is best suited for the most difficult questions and tasks with large amounts of reasoning.

To use the GPT-5.2 models with the AI SDK, set the model to the respective model slug (noted above):

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Rohan Taneja Matt Lenhard Jerilyn Zheng
https://vercel.com/blog/vercel-launches-partner-certification Vercel launches partner certification 2025-12-10T13:00:00.000Z

We're proud to introduce the inaugural cohort of Vercel Certified Solution Partners. These eleven industry-leading teams share our commitment to create a faster, more accessible, and more innovative web.

This program is designed not only to validate partner expertise, but also to help customers confidently choose teams who understand their needs, technical requirements, and the experiences they aim to deliver.

Through partner certification, customers are matched with teams proven to deliver exceptional outcomes with Next.js and Vercel, from ambitious redesigns and complex enterprise migrations to new product development.

Read more

Joey Malysz Alex Hawley
https://vercel.com/changelog/node-js-24-lts-is-now-available-on-sandbox Node.js 24 LTS is now available on Sandbox 2025-12-10T13:00:00.000Z

Vercel Sandbox now supports Node.js version 24.

To run a Sandbox with Node.js 24, upgrade @vercel/sandbox to version 1.1.0 or above and set the runtime property to node24:

Read our Sandbox documentation to learn more.

Read more

Andy Waller
https://vercel.com/blog/inside-workflow-devkit-how-framework-integrations-work Inside Workflow DevKit: How framework integrations work 2025-12-09T13:00:00.000Z

When we announced the Workflow Development Kit (WDK) at Ship AI just over a month ago, we wanted it to reflect our Open SDK Strategy, allowing developers to build with any framework and deploy to any platform.

At launch, WDK supported Next.js and Nitro. Today it works with eight frameworks, including SvelteKit, Astro, Express, and Hono, with TanStack Start and React Router in active development. This post explains the pattern behind those integrations and how they work under the hood.

Read more

Adrian Lam
https://vercel.com/changelog/fastapi-lifespan-events-are-now-supported-on-vercel FastAPI Lifespan Events are now supported on Vercel 2025-12-09T13:00:00.000Z

Vercel now supports lifespan events for FastAPI apps. This allows you to define logic that can execute on startup and graceful shutdown—such as managing database connections or flushing external logs.

Deploy FastAPI on Vercel or visit the FastAPI on Vercel documentation.

Read more

Ricardo Gonzalez Tom Lienard
https://vercel.com/changelog/unified-security-actions-dashboard Unified security actions dashboard 2025-12-08T13:00:00.000Z

Vercel now provides a unified dashboard that surfaces any security issues requiring action from your team. When a critical vulnerability or security-related task is detected, the dashboard automatically groups your affected projects and guides you through the steps needed to secure them.

This view appears as a banner whenever action is required, and can be accessed anytime through the dashboard search.

Most CVEs are handled automatically through WAF rules and other protections, but when user action is needed, they will appear here.

  • Automatic detection of security vulnerabilities that require user intervention - When the platform identifies a vulnerability or configuration that cannot be fully mitigated by Vercel’s autonomous protections, it’s surfaced here with clear instructions.

  • Project grouping based on required actions - Current categories include unpatched dependencies, manual fixes required, and unprotected preview deployments. Additional groups will appear over time as new protections and checks are added.

  • Automated remediation - When possible, Vercel Agent offers one-click automated upgrades and PRs.

  • Manual remediation - For cases requiring manual updates or where GitHub access isn’t available, we provide direct instructions such as: npx fix-react2shell-next

Stay secure with less effort

The unified dashboard helps teams act quickly during critical moments, consolidate required fixes in one place, and maintain a stronger security posture across all projects.

Explore the dashboard to view any required updates.

Read more

wits Allen Zhou Tom Dale Tom Knickman
https://vercel.com/changelog/automated-react2shell-vulnerability-patching-is-now-available Automated React2Shell vulnerability patching is now available 2025-12-08T13:00:00.000Z

Vercel Agent now detects vulnerable packages in your project, and automatically generates pull requests with fixes to upgrade them to patched versions.

Powered by Vercel's self-driving infrastructure, these auto-fix upgrades are available at no cost and help teams stay secure with minimal manual effort.

  • Automatic detection of vulnerable React, Next.js, and related RSC packages

  • Automatic PR creation

  • Full execution and verification of updates inside isolated Sandbox environments

  • Preview links generated with PR, to manually validate updates

About React2Shell

React2Shell (CVE-2025-55182) is a critical remote code execution vulnerability in React Server Components that affects React 19 and frameworks that use it, like Next.js. Specially crafted requests can trigger unintended code execution if your application is running a vulnerable version. Immediate upgrades are required for all projects using affected React and Next.js releases.

Get the latest updates on React2Shell or view the new dashboard here.

Read more

Allen Zhou wits Tom Dale
https://vercel.com/changelog/rust-runtime-now-in-public-beta-for-vercel-functions Rust runtime now in public beta for Vercel Functions 2025-12-08T13:00:00.000Z

Today, we are launching first-class support for the Rust runtime beta.

This native support, an evolution of the community Rust runtime, brings the full benefits of Vercel Functions, including Fluid compute (with HTTP response streaming and Active CPU pricing) and an increased environment variable limit, from 6KB to 64KB.

Rust deployments automatically integrate with Vercel's existing logging, observability, and monitoring systems.

To get started, create a Cargo.toml file and a handler function, then deploy.

Deploy to Vercel today with one of our starter templates Rust Hello World and Rust Axum, or read more in the Function docs.

Read more

Florentin Eckl
https://vercel.com/blog/resources-for-protecting-against-react2shell React2Shell Security Bulletin 2025-12-05T13:00:00.000Z

CVE-2025-55182 is a critical vulnerability in React that requires immediate action.

Next.js and other frameworks that use React are affected.

Read the bulletin and act now.

Read more

Talha Tariq Jimmy Lai
https://vercel.com/changelog/rewrites-and-redirects-now-available-in-runtime-logs Rewrites and redirects now available in runtime logs 2025-12-05T13:00:00.000Z

Vercel users can now view requests that trigger rewrites or redirects directly in runtime logs in the Vercel dashboard.

By default, these requests are filtered out on the Runtime Logs page. To view these requests on the Logs page, you can filter for Rewrites or Redirects in the Resource dropdown.

  • Rewrites: shows the destination of the rewrite

  • Redirects: shows the redirect status code and location

This feature is available to all users. Try it out or learn more about runtime logs.

Read more

Luc Leray Vincent Voyer Andrew Gadzik
https://vercel.com/changelog/new-deployments-of-vulnerable-next-js-applications-are-now-blocked-by New deployments of vulnerable Next.js applications are now blocked by default 2025-12-05T13:00:00.000Z

Any new deployment containing a version of Next.js that is vulnerable to CVE-2025-66478 will now automatically fail to deploy on Vercel.

We strongly recommend upgrading to a patched version regardless of your hosting provider. Learn more

This automatic protection can be disabled by setting the DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2025_66478=1 environment variable on your Vercel project. Learn more

Read more

Tom Knickman Luke Phillips-Sheard
https://vercel.com/changelog/introducing-platform-elements Introducing Platform Elements 2025-12-05T13:00:00.000Z

As part of the new Vercel for Platforms product, you can now use a set of prebuilt UI blocks and actions to add functionality directly to your application.

An all-new library of production-ready shadcn/ui components and actions helps you launch (and upgrade) quickly.

Blocks:

Actions:

You can install Platforms components with the Vercel Platforms CLI. For example:

Start building with Platform Elements using our Quickstart for Multi-Tenant or Multi-Project platforms.

Read more

Hayden Bleasel Rhys Sullivan Kim Neuwirth
https://vercel.com/changelog/introducing-vercel-for-platforms Introducing Vercel for Platforms 2025-12-05T13:00:00.000Z

You can now build platforms with the new Vercel for Platforms product announced today, making it easy to create and run customer projects on behalf of your users.

Two platform modes are available: Multi-Tenant and Multi-Project, allowing you to deploy with a single codebase or many, across any number of domains.

Multi-Tenant Platforms

Run a single codebase that serves many customers with:

  • Wildcard domains (*.yourapp.com) with automatic routing and SSL.

  • Custom domain support via SDK, including DNS verification and certificate management.

  • Routing Middleware for hostname parsing and customer resolution at the edge.

  • Single deployment model: deploy once, changes apply to all tenants.
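The wildcard routing above ultimately reduces to resolving a tenant from the request hostname in your Routing Middleware. A minimal sketch, assuming a root domain of yourapp.com (the function and domain names are illustrative, not part of the Vercel SDK):

```typescript
// Minimal hostname-based tenant resolution for a *.yourapp.com wildcard
// domain. Illustrative only; custom-domain lookups would happen elsewhere.
function resolveTenant(hostname: string, rootDomain = "yourapp.com"): string | null {
  const suffix = `.${rootDomain}`;
  if (!hostname.endsWith(suffix)) return null; // apex or custom domain
  const subdomain = hostname.slice(0, -suffix.length);
  // Reject nested subdomains and the bare "www" alias.
  if (!subdomain || subdomain.includes(".") || subdomain === "www") return null;
  return subdomain;
}
```

In middleware, the resolved tenant would then drive a rewrite to a tenant-scoped path or a customer lookup.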

Add custom domains to your app in seconds:

Multi-Project Platforms

Create a separate Vercel project per customer with:

  • Programmatic project creation with the Vercel SDK.

  • Isolation of builds, functions, environment variables, and settings per customer.

  • Support for different frameworks per project.

Deploy your customer's code into isolated projects in seconds:

Today we are also introducing Platform Elements, a new library to make building on platforms easier.

Start building with our Quickstart for Multi-Tenant or Multi-Project platform.

Read more

Hayden Bleasel Rhys Sullivan Kim Neuwirth
https://vercel.com/changelog/gpt-5-1-codex-max-now-available-on-vercel-ai-gateway GPT 5.1 Codex Max now available on Vercel AI Gateway 2025-12-05T13:00:00.000Z

You can now access OpenAI's latest Codex model, GPT-5.1 Codex Max, with Vercel's AI Gateway and no other provider accounts required.

GPT-5.1 Codex Max has been trained on real-world software engineering tasks and, through a process called compaction, can operate across multiple context windows. It is faster and more token efficient than previous Codex models, optimized for long-running coding tasks, and can maintain context and reasoning over long periods without needing to start new sessions.

To use GPT-5.1 Codex Max with the AI SDK, set the model to openai/gpt-5.1-codex-max.
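With the AI SDK, the slug above is all you need. As a self-contained illustration, the same model can also be reached through AI Gateway's OpenAI-compatible endpoint; the URL below follows the AI Gateway docs, but treat it as an assumption to verify against your setup:

```typescript
// Sketch of a chat completion request against AI Gateway's
// OpenAI-compatible endpoint (URL assumed per the AI Gateway docs).
const request = {
  model: "openai/gpt-5.1-codex-max",
  messages: [
    { role: "user", content: "Refactor this recursive function to be iterative." },
  ],
};

// fetchImpl is injectable so the function compiles and tests without
// network access; it defaults to the global fetch available in Node 18+.
async function callGateway(
  apiKey: string,
  fetchImpl: (url: string, init?: object) => Promise<{ json(): Promise<unknown> }> =
    (globalThis as any).fetch,
): Promise<unknown> {
  const res = await fetchImpl("https://ai-gateway.vercel.sh/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // your AI Gateway API key
      "Content-Type": "application/json",
    },
    body: JSON.stringify(request),
  });
  return res.json();
}
```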

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/domains-must-now-be-managed-at-the-team-level Domains must now be managed at the team level 2025-12-04T13:00:00.000Z

Managing domains at the account level is no longer supported. Domains must now be managed at the team level, which simplifies access control, collaboration, and unified billing.

Domains that are currently linked to accounts will continue to resolve, serve traffic, and renew as usual, but any changes will require moving the domain to a team.

When viewing an account-level domain, you'll now be prompted to select a destination team. Your domain, along with all project domains, DNS records, and aliases, will move to that team and continue to work after the move.

Read more

Rhys Sullivan
https://vercel.com/changelog/nova-2-lite-now-available-on-vercel-ai-gateway Nova 2 Lite now available on Vercel AI Gateway 2025-12-03T13:00:00.000Z

You can now access Amazon's latest model Nova 2 Lite via Vercel's AI Gateway with no other provider accounts required. Nova 2 Lite is a reasoning model for everyday workloads that can process text, images, and videos to generate text.

To use Nova 2 Lite, set model to amazon/nova-2-lite in the AI SDK. Extended thinking is disabled by default. To enable reasoning for this model, set maxReasoningEffort in the providerOptions. The reasoning content is redacted and displays as such, but users are still charged for these tokens.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs, view the AI Gateway model leaderboard, or use the model directly in our model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/new-npm-package-for-automatic-recovery-of-broken-streaming-markdown New npm package for automatic recovery of broken streaming markdown 2025-12-03T13:00:00.000Z

Remend is a new standalone package that brings intelligent incomplete Markdown handling to any application.

Previously part of Streamdown's Markdown termination logic, Remend is now a standalone library (npm i remend) you can use in any application.

Why it matters

AI models stream Markdown token-by-token, which often produces incomplete syntax that breaks rendering. For example:

  • Unclosed fences

  • Half-finished bold/italic markers

  • Unterminated links or lists

Without correction, these patterns fail to render, leak raw Markdown, or disrupt layout:

Remend automatically detects and completes unterminated Markdown blocks, ensuring clean, stable output during streaming.

As the stream continues and the actual closing markers arrive, the content seamlessly updates, giving users a polished experience even mid-stream.

It works with any Markdown renderer as a pre-processor. For example:
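To make the pre-processing idea concrete, here is a deliberately tiny stand-in that closes a dangling code fence or bold marker. It illustrates the shape of the problem Remend solves, not Remend's actual API or heuristics:

```typescript
// Toy pre-processor in the spirit of Remend (illustrative only): close
// an unterminated code fence or bold marker before handing the partial
// stream to a Markdown renderer.
function closeDangling(markdown: string): string {
  let out = markdown;
  const fenceCount = (out.match(/^```/gm) ?? []).length;
  if (fenceCount % 2 === 1) out += "\n```"; // unclosed code fence
  const boldCount = (out.match(/\*\*/g) ?? []).length;
  if (boldCount % 2 === 1) out += "**"; // unclosed bold marker
  return out;
}
```

Remend itself applies much more careful heuristics than this toy to avoid false positives.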

Remend powers the markdown rendering in Streamdown and has been battle-tested in production AI applications. It includes intelligent rules to avoid false positives and handles complex edge cases like:

  • Mathematical expressions with underscores in LaTeX blocks

  • Product codes and variable names with asterisks/underscores

  • List items with formatting markers

  • Nested brackets in links

To get started, either use it through Streamdown or install it standalone with:

Read more

Hayden Bleasel
https://vercel.com/changelog/cve-2025-55182 Summary of CVE-2025-55182 2025-12-03T13:00:00.000Z

See the React2Shell security bulletin for the latest updates.

Summary

A critical-severity vulnerability in React Server Components (CVE-2025-55182) affects React 19 and frameworks that use it, including Next.js (CVE-2025-66478). Under certain conditions, specially crafted requests could lead to unintended remote code execution.

We created new rules to address this vulnerability and quickly deployed them to the Vercel WAF to automatically protect all projects hosted on Vercel at no cost. However, do not rely on the WAF for full protection. Immediate upgrades to a patched version are required. We also worked with the React team to deliver recommendations to the largest WAF and CDN providers.

We still strongly recommend upgrading to a patched version regardless of your hosting provider.

Impact

Applications using affected versions of the React Server Components implementation may process untrusted input in a way that allows an attacker to perform remote code execution. The vulnerability is present in versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 of the following packages:

  • react-server-dom-parcel (19.0.0, 19.1.0, 19.1.1, and 19.2.0)

  • react-server-dom-webpack (19.0.0, 19.1.0, 19.1.1, and 19.2.0)

  • react-server-dom-turbopack (19.0.0, 19.1.0, 19.1.1, and 19.2.0)

These packages are included in the following frameworks and bundlers:

  • Next.js versions ≥14.3.0-canary.77, ≥15, and ≥16

  • Other frameworks and plugins that embed or depend on React Server Components implementation (e.g., Vite, Parcel, React Router, RedwoodSDK, Waku)

Resolution

After creating mitigations to address this vulnerability, we deployed them across our globally-distributed platform to quickly protect our customers. We still recommend upgrading to the latest patched version.

Updated releases of React and affected downstream frameworks include hardened handling of user inputs to prevent unintended behavior. All users should upgrade to a patched version as soon as possible. If you are on Next.js 14.3.0-canary.77 or a later canary release, downgrade to the latest stable 14.x release.

Fixed in:

  • React: 19.0.1, 19.1.2, 19.2.1

  • Next.js: 15.0.5, 15.1.9, 15.2.6, 15.3.6, 15.4.8, 15.5.7, 15.6.0-canary.58, 16.0.7

Frameworks and bundlers using the aforementioned packages should install the latest versions provided by their respective maintainers.
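As a quick audit aid, the vulnerable versions listed above can be checked mechanically. This snippet is an illustration based solely on the version list in this post, not an official tool:

```typescript
// Check a react-server-dom-* package version against the vulnerable
// releases named in this advisory (19.0.0, 19.1.0, 19.1.1, 19.2.0).
const VULNERABLE_VERSIONS = new Set(["19.0.0", "19.1.0", "19.1.1", "19.2.0"]);

function isVulnerable(version: string): boolean {
  return VULNERABLE_VERSIONS.has(version);
}
```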

Credit

Thanks to Lachlan Davidson for identifying and responsibly reporting the vulnerability, and the Meta Security and React team for their partnership.

References

Read more

Aaron Brown Jimmy Lai Luba Kravchenko Sage Abraham Andy Riancho
https://vercel.com/changelog/vercel-agent-installation Vercel Agent can now install Web Analytics and Speed insights for you 2025-12-03T13:00:00.000Z

Vercel Agent can now automatically update your codebase and submit a PR to add Web Analytics and Speed Insights to your project.

Vercel Agent analyzes your project configuration and connected GitHub repository, installs the relevant package, adds the relevant code snippet, and creates a pull request with the proposed changes.

To have Vercel Agent install Web Analytics or Speed Insights to your project:

  1. Go to the Analytics or Speed Insights page of the dashboard.

  2. Enable the feature.

  3. Click Implement to start Vercel Agent.

  4. Review the pull request and merge when ready.

Once the pull request is merged and deployed, tracking starts automatically.

Vercel Agent installations are now available in Public Beta for all teams. Try it out for Web Analytics or Speed Insights.

Read more

Allen Zhou Tom Dale Casey Gowrie
https://vercel.com/blog/bfcm-2025 Billions of requests: Black Friday-Cyber Monday 2025 2025-12-02T13:00:00.000Z

Every year, Black Friday and Cyber Monday reveal how people shop, browse, and discover products at global scale. For Vercel, the weekend doesn’t require a different operating mode. The platform behaves the same way it does every day, only with higher traffic volume.

A live dashboard showed the traffic as it played out.

This year, traffic reached more than 115.8 billion total requests, reflecting 33.6% year-over-year growth with consistent performance throughout the events.

The traffic shape told a familiar story. Requests dipped on Thanksgiving as people stepped away from screens, then surged on Black Friday, stayed elevated through the weekend, and built into a second wave on Cyber Monday.

These rhythms played out across every major geography, and the platform adapted continuously without configuration changes or manual intervention.

Below is a snapshot of what the weekend looked like.

Key metrics from the weekend

  • 115,836,126,847 total requests - Global traffic delivered with consistent performance.

  • 518,027 peak requests per second - Traffic delivered at peak demand

  • 6,120,247 deployments - New versions of applications shipped

  • 24,086,391 AI Gateway requests - AI routing kept responses fast across providers

  • 43,213,555,901 Fluid compute invocations - Dynamic workloads scaled automatically

  • 56,926,096,915 cache hits - Fast delivery directly from globally distributed regions

  • 1,809,912,897 ISR reads - Initial regional loads of refreshed content; cached responses are not counted

  • 1,517,476,504 ISR writes - Catalog, pricing, and content updates propagated instantly.

  • 7,507,223,309 firewall actions - Threats filtered before reaching applications

  • 415,683,895 bots blocked - Automated abuse stopped early

  • 2,408,122,336 humans verified - Legitimate shoppers passed security checks

  • Top regions: US, DE, GB, IN, BR, SG, JP - with high activity across all 20 global regions

Global scale and natural traffic rhythms

Of the 115 billion requests that flowed through Vercel from November 28 through December 1, the United States led activity with over 40.7 billion requests, followed by Germany, the United Kingdom, India, Brazil, Singapore, and Japan. Traffic moved across time zones throughout the weekend, with peaks in one region balanced by lower activity in others.

Vercel handled these shifts the same way it manages everyday production traffic, scaling the global network and compute layer to match real user behavior.

AI Gateway supported AI-native shopping at global scale

AI continued to shape how shoppers discovered products, searched catalogs, and received personalized help. More than 24 million AI requests passed through AI Gateway across the BFCM window. Retailers used AI for search, recommendations, guided browsing, and customer support. AI Gateway routed these queries across providers and regions, maintaining low latency and resilience even when demand fluctuated.

AI is now part of the normal shopping experience. Routing, failover, and provider coordination are essential when millions of customers depend on AI-powered workflows. AI Gateway delivered this consistency at global scale.

Fluid compute matched real traffic automatically

Fluid compute handled more than 43.2 billion function invocations across the weekend. Teams used it for personalization logic, cart behavior, content evaluation, and AI inference. Fluid adjusts instantly to incoming traffic: capacity increases the moment volume rises and scales back when demand settles, with automated pre-warming and no tuning or configuration required.

This elasticity is part of the platform’s normal operation and ensures applications stay responsive regardless of traffic shape.

Incremental Static Regeneration kept content fresh and fast

Product catalogs, pricing details, inventory indicators, and promotions changed continuously throughout the event. ISR processed more than 1.8 billion reads and 1.5 billion writes, refreshing content across the CDN without requiring redeploys or adding strain to backend systems.

Shoppers received accurate information at static performance speeds, even as merchandisers updated content minute by minute.

Security systems filtered billions of unwanted requests

Security activity increased alongside customer traffic. More than 7.5 billion firewall actions were taken across system rules and customer-defined WAF logic. These protections stopped invalid and malicious traffic at the edge, preserving compute capacity and ensuring stable performance.

Bot Management also operated at massive scale. More than 415 million bot attempts were identified and blocked, while more than 2.4 billion legitimate human interactions were verified through invisible checks by both Vercel BotID and system security defenses. Retailers protected checkout flows, inventory endpoints, and account systems without introducing friction.

Caching delivered fast performance worldwide

More than 56.9 billion requests were served directly from Vercel’s global cache. These cache hits reduced latency, decreased backend load, and ensured fast page delivery throughout the weekend. Caching works in tandem with ISR and Fluid compute, forming a multi-layer performance system where static assets stay fast, dynamic updates propagate instantly, and compute is reserved for real application logic.

A platform built for everyday traffic, even when every day is bigger

The 2025 traffic profile reinforces that large events do not require special preparation when the delivery layer is built to adapt to real user behavior. Cache hits, ISR updates, Fluid compute, AI routing, and firewall filtering worked together to absorb global demand. Development teams deployed more than 6.1 million times across the weekend, shipping updates continuously with the confidence provided by instant rollbacks and predictable reliability.

Looking ahead

As AI-driven experiences expand, as personalization deepens, and as global traffic patterns evolve, the infrastructure behind these applications must adapt at the speed of real users. Vercel’s software layer does this every day. Black Friday through Cyber Monday simply highlight the scale of normal operations.

If you are preparing for your next peak moment, explore how AI Gateway, Fluid compute, ISR, and Vercel’s security tools fit into your architecture. The teams who thrive during high-pressure events are the ones who adopt adaptive infrastructure before they need it.

Read more

Dan Fein
https://vercel.com/blog/investing-in-the-python-ecosystem Investing in the Python ecosystem 2025-12-02T13:00:00.000Z

The team behind Gel Data is joining Vercel to help us invest in the Python ecosystem. Led by Python core developer Yury Selivanov and contributor Elvis Pranskevichus, they will bring world-class support for Python on the AI Cloud.

Read more

Lindsey Simon Mike Curtis
https://vercel.com/changelog/deploy-steps-are-now-up-to-21-faster Deploy steps are now up to 21% faster 2025-12-02T13:00:00.000Z

On average, the deploy step is now 17% faster, reducing total time to go live by 1.67 seconds. Projects with a large number of functions will see even greater improvements, with up to 2.8 seconds saved on average.

During the deploy step, Vercel uploads static assets, provisions and uploads resources like Vercel Functions, processes routing metadata, and prepares the deployment to receive traffic. This phase is now faster due to reduced idle time and increased concurrency across these operations.

Check out the documentation to learn more about builds.

Read more

Andrew Healey
https://vercel.com/changelog/mistral-large-3-now-available-on-vercel-ai-gateway Mistral Large 3 now available on Vercel AI Gateway 2025-12-02T13:00:00.000Z

You can now access Mistral's latest model Mistral Large 3 via Vercel's AI Gateway with no other provider accounts required. Mistral Large 3 is Mistral's most capable model to date. It has a sparse mixture-of-experts architecture with 41B active parameters (675B total), and is Mistral’s first mixture-of-experts model since the Mixtral series.

To use Mistral Large 3, set model to mistral/mistral-large-3 in the AI SDK.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs, view the AI Gateway model leaderboard, or use the model directly in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/blog/aws-databases-coming-to-the-vercel-marketplace AWS Databases coming to the Vercel Marketplace 2025-12-01T13:00:00.000Z

We are expanding our partnership with AWS to make it faster for developers to build and scale with AWS infrastructure. On December 15th, Aurora PostgreSQL, Amazon DynamoDB, and Aurora DSQL will be available as native integrations in the Vercel Marketplace.

These integrations bring the power and scalability of AWS databases directly into your Vercel workflow, so you can focus on shipping products, agents, and websites instead of configuring infrastructure.

Read more

Tom Occhino Hedi Zandi
https://vercel.com/changelog/trinity-mini-model-now-available-in-vercel-ai-gateway Trinity Mini model now available in Vercel AI Gateway 2025-12-01T13:00:00.000Z

You can now access Arcee AI's latest model Trinity Mini via Vercel's AI Gateway with no other provider accounts required. Trinity Mini is an open weight MoE reasoning model with 26B parameters (3B active) trained end-to-end in the U.S.

To use Trinity Mini, set model to arcee-ai/trinity-mini in the AI SDK.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs, view the AI Gateway model leaderboard, or use the model directly in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/deepseek-v3-2-now-in-vercel-ai-gateway DeepSeek V3.2 models now available in Vercel AI Gateway 2025-12-01T13:00:00.000Z

You can now access DeepSeek's latest models, DeepSeek V3.2 and DeepSeek V3.2 Speciale, via Vercel's AI Gateway with no other provider accounts required.

DeepSeek V3.2 supports combined thinking and tool use, handling agent-style operations (tool calls) in both reasoning and non-reasoning modes. DeepSeek V3.2 Speciale is optimized for maximal reasoning performance, and is suited for complex task use cases but requires higher token usage and does not support tool use.

To use the DeepSeek V3.2 models, set model to the following in the AI SDK:

  • Non-thinking: deepseek/deepseek-v3.2

  • Thinking: deepseek/deepseek-v3.2-thinking

  • Speciale: deepseek/deepseek-v3.2-speciale

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs, view the AI Gateway model leaderboard, or use DeepSeek V3.2 models directly in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/runtime-logs-now-appear-in-the-dashboard-6x-faster Runtime logs now appear in the dashboard 6x faster 2025-12-01T13:00:00.000Z

We've improved our logs infrastructure resulting in significantly better performance when interacting with logs on Vercel:

  • Logs appear up to 6× faster in the dashboard, with 90% of entries visible within 5 seconds of execution. These improvements make Live Mode a more responsive experience.

  • Filtering and querying Runtime Logs is now up to 30% faster, with 80% of filter counts now resolving in under 1 second, to find what you need quicker.

Learn more about Runtime Logs.

Read more

Luc Leray Vincent Voyer Tobias Lins
https://vercel.com/changelog/image-only-models-available-in-vercel-ai-gateway Image-only models available in Vercel AI Gateway 2025-11-28T13:00:00.000Z

You can now access image-only models via Vercel's AI Gateway with no other provider accounts required. In addition to the multimodal models with image generation capabilities already available in AI Gateway (e.g., GPT-5.1, Nano Banana Pro), these models are exclusively for image generation.

Black Forest Labs:

  • FLUX.2 Flex: bfl/flux-2-flex

  • FLUX.2 Pro: bfl/flux-2-pro

  • FLUX.1 Kontext Max: bfl/flux-kontext-max

  • FLUX.1 Kontext Pro: bfl/flux-kontext-pro

  • FLUX 1.1 Pro Ultra: bfl/flux-pro-1.1-ultra

  • FLUX 1.1 Pro: bfl/flux-pro-1.1

  • FLUX.1 Fill Pro: bfl/flux-pro-1.0-fill

Google:

  • Imagen 4.0 Generate 001: google/imagen-4.0-generate

  • Imagen 4.0 Fast Generate 001: google/imagen-4.0-fast-generate

  • Imagen 4.0 Ultra Generate 001: google/imagen-4.0-ultra-generate

To use these models, set model to the corresponding slug from above in the AI SDK. These models support generateImage.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs on image generation, view the AI Gateway model leaderboard, or try these models directly in the model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/changelog/sign-in-with-vercel Sign in with Vercel now generally available 2025-11-26T13:00:00.000Z

Sign in with Vercel is now generally available, enabling developers to add Vercel as a sign-in method to their apps and projects.

You can create an app, configure its scopes, and start setting up sign-in directly from the Vercel dashboard, without having to manage users or sign-in methods yourself.

Built on OAuth and OpenID, Sign in with Vercel allows you to:

  • Sign users in to your apps and projects with their existing Vercel account.

  • Fetch the user’s info, such as their name, email, and avatar.

  • Receive ID tokens, access tokens, and refresh tokens for interacting with Vercel.

Take a look at our example app to get started.

Read more

Ana Jovanova Balázs Orbán Christopher Skillicorn Javier Bórquez Enric Pallerols Mark Roberts
https://vercel.com/changelog/fast-you-talk-and-to-tlds-now-available-on-vercel-domains .fast, .you, .talk, and .to TLDs now available on Vercel Domains 2025-11-25T13:00:00.000Z

Vercel Domains now supports the TLDs .fast, .you, .talk, and .to.

Domains with these TLDs can now be purchased at vercel.com/domains, and you can also now transfer domains with these TLDs onto the Vercel platform for easy integration and use with projects and deployments.

Try it here.

Read more

Elliot Dauber Maggie Valentine Rhys Sullivan Ethan Niser Mark Glagola
https://vercel.com/changelog/node-js-24-lts-is-now-generally-available-for-builds-and-functions Node.js 24 LTS is now generally available for builds and functions 2025-11-25T13:00:00.000Z

Node.js version 24 is now available as a runtime for builds and functions using Node.

To use version 24, go to Project Settings -> Build and Deployment -> Node.js Version and select 24.x. This is also the default version for new projects.

This new version's highlights:

  • V8 Engine Upgrade: Node.js 24 ships with the V8 JavaScript engine version 13.6, bringing performance enhancements and new JavaScript features such as Float16Array and Error.isError

  • Global URLPattern API: Simpler URL routing and matching without the need for external libraries or complex regular expressions

  • Undici v7: The built-in fetch API benefits from faster HTTP performance, improved HTTP/2 & HTTP/3 support, and more efficient connection handling

  • npm v11: It comes with an updated version of npm, improving the compatibility with modern JavaScript packages
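The URLPattern highlight is easy to try. A small sketch, with a runtime guard so it also runs on Node versions that lack the global:

```typescript
// URLPattern is a global on Node.js 24; the guard keeps this runnable on
// older runtimes. The declare line is a type stub only — the real global
// is provided by Node.
declare const URLPattern: any;

const hasURLPattern = typeof URLPattern !== "undefined";
let userId: string | null = null;

if (hasURLPattern) {
  // Match a path and extract the :id segment without a router or regex.
  const pattern = new URLPattern({ pathname: "/users/:id" });
  const match = pattern.exec("https://example.com/users/42");
  userId = match?.pathname?.groups?.id ?? null;
}

console.log(hasURLPattern ? `id: ${userId}` : "URLPattern unavailable");
```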

The current version used is 24.11.0 and will be automatically updated, with only the major version (24.x) being guaranteed.

Read our Node.js runtime documentation to learn more.

Read more

Felix Haus Marcos Grappeggia
https://vercel.com/changelog/flux-2-pro-image-model-is-now-available-on-vercel-ai-gateway FLUX.2 Pro image model is now available on Vercel AI Gateway 2025-11-25T13:00:00.000Z

You can now access the newest image model FLUX.2 Pro from Black Forest Labs via Vercel's AI Gateway with no other provider accounts required.

FLUX.2 Pro is a newly trained base model designed for advanced visual intelligence, offering higher-resolution outputs (up to 4MP), improved knowledge of the real world, and precise control over lighting and spatial composition. It introduces multi-reference input, enhanced character and product consistency, exact color matching, and expanded control options compared to the FLUX.1 models.

FLUX.2 Pro differs fundamentally from the other models with image generation capability currently available in AI Gateway: it is a pure image-focused rectified-flow transformer, in contrast with the multimodal LLMs already integrated. To use this model, set model to bfl/flux-2-pro in the AI SDK. This model supports generateImage.

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs for more detailed examples on how to use FLUX.2 Pro with the AI SDK and OpenAI-compatible API, view the AI Gateway model leaderboard, or try these models directly in the model playground.

Read more

Walter Korman Rohan Taneja Jerilyn Zheng
https://vercel.com/blog/how-we-built-the-v0-ios-app How we built the v0 iOS app 2025-11-24T13:00:00.000Z

We recently released v0 for iOS, Vercel’s first mobile app. As a company focused on the web, building a native app was new territory for us.

Our goal was to build an app worthy of an Apple Design Award, and we were open-minded on the best tech stack to get there. To that end, we built dozens of iterations of the product prior to our public beta. We experimented with drastically different tech stacks and UI patterns.

We took inspiration from apps which speak the iPhone’s language, such as Apple Notes and iMessage. v0 had to earn a spot on your Home Screen among the greats.

After weeks of experimentation, we landed on React Native with Expo to achieve this. We are pleased with the results, and our customers are too. In fact, the influx of messages from developers asking how the app feels so native compelled us to write a technical breakdown of how we did it.

Table of contents

How we built the v0 chat experience

When you’re away from your computer, you might have a quick idea you want to act on. Our goal was to let you turn that idea into something tangible, without requiring context switching. v0 for iOS is the next generation of your Notes app, where your ideas get built in the background.

We did not set out to build a mobile IDE with feature parity with our website. Instead, we wanted to build a simple, delightful experience for using AI to make things on the go. The centerpiece of that experience is the chat.

To build a great chat, we set the following requirements:

  • New messages animate in smoothly

  • New user messages scroll to the top of the screen

  • Assistant messages fade in with a staggered transition as they stream

  • The composer uses Liquid Glass and floats on top of scrollable content

  • Opening existing chats starts scrolled to the end

  • Keyboard handling feels natural

  • The text input lets you paste images and files

  • The text input supports pan gestures to focus and blur it

  • Markdown is fast and supports dynamic components

While a number of UI patterns have emerged for AI chat in mobile apps, there is no equivalent set of patterns for AI code generation on mobile.

We hadn’t seen these features in existing React Native apps, so we found ourselves inventing patterns on the fly. It took an extraordinary amount of work, testing, and coordination across each feature to make it meet our standards.

Building a composable chat

To meet our requirements, we structured our chat code to be composable on a per-feature basis.

Our chat is powered by a few open source libraries: LegendList, React Native Reanimated, and React Native Keyboard Controller. To start, we set up multiple context providers.

The provider wraps the MessagesList:

Next, our messages list implements these features as composable plugins, each with its own hook.

The following sections break down each hook to demonstrate how they work together.

Sending your first message

When you send a message on v0, the message bubble smoothly fades in and slides to the top. Immediately after the user message is done animating, the assistant messages fade in.

When the user sends a message, we set a Reanimated shared value to indicate the animation should begin. Shared values let us update state without triggering re-renders.

With our state tracked in Reanimated, we can now animate our UserMessage.

Notice that UserMessageContent is wrapped with an Animated.View which receives props from useFirstMessageAnimation.

How useFirstMessageAnimation works

This hook is responsible for 3 things:

  1. Measure the height of the user message with itemHeight, a Reanimated shared value

  2. Fade in the message when isMessageSendAnimating

  3. Signal to the assistant message that the animation is complete

Thanks to React Native’s New Architecture, ref.current.measure() in useLayoutEffect is synchronous, giving us height on the first render. Subsequent updates fire in onLayout.

Based on the message height, window height, and current keyboard height, getAnimatedValues constructs the easing, start, and end states for translateY and progress. The resulting shared values are passed to useAnimatedStyle as transform and opacity respectively.
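A simplified sketch of what such a helper might compute, under our own assumptions (names and formula are a reconstruction, not v0's actual implementation): the new user message travels from just above the keyboard to the top of the chat.

```typescript
// Hypothetical shape of a getAnimatedValues-style helper: compute the
// start and end translateY for the send animation. All names and formulas
// here are illustrative reconstructions, not v0's code.
interface SendAnimation {
  startTranslateY: number; // message starts just above the keyboard
  endTranslateY: number;   // message ends pinned to the top of the chat
  distance: number;        // total travel, e.g. to scale easing duration
}

function getAnimatedValues(
  messageHeight: number,
  windowHeight: number,
  keyboardHeight: number,
): SendAnimation {
  const visibleHeight = windowHeight - keyboardHeight;
  const startTranslateY = visibleHeight - messageHeight;
  return {
    startTranslateY,
    endTranslateY: 0,
    distance: startTranslateY,
  };
}
```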

And there we have it. Our first message fades in using Reanimated. Once it’s done animating, we’re ready to fade in the first assistant message response.

Fading in the first assistant message

Similar to UserMessage, the assistant message content is wrapped in an animated view that fades in after the user message animation completes.

This fade in behavior is only enabled for the first assistant message in the chat, where index === 1. Messages in existing chats will have different behavior than messages in new chats.

What happens if you open an existing chat that has one user message and one assistant message? Will it animate in again? No, because the animations here only apply if isMessageSendAnimating is true, which gets set onSubmit and cleared when you change chats.

Sending messages in an existing chat

We’ve covered how v0 handles animating in messages for new chats. For existing chats, however, the logic is entirely distinct. Rather than rely on Reanimated animations, such as the one in useFirstMessageAnimation, we rely on an implementation of scrollToEnd().

So all we need to do is scroll to end if we’re sending a message in an existing chat, right?

In a perfect world, this is all the logic we’d need. Let’s explore why it’s not enough.

If you recall from the introduction, one of our requirements is that new messages have to scroll to the top of the screen. If we simply call scrollToEnd(), then the new messages will show at the bottom of the screen.

We needed a strategy to push the user message to the top of the chat. We referred to this as “blank size”: the distance between the bottom of the last assistant message and the end of the chat.

To float the content to the top of the chat, we had to push it up by the amount equal to the blank size. Thanks to synchronous height measurements in React Native's New Architecture, this was possible to do on each frame without a flicker. But it still required a lot of trickery and coordination.

In the image above, you’ll notice that the blank size is dynamic. Its height depends on the keyboard’s open state. And it can change on every render, since the assistant message streams in quickly and with unpredictable sizes.

Dynamic heights are a common challenge in virtualized lists. The frequently-updating blank size took that challenge to a new level. Our list items have dynamic, unknown heights that update frequently, and we need them to float to the top.

For long enough assistant messages, the blank size could be zero, which introduced a new set of edge cases.

How we solved it

We tried many different approaches to implementing blank size: a View with an explicit height at the bottom of the ScrollView, bottom padding on the ScrollView itself, translateY on the scrollable content, and a minimum height on the last system message. All of them had strange side effects and poor performance, often because they forced extra layout passes through Yoga.

We ultimately landed on a solution that uses the contentInset property on ScrollView to handle the blank size without jitters. contentInset maps directly to the native property on UIScrollView in UIKit.

We then paired contentInset together with scrollToEnd({ offset }) when you send a message.

An assistant message’s blank size is determined by the combination of its own height, the height of the user message that comes before it, and the height of the chat container.
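That relationship can be sketched as a pure function. The names here are assumptions; the real hook works with Reanimated shared values rather than plain numbers:

```typescript
// Illustrative formula only. The blank size is whatever vertical space is
// left in the chat container after the user message and the assistant
// message below it, floored at zero (long responses leave no blank space).
function getBlankSize(
  containerHeight: number,
  userMessageHeight: number,
  assistantMessageHeight: number,
): number {
  const remaining =
    containerHeight - userMessageHeight - assistantMessageHeight;
  return Math.max(0, remaining);
}
```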

Implementing useMessageBlankSize

To implement blank size, we start with a hook called useMessageBlankSize in the assistant message:

useMessageBlankSize is responsible for the following logic:

  1. Synchronously measure the assistant message

  2. Measure the user message before it

  3. Calculate the minimum distance for the blank size below the assistant message

  4. Keep track of what the blank size should be when the keyboard is opened or closed

  5. Set the blankSize shared value at the root context provider

Lastly, we consume blankSize and pass it to the contentInset of our ScrollView:

useAnimatedProps from Reanimated lets us update props on the UI thread on each frame without triggering re-renders. The contentInset approach performed well and worked far better than every previous attempt.

Taming the keyboard

Building a good chat experience hinges on elegant keyboard handling. Achieving a native feel in this area was tedious and challenging with React Native. When v0 iOS was in public beta, Apple released iOS 26. Every time a new iOS beta version came out, our chat seemingly broke entirely. Each iOS release turned into a cat-and-mouse game of reproducing and fixing tiny discrepancies and jitters.

Luckily, Kiryl, the maintainer of react-native-keyboard-controller, helped us address these issues, often updating the library within 24 hours of Apple releasing a new beta.

Building useKeyboardAwareMessageList

We used many of the hooks provided by React Native Keyboard Controller to build our own keyboard management system tailored to v0’s chat.

useKeyboardAwareMessageList is our custom React hook responsible for all of our keyboard handling logic. We render it alongside our chat list, and it abstracts away everything we need to make the keyboard feel right.

While consuming it is a one-liner, its internals are about 1,000 lines of code with many unit tests. useKeyboardAwareMessageList primarily relies on the upstream useKeyboardHandler, handling events like onStart, onEnd, and onInteractive, together with a number of Reanimated useAnimatedReaction calls to retry events in particular edge cases.

useKeyboardAwareMessageList also handles a number of strange behaviors in iOS. For example, if you send an app to the background when the keyboard is open and then refocus the app, iOS will inexplicably fire the keyboard onEnd event three times. Because we relied on imperative behavior when events fired, we came up with tricks to dedupe repeat events and track app state changes.
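A minimal sketch of that deduping trick. The event shape and the 50ms window are assumptions for illustration:

```typescript
// Drop repeated keyboard events that arrive in quick succession, like the
// triple onEnd iOS fires after refocusing the app with the keyboard open.
type KeyboardEvent = { name: "onStart" | "onEnd"; height: number };

function createEventDeduper(windowMs = 50) {
  let lastKey: string | null = null;
  let lastTime = -Infinity;
  return (event: KeyboardEvent, now: number): boolean => {
    const key = `${event.name}:${event.height}`;
    // An identical event inside the window is treated as a duplicate.
    if (key === lastKey && now - lastTime < windowMs) return false;
    lastKey = key;
    lastTime = now;
    return true; // caller should handle this event
  };
}
```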

useKeyboardAwareMessageList implements the following features:

  1. Shrink the blankSize when the keyboard opens

  2. If you’re scrolled to the end of the chat, and there’s no blank size, shift content up when the keyboard opens

  3. If you have scrolled high up enough, and there’s no blank size, show the keyboard on top of the content, without shifting the content itself

  4. When the user interactively dismisses the keyboard via the scroll view or text input, drag it down smoothly

  5. If you’re scrolled to the end of the chat, and the blank size is bigger than the keyboard, the content should stay in place

  6. If you’re scrolled to the end of the chat and the blank size is greater than zero, but it should be zero when the keyboard is open, shift content up so that it lands above the keyboard

There was no single trick to get this all working. We spent dozens of hours using the app, noticing imperfections, tracing issues, and rewriting the logic until it felt right.

Scrolling to end initially

When you open an existing chat, v0 starts the chat scrolled to end. This is similar to using the inverted prop on React Native’s FlatList, which is common for bottom-to-top chat interfaces.

However, we decided not to use inverted since it felt incompatible with an AI chat where messages stream in multiple times per second. We opted not to autoscroll as the assistant message streams. Instead, we let the content fill in naturally under the keyboard, together with a button to scroll to the end. This follows the same behavior as ChatGPT’s iOS app.

That said, we wanted an inverted-list-style experience when you first opened an existing chat. To make this work, we call scrollToEnd when a chat first becomes visible.

Due to a complex combination of dynamic message heights and blank size, we had to call scrollToEnd multiple times. If we didn’t, our list would either not scroll properly, or scroll too late. Once the content has scrolled, we call hasScrolledToEnd.set(true) to fade in the chat.
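The retry pattern can be sketched with an injectable scheduler, which also makes it testable. The delays and retry count here are assumptions; v0's real values differ:

```typescript
// Call scrollToEnd several times across frames so the list ends up in the
// right place even as message heights and blank size settle.
type Schedule = (fn: () => void, delayMs: number) => void;

function scrollToEndWithRetries(
  scrollToEnd: () => void,
  schedule: Schedule = (fn, ms) => setTimeout(fn, ms),
  delaysMs: number[] = [0, 16, 48], // roughly: now, next frame, a few frames later
): void {
  for (const delay of delaysMs) schedule(scrollToEnd, delay);
}
```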

Floating composer

Inspired by iMessage’s bottom toolbar in iOS 26, we built a Liquid Glass composer with a progressive blur.

We used @callstack/liquid-glass to add interactive Liquid Glass. By wrapping the glass views with LiquidGlassContainerView, we automatically get the view morphing effect.

Make it float

After adding the Liquid Glass, the next step was making it float on top of the chat content.

In order to make the composer float on top of the scrollable content, we took the following steps:

  1. Add position: absolute; bottom: 0 to the composer

  2. Wrap the composer in KeyboardStickyView from react-native-keyboard-controller

  3. Synchronously measure the composer, and store its height in context using a shared value

  4. Add the composerHeight.get() to our ScrollView’s native contentInset.bottom property
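Steps 3 and 4 above can be sketched in plain TypeScript with a stand-in for a Reanimated shared value (all names here are assumptions):

```typescript
// Minimal stand-in for a Reanimated shared value, to show how the measured
// composer height feeds the ScrollView's contentInset.bottom.
function createSharedValue<T>(initial: T) {
  let value = initial;
  return { get: () => value, set: (v: T) => { value = v; } };
}

const composerHeight = createSharedValue(0);

// Step 3: store the synchronously measured height.
composerHeight.set(88);

// Step 4: the inset the list needs below its content so nothing hides
// behind the floating composer.
function getBottomContentInset(blankSize: number): number {
  return composerHeight.get() + blankSize;
}
```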

However, this was not enough. We are still missing one key behavior.

As you type, the text input’s height can increase. When you type new lines, we want to simulate the experience of typing in a regular, non-absolute-positioned input. We had to find a way to shift the chat messages upwards, but only if you are scrolled to the end of the chat.

In the video below, you can see both cases. At the start of the video, content shifts up with new lines since the chat is scrolled to the end. However, after scrolling up in the chat, typing new lines will not shift the content.

useScrollWhenComposerSizeUpdates

Enter useScrollWhenComposerSizeUpdates. This hook listens to the height of the composer and automatically scrolls to end when needed. To consume it, we simply call it in MessagesList:

First, it sets up an effect using useAnimatedReaction to track composer height changes.

Next, we call autoscrollToEnd. As long as you’re close enough to the end of the scrollable area, we automatically scroll to the end of the chat. Without this, entering new lines in the composer would overlap the bottom of the scrollable area.
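A sketch of the "close enough to the end" check. The threshold value is an assumption:

```typescript
// Only autoscroll when the user is already near the bottom of the chat;
// otherwise typing new lines should not move the content.
function shouldAutoscrollToEnd(
  scrollOffset: number,
  visibleHeight: number,
  contentHeight: number,
  thresholdPt = 100, // assumed threshold
): boolean {
  const distanceFromEnd = contentHeight - (scrollOffset + visibleHeight);
  return distanceFromEnd <= thresholdPt;
}
```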

useScrollWhenComposerSizeUpdates lets us conditionally simulate the experience of a view that is not absolute-positioned.

As we saw in earlier code, we unfortunately relied on a number of setTimeout and requestAnimationFrame calls to scrollToEnd. That code will understandably raise eyebrows, but it was the only way we managed to get scrolling to end working properly. We’re actively collaborating with Jay, the maintainer of LegendList, to build a more reliable approach.

Make it feel native

React Native’s built-in TextInput felt out of place in a native chat app.

By default, when you set multiline={true}, the TextInput shows ugly scroll indicators, which is inconsistent with most chat apps. Swiping up and down on the input will bounce its internal content, even if you haven’t typed any text yet. Additionally, the input doesn't support interactive keyboard dismissal.

To fix these issues, we applied a patch to RCTUITextView in native code. This patch disables scroll indicators, removes bounce effects, and enables interactive keyboard dismissal.

Our patch also adds support for swiping up to focus the input. We realized we needed this after watching testers frustratingly swipe up expecting the keyboard to open.

While maintaining a patch across React Native updates is not ideal, it was the most practical solution we found. We would have preferred an official API for extending native views without patching, and we plan on contributing this patch to React Native core if there is community interest.

Pasting images

To support pasting images and files in the text input, we used an Expo Module that listens to paste events from the native UIPasteboard.

If you paste long enough text, onPaste will automatically turn the pasted content into a .txt file attachment.

Since it was difficult to extend the existing TextInput in native code, we use a TextInputWrapper component which wraps TextInput and traverses its subviews in Swift. For more in-depth examples of creating native wrapper components, you can watch my 2024 talk, “Don’t be afraid to build a native library”.

Fading in streaming content

When an assistant message streams in, it needs to feel smooth. To achieve this, we created two components:

  1. <FadeInStaggeredIfStreaming />

  2. <TextFadeInStaggeredIfStreaming />

As long as an element gets wrapped by one of these components, its children will smoothly fade in with a staggered animation.

Under the hood, these components render a variation of FadeInStaggered, which handles the state management:

useIsAnimatedInPool is a custom state manager outside of React that allows a limited number of ordered elements to get rendered at once. Elements request to join the pool when they mount, and isActive indicates if they should render an animated node.

After the onFadedIn callback fires, we evict the element from the pool, rendering its children directly without the animated wrapper. This helps us limit the number of animated nodes that are active at once.
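A plain-TypeScript sketch of how such a pool could work. The API surface is an assumption; the real useIsAnimatedInPool hooks into React mount order:

```typescript
// A mount-ordered animation pool: at most `limit` elements animate at once,
// the rest wait their turn, and finished elements are evicted so they
// render without the animated wrapper.
function createAnimationPool(limit: number) {
  const order: number[] = []; // element ids in mount order
  return {
    join(id: number) {
      if (!order.includes(id)) order.push(id);
    },
    isActive(id: number): boolean {
      const index = order.indexOf(id);
      return index !== -1 && index < limit;
    },
    // Called from onFadedIn: the element renders unwrapped from now on.
    evict(id: number) {
      const index = order.indexOf(id);
      if (index !== -1) order.splice(index, 1);
    },
  };
}
```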

Lastly, FadeIn renders a staggered animation with a delay of 32 milliseconds between elements. The staggered animations run on a schedule, animating a batch of 2 items at a time. When the queue of staggered items grows beyond 10, we increase the batch size according to the size of the queue.
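The batch sizing could look something like this; the exact scaling rule is an assumption, since we only state that it grows with queue size:

```typescript
// Animate 2 items per batch normally, scaling up with queue depth once
// more than 10 items are waiting, so fast streams don't fall behind.
function getStaggerBatchSize(queueLength: number): number {
  if (queueLength <= 10) return 2;
  return Math.min(queueLength, 2 + Math.floor(queueLength / 10));
}
```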

TextFadeInStaggeredIfStreaming uses a similar strategy. We first chunk words into individual text nodes, then we create a unique pool for text elements with a limit of 4. This ensures that no more than 4 words will fade in at a time.

One issue we faced with this approach is that it relies heavily on firing animations on mount. As a result, if you send a message, go to another chat, and then come back to the original chat before the message is done sending, it will remount and animate once again.

To mitigate this, we implemented a system that keeps track of which content you've already seen animate across chats. The implementation uses a DisableFadeProvider near the top of the message tree. We consume it in the root fade component to avoid affecting the pool if needed.

While it might look unusual to explicitly rely on useState's initial value in a non-reactive way, this let us reliably track elements and their animation states based on their mount order.

Sharing code between web and native

When we started building the v0 iOS app, a natural question arose: how much code should we share between web and native?

Given how mature the v0 web monorepo was, we decided to share types and helper functions, but not UI or state management. We also made a concerted effort to migrate business logic from client to server, letting the v0 mobile app be a thin wrapper over the API.

Building a shared API

Sharing the backend API routes between a mature Next.js app and a new mobile app introduced challenges. The v0 web app is powered by React Server Components and Server Actions, while the mobile app functions more like a single-page React app.

To address this, we built an API layer using a hand-rolled backend framework. Our framework enforces runtime type safety by requiring input and output types specified with Zod.
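A hand-rolled sketch of such a route definition, with plain validator functions standing in for Zod schemas so the example has no dependencies. All names here are hypothetical:

```typescript
// A route validates its input, runs the handler, then validates the
// output, enforcing runtime type safety at both boundaries.
type Validator<T> = (value: unknown) => T; // throws on invalid data

function defineRoute<I, O>(config: {
  input: Validator<I>;
  output: Validator<O>;
  handler: (input: I) => O;
}) {
  return (rawInput: unknown): O => {
    const input = config.input(rawInput);  // validate the request body
    const output = config.handler(input);  // run business logic
    return config.output(output);          // validate the response shape
  };
}

// Hypothetical example route: look up a chat title.
const getChatTitle = defineRoute({
  input: (v) => {
    if (typeof v !== "object" || v === null || typeof (v as any).chatId !== "string") {
      throw new Error("invalid input");
    }
    return v as { chatId: string };
  },
  output: (v) => {
    if (typeof (v as any)?.title !== "string") throw new Error("invalid output");
    return v as { title: string };
  },
  handler: ({ chatId }) => ({ title: `Chat ${chatId}` }),
});
```

In the real framework the validators are Zod schemas, which is what makes the next step, generating an OpenAPI spec from the route types, possible.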

After defining the routes, we generate an openapi.json file based on each route’s Zod types. The mobile app consumes the OpenAPI spec using Hey API, which generates helper functions to use with Tanstack Query.

This effort led to the development of the v0 Platform API. We wanted to build the ideal API for our own native client, and we ultimately decided to make that same API available to everyone. Thanks to this approach, v0 mobile uses the same routes and logic as v0’s Platform API customers.

On each commit, we run tests to ensure that changes to our OpenAPI spec are compatible with the mobile app.

In the future, we hope to eliminate the code generation step entirely with a type-level RPC wrapper around the Platform API.

Styling

v0 uses react-native-unistyles for styles and theming. My experience with React Native has taught me to be cautious of any work done in render. Unlike other styling libraries we evaluated, Unistyles provides comprehensive theming without re-rendering components or accessing React Context.

Native menus

Beyond Unistyles for themes and styles, we did not use a JS-based component library. Instead, we relied on native elements where possible.

For menus, we used Zeego, which relies on react-native-ios-context-menu to render the native UIMenu under the hood. Zeego automatically renders Liquid Glass menus when you build with Xcode 26.

Native alerts

React Native apps on iOS 26 experienced the Alert pop-up rendering offscreen. We reproduced this in our own app and in many popular React Native apps. We patched it locally and worked with developers from Callstack and Meta to upstream a fix in React Native.

Native bottom sheets

For bottom sheets, we used the built-in React Native modal with presentationStyle="formSheet". However, this came with a few downsides which we addressed with patches.

Modal dragging issues

First, when dragging the sheet down, it temporarily froze in place before properly dismissing. To resolve this, we patched React Native locally. We worked with Callstack to upstream our patch into React Native, and it’s now live in 0.82.

Fixing Yoga flickering

If you put a View with flex: 1 inside a modal with a background color, and then drag the modal up and down, the bottom of the view flickers aggressively.

To solve this, we patched React Native locally to support synchronous updates for modals in Yoga. We collaborated with developers from Callstack, Expo and Meta to upstream this change into React Native core. It's now live in React Native 0.82.

Looking forward

After building our first app using React Native with Expo, we aren’t looking back. If you haven't tried v0 for iOS yet, download it and let us know what you think with an App Store review.

We're hiring developers to join the Vercel Mobile team. If this kind of work excites you, we'd love to hear from you.

At Vercel, we're committed to building ambitious products at the highest caliber. We want to make it easy for web and native developers to do the same, and we plan to open-source our findings. Please reach out on X if you would like to beta test an open source library for AI chat apps. We look forward to partnering with the community to continue improving React Native.

Read more

Fernando Rojo
https://vercel.com/blog/security-through-design-creating-the-improved-firewall-experience Security through design: Creating the improved Firewall experience 2025-11-24T13:00:00.000Z

At Vercel, we believe security should be intuitive, not intimidating. The best security tool is the one that's actually used. It should be clear, useful, and never in the way.

But that's not always the norm. Security tooling can often feel like a tradeoff against shipping velocity. When UX is an afterthought, teams leave tools off or in "logging mode" forever, even when risks are high.

That's why we've redesigned the Vercel Firewall experience from the ground up. The new UI helps you see more, do more, and feel confident in your app's resilience to attacks.

Designing for every Vercel user

The redesign started with listening. Users told us:

  • I want to easily see active DDoS events

  • I need more information on what the Firewall blocked

  • I need a faster way to investigate traffic alerts or spikes

Developers, SREs, and security teams all use the Firewall for maintenance and troubleshooting. They configure rules, monitor traffic, and respond to unusual activity.

The new Firewall UI is designed for everyone using Vercel. It surfaces clear, actionable information, simplifies navigation, and helps teams resolve issues quickly when it matters most.

A better way to see and secure your traffic

The new design brings together visibility, context, and control in one view.

  • A redesigned overview page provides a unified, high-signal view of Firewall activity

  • New sidebar navigation offers one click to Overview, Traffic, Rules, and Audit Log

  • Key activity and alert feeds surface unusual patterns and potential threats

  • Improved inspection tools make it faster to move from alert to insight

A new overview for all security events

The Overview page is your high-level control center for the Firewall. It gives you a clear, bird's-eye view of your site’s security posture. The traffic chart remains at the top, and we now surface the most important information based on recent activity.

Four tables surface key Firewall activity so you can see the current state and act quickly when needed:

  • Alerts shows recently mitigated DDoS attacks

  • Rules displays top rule activity by volume

  • Events lists mitigations taken by the Firewall

  • Denied IPs shows blocked connections by client IP

Comprehensive traffic intelligence

The new Traffic page focuses entirely on understanding activity across your site. You can now drill down into the detection signals that you care about the most, and filter those signals based on specific mitigation actions on the traffic tab. These updates make it easier to spot patterns or anomalies before they become problems.

We now surface dedicated feeds for:

  • Top IPs

  • Top JA4 digests

  • Top AS names

  • Top User Agents

  • Top Request Paths

  • Rules with most activity

Dedicated rules and activity

Firewall Rules now have a dedicated tab on the sidebar. You can see and manage all of your WAF custom rules in this view, including Bot Protection, Managed Rulesets, IP Blocking, and more. We’ve also moved the Audit Log to a dedicated tab for full visibility into Firewall changes.

Faster event inspection

Clicking an alert or event now opens a detailed view directly in the page. You can dive deeper into Firewall activity and investigate suspicious traffic or DDoS attacks without context switching, helping you diagnose issues faster and take action immediately.

Security designed for you

Security is usability. When tools are clear and well-designed, teams act faster and stay safer, without sacrificing shipping velocity.

We'd love your feedback. Explore the new Firewall experience today in your Vercel Dashboard and share your thoughts in the Vercel Community.

Read more

Sage Abraham Liz Hurder Ethan Shea Tom Bremer William Bout
https://vercel.com/blog/workflow-builder-build-your-own-workflow-automation-platform Workflow Builder: Build your own workflow automation platform 2025-11-24T13:00:00.000Z

Today we're open-sourcing Workflow Builder, a complete visual automation platform powered by the Workflow Development Kit (WDK).

The project includes a visual editor, execution engine, and infrastructure, giving you what you need to build your own workflow automation tools and agents. Deploy it to Vercel and customize it for your use case.

Read more

Chris Tate Hayden Bleasel Adrian Lam
https://vercel.com/changelog/deployments-can-now-require-cryptographically-verified-commits Deployments can now require cryptographically-verified commits 2025-11-24T13:00:00.000Z

Vercel now supports commit verification, letting you protect your deployments by requiring commits to be cryptographically verified before they’re deployed from GitHub.

Enable it for GitHub-connected projects in your project settings.

Learn more about commit signing and verification on GitHub or read more about the setting in our docs.

Read more

Tom Knickman
https://vercel.com/changelog/convex-joins-the-vercel-marketplace Convex joins the Vercel Marketplace 2025-11-24T13:00:00.000Z

Convex is now available on the Vercel Marketplace, giving developers an easy way to add a real-time backend to any Vercel project. You can create and connect a Convex project directly from the Vercel dashboard and get a fully configured backend without manual setup.

With the new integration, you can:

  • Provision a Convex project directly from Vercel dashboard

  • Manage accounts and billing in one place

  • Get real-time data sync with built-in caching and consistency

  • Use Convex’s data model and functions alongside Vercel’s full developer workflow

Install Convex from the Marketplace and start building with a fully connected backend in just a few clicks.

Read more

Hedi Zandi Tony Pan Dima Voytenko Marc Brakken
https://vercel.com/changelog/shai-hulud-2-0-supply-chain-compromise Shai-Hulud 2.0 Supply Chain Compromise 2025-11-24T13:00:00.000Z

Multiple npm packages from various web services were compromised through account takeover and developer compromise. A malicious actor added a stealthy loader to the package.json file that locates the Bun runtime, silently installs it, and then executes a malicious script.

Our investigation has shown that no Vercel environment was impacted and we are notifying a small set of customers with affected builds.

Impact to Vercel Customers

Vercel has taken immediate steps to address this for our customers. As an initial step, we reset the cache for projects that pulled in any of the vulnerable packages while we continue to investigate whether any loaders successfully ran.

  • As of this publication, no Vercel-managed systems or internal build processes have been impacted.

  • Preliminary analysis identified a limited set of Vercel customer builds referencing the compromised packages.

  • Impacted customers are being contacted directly with detailed mitigation steps.

We will continue to issue updates throughout our investigation.

Read more

Aaron Brown
https://vercel.com/changelog/you-can-now-configure-advanced-sampling-rules-for-vercel-drains You can now configure advanced sampling rules for Vercel Drains 2025-11-24T13:00:00.000Z

You can now configure advanced sampling rules when exporting data to a third-party observability tool when using Vercel Drains.

Advanced sampling rules allow you to configure sampling rates for specific environments and path prefixes, providing more granular control over cost management.

Vercel Drains is available to Pro and Enterprise teams. Advanced sampling rules can be configured on drains exporting logs or traces.

Try it out or learn more about Vercel Drains.

Read more

Darpan Kakadia Timo Lins Adrian Cooney Luc Leray Malavika Tadeusz
https://vercel.com/changelog/claude-opus-4-5-now-available-in-vercel-ai-gateway Claude Opus 4.5 now available in Vercel AI Gateway 2025-11-24T13:00:00.000Z

You can now access Anthropic's latest model, Claude Opus 4.5, via Vercel's AI Gateway with no other provider accounts required.

Claude Opus 4.5 is suited for demanding reasoning tasks and complex problem solving. This model has improvements in general intelligence and vision compared to previous iterations. It excels at difficult coding tasks and agentic workflows, especially those with computer use and tool use, and can effectively handle context usage and external memory files. Frontend coding and design are established strengths, particularly for developing real-world web applications.

To use Claude Opus 4.5, set model to anthropic/claude-opus-4.5 in the AI SDK. There is a new effort parameter for this model. This parameter affects all types of tokens and controls the level of token usage when responding to a request. By default, effort is set to high and is independent of the thinking budget. To use it in AI Gateway with the AI SDK, set effort for the provider in providerOptions, as seen below in the example.
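An illustrative call based on the description above; the providerOptions key and option shape are assumptions and may differ in the current AI SDK release:

```typescript
import { generateText } from "ai";

// Route the request through AI Gateway by model string; lower the effort
// level from its default of "high" via providerOptions (shape assumed).
const { text } = await generateText({
  model: "anthropic/claude-opus-4.5",
  providerOptions: {
    anthropic: { effort: "medium" },
  },
  prompt: "Summarize the latest deployment logs.",
});
```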

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the docs, view the AI Gateway model leaderboard, or use Claude Opus 4.5 directly in our model playground.

Read more

Walter Korman Matt Lenhard Jerilyn Zheng
https://vercel.com/changelog/streamdown-1-6-is-now-available-to-run-faster-and-ship-less-code Streamdown 1.6 is now available to run faster and ship less code 2025-11-24T13:00:00.000Z

Vercel Streamdown 1.6 is now available with major improvements to performance, bundle size, and the authoring experience.

Streamdown now runs faster and ships less code thanks to memoization, LRU caching, optimized string operations, and the removal of regexes.

Several product enhancements include:

  • Code Blocks, Mermaid, and Math components are now lazy-loaded with React.lazy() and Suspense, only loading when used.

  • The code highlighting system has been rebuilt with a new tokenization approach that’s simpler, more efficient, and includes line numbers.

  • A custom markdown renderer replaces React Markdown, giving Streamdown a lighter core and more room for future optimizations.

  • Static Mode adds support for rendering markdown without streaming, ideal for blogs and other static use cases as it reduces streaming overhead.

  • Mermaid blocks now support custom error components for handling parsing issues.

  • Diagrams can be exported as SVG, PNG, or source code, and the fullscreen view includes zoom and pan controls (thanks to zhdzb).

Update to Vercel Streamdown 1.6 today with npm i streamdown@latest or read more about Streamdown here.

Read more

Hayden Bleasel
https://vercel.com/changelog/open-your-vercel-dashboard-from-the-vercel-cli Open your Vercel dashboard from the Vercel CLI 2025-11-24T13:00:00.000Z

You can now open your current project in the Vercel Dashboard directly from the command line using vercel open.

This gives you quick access to your project without needing to navigate manually in your browser.

Update vercel to 48.10.0 or newer with npm i -g vercel to give it a try.

See docs for more.

Read more

Brooke Mosby
https://vercel.com/blog/vercel-open-source-program-fall-2025-cohort Vercel Open Source Program: Fall 2025 cohort 2025-11-21T13:00:00.000Z

In April, we launched the Vercel Open Source Program to give maintainers the resources, credits, and support they need to ship faster and scale confidently. The first group joined through our spring 2025 cohort.

Today we are welcoming the fall 2025 cohort.

From AI-native apps and developer infrastructure to design systems and creative tooling, open-source builders continue to amaze us. Meet the creators and explore their projects.

Read more

Kap Sev Gabby Shires
https://vercel.com/blog/self-driving-infrastructure Self-driving infrastructure 2025-11-21T13:00:00.000Z

AI has transformed how we write code. The next transformation is how we run it.

At Vercel, we’re building self-driving infrastructure that autonomously manages production operations, improves application code using real-world insights, and learns from the unpredictable nature of production itself.

Our vision is a world where developers express intent, not infrastructure. Where ops teams set principles, not individual configurations and alerts. Where the cloud doesn’t just host your app, it understands, optimizes, and evolves it.

Read more

Malte Ubl Tom Occhino Dan Fein
https://vercel.com/changelog/vercel-agent-investigations-now-included-in-observability-plus Vercel Agent investigations now included in Observability Plus 2025-11-21T13:00:00.000Z

Vercel Agent investigations are now included in Observability Plus, adding 10 investigations to every billing cycle at no extra cost to your subscription.

Investigations help teams diagnose and resolve incidents faster, and run automatically on error alerts. When an alert flags suspicious activity, such as unexpected spikes in usage or errors, Vercel Agent investigates the issue, identifies the likely root cause, analyzes the impact, and suggests next steps for remediation.

Teams can purchase Vercel Agent credits to run additional investigations. Investigations are public beta for Pro and Enterprise teams with Observability Plus.

Try it out or learn more about Vercel Agent investigations.

Read more

Julia Shi Ethan Shea Malavika Tadeusz
https://vercel.com/changelog/grok-4-1-fast-models-now-available-on-vercel-ai-gateway Grok 4.1 Fast models now available on Vercel AI Gateway 2025-11-20T13:00:00.000Z

You can now access xAI's latest models, Grok 4.1 Fast Reasoning and Grok 4.1 Fast Non-Reasoning, via Vercel's AI Gateway with no other provider accounts required. These models have a 2M context window and are designed for agentic tool calling.

Grok 4.1 Fast Reasoning is best suited for structured reasoning and agentic operations that require high accuracy, whereas Grok 4.1 Fast Non-Reasoning is tailored to speed.

To use the Grok 4.1 Fast models in AI Gateway with the AI SDK, set model to xai/grok-4.1-fast-reasoning or xai/grok-4.1-fast-non-reasoning.
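A minimal AI SDK sketch might look like the following (the prompt text is illustrative, and it assumes AI Gateway is the default provider as in AI SDK 5):

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  // Swap to 'xai/grok-4.1-fast-non-reasoning' when latency matters more than accuracy
  model: 'xai/grok-4.1-fast-reasoning',
  prompt: 'Plan the tool calls needed to triage this bug report.',
});
```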

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/you-can-now-invalidate-the-cdn-cache-by-providing-a-source-image You can now invalidate the CDN cache by providing a source image 2025-11-20T13:00:00.000Z

Vercel Image Optimization dynamically transforms source images to reduce file size while maintaining high quality on the visitor's browser.

You can now invalidate the CDN cache by providing a source image.

This feature marks all transformed images derived from that source image as stale. The next request serves stale content instantly while revalidation happens in the background, with no latency impact for users.

There are several ways to invalidate a source image, covered in the cache invalidation documentation.

In addition to invalidating by source image, you can also delete by source image if the origin is gone. Deleting the cache can increase latency while new content is generated, or cause downtime if your origin is unresponsive, so we recommend using it with caution.

This is available on all plans using the new image optimization price.

Learn more about cache invalidation.

Read more

Steven Salat Shraddha Agarwal Luba Kravchenko
https://vercel.com/changelog/improved-analytics-experience-now-available-on-the-vercel-firewall Improved analytics experience now available on the Vercel Firewall 2025-11-20T13:00:00.000Z

We have launched improvements to the Vercel Firewall UI, simplifying your application security monitoring and analysis. Vercel Firewall includes the System firewall and DDoS mitigations, Web Application Firewall, and Bot Management capabilities.

The updated experience surfaces more information on security events and mitigations, and allows for easier event investigations, bringing together all security events analytics in one place.

The updates include:

  • An updated Overview page for a consolidated view of DDoS attacks, and activity across system rules, custom rules, and IP blocks.

  • A new Traffic page that lets you drill down into top traffic sources (IPs, request paths, JA4 digests, ASNs, user agents) and filter by action (allowed, logged, denied, challenged, rate limited).

  • Simplified UX for writing custom rules and queries, so you can take action or run analysis without friction.

Learn more about the Firewall or visit the Firewall tab on your project to see the updates.

Read more

Priyanka Jindal Sage Abraham Liz Hurder William Bout
https://vercel.com/changelog/nano-banana-pro-gemini-3-pro-image-now-available-in-the-ai-gateway Nano Banana Pro (Gemini 3 Pro Image) now available in the AI Gateway 2025-11-20T13:00:00.000Z

You can now access Google's cutting-edge image model, Nano Banana Pro (Gemini 3 Pro Image), via Vercel's AI Gateway with no other provider accounts required.

Nano Banana Pro (Gemini 3 Pro Image) is designed to work for more advanced use cases than Nano Banana. This model introduces improvements specifically for professional and creative workflows, like the generation of diagrams with accurate labeling and integration of web search information for images with up-to-date information. Nano Banana Pro also supports higher resolution generation and higher multi-image input limits for better compositing.

To use Nano Banana Pro in AI Gateway with the AI SDK, set model to google/gemini-3-pro-image. Note that this is a multi-modal model and therefore uses generateText for the actual image generation.
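As a sketch (the prompt is illustrative; reading generated images from result.files follows AI SDK conventions but is an assumption here):

```typescript
import { generateText } from 'ai';

const result = await generateText({
  model: 'google/gemini-3-pro-image',
  prompt: 'A labeled architecture diagram of a CDN cache hierarchy.',
});

// Generated images arrive as files on the result, alongside any text
for (const file of result.files ?? []) {
  console.log(file.mediaType);
}
```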

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Read the AI Gateway docs for examples on how to use Nano Banana Pro to generate images, view the AI Gateway model leaderboard or try to generate images in our model playground.

Read more

Walter Korman Rohan Taneja Jeremy Philemon Jerilyn Zheng
https://vercel.com/blog/vercel-collaborates-with-google-for-gemini-3-pro-launch Vercel collaborates with Google for Gemini 3 Pro Preview launch 2025-11-18T13:00:00.000Z

The Gemini 3 Pro Preview model, released today, is now available through the Vercel AI Gateway and on v0.app. Thanks to Google, Vercel has been testing Gemini 3 Pro Preview across v0, Next.js, AI SDK, and Vercel Sandbox over the past several weeks.

We've noticed the model has an increased focus on coding, multimodal reasoning, and tool use, though it's seen improvements across the board.

From our testing, Gemini 3 Pro Preview delivers substantial improvements in instruction following and response consistency. It shows almost a 17% increase in correctness over its predecessor on our Next.js evals, putting it among the top 2 models on the leaderboard.

Read more

Dan Fein Matt Lenhard Max Leiter Harpreet Arora
https://vercel.com/changelog/gemini-3-pro-now-available-in-vercel-ai-gateway Gemini 3 Pro now available in Vercel AI Gateway 2025-11-18T13:00:00.000Z

You can now access Google's latest model, Gemini 3 Pro, via Vercel's AI Gateway with no other provider accounts required.

Gemini 3 Pro excels at challenging tasks involving reasoning or agentic workflows. In particular, the model improves on Gemini 2.5 Pro's performance in multi-step function calling, planning, reasoning over complex images/long documents, and instruction following.

To use Gemini 3 Pro in AI Gateway with the AI SDK, set model to google/gemini-3.0-pro-preview. Gemini 3 Pro is a reasoning model, and you can specify the level of thinking. Include the providerOptions configuration with includeThoughts like the example below to enable reasoning text.
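The example did not survive extraction; a plausible reconstruction follows (the thinkingConfig shape follows the AI SDK Google provider options, hedged as an assumption):

```typescript
import { generateText } from 'ai';

const { text, reasoning } = await generateText({
  model: 'google/gemini-3.0-pro-preview',
  prompt: 'Compare two caching strategies for a news site.',
  providerOptions: {
    // Enables reasoning text in the response
    google: { thinkingConfig: { includeThoughts: true } },
  },
});
```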

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Walter Korman Matt Lenhard Jerilyn Zheng
https://vercel.com/changelog/vercel-now-supports-build-commands-for-fastapi-and-flask Vercel now supports Build Commands for FastAPI and Flask 2025-11-17T13:00:00.000Z

You can now easily deploy FastAPI and Flask with custom Build Commands, expanding support for Python projects on Vercel.

In addition to defining a Build Command in the project Settings dashboard, you can also define a build script under [tool.vercel.scripts] in your pyproject.toml.

This script will run after dependencies are installed, but before your application is deployed.
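The exact key names are not shown here, so this fragment is a guess at the shape; the script name and command are hypothetical:

```toml
[tool.vercel.scripts]
# Hypothetical build step: runs after dependency install, before deployment
build = "python scripts/prebuild.py"
```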

Learn more about the Build Command for Python projects.

Read more

Ricardo Gonzalez
https://vercel.com/changelog/support-for-elysia Elysia can now be automatically deployed on Vercel 2025-11-17T13:00:00.000Z

Elysia, a popular ergonomic TypeScript framework with end-to-end type safety, can now be deployed instantly on Vercel.

When deployed, Vercel automatically identifies that your app is running Elysia and provisions the optimal resources to run it efficiently.

By default, Elysia will use Node. You can opt-in to the Bun runtime by adding the bunVersion line below to your vercel.json.
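The referenced line was omitted above; a minimal vercel.json sketch (the "1.x" version string is an assumption):

```json
{
  "bunVersion": "1.x"
}
```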

Backends on Vercel use Fluid compute with Active CPU pricing by default, so you only pay for time where your code is actively using CPU.

Deploy Elysia on Vercel, or visit the documentation for Elysia or Bun Runtime at Vercel.

Read more

Jeff See Marcos Grappeggia Austin Merrick Anthony Shew
https://vercel.com/changelog/bulk-redirects-are-now-generally-available Bulk redirects are now generally available 2025-11-13T13:00:00.000Z

Vercel now supports bulk redirects, allowing up to one million static URL redirects per project.

This feature adds import options for formats like CSV and JSON, so teams can more easily manage large-scale migrations, fix broken links, handle expired pages, and more.

To use bulk redirects, set the bulkRedirectsPath field in your vercel.json to a file or folder containing your redirects. These will be automatically imported at build time.
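A minimal vercel.json sketch (the file name is hypothetical; per the post, the path may point at a file or folder, in formats like CSV or JSON):

```json
{
  "bulkRedirectsPath": "redirects.json"
}
```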

This feature is available for Pro and Enterprise customers, and includes rates for additional capacity:

  • Pro: 1,000 bulk redirects included per project

  • Enterprise: 10,000 bulk redirects included per project

  • Additional capacity: starts at $50/month per 25,000 redirects

Get started with bulk redirects.

Read more

Ben Roberts Mark Knichel Tim Caswell Andrew Gadzik Matthew Stanciu Sudais Moorad
https://vercel.com/changelog/gpt-5-1-codex-models-now-available-in-vercel-ai-gateway GPT 5.1 Codex models now available in Vercel AI Gateway 2025-11-13T13:00:00.000Z

You can now access OpenAI's latest Codex models, GPT-5.1 Codex and GPT-5.1 Codex mini, via Vercel's AI Gateway with no other provider accounts required. These Codex models are optimized for long-running, agentic coding tasks and maintain context and reasoning over longer sessions without degradation.

To use these models with the AI SDK, set the model to openai/gpt-5.1-codex or openai/gpt-5.1-codex-mini:
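A minimal AI SDK sketch (prompt illustrative; assumes AI Gateway is the default provider as in AI SDK 5):

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'openai/gpt-5.1-codex', // or 'openai/gpt-5.1-codex-mini'
  prompt: 'Refactor this handler to remove the nested callbacks.',
});
```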

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Walter Korman Jerilyn Zheng
https://vercel.com/changelog/gpt-5-1-models-now-available-in-vercel-ai-gateway GPT 5.1 models now available in Vercel AI Gateway 2025-11-13T13:00:00.000Z

You can now access OpenAI's latest models, GPT-5.1 Instant and GPT-5.1 Thinking, using Vercel's AI Gateway with no other provider accounts required.

  • GPT-5.1 Instant offers improved instruction following, adaptive reasoning, and warmer, more conversational responses.

  • GPT-5.1 Thinking builds on GPT-5 Thinking with dynamic performance tuning that prioritizes speed for simple tasks and deeper reasoning for complex ones.

To use these models with the AI SDK, set the model to openai/gpt-5.1-instant or openai/gpt-5.1-thinking:
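A minimal AI SDK sketch (prompt illustrative; assumes AI Gateway is the default provider as in AI SDK 5):

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'openai/gpt-5.1-instant', // or 'openai/gpt-5.1-thinking'
  prompt: 'Draft a friendly onboarding note for new teammates.',
});
```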

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Walter Korman Jerilyn Zheng
https://vercel.com/changelog/rollbar-joins-the-vercel-marketplace Rollbar joins the Vercel Marketplace 2025-11-12T13:00:00.000Z

Rollbar is now available as a native integration on the Vercel Marketplace, bringing real-time error monitoring and code-first observability directly into your Vercel workflow.

With Rollbar, developers can automatically detect, track, debug, and resolve errors faster across deployments, connecting every issue back to the exact release and commit that introduced it. This helps teams move quickly while staying confident in production.

In just a few clicks, you can:

  • Manage accounts and billing in one place

  • Connect Rollbar to one or many Vercel projects in minutes

  • Automatically track deployments and tie errors to the specific revision that caused them

  • Keep environments and source maps aligned across Rollbar and Vercel for clean, readable stack traces

Install Rollbar from the Vercel Marketplace.

Read more

Hedi Zandi
https://vercel.com/blog/vercel-the-anti-vendor-lock-in-cloud Vercel: The anti-vendor-lock-in cloud 2025-11-10T13:00:00.000Z

Vendor lock-in matters when choosing a cloud platform. Cloud platforms can lock you in by requiring you to build against their specific primitives. Vercel takes a different approach: you write code for your framework, not for Vercel.

On AWS, you configure Lambda functions, NAT Gateways, and DynamoDB tables. On Cloudflare, you write Workers, use KV stores, Durable Objects, and bind services with Worker Service Bindings. These primitives only exist with that vendor, which means migrating to another platform requires rewriting your application architecture.

Too often, cloud platforms make these choices for you. They define proprietary primitives, APIs, and services that pull your code deeper into their ecosystem until leaving becomes impractical.

At Vercel, we believe the opposite approach creates better software and a healthier web. We want developers to stay because they want to, not because they have to. That means building open tools, embracing standards, and ensuring your code remains portable no matter where it runs.

So Vercel works differently. It interprets your framework code and provisions infrastructure automatically. Your application does not need to know it runs on Vercel. You do not need to import Vercel modules or call Vercel APIs. This is framework-defined infrastructure.

Read more

Malte Ubl Tom Occhino Kevin Corbett
https://vercel.com/changelog/model-fallbacks-now-available-in-vercel-ai-gateway Model fallbacks now available in Vercel AI Gateway 2025-11-10T13:00:00.000Z

Vercel's AI Gateway now supports fallback models for when models fail or are unavailable. In addition to safeguarding against provider-level failures, model fallbacks can help with errors and capability mismatches between models (e.g., multimodal, tool-calling, etc.).

Fallback models will be tried in the specified order until a request succeeds or no options remain. Any error, such as context limits, unsupported inputs, or provider outages, can trigger a fallback. Requests are billed based on the model that completes successfully.

This example shows an instance where the primary model does not support multimodal capabilities, falling back to models that do. To use, specify the model fallbacks in models within providerOptions:
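A sketch of that scenario (the primary model and image URL are hypothetical; the gateway options shape follows the post's description of models within providerOptions):

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  // Hypothetical primary model without image support, for illustration
  model: 'deepseek/deepseek-v3.1',
  providerOptions: {
    gateway: {
      // Tried in order if the primary fails (e.g. unsupported image input)
      models: ['anthropic/claude-sonnet-4.5', 'openai/gpt-5'],
    },
  },
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this chart.' },
        { type: 'image', image: new URL('https://example.com/chart.png') },
      ],
    },
  ],
});
```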

To have pre-defined provider routing in addition to model routing, specify both models and providers (order or only) in providerOptions:
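Combining both might look like this (provider slugs are illustrative):

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  providerOptions: {
    gateway: {
      models: ['openai/gpt-5'],       // model fallbacks, tried in order
      order: ['vertex', 'anthropic'], // provider preference; use `only` to restrict instead
    },
  },
  prompt: 'Summarize this incident report.',
});
```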

AI Gateway also includes built-in observability, Bring Your Own Key support, and an OpenAI-compatible API.

Read more

Walter Korman Jerilyn Zheng
https://vercel.com/changelog/support-for-tanstack-start Support for TanStack Start 2025-11-10T13:00:00.000Z

Vercel detects and supports TanStack Start applications, a full-stack framework powered by TanStack Router for React and Solid.

Create a new TanStack Start app or add nitro() to vite.config.ts in your existing application to easily deploy your projects:
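A minimal vite.config.ts sketch; the nitro() import path is an assumption:

```typescript
import { defineConfig } from 'vite';
import { nitro } from 'nitro/vite'; // import path assumed

export default defineConfig({
  plugins: [nitro()],
});
```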

TanStack Start apps on Vercel use Fluid compute with Active CPU pricing by default. This means your TanStack Start app will automatically scale up and down based on traffic, and you only pay for what you use, not for idle function time.

Visit the TanStack Start on Vercel documentation to learn more.

Read more

Austin Merrick Marcos Grappeggia
https://vercel.com/blog/how-nous-research-used-botid-to-block-automated-abuse-at-scale How Nous Research used BotID to block automated abuse at scale 2025-11-07T13:00:00.000Z

AI lab Nous Research made Hermes, their open-source language model, free for one week to increase accessibility. Within days, automated scripts overwhelmed the service with fake accounts performing high-volume inference requests across thousands of accounts to bypass rate limits.

Despite having Cloudflare Turnstile in place, bulk signups continued. The abuse led to wasted inference compute and inflated identity provider bills. After the promotion ended, Nous realized that before reintroducing any kind of free tier, it needed a stronger layer of bot protection.

Read more

Liz Hurder Andrew Qu
https://vercel.com/changelog/post-quantum-crypto Vercel now supports post-quantum cryptography 2025-11-07T13:00:00.000Z

HTTPS connections to the Vercel network are now secured with post-quantum cryptography.

Most web encryption today could be broken by future quantum computers. While this threat isn’t immediate, attackers can capture encrypted traffic today and decrypt it later as quantum technology advances.

Vercel now supports post-quantum encryption during TLS handshakes, protecting applications against these future risks. Modern browsers will automatically use it with no configuration or additional cost required.

Read more about encryption and how we secure your deployments.

Read more

Matthew Stanciu
https://vercel.com/changelog/ai-domain-search-now-available-via-vercel-domains AI domain search now available via Vercel Domains 2025-11-07T13:00:00.000Z

You can now search for domains on Vercel using AI-powered smart search.

Press space in the search bar to enter smart search mode. This mode uses AI to suggest domain names based on your input.

In smart search, you can:

  • Click a domain name to generate similar suggestions.

  • Search across all supported TLDs for that name.

Try it at vercel.com/domains.

Read more

Elliot Dauber Maggie Valentine Ethan Niser Rhys Sullivan Mark Glagola
https://vercel.com/changelog/vercel-sandbox-cli-is-now-available Vercel Sandbox CLI is now available 2025-11-07T13:00:00.000Z

We’ve introduced the Vercel Sandbox CLI, a command-line interface for managing isolated compute environments. Built on the familiar Docker CLI model, developers can now:

  • Create and run sandboxes for Node.js (node22) or Python (python3.13) workloads.

  • Execute commands inside existing sandboxes.

  • Copy files between local and remote environments.

  • List, stop, and remove sandboxes across projects and teams.

  • Run interactively with support for --tty, --interactive, and --publish-port for port forwarding.

  • Automate workflows via authentication tokens, environment variables, and timeouts.

Full reference now available in the Sandbox CLI docs.

Read more

Gal Schlezinger
https://vercel.com/blog/how-ai-gateway-runs-on-fluid-compute How AI Gateway runs on Fluid compute 2025-11-06T13:00:00.000Z

AI Gateway is a Node.js service for connecting to hundreds of AI models through a single interface. It processes billions of tokens per day. The secret behind that scale is Fluid.

Read more

Malte Ubl Walter Korman Dan Fein
https://vercel.com/blog/what-we-learned-building-agents-at-vercel What we learned building agents at Vercel 2025-11-06T13:00:00.000Z

Agents present incredible promise for increased productivity and higher quality outcomes in enterprises. Companies are already using them to streamline customer support, code reviews, and sales operations.

When building custom internal agents, the challenge isn't whether AI can create value, it's identifying the problems it's ready to solve today, at a cost that makes sense for the business.

At Vercel, we are going through the same AI transformation as our customers. We use our own products to build agents that help us move faster and spend more time on meaningful work.

After months of experimentation, we’ve turned our learnings into a repeatable methodology for finding and investing in AI projects that have the highest likelihood of creating significant business impact.

Read more

Malte Ubl Eric Dodds
https://vercel.com/changelog/moonshot-ai-kimi-k2-thinking-and-kimi-k2-thinking-turbo-are-now-available Moonshot AI's Kimi K2 Thinking models are now available on Vercel AI Gateway 2025-11-06T13:00:00.000Z

You can now access Moonshot AI's latest and most powerful thinking models, Kimi K2 Thinking and Kimi K2 Thinking Turbo, using Vercel's AI Gateway with no other provider accounts required.

Kimi K2 Thinking is open source, excels at deep reasoning, handles up to 200–300 sequential tool calls, and achieves top results on reasoning and coding benchmarks. Kimi K2 Thinking Turbo is a high-speed version of Kimi K2 Thinking and is best suited for scenarios requiring both deep reasoning and low latency.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher-than-provider-average uptime.

To use it with the AI SDK, set the model to moonshotai/kimi-k2-thinking or moonshotai/kimi-k2-thinking-turbo:
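A minimal AI SDK sketch (prompt illustrative; assumes AI Gateway is the default provider as in AI SDK 5):

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'moonshotai/kimi-k2-thinking', // or 'moonshotai/kimi-k2-thinking-turbo'
  prompt: 'Outline a step-by-step plan to migrate a monolith to services.',
});
```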

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Walter Korman Jerilyn Zheng
https://vercel.com/changelog/cve-2025-52662-xss-on-nuxt-devtools CVE-2025-52662: XSS on Nuxt DevTools 2025-11-06T13:00:00.000Z

A medium-severity security vulnerability in Nuxt DevTools was responsibly disclosed and has been fixed in version 2.6.4. This issue may have allowed Nuxt auth token extraction via XSS under certain configurations.

Nuxt DevTools users are encouraged to upgrade to the latest version. Read more details below.

Summary

A vulnerability chain in Nuxt DevTools allows remote code execution in development environments through a combination of cross-site scripting (XSS), authentication token exfiltration, and path traversal.

Impact

The vulnerability exists in the DevTools authentication page where error messages are rendered without proper sanitization, enabling DOM-based XSS. An attacker can exploit this to steal authentication tokens and leverage a path traversal vulnerability in the WebSocket message handler to write arbitrary files outside the intended directory, leading to remote code execution when configuration files are overwritten.

Resolution

The XSS was resolved by rendering errors as textContent instead of innerHTML in:

  • Nuxt DevTools 2.6.4

Workarounds

  • Avoid publicly exposing Nuxt DevTools or running Nuxt in production using Dev mode

Credit

Thanks to @yuske for responsible disclosure.

References

Read more

Aaron Brown Anthony Fu
https://vercel.com/changelog/cve-2025-48985-input-validation-bypass-on-ai-sdk CVE-2025-48985: Input Validation Bypass on AI SDK 2025-11-06T13:00:00.000Z

A low-severity security vulnerability in Vercel's AI SDK was responsibly disclosed and has been fixed in versions 5.0.52 and 6.0.0-beta.*. The issue may have allowed users to bypass file-type whitelists when uploading files.

Vercel customers are encouraged to upgrade to the latest version. Read more details below.

Summary

A vulnerability in Vercel's AI SDK prompt conversion pipeline allowed attackers to substitute arbitrary downloaded bytes for different supported URLs within the same prompt. The vulnerability occurs in the convert-to-language-model-prompt.ts file, where filtering downloaded results can cause index misalignment between the downloadedFiles array and the original plannedDownloads array.

Impact

When processing mixed supported and unsupported URLs, the filtering operation removes null entries for supported URLs, causing the remaining downloaded data to be incorrectly associated with different URL keys. This results in bytes from an unsupported URL being mapped to a supported URL slot, allowing attackers to inject arbitrary content while bypassing URL-based trust and content validation mechanisms.

This affects most methods that accept images or files as inputs, namely the generateText() and streamText() functions, unless explicit data validation was implemented outside of the SDK.

Resolution

The issue was resolved by mapping files before filtering out empty ones to retain the correct index in:

  • 5.0.52

  • 6.0.0-beta.*
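The index-alignment fix can be illustrated with a hypothetical reconstruction of the pattern (simplified; not the SDK's actual code):

```typescript
type Download = { data: string } | null;

// Buggy pairing: filtering out null entries before pairing shifts indexes,
// so bytes downloaded for one URL can be attributed to a different URL's slot.
function pairBuggy(urls: string[], downloads: Download[]) {
  const files = downloads.filter((d) => d !== null) as { data: string }[];
  return files.map((file, i) => ({ url: urls[i], data: file.data }));
}

// Fixed pairing: map by original index first, then drop the empty slots,
// so every byte stays tied to the URL it was downloaded for.
function pairFixed(urls: string[], downloads: Download[]) {
  return downloads
    .map((d, i) => (d === null ? null : { url: urls[i], data: d.data }))
    .filter((p): p is { url: string; data: string } => p !== null);
}
```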

Workarounds

  • Implementing custom filetype validation logic outside of the SDK.

Credit

Thanks to @aphantom for responsible disclosure.

References

Read more

Aaron Brown Gregor Martynus
https://vercel.com/changelog/skew-protection-max-age-now-supports-the-full-deployment-lifetime Skew Protection max age now supports the full deployment lifetime 2025-11-06T13:00:00.000Z

Skew Protection helps ensure that requests for a user's session are consistently routed to the same deployment, even when new versions are being rolled out.

You can now configure your project's Skew Protection max age to persist for the entire lifetime of your deployments. This removes the previous limits of 12 hours on Pro and 7 days on Enterprise.

Set the value to any duration less than or equal to your project's Deployment Retention policy.

Learn more about Skew Protection and enable it in your project.

Read more

Steven Salat
https://vercel.com/changelog/pro-edge-config-pricing Edge Config reads and writes now billed per unit 2025-11-06T13:00:00.000Z

Edge Config Reads and Writes are moving from package-based to per-unit pricing on the Pro plan. You’ll continue paying the same effective rates, but at the start of your next billing cycle you’ll now be billed per unit to align costs directly with your usage.

The new rates are:

  • Edge Config Reads: $0.000003 per read (prev. $3 per 1M reads)

  • Edge Config Writes: $0.01 per write (prev. $5 per 500 writes)
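A quick check that the per-unit rates are arithmetically identical to the old packages:

```typescript
// Old package pricing expressed per unit
const oldReadRate = 3 / 1_000_000; // $3 per 1M reads
const oldWriteRate = 5 / 500;      // $5 per 500 writes

// New per-unit pricing
const newReadRate = 0.000003;
const newWriteRate = 0.01;

// Example: 2,500,000 reads in a billing cycle cost the same either way
const monthlyReads = 2_500_000;
const readBill = monthlyReads * newReadRate; // ~$7.50
```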

Per-unit billing scales more smoothly across team sizes and usage patterns. It also helps teams on Pro use Edge Config without immediately consuming a large portion of the included monthly usage credit.

Get started or learn more about Edge Config.

Read more

Blake Mealey Casey O'Keefe Shar Dara
https://vercel.com/changelog/free-botid-deep-analysis Free Vercel BotID Deep Analysis through January 15 2025-11-05T13:00:00.000Z

BotID Deep Analysis, Vercel’s advanced bot protection system, will be free for all Pro and Enterprise customers from November 5 to January 15, 2026.

BotID is an invisible CAPTCHA to stop advanced, human-like bots from attacking high-value endpoints like registrations, AI invocations, and checkouts. Deep Analysis, our most advanced solution, uses thousands of telemetry points for real-time client-side checks.

To participate, visit the Bot Management section in the Firewall dashboard and opt in. BotID usage will not be billed during this period. Regular billing resumes on January 16.

Read more

Andrew Qu Liz Hurder
https://vercel.com/blog/build-and-deploy-data-applications-on-snowflake-with-v0 Build and deploy data applications on Snowflake with v0 2025-11-04T13:00:00.000Z

We're announcing an integration with Snowflake for v0. With this, you can connect v0 to Snowflake, ask questions about your data, and build data-driven Next.js applications that deploy directly to Snowflake.

The application and authentication are managed through Vercel's secure vibe coding architecture, while compute runs on Snowflake's secure and governed platform, ensuring that your data never leaves your Snowflake environment.

Sign up for the waitlist to get notified when it's ready for testing.

Read more

Max Leiter Jason Wiker Nicolás Montone
https://vercel.com/changelog/route-build-traffic-through-static-ips Route build traffic through Static IPs 2025-11-04T13:00:00.000Z

You can now choose whether build traffic, such as calls to external APIs or CMS data sources during the build process, routes through your Static IPs.

To enable this, go to your Project Settings → Connectivity → toggle "Use static IPs for builds."

By default, this setting is disabled. When enabled, both build and function traffic will route through Static IPs and count toward Private Data Transfer usage.

This is available to all teams using Static IPs.

Try it out or learn more here.

Read more

Yanick Bélanger Miroslav Simulcik Jas Garcha
https://vercel.com/changelog/redirects-and-rewrites-now-available-in-observability Redirects and rewrites now available in Observability 2025-11-03T13:00:00.000Z

Improved observability into redirects and external rewrites is now available to all Vercel customers.

External rewrites forward requests to APIs or websites outside your Vercel project, effectively allowing Vercel to function as a reverse proxy or standalone CDN.

Customers on all plans get new views that offer visibility into key rewrite metrics:

  • Total external rewrites

  • External rewrites by hostnames

Customers on Pro and Enterprise plans can upgrade to Observability Plus to get:

  • Connection latency to external host

  • Rewrites by source/destination paths

  • Routes and paths for redirect location

Drains have also been updated to support the following:

View external rewrites or learn more about Observability.

Read more

Andrew Gadzik Mark Knichel Sudais Moorad
https://vercel.com/blog/botid-deep-analysis-catches-a-sophisticated-bot-network-in-real-time BotID Deep Analysis catches a sophisticated bot network in real-time 2025-10-31T13:00:00.000Z

On October 29 at 9:44am, BotID Deep Analysis detected an unusual spike in traffic patterns across one of our customer's projects. Traffic increased by 500% above normal baseline. What made this particularly interesting wasn't just the volume increase. The spike appeared to be coming from legitimate human users.

Our team immediately began investigating and reached out to the customer to discuss what appeared to be an influx of bot traffic cleverly disguised as human activity. But before we could even complete that conversation, something remarkable happened: Deep Analysis, powered by Kasada’s machine learning backend, had already identified the threat and adapted to correctly classify it.

Read more

Andrew Qu Liz Hurder
https://vercel.com/blog/vercel-agent-can-now-run-ai-investigations Vercel Agent can now run AI investigations 2025-10-31T13:00:00.000Z

Vercel is reimagining incident response for the agentic age.

At Ship AI, we launched Vercel Agent Investigations in Public Beta, a new skill of Vercel Agent that automatically detects issues in your application, conducts root cause analysis, and provides actionable remediation plans to resolve incidents faster. Vercel Agent already helps teams with AI-powered code reviews. Now, it's expanding to help with incident response.

By combining our newly-released anomaly alerts with investigations, we're improving how development teams respond to and resolve production issues.

Read more

Malavika Tadeusz Liz Hurder
https://vercel.com/changelog/zero-configuration-support-for-fastify Zero-configuration support for Fastify 2025-10-31T13:00:00.000Z

Vercel now supports Fastify applications with zero configuration. Fastify is a web framework focused on providing the best developer experience with the least overhead, backed by a powerful plugin architecture.

Backends on Vercel use Fluid compute with Active CPU pricing by default. This means your Fastify app will automatically scale up and down based on traffic, and you only pay for what you use.

Deploy Fastify on Vercel or visit the Fastify on Vercel documentation.

Read more

Austin Merrick Jeff See Marcos Grappeggia
https://vercel.com/changelog/microfrontends-now-generally-available Microfrontends now generally available 2025-10-31T13:00:00.000Z

Microfrontends support on Vercel is now generally available, enabling you to split large applications into smaller, independently deployable units that render as one cohesive experience for users.

Each team can use their own framework and release cadence, while Vercel handles edge composition and routing for a seamless user experience.

Since the public beta, we've improved domains routing support, added microfrontends to Observability, and simplified onboarding. Vercel is serving nearly 1 billion microfrontends routing requests per day, and over 250 teams, including Cursor, The Weather Company, and A+E Global Media are already deploying microfrontends.

Pricing

  • Included: 2 microfrontend projects

  • Additional projects: $250 per project per month (available on Pro and Enterprise plans)

  • Routing: $2 per million routing requests

Pricing starts today for new projects and on November 30, 2025 for existing ones. For existing microfrontends users who already had a third project during the beta, that project will remain free.

Get started with microfrontends, clone one of our examples, or learn more in our documentation.

Read more

Mark Knichel Kit Foster
https://vercel.com/changelog/caching-details-now-available-in-runtime-logs Caching details now available in Runtime Logs 2025-10-31T13:00:00.000Z

You can now view more details on how Vercel's CDN globally serves cached content to users as quickly as possible.

In the right-hand panel of the Runtime Logs page, we now list:

  • Cache key: A unique identifier for a specific version of a cached page

  • Cache tags: Tags associated with the cached data

  • Revalidation reason: If a revalidation took place, why the content was revalidated (time-based, tag-based, or deployment-based)

This is available to all Vercel users at no additional cost. Try it out or learn more about Runtime Logs.

Read more

Luc Leray Shraddha Agarwal Steven Salat Luba Kravchenko Timo Lins
https://vercel.com/blog/vercel-achieves-tisax-al2-compliance-to-serve-automotive-partners Vercel achieves TISAX AL2 compliance to serve automotive partners 2025-10-29T13:00:00.000Z

We’re proud to share that Vercel has successfully completed its assessment for the Trusted Information Security Assessment Exchange (TISAX) Assessment Level 2 (AL2). This milestone reinforces our commitment to delivering secure, reliable, and compliant infrastructure to our global customers, particularly those in the automotive and manufacturing sectors that require specific security and data protections. This achievement builds on our broader compliance program, which includes ISO/IEC 27001:2022, SOC 2 Type II, PCI DSS, HIPAA, and more.

Read more

Kacee Taylor
https://vercel.com/changelog/openai-gpt-oss-safeguard-20b-now-available-in-vercel-ai-gateway OpenAI's GPT-OSS-Safeguard-20B now available in Vercel AI Gateway 2025-10-29T13:00:00.000Z

You can now access OpenAI's latest open source model, GPT-OSS-Safeguard-20B, using Vercel's AI Gateway with no other provider accounts required.

GPT-OSS-Safeguard-20B is a fine-tuned version of OpenAI's general-purpose GPT-OSS model, designed for developers to implement custom, policy-driven content moderation.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK, set the model to openai/gpt-oss-safeguard-20b:
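A minimal sketch of such a call (the moderation prompt is illustrative, and an AI Gateway API key is assumed to be configured in the environment):

```typescript
import { generateText } from 'ai';

// Passing a plain gateway model string routes the call through
// Vercel AI Gateway; only this string changes between models.
export async function moderate(message: string) {
  const { text } = await generateText({
    model: 'openai/gpt-oss-safeguard-20b',
    prompt: `Policy: no harassment. Does this message comply?\n\n${message}`,
  });
  return text;
}
```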

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Walter Korman Jerilyn Zheng
https://vercel.com/changelog/vercel-achieves-tisax-al2 Vercel achieves TISAX AL2 2025-10-29T13:00:00.000Z

Vercel has achieved Trusted Information Security Assessment Exchange (TISAX) Assessment Level 2 (AL2), a security standard widely adopted across the automotive and manufacturing industries to evaluate information security and the use of cloud services within the supply chain.

Customers can access Vercel’s TISAX assessment results directly through the ENX portal.

To view the assessment details:

  • Sign in to your account on the ENX portal

  • Search for Vercel or look up the following details:

    • Assessment ID: AMR06H-1

    • Scope ID: SYN3TM

Read our blog post to learn more about TISAX and automotive compliance on Vercel.

Read more

Ivy Warren Kacee Taylor
https://vercel.com/blog/bun-runtime-on-vercel-functions Bun runtime on Vercel Functions 2025-10-28T13:00:00.000Z

We now support Bun as a runtime option for Vercel Functions, available in Public Beta. You can choose between Node.js and Bun for your project, configuring runtime behavior based on workload. We're working closely with the Bun team to bring this capability to production.

This flexibility allows you to choose what works best for your use case. Use Node.js for maximum compatibility or switch to Bun for compute-intensive applications that benefit from faster execution.

Through internal testing, we've found that Bun reduced average latency by 28% in CPU-bound Next.js rendering workloads compared to Node.js.

These gains come from Bun's runtime architecture, built in Zig with optimized I/O and scheduling that reduce overhead in JavaScript execution and data handling.

Read more

Tom Lienard Javi Velasco Jeff See Eric Dodds Kevin Corbett
https://vercel.com/changelog/minimax-m2-now-available-in-vercel-ai-gateway MiniMax M2 now available for free in Vercel AI Gateway 2025-10-28T13:00:00.000Z

You can now access MiniMax's latest open source model, MiniMax M2, using Vercel's AI Gateway with no other provider accounts required. The model is free to use until November 7, 2025. Focused on agentic use, MiniMax M2 is highly efficient to serve, with only 10B active parameters per forward pass.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK, set the model to minimax/minimax-m2:
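For example, streaming a response (a sketch; the task prompt is illustrative and an AI Gateway API key is assumed in the environment):

```typescript
import { streamText } from 'ai';

// The gateway model string is the only provider-specific detail.
export async function planTask(task: string) {
  const result = streamText({
    model: 'minimax/minimax-m2',
    prompt: `Outline the steps to: ${task}`,
  });

  // Print tokens as they arrive
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}
```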

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Matt Lenhard Walter Korman Harpreet Arora
https://vercel.com/changelog/bun-runtime-now-in-public-beta-for-vercel-functions Bun runtime now in Public Beta for Vercel Functions 2025-10-28T13:00:00.000Z

The Bun runtime is now available in Public Beta for Vercel Functions.

You can choose between Node.js and Bun as your project runtime, selecting the best option for your workload.

Benchmarks show Bun reduced average latency by 28% for CPU-bound Next.js rendering compared to Node.js.

To use Bun in Vercel Functions, set the runtime globally in your project's vercel.json:
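The shape of that setting might look like the following — the `bunVersion` field name here is an assumption, so confirm the canonical key in the Bun on Vercel documentation:

```json
{
  "bunVersion": "1.x"
}
```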

We currently support the following frameworks, with more on the way:

  • Next.js

  • Hono

  • Express

  • Nitro

Bun supports TypeScript with zero configuration. Here's an example with Hono:

Or get started with one of our starter templates:

Bun deployments automatically integrate with Vercel's existing logging, observability, and monitoring systems.

See benchmarks in our blog post, or read the docs to learn more.

Read more

Tom Lienard Javi Velasco Jeff See
https://vercel.com/blog/david-totten-joins-vercel-to-lead-global-field-engineering David Totten Joins Vercel to Lead Global Field Engineering 2025-10-27T13:00:00.000Z

The next era of enterprise technology will be defined by shortening the distance between idea and impact. At Vercel we help companies move faster, turning velocity into measurable business outcomes by anticipating customer needs and owning results end-to-end.

To help us lead in this new era, David Totten is joining Vercel as VP of Global Field Engineering. David brings two decades of experience building and scaling technical teams, and he'll lead a unified organization that includes Sales Engineering, Developer Success, Professional Services, and Customer Support Engineering.

Read more

Jeanne Grosser
https://vercel.com/blog/ship-ai-2025-recap Vercel Ship AI 2025 recap 2025-10-27T13:00:00.000Z

Earlier this year we introduced the foundations of the AI Cloud: a platform for building intelligent systems that think, plan, and act. Last week at Ship AI, we showed what comes next.

We launched new SDKs, infrastructure, and open source templates that make building production-ready agents as intuitive as building a standard feature. You can now define, deploy, and operate intelligent workflows on the same platform that powers your apps.

Whether you're building your first agent or delivering it to millions of users, these releases make AI development as accessible and scalable as web development.

Read more

Dan Fein
https://vercel.com/changelog/ai-chat-now-available-on-vercel-docs AI Chat now available on Vercel docs 2025-10-24T13:00:00.000Z

We're excited to announce that AI Chat is now live within the Vercel docs along with a subtle design overhaul. You can now get instant, conversational assistance directly on all docs pages.

  • Ask about anything on the Vercel docs

  • Load specific pages as context to get page-aware answers

  • Copy chat as Markdown for sharing or saving conversations

How to use it

  • Go to vercel.com/docs

  • Click the Ask AI button in the header of any Vercel docs page to start asking questions

  • Use the Ask AI about this page button at the top of each docs page to load that page as context for focused learning, or copy the conversation as Markdown to share with your team

Read more

Rich Haines Manuel Muñoz Solera Chris Kindl Maggie Valentine Nico Albanese
https://vercel.com/changelog/manage-next-js-server-actions-in-the-vercel-firewall Manage Next.js Server Actions in the Vercel Firewall 2025-10-24T13:00:00.000Z

The Vercel Firewall and Observability Plus now have first-class support for Server Actions.

Starting with Next.js 15.5, customers can configure custom rules targeting specific Server Action names. For example, you can rate limit app/auth/actions.ts#getUser actions to 100 requests per minute per IP address.

Server Action Name is available in the Firewall for all plans at no additional cost. Read the docs to learn more.

Read more

Sage Abraham
https://vercel.com/blog/you-can-just-ship-agents You can just ship agents 2025-10-23T13:00:00.000Z

Building agents should feel like shaping an idea rather than fighting a maze of code or infrastructure.

And we've seen this story before. A decade ago, the web moved from hand‑rolled routing and homegrown build scripts to opinionated frameworks and a platform that understood what developers were trying to do. Velocity went up, quality followed, and a generation of products appeared as if overnight.

AI is following the same arc, but the stakes and surface area are larger because what you build is no longer a set of pages. It is a system that intelligently reasons, plans, and acts.

Built on the foundations of Framework-defined Infrastructure, Vercel AI Cloud provides the tooling, infrastructure primitives, developer experience, and platform to bypass the complexity. You focus entirely on what you're building, with confidence in what's powering it under the hood.

Read more

Dan Fein
https://vercel.com/blog/ai-agents-and-services-on-the-vercel-marketplace AI agents and services on the Vercel Marketplace 2025-10-23T13:00:00.000Z

Agents and agentic AI give developers new ways to move faster and build better. They create connected, autonomous systems that continuously improve applications and raise the bar for speed and quality.

But typically, integrating AI services means managing separate dashboards, billing systems, and authentication flows for each tool. A team using three different AI services might waste hours wiring up each integration before writing a single line of application code.

Today, we're introducing the AI agents and services category to the Vercel marketplace. You can now add AI-powered workflows to your projects through native Vercel integrations with unified billing, observability, and installation flows built into the platform.

Read more

Tom Occhino Hedi Zandi
https://vercel.com/blog/introducing-workflow Built-in durability: Introducing Workflow Development Kit 2025-10-23T13:00:00.000Z

Building reliable software shouldn't require mastering distributed systems.

Yet for developers building AI agents or data pipelines, making async functions reliable typically requires message queues, retry logic, and persistence layers. Adding that infrastructure often takes longer than writing the actual business logic.

The Workflow Development Kit (WDK) is an open source TypeScript framework that makes durability a language-level concept. It runs on any framework, platform, and runtime. Functions can pause for minutes or months, survive deployments and crashes, and resume exactly where they stopped.

Read more

Pranay Prakash Dan Fein Nate Rajlich Gal Schlezinger Dan Erickson
https://vercel.com/blog/zero-config-backends-on-vercel-ai-cloud Zero-config backends on Vercel AI Cloud 2025-10-23T13:00:00.000Z

The same ease of use you expect from Vercel, now extended to your backends.

Since we introduced the AI Cloud at Vercel Ship, teams have been building AI applications that go beyond simple prompt-to-response patterns. These apps orchestrate multi-step workflows, spawn sub-agents, and run processes that take hours or days. They need backends that process data, run inference, and respond to real-time events.

You can now deploy the most popular Python and TypeScript backend frameworks with zero configuration. Vercel reads your framework and automatically provisions the infrastructure to run it.

Read more

Marcos Grappeggia Dan Fein
https://vercel.com/blog/introducing-vercel-agent Introducing Vercel Agent: Your new Vercel teammate 2025-10-23T13:00:00.000Z

We're launching Vercel Agent, an AI teammate for your development workflow. Vercel Agent uses AI, deep platform expertise, your application code, and telemetry data from across Vercel to help you ship faster with higher quality.

Starting today, Vercel Agent is available in Public Beta with two core skills: Code Review and Investigations.

Read more

Dan Fein Liz Hurder
https://vercel.com/changelog/introducing-ai-agents-and-services-on-the-vercel-marketplace Introducing AI agents & services on the Vercel Marketplace 2025-10-23T13:00:00.000Z

The Vercel Marketplace now includes a dedicated AI Agents & Services category, making it easier for developers to integrate AI-powered automation, observability, and infrastructure directly into their projects.

This category introduces native support for agentic integrations with unified authentication, provisioning, and billing across providers, all within the Vercel platform.

Agents - Off-the-shelf agents that reason and act on your behalf

  • CodeRabbit: Automated code review and PR feedback

  • Corridor: Real-time security and threat detection

  • Sourcery: Code review and generation assistance

AI services - Infrastructure for building and scaling your own agents

  • Braintrust: Evaluation and monitoring frameworks

  • Kernel: Cloud browser infrastructure for agentic workloads

  • Mixedbread: Multimodal AI search and retrieval across documents, code, media, and more

  • Kubiks: Multi-step workflow orchestration and remediation

  • Chatbase: Analytics and tuning for conversational agents

  • Autonoma: AI-driven testing for your web and mobile apps to build, run, and analyze without writing code

  • Browser Use: Natural language browser control for web automation

  • Descope: No-code identity and authentication workflows for users, partners, and AI agents

Explore the agentic marketplace, read our blog, and check out the documentation.

Read more

Dima Voytenko Josh Wolk Hedi Zandi Tony Pan Justin Kropp Michael Arguin Michael Toth Ismael Rumzan
https://vercel.com/changelog/open-source-workflow-dev-kit-is-now-in-public-beta Open source Workflow Development Kit is now in public beta 2025-10-23T13:00:00.000Z

Workflow Development Kit, a framework for building durable, long-running processes, is now in public beta.

Workflow Development Kit brings durability, reliability, and observability to async JavaScript so you can build apps and AI agents that suspend, resume, and maintain state with ease.

Turning functions into durable workflows is made simple by the "use workflow" directive:
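For example (a sketch: the "use workflow" and "use step" directives come from this announcement, while the import path and helper names are illustrative assumptions):

```typescript
// Sketch only: the 'workflow' import path and email helpers are assumptions.
import { sleep } from 'workflow';

export async function onboardUser(userId: string) {
  'use workflow';
  await sendWelcomeEmail(userId);
  await sleep('3 days'); // pauses without holding compute, then resumes in place
  await sendFollowUp(userId);
}

async function sendWelcomeEmail(userId: string) {
  'use step'; // retries are automatic if this step throws
  // ...call your email provider here
}

async function sendFollowUp(userId: string) {
  'use step';
  // ...send the follow-up message
}
```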

Key highlights include:

  • Reliability by simply adding "use workflow" to make async functions durable. No manual wiring of queues, no schedulers, no YAML.

  • Steps. Mark step functions with "use step". Retries are automatic.

  • Durability. Call sleep to pause without holding compute, then resume in place.

  • Built-in observability. Traces, logs, and metrics for every run. Pause, replay, and time travel while debugging.

  • No lock-in. Develop locally and deploy to Vercel or any other cloud.

Learn more about Workflow or read the documentation.

Read more

Pranay Prakash Nate Rajlich Adrian Lam Dillon Mulroy Gal Schlezinger JJ Kasper Peter Wielander
https://vercel.com/changelog/vercel-python-sdk-in-beta Vercel Python SDK is now available in beta 2025-10-23T13:00:00.000Z

The Vercel Python SDK is now available in beta, bringing first-class Python support for many Vercel features such as Vercel Sandbox, Blob, and the Runtime Cache API.

To get started, install the vercel package with pip install vercel.

The Vercel Python SDK lets you interact directly with Vercel primitives from Python code:

Run untrusted code in isolated, ephemeral environments using Vercel Sandbox:

Interact with Vercel’s Blob storage API:

And store and retrieve data across Functions, Routing Middleware, and Builds within the same region using the Runtime Cache API:

Get started with pip install vercel.

Read more

Ricardo Gonzalez Brooke Mosby Marcos Grappeggia Anthony Shew
https://vercel.com/changelog/vercel-agent-investigations-now-in-public-beta Vercel Agent Investigations now in Public Beta 2025-10-23T13:00:00.000Z

Vercel Agent can now run AI investigations on anomaly alerts to help teams diagnose and resolve incidents faster. AI investigations streamline incident response, improve production stability, and reduce alert fatigue to accelerate your team's shipping velocity.

When an anomaly alert detects suspicious activity, such as unexpected spikes in usage or errors, Vercel Agent can investigate the issue, identify the likely root cause, analyze the impact, and suggest next steps for remediation.

For greater control, you can also manually trigger an AI investigation directly from the anomaly alert details page.

Vercel Agent investigations are now in public beta for Pro and Enterprise teams with Observability Plus. Pricing is usage-based, and teams can receive $100 in Vercel Agent credits to get started.

Try it out or learn more about Vercel Agent investigations.

Read more

Ethan Shea Julia Shi Timo Lins Fabio Benedetti Damien Simonin Feugas Liz Hurder Amy Burns Malavika Tadeusz
https://vercel.com/changelog/turbo-build-machines Faster builds with Turbo build machines 2025-10-22T13:00:00.000Z

Turbo build machines are now available for all paid plans, offering our fastest build performance yet with 30 vCPUs and 60GB of memory.

Turbo machines are ideal for Turbopack builds, and large monorepos that run tasks in parallel, accelerating static generation and dependency resolution.

Enable Turbo build machines per project, with usage-based pricing.

Learn more in the documentation.

Read more

Marcos Grappeggia Andrew Healey Marc Codina Segura Luke Phillips-Sheard Mehul Kar Ali Smesseim
https://vercel.com/blog/update-regarding-vercel-service-disruption-on-october-20-2025 Update regarding Vercel service disruption on October 20, 2025 2025-10-21T13:00:00.000Z

At Vercel, our philosophy is to take ownership for, not blame, our vendors. Customers use our services to gain velocity, reliability, and ship wonderful products. Whether we picked A or B as one of the components of our “circuit design” is entirely our responsibility.

Vercel is fully accountable for this incident, even if it's now public that it was triggered by the unexpected outage of AWS us-east-1 (Vercel's iad1 region). Vercel uses AWS infrastructure primitives, is part of the AWS marketplace, offers secure connectivity to AWS services, and shares a long history with AWS of pioneering serverless computing.

To our customers, Vercel is unequivocally responsible for this outage.

Our goal is to simplify the cloud and offer its best version. Through framework-defined infrastructure, we help developers focus on the application layer by deploying global infrastructure resources that are highly optimized. We operate our Compute, CDN, and Firewall services across 19 AWS regions, terminating and securing traffic in 95 cities and 130+ global points of presence.

Yesterday, we fell short of this promise. While a significant amount of traffic was still served, and we shielded customers from the exposure to a single global point of failure, our ambition is to enable customers to never drop a single request, even in the event of an outage.

Read more

Guillermo Rauch Malte Ubl Matthew Binshtok
https://vercel.com/changelog/dynamically-extend-timeout-of-an-active-sandbox Dynamically extend timeout of an active Sandbox 2025-10-21T13:00:00.000Z

You can now extend the duration of a running Vercel Sandbox using the new extendTimeout method.

This lets long-running sandboxes stay active beyond their initial timeout, making it easier to support workflows like chained agentic tasks or multi-step code generation that take longer than expected.

You can extend the timeout multiple times until the maximum runtime for your plan is reached.

Pro and Enterprise plans support up to 5 hours, with the Hobby plan supporting up to 45 minutes.
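A sketch of the flow with the @vercel/sandbox SDK — the millisecond argument shapes shown here are assumptions, so verify the exact signatures in the Sandbox docs:

```typescript
import { Sandbox } from '@vercel/sandbox';

// Start a sandbox with an initial 10-minute window (assumed ms argument)
const sandbox = await Sandbox.create({ timeout: 10 * 60 * 1000 });

// ...long-running agentic work happens here...

// Extend the window before it expires; this can be repeated until the
// plan's maximum runtime (5 hours on Pro/Enterprise, 45 minutes on Hobby)
await sandbox.extendTimeout(30 * 60 * 1000);
```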

Get started with Sandbox now and learn more in the docs.

Read more

Laurens Duijvesteijn Guðmundur Bjarni Ólafsson Andy Waller Mariano Cocirio
https://vercel.com/changelog/preview-links-between-microfrontends-projects-now-serve-all-paths Preview links between microfrontends projects now serve all paths 2025-10-21T13:00:00.000Z

Teams using microfrontends can now visit all routes from any domain in the microfrontends group, enabling teams to test their full site experience without broken links or missing pages.

Previously, the microfrontend group's root domain would be the only one to serve the paths hosted by child microfrontends. Now, preview links between all microfrontends projects automatically serve all routes in the group.

With this new feature:

  • Preview links on child microfrontends now route paths to other microfrontends in the group, eliminating 404s.

  • Deployments built from the same commit or branch automatically link to each other, making it easier to test changes in monorepos.

  • Fallback routing ensures that requests to microfrontends not built on the same branch are still resolved.

This feature is enabled by default for all new microfrontends, and will be rolling out slowly for existing teams.

Learn more or get started with microfrontends today.

Read more

Kit Foster Mark Knichel
https://vercel.com/changelog/zero-configuration-support-for-nestjs Zero-configuration support for NestJS 2025-10-17T13:00:00.000Z

Vercel now supports NestJS applications, a popular framework for building efficient, scalable Node.js server-side applications, with zero-configuration.

Backends on Vercel use Fluid compute with Active CPU pricing by default. This means your NestJS app will automatically scale up and down based on traffic, and you only pay for what you use.

Deploy NestJS on Vercel or visit the NestJS on Vercel documentation.

Read more

Austin Merrick Jeff See Marcos Grappeggia
https://vercel.com/changelog/braintrust-joins-the-vercel-marketplace Braintrust joins the Vercel Marketplace 2025-10-16T13:00:00.000Z

Braintrust is now available on the Vercel Marketplace, bringing AI evaluation and observability directly into the Vercel workflow.

With this new integration, developers can automatically stream traces and evaluation data from Vercel to Braintrust with just a few clicks, gaining full visibility into model quality and user experience in real time.

With Braintrust on Vercel Marketplace, you can:

  • Ship agents and AI features with built-in evaluation and observability

  • Run evals and monitor model quality in production

  • Benchmark and compare performance across LLMs

Explore the template to deploy the example today, with easy setup and unified billing.

Read more

Hedi Zandi Tony Pan
https://vercel.com/blog/agents-at-work-a-partnership-with-salesforce-and-slack Agents at work, a partnership with Salesforce and Slack 2025-10-15T13:00:00.000Z

Every generation of software moves interfaces closer to where people think and work. Terminals gave way to GUIs. GUIs gave way to browsers. And now, the interface is language itself. Conversation has become the most natural way to build, explore, and decide.

At the center of this shift is a new pattern: the AI agent. Today, software doesn’t have to wait for clicks or configuration; it understands user intent, reasons about it, and takes action.

The question for enterprises isn’t if they’ll adopt agents, but where those agents will live. Our answer: where work already happens.

That’s why Vercel and Salesforce are partnering to help teams build, ship, and scale AI agents across the Salesforce ecosystem, starting with Slack. Together, we’re bringing the intelligence and flexibility of the Vercel AI Cloud to the places teams collaborate every day.

Read more

Zack Ciesinski Matt Lewis Dan Fein
https://vercel.com/blog/running-next-js-inside-chatgpt-a-deep-dive-into-native-app-integration Running Next.js inside ChatGPT: A deep dive into native app integration 2025-10-15T13:00:00.000Z

When OpenAI announced the Apps SDK with Model Context Protocol (MCP) support, it opened the door to embedding web applications directly into ChatGPT. But there's a significant difference between serving static HTML in an iframe and running a full Next.js application with client-side navigation, React Server Components, and dynamic routing.

This is the story of how we bridged that gap. We created a Next.js app that runs natively inside ChatGPT's triple-iframe architecture, complete with navigation and all the modern features you'd expect from a Next.js application.

Read more

Andrew Qu
https://vercel.com/blog/talha-tariq-joins-vercel-as-cto-security Talha Tariq joins Vercel as CTO of Security 2025-10-15T13:00:00.000Z

As AI reshapes how software is built and deployed, the surface area for attacks is growing rapidly. Developers are shipping faster than ever, and we’re seeing new code paths, new threat models, and new vulnerabilities.

That’s why I’m excited to share that Talha Tariq is joining Vercel as our CTO of Security.

Talha brings deep expertise in security at scale, having served as CISO & CIO at HashiCorp for seven years before becoming CTO (Security) at IBM following its acquisition. There, he oversaw security across all IBM divisions including software, AI, and post-quantum cryptography.

Read more

Guillermo Rauch
https://vercel.com/blog/just-another-black-friday Just another (Black) Friday 2025-10-15T13:00:00.000Z

For teams on Vercel, Black Friday is just another Friday. The scale changes, but your storefronts and apps stay fast, reliable, and ready for spikes in traffic.

Many of the optimizations required for peak traffic are already built into the platform. Rendering happens at the edge, caching works automatically, and protection layers are on by default.

What’s left for teams is refinement: confirming observability is set up, tightening security rules, and reviewing the dashboards that matter most.

Last year, Vercel created a live Black Friday–Cyber Monday dashboard that showcased our scale in real time. From Friday through the following Thursday, Vercel served 86,702,974,965 requests across its network, reaching a peak of 1,937,097 requests per second.

Helly Hansen, a major technical apparel brand, entered the weekend with this confidence. Before the event, they moved from client-heavy rendering to Vercel’s CDN and saw:

Read more

Sharon Toh Dan Fein
https://vercel.com/changelog/introducing-trace-drains-on-the-vercel-marketplace Introducing Trace Drains on the Vercel Marketplace 2025-10-15T13:00:00.000Z

You can now use Vercel Drains to send traces and logs from your projects to your preferred Marketplace observability providers with native integrations, including Braintrust, Dash0, Statsig, and Kubiks, with more providers coming soon.

This integration allows developers to stream traces and evaluation data from Vercel directly into these providers for observability, debugging, and performance monitoring.

The Trace Drain API extends Vercel’s observability surface to the Marketplace ecosystem, allowing providers to:

  • Deliver rich visibility into performance and debugging data

  • Integrate natively with logging and analytics tools through Vercel

  • Build tighter feedback loops between deployments and infrastructure insights

  • Offer customers a fully connected experience without manual setup

This update gives teams more flexibility to use their preferred observability tools while maintaining a single, unified developer experience inside Vercel.

Try it out or learn more about this update, available to Pro and Enterprise customers.

Read more

Tony Pan Dima Voytenko Hedi Zandi Justin Kropp
https://vercel.com/changelog/claude-haiku-4-5-now-available-in-vercel-ai-gateway Claude Haiku 4.5 now available in Vercel AI Gateway 2025-10-15T13:00:00.000Z

You can now access Anthropic's latest model, Claude Haiku 4.5, using Vercel's AI Gateway with no other provider accounts required. Haiku 4.5 matches Sonnet 4's performance on coding, computer use, and agent tasks at substantially lower cost and faster speeds.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, set the model to anthropic/claude-haiku-4.5:
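A minimal sketch with AI SDK v5 (the prompt is illustrative; an AI Gateway API key is assumed to be configured):

```typescript
import { generateText } from 'ai';

// The gateway model string is the only change needed to switch models.
export async function summarize(entry: string) {
  const { text } = await generateText({
    model: 'anthropic/claude-haiku-4.5',
    prompt: `Summarize in one sentence:\n\n${entry}`,
  });
  return text;
}
```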

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance and reliability for Claude Haiku 4.5, AI Gateway leverages multiple model providers under the hood, including Anthropic, Bedrock, and Vertex AI.

Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

Read more

Rohan Taneja Walter Korman Harpreet Arora
https://vercel.com/changelog/build-commits-to-the-same-branch-without-waiting Commits to the same branch now build with no queues 2025-10-14T13:00:00.000Z

Vercel now builds multiple commits to the same branch at the same time when On-Demand Concurrent Builds is enabled. Previously, a new commit would wait for the previous build on that branch to finish before starting. This update eliminates that queue, allowing commits to start building as soon as they arrive.

Visit the On-demand concurrent builds documentation to learn more.

Read more

Luke Phillips-Sheard Marcos Grappeggia Joe Haddad Janos Szathmary
https://vercel.com/changelog/anomaly-alerts-now-in-public-beta Anomaly alerts now in public beta 2025-10-14T13:00:00.000Z

Teams using Observability Plus can now receive alerts when anomalies are detected in their applications to help quickly identify, investigate and resolve unexpected behavior.

Alerts help monitor your app in real-time by surfacing unexpected changes in usage or error patterns:

  • Usage anomalies: unusual patterns in your application metrics, such as edge requests or function duration.

  • Error anomalies: abnormal error patterns, such as sudden spikes in 5XX responses on a specific route.

View alerts directly in your dashboard, or subscribe via email, Slack, or webhooks to get notified wherever your team works.

Alerts are available in public beta for Pro and Enterprise customers with Observability Plus.

Try it out or learn more about Alerts.

Read more

Fabio Benedetti Timo Lins Julia Shi Tobias Lins Malavika Tadeusz
https://vercel.com/changelog/zero-configuration-flask-backends Zero-configuration Flask backends 2025-10-10T13:00:00.000Z

Flask, one of the most popular Python web application frameworks, can now be deployed instantly on Vercel with no configuration changes needed.

Vercel's framework-defined infrastructure now recognizes and deeply understands Flask applications. This update removes the need for redirects in vercel.json or using the /api folder.

Backends on Vercel use Fluid compute with Active CPU pricing by default. This means your Flask app will automatically scale up and down based on traffic, and you only pay for the time your code is actively using CPU.

Deploy Flask on Vercel or visit the Flask on Vercel documentation.

Read more

Ricardo Gonzalez Marcos Grappeggia
https://vercel.com/changelog/expanded-role-based-access-control-rbac-for-enterprise-teams Expanded Role-Based Access Control (RBAC) for Enterprise teams 2025-10-10T13:00:00.000Z

Vercel’s Role-Based Access Control (RBAC) system now supports multiple roles per user and introduces extended permissions for finer-grained access control across Enterprise teams.

What’s new:

  • Multi-role support: Assign multiple roles to a single user within Enterprise teams.

  • Security role: A new team role dedicated to managing security and compliance settings.

  • Extended permissions: Add granular capabilities that layer on top of team and project roles for precise control.

  • Access groups integration: Access Groups now support team roles and extended permissions in Directory Sync mappings.

The new extended permissions include:

  • Create Project: Create new projects.

  • Full Production Deployment: Deploy, rollback, and promote to production.

  • Usage Viewer: View usage, prices, and invoices (read-only).

  • Integration Manager: Install and manage integrations and storage.

  • Environment Manager: Create and manage project environments.

  • Environment Variable Manager: Create and manage environment variables.

Extended permissions apply when paired with a compatible team role. Learn more in the Role-Based Access Control documentation.

Read more

Bel Curcio Javier Bórquez Enric Pallerols Christopher Skillicorn
https://vercel.com/blog/fluid-compute-benchmark-results Server rendering benchmarks: Fluid Compute and Cloudflare Workers 2025-10-09T13:00:00.000Z

Independent developer Theo Browne recently published comprehensive benchmarks comparing server-side rendering performance between Fluid compute and Cloudflare Workers. The tests measured 100 iterations across Next.js, React, SvelteKit, and other frameworks.

The results showed that for compute-bound tasks, Fluid compute performed 1.2 to 5 times faster than Cloudflare Workers, with more consistent response times.

Read more

Kevin Corbett Dan Fein Eric Dodds
https://vercel.com/changelog/chatgpt-apps-support-on-vercel ChatGPT apps support on Vercel 2025-10-09T13:00:00.000Z

You can now build and deploy ChatGPT apps directly on Vercel, with full support for modern web frameworks.

ChatGPT apps let you integrate custom UI components and functionality within ChatGPT, deployed and served by Vercel.

Frameworks like Next.js can now power these experiences using the Model Context Protocol (MCP), running natively inside the OpenAI sandbox rather than in a nested iframe. Check out our Next.js template.

Build your ChatGPT apps with:

  • Next.js features like server-side rendering (SSR) and React Server Components

  • Vercel platform capabilities such as preview deployments, instant rollback, and a seamless dev-to-production pipeline

Get started by building and deploying ChatGPT apps on Vercel using Next.js, Apps SDK, and mcp-handler.

Read more

Andrew Qu Allen Zhou Malte Ubl
https://vercel.com/changelog/block-vercel-deployment-promotions-with-github-actions Block Vercel deployment promotions with GitHub Actions 2025-10-09T13:00:00.000Z

You can now block a deployment from being promoted to production until selected GitHub Actions complete successfully.

On Vercel, every deployment starts in a preview environment. This feature ensures that only verified builds that pass tests or other automated checks are released to production.

Deployment Checks are available for all projects connected to GitHub repositories.

Configure them in your project settings or learn more in the docs.

Read more

Tom Knickman Cody Wong Anthony Shew Jeff See Austin Merrick Mitul Shah Marcos Grappeggia
https://vercel.com/changelog/new-domains-registrar-api-for-domain-search-pricing-purchase-and-management New Domains Registrar API for domain search, pricing, purchase, and management 2025-10-08T13:00:00.000Z

You can now programmatically search, price, buy, renew, and transfer domains with Vercel’s new Domains Registrar API, complementing the new in-product Domains experience.

The API provides endpoints for:

  • Catalog & pricing: list supported TLDs; get TLD and per-domain pricing.

  • Availability: check single or bulk availability.

  • Orders & purchases: buy domains (including bulk) and fetch order status by ID.

  • Transfers: retrieve auth codes, transfer in, and track transfer status.

  • Management: renew, toggle auto-renewal, update nameservers, and fetch TLD-specific contact schemas.

Explore the API docs.

Read more

Elliot Dauber Dillon Mulroy Ethan Niser Rhys Sullivan Mark Glagola Maggie Valentine
https://vercel.com/changelog/anomaly-alerts-now-available-via-email Anomaly alerts now available via email 2025-10-07T13:00:00.000Z

Enterprise customers with Observability Plus can now receive anomaly alerts by email or in-app notifications, in addition to existing delivery options: webhooks, Slack, and the dedicated alerts dashboard.

Currently, two types of anomaly alerts are available:

  • Usage anomalies: Detects unusual spikes in key billable metrics

  • Error anomalies: Detects sudden increases in 5XX responses on a specific route or path

Anomaly alerts are available in limited beta for Enterprise customers with Observability Plus.

Try it out or learn more about Alerts.

Read more

Fabio Benedetti Timo Lins Tobias Lins Malavika Tadeusz
https://vercel.com/changelog/improved-cli-experience-when-linking-and-creating-environment-variables Improved CLI experience when linking and creating environment variables 2025-10-05T13:00:00.000Z

Here are some of the key improvements introduced in version 50.0.0:

  • After successfully linking a project, the CLI now prompts you to pull your Project’s Environment Variables to keep your local setup aligned with your deployed configuration.

  • Input for new Environment Variables is now masked during interactive entry

  • When connecting to an existing project with link, the CLI now shows an interactive selector if you have fewer than 100 Projects.

  • Fixed an issue where vc link --repo would incorrectly prefix project names.

  • Commands that support the ls argument now have standardized behavior. Extra or unexpected arguments will consistently produce a clear error and exit early, ensuring predictable and reliable results across all ls commands. This change may require updates to scripts that depended on the previous behavior.

Read more

Cody Wong Austin Merrick Tom Knickman Dimitri Mitropoulos
https://vercel.com/changelog/python-package-manager-uv-is-now-available-for-builds-with-zero Python package manager uv is now available for builds with zero configuration 2025-10-03T13:00:00.000Z

Vercel now uses uv, a fast Python package manager written in Rust, as the default package manager during the installation step for all Python builds.

This change makes builds 30-65% faster and adds support for more dependency formats. In addition to requirements.txt or Pipfile, projects can now declare dependencies with a uv.lock or pyproject.toml file.
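For example, a minimal pyproject.toml that uv can install from (project name, version, and dependencies are illustrative):

```toml
[project]
name = "my-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
  "flask>=3.0",
]
```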

Learn more about the Python runtime on Vercel.

Read more

Ricardo Gonzalez Luke Phillips-Sheard
https://vercel.com/changelog/invalidate-the-cdn-cache-by-tag Invalidate the CDN cache by tag 2025-10-03T13:00:00.000Z

You can now invalidate CDN cache contents by tag.

This marks all cached content associated with the tag as stale. The next request serves stale content instantly while revalidation happens in the background, with no latency impact for users.

There are several ways to invalidate content:

In addition to invalidating by tag when the origin content changes, you can also delete by tag when the origin content is gone. However, deleting from the cache can increase latency while new content is generated, or cause downtime if your origin is unresponsive, so use it with caution.

Available on all plans. Learn more about cache invalidation.

Read more

Luba Kravchenko Steven Salat
https://vercel.com/changelog/static-ips-are-now-available-for-more-secure-connectivity Static IPs are now available for more secure connectivity 2025-10-02T13:00:00.000Z

Pro and Enterprise teams can now use Static IPs to securely connect to external services like databases that require IP allowlisting. Traffic from builds and functions routes through consistent, shared static IPs.

To enable Static IPs, you can access Connectivity > Static IPs from your project or team settings.

Static IPs are offered in addition to Secure Compute, which remains available for teams that need a fully dedicated VPC model. Note, Secure Compute has also moved within the Connectivity settings of your projects and teams.

This is part of our move to make more enterprise features available self-serve.

Read the docs or enable Static IPs here.

Read more

Miroslav Simulcik Yanick Bélanger Jas Garcha Manuel Muñoz Solera Jeff Pope Blake Mealey
https://vercel.com/changelog/faster-time-to-start-for-v0-builds Faster time-to-start for v0 builds 2025-10-02T13:00:00.000Z

Publishing v0 apps is now 1.1s faster on average.

We reduced the time it takes to send source files during deployment creation, improving the overall deployment pipeline and shortening feedback loops for developers.

Deploy today on v0.app.

Read more

Marc Codina Segura Dimitri Mitropoulos Gaspar Garcia
https://vercel.com/changelog/deployment-level-configuration-for-fluid-compute Deployment-level configuration for Fluid compute 2025-10-02T13:00:00.000Z

You can now configure Fluid compute on a per-deployment basis.

Setting "fluid": true in your vercel.json activates Fluid compute for that specific deployment. You can also enable or disable Fluid regardless of project-level settings.
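A minimal vercel.json with the flag described above:

```json
{
  "fluid": true
}
```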

This allows teams to selectively test and adopt Fluid compute without changing the global project settings.

Read more in our documentation.

Read more

Florentin Eckl Tom Lienard
https://vercel.com/blog/series-f Towards the AI Cloud: Our Series F 2025-09-30T13:00:00.000Z

Today, Vercel announced an important milestone: a Series F funding round valuing our company at $9.3 billion. The $300M investment is co-led by longtime partners at Accel and new investors at GIC, alongside other incredible supporters. We're also launching a ~$300M tender offer for certain early investors, employees, and former employees.

To all the customers, investors, and Vercelians who have been on this journey with us: thank you.

Read more

Guillermo Rauch
https://vercel.com/changelog/stripe-is-now-available-in-beta-on-the-vercel-marketplace Stripe is now available in beta on the Vercel Marketplace 2025-09-30T13:00:00.000Z

Stripe is now available in beta on the Vercel Marketplace as a new payment provider.

You can now provision a fully functional Stripe claimable sandbox directly from Vercel with no setup required. When ready, link it to a Stripe account and, soon, promote it to production.

This makes it easy for teams to move from prototype to production for use cases like:

  • Ecommerce storefronts: Test complete checkout flows before launch.

  • SaaS billing: Validate subscriptions, usage-based pricing, and invoicing.

  • Demos and templates: Share preconfigured environments for testing or client demos.

  • Developer onboarding: Give teams instant access to ready-to-use Stripe sandboxes.

Get started today with this example to build your first simple online store using Vercel and Stripe.

Read more

Dima Voytenko Hedi Zandi
https://vercel.com/changelog/view-and-query-bot-verification-data-in-vercel-observability View & query bot verification data in Vercel Observability 2025-09-30T13:00:00.000Z

Vercel inspects every request to identify bot traffic. For requests claiming to come from a verified source, Vercel cross-checks against its directory of verified bots and validates them against strict verification criteria.

We've added three new dimensions to the query builder when analyzing Edge Requests to help you understand bot activity on your projects:

  • Bot name: Identify specific bots

  • Bot category: Group bots by type

  • Bot verified: Distinguish between verified, spoofed, and unverifiable bots

Additionally, the Edge Requests dashboard in Observability now displays verification badges next to bot names.

All users can view bot verification badges, while Observability Plus subscribers can query this data at no extra cost.

Try it out or learn more about Observability and Observability Plus.

Read more

Casey Gowrie Sage Abraham Julia Shi Timo Lins Malavika Tadeusz
https://vercel.com/blog/collaborating-with-anthropic-on-claude-sonnet-4-5 Collaborating with Anthropic on Claude Sonnet 4.5 to power intelligent coding agents 2025-09-29T13:00:00.000Z

Claude Sonnet 4.5 is now available on Vercel AI Gateway with full support in AI SDK. We’ve been testing the model in v0, across our Next.js build pipelines, and inside our new Coding Agent Platform template. The model shows improvements in design sensibility and code quality, with measurable gains when building and linting Next.js applications.

Claude Sonnet 4.5 builds on Anthropic's strengths in reasoning and coding. When paired with the Vercel AI Cloud, it powers a new class of developer workflows where AI can plan, execute, and ship changes safely inside your repositories.

Read more

Dan Fein Chris Tate Harpreet Arora
https://vercel.com/changelog/node-js-vercel-functions-now-support-per-path-request-cancellation Node.js Vercel Functions now support per-path request cancellation 2025-09-26T13:00:00.000Z

Vercel Functions using Node.js can now detect when a request is cancelled and stop execution before completion. Cancellation is configurable on a per-path basis and covers client actions like navigating away, closing a tab, or hitting stop on an AI chat, any of which can terminate compute processing early.

This reduces unnecessary compute, token generation, and sending data the user would never see.

To enable cancellation, add "supportsCancellation": true to your vercel.json configuration. You can apply it to specific paths or all functions:
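A sketch of that configuration, assuming the flag lives under the per-path `functions` block of vercel.json (the glob pattern is illustrative; narrow it to target specific paths):

```json
{
  "functions": {
    "api/**/*.ts": {
      "supportsCancellation": true
    }
  }
}
```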

Once enabled, you can listen for cancellation using Request.signal.aborted or the abort event:
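For example, a route handler sketch using only standard Web APIs (`Request` and `AbortSignal` are globals in Node 18+; the chunked loop is illustrative):

```typescript
// Stop doing work as soon as the client disconnects.
export async function GET(request: Request): Promise<Response> {
  const { signal } = request;

  // Option 1: react to cancellation via the abort event.
  signal.addEventListener('abort', () => {
    // e.g. close database cursors, stop timers
  });

  const chunks: string[] = [];
  for (let i = 0; i < 5; i++) {
    // Option 2: poll signal.aborted between units of work.
    if (signal.aborted) break;
    chunks.push(`chunk ${i}`);
  }
  return new Response(chunks.join('\n'));
}
```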

If you’re using the AI SDK, forward the abortSignal to your stream:

Learn more about cancelling Function requests.

Read more

Craig Andrews
https://vercel.com/blog/cdn-request-collapsing Preventing the stampede: Request collapsing in the Vercel CDN 2025-09-25T13:00:00.000Z

When you deploy a Next.js app with Incremental Static Regeneration (ISR), pages get regenerated on-demand after their cache expires. ISR lets you get the performance benefits of static generation while keeping your content fresh.

But there's a problem. When many users request the same ISR route at once and the cache is expired, each request can trigger its own function invocation. This is called a "cache stampede." It wastes compute, overloads your backend, and can cause downtime.

The Vercel CDN now prevents this with request collapsing. When multiple requests hit the same uncached path, only one request per region invokes a function. The rest wait and get the cached response.

Vercel automatically infers cacheability for each request through framework-defined infrastructure, configuring our globally distributed router. No manual configuration needed.

Read more

Sachin Raja
https://vercel.com/changelog/vercel-domains-at-cost-pricing-and-the-fastest-on-the-web Vercel Domains overhauled with instant search and at-cost pricing 2025-09-25T13:00:00.000Z

We’ve rebuilt Vercel Domains end to end, making it faster, simpler, and more affordable to find and buy the right domain for your project.

  • Search without login: Look up domains instantly, even when you’re not signed in.

  • At-cost pricing: Domains are offered at registrar cost, with savings up to 50% on popular TLDs.

  • Transparent results: Availability and pricing surface instantly, with no upsells or unnecessary add-ons.

  • Fastest search on the web: Real-time, streaming results show availability and premium status instantly.

  • Expanded TLD coverage: Support for more registries so every project can find the right home.

  • Bulk checkout: Purchase multiple domains in a single streamlined transaction.

This update makes Vercel Domains the fastest way to claim a name and get to production. As part of the overhaul, we’ve partnered with name.com as our upstream registrar, delivering better pricing and reliability.

An upcoming blog will share how we built this speed using structured concurrency, layered caching, Bloom filters, and partitioned batching.

Try it now at vercel.com/domains.

Read more

Dillon Mulroy Rhys Sullivan Ethan Niser Elliot Dauber Mark Glagola Maggie Valentine
https://vercel.com/changelog/zero-config-fastapi-backends Zero-configuration FastAPI backends 2025-09-25T13:00:00.000Z

FastAPI, a modern, high-performance web framework for building APIs with Python, is now supported with zero configuration.

Vercel's framework-defined infrastructure now recognizes and deeply understands FastAPI applications. This update removes the need for redirects in vercel.json or using the /api folder.

Backends on Vercel use Fluid compute with Active CPU pricing by default. This means your FastAPI app will automatically scale up and down based on traffic, and you only pay for what you use.

Deploy FastAPI on Vercel or visit the FastAPI on Vercel documentation.

Read more

Ricardo Gonzalez
https://vercel.com/changelog/request-collapsing-for-isr-cache-misses Request collapsing for ISR cache misses 2025-09-25T13:00:00.000Z

The Vercel CDN now prevents cache stampedes by collapsing simultaneous requests for an expired Incremental Static Regeneration (ISR) page into a single function invocation per region. Without collapsing, simultaneous requests each trigger regeneration, wasting compute and overloading backends. With collapsing, one request regenerates the page while others wait and return the cached result.

This improves reliability, reduces backend load, and saves significant compute at scale.

The feature is applied automatically for cacheable routes. Cacheability is inferred from framework metadata, so no configuration is required.

Implementation details are available in the Preventing the stampede: Request collapsing in the Vercel CDN blog post.

Read more

Sachin Raja
https://vercel.com/changelog/query-data-on-external-api-requests-in-vercel-observability Query data on external API requests in Vercel Observability 2025-09-24T13:00:00.000Z

Observability Plus now supports querying and visualizing external API requests.

Observability Plus's query builder allows customers to explore their Vercel data and visualize traffic, performance, and other key metrics. You can now author custom queries on request counts or time to first byte (TTFB) for external API calls, such as fetch requests to an AI provider.

TTFB queries include breakdowns by average, min, max, p75, p90, p95, and p99. You can also filter or group results by request hostname to focus on specific APIs.

The query builder is available to Pro and Enterprise teams using Observability Plus.

Learn more about Observability and Observability Plus.

Read more

Tobias Lins
https://vercel.com/changelog/claimed-deployments-now-include-third-party-resources Claimed deployments now include third-party resources 2025-09-23T13:00:00.000Z

AI platforms, coding tools, and workflow apps can now create projects on Vercel that users can later claim as their own, transferring deployment ownership together with any resources provisioned by third-party providers.

How it works:

  1. Instant deployment: Any third-party can use the Vercel API to create a project, deploy an application, and attach a resource store (such as a database).

  2. Claim and transfer: When a user claims the Vercel deployment, the attached resources automatically move with it. Full ownership of the complete deployment is handed off to the user.

This is available today through Prisma, the first Vercel Marketplace provider to support instant deployment. Prisma customers can now spin up a database and a Vercel-hosted app together as a single, bundled stack.

We’re expanding this flow to more Marketplace providers so they can pair their products, such as authentication, observability, and workflow services, with Vercel deployments through one-click claiming.

Check out our Claim Deployments live demo and documentation to learn more.

Read more

Tony Pan Michael Toth Hedi Zandi Dima Voytenko Justin Kropp
https://vercel.com/blog/botid-uncovers-hidden-seo-poisoning BotID uncovers hidden SEO poisoning 2025-09-22T13:00:00.000Z

Your traffic is spiking and you spot suspicious bot activity in your logs. You deploy BotID expecting to find malicious scrapers, but the results show verified Google bots. Normal crawlers doing their job. But then you notice what they're actually searching for on your site. Queries that have nothing to do with your business. What do you do?

This exact scenario recently played out at one of the largest financial institutions in the world. What they discovered was a years-old SEO attack still generating suspicious traffic patterns.

Read more

Andrew Qu Kevin Corbett
https://vercel.com/changelog/anomaly-alerts-now-include-error-spikes Anomaly alerts now include error spikes 2025-09-22T13:00:00.000Z

Enterprise customers with Observability Plus can now be alerted when error events deviate from normal behavior, helping teams catch issues earlier.

The system automatically detects and groups abnormal error patterns, such as sudden spikes in 5XX responses on a specific route. Alert detail pages include relevant log lines, making it easier to investigate and resolve the underlying cause.

Error anomaly detection, joining anomaly alerts for unusual app metric usage, is available in limited beta for Enterprise customers with Observability Plus.

Try it out or learn more about Alerts.

If you have feedback or questions, drop them in the Vercel Community thread.

Read more

Fabio Benedetti Damien Simonin Feugas Julia Shi Timo Lins Malavika Tadeusz
https://vercel.com/changelog/filter-deployments-by-author Filter deployments by author 2025-09-22T13:00:00.000Z

You can now filter deployments in the dashboard by author, using username, email, or Git username. Filters persist in the URL, making it easy to share filtered views with your team.

See now in your project's deployments.

Read more

Mitul Shah Marcos Grappeggia
https://vercel.com/blog/how-we-made-global-routing-faster-with-bloom-filters How we made global routing faster with Bloom filters 2025-09-19T13:00:00.000Z

Recently, we shipped an optimization to our global routing service that reduced its memory usage by 15%, improved time to first byte (TTFB) at the 75th percentile and above by 10%, and significantly improved routing speeds for websites with many static paths.

A small number of websites, with hundreds of thousands of static paths, were creating a bottleneck that slowed down our entire routing service. By replacing a slow JSON parsing operation with a Bloom filter, we brought path lookup latency down to nearly zero and improved performance for everyone.

Read more

Matthew Stanciu Tim Caswell
https://vercel.com/changelog/observability-plus-replacing-legacy-monitoring Observability Plus replacing legacy Monitoring 2025-09-19T13:00:00.000Z

Observability Plus will be replacing the legacy Monitoring subscription for authoring custom queries on Vercel data. With Observability Plus, you have access to an expanded data set and more options for visualizing your Vercel data than was possible with legacy Monitoring. Observability Plus also allows you to save queries and visualizations to Notebooks to share insights and collaborate with team members.

Monitoring will be sunset for Pro customers at the end date of their billing cycle in November.

For Pro customers still subscribed to the legacy Monitoring SKU, we recommend subscribing to Observability Plus before their November billing cycle to continue to access custom queries on their Vercel data. Pro customers who have already subscribed to Observability Plus do not need to take any action.

Learn more about authoring custom queries with Observability Plus.

Read more

Julia Shi Damien Simonin Feugas Timo Lins Chris Widmaier Malavika Tadeusz
https://vercel.com/blog/what-you-need-to-know-about-vibe-coding What you need to know about vibe coding 2025-09-18T13:00:00.000Z

In February 2025, Andrej Karpathy introduced the term vibe coding: a new way of coding with AI, “[where] you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

Just months later, vibe coding is completely reshaping how developers and non-developers work. Over 90% of U.S. developers use AI coding tools, adoption is accelerating for other roles, and English has become the fastest growing programming language in the world.

We explore this shift in detail in our new State of Vibe Coding. Here are a few of the key takeaways.

Read more

Zeb Hermann Keith Messick
https://vercel.com/blog/scale-to-one-how-fluid-solves-cold-starts Scale to one: How Fluid solves cold starts 2025-09-18T13:00:00.000Z

Cold starts have long been the Achilles’ heel of traditional serverless. It’s not just the delay itself, but when the delay happens. Cold starts happen when someone new discovers your app, when traffic is just starting to pick up, or during those critical first interactions that shape whether people stick around or convert.

Traditional serverless platforms shut down inactive instances after a few minutes to save costs. But then when traffic returns, users are met with slow load times while new instances spin up. This is where developers would normally have to make a choice. Save money at the expense of unpredictable performance, or pay for always-on servers that increase costs and slow down scalability.

But what if you didn't have to choose? That’s why we built a better way.

Powered by Fluid compute, Vercel delivers zero cold starts for 99.37% of all requests. Fewer than one request in a hundred will ever experience a cold start. If they do happen, they are faster and shorter-lived than on a traditional serverless platform.

Through a combination of platform-level optimizations, we've made cold starts a solved problem in practice. What follows is how that’s possible and why it works at every scale.

Read more

Malte Ubl Tom Lienard
https://vercel.com/changelog/ai-code-reviews-by-vercel-agent-now-in-beta AI code reviews by Vercel Agent now in Public Beta 2025-09-18T13:00:00.000Z

Vercel Agent can now conduct code reviews, with validated suggestions that address issues across correctness, security, and performance.

These reviews are fully codebase-aware, looking beyond the diff to any relevant files. Proposed patches are generated and validated in Vercel Sandboxes before they ever reach your PR.

Key features also include:

  • Optimizations for frameworks like Next.js, React, Nuxt, and Svelte, with support for TypeScript, Python, Go, and more

  • High-signal, inline comments for human review including diffs, analysis, and reproduction steps for transparency

  • In-dashboard Observability for metrics like files read, review time, cost, and more

  • Configuration options to review all, public, or private repositories, and skip draft PRs

AI code reviews are available in public beta for all Pro and Enterprise teams. Pricing is fully usage-based, with a $100 Vercel Agent credit included.

Try it for free in the new Agent dashboard, read more in the docs, or provide feedback in Vercel Community.

Read more

Joe Haddad Casey Gowrie Dan Fox John Phamous Allen Zhou Harpreet Arora
https://vercel.com/blog/generate-static-ai-sdk-tools-from-mcp-servers-with-mcp-to-ai-sdk Addressing security and quality issues with MCP tools in AI agents 2025-09-17T13:00:00.000Z

Model Context Protocol (MCP) is emerging as a standard protocol for federating tool calls between agents. Enterprises are starting to adopt MCP as a type of microservice architecture for teams to reuse each other's tools across different AI applications.

But there are real risks with using MCP tools in production agents. Tool names, descriptions, and argument schemas become part of your agent's prompt and can change unexpectedly without warning. This can lead to security, cost, and quality issues even when the upstream MCP server has not been compromised or is not intentionally malicious.

We built mcp-to-ai-sdk to reduce these issues. It is a CLI that generates static AI SDK tool definitions from any MCP server. Definitions become part of your codebase, so they only change when you explicitly update them.

Read more

Malte Ubl Andrew Qu
https://vercel.com/blog/ai-agents-at-scale-roxs-vercel-powered-revenue-operating-system AI agents at scale: Rox’s Vercel-powered revenue operating system 2025-09-16T13:00:00.000Z

Rox is building the next-generation revenue operating system. By deploying intelligent AI agents that can research, prospect, and engage on behalf of sellers, Rox helps enterprises manage and grow revenue faster.

From day one, Rox has built their applications on Vercel. With Vercel's infrastructure powering their web applications, Rox ships faster, scales globally, and delivers consistently fast experiences to every customer.

Read more

Jerry Zhou
https://vercel.com/changelog/shai-halud-supply-chain-campaign-expanded-impact-and-vercel-response Shai-Hulud Supply Chain Campaign — Expanded Impact & Vercel Response 2025-09-16T13:00:00.000Z

Summary

The Shai-Hulud supply chain campaign has escalated. What began with the Qix compromise affecting ~18 core npm packages (chalk, debug, ansi-styles, etc.) has since spread:

  • Over 40 additional packages attacked via the Tinycolor “worm” vector.

  • The CrowdStrike / crowdstrike-publisher namespace was also compromised, with multiple trojanized releases.

  • The DuckDB maintainer account (duckdb_admin) published malicious versions matching the same wallet-drainer malware used in the Qix incidents. No Vercel customers were impacted in that DuckDB subset.

Impact to Vercel Customers

  • We identified a small set of 10 Vercel customer projects whose builds depended (directly or transitively) on the compromised package versions.

  • Impacted customers have been notified and provided with project-level guidance.

  • In the DuckDB incident, no Vercel customer build was affected.

What We Did

  • Blocklisted known compromised versions from the Tinycolor, CrowdStrike, Qix, and DuckDB-affected packages: ✅ Completed

  • Purged build caches for Vercel projects using those versions: ✅ Completed for impacted projects

  • Coordinated safe rebuilds with clean dependencies and pinned safe versions: ✅ In progress or completed for impacted projects

  • Raised platform alerting and detection thresholds for new package publishes matching the Shai-Hulud indicators: ✅ Elevated monitoring active

What We’re Watching & Doing

  • Working closely with npm, open-source maintainers, and ecosystem security partners to track any further spread of Shai-Halud.

  • Enhancing our supply chain defenses so that deployments on Vercel remain secure by default: stricter policies on lifecycle/postinstall scripts, lockfile hygiene, and registry validation.

  • Tightening internal CI/CD controls and developer tooling to catch suspicious package behavior early.

Recommendations for Vercel Users

  • For teams using pnpm, consider enabling the new minimumReleaseAge setting introduced in pnpm 10.16 to delay dependency updates (e.g., 24 hours). This helps reduce risk from compromised versions that are discovered and removed shortly after publishing.

  • Audit your dependencies (direct & transitive) to check for packages from these affected namespaces.

  • Rebuild with pinned safe versions and clean lockfiles (e.g., npm ci or pnpm install --frozen-lockfile).

  • Rotate any npm / GitHub / CI/CD tokens that may have been used in environments where compromised dependencies were present.

  • Inspect GitHub repos for unauthorized workflows or unexpected .github/workflows additions.

  • Enforce least privilege (especially in automated workflows), and limit lifecycle script permissions.
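As a sketch of the pnpm recommendation above: per pnpm's documentation the minimumReleaseAge setting lives in your pnpm settings (e.g., pnpm-workspace.yaml) and takes a value in minutes, so 1440 delays newly published versions by 24 hours. Verify placement and units against the pnpm docs for your version.

```yaml
# pnpm-workspace.yaml (pnpm 10.16+)
# Skip versions published less than 24 hours ago (value is in minutes).
minimumReleaseAge: 1440
```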

Timeline

  • September 8, 2025 — Qix / Tinycolor / core package compromise discovered.

  • September 9, 2025 — DuckDB issue identified.

  • September 15-16, 2025 — CrowdStrike / Tinycolor “worm” style propagation detected; Vercel detection expanded.

  • September 16, 2025 — Customer notifications, cache purges, safe rebuilds underway.

Read more

Aaron Brown Tom Knickman Matthew Binshtok
https://vercel.com/changelog/builds-now-start-up-to-30-faster Builds now start up to 30% faster 2025-09-16T13:00:00.000Z

The build cache stores files from previous builds to speed up future ones. We've improved its performance by downloading parts of the cache in parallel using a worker pool.

This decreased the build initialization time by 30% on average, reducing build times by up to 7 seconds for all plans.

This is enabled automatically for all new builds and adds to the build initialization improvements previously launched.

Learn more about builds on Vercel.

Read more

Ali Smesseim Guðmundur Bjarni Ólafsson Janos Szathmary Luke Phillips-Sheard
https://vercel.com/blog/how-helly-hansen-migrated-to-vercel-and-drove-80-black-friday-growth Helly Hansen migrated to Vercel and drove 80% Black Friday growth 2025-09-15T13:00:00.000Z

Founded in 1877, Helly Hansen is a global leader in technical apparel, but its digital experience wasn't living up to its legacy. Operating across 38 global markets with multiple brands (including HellyHansen.com, HHWorkwear.com, and Musto.com), the company was being held back by an outdated tech stack that slowed site speeds and frustrated customers.

Through an incremental migration to Next.js and Vercel, Helly Hansen improved Core Web Vitals from red to green, increased developer agility, and delivered a record-breaking Black Friday Cyber Monday, building a foundation for future innovation.

Read more

Alina Weinstein
https://vercel.com/blog/introducing-vercel-drains Introducing Vercel Drains: Complete observability data, anywhere 2025-09-15T13:00:00.000Z

Vercel Log Drains are now Vercel Drains.

Why? They’re not just for logs anymore, as you can now also export OpenTelemetry traces, Web Analytics events, and Speed Insights metrics.

Drains give you a single way to stream observability data out of Vercel and into the systems your team already relies on.

Read more

Dan Fein
https://vercel.com/changelog/updated-defaults-for-deployment-retention Updated defaults for deployment retention 2025-09-15T13:00:00.000Z

Starting October 15, 2025, Vercel will update the default deployment retention policy for all projects currently using the legacy “unlimited” setting:

  • Canceled Deployments - 30 days, with a maximum of 1 year.

  • Errored Deployments - 3 months, with a maximum of 1 year.

  • Pre-Production Deployments - 6 months, with a maximum of 3 years.

  • Production Deployments - 1 year, with a maximum of 3 years.

Projects with a custom deployment retention setting will not be affected. Additionally, before October 15, the "unlimited" option will become unavailable when modifying retention policies.

Team owners can configure a default retention policy to be applied to any new projects created under the team on Teams > Security & Privacy > Deployment Retention Policy. This policy can also be easily applied to all existing projects.

Note that your 10 most recent production deployments and any currently aliased deployment will never be deleted, regardless of age.

Learn more about Deployment Retention.

Read more

Luke Phillips-Sheard Marc Codina Segura Jay Gengelbach Matthew Binshtok
https://vercel.com/blog/introducing-x402-mcp-open-protocol-payments-for-mcp-tools Introducing x402-mcp: Open protocol payments for MCP tools 2025-09-12T13:00:00.000Z

AI agents are improving at handling complex tasks, but a recurring limitation emerges when they need to access paid external services. The current model requires pre-registering with every API, managing keys, and maintaining separate billing relationships. This workflow breaks down if an agent needs to autonomously discover and interact with new services.

x402 is an open protocol that addresses this by adding payment directly into HTTP requests. It uses the 402 Payment Required status code to let any API endpoint request payment without prior account setup.

We built x402-mcp to integrate x402 payments with Model Context Protocol (MCP) servers and the Vercel AI SDK.

Read more

Ethan Niser
https://vercel.com/changelog/qwen3-next-models-are-now-supported-in-vercel-ai-gateway Qwen3-Next models are now supported in Vercel AI Gateway 2025-09-12T13:00:00.000Z

You can now access Qwen3-Next, two ultra-efficient models from QwenLM designed to activate only 3B of their 80B parameters per token, using Vercel's AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to alibaba/qwen3-next-80b-a3b-thinking:
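As a minimal sketch: after installing the AI SDK (npm i ai) and with an AI_GATEWAY_API_KEY configured, the model string is the only change needed (the prompt below is illustrative):

```typescript
import { generateText } from 'ai';

// Routed through Vercel AI Gateway; assumes AI_GATEWAY_API_KEY is set.
const { text } = await generateText({
  model: 'alibaba/qwen3-next-80b-a3b-thinking',
  prompt: 'Summarize the trade-offs of sparse mixture-of-experts models.',
});

console.log(text);
```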

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway and access the model here.

Read more

Walter Korman Harpreet Arora
https://vercel.com/changelog/402-mcp-enables-x402-payments-in-mcp 402-mcp enables x402 payments in MCP 2025-09-12T13:00:00.000Z

Introducing x402-mcp, a library that integrates with the AI SDK to bring x402 paywalls to Model Context Protocol (MCP) servers, letting agents discover, call, and pay for MCP tools easily and securely.

With x402-mcp, you can define MCP servers with paidTools that require payment to run, enabling account-less, low-latency, anonymous payments directly in AI workflows. Payments confirm in ~100–200ms, with fees under $0.01 and support for minimums under $0.001.

Getting started is easy; here's how you can define a paid tool:
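An illustrative sketch only: the createPaidMcpHandler export, the paidTool method, and the option shapes below are assumptions based on this announcement, not verified API. Check the x402-mcp documentation for the exact signatures.

```typescript
// Illustrative only: export and method names here are assumptions.
import { createPaidMcpHandler } from 'x402-mcp';
import { z } from 'zod';

const handler = createPaidMcpHandler((server) => {
  // A tool that requires an x402 payment of $0.005 before it runs.
  server.paidTool(
    'get_weather',
    { price: 0.005 }, // USD; x402 supports minimums under $0.001
    { city: z.string() },
    async ({ city }) => ({
      content: [{ type: 'text', text: `Forecast for ${city}: ...` }],
    }),
  );
});

export { handler as GET, handler as POST };
```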

And integrating with AI SDK MCP Clients takes just one function to enable payments:
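On the client side, a hedged sketch: the withPayment wrapper name and its options are assumptions from this announcement, and transport/account are placeholders for your MCP transport and payment account.

```typescript
// Illustrative only: `withPayment` and its option shape are assumptions.
import { withPayment } from 'x402-mcp';
import { experimental_createMCPClient as createMCPClient } from 'ai';

declare const transport: any; // placeholder: your MCP transport
declare const account: any;   // placeholder: wallet account that settles x402 payments

const baseClient = await createMCPClient({ transport });

// One wrapper call enables payments for every paid tool the server exposes.
const client = withPayment(baseClient, { account });
```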

Read more about x402 or try our full stack x402 AI Starter Kit.

Read more

Ethan Niser
https://vercel.com/changelog/new-vercel-cli-login-flow New Vercel CLI login flow 2025-09-12T13:00:00.000Z

The vercel login command now uses the industry-standard OAuth 2.0 Device Flow, making authentication more secure and intuitive. You can sign in from any browser-capable device.

When approving a login, be sure to verify the location, IP, and request time before granting access to your Vercel account.

Email-based login (vercel login [email protected]) and the flags --github, --gitlab, --bitbucket, --oob, and team are deprecated. These methods will no longer be supported beginning February 26, 2026, except for the team method (SAML-based login), which remains supported until June 1, 2026.

Note: Support had previously been extended from the original deprecation date of February 1, 2026 to June 1, 2026. To prioritize user security, we are moving the deprecation date forward: most of these methods will be removed on February 26, 2026, with the team method following on June 1, 2026.

Upgrade today with npm i vercel@latest

Learn more in the docs.

Read more

Balázs Orbán Bel Curcio Christopher Skillicorn Enric Pallerols Mark Roberts
https://vercel.com/changelog/longcat-flash-chat-model-is-now-supported-in-vercel-ai-gateway LongCat-Flash Chat model is now supported in Vercel AI Gateway 2025-09-11T13:00:00.000Z

You can now access LongCat-Flash Chat, a new model from Meituan focused on agentic tool use, using Vercel AI Gateway with no other provider accounts required. The model dynamically activates parameters based on contextual demands.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to meituan/longcat-flash-chat:
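As a minimal sketch: after installing the AI SDK (npm i ai) and with an AI_GATEWAY_API_KEY configured, swap in the model string (the prompt below is illustrative):

```typescript
import { generateText } from 'ai';

// Routed through Vercel AI Gateway; assumes AI_GATEWAY_API_KEY is set.
const { text } = await generateText({
  model: 'meituan/longcat-flash-chat',
  prompt: 'Plan the tool calls needed to book a dinner reservation.',
});

console.log(text);
```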

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway and access the model here.

Read more

Walter Korman Rohan Taneja Harpreet Arora
https://vercel.com/blog/mongodb-atlas-is-now-available-on-the-vercel-marketplace MongoDB Atlas is now available on the Vercel Marketplace 2025-09-10T13:00:00.000Z

MongoDB Atlas is now available on the Vercel Marketplace. Developers can now provision a fully managed MongoDB database directly from the Vercel dashboard and connect it to their projects without leaving the platform.

Adding a database to your project typically means managing another account, working through connection setup, and coordinating billing across services. The Vercel Marketplace brings these tools into your existing workflow, so you can focus on building rather than configuring.

Read more

Hedi Zandi
https://vercel.com/changelog/chatgpt-is-now-supported-on-vercel-mcp ChatGPT can now integrate with Vercel MCP 2025-09-10T13:00:00.000Z

You can now use ChatGPT with Vercel MCP, our official Model Context Protocol (MCP) server. For security, Vercel MCP currently supports AI clients that have been reviewed and approved by Vercel.

Connectors within ChatGPT are available in beta to Pro and Plus accounts on the web.

Follow the steps below to set up Vercel as a connector within ChatGPT:

  • Enable developer mode: Go to Settings → Connectors → Advanced → Developer mode.

  • Add Vercel MCP

    • Open ChatGPT settings

    • In the Connectors tab, click Create

      • Name: Vercel

      • MCP server URL: https://mcp.vercel.com.

      • Authentication: OAuth

    • Click Create

You should now be able to select Vercel as a connector in Developer Mode chats.

With Vercel MCP you can give agents access to protected deployments, analyze build logs, and more.

Read more about using AI tools with Vercel MCP.

Read more

Anthony Shew Allen Zhou Brooke Mosby Andrew Qu Mark Roberts
https://vercel.com/changelog/mongodb-atlas-joins-the-vercel-marketplace MongoDB Atlas joins the Vercel Marketplace 2025-09-10T13:00:00.000Z

You can now provision MongoDB Atlas directly from the Vercel Marketplace.

Spin up a fully managed MongoDB Atlas database, connect it to your Vercel project, and start building without leaving the Vercel dashboard.

This MongoDB Atlas native integration provides:

  • A flexible document model for structured and unstructured data

  • Built-in search, including vector and semantic search

  • Horizontal scaling with replica sets and sharding

  • Free, pre-provisioned, or serverless deployment options

This integration removes the friction of switching between dashboards or managing complex setup, giving developers a fast, modern data layer to power web and AI applications on Vercel.

Get started with MongoDB Atlas on the Vercel Marketplace, available to customers on all plans.

Learn more in the blog post and deploy the MongoDB Atlas Forum template on Vercel.

Read more

Dima Voytenko Marc Brakken Tony Pan Michael Arguin Hedi Zandi Justin Kropp
https://vercel.com/changelog/vercel-sandbox-maximum-duration-extended-to-5-hours Vercel Sandbox maximum duration extended to 5 hours 2025-09-10T13:00:00.000Z

Pro and Enterprise teams can now run Vercel Sandboxes for up to 5 hours (up from 45 minutes).

This new max duration unlocks workloads that require longer runtimes, such as large-scale data processing, end-to-end testing pipelines, and long-lived agentic workflows.

Get started with Sandbox now and learn more in the docs.

Read more

Laurens Duijvesteijn Tom Lienard Andy Waller
https://vercel.com/blog/the-second-wave-of-mcp-building-for-llms-not-developers The second wave of MCP: Building for LLMs, not developers 2025-09-09T13:00:00.000Z

When the MCP standard first launched, many teams rushed to ship something. Many servers ended up as thin wrappers around existing APIs with minimal changes. A quick way to say "we support MCP".

At the time, this made sense. MCP was new, teams wanted to get something out quickly, and the obvious approach was mirroring existing API structures. Why reinvent when you could repackage?

But the problem with this approach is LLMs don’t work like developers. They don’t reuse past code or keep long term state. Each conversation starts fresh. LLMs have to rediscover which tools exist, how to use them, and in what order. With low level API wrappers, this leads to repeated orchestration, inconsistent behavior, and wasted effort as LLMs repeatedly solve the same puzzles.

MCP works best when tools handle complete user intentions rather than exposing individual API operations. One tool that deploys a project end-to-end works better than four tools that each handle a piece of the deployment pipeline.

Read more

Boris Besemer Andrew Qu
https://vercel.com/blog/new-pro-pricing-plan A more flexible Pro plan for modern teams 2025-09-09T13:00:00.000Z

We’re updating Vercel’s Pro plan to better align with how modern teams collaborate, how applications consume infrastructure, and how workloads are evolving with AI. Concretely, we’re making the following changes:

Read more

Tom Occhino
https://vercel.com/changelog/hipaa-baas-are-now-available-to-pro-teams HIPAA BAAs are now available to Pro teams 2025-09-09T13:00:00.000Z

Pro teams can now enter into a Business Associate Agreement (BAA) to support HIPAA-compliant workloads on Vercel. The BAA is available self-serve through the dashboard with no Enterprise contract required.

Vercel supports HIPAA compliance as a business associate by implementing technical and organizational safeguards, conducting annual audits, and offering breach notification in line with HIPAA requirements. Compliance is a shared responsibility between you and Vercel. Teams are responsible for configuring security features, managing access, and validating third-party services.

This update makes it easier for healthcare-focused applications to meet regulatory obligations without upgrading to Enterprise.

Read more about other updates to Pro and switch to the new Pro pricing.

Read more

Jen Chen Jas Garcha Shar Dara
https://vercel.com/changelog/no-build-queues-on-demand-concurrent-builds-now-on-by-default No build queues: On-demand concurrent builds now on by default 2025-09-09T13:00:00.000Z

Teams on the new Pro pricing model will now have on-demand concurrent builds enabled by default. This ensures builds across projects start immediately without waiting in a queue, except when multiple builds target the same Git branch.

You can manage this setting at any time using the new bulk enable feature, even if your team is not yet on the new Pro pricing model.

Learn more in the documentation, read more about the updates to Pro, and switch to the new Pro pricing.

Read more

Janos Szathmary Felix Haus Mariano Cocirio Christopher Skillicorn
https://vercel.com/changelog/spend-management-now-enabled-by-default-on-pro Spend Management now enabled by default on Pro 2025-09-09T13:00:00.000Z

Spend Management is now enabled for new Pro teams, and will be enabled by default for existing teams when they switch to the new pricing model.

All Pro teams will have a budget set by default based on previous usage, if any; teams with existing budgets will be unaffected. This can be changed at any time in spend management settings.

Email alerts will be sent as your on-demand spend nears the budget threshold. This can also be adjusted at any time.

This new default ensures you receive proactive cost signals to manage your spend. Your deployments will continue without interruption unless a hard limit is manually configured.

Read more about the updates to Pro and switch to the new Pro pricing.

Read more

Jas Garcha Shar Dara Christian Pickett Bryan Mishkin Jeff Pope Christopher Skillicorn Blake Mealey Chloe Tedder
https://vercel.com/changelog/free-viewer-seats-now-available-on-pro Free Viewer seats now available on Pro 2025-09-09T13:00:00.000Z

Pro teams can now add unlimited Viewer seats at no additional charge so team members can collaborate more flexibly and cost-efficiently.

Viewers can access project dashboards, deployments, analytics, and more, but can’t see sensitive data, deploy, or change production settings. This is ideal for any team members looking to collaborate via access to the dashboard and preview deployments.

Previously, all seats on Pro were paid. Moving forward, you can add two main types of seats:

  • Developer seats (Owner, Member): Team members that deploy, debug, and configure. These seats remain $20.

  • Viewer seats: Team members that do not deploy. These seats are free.

Viewers can easily request an upgrade from team owners directly in the dashboard.

Read more about the updates to Pro and switch to the new Pro pricing.

Read more

Jas Garcha Javier Bórquez Manuel Muñoz Solera Christopher Skillicorn George Karagkiaouris
https://vercel.com/changelog/included-pro-usage-is-now-credit-based Included Pro usage is now credit-based 2025-09-09T13:00:00.000Z

The Pro plan now includes $20 in monthly usage credit instead of fixed allocations across metrics like data transfer, compute, caching, and more. This plan update replaces static usage buckets with a more flexible system that adapts to your workload.

In addition to the above, the new Pro pricing model includes:

Read more about the updates to Pro and switch to the new Pro pricing.

Read more

Jas Garcha Shar Dara Caleb Boyd Christian Pickett Blake Mealey George Karagkiaouris Shu Uesugi Dan Fein Christopher Skillicorn Michael Wenzel Bryan Mishkin Jeff Pope Gary Tyr Suyog Rao
https://vercel.com/blog/critical-npm-supply-chain-attack-response-september-8-2025 Critical npm supply chain attack response - September 8, 2025 2025-09-08T13:00:00.000Z

On September 9, 2025, the campaign extended to DuckDB-related packages after the duckdb_admin account was breached. These releases contained the same wallet-drainer malware, confirming this was part of a coordinated effort targeting prominent npm maintainers.

While Vercel customers were not impacted by the DuckDB incident, we continue to track activity across the npm ecosystem with our partners to ensure deployments on Vercel remain secure by default.

Read more

Aaron Brown
https://vercel.com/changelog/streamdown-2-2 Streamdown 2.2 - animated streaming and better support for custom HTML 2025-09-08T13:00:00.000Z

Streamdown 2.2 delivers animated per-word text streaming, improved custom HTML handling, and drop-in compatibility with ReactMarkdown - making it easier to adopt Streamdown in existing projects.

Animated streaming

By importing the Streamdown stylesheet and enabling the new animated prop, streaming content renders with smooth per-word text animation. This provides a polished experience for AI chat interfaces and other real-time text applications.
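A hedged sketch of the new prop; the stylesheet import path below is an assumption, so check the Streamdown docs for the exact path:

```tsx
import { Streamdown } from 'streamdown';
// Stylesheet path is an assumption; verify against the Streamdown docs.
import 'streamdown/styles.css';

export function ChatMessage({ markdown }: { markdown: string }) {
  // `animated` enables smooth per-word streaming animation.
  return <Streamdown animated>{markdown}</Streamdown>;
}
```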

Better custom HTML support

The components prop now accepts custom HTML attributes by adding elements to allowedTags. The Remend engine has also been improved to strip incomplete HTML tags during streaming, preventing visual glitches from partial markup.

ReactMarkdown compatibility

Streamdown now supports the remaining ReactMarkdown props, making migration a one-line replacement. Projects using ReactMarkdown can swap to Streamdown without refactoring component configurations.

This release also removes CommonJS builds, adds bundled-language aliases for common JavaScript, TypeScript, and shell labels, and includes numerous rendering and security fixes across tables, code blocks, LaTeX, and Mermaid diagrams.

Learn more in the Streamdown docs.

Read more

Hayden Bleasel
https://vercel.com/changelog/package-installation-for-v0-builds-is-now-70-faster Package installation for v0 builds is now ~70% faster. 2025-09-08T13:00:00.000Z

Average npm install time for v0 builds dropped from 5s to 1.5s by optimizing how dependencies are resolved and cached during build execution.

This is in addition to a recent improvement to time-to-start for v0 builds, with more improvements in progress to further reduce installation and overall build time.

Deploy today on v0.app.

Read more

Balazs Varga Janos Szathmary
https://vercel.com/changelog/ai-sdk-and-ai-gateway-now-integrated-in-github-actions AI SDK and AI Gateway now integrated in GitHub Actions 2025-09-08T13:00:00.000Z

You can now use the vercel/ai-action@v2 GitHub Action to access the AI SDK and AI Gateway, generating text or structured JSON directly in your workflows by specifying a prompt, model, and api-key.

This integration enables new AI-powered use cases for GitHub Actions, like summarizing what made it into a release, lightweight PR code review, comment moderation, or finding duplicate or related issues. For example, you can use it to triage issues like:
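A sketch of such a workflow: only the prompt, model, and api-key inputs come from this announcement; the trigger, job layout, and model id shown are assumptions.

```yaml
# Hypothetical workflow sketch: summarize newly opened issues.
name: Triage new issues
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: vercel/ai-action@v2
        with:
          prompt: |
            Summarize this issue in two sentences and suggest labels:
            ${{ github.event.issue.body }}
          model: openai/gpt-5 # model id is an assumption; use any AI Gateway model
          api-key: ${{ secrets.AI_GATEWAY_API_KEY }}
```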

Learn more and see examples in the GitHub Actions Marketplace or view the source code.

Read more

Gregor Martynus
https://vercel.com/changelog/bulk-enable-on-demand-concurrent-builds-across-projects Bulk enable on-demand concurrent builds across projects 2025-09-08T13:00:00.000Z

Pro teams can now remove build queues across all projects with just one click by bulk enabling on-demand concurrent builds.

On-demand concurrency scales build compute capacity dynamically, so all builds for a project start as soon as they are requested, except when multiple builds target the same Git branch.

To get started, visit your Pro team's billing settings to:

  • Enable or disable for all existing projects

  • Search and pick specific projects where it should be active

Learn more in the documentation.

Read more

Janos Szathmary Felix Haus
https://vercel.com/changelog/vercel-functions-now-support-graceful-shutdown Vercel Functions now support graceful shutdown 2025-09-08T13:00:00.000Z

Vercel Functions running on Node.js and Python runtimes now support graceful shutdown, giving you up to 500 milliseconds to run cleanup tasks before termination.

When a function is terminated, such as during scale-down, the runtime receives a SIGTERM signal. You can now use this signal to run cleanup tasks like closing database connections or flushing external logs.
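A minimal sketch of such a handler, where `pool` is a hypothetical stand-in for any resource that needs cleanup:

```typescript
// Hypothetical resource: e.g., a database connection pool.
const pool = {
  async end() {
    // Close open connections, flush buffered logs, etc.
  },
};

process.once('SIGTERM', async () => {
  // The runtime allows up to 500ms of cleanup before termination.
  await pool.end();
  process.exit(0);
});
```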

Learn more about the SIGTERM signal.

Read more

Tom Lienard
https://vercel.com/changelog/export-more-data-with-vercel-drains Export traces, web analytics events, and speed insights datapoints to any destination 2025-09-05T13:00:00.000Z

Users can export OpenTelemetry traces, Web Analytics events, and Speed Insights data points from Vercel to any third-party tool. We’ve expanded our Log Drains infrastructure, enabling users to stream more raw data out of Vercel and into external systems.

With Vercel Drains, users can configure custom HTTP endpoints to receive data in multiple encodings — JSON, NDJSON, or Protobuf.

Pro and Enterprise teams can export data to external systems at the same $0.50 per GB rate.

Try it out or learn more about Vercel Drains.

Read more

Darpan Kakadia Luc Leray Adrian Cooney Vincent Voyer Luka Hartwig Timo Lins Malavika Tadeusz
https://vercel.com/changelog/zero-configuration-express-backends Zero-configuration Express backends 2025-09-05T13:00:00.000Z

Express, a fast, unopinionated, minimalist web framework for Node.js, is now supported with zero-configuration.

Vercel's framework-defined infrastructure now recognizes and deeply understands Express applications. This update removes the need for redirects in vercel.json or using the /api folder.
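As a hedged sketch of the zero-config setup (the route and response below are illustrative), a standard Express app exported from your entrypoint is all Vercel needs:

```typescript
import express from 'express';

const app = express();

app.get('/api/hello', (_req, res) => {
  res.json({ message: 'Hello from Express on Vercel' });
});

// Vercel's framework detection serves the exported app; no vercel.json rewrites
// or /api folder conventions required.
export default app;
```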

Deploy Express on Vercel or visit the Express on Vercel documentation.

Read more

Jeff See
https://vercel.com/blog/stress-testing-biomes-nofloatingpromises-lint-rule Stress testing Biome's noFloatingPromises lint rule 2025-09-04T13:00:00.000Z

Recently we partnered with the Biome team to strengthen their noFloatingPromises lint rule to catch more subtle edge cases. This rule prevents unhandled Promises, which can cause silent errors and unpredictable behavior. Once Biome had an early version ready, they asked if we could help stress test it with some test cases.

At Vercel, we know good tests require creativity just as much as attention to detail. To ensure strong coverage, we wanted to stretch the rule to its limits and so we thought it would be fun to turn this into a friendly internal competition. Who could come up with the trickiest examples that would still break the updated lint rule?

Part of the fun was learning together, but before we dive into the snippets, let’s revisit what makes a Promise “float”.

Read more

Dimitri Mitropoulos
https://vercel.com/changelog/moonshot-ais-kimi-k2-0905-model-is-now-supported-in-vercel-ai-gateway Moonshot AI's Kimi K2 0905 model is now supported in Vercel AI Gateway 2025-09-04T13:00:00.000Z

You can now access Kimi K2 0905, a new model from Moonshot AI focused on agentic coding with a 256K context window, using Vercel AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to moonshotai/kimi-k2-0905:
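As a minimal sketch: after installing the AI SDK (npm i ai) and with an AI_GATEWAY_API_KEY configured, update the model string (the prompt below is illustrative):

```typescript
import { generateText } from 'ai';

// Routed through Vercel AI Gateway; assumes AI_GATEWAY_API_KEY is set.
const { text } = await generateText({
  model: 'moonshotai/kimi-k2-0905',
  prompt: 'Refactor a callback-based Node.js function to use async/await.',
});

console.log(text);
```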

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance for Kimi K2, AI Gateway leverages multiple model providers under the hood, including Moonshot AI directly, Groq, and Fireworks AI.

Learn more about AI Gateway.

Read more

Walter Korman Rohan Taneja Harpreet Arora
https://vercel.com/blog/open-sdk-strategy Open SDK strategy 2025-09-03T13:00:00.000Z

At Vercel, our relationship with open source is foundational. We do not build open source software to make money. Rather, we’re building an enduring business that enables us to continue developing great open source software. We believe in improving the default quality of software for everyone, everywhere, whether they are Vercel customers or not. A rising tide lifts all boats.

Read more

Tom Occhino Daniel Roe
https://vercel.com/changelog/cve-2025-57822 CVE-2025-57822 2025-08-29T13:00:00.000Z

Summary

A vulnerability affecting Next.js Middleware has been addressed. It impacted versions prior to v14.2.32 and v15.4.7, and involved a Server-Side Request Forgery (SSRF) risk introduced by misconfigured usage of the NextResponse.next() function within middleware. Applications that reflected a user's request headers in this function, rather than passing them through the request object, could unintentionally allow the server to issue requests to attacker-controlled destinations.

A patch applied on August 25th, 2025 eliminated exposure for Vercel customers running the affected versions.

Impact

In affected configurations, an attacker could:

  • Influence the destination of internal requests triggered by middleware routing logic

  • Perform SSRF against internal infrastructure if user-controlled headers (e.g.,

    Location) were forwarded or interpreted without validation

  • Potentially access sensitive internal resources or services unintentionally exposed via internal redirect behavior

This issue is exploitable in self-hosted deployments where developers use custom middleware logic and do not adhere to documented usage of NextResponse.next({ request }). It is not exploitable on Vercel infrastructure, which isolates and protects internal request behavior.

Resolution

The issue was resolved by updating the internal middleware logic to prevent unsafe fallback behavior when request is omitted from the next() call. This ensures the origin server behavior cannot be unintentionally altered by user-supplied headers or misrouted requests.

Fix available in:

  • Next.js v14.2.32

  • Next.js v15.4.7

Workarounds

For users who cannot upgrade immediately:

  • Ensure middleware follows official guidance: use NextResponse.next({ request }) to explicitly pass the request object

  • Avoid forwarding user-controlled headers to downstream systems without validation

  • Ensure headers that should never be sent from client to server are not reflected back to the client via NextResponse.next, such as Location.
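The documented-safe pattern from the guidance above can be sketched as a middleware that forwards the original request object rather than reflecting user-controlled headers:

```typescript
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Safe: passes the incoming request through explicitly, so user-supplied
  // headers (e.g., Location) are never reflected into routing behavior.
  return NextResponse.next({ request });
}
```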

Credit

Thanks to Dominik Prodinger at RootSys, and Nicolas Lamoureux and the Latacora team for their responsible disclosure.

Read more

Aaron Brown Zack Tanner Shohei Maeda Luba Kravchenko
https://vercel.com/changelog/cve-2025-55173 CVE-2025-55173 2025-08-29T13:00:00.000Z

Summary

A vulnerability affecting Next.js Image Optimization has been addressed. It impacted versions prior to v15.4.5 and v14.2.31, and involved a scenario where attacker-controlled external image servers could serve crafted responses that result in arbitrary file downloads with attacker-defined filenames and content.

Your Vercel deployments are safe by default. A patch applied on July 29th, 2025 eliminated exposure for all Vercel-hosted customers. Self-hosted deployments should upgrade to v15.4.5 or v14.2.31 to remediate the issue.

Impact

Under certain configurations (images.domains or permissive images.remotePatterns), a malicious actor could:

  • Trigger the download of a file from a Next.js app with attacker-controlled content and filename

  • Exploit this behavior for phishing, drive-by downloads, or social engineering scenarios

This issue requires that:

  • The target app has external image domains or patterns configured

  • The remote server is attacker-controlled or attacker-influenced

  • A user is tricked into clicking a crafted URL

Resolution

The issue was resolved by updating the image optimizer logic to avoid falling back to the upstream’s Content-Type header when magic number detection fails. This ensures that responses are only cached when confidently identified as image content and do not mistakenly reuse cache keys for user-specific responses.

The fix was included in:

  • Next.js v15.4.5

  • Next.js v14.2.31

Credit

Thanks to kristianmagas for the responsible disclosure.

Read more

Aaron Brown Steven Salat Zack Tanner
https://vercel.com/changelog/cve-2025-57752 CVE-2025-57752 2025-08-29T13:00:00.000Z

Summary

A vulnerability affecting Next.js Image Optimization has been addressed. It impacted versions prior to v15.4.5 and v14.2.31, and involved a cache poisoning issue that caused sensitive image responses from API routes to be cached and subsequently served to unauthorized users.

Vercel deployments were never impacted by this vulnerability.

Impact

When API routes are used to return image content that varies based on headers (e.g., Cookie, Authorization), and those images are passed through Next.js Image Optimization, the optimized image may be cached without including those request headers as part of the cache key. This can lead to:

  • Unauthorized disclosure of user-specific or protected image content

  • Cross-user leakage of conditional content via CDN or internal cache

This issue arises without user interaction and requires no elevated privileges, only a prior authorized request to populate the cache.

Resolution

The issue was resolved by ensuring request headers aren’t forwarded to the request that is proxied to the image endpoint. This ensures that the image endpoint cannot be used to serve images that require authorization data and thus cannot be cached.

Fix available in:

  • Next.js v15.4.5

  • Next.js v14.2.31

Credit

Thanks to reddounsf for the responsible disclosure.

References

Read more

Aaron Brown Steven Salat Zack Tanner
https://vercel.com/blog/preparing-for-the-worst-our-core-database-failover-test Preparing for the worst: Our core database failover test 2025-08-28T13:00:00.000Z

Many engineering teams have disaster recovery plans. But unless those plans are regularly exercised on production workloads, they don’t mean much. Real resilience comes from verifying that systems remain stable under pressure. Not just in theory, but in practice.

On July 24, 2025, we successfully performed a full production failover of our core control-plane database from Azure West US to East US 2 with zero customer impact.

This was a test across all control-plane traffic: every API request, every background job, every deployment and build operation. Preview and development traffic routing was affected, though our production CDN traffic, served by a separate globally-replicated DynamoDB architecture, remained completely isolated and unaffected across our 19 regions.

This operation was a deliberate, high-stakes exercise. We wanted to ensure that if the primary region became unavailable, our systems could continue functioning with minimal disruption. The result: a successful failover with zero customer downtime, no degraded performance in production, and no postmortem needed.

Read more

Matheus Fernandes Matthew Binshtok
https://vercel.com/changelog/s1ngularity-supply-chain-attack-in-nx-packages s1ngularity: supply chain attack in Nx packages 2025-08-27T13:00:00.000Z

Threat actors published modified versions of the Nx package and some of its supporting libraries to the npm registry with the goal of exfiltrating developer and service credentials.

Builds on Vercel are safe from this vulnerability by default. Visit the GitHub advisory to check if your local or other CI environments are impacted.

Summary

A malicious version of the Nx package and some Nx ecosystem libraries were published to the npm registry using a stolen npm token, starting at 6:32 PM EDT on August 26, 2025. The compromised packages were removed from the npm registry by the Nx team, ending at 10:44 PM EDT on the same day.

The affected packages contained a postinstall script that, on install, used locally available AI CLI tools to scan the user's file system for secrets and credentials. Exfiltrated secrets were posted as an encoded string to a GitHub repository that the script created in the victim's GitHub account. For more information, visit the Nx team's advisory on GitHub.

Impact for Vercel customers

By default, Vercel customers are not impacted, and can only be affected by the compromised Nx packages if they took specific steps leveraging the build container's flexibility.

Four conditions are required for the postinstall script to exfiltrate data from a Vercel build:

  • The script uses the GitHub CLI (gh) to acquire a GitHub token. The GitHub CLI is not installed in Vercel's build container by default. For the GitHub CLI to be present in your build, it must be installed as part of your user-defined build process.

  • The script requires a GitHub authentication token to be present on the machine invoking the GitHub CLI. The Vercel build container does not contain customer GitHub tokens by default. For the GitHub token to be present in your build, it must be added to the build container as part of your user-defined build process.

  • The script depends on the machine having at least one of the Claude Code (claude), Gemini (gemini), or Q (q) CLIs installed. The Vercel build container does not have any of these installed by default. For any of these CLIs to be present in your build, they must be installed as part of your user-defined build process.

  • A build must have installed a compromised version of Nx or Nx ecosystem packages.

We did not identify any builds on Vercel meeting this pattern. We encourage you to evaluate other environments, local and cloud, that may have been vulnerable to this attack.

Resolution

New builds will not be able to download the affected packages. The Nx team has removed affected packages from npm, and we have purged the build caches for any projects that contained affected packages in their dependencies during a build.

Additionally, we've notified a small number of users who installed one or more of the malicious packages during a build. Vercel team owners should check for an email titled "s1ngularity: supply chain attack in Nx packages" from [email protected].

References

Read more

Aaron Brown Anthony Shew Tom Knickman Felix Haus Matheus Fernandes Andy Riancho
https://vercel.com/changelog/anomaly-alerts-now-in-limited-beta-for-enterprise-customers Anomaly alerts now in limited beta for Enterprise customers 2025-08-27T13:00:00.000Z

Enterprise customers can now receive alerts when anomalies are detected in their applications, in order to quickly identify and mitigate issues.

  • Anomaly detection: Automatically identifies unusual patterns in your application metrics.

  • Webhook integration: Subscribe to alerts and route them into your existing monitoring systems.

  • Slack notifications: Get alerts delivered directly to your team channels.

Alerts are available in limited beta for Enterprise customers with Observability Plus.

Try it out or learn more about Alerts.

Read more

Fabio Benedetti Damien Simonin Feugas Julia Shi Timo Lins Chris Widmaier Tobias Lins Malavika Tadeusz
https://vercel.com/changelog/build-slack-agents-with-vercel-slack-bolt Deploy Slack's Bolt.js to Vercel with @vercel/slack-bolt 2025-08-27T13:00:00.000Z

We've published @vercel/slack-bolt, our official adapter for deploying Slack's Bolt for JavaScript to Vercel's AI Cloud.

Bolt provides a type-safe library for responding to Slack webhook events. However, Slack's API requires a response within three seconds or users are faced with timeouts. This has made it hard to build Slack agents on traditional serverless platforms.

Our adapter uses Fluid compute’s streaming and waitUntil to acknowledge responses within Slack’s deadline while your agent continues working in the background.

This adapter works with any function or framework using the Web API Request object such as Hono, Nitro or Next.js.

Get started with our Slack Agent Template today or visit the library on npm.

Read more

Matt Lewis
https://vercel.com/changelog/saml-sso-is-now-available-to-pro-teams SAML SSO is now available to Pro teams 2025-08-26T13:00:00.000Z

SAML-based Single Sign-On (SAML SSO) is now available as an add-on to all Pro teams and can be configured directly in the dashboard. This includes support for major identity providers like Okta, Azure AD, and Google Workspace.

Previously limited to Enterprise plans, SAML SSO on Pro enables secure, centralized access control without requiring a contract.

Read more about other updates to Pro and switch to the new Pro pricing.

Read more

Jas Garcha Javier Bórquez Christopher Skillicorn
https://vercel.com/changelog/30-day-runtime-log-retention-now-available-in-observability-plus 30-day runtime log retention, now available in Observability Plus 2025-08-26T13:00:00.000Z

Teams with Observability Plus now have 30 days of runtime log retention. These logs include detail about requests, Vercel Functions and Routing Middleware invocations, cache activity, and more.

You can view, query, inspect, and share up to 14 consecutive days of log data at once.

This extended retention is available at no additional cost for Pro and Enterprise plans with Observability Plus enabled.

Try it out or learn more about Runtime Logs.

Read more

Damien Simonin Feugas Timo Lins Malavika Tadeusz
https://vercel.com/changelog/devin-raycast-windsurf-and-goose-now-supported-on-vercel-mcp Devin, Raycast, Windsurf, and Goose now supported on Vercel MCP 2025-08-25T13:00:00.000Z

You can now use Devin, Raycast, Windsurf, and Goose with Vercel MCP, our official Model Context Protocol (MCP) server. For security, Vercel MCP currently supports AI clients that have been reviewed and approved by Vercel.

Follow the steps below to get started with each client:

Devin

  1. Navigate to Devin's Settings > MCP Marketplace

  2. Search for Vercel and select the MCP

  3. Click Install

Raycast

  1. Run the Install Server command

  2. Enter the following details:

    • Name: Vercel

    • Transport: HTTP

    • URL: https://mcp.vercel.com

  3. Click Install

Windsurf

  1. Add the snippet below to your mcp_config.json file
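The snippet itself is missing from this copy. Based on the server URL shown for Raycast above, a plausible mcp_config.json entry would be the following (the exact key names are an assumption about Windsurf's MCP config format):

```json
{
  "mcpServers": {
    "vercel": {
      "serverUrl": "https://mcp.vercel.com"
    }
  }
}
```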

Goose

  1. Click here for a one-click installation of the Vercel MCP.

With Vercel MCP you can give agents access to protected deployments, analyze build logs, and more.

Read more about using AI tools with Vercel MCP.

Read more

Brooke Mosby Anthony Shew Andrew Qu Mark Roberts Allen Zhou
https://vercel.com/blog/ai-powered-prototyping-with-design-systems AI-powered prototyping with design systems 2025-08-22T13:00:00.000Z

Prototyping with AI should feel fast, collaborative, and on brand. Most AI tools have cracked the "fast" and "collaborative" parts, but can struggle with feeling "on-brand". This disconnect usually stems from a lack of context.

For v0 to produce output that looks and feels right, it needs to understand your components. That includes how things should look, how they should behave, how they work together, and all of the other nuances.

Most design systems aren’t built to support that kind of reasoning.

However, a design system built for AI enables you to generate brand-aware prototypes that look and feel production ready. Let's look at why giving v0 this context creates on-brand prototypes and how you can get started.

Read more

Will Sather
https://vercel.com/changelog/deploy-xmcp-servers-with-zero-configuration Deploy xmcp servers with zero-configuration 2025-08-22T13:00:00.000Z

Vercel now supports xmcp, a framework for building and shipping MCP servers with TypeScript, with zero-configuration.

xmcp uses file-based routing to create tools for your MCP server.

Once you've created a file for your tool, you can use a default export in a way that feels familiar to many other file-based routing frameworks. Below, we create a "greeting" tool.
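The tool file was not included in this copy. As a minimal sketch, assuming xmcp's convention of one default-exported function per file under src/tools (the file layout and metadata fields here are illustrative, not a verbatim API reference):

```typescript
// src/tools/greeting.ts -- the file name maps to the tool name
// via xmcp's file-based routing (shape assumed for illustration).
export const metadata = {
  name: "greeting",
  description: "Greet a user by name",
};

// The default export is the tool's implementation.
export default function greeting({ name }: { name: string }): string {
  return `Hello, ${name}!`;
}
```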

Learn more about deploying xmcp to Vercel in the documentation.

Read more

Anthony Shew
https://vercel.com/blog/ai-gateway-is-now-generally-available AI Gateway: Production-ready reliability for your AI apps 2025-08-21T13:00:00.000Z

Building an AI app can now take just minutes. With developer tools like the AI SDK, teams can build both AI frontends and backends that accept prompts and context, reason with an LLM, call actions, and stream back results.

But going to production requires reliability and stability at scale. Teams that connect directly to a single LLM provider for inference create a fragile dependency: if that provider goes down or hits rate limits, so does the app. As AI workloads become mission-critical, the focus shifts from integration to reliability and consistent model access. Fortunately, there's a better way to run.

AI Gateway, now generally available, ensures availability when a provider fails, avoiding low rate limits and providing consistent reliability for AI workloads. It's the same system that has powered v0.app for millions of users, now battle-tested, stable, and ready for production for our customers.

Read more

Walter Korman Harpreet Arora
https://vercel.com/changelog/ai-gateway-is-now-generally-available AI Gateway is now generally available 2025-08-21T13:00:00.000Z

AI Gateway is now generally available, providing a single unified API to access hundreds of AI models with transparent pricing and built-in observability.

With sub-20ms latency routing across multiple inference providers, AI Gateway delivers:

  • Transparent pricing with no markup on tokens (including Bring Your Own Keys)

  • Automatic failover for higher availability

  • High rate limits

  • Detailed cost and usage analytics

You can use AI Gateway with the AI SDK or through the OpenAI-compatible endpoint. With the AI SDK, it’s just a simple model string switch.

Get started with a single API call:
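The snippet was stripped in this copy. As a sketch of the OpenAI-compatible path, the helper below builds a chat-completion request against the gateway endpoint; the endpoint URL and model slug are assumptions for illustration:

```typescript
// Assumed OpenAI-compatible endpoint for AI Gateway.
const GATEWAY_URL = "https://ai-gateway.vercel.sh/v1/chat/completions";

// Pure helper: build an OpenAI-style chat-completion request body.
export function buildChatRequest(model: string, prompt: string) {
  return { model, messages: [{ role: "user", content: prompt }] };
}

// A single API call: POST the body with your gateway API key.
export async function ask(model: string, prompt: string, apiKey: string) {
  const res = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(model, prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```

Because models are addressed by plain "creator/model-name" strings, switching providers is just a change to the model argument.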

Read more about the announcement, learn more about AI Gateway, or get started now.

Read more

Walter Korman Jeremy Philemon Sam Chitgopekar Josh Lipman Dan Erickson Rohan Taneja Allen Zhou Harpreet Arora
https://vercel.com/changelog/introducing-streamdown Introducing Streamdown: Open source Markdown for AI streaming 2025-08-21T13:00:00.000Z

Streamdown is a new open source, drop-in Markdown renderer built for AI streaming. It powers the AI Elements Response component, but can also be used standalone to give developers a fully composable, independently managed option with npm i streamdown.

Streamdown is designed to handle unterminated chunks, interactive code blocks, math, and other cases that are unreliable with existing Markdown packages.

It's available now, and ships with:

  • Tailwind typography styles: Preconfigured classes for headings, lists, and code blocks

  • GitHub Flavored Markdown: Tables, task lists, and other GFM features

  • Interactive code blocks: Shiki highlighting with built-in copy button

  • Math support: LaTeX expressions via remark-math and KaTeX

  • Graceful chunk handling: Proper formatting for unterminated Markdown chunks

  • Security hardening: Safe handling of untrusted content with restricted images and links

You can get started with AI Elements:

Or as a standalone package:
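Both install snippets were stripped in this copy; based on the package names mentioned above, they were presumably along these lines (the AI Elements command shape is an assumption):

```shell
# Via AI Elements (its Response component is powered by Streamdown):
npx ai-elements@latest add response

# Or install Streamdown standalone:
npm i streamdown
```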

Read the docs and upgrade your AI-powered streaming.

Read more

Hayden Bleasel
https://vercel.com/blog/rethinking-prototyping-requirements-and-project-delivery-at-code-and-theory Rethinking prototyping, requirements, and project delivery at Code and Theory 2025-08-20T13:00:00.000Z

Code and Theory is a digital-first creative and technology agency that blends strategy, design, and engineering. With a team structure split evenly between creatives and engineers, the agency builds systems for global brands like Microsoft, Amazon, and NBC that span media, ecommerce, and enterprise tooling.

With their focus on delivering expressive, scalable digital experiences, the team uses v0 to shorten the path from idea to working software.

Read more

Alli Pope
https://vercel.com/blog/a-proposal-for-inline-llm-instructions-in-html <script type="text/llms.txt"> 2025-08-20T13:00:00.000Z

How do you tell an AI agent what it needs to do when it hits a protected page? Most systems rely on external documentation or pre-configured knowledge, but there's a simpler approach.

What if the instructions were right there in the HTML response?

llms.txt is an emerging standard for making content such as docs available for direct consumption by AIs. We’re proposing a convention to include such content directly in HTML responses as <script type="text/llms.txt">.

Read more

Malte Ubl
https://vercel.com/changelog/give-agents-access-to-protected-deployments-via-vercels-mcp-server Agents can now access protected deployments via Vercel’s MCP server 2025-08-19T13:00:00.000Z

Two new tools are now available in Vercel’s MCP server:

  • get_access_to_vercel_url Generates a shareable URL that allows agent tools such as web fetch or Playwright to access deployments protected by Vercel Authentication. The URL is temporary and grants access without requiring login credentials.

  • web_fetch_vercel_url Allows agents to directly fetch content from deployments protected by Vercel Authentication, even if a normal fetch would return 401 Unauthorized or 403 Forbidden.

Get started with the Vercel MCP server.

Read more

Malte Ubl Kit Foster
https://vercel.com/changelog/node-js-vercel-functions-now-support-fetch-web-handlers Node.js Vercel Functions now support fetch web handlers 2025-08-19T13:00:00.000Z

Vercel Functions running on the Node.js runtime now support fetch web handlers, improving interoperability across JavaScript runtimes and frameworks.

You can still export individual HTTP methods, if preferred.
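As a sketch, a fetch web handler replaces per-method exports with a single default export whose fetch method receives a standard Request and returns a Response (the exact handler shape is assumed here; check the docs for the canonical form):

```typescript
// api/hello.ts -- hypothetical route file.
// One fetch handler serves all HTTP methods using Web APIs only.
const handler = {
  fetch(request: Request): Response {
    const { pathname } = new URL(request.url);
    return Response.json({ path: pathname, method: request.method });
  },
};

export default handler;
```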

Learn more about fetch web handlers in the docs.

Read more

Tom Lienard Jeff See Pooya Pasa
https://vercel.com/blog/if-agents-are-building-your-app-who-gets-the-w-2 If agents are building your app, who gets the W-2? 2025-08-18T13:00:00.000Z

Autonomous coding agents are not the future. They are already here. Agents can now design, build, test, and deploy an entire full-stack feature from front end to back end without a human touching the keyboard.

The reality is that while this technology has advanced quickly, Generally Accepted Accounting Principles (GAAP) have not traditionally focused on the cost of tools used in development. Under current U.S. GAAP, you can capitalize certain third-party software costs if they are a direct cost of creating software during the application development stage. Historically, though, developer tools were treated as overhead because their cost could not be directly tied to capitalizable work. Under GAAP, work that meets the criteria should be capitalized. When agents perform that work, they should be treated no differently than salaried engineers.

Read more

Keith Messick Werner Schwock
https://vercel.com/changelog/native-support-for-sveltekits-new-opentelemetry-spans Native support for SvelteKit's new OpenTelemetry spans 2025-08-18T13:00:00.000Z

Vercel now directly integrates with SvelteKit's new server-side OpenTelemetry spans.

To get started, activate experimental tracing in SvelteKit:

And create the tracing instrumentation file with the Vercel OpenTelemetry collector:
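Both snippets are missing from this copy. A plausible reconstruction, assuming SvelteKit's experimental tracing flag and Vercel's registerOTel helper (the exact option names may differ from what ships):

```javascript
// svelte.config.js -- enable SvelteKit's experimental server tracing
// (option names assumed for illustration).
const config = {
  kit: {
    experimental: {
      tracing: { server: true },
      instrumentation: { server: true },
    },
  },
};
export default config;

// src/instrumentation.server.js -- register the Vercel OTel collector:
//   import { registerOTel } from '@vercel/otel';
//   registerOTel({ serviceName: 'my-sveltekit-app' });
```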

Traces generated during tracing sessions will now include the built-in SvelteKit spans. You can also configure other collectors. See the SvelteKit observability docs for more information.

Read more

Elliott Johnson
https://vercel.com/changelog/vercel-sandbox-increases-concurrency-and-port-limits Vercel Sandbox increases concurrency and port limits 2025-08-18T13:00:00.000Z

Pro and Enterprise teams can now run up to 2,000 Vercel Sandboxes concurrently (up from 150), with each now able to expose up to 4 ports for external access.

This enables larger traffic spikes for workloads like untrusted code execution, batch jobs, and automated testing, as well as more complex applications with multiple services or protocols running side-by-side.

If you need a higher amount of concurrent sandboxes, you can contact our sales team to explore higher limits for your projects.

Learn more in the Vercel Sandbox docs.

Read more

Laurens Duijvesteijn
https://vercel.com/changelog/botid-deep-analysis-model-improved-for-fake-hardware-detection Improved fake hardware detection with Vercel BotID 2025-08-15T13:00:00.000Z

Vercel BotID Deep Analysis now uses an updated detection model that expands fingerprinting coverage for bespoke headless browsers and simulated device hardware.

BotID is an invisible CAPTCHA that classifies sophisticated bots without interrupting real users. The new Deep Analysis model enables more accurate identification of stealthy automation frameworks and spoofed hardware profiles in real time.

These updates take effect immediately for BotID Deep Analysis users with no action required, but we recommend upgrading to the latest version of the botid package.

Get started with BotID today.

Read more

Andrew Qu
https://vercel.com/blog/the-three-types-of-ai-bot-traffic-and-how-to-handle-them The three types of AI bot traffic and how to handle them 2025-08-13T13:00:00.000Z

AI bot traffic is growing across the web. We track this in real-time, and the data reveals three types of AI-driven crawlers that often work independently but together create a discovery flywheel that many teams disrupt without realizing it.

Not all bots are harmful. Crawlers have powered search engines for decades, and we've spent just as long optimizing for them. Now, large language models (LLMs) need training data, and the AI tools built on them need timely, relevant updates. This is the next wave of discoverability and getting it right from the start can determine whether AI becomes a growth channel or a missed opportunity.

Blocking AI crawlers today is like blocking search engines in the early days and then wondering why organic traffic vanishes. As users shift from Googling for web pages to prompting for direct answers and cited sources, the advantage will go to sites that understand each type of bot and choose where access creates value.

Read more

Kevin Corbett
https://vercel.com/blog/the-real-serverless-compute-to-database-connection-problem-solved The real serverless compute to database connection problem, solved 2025-08-13T13:00:00.000Z

There is a long-standing myth that serverless compute inherently requires more connections to traditional databases. The real issue is not the number of connections needed during normal operation, but that some serverless platforms can leak connections when functions are suspended.

In this post, we show why this belief is incorrect, explain the actual cause of the problem, and provide a straightforward, simple-to-use solution.

Read more

Malte Ubl
https://vercel.com/blog/how-coxwave-delivers-genai-value-faster-with-vercel How Coxwave delivers GenAI value faster with Vercel 2025-08-13T13:00:00.000Z

Coxwave helps enterprises build GenAI products that work at scale. With their consulting arm, AX, and their analytics platform, Align, they support some of the world’s most technically sophisticated companies, including Anthropic, Meta, Microsoft, and PwC.

Since the company’s founding in 2021, speed has been a defining trait. But speed doesn’t just mean fast models. For Coxwave, it means fast iteration, fast validation, and fast value delivery.

To meet that bar, Coxwave reimagined their web app strategy with Next.js and Vercel.

Read more

Alli Pope
https://vercel.com/changelog/introducing-the-runtime-cache-api Introducing the Runtime Cache API 2025-08-13T13:00:00.000Z

You can now access Vercel's Runtime Cache via API.

The Runtime Cache is an ephemeral cache for storing and retrieving data across Functions, Routing Middleware, and Builds within the same region. It supports tag-based invalidation for precise and efficient cache control.

You can get started with the API like this:
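The example was stripped from this copy. Rather than guess Vercel's exact client API, here is a minimal in-memory model of the semantics described above, get/set plus tag-based invalidation; the class and method names are illustrative, not Vercel's API:

```typescript
// Toy model of a runtime cache with tag-based invalidation.
// Illustrates the semantics only; see Vercel's docs for the real API.
type Entry = { value: unknown; tags: string[] };

class RuntimeCacheModel {
  private store = new Map<string, Entry>();

  set(key: string, value: unknown, opts: { tags?: string[] } = {}) {
    this.store.set(key, { value, tags: opts.tags ?? [] });
  }

  get(key: string): unknown | undefined {
    return this.store.get(key)?.value;
  }

  // Invalidate every entry carrying the given tag.
  invalidateByTag(tag: string) {
    for (const [key, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(key);
    }
  }
}
```

Tagging entries at write time is what makes invalidation precise: one tag expiry clears every related key without touching the rest of the cache.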

You can monitor hit rates, invalidation patterns, and storage usage across your applications in the Observability dashboard's Runtime Cache tab.

Runtime Cache reads and writes are billed regionally based on the runtime region.

Learn more about Runtime Cache in the docs.

Read more

Luba Kravchenko Kelly Davis
https://vercel.com/blog/cutting-delivery-times-in-half-with-v0 Cutting delivery times in half with v0 2025-08-12T13:00:00.000Z

Ready.net is a core platform that helps utility companies manage their financing and compliance, and the company works with a wide network of state-level stakeholders. New feature requirements come in fast, often vague, and always critical.

With limited design resources supporting three teams, the company needed a way to speed up the loop between ideation, validation, and delivery. That’s where v0 came in.

Read more

Alli Pope
https://vercel.com/changelog/claude-sonnet-4-now-supports-1m-token-context-in-vercel-ai-gateway Claude Sonnet 4 now supports 1M token context in Vercel AI Gateway 2025-08-12T13:00:00.000Z

You can now leverage Claude Sonnet 4's updated 1 million-token context window through Vercel's AI Gateway, with no other provider accounts required. This release from Anthropic enables significantly larger inputs, such as full codebases (~75,000+ lines) or large document sets.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to anthropic/claude-4-sonnet:
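Both snippets are missing from this copy; with AI SDK v5 they presumably look like the sketch below (the prompt is illustrative, and your AI Gateway API key must be configured):

```typescript
// First: npm i ai
import { generateText } from "ai";

// AI Gateway routes this model string across providers
// (Anthropic, Bedrock) with automatic retries and failover.
const { text } = await generateText({
  model: "anthropic/claude-4-sonnet",
  prompt: "Review this repository and summarize its architecture.",
});
console.log(text);
```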

AI Gateway includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance and reliability for Claude Sonnet 4, AI Gateway leverages multiple model providers under the hood, including Anthropic and Amazon Bedrock.

Learn more about AI Gateway and view the new AI Gateway model leaderboard.

Read more

Walter Korman Sam Chitgopekar Harpreet Arora
https://vercel.com/changelog/auto-recharge-available-in-ai-gateway Auto-recharge available in AI Gateway 2025-08-12T13:00:00.000Z

Vercel AI Gateway now supports automatic credit recharging (top-ups), optionally refilling your balance before it runs out to keep your apps running without interruption.

Auto-recharge is off by default and can be enabled or updated anytime in the AI Gateway dashboard or team billing settings. Set your top-up amount and trigger balance, optionally add a monthly spend limit, and your credits will automatically refill.

Learn more about AI Gateway.

Read more

Jeremy Philemon Walter Korman Harpreet Arora
https://vercel.com/changelog/vercels-bot-verification-now-supports-web-bot-auth Vercel's bot verification now supports Web Bot Auth 2025-08-12T13:00:00.000Z

We collaborated with industry partners to advance the IETF proposal for Web Bot Auth, and Vercel's bot verification system now supports the new protocol. Bot Protection can use HTTP Message Signatures to verify automated traffic from dynamic and distributed sources.

Vercel maintains a comprehensive and actively curated directory of known bots that are verified by IP, reverse DNS, and now Web Bot Auth, which verifies bots via public-key cryptography in signed headers. This ensures that legitimate automation, like SEO crawlers, performance monitoring tools, and platform-integrated AI bots, can reliably access your site, while spoofed bots are blocked.

Web Bot Auth's asymmetric signature proves the authenticity of the traffic regardless of its network origin, making it ideal for bots running in dynamic or serverless environments.

Verified Bots using Web Bot Auth include signed headers to authenticate each request, allowing them to be recognized and allowed through Bot Protection and Challenge Mode. For example, ChatGPT Operator signs its requests using Web Bot Auth, so it is now allowed through.

Learn more about Bot Management.

Read more

Sage Abraham
https://vercel.com/changelog/vercel-botid-now-leverages-vercels-verified-bot-directory Vercel BotID now leverages Vercel's verified bot directory 2025-08-12T13:00:00.000Z

Starting in a recent version of the botid package, BotID's Deep Analysis mode provides authenticated information for verified bots based on Vercel's directory of known and verified bots. This allows developers to detect verified bots in real time and make programmatic decisions based on bot identity.

This allows you to securely allow known bots that are good for your business (such as agentic bots that purchase on behalf of users) while blocking other bots and sophisticated abuse.

BotID is an invisible CAPTCHA that classifies sophisticated bots without interrupting real users. With this update, developers using Deep Analysis now get additional context about the bot itself, such as source IP range, reverse DNS, and user-agent validation, helping teams fine-tune how bots are handled before taking action.

Get started with BotID and check out the documentation for verified bots in BotID.

Read more

Andrew Qu Sage Abraham
https://vercel.com/blog/v0-app v0.dev -> v0.app 2025-08-11T13:00:00.000Z

With a single prompt, anyone can go from idea to deployed app with UI, content, backend, and logic included.

v0 is now agentic, helping you research, reason, debug, and plan. It can collaborate with you or take on the work end-to-end.

From product managers writing specs to recruiters launching job boards, v0 is changing how teams operate.

Read more

Zeb Hermann
https://vercel.com/changelog/new-instant-rollback-flow Add context when using Instant Rollback 2025-08-11T13:00:00.000Z

You can now include a reason when performing an Instant Rollback.

This message is visible to your team in the project overview and can include links or notes explaining the rollback. You can also update it at any time.

Learn more about Instant Rollback.

Read more

Jay Gengelbach
https://vercel.com/changelog/cursor-now-supported-on-vercel-mcp Cursor now supported on Vercel MCP 2025-08-09T13:00:00.000Z

You can now use Cursor with Vercel MCP, our official Model Context Protocol (MCP) server. To ensure secure access, Vercel MCP currently supports AI clients that have been reviewed and approved by Vercel.

With Vercel MCP you can explore projects, inspect failed deployments, fetch logs, and more, now all without leaving Cursor.

To connect, use the one-click setup or add the following to your .cursor/mcp.json:
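The JSON itself is missing from this copy; based on the server URL given elsewhere in this changelog (https://mcp.vercel.com), the entry is presumably:

```json
{
  "mcpServers": {
    "vercel": {
      "url": "https://mcp.vercel.com"
    }
  }
}
```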

Once added, Cursor will prompt you to log in with your Vercel account.

Read more about using Cursor with Vercel MCP.

Read more

Mark Roberts Brooke Mosby Allen Zhou Andrew Qu Tom Knickman Anthony Shew Aparna Sinha
https://vercel.com/blog/how-zapier-scales-product-partnerships-with-v0 How Zapier scales product partnerships with v0 2025-08-08T13:00:00.000Z

Zapier is the leading AI orchestration platform, helping businesses turn intelligent insights into automated actions across nearly 8,000 apps. As AI tools and agents become more capable, Zapier provides the connective tissue to operationalize them, bridging the gap between decision and execution. 

Powered by Zapier extends this capability to partners. It enables SaaS and AI companies to embed Zapier’s automation engine directly into their products without needing to build or maintain thousands of integrations in-house.

But explaining to partners what that experience could look like in their product was a challenge. Moving quickly with finite resources, the Zapier team could spend a few weeks designing and building a single clickable prototype. Now, with v0, the Powered by Zapier team can generate high-fidelity demos in just a few hours. The result: better conversations with partners, faster implementation cycles, and more integrations shipped for end users.

Read more

Alli Pope
https://vercel.com/changelog/vlt-is-now-available-in-builds-via-zero-configuration vlt is now available in builds via zero configuration 2025-08-08T13:00:00.000Z

Vercel now supports the vlt package manager in builds with zero configuration.

Starting today, projects that contain a vlt-lock.json file will automatically use vlt install as the default Install Command.

vlt requires Node.js 20.x and is only available in the modern build image.

Learn more about package manager support on Vercel.

Read more

Luke Phillips-Sheard
https://vercel.com/changelog/bulk-upgrade-deprecated-node-js-versions Bulk upgrade deprecated Node.js versions 2025-08-08T13:00:00.000Z

Team owners and members can now upgrade all projects using Node.js 18 or earlier to Node.js 22 with one click in the Vercel Dashboard.

This updates the Node.js version in project settings. If your project also defines a version in package.json, you'll need to update it manually. Existing deployments are not affected.

View and upgrade deprecated Node.js projects now.

Read more

Ali Smesseim
https://vercel.com/changelog/improved-metrics-search-in-observability-plus Improved metrics search in Observability Plus 2025-08-08T13:00:00.000Z

We’ve improved the metrics search and navigation experience in Vercel Observability, making it faster and easier to build custom queries.

You can now:

  • Quickly find metrics by typing partial names or common abbreviations like TTFB for "time to first byte"

  • Browse all available metrics for an event in a side-by-side view

  • Use keyboard shortcuts for faster navigation

  • Access an optimized interface on mobile devices

These updates are available now for all teams with Observability Plus.

Try it out or learn more about Observability and Observability Plus.

Read more

Tobias Lins Timo Lins
https://vercel.com/blog/vercel-collaborates-with-openai-for-gpt-5-launch Vercel collaborates with OpenAI for GPT-5 launch 2025-08-07T13:00:00.000Z

The GPT-5 family of models, released today, is now available through AI Gateway and in production on v0.dev. Thanks to OpenAI, Vercel has been testing these models over the past few weeks in v0, Next.js, AI SDK, and Vercel Sandbox.

From our testing, GPT-5 is noticeably better at frontend design than previous models. It generates polished, balanced UIs with clean, composable code. Internally, we’ve already started using GPT-5 for Vercel's in-dashboard Agent and for v0.dev/gpt-5. GPT-5 shows strong performance in agent-based workflows: its long-context reasoning and ability to handle multiple tools in parallel have been especially effective in powering Vercel Agent.

Read more

Aparna Sinha Harpreet Arora Javi Velasco Walter Korman Gaspar Garcia Janos Szathmary
https://vercel.com/blog/gartner-mq-visionary-2025 Vercel is the only vendor to be recognized as a Visionary in the 2025 Gartner® Magic Quadrant™ for Cloud-Native Application Platforms 2025-08-07T13:00:00.000Z

At Vercel, we're building the platform that delivers every pixel and token, and powers every frontend, backend, and agent on the web. With more than 4M weekly active domains and 115B weekly requests served by Vercel, the most forward-thinking teams are choosing Vercel’s AI Cloud to deliver fast, secure, full-stack applications with zero friction, infinite scale, and complete developer freedom. We’re proud to be the only vendor named a Visionary in the 2025 Gartner® Magic Quadrant™ for Cloud Native Application Platforms. We believe that this recognition serves as validation: the future of the web is being built on Vercel.

Read more

Jeanne Grosser
https://vercel.com/changelog/gpt-5-gpt-5-mini-and-gpt-5-nano-are-now-available-in-vercel-ai-gateway GPT-5, GPT-5-mini, and GPT-5-nano are now available in Vercel AI Gateway 2025-08-07T13:00:00.000Z

You can now access GPT-5, GPT-5-mini, and GPT-5-nano by OpenAI, models designed to push the frontier of reasoning and domain expertise, using Vercel's AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to openai/gpt-5, openai/gpt-5-mini, or openai/gpt-5-nano:
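
A minimal sketch of those two steps with AI SDK v5 (the prompt is illustrative, and an AI Gateway API key or Vercel OIDC token is assumed to be configured):

```typescript
// npm install ai
import { streamText } from 'ai';

// AI SDK v5 routes plain model strings through AI Gateway,
// so switching models is a one-string change.
const result = streamText({
  model: 'openai/gpt-5', // or 'openai/gpt-5-mini' / 'openai/gpt-5-nano'
  prompt: 'Design a balanced hero section for a developer tools landing page.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```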

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway and view the new AI Gateway model leaderboard.

Read more

Walter Korman Harpreet Arora Jeremy Philemon Rohan Taneja Josh Singh Sam Chitgopekar Josh Lipman
https://vercel.com/blog/introducing-vercel-mcp-connect-vercel-to-your-ai-tools Introducing Vercel MCP: Connect Vercel to your AI tools 2025-08-06T13:00:00.000Z

Today, we're launching the official Vercel MCP server, now in Public Beta. Vercel MCP is a secure, OAuth-compliant interface that lets AI clients interact with your Vercel projects.

AI tools are becoming a core part of the developer workflow, but they've lacked secure, structured access to infrastructure like Vercel. With Vercel MCP, supported tools like Cursor and Claude can securely access logs, docs, and project metadata directly from within your development environment or AI assistant.

Read more

Allen Zhou Brooke Mosby Mark Roberts Andrew Qu Anthony Shew Tom Knickman Aparna Sinha
https://vercel.com/changelog/introducing-ai-elements Introducing AI Elements: Prebuilt, composable AI SDK components 2025-08-06T13:00:00.000Z

AI Elements is a new open source library of customizable React components for building interfaces with the Vercel AI SDK.

Built on shadcn/ui, it provides full control over UI primitives like message threads, input boxes, reasoning panels, and response actions.

For example, you can use useChat from the AI SDK to manage state and streaming, and render responses using AI Elements.

Getting started

To install the components, you can initialize with our CLI, and pick your components, import them, and start building.

Read the docs and start building better AI interfaces, faster.

Read more

Hayden Bleasel Ryan Haraki
https://vercel.com/changelog/microfrontends-support-is-now-in-public-beta Microfrontends support is now in Public Beta 2025-08-06T13:00:00.000Z

Microfrontends support is now available in Public Beta. Microfrontends allow you to split large applications into smaller ones so that developers can move more quickly.

This support lets teams and large apps build and test independently, while Vercel assembles and routes the app into a single experience. This reduces build times, supports parallel development, and enables gradual legacy migration.

Developers can use the Vercel Toolbar to iterate and test their apps independently, while navigations between microfrontends benefit from prefetching and prerendering for fast transitions between the applications.

To get started with microfrontends, clone one of our examples or follow the quickstart guide:

  1. In the Vercel dashboard, navigate to the Microfrontends tab in Settings

  2. Create a microfrontends group containing all of your microfrontend projects

  3. Add the @vercel/microfrontends package to each microfrontend application

  4. Add a microfrontends.json configuration file to the default app, test in Preview, and deploy to Production when ready

Learn more about microfrontends in our docs, or contact Vercel or your account team directly for more information.

Read more

Mark Knichel Kit Foster Tom Knickman Justin Kropp
https://vercel.com/changelog/claude-4-1-opus-is-now-supported-in-vercel-ai-gateway Claude Opus 4.1 is now supported in Vercel AI Gateway 2025-08-05T13:00:00.000Z

You can now access Claude Opus 4.1, a new model released today, using Vercel's AI Gateway with no other provider accounts required. This release from Anthropic improves agentic task execution, real-world coding, and reasoning over the previous Opus 4 model.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to anthropic/claude-4.1-opus:
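
A sketch of those steps with AI SDK v5 (the prompt is illustrative; an AI Gateway API key or Vercel OIDC token is assumed):

```typescript
// npm install ai
import { generateText } from 'ai';

// The model string routes through AI Gateway — no Anthropic account needed.
const { text } = await generateText({
  model: 'anthropic/claude-4.1-opus',
  prompt: 'Review this diff for correctness and suggest improvements.',
});

console.log(text);
```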

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance and reliability to Claude Opus 4.1, AI Gateway leverages multiple model providers under the hood, including Anthropic and Bedrock.

Learn more about AI Gateway and view the new AI Gateway model leaderboard.

Read more

Walter Korman Harpreet Arora Rohan Taneja Josh Lipman
https://vercel.com/changelog/gpt-oss-20b-and-gpt-oss-120b-are-now-supported-in-vercel-ai-gateway gpt-oss-20b and gpt-oss-120b are now supported in Vercel AI Gateway 2025-08-05T13:00:00.000Z

You can now access gpt-oss-20b and gpt-oss-120b by OpenAI, open-weight reasoning models designed to push the open model frontier, using Vercel's AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to either openai/gpt-oss-20b or openai/gpt-oss-120b:
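
A sketch of those steps with AI SDK v5 (illustrative prompt; an AI Gateway key is assumed):

```typescript
// npm install ai
import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-oss-120b', // or 'openai/gpt-oss-20b'
  prompt: 'Walk through your reasoning: is 9973 prime?',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```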

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance and reliability to gpt-oss, AI Gateway leverages multiple model providers under the hood, including Groq, Baseten, Cerebras, and Huggingface.

Learn more about AI Gateway and view the new AI Gateway model leaderboard.

Read more

Walter Korman Harpreet Arora Jeremy Philemon Rohan Taneja Josh Singh Sam Chitgopekar Josh Lipman
https://vercel.com/blog/v0-vibe-coding-securely v0: vibe coding, securely 2025-08-04T13:00:00.000Z

Vibe coding has changed how software gets built. Tools like v0 make it possible to turn ideas into working prototypes in seconds. Anthropic's CEO predicts 90% of code will be AI-generated in 3-6 months. Adoption is accelerating fast, and for many builders, we're already there.

But here's the uncomfortable truth: the faster you build, the more risk you create.

Last week, a viral app leaked 72k selfies and government IDs. This wasn’t a hack or advanced malware. It was caused by default settings, misused variables, and the absence of guardrails: a misconfigured Firebase bucket mistakenly left public for anyone to access. The app was built quickly, shipped without security review, and went viral.

Read more

Ty Sbano Liz Hurder Kevin Corbett
https://vercel.com/changelog/vercels-mcp Vercel MCP now in Public Beta 2025-08-04T13:00:00.000Z

Vercel's official MCP (Model Context Protocol) server is now live at mcp.vercel.com in Public Beta. The server provides a remote interface with OAuth-based authorization that lets AI tools securely interact with your Vercel projects.

The server integrates with AI assistants, such as Claude.ai, Claude Code and Claude for desktop, and tools like VS Code, to:

  • Search and navigate Vercel documentation

  • Manage projects and deployments

  • Analyze deployment logs

Vercel MCP fully implements the latest MCP Authorization and Streamable HTTP specifications for enhanced security and performance.

This update enhances collaboration between AI-driven workflows and Vercel ecosystems.

For more details, read the documentation.

Read more

Mark Roberts Allen Zhou Andrew Qu Brooke Mosby Anthony Shew Tom Knickman Aparna Sinha
https://vercel.com/changelog/new-custom-visualization-in-vercel-observability New custom visualization in Vercel Observability 2025-08-04T13:00:00.000Z

Observability Plus users can now choose between line charts, volume charts, tables, or a big number when visualizing data returned by queries. Both the queries and their visualization settings can be saved to shareable notebooks.

This update replaces fixed presets with customizable controls and is available now at no extra cost for teams on Observability Plus.

Try it out or learn more about Observability and Observability Plus.

Read more

Damien Simonin Feugas Timo Lins
https://vercel.com/blog/shipped-on-vercel A new wave of software, shipped on Vercel 2025-08-01T13:00:00.000Z

Shipped on Vercel showcases real apps in production on Vercel, built by teams rethinking how the web works.

Read more

Reem Ateyeh Alli Pope
https://vercel.com/changelog/deploy-hono-backends-with-zero-configuration Deploy Hono backends with zero configuration 2025-08-01T13:00:00.000Z

Vercel now natively supports Hono, a fast, lightweight backend framework built on web standards, with zero configuration.
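
For instance, a minimal Hono app (the entry file name here is an assumption; any standard Hono entry point that default-exports the app works):

```typescript
// src/index.ts
import { Hono } from 'hono';

const app = new Hono();

// Routes are plain web-standard handlers.
app.get('/', (c) => c.text('Hello from Hono on Vercel!'));
app.get('/api/users/:id', (c) => c.json({ id: c.req.param('id') }));

// Vercel's zero-configuration support picks up the default export.
export default app;
```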

With the code above, use Vercel CLI to develop and deploy your Hono application:

With this improved integration, Vercel's framework-defined infrastructure now recognizes and deeply understands Hono applications, ensuring they benefit from optimizations across builds, deployments, and application delivery.

Now, new Hono applications deployed to Vercel benefit from Fluid compute, with Active CPU pricing, automatic cold start optimizations, background processing, and much more.

Deploy Hono on Vercel or visit Hono's Vercel documentation.

Read more

Jeff See
https://vercel.com/blog/summer-2025-oss-program Vercel Open Source Program: Summer cohort 2025-07-31T13:00:00.000Z

In April, we launched the Vercel Open Source Program, a developer initiative that gives maintainers the resources, credits, and support they need to ship faster and scale confidently, starting with the spring 2025 cohort.

We're now honored to announce the summer 2025 cohort.

From AI-powered calendars to beautifully styled React Native components, open source builders continue to amaze us. Here are the 28 projects from the summer cohort.

Read more

Kap Sev
https://vercel.com/blog/ai-sdk-5 AI SDK 5 2025-07-31T13:00:00.000Z

With over 2 million weekly downloads, the AI SDK is the leading open-source AI application toolkit for TypeScript and JavaScript. Its unified provider API allows you to use any language model and enables powerful integrations into leading web frameworks.

Read more

Lars Grammel Nico Albanese Josh Singh
https://vercel.com/blog/join-the-v0-ambassador-program Join the v0 Ambassador Program 2025-07-29T13:00:00.000Z

Since launch, we’ve seen a growing wave of people building with v0 and sharing what they’ve created, from full-stack apps to UI experiments.

Now, we’re going a step further by sponsoring builders innovating and showcasing what’s possible with v0.

Today we’re launching the v0 Ambassador Program as a way to recognize and enable members of our community who create, share, and inspire.

Apply to join the v0 Ambassador Program and help others discover the magic of what's possible with v0.

Read more

Alli Pope Esteban Suárez
https://vercel.com/changelog/z-ais-glm-4-5-and-glm-4-5-air-are-now-supported-in-vercel-ai-gateway Z.ai's GLM-4.5 and GLM-4.5 Air are now supported in Vercel AI Gateway 2025-07-29T13:00:00.000Z

You can now access GLM-4.5 and GLM-4.5 Air, new flagship models from Z.ai designed to unify frontier reasoning, coding, and agentic capabilities, using Vercel's AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to either zai/glm-4.5 or zai/glm-4.5-air:
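
A sketch of those steps with AI SDK v5 (illustrative prompt; an AI Gateway key is assumed):

```typescript
// npm install ai
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'zai/glm-4.5', // or 'zai/glm-4.5-air'
  prompt: 'Plan the steps an agent should take to add dark mode to a web app.',
});

console.log(text);
```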

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

Learn more about AI Gateway.

Read more

Walter Korman Harpreet Arora
https://vercel.com/blog/fluid-how-we-built-serverless-servers Fluid: How we built serverless servers 2025-07-28T13:00:00.000Z

A few months ago, we announced Fluid compute, an approach to serverless computing that uses resources more efficiently, minimizes cold starts, and significantly reduces costs. More recently at Vercel Ship 2025, we introduced Active CPU pricing for even more cost-effective compute on Vercel.

Fluid compute with Active CPU pricing powers over 45 billion weekly requests, saving customers up to 95% and never charging CPU rates for idle time.

Behind the scenes, it took over two years to build the required infrastructure to make this possible.

Read more

Tom Lienard
https://vercel.com/changelog/generate-shareable-snapshots-of-observability-charts Generate shareable snapshots of Observability charts 2025-07-28T13:00:00.000Z

You can now quickly share snapshots of any chart in Vercel Observability, making it easier to collaborate during debugging and incident response.

Hover over a chart and press ⌘+C or Ctrl+C to copy a URL that opens a snapshot of the chart in Vercel Observability. The snapshot includes the same time range, filters, and settings as when copied.

The link includes a preview image of the chart that unfurls in tools like Slack and Teams. Share links are public to ease sharing, but unguessable and ignored by search robots.

Try it out or learn more about Observability and Observability Plus.

Read more

Vincent Voyer
https://vercel.com/blog/model-context-protocol-mcp-explained Model Context Protocol (MCP) explained: An FAQ 2025-07-25T13:00:00.000Z

Model Context Protocol (MCP) is a new standard for how large language models (LLMs) access data and systems, extending what they can do beyond their training data. It standardizes how developers expose data sources, tools, and context to models and agents, enabling safe, predictable interactions and acting as a universal connector between AI and applications.

Instead of building custom integrations for every AI platform, developers can create an MCP server once and use it everywhere.

Read more

Dan Fein Andrew Qu
https://vercel.com/blog/vercel-and-solara6-partner-to-build-better-ecommerce-experiences Vercel and Solara6 partner to build better ecommerce experiences 2025-07-25T13:00:00.000Z

Vercel is partnering with Solara6, a digital agency known for building high-performing ecommerce experiences for customers like Kate Spade, Coach, and Mattress Firm.

Their work emphasizes AI-powered efficiencies, fast iteration cycles, and user experience, while prioritizing measurable outcomes. Solara6 customers see improvements in their developer velocity, operational costs, page load times, conversion rates, and organic traffic.

Read more

Grace Roehl
https://vercel.com/changelog/qwen3-coder-is-now-supported-in-vercel-ai-gateway Qwen3-Coder is now supported in Vercel AI Gateway 2025-07-25T13:00:00.000Z

You can now access Qwen3 Coder, a model from QwenLM, an Alibaba Cloud company, designed to handle complex, multi-step coding workflows, using Vercel's AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to alibaba/qwen3-coder:
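
A sketch of those steps with AI SDK v5 (illustrative prompt; an AI Gateway key is assumed):

```typescript
// npm install ai
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'alibaba/qwen3-coder',
  prompt: 'Write a TypeScript function that deduplicates an array while preserving order.',
});

console.log(text);
```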

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance and reliability to Qwen3 Coder, AI Gateway leverages multiple model providers under the hood, including Cerebras, DeepInfra, and Parasail.

Learn more about AI Gateway.

Read more

Walter Korman Harpreet Arora
https://vercel.com/changelog/growthbook-joins-the-vercel-marketplace GrowthBook joins the Vercel Marketplace 2025-07-24T13:00:00.000Z

GrowthBook, the open-source experimentation platform, is now available as a native integration on the Vercel Marketplace. Easily add feature flags and A/B testing to your Vercel projects with minimal setup.

With GrowthBook on Vercel, you can:

  • Declare flags in code using Flags SDK and the @flags-sdk/growthbook adapter

  • Sync feature flags directly to Vercel Edge Config, powering low latency evaluation

  • Bring your own data using GrowthBook’s warehouse-native A/B testing platform

Explore the Template to view and deploy the example, with one-click setup and unified billing.

Read more

Hedi Zandi
https://vercel.com/blog/build-your-own-ai-app-builder-with-the-v0-platform-api Build your own AI app builder with the v0 Platform API 2025-07-23T13:00:00.000Z

The v0 Platform API is a text-to-app API that gives developers direct access to the same infrastructure powering v0.dev.

Currently in beta, the platform API exposes a composable interface for developers to automate building web apps, integrate code generation into existing features, and build new products on top of LLM-generated UIs.

Read more

Chris Tate Alli Pope
https://vercel.com/changelog/botid-now-available-for-all-frameworks Vercel BotID now available for all frameworks 2025-07-22T13:00:00.000Z

You can now use Vercel BotID to protect your most sensitive endpoints in any JavaScript framework, like SvelteKit and Nuxt.

BotID is our advanced bot protection for high-value endpoints like registration, checkout, and AI interactions. Since launch, it has already protected nearly a million API requests.

Installing or upgrading to [email protected] adds support for universal JavaScript environments with the new initBotId({ protect: ... }) function.

Here's an example of initBotId used to set up BotID in SvelteKit:
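
A hedged sketch of that setup (the file location, import path, and protected routes are assumptions; the BotID documentation has the canonical per-framework setup):

```typescript
// src/hooks.client.ts — illustrative location for SvelteKit
import { initBotId } from 'botid/client/core';

// Register the endpoints BotID should protect. The paths and
// methods below are examples, not a required configuration.
initBotId({
  protect: [
    { path: '/api/checkout', method: 'POST' },
    { path: '/api/signup', method: 'POST' },
  ],
});
```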

Check out the updated documentation for setup instructions across all supported frameworks.

Read more

Elliott Johnson Andrew Qu
https://vercel.com/changelog/transform-rules-are-now-available-in-vercel-json Transform rules are now available in vercel.json 2025-07-22T13:00:00.000Z

You can now define transform rules in vercel.json to modify HTTP request and response headers or query parameters, without changing application code.

Unlimited transform rules are available for all customers, and let you:

  • Set, append, or delete request headers, response headers, and query parameters

  • Use conditional logic to apply changes based on request metadata

  • Match by equality, inequality, prefixes, suffixes, inclusion in string arrays, or numeric comparisons for fine-grained control

This expands the flexibility of Vercel's CDN, which already supports routing behavior like redirects and rewrites to external origins.

For example:
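
A hedged illustration of the shape a transform rule can take in vercel.json (field names are an approximation; the transform rules documentation has the exact schema):

```json
{
  "transforms": [
    {
      "type": "request.headers",
      "op": "set",
      "target": { "key": "x-deployment-region" },
      "args": "iad1"
    }
  ]
}
```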

Refer to the transform rules documentation for detailed examples.

Read more

Charlie Meyer
https://vercel.com/changelog/openai-compatible-api-endpoints-now-supported-in-ai-gateway OpenAI-compatible API endpoints now supported in AI Gateway 2025-07-21T13:00:00.000Z

You can now use OpenAI-compatible client libraries and tools with AI Gateway through a simple URL change, giving you access to hundreds of models with no code rewrites required.

Here is a Python example with the OpenAI client library:
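
A sketch of that pattern using the official OpenAI Python client (the gateway URL follows the AI Gateway docs; the model string and prompt are illustrative):

```python
# pip install openai
import os

from openai import OpenAI

# Same client, different base_url: point it at AI Gateway's
# OpenAI-compatible endpoint and authenticate with a Gateway API key.
client = OpenAI(
    api_key=os.environ["AI_GATEWAY_API_KEY"],
    base_url="https://ai-gateway.vercel.sh/v1",
)

response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",  # any model string the Gateway supports
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response.choices[0].message.content)
```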

This makes it easy to keep your current tools and workflows while improving uptime, tokens per minute, quotas, and reliability via provider failover and adding observability through the AI Gateway.

Learn more in the AI Gateway docs and see more examples here.

Read more

Walter Korman Harpreet Arora
https://vercel.com/changelog/open-vercel-documentation-pages-in-ai-providers Open Vercel documentation pages in AI providers 2025-07-18T13:00:00.000Z

You can now copy Vercel documentation pages as markdown, or open them directly in v0, Claude or ChatGPT.

This allows you to use documentation content as context when working with AI tools. Visit any documentation page and use the dropdown in the top right of the page.

Using the copy page dropdown

  • Navigate to any documentation page

  • Click the copy page dropdown in the top right corner

  • Select your provider or copy as markdown

The page content will be formatted and loaded into the selected AI provider.

Read more

Rich Haines
https://vercel.com/blog/grep-a-million-github-repositories-via-mcp Grep a million GitHub repositories via MCP 2025-07-17T13:00:00.000Z

Grep now supports the Model Context Protocol (MCP), enabling AI apps to query a million public GitHub repositories using a standard interface. Whether you're building in Cursor, using Claude, or integrating your own agent, Grep can now serve as a searchable code index over HTTP.

Read more

Dan Fox Andrew Qu
https://vercel.com/changelog/moonshot-ai-kimi-k2-model-is-now-supported-in-vercel-ai-gateway Moonshot AI's Kimi K2 model is now supported in Vercel AI Gateway 2025-07-15T13:00:00.000Z

You can now access Kimi K2, a new mixture-of-experts (MoE) language model from Moonshot AI, using Vercel's AI Gateway with no other provider accounts required.

AI Gateway lets you call the model with a consistent unified API and just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime.

To use it with the AI SDK v5, start by installing the package:

Then set the model to moonshotai/kimi-k2:
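
A sketch of those steps with AI SDK v5 (illustrative prompt; an AI Gateway key is assumed):

```typescript
// npm install ai
import { streamText } from 'ai';

const result = streamText({
  model: 'moonshotai/kimi-k2',
  prompt: 'Summarize the trade-offs of mixture-of-experts architectures.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```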

Includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

To deliver high performance and reliability to Kimi K2, AI Gateway leverages multiple model providers under the hood, including direct to Moonshot AI, Groq, DeepInfra, Fireworks AI, and Parasail.

Learn more about AI Gateway.

Read more

Walter Korman
https://vercel.com/changelog/oauth-support-added-to-mcp-adapter OAuth support added to MCP Adapter 2025-07-15T13:00:00.000Z

Secure your MCP servers with OAuth using version 1.0.0 of the MCP Adapter, which now includes official support for the MCP Authorization spec. This release introduces:

  • Helper functions for OAuth-compliant authorization flows

  • A new withMcpAuth wrapper for securing routes

  • One-click deployable examples with popular auth providers like Better Auth, Clerk, Descope, Stytch, and WorkOS

Here’s an example of how to integrate auth in your MCP server:
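
A hedged sketch of that integration (the route path, tool definition, and token check are illustrative; option names follow our reading of the MCP Adapter docs):

```typescript
// npm install @vercel/mcp-adapter zod
// app/api/[transport]/route.ts — illustrative Next.js route location
import { createMcpHandler, withMcpAuth } from '@vercel/mcp-adapter';
import { z } from 'zod';

const handler = createMcpHandler((server) => {
  server.tool('echo', { message: z.string() }, async ({ message }) => ({
    content: [{ type: 'text', text: message }],
  }));
});

// Verify the bearer token with your auth provider; return auth info
// for valid tokens, or undefined to reject the request.
const verifyToken = async (req: Request, bearerToken?: string) => {
  if (bearerToken !== process.env.MCP_ACCESS_TOKEN) return undefined;
  return { token: bearerToken, clientId: 'example-client', scopes: ['read'] };
};

const authHandler = withMcpAuth(handler, verifyToken, { required: true });

export { authHandler as GET, authHandler as POST };
```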

Additionally, use the protectedResourceHandler to expose resource server metadata for compliant clients. Learn more in the MCP Auth documentation.

Start building secure MCP servers

Deploy an example MCP server by cloning our Next.js MCP template, or explore starter integrations from our auth partners:

Read more

Allen Zhou Andrew Qu
https://vercel.com/changelog/search-any-public-github-repo-with-grep Search any public GitHub repo with Grep 2025-07-14T13:00:00.000Z

You can now use Grep to search any public repository on GitHub, no longer limited to the 1M+ pre-indexed repos.

To search a specific repo, use grep.app/[owner]/[repo].

For example: visit grep.app/vercel/ai and start typing a search query (try streamText).

Get quick, full-text and regular expression search across the repo without any setup.

Read more

Dan Fox
https://vercel.com/changelog/clerk-joins-the-vercel-marketplace Clerk joins the Vercel Marketplace 2025-07-14T13:00:00.000Z

Clerk is now available as an authentication provider on the Vercel Marketplace.

Built for modern frameworks like Next.js, Clerk simplifies authentication while giving teams full control over UI, sessions, and user roles, all tightly integrated with Vercel’s deployment model.

With the integration, you get access to:

  • Instant provisioning of Clerk apps from your Vercel dashboard

  • Complete user management with hosted dashboards, sessions, and roles

  • Built-in and scalable billing and subscription management

Get started with Clerk on the Vercel Marketplace.

Read more

Dima Voytenko Hedi Zandi
https://vercel.com/changelog/more-secure-deployment-protection More Secure Deployment Protection 2025-07-14T13:00:00.000Z

Deployment Protection safeguards preview and production URLs so that users can't access the domains that you don't want them to. Starting today, the Standard Deployment Protection option has been updated for new projects to protect all automatically generated domains, including the production branch git domain (for example project-git-main.vercel.app). Existing projects can update to this new behavior in the Project settings page in the Vercel dashboard.

Read more

Kit Foster
https://vercel.com/blog/the-ai-cloud-a-unified-platform-for-ai-workloads The AI Cloud: A unified platform for AI workloads 2025-07-10T13:00:00.000Z

For over a decade, Vercel has helped teams develop, preview, and ship everything from static sites to full-stack apps. That mission shaped the Frontend Cloud, now relied on by millions of developers and powering some of the largest sites and apps in the world.

Now, AI is changing what and how we build. Interfaces are becoming conversations and workflows are becoming autonomous.

We've seen this firsthand while building v0 and working with AI teams like Browserbase and Decagon. The pattern is clear: developers need expanded tools, new infrastructure primitives, and even more protections for their intelligent, agent-powered applications.

At Vercel Ship, we introduced the AI Cloud: a unified platform that lets teams build AI features and apps with the right tools to stay flexible, move fast, and be secure, all while focusing on their products, not infrastructure.

Read more

Dan Fein
https://vercel.com/changelog/vercel-blob-now-available-in-all-vercel-regions Vercel Blob now available in all Vercel Regions 2025-07-10T13:00:00.000Z

You can now create Vercel Blob stores in any of the 19 Vercel Regions.

Selecting a region closer to your Functions and users lets you optimize upload speed and comply with data-residency requirements.

Selecting a region is available at creation time in the Vercel dashboard or when using the Vercel CLI (version 44.3.0).

Learn more about Vercel Blob in the documentation.

Read more

Luis Meyer Vincent Voyer
https://vercel.com/changelog/v0-platform-api-now-in-beta v0 Platform API now in beta 2025-07-09T13:00:00.000Z

The v0 Platform API, a text-to-app API, is now available in public beta. It provides programmatic access to v0’s app generation pipeline:

  • Generating code for web apps from prompts

  • Structured parsing of generated code

  • Automatic error fixing

  • Link with a rendered preview

This API also supports programmatic control of v0.dev, including creating and managing both chats and projects. We'll be bringing more of v0.dev's functionality into the Platform API soon.

The v0 Platform API is designed for integration into development workflows, automation scripts, and third-party tools. Check out our TypeScript SDK and documentation to get started.

Read more

Chris Tate Nicolás Montone Fernando Rojo
https://vercel.com/changelog/web-application-firewall-control-now-available-with-vercel-json Web Application Firewall control now available with vercel.json 2025-07-09T13:00:00.000Z

You can now control Vercel’s Web Application Firewall (WAF) actions directly in vercel.json, alongside existing support in the dashboard, API, and Terraform.

This approach provides a structured way for both developers and agents to declaratively define and push rules to projects. Agents can use code-generating prompts to author new rules that are easily injected into the project’s vercel.json.

The has and missing matchers have also been enhanced to support more expressive conditions across headers, rewrites, redirects, and routes. Matching options include:

  • String equality and inequality

  • Regular expressions

  • Prefixes and suffixes

  • Inclusion and exclusion from string arrays

  • Numeric comparisons

The following example shows how to deny a request whose header value starts with a specific prefix:
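
A hedged sketch of what such a rule can look like in vercel.json (rule and condition field names are an approximation of the WAF schema; treat this as illustrative and check the docs for the exact shape):

```json
{
  "firewall": {
    "rules": [
      {
        "name": "deny-staging-header",
        "action": "deny",
        "conditionGroup": [
          {
            "conditions": [
              {
                "type": "header",
                "key": "x-internal-key",
                "op": "pre",
                "value": "staging-"
              }
            ]
          }
        ]
      }
    ]
  }
}
```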

Read more about Vercel's WAF and configuring WAF rules in vercel.json.

Read more

Charlie Meyer
https://vercel.com/changelog/inngest-joins-the-vercel-marketplace Inngest joins the Vercel Marketplace 2025-07-09T13:00:00.000Z

You can now install Inngest directly from the Vercel Marketplace to quickly build reliable background jobs and AI workflows for your Next.js app.

Inngest is a great fit for adding AI features or emerging agentic patterns to your Vercel projects:

  • Write background jobs directly in your `app/` directory

  • Full support for preview environments and branching

  • One-click install and integrated billing with a generous free tier (100K executions/month)

Start building workflows with Inngest on the Vercel Marketplace today.

Read more

Hedi Zandi
https://vercel.com/blog/nuxtlabs-joins-vercel NuxtLabs joins Vercel 2025-07-08T13:00:00.000Z

NuxtLabs, creators and stewards of Nitro and Nuxt, are joining Vercel.

Read more

Guillermo Rauch
https://vercel.com/changelog/sandbox-now-supports-sudo-and-installing-rpm-packages Sandbox now supports sudo and installing RPM packages 2025-07-04T13:00:00.000Z

You can now run commands with sudo inside Vercel Sandbox, giving you full control over runtime environment setup, just like on a traditional Linux system.

This makes it possible to install system dependencies at runtime, like Go, Python packages, or custom binaries, before executing your code.

sudo is available via the runCommand method:
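
The SDK call needs a live Sandbox, so the sketch below only shapes the options object; the option names (cmd, args, sudo, env) are assumptions based on this post, and the commented call shows roughly where they would be used:

```typescript
// Sketch only: these option names are assumptions based on this changelog entry.
type RunCommandOptions = {
  cmd: string;                  // executable to run inside the sandbox
  args?: string[];              // arguments passed to the command
  sudo?: boolean;               // run as root (HOME=/root, PATH preserved)
  env?: Record<string, string>; // custom variables passed through to the command
};

// Install Go via the system package manager before executing user code
const installGo: RunCommandOptions = {
  cmd: "dnf",
  args: ["install", "-y", "golang"],
  sudo: true,
  env: { FOO: "bar" },
};

// With the Sandbox SDK this would be invoked roughly as:
//   const sandbox = await Sandbox.create();
//   await sandbox.runCommand(installGo);
```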

The sandbox sudo configuration is designed to be easy to use:

  • PATH is preserved

  • HOME is set to /root

  • Custom environment variables like env: { FOO: "bar" } are passed through

With sudo on Sandbox, it's easier to run untrusted code in isolated environments with the right permissions, and no workarounds are required.

Learn more about Vercel Sandbox and sudo in the documentation.

Read more

Laurens Duijvesteijn Javi Velasco Guðmundur Bjarni Ólafsson
https://vercel.com/changelog/correlate-logs-and-traces-with-opentelemetry-in-vercel-log-drains Correlate logs and traces with OpenTelemetry in Vercel Log Drains 2025-07-04T13:00:00.000Z

Vercel now automatically correlates logs with distributed traces for customers using OpenTelemetry to instrument their applications.

Traces are a way to collect data about the performance and behavior of your application and help identify the cause of performance issues, errors, and other problems. OpenTelemetry (OTel) is an open source project that allows you to instrument your application to collect traces.

When a request is traced using OTel, Vercel will enrich the relevant logs with trace and span identifiers. This allows you to correlate your individual logs to a trace or span.

This feature is available to customers using log drains through our integrations with Datadog and Dash0. No action is required and log to trace correlation will happen automatically going forward for customers using OTel with any of these integrations.

Learn more about correlating logs to traces using log drains.

Read more

Darpan Kakadia
https://vercel.com/changelog/cve-2025-49005 CVE-2025-49005 2025-07-03T13:00:00.000Z

Summary

A cache poisoning vulnerability affecting Next.js App Router >=15.3.0 < 15.3.3 / Vercel CLI 41.4.1–42.2.0 has been resolved. The issue allowed page requests for HTML content to return a React Server Component (RSC) payload instead under certain conditions. When deployed to Vercel, this would only impact the browser cache, and would not lead to the CDN being poisoned. When self-hosted and deployed externally, this could lead to cache poisoning if the CDN does not properly distinguish between RSC / HTML in the cache keys.

Impact

Under specific conditions involving App Router, middleware redirects, and omitted Vary headers, applications may:

  • Serve RSC payloads in place of HTML

  • Cache these responses at the browser or CDN layer

  • Display broken or incorrect client content

This issue occurs in environments where middleware rewrites or redirects result in improper cache key separation, because the cache-busting parameter added by the framework is stripped by the user’s redirect.

Resolution

The issue was resolved in Next.js 15.3.3 by:

  • Ensuring the Vary header is correctly set to distinguish between different content types

Customers hosting on Vercel with deployments that used the impacted CLI versions must redeploy their applications to receive the fix.

Workarounds

  • Manually add the Vary header on RSC responses to differentiate between RSC and HTML payloads. Specifically, Vary: RSC, Next-Router-State-Tree, Next-Router-Prefetch.

  • Apply a unique cache-busting search parameter to the middleware redirect destination
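
For the first workaround, a small helper (a sketch, not a Next.js API) can merge the required values into an existing Vary header before setting it on RSC responses:

```typescript
// Sketch: merge the RSC-distinguishing values into an existing Vary header.
const REQUIRED_VARY = ["RSC", "Next-Router-State-Tree", "Next-Router-Prefetch"];

function withRscVary(existing: string | null): string {
  const current = (existing ?? "")
    .split(",")
    .map((v) => v.trim())
    .filter(Boolean);
  for (const value of REQUIRED_VARY) {
    // Header names and values here are compared case-insensitively
    if (!current.some((v) => v.toLowerCase() === value.toLowerCase())) {
      current.push(value);
    }
  }
  return current.join(", ");
}
```

For example, withRscVary("Accept-Encoding") yields "Accept-Encoding, RSC, Next-Router-State-Tree, Next-Router-Prefetch", which you would then set with response.headers.set("Vary", ...).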

Credit

Thanks to internal incident response teams and affected Vercel customers for timely reports and debugging assistance.

References

Read more

Aaron Brown Zack Tanner
https://vercel.com/changelog/cve-2025-49826 CVE-2025-49826 2025-07-03T13:00:00.000Z

Summary

A vulnerability affecting Next.js has been addressed. It impacted versions >=15.1.0 <15.1.8 and involved a cache poisoning bug leading to a Denial of Service (DoS) condition.

Impact

This issue does not impact customers hosted on Vercel.

Under certain conditions, this issue may allow an HTTP 204 response to be cached for static pages, leading to the 204 response being served to all users attempting to access the page.

This issue required the following conditions to be exploitable:

  • Using an affected version of Next.js, and:

    • A route using cache revalidation with ISR (next start or standalone mode); and

    • A route using SSR, with a CDN configured to cache 204 responses.

Resolution

The issue was resolved by removing the problematic code path that would have caused the 204 response to be set. Additionally, we removed the race condition that could have led to this cache poisoning by no longer relying on a shared response object to populate the Next.js response cache.

Credit

Thanks to Allam Rachid (zhero) and Allam Yasser (inzo_) for responsible disclosure.

References

Read more

Aaron Brown Zack Tanner
https://vercel.com/changelog/new-usage-dashboard-for-pro-customers New usage dashboard for Pro customers 2025-07-03T13:00:00.000Z

Pro teams can now access a new usage dashboard (recently introduced to Enterprise customers) with improved filtering, detailed breakdowns, and export options to better understand usage and costs by product and project.

You can now break down usage by:

  • Product to quickly identify usage, drill down into spikes, and track costs of a single or set of products

  • Team and project to understand your costs and monitor team activity across all or specific apps

  • CSV export for external analysis in your cost observability tools and spreadsheets

Explore the new dashboard today.

Read more

Christian Pickett Shar Dara Caleb Boyd Chloe Tedder Manuel Muñoz Solera
https://vercel.com/changelog/zero-configuration-support-for-nitro Zero-configuration support for Nitro 2025-07-03T13:00:00.000Z

Vercel now supports Nitro applications, a backend toolkit for building web servers, with zero configuration.

Nitro powers frameworks like Nuxt.js, TanStack Start, and SolidStart.

Deploy Nitro on Vercel or visit Nitro's Vercel documentation.

Read more

Austin Merrick
https://vercel.com/blog/vercel-ship-2025-recap Vercel Ship 2025 recap 2025-06-26T13:00:00.000Z

My first week at Vercel coincided with something extraordinary: Vercel Ship 2025.

Vercel Ship 2025 showcased better building blocks for the future of app development. AI has made this more important than ever. Over 1,200 people gathered in NYC for our third annual event, to hear the latest updates in AI, compute, security, and more.

Read more

Keith Messick
https://vercel.com/changelog/new-webhook-events-for-domain-management New webhook events for domain management 2025-06-26T13:00:00.000Z

You can now subscribe to webhook events for deeper visibility into domain operations on Vercel.

New event categories include:

  • Domain transfers: Track key stages in inbound domain transfers.

  • Domain renewals: Monitor renewal attempts and auto-renew status changes, ideal for catching failures before they impact availability.

  • Domain certificates: Get notified when certificates are issued, renewed, or removed, helping you maintain valid HTTPS coverage across environments.

  • DNS changes: Receive alerts when DNS records are created, updated, or deleted.

  • Project Domain Management: Detect domain lifecycle changes across projects, including creation, updates, verification status, and reassignment.

These events are especially valuable for multi-tenant platforms that dynamically assign domains per user or customer. They also help teams build monitoring and alerting into critical domain and certificate operations.

For details on how to subscribe, visit the webhook documentation.

Read more

Ethan Niser
https://vercel.com/blog/introducing-botid ​Introducing BotID, invisible bot filtering for critical routes 2025-06-25T13:00:00.000Z

Modern sophisticated bots don’t look like bots. They execute JavaScript, solve CAPTCHAs, and navigate interfaces like real users. Tools like Playwright and Puppeteer can script human-like behavior from page load to form submission.

Traditional defenses like checking headers or rate limits aren't enough. Bots that blend in by design are hard to detect and expensive to ignore.

Enter BotID: A new layer of protection on Vercel.

Think of it as an invisible CAPTCHA to stop browser automation before it reaches your backend. It’s built to protect critical routes where automated abuse has real cost, such as checkouts, logins, signups, APIs, or actions that trigger expensive backend operations like LLM-powered endpoints.

Read more

Jen Chang Andrew Qu Dan Fein Kevin Corbett
https://vercel.com/blog/introducing-active-cpu-pricing-for-fluid-compute Introducing Active CPU pricing for Fluid compute 2025-06-25T13:00:00.000Z

Fluid compute exists for a new class of workloads. I/O bound backends like AI inference, agents, MCP servers, and anything that needs to scale instantly, but often remains idle between operations. These workloads do not follow traditional, quick request-response patterns. They’re long-running, unpredictable, and use cloud resources in new ways.

Fluid quickly became the default compute model on Vercel, helping teams cut costs by up to 85% through optimizations like in-function concurrency.

Today, we’re taking the efficiency and cost savings further with a new pricing model: you pay CPU rates only when your code is actively using CPU.

Read more

Dan Fein Mariano Cocirio
https://vercel.com/changelog/vercel-queues-is-now-in-limited-beta Vercel Queues is now in Limited Beta 2025-06-25T13:00:00.000Z

Vercel Queues is a message queue service built for Vercel applications, in Limited Beta.

Vercel Queues lets you offload work by sending tasks to a queue, where they’ll be processed in the background. This means users don’t have to wait for slow operations to finish during a request, and your app can handle retries and failures more reliably.

Under the hood, Vercel Queues uses an append-only log to store messages and ensures tasks such as AI video processing, sending emails, or updating external services are persisted and never lost.

Key features of Vercel Queues:

  • Pub/Sub pattern: Topic-based messaging allowing for multiple consumer groups

  • Streaming support: Handle payloads without loading them entirely into memory

  • Streamlined auth: Automatic authentication via OIDC tokens

  • SDK: TypeScript SDK with full type safety

If you have any questions, let us know in the Vercel Community.

Read more

Joe Haddad Harpreet Arora Pranay Prakash
https://vercel.com/changelog/vercel-agent-now-in-limited-beta Vercel Agent now in Limited Beta 2025-06-25T13:00:00.000Z

Vercel Agent is now available in Limited Beta. Agent is an AI assistant built into the Vercel dashboard that analyzes your app performance and security data.

Agent focuses on Observability, summarizing anomalies, identifying likely causes, and recommending specific actions. These actions can span across the platform, including managing firewall rules in response to traffic spikes or geographic anomalies, and identifying optimization opportunities within your application.

Insights appear contextually as detailed notebooks with no configuration required.

Sign up with Vercel Community, express your interest in participating, and we'll reach out to you.

Read more

Ethan Shea Tom Bremer Timo Lins Adrien Thebo
https://vercel.com/changelog/vercel-botid-is-now-generally-available Vercel BotID is now generally available 2025-06-25T13:00:00.000Z

Vercel BotID is an invisible CAPTCHA with no visible challenges or manual bot management required.

BotID is a new protection layer on Vercel designed for public, high-value routes such as checkouts, signups, AI chat interfaces, LLM-powered endpoints, and public APIs that are targets for sophisticated bots mimicking real user behavior.

Unlike IP-based or heuristic systems, BotID:

  • Silently collects thousands of signals that distinguish human users from bots

  • Mutates these detections on every page load, evading reverse engineering and sophisticated bypasses

  • Streams attack data into a global machine learning mesh, collectively strengthening protection for all customers

Powered by Kasada, BotID integrates into your application with a type-safe SDK:

  • Client-side detection using the <BotIdClient> component

  • Server-side verification with the checkBotId function

  • Automatic labeling of logs and telemetry for blocked sessions
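
As a sketch of the server-side pattern (checkBotId is stubbed here for illustration; the real SDK function verifies the signals collected by the client-side component):

```typescript
// checkBotId is stubbed for illustration; the real SDK function
// inspects the signals collected on the client before rendering.
type BotVerdict = { isBot: boolean };

async function checkBotId(): Promise<BotVerdict> {
  return { isBot: false }; // stub verdict
}

// Protect a high-value route by verifying before doing real work
async function handleCheckout(item: string): Promise<{ status: number; body: string }> {
  const { isBot } = await checkBotId();
  if (isBot) {
    return { status: 403, body: "Access denied" };
  }
  return { status: 200, body: `Order placed for ${item}` };
}
```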

BotID traffic is visible in the Firewall dashboard and can be filtered by verdict (pass or fail), user agent, country, IP address, request path, target path, JA4 digest, and host.

Read the announcement or documentation to learn more, or try BotID today.

Read more

Andrew Qu
https://vercel.com/changelog/vercel-microfrontends-is-now-in-limited-beta Vercel Microfrontends is now in Limited Beta 2025-06-25T13:00:00.000Z

Vercel Microfrontends is now available in Limited Beta for Enterprise teams, enabling you to deploy and manage multiple frontend applications that appear as one cohesive application to users.

This allows you to split large applications into smaller, independently deployable units that each team can build, test, and deploy using their own tech stack, while Vercel handles integration and routing across the platform.

  • Faster development for large apps: Smaller units reduce build times and enable teams to move independently

  • Independent team workflows: Each team manages its own deployment pipeline and framework

  • Incremental migration: Modernize legacy systems piece by piece without slow, large-scale rewrites

Learn more about Vercel Microfrontends. Reach out to your account representative or contact sales to join the limited beta.

Read more

Mark Knichel Kit Foster Eric Spishak-Thomas
https://vercel.com/changelog/rolling-releases-are-now-generally-available Rolling Releases are now generally available 2025-06-25T13:00:00.000Z

Rolling Releases are now generally available, allowing safe, incremental rollouts of new deployments with built-in monitoring, rollout controls, and no custom routing required.

Each rollout starts at a defined stage and can either progress automatically or be manually promoted to a full release. You can configure rollout stages per project and decide how each stage progresses, with updates propagating globally in under 300ms through our fast propagation pipeline.

Rolling releases also include:

  • Real-time monitoring: Track and compare error rates and Speed Insights (like Core Web Vitals, Time to First Byte, and more) between versions

  • Flexible controls: Rollouts can be managed via REST API, CLI, the project dashboard, or the Vercel Terraform provider

  • Version-labeled logs: Logs and telemetry are labeled by deployment for easier debugging

Pro and Enterprise teams can enable Rolling Releases on one project at no additional cost. Enterprise customers can upgrade to unlimited projects.

Learn more about Rolling Releases or enable it on your project.

Read more

Brooke Mosby Jay Gengelbach Mariano Cocirio Cody Brouwers Dimitri Mitropoulos Mitul Shah
https://vercel.com/changelog/higher-defaults-and-limits-for-vercel-functions-running-fluid-compute Higher defaults and limits for Vercel Functions running Fluid compute 2025-06-25T13:00:00.000Z

The default limits for Vercel Functions using Fluid compute have increased, with longer execution times, more memory, and more CPU.

The default execution time, for all projects on all plans, is now 300 seconds (5 minutes):

Plan       | Default               | Maximum
-----------|-----------------------|----------------------
Hobby      | 300s (previously 60s) | 300s (previously 60s)
Pro        | 300s (previously 90s) | 800s
Enterprise | 300s (previously 90s) | 800s

Memory and CPU instance sizes have also been updated:

  • Standard (default) is now 1 vCPU / 2 GB (previously 1 vCPU / 1.7 GB)

  • Performance is now 2 vCPU / 4 GB (previously 1.7 vCPU / 3 GB)

These increased instances are enabled by Active CPU pricing, which charges based on actual compute time. Periods of memory-only usage are billed at a significantly lower rate, making longer executions more cost-efficient.

You can view logs to determine if your functions are hitting execution limits and adjust the max duration or upgrade your plan as needed.
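
For example, a Pro or Enterprise project can raise the maximum per function in vercel.json (the glob pattern here is illustrative):

```json
{
  "functions": {
    "api/**/*.ts": {
      "maxDuration": 800
    }
  }
}
```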

Learn more about Vercel Function limits.

Read more

Tom Lienard Mariano Cocirio Doug Parsons Florentin Eckl Balazs Varga
https://vercel.com/changelog/edge-middleware-and-edge-functions-are-now-powered-by-vercel-functions Edge Middleware and Edge Functions are now powered by Vercel Functions 2025-06-25T13:00:00.000Z

Functions using the Edge runtime now run on the unified Vercel Functions infrastructure.

This applies to both before and after the cache:

  • Edge Middleware is now Vercel Routing Middleware, a new infrastructure primitive that runs full Vercel Functions with Fluid compute before the cache

  • Edge Functions are now Vercel Functions using the Edge Runtime after the cache

With these changes, all functions including those running the Edge runtime are:

  • Fluid compute-ready: Runs on Fluid compute for better performance and cost efficiency

  • Multi-runtime: Supports Node.js and Edge runtimes

  • Framework-driven: Deployed automatically from supported framework code

  • Consistent pricing: Uses unified Vercel Functions pricing based on Active CPU time across all compute types

Vercel Routing Middleware is now generally available to all users.

Learn more about Routing Middleware.

Read more

Gal Schlezinger Mariano Cocirio Shohei Maeda Kiko Beats Florentin Eckl Tiago Ventura Loureiro Seiya Nuta Tom Lienard Doug Parsons
https://vercel.com/changelog/run-untrusted-code-with-vercel-sandbox Run untrusted code with Vercel Sandbox 2025-06-25T13:00:00.000Z

Vercel Sandbox is a secure cloud resource powered by Fluid compute. It is designed to run untrusted code, such as code generated by AI agents, in isolated and ephemeral environments.

Sandbox is a standalone SDK that can be used from any environment, including non-Vercel platforms. Sandbox workloads run in ephemeral, isolated microVMs via the new Sandbox SDK, supporting execution times up to 45 minutes.

Sandbox uses the Fluid compute model and charges based on Fluid’s new Active CPU time, meaning you only pay for compute when actively using CPU. See Sandbox pricing for included allotments and pricing for Hobby and Pro teams.

Now in Beta and available to customers on all plans. Learn more about Vercel Sandbox.

Read more

Guðmundur Bjarni Ólafsson Laurens Duijvesteijn Javi Velasco Mariano Cocirio Ali Smesseim Fabio Benedetti Andy Waller
https://vercel.com/changelog/lower-pricing-with-active-cpu-pricing-for-fluid-compute Lower pricing with Active CPU pricing for Fluid compute 2025-06-25T13:00:00.000Z

Vercel Functions on Fluid Compute now use Active CPU pricing, which charges for CPU only while it is actively doing work. This eliminates costs during idle time and reduces spend for workloads like LLM inference, long-running AI agents, or any task with idle time.

Active CPU pricing is built on three core metrics:

  • Active CPU: Time your code is actively executing in an instance. Priced at $0.128 per hour

  • Provisioned Memory: Memory allocated to the instance, billed at a lower rate. Priced at $0.0106 per GB-Hour

  • Invocations: One charge per function call

An example of this in action:

A function running Standard machine size at 100% active CPU would now cost ~$0.149 per hour (1 Active CPU hour + 2 GB of provisioned memory). Previously this would have cost $0.31842 per hour (1.7 GB Memory × $0.18).
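
The arithmetic above can be reproduced with a small sketch (invocation charges omitted):

```typescript
// Rates from this post: Active CPU $0.128/hour, provisioned memory $0.0106/GB-hour
const ACTIVE_CPU_PER_HOUR = 0.128;
const MEMORY_PER_GB_HOUR = 0.0106;

// Hourly cost, given the fraction of the hour the CPU is actively executing
function hourlyCost(activeCpuFraction: number, memoryGb: number): number {
  return activeCpuFraction * ACTIVE_CPU_PER_HOUR + memoryGb * MEMORY_PER_GB_HOUR;
}

// Standard instance (1 vCPU / 2 GB) at 100% active CPU:
// 1 * 0.128 + 2 * 0.0106 = 0.1492, i.e. ~$0.149 per hour
```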

Active CPU pricing is now enabled by default for all Hobby, Pro, and new Enterprise teams. For existing Enterprise customers, availability depends on your current plan configuration.

This change takes effect after a redeploy.

Learn more about Fluid compute with Active CPU pricing and read the announcement.

Read more

Mariano Cocirio Harpreet Arora Tom Lienard Doug Parsons Balazs Varga Florentin Eckl
https://vercel.com/changelog/ai-gateway-is-now-in-beta AI Gateway is now in Beta 2025-06-25T13:00:00.000Z

AI Gateway gives you a single endpoint to access a wide range of AI models across providers, with better uptime, faster responses, and no lock-in.

Now in Beta, developers can use models from providers like OpenAI, xAI, Anthropic, Google, and more with:

  • Usage-based billing at provider list prices

  • Bring-Your-Own-Key support

  • Improved observability, including per-model usage, latency, and error metrics

  • Simplified authentication

  • Fallback and provider routing for more reliable inference

  • Higher throughput and rate limits

Try AI Gateway for free or check out the documentation to learn more.

Read more

Walter Korman Harpreet Arora Pranathi Peri Jeremy Philemon Logan Liffick
https://vercel.com/blog/wpp-and-vercel-bringing-ai-to-the-creative-process WPP and Vercel: Bringing AI to the creative process 2025-06-24T13:00:00.000Z

Today, we're announcing an expansion of our partnership with WPP: a first-of-its-kind agency collaboration that brings v0 and the AI SDK directly to WPP's global network of creative teams and their clients.

Read more

Jen Chang
https://vercel.com/changelog/manually-purge-the-cdn-cache Manually purge the CDN cache 2025-06-24T13:00:00.000Z

Users with the Member role can now purge Vercel’s CDN cache manually, either via the project's cache settings dashboard or by running vercel cache purge --type=cdn in CLI version 44.2.0 or later.

By default, the CDN cache is purged automatically with each new deployment. For cases where you want to refresh cached content instantly (without waiting for a new build), you can now manually purge the global CDN cache in milliseconds.

This is especially useful for persistent cache scenarios, like Image Optimization, where paths are cached across deployments. If upstream images have changed, you can now force a refresh instantly.

Learn more in the documentation.

Read more

Steven Salat Agustin Falco
https://vercel.com/changelog/vercel-blob-cli-is-now-available Vercel Blob CLI is now available 2025-06-24T13:00:00.000Z

The Vercel CLI (version 43.3.0) now includes Blob commands, allowing you to manage your Vercel Blob stores and files directly from the terminal.

Learn more about the Vercel Blob CLI and Vercel Blob.

Read more

Luis Meyer
https://vercel.com/blog/keith-messick-joins-vercel-as-cmo Keith Messick joins Vercel as CMO 2025-06-23T13:00:00.000Z

Vercel is evolving to meet the expanding potential of AI while staying grounded in the principles that brought us here. We're extending from frontend to full stack, deepening our enterprise capabilities, and powering the next generation of AI applications, including integrating AI into our own developer tools.

Today, we’re welcoming Keith Messick as our first Chief Marketing Officer to support this growth and (as always) amplify the voice of the developer.

Read more

Jeanne Grosser
https://vercel.com/changelog/dashboard-universal-search Find teams, projects, and pages in the Vercel dashboard with universal search 2025-06-23T13:00:00.000Z

There is now a search feature in the top right corner of every page in the vercel.com dashboard.

This search allows you to instantly find:

  • Teams

  • Projects

  • Deployments (by branch)

  • Pages

  • Settings

For more complex queries you can also ask the Navigation Assistant. This AI-powered feature can locate any page in the dashboard and apply filters based on your question.

Learn more about Find in the documentation.

Read more

wits Timo Lins Christopher Skillicorn Andrew Gadzik
https://vercel.com/changelog/turso-cloud-joins-the-vercel-marketplace Turso Cloud joins the Vercel Marketplace 2025-06-20T13:00:00.000Z

Turso now offers a native integration with Vercel, available as Database & Storage provider in the Marketplace.

The Turso integration brings fast, distributed SQLite databases to your Vercel projects with:

  • Seamless integration with Vercel, including one-click setup and unified billing

  • Unlimited SQLite databases in the cloud for production workloads, with serverless access or sync

  • A developer-friendly experience, configurable through Vercel CLI workflows

Get started with Turso on the Vercel Marketplace.

Read more

Hedi Zandi Justin Kropp
https://vercel.com/changelog/2fa-team-enforcement Two-factor authentication (2FA) team enforcement 2025-06-19T13:00:00.000Z

Teams can now require all members to enable two-factor authentication (2FA) for added security.

Team owners can enable enforcement in the Security & Privacy section of team settings.

Owner controls

Member restrictions

Once enforcement is enabled, members without 2FA will be restricted from:

  • Triggering builds from pull requests

  • Accessing new preview deployments

  • Viewing the team dashboard

  • Making API requests

  • Using access tokens

Enforcement lock-in & visibility

  • Members of a team with 2FA enforcement cannot disable 2FA unless they leave the team

  • In each user’s account settings, teams that require 2FA are now listed for clarity

Enable 2FA enforcement today, and learn more in our docs.

Read more

Enric Pallerols Bel Curcio Christopher Skillicorn Meg Bird
https://vercel.com/changelog/create-and-share-queries-with-notebooks-in-vercel-observability Create and share queries with notebooks in Vercel Observability 2025-06-19T13:00:00.000Z

Observability Plus users can now create a collection of queries in notebooks to collaboratively explore their observability data.

Queries in Vercel Observability allow you to explore log data and visualize traffic, performance, and other key metrics, and can now be saved to notebooks.

By default, notebooks are only visible to the user who created the notebook, but you have the option to share a notebook with all members of your team.

This is available to Observability Plus subscribers at no additional cost.

Try it out or learn more about Observability and Observability Plus.

Read more

Julia Shi Damien Simonin Feugas Tobias Lins Timo Lins Malavika Tadeusz
https://vercel.com/blog/tray-ai-cut-build-times-from-a-day-to-minutes-with-vercel Tray.ai cut build times from a day to minutes with Vercel 2025-06-16T13:00:00.000Z

Tray.ai is a composable AI integration and automation platform that enterprises use to build smart, secure AI agents at scale.

To modernize their marketing site, they partnered with Roboto Studio to migrate off their legacy solution and outdated version of Next.js. The goal: simplify the architecture, consolidate siloed repos, and bring content and form management into one unified system.

After moving to Vercel, builds went from a full day to just two minutes.

Read more

Alli Pope
https://vercel.com/changelog/introducing-the-dubai-vercel-region-dxb1 Introducing the Dubai Vercel region (dxb1) 2025-06-16T13:00:00.000Z

Dubai (dxb1) is now part of Vercel’s delivery network, extending our global CDN's caching and compute to reduce latency for users in the Middle East, Africa, and Central Asia without requiring any changes.

The new Dubai region serves as the first stop for end-users based on proximity and network conditions. It's generally available and serving billions of requests.

Teams can configure Dubai as an execution region for Vercel Functions, which supports Fluid compute to increase resource and cost efficiency, minimize cold starts, and scale dynamically with demand.

Learn more about Vercel Regions and Dubai's regional pricing.

Read more

Matheus Fernandes
https://vercel.com/blog/building-efficient-mcp-servers Building efficient MCP servers 2025-06-12T13:00:00.000Z

The Model Context Protocol (MCP) standardizes how to build integrations for AI models. We built the MCP adapter to help developers create their own MCP servers using popular frameworks such as Next.js, Nuxt, and SvelteKit. Production apps like Zapier, Composio, Vapi, and Solana use the MCP adapter to deploy their own MCP servers on Vercel, and they've seen substantial growth in the past month.

MCP has been adopted by popular clients like Cursor, Claude, and Windsurf. These now support connecting to MCP servers and calling tools. Companies create their own MCP servers to make their tools available in the ecosystem.

The growing adoption of MCP shows its importance, but scaling MCP servers reveals limitations in the original design. Let's look at how the MCP specification has evolved, and how the MCP adapter can help.

Read more

Andrew Qu
https://vercel.com/changelog/improved-team-overview-page Improved team overview page 2025-06-12T13:00:00.000Z

We've improved the team overview in the Vercel dashboard:

  • Activity is now sorted by your activity only

  • Projects can be filtered by git repository

  • Usage for the team is now shown as a card on the overview directly

To learn more about the Vercel dashboard, visit the documentation.

Read more

George Karagkiaouris Christopher Skillicorn Sam Saliba
https://vercel.com/changelog/improved-unhandled-node-js-errors-in-fluid-compute Improved unhandled Node.js errors in Fluid compute 2025-06-12T13:00:00.000Z

Fluid compute now gracefully handles uncaught exceptions and unhandled rejections in Node.js by logging the error, allowing inflight requests to complete, and then exiting the process.

This prevents concurrent requests running on the same Fluid instance from being inadvertently terminated by unhandled errors, providing the isolation of traditional serverless invocations.

Enable Fluid for your existing projects, and learn more in our blog and documentation.

Read more

Tom Lienard
https://vercel.com/blog/designing-and-building-the-vercel-ship-conference-platform Designing and building the Vercel Ship conference platform 2025-06-11T13:00:00.000Z

Our two conferences (Vercel Ship and Next.js Conf) are our chance to show what we've been building, how we're thinking, and cast a vision of where we're going next.

It's also a chance to push ourselves to create an experience that builds excitement and reflects the quality we strive for in our products. For Vercel Ship 2025, we wanted that experience to feel fluid and fast.

This is a look at how we made the conference platform and visuals, from ferrofluid-inspired 3D visuals and generative AI workflows, to modular component systems and more.

Read more

Genny Dee Daniel Linthwaite Yav Punchev James Clements
https://vercel.com/blog/how-were-adapting-seo-for-llms-and-ai-search How we’re adapting SEO for LLMs and AI search 2025-06-10T13:00:00.000Z

Search is changing. Backlinks and keywords aren’t enough anymore. AI-first interfaces like ChatGPT and Google’s AI Overviews now answer questions before users ever click a link (if at all). Large language models (LLMs) have become a new layer in the discovery process, reshaping how, where, and when content is seen.

This shift is changing how visibility works. It’s still early, and nobody has all the answers. But one pattern we're noticing is that LLMs tend to favor content that explains things clearly, deeply, and with structure.

"LLM SEO" isn’t a replacement for traditional search engine optimization (SEO). It’s an adaptation. For marketers, content strategists, and product teams, this shift brings both risk and opportunity. How do you show up when AI controls the first impression, but not lose sight of traditional ranking strategies?

Here’s what we’ve noticed, what we’re trying, and how we’re adapting.

Read more

Kevin Corbett Malte Ubl
https://vercel.com/changelog/filter-runtime-logs-for-fatal-function-errors Filter runtime logs for fatal function errors 2025-06-10T13:00:00.000Z

You can now filter runtime logs to view fatal function errors, such as Node.js crashes, using the Fatal option in the levels filter.

When a log entry corresponds to a fatal error, the right-hand panel will display Invocation Failed in the invocation details.

Try it out or learn more about runtime logs

Read more

Darpan Kakadia Timo Lins
https://vercel.com/blog/building-secure-ai-agents Building secure AI agents 2025-06-09T13:00:00.000Z

An AI agent is a language model with a system prompt and a set of tools. Tools extend the model's capabilities by adding access to APIs, file systems, and external services. But they also create new paths for things to go wrong.

The most critical security risk is prompt injection. Similar to SQL injection, it allows attackers to slip commands into what looks like normal input. The difference is that with LLMs, there is no standard way to isolate or escape input. Anything the model sees, including user input, search results, or retrieved documents, can override the system prompt or even trigger tool calls.

If you are building an agent, you must design for worst case scenarios. The model will see everything an attacker can control. And it might do exactly what they want.

Read more

Malte Ubl
https://vercel.com/changelog/models-api-v0-1.5-beta v0-1.5-md & v0-1.5-lg now in beta on the Models API 2025-06-09T13:00:00.000Z

Beta access is now available for v0-1.5-md (128K token context) and v0-1.5-lg (512K token context) on our Models API.

For full details and examples, see the Models API docs: https://vercel.com/docs/v0/api

Read more

Chris Tate Aryaman Khandelwal
https://vercel.com/changelog/observability-added-to-ai-gateway-alpha Observability added to AI Gateway alpha 2025-06-09T13:00:00.000Z

The AI Gateway, currently in alpha for all users, lets you switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts.

Vercel Observability now includes a dedicated AI section to surface metrics related to the AI Gateway. This update introduces visibility into:

  • Requests by model

  • Time to first token (TTFT)

  • Request duration

  • Input/output token count

  • Cost per request (free while in alpha)

You can view these metrics across all projects or drill into per-project and per-model usage to understand which models are performing well, how they compare on latency, and what each request would cost in production.

Learn more about Observability.

Read more

Julia Shi Walter Korman Nico Albanese Ethan Shea Pranathi Peri Harpreet Arora
https://vercel.com/changelog/claude-code-and-cursor-agent-no-longer-require-a-team-seat Claude Code and Cursor Agent no longer require a team seat 2025-06-06T13:00:00.000Z

We've updated our build logic to ensure Git commits authored by Claude Code or Cursor Agent can trigger deployments on Vercel. A team seat is not required.

If your agent encounters any issues building on Vercel, please contact us.

Read more

Anthony Shew
https://vercel.com/changelog/bot-protection-is-now-generally-available Bot Protection is now generally available 2025-06-05T13:00:00.000Z

Vercel Web Application Firewall's Bot Protection managed ruleset is now generally available for all users, at no additional cost.

Bot Protection helps reduce automated traffic from non-browser sources and allows you to respond based on two action choices:

  • Log Only Action: Logs identified bot traffic in the Firewall tab without blocking requests

  • Challenge Action: Serves a browser challenge to traffic from non-browser sources. Verified bots are automatically excluded
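The two actions can be sketched as a simple decision function (a conceptual illustration, not Vercel's actual classification engine):

```typescript
type BotAction = "log-only" | "challenge";
type Verdict = "allowed" | "logged" | "challenged";

// Conceptual sketch of how the two modes treat a request classified as
// non-browser traffic. Browser traffic and verified bots pass through.
function applyBotProtection(opts: {
  looksLikeBrowser: boolean;
  isVerifiedBot: boolean;
  action: BotAction;
}): Verdict {
  if (opts.looksLikeBrowser || opts.isVerifiedBot) return "allowed";
  return opts.action === "log-only" ? "logged" : "challenged";
}
```

Log Only is useful for observing what would be challenged before switching the ruleset to Challenge.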

During the beta period, Bot Protection challenged over 650 million potential non-browser requests.

Bot Protection complements Vercel's existing mitigations, which already block common threats like DDoS attacks, low quality traffic, and spoofed traffic. It adds an extra layer of protection for any automated traffic that is not clearly malicious.

Learn more about the Bot Protection managed ruleset and the Vercel Firewall. If you'd like your bot to be verified as well, head over to bots.fyi.

Read more

Sage Abraham Casey Gowrie Yanick Bélanger Joe Haddad Dany Volk Adrien Thebo Malavika Tadeusz
https://vercel.com/changelog/pre-generate-domain-ssl-certs-now-in-dashboard Pre-generate SSL certs, now in the Domains dashboard 2025-06-05T13:00:00.000Z

You can now pre-generate SSL certificates directly from the Vercel Domains dashboard, enabling zero-downtime domain migrations without using the CLI.

After adding an existing domain to your project, select Pre-Generate Certificate to issue certificates before updating DNS records and initiating the remainder of your domain migration.

You can still import a zone file or use Domain Connect to migrate DNS records from your previous provider.

Try it out or learn more in the docs.

Read more

Ryan Haraki
https://vercel.com/blog/the-no-nonsense-approach-to-ai-agent-development The no-nonsense approach to AI agent development 2025-06-04T13:00:00.000Z

AI agents are software systems that take over tasks made up of manual, multi-step processes. These often require context, judgment, and adaptation, making them difficult to automate with simple rule-based code.

While traditional automation is possible, it usually means hardcoding endless edge cases. Agents offer a more flexible approach. They use context to decide what to do next, reducing manual effort on tedious steps while keeping a review process in place for important decisions.

The most effective AI agents are narrow, tightly scoped, and domain-specific.

Here's how to approach building one.

Read more

Malte Ubl
https://vercel.com/changelog/new-firewall-challenge-metrics-now-available New firewall challenge metrics now available 2025-06-03T13:00:00.000Z

You can now monitor and query Vercel Firewall challenge outcomes using two new metrics:

  • challenge-solved – Visitor solved the challenge and was granted access (indicates a real user)

  • challenge-failed – Visitor submitted an invalid challenge solution (the request was blocked)

These metrics help evaluate rule effectiveness and reduce friction: a high challenge-solved rate can indicate that real users are being challenged unnecessarily.
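A sketch of how the two counters combine into a success-rate signal (a hypothetical helper for illustration, not a Vercel API):

```typescript
// Derive a challenge success rate from the two new metrics. A high rate
// suggests many challenged visitors are real users, so the triggering rule
// may be too broad.
function challengeSuccessRate(solved: number, failed: number): number {
  const total = solved + failed;
  return total === 0 ? 0 : solved / total;
}

console.log(challengeSuccessRate(980, 20)); // 0.98
```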

Now available in the Firewall dashboard and in the Observability Plus query builder, with no additional setup required.

Learn more about custom rules and managed rulesets.

Read more

Adrien Thebo
https://vercel.com/blog/v0-composite-model-family Introducing the v0 composite model family 2025-06-01T13:00:00.000Z

We recently launched our AI models v0-1.5-md, v0-1.5-lg, and v0-1.0-md in v0. Today, we're sharing a deep dive into the composite model architecture behind those models. They combine specialized knowledge from retrieval-augmented generation (RAG), reasoning from state-of-the-art large language models (LLMs), and error fixing from a custom streaming post-processing model.

While this may sound complex, it enables v0 to achieve significantly higher quality when generating code. Further, as base models improve, we can quickly upgrade to the latest frontier model while keeping the rest of the architecture stable.

Read more

Aryaman Khandelwal Gaspar Garcia Ido Pesok Max Leiter
https://vercel.com/blog/fluid-compute-evolving-serverless-for-ai-workloads Fluid compute: Evolving serverless for AI workloads 2025-05-30T13:00:00.000Z

AI’s rapid evolution is reshaping the tech industry and app development. Traditional serverless computing was designed for quick, stateless web app transactions. LLM interactions demand a different pattern: sustained compute and continuous execution.

Read more

Collier Kirkland
https://vercel.com/changelog/fluid-compute-now-supports-isr-background-and-on-demand-revalidation Fluid compute now supports ISR background and on-demand revalidation 2025-05-30T13:00:00.000Z

Fluid compute now supports both background and on-demand Incremental Static Regeneration (ISR) across all Vercel projects.

This means ISR functions now benefit from Fluid's performance and concurrency efficiency with no config changes needed. If you’ve redeployed recently, you’re already using it.

Fluid compute reuses existing resources before creating new ones, reducing costs by up to 85% for high-concurrency workloads. It delivers server-like efficiency and serverless flexibility with:

  • Optimized concurrency

  • Scale from zero to infinity

  • Minimal cold starts

  • Usage-based pricing

  • Full Node.js and Python support

  • No infrastructure management

  • Background tasks with waitUntil
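The waitUntil pattern in the last bullet can be sketched conceptually. This is a simplified model, not Vercel's implementation (on Vercel, waitUntil is imported from @vercel/functions): the handler returns a response immediately, while registered promises keep the function instance alive until they settle.

```typescript
// Minimal model of waitUntil semantics: background work is registered,
// the response returns right away, and the platform drains pending work
// before shutting the instance down.
const pending: Promise<unknown>[] = [];
function waitUntil(task: Promise<unknown>): void {
  pending.push(task);
}

async function handler(): Promise<string> {
  waitUntil(
    new Promise<void>((resolve) => setTimeout(resolve, 10)) // e.g. log shipping
  );
  return "response sent"; // returned before the background task finishes
}

async function main() {
  const res = await handler();
  await Promise.allSettled(pending); // platform drains background work here
  console.log(res); // prints "response sent"
}
main();
```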

Enable Fluid for your existing projects, and learn more in our blog and documentation.

Read more

Tom Lienard
https://vercel.com/changelog/faster-login-flow-and-new-google-sign-in-support Faster login flow and new Google Sign-in support 2025-05-30T13:00:00.000Z

The login experience has been redesigned for faster access and now includes full support for Google Sign-in, including Google One Tap.

If your existing Vercel account's email matches your Google email, you can use the Google button from the login screen and your accounts will be automatically linked.

If the emails don’t match, you can manually connect your Google account from your account settings once logged in.

Read more

Javier Bórquez Bel Curcio Kit Foster George Karagkiaouris
https://vercel.com/changelog/ai-query-prompting-now-available-in-observability-plus AI query prompting now available in Observability Plus 2025-05-28T13:00:00.000Z

Observability Plus users can now use natural language to create new queries or modify existing ones by adding filters, changing time ranges, or grouping results.

Queries allow customers to explore log data and visualize traffic, performance, and other key metrics.

AI prompts generate queries in the standard format, and are represented in the URL so they can be shared and bookmarked.

Example prompts include:

  • Show all 500 errors in the last 24 hours

  • Show me the top bandwidth for incoming requests

  • Show me the top hostnames grouped by country

  • All requests challenged by DDoS mitigations by user agent

  • Find all requests with the keyword "timeout" grouped by path

This is available to all Observability Plus users at no additional cost.

View the dashboard or learn more about Observability and Observability Plus.

Read more

Julia Shi Timo Lins Ethan Shea
https://vercel.com/changelog/cve-2025-48068 CVE-2025-48068 2025-05-28T13:00:00.000Z

A low-severity vulnerability in the Next.js dev server has been addressed.

Summary

This vulnerability affects Next.js versions 13.0.0 through 14.2.29 and 15.0.0 through 15.2.1. It includes two related issues affecting the local development server: Cross-Site WebSocket Hijacking (CSWSH) and Cross-Origin Script Inclusion. Both stem from the lack of origin validation on development server resources.

Impact

When running next dev, a malicious website can:

  • Initiate a WebSocket connection to localhost and interact with the local development server if the project uses the App Router, potentially exposing internal component code.

  • Inject a <script> tag referencing predictable paths for development scripts (e.g., /app/page.js), which are then executed in the attacker's origin. This can allow extraction of source code.

The root cause is insufficient origin verification on local development server resources, including the WebSocket server and static script endpoints. This issue is similar to CVE-2018-14732, though scoped strictly to local development use.

Resolution

This issue was fixed in Next.js versions 14.2.30 and 15.2.2. These releases introduce a configuration option to enable origin checks, which help prevent unauthorized cross-origin requests to the local development server. You can learn how to enable this option after upgrading to a patched version by visiting our documentation page. Note that this configuration is currently opt-in and will become the default in a future major release.
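As a sketch of the opt-in configuration (the exact option name and shape may vary by Next.js version; confirm against the documentation page referenced above), restricting which origins may reach the dev server looks roughly like:

```typescript
// next.config.ts — illustrative only; verify the option against the Next.js
// docs for your version. allowedDevOrigins restricts which origins may load
// dev assets or open WebSocket connections to the local development server.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  allowedDevOrigins: ["http://localhost:3000"],
};

export default nextConfig;
```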

Workarounds

  • Avoid browsing untrusted websites while running the local development server

  • Implement local firewall or proxy rules to block unauthorized access to the development server

For Vercel Customers

This CVE affects local development only; no mitigations are required for applications in production on Vercel.

Credit

Thanks to sapphi-red and Radman Siddiki for responsibly disclosing this issue.

References

Read more

Aaron Brown
https://vercel.com/blog/vercel-security-roundup-improved-bot-defenses-dos-mitigations-and-insights Vercel security roundup: improved bot defenses, DoS mitigations, and insights 2025-05-23T13:00:00.000Z

Since February, Vercel blocked 148 billion malicious requests from 108 million unique IP addresses. Every deployment automatically inherits these protections, keeping your workloads secure by default and enabling your team to focus on shipping rather than incidents. Our real-time DDoS filtering, managed Web Application Firewall (WAF), and enhanced visibility ensure consistent, proactive security.

Here's what's new since February.

Read more

Liz Hurder Kevin Corbett
https://vercel.com/changelog/middleware-insights-now-available-in-vercel-observability Middleware insights now available in Vercel Observability 2025-05-23T13:00:00.000Z

The Vercel Observability dashboard now includes a dedicated view for middleware, showing invocation counts and performance metrics.

Observability Plus users get additional insights and tooling:

  • Analyze invocations by request path, matched against your middleware config

  • Break down middleware actions by type (e.g., redirect, rewrite)

  • View rewrite targets and frequency

  • Query middleware invocations using the query builder

View the dashboard or learn more about Observability and Observability Plus.

Read more

Tobias Lins Gal Schlezinger
https://vercel.com/changelog/rate-limiting-now-available-on-hobby-with-higher-included-usage-on-pro Rate limiting now available on Hobby, with higher included usage on Pro 2025-05-23T13:00:00.000Z

Rate limiting now has higher included usage and broader availability to help protect your applications from abuse and manage traffic effectively.

The first 1,000,000 allowed rate limit requests per month are now included. Hobby teams also get 1 free rate limit rule per project, up to the same included allotment.

These changes are now effective and have been automatically applied to your account.

Learn more about configuring rate limits or create a new rate limiting rule now.

Read more

Dany Volk Casey Gowrie
https://vercel.com/changelog/faster-cdn-proxying-to-external-origins Faster CDN proxying to external origins 2025-05-23T13:00:00.000Z

We’ve optimized connection pooling in our CDN to reduce latency when connecting to external backends, regardless of traffic volume.

  • Lower latency: Improved connection reuse and TLS session resumption reduce response times by up to 60% in some regions, with a 15–30% average improvement.

  • Reduced origin load: 97% connection reuse and more efficient TLS resumption significantly cut the number of new handshakes required.

This is now live across all Vercel deployments at no additional cost.

Read more

Casey Gowrie Joe Haddad
https://vercel.com/changelog/external-api-caching-insights-now-in-observability External API caching insights now in Observability 2025-05-22T13:00:00.000Z

The Observability dashboard now surfaces caching behavior for external API calls using Vercel Data Cache.

On the External APIs page, you’ll see a new column indicating how many requests were served from the cache versus the origin.

Caching insights are available per hostname for all users, and per path for Observability Plus subscribers.

View the external API dashboard or learn more about Vercel Data Cache.

Read more

Tobias Lins Timo Lins Ethan Shea
https://vercel.com/blog/vapi-mcp-server-on-vercel How Vapi built their MCP server on Vercel 2025-05-21T13:00:00.000Z

Vercel recently published a Model Context Protocol (MCP) adapter that makes it easy to spin up an MCP server on most major frameworks.

Vapi offers an API for building real-time voice agents. They handle orchestration, scaling, and telephony behind a completely model-agnostic and interchangeable interface.

Vapi rebuilt their MCP server on Vercel, letting users create agents, automate testing, analyze transcripts, build workflows, and give agents access to all of Vapi’s endpoints.

Read more

Elizabeth Trykin Andrew Qu
https://vercel.com/blog/vercel-blob-now-generally-available Vercel Blob is now generally available: Cost-efficient, durable storage 2025-05-21T13:00:00.000Z

Storage should be simple to set up, globally available, and built to last, without slowing you down or adding complexity. It should feel native to your app.

That's why we built Vercel Blob: Amazon S3-backed storage that's deeply integrated with Vercel's global application delivery and automated caching, with predictable pricing to serve public assets cost-efficiently at scale.

Vercel Blob is now generally available. It's already storing and serving over 400 million files, and powers production apps like v0 and the Vercel Dashboard.

Read more

Vincent Voyer Luis Meyer Dan Fein
https://vercel.com/blog/ai-gateway Introducing the AI Gateway

Note: This blog is outdated; please reference this page for the latest information or read the docs here. The Vercel AI Gateway is now available for alpha testing.

Built on the AI SDK 5 alpha, the Gateway lets you switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts. The Gateway handles authentication, usage tracking, and in the future, billing.

Get started with AI SDK 5 and the Gateway, or continue reading to learn more.

Why we're building the AI Gateway

AI development is fast, and it's only getting faster.

There's a new state-of-the-art model released almost every week. Frustratingly, this means developers have been locked into a specific provider or model API in their application code. We want to help developers ship fast and keep up with AI progress, without needing 10 different API keys and provider accounts.

Production AI applications often run into capacity issues or rate limiting due to high demand. Infrastructure providers move quickly to bring models online and keep up with this demand, but this can come at the expense of performance or availability.

The AI Gateway will allow you to load balance across providers, or fail over if a provider has downtime or degradation in performance. Model inference costs keep dropping and providers are competing on quality, performance, and price. The Gateway helps you quickly take advantage of these cost savings.
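The failover behavior described above can be sketched as a simple loop. This is a conceptual illustration, not the Gateway's actual implementation (the Gateway does this server-side so your app code targets a single endpoint):

```typescript
// Conceptual failover: try each provider in order; if one is rate limited
// or down, fall through to the next. The last error surfaces only when
// every provider has failed.
type Provider = (prompt: string) => Promise<string>;

async function withFailover(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // e.g. rate limit or downtime: try the next provider
    }
  }
  throw lastError;
}
```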

We're taking what we've learned scaling v0 to millions of users, by quickly load balancing and switching between a mixture of providers, and turning that infrastructure into the AI Gateway.

Integration with the AI SDK

We built the AI SDK to create a common abstraction for AI model APIs across modalities like text, images, and audio.

The AI SDK is free and open source, and works with any model or infrastructure provider. The AI Gateway is a separate Vercel product built on top of the AI SDK.

We're building these products with high cohesion, but loose coupling. The Gateway will take full advantage of AI SDK features like tool calling, function arguments, streaming, retries, attachments, and structured outputs.

Pricing

During the AI Gateway alpha, usage is free with rate limits based on your Vercel plan tier. These rate limits are similar to the current AI SDK Playground.

We plan to support pay-as-you-go pricing when the Gateway reaches general availability. Model pricing will follow the provider's market rates, updated regularly. We are also planning to explore bring-your-own-key in the future.

What's coming next

  • Load balancing and model failover

  • Pay-as-you-go billing

  • Bring-your-own-key support

  • Unified logging, usage tracking, and observability

  • OpenAI-compatible API

Start exploring AI Gateway

We're shipping this in alpha to get your input and early feedback. Tag us on X to share your work and tell us what you want to see from the AI Gateway.

For more information, get started with our demo applications:

For model support and more usage examples, visit ai-sdk.dev/model-library.

Read more

Walter Korman Lars Grammel
https://vercel.com/changelog/vercel-blob-is-now-generally-available Vercel Blob is now generally available 2025-05-21T13:00:00.000Z

Vercel Blob is now generally available, bringing high-performance, globally scalable object storage into your workflows and apps.

Blob storage’s underlying S3 infrastructure ensures 99.999999999% durability, and already stores over 400 million files while powering production apps like v0.dev.

Pricing is usage-based:

  • Storage: $0.023 per GB per month

  • Simple API operations (e.g. Reads): $0.40 per million

  • Advanced operations (e.g. Uploads): $5.00 per million

  • Blob Data Transfer: starting at $0.050 per GB

Pricing applies to:

  • New Blob stores starting today

  • Existing stores starting June 16, 2025

Hobby users now get increased free usage: 1 GB of storage and 10 GB of Blob Data Transfer per month.

Get started with Vercel Blob and learn more in the documentation.

Read more

Vincent Voyer Luis Meyer Harpreet Arora Agustin Falco
https://vercel.com/changelog/vercel-blob-insights-now-available-in-observability Vercel Blob insights now available in Observability 2025-05-19T13:00:00.000Z

The Observability dashboard now includes a dedicated tab for Vercel Blob, which provides visibility into how Blob stores are used across your applications.

At the team level, you can see total data transfer, download volume, cache activity, and API operations. You can also drill into activity by user agent, edge region, and client IP.

This allows you to understand usage patterns, identify inefficiencies, and optimize how your application stores and serves assets.

Try it out or learn more about Vercel Blob.

Read more

Luis Meyer Vincent Voyer Ethan Shea
https://vercel.com/changelog/hypertune-joins-the-vercel-marketplace Hypertune joins the Vercel Marketplace 2025-05-19T13:00:00.000Z

Hypertune now offers a native integration with Vercel Marketplace.

You can find it as a Flags & Experimentation provider in the Flags tab.

The Hypertune integration offers:

Install and access on Vercel with one-click setup and unified billing.

Deploy the Hypertune template built for Vercel Marketplace today.

Read more

Hedi Zandi Aaron Morris
https://vercel.com/blog/how-fern-delivers-6m-monthly-views-and-80-faster-docs-with-vercel How Fern delivers 6M+ monthly views and 80% faster docs with Vercel 2025-05-15T13:00:00.000Z

Fern is improving how teams build and host documentation. As a multi-tenant platform, Fern enables companies like Webflow and ElevenLabs to create, customize, and serve API documentation from a single Next.js application—scaling seamlessly across multiple customer domains. With 6 million+ page views per month and 1 million+ unique visitors, performance and reliability are key.

By running on Vercel’s infrastructure, Fern benefits from automatic caching, optimized content delivery, and instant scalability, all while maintaining a fast iteration cycle for development. Additionally, their migration to Next.js App Router has driven a 50-80% reduction in page load times, improving navigation speed and Lighthouse scores for customers worldwide.

Read more

Alli Pope
https://vercel.com/changelog/45-percent-faster-build-initialization 45% faster build initialization 2025-05-15T13:00:00.000Z

Builds on Vercel now initialize 45% faster on average, reducing build times by around 15 seconds for Pro and Enterprise teams.

Build initialization includes steps like restoring the build cache and fetching your code before the Build Command runs. These improvements come from continued enhancements to Hive, Vercel’s build infrastructure.

This improvement also reduced I/O wait times for file writes inside the build container by 75%, improving performance for the entire build.

Learn more about builds on Vercel.

Read more

Janos Szathmary Andrew Healey Carlos Galdino Guðmundur Bjarni Ólafsson Marc Codina Segura Gargi Sharma
https://vercel.com/blog/how-consensys-rebuilt-metamask-io-with-vercel-and-next-js How Consensys rebuilt MetaMask.io with Vercel and Next.js 2025-05-14T13:00:00.000Z

Since 2014, Consensys has shaped the web3 movement with tools like Linea, Infura, and MetaMask—the most widely used self-custodial wallet on the web, with millions of users across the globe.

As the blockchain ecosystem quickly matured, the need for a site that could move as fast as the teams building it became clear. To meet that demand, Consensys migrated MetaMask.io to Next.js and Vercel, creating an architecture built for scale, speed, and continuous iteration.

Read more

Alli Pope
https://vercel.com/blog/updated-v0-pricing Updated v0 pricing 2025-05-13T13:00:00.000Z

We’re updating how pricing works in v0. Usage is now metered on input and output tokens which convert to credits, instead of fixed message counts.

This gives you more predictable pricing as you grow and increases the amount of usage available on our free tier.

Existing v0 users will transition to the new pricing at the start of your next billing period. New users will start on the improved pricing today.

Read more

Aryaman Khandelwal
https://vercel.com/changelog/proxied-responses-now-cacheable-via-cdn-cache-control-headers Proxied responses now cacheable via CDN-Cache-Control headers 2025-05-13T13:00:00.000Z

Vercel’s CDN, which can proxy requests to external backends, now caches proxied responses using the CDN-Cache-Control and Vercel-CDN-Cache-Control headers. This aligns caching behavior for external backends with how Vercel Functions are already cached.

This is available starting today, on all plans, at no additional cost.

Per the Targeted HTTP Cache Control spec (RFC 9213), these headers support standard directives like max-age and stale-while-revalidate, enabling fine-grained control over CDN caching without affecting browser caches.

You can return the headers directly from your backend, or define them in vercel.json under the headers key if your backend can't be modified.

No configuration changes or redeployments required. Return the header (or set it in vercel.json) to improve performance, reduce origin load, and ensure fresh content.
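A sketch of a backend response using the targeted cache-control headers from RFC 9213 (the helper here is hypothetical; only the header names and directive syntax come from the spec):

```typescript
// Build a response whose CDN caching differs from browser caching.
// CDN-Cache-Control is honored by the CDN layer; the plain Cache-Control
// header still governs browsers.
function cachedJson(body: unknown): { headers: Record<string, string>; body: string } {
  return {
    headers: {
      // Browsers: always revalidate
      "Cache-Control": "public, max-age=0, must-revalidate",
      // CDN: serve the cached copy for 60s, then revalidate in the background
      "CDN-Cache-Control": "max-age=60, stale-while-revalidate=300",
    },
    body: JSON.stringify(body),
  };
}
```

Use Vercel-CDN-Cache-Control instead when the directives should apply only to Vercel's CDN and not to other caches in front of your app.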

Learn more about CDN-Cache-Control headers.

Read more

Casey Gowrie Joe Haddad
https://vercel.com/changelog/new-one-click-ai-bot-managed-ruleset New one-click AI bot managed ruleset 2025-05-13T13:00:00.000Z

You can now block AI crawlers and scrapers like GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot, Bytespider (ByteDance), and others with a single toggle using the AI bot managed ruleset. Now available for free on all plans.

The ruleset is managed by Vercel and updates automatically as new crawlers appear, with no additional action required. This protection operates with zero latency impact to legitimate traffic.

For more complete coverage, combine with Bot Filter to catch AI bots that attempt to spoof user agents to disguise themselves as legitimate browsers or omit proper identification headers.

AI crawlers now generate more traffic than human users on many popular sites, driving up infrastructure costs and raising copyright and data usage concerns. Many of these crawlers do not respect robots.txt or similar directives, making manual solutions unreliable.

Enable the ruleset or learn more in the documentation.

Read more

Casey Gowrie Sage Abraham Dany Volk Joe Haddad
https://vercel.com/changelog/resources-tab-allows-instant-searching-and-filtering-of-functions-middleware Resources tab allows instant searching and filtering of functions, middleware, and static assets 2025-05-13T13:00:00.000Z

The Resources tab is replacing the Functions tab for deployments in the Vercel Dashboard, expanding visibility beyond Functions. When viewing a deployment, you can now see, search, and filter:

  • Middleware: Any configured matchers

  • Static Assets: Files (HTML, CSS, JS, images, fonts, and more) and their sizes

  • Functions: The type, runtime, size, and regions

You can use the three dot menu (...) to jump to the Logs, Analytics, Speed Insights, or Observability tab filtered to a given function.

Read more about using the Vercel Dashboard to view and manage your deployments.

Read more

wits Christopher Skillicorn
https://vercel.com/blog/spring25-oss-program The spring 2025 cohort of Vercel’s Open Source Program 2025-05-12T13:00:00.000Z

Open source runs the world. The frameworks, libraries, and tools we rely on are strengthened by communities that share ideas, review code, and build in the open.

At Vercel, we want to help those communities thrive. That’s why we launched the Vercel Open Source Program: a developer initiative that gives maintainers the resources, credits, and support they need to ship faster and scale confidently.

Four times a year, we’ll welcome a new cohort of projects into the program. Members receive $3,600 in Vercel credits, perks from partners, and a dedicated Slack space to learn from one another.

Today we are announcing this spring's cohort.

Read more

Kap Sev
https://vercel.com/changelog/new-quick-actions-in-observability New quick actions in Observability 2025-05-09T13:00:00.000Z

You can now quickly copy, filter, or exclude individual results in views and query results.

  • Copy is available across all Observability views

  • Filter and exclude are available for custom query search results

These quick actions help make it easier to explore and refine your Observability queries.

Now available for Observability and Observability Plus customers.

Try it out in Observability.

Read more

Timo Lins
https://vercel.com/changelog/new-usage-dashboard-for-enterprise-users New usage dashboard for Enterprise users 2025-05-08T13:00:00.000Z

Enterprise teams with Managed Infrastructure Unit (MIU) commitments can now access a new usage dashboard with improved filtering, detailed breakdowns, and export options to better understand usage and costs by product and project.

You can now:

  • Break down usage by product to quickly identify usage, drill down into spikes, and track costs of a single product or set of products

  • Break down usage by team and project to understand your costs and monitor team activity across all or specific apps

  • Export CSVs for external analysis via integration into your cost observability tools and spreadsheets

Explore the new dashboard today.

Read more

Christian Pickett Shar Dara Caleb Boyd Chloe Tedder Manuel Muñoz Solera
https://vercel.com/changelog/cdn-origin-timeout-increased-to-two-minutes CDN origin timeout increased to two minutes 2025-05-08T13:00:00.000Z

Vercel’s CDN will now wait up to 120 seconds for your backend to start sending data, up from 30 seconds. This extended proxied request timeout is now available on all plans at no additional cost.

The proxied request timeout defines how long our CDN allows your external backend to respond before canceling the request. After the initial byte is received, your backend can take longer than two minutes to complete the request, as long as it continues sending data at least once every 120 seconds.

This update improves reliability for workloads with long processing times, such as LLM generation or complex data queries, and reduces the chance of 504 gateway timeouts.

This change is effective immediately, with no action or configuration required.

Read more

Casey Gowrie Joe Haddad
https://vercel.com/changelog/up-to-80-pricing-reduction-for-web-analytics Up to 80% pricing reduction for Web Analytics 2025-05-08T13:00:00.000Z

We’ve increased included limits and reduced the price of Web Analytics events and the Web Analytics Plus add-on by up to 80%.

Web Analytics is now billed:

  • Per single event, instead of 100K increments

  • At $0.00003 per event ($3 per 100K, a 79% decrease from $14 per 100K)

  • At $10/month for the Plus add-on (an 80% decrease, from $50/month)

Web Analytics Plus is an optional add-on that unlocks increased retention and UTM parameters.

Included event limits have increased:

  • Hobby: 50K events/month (20x increase, from 2.5K)

  • Pro: 100K events/month (4x increase, from 25K)

Learn more about Web Analytics pricing.

Read more

Damien Simonin Feugas Tobias Lins Caleb Boyd Harpreet Arora Chris Widmaier
https://vercel.com/blog/introducing-the-flags-explorer-first-party-integrations-and-updates Introducing the Flags Explorer, first-party integrations, and updates to the Flags SDK 2025-05-07T13:00:00.000Z

Experimentation, A/B testing, and feature flags serve as essential tools for delivering better user experiences, ensuring smoother rollouts, and empowering teams to iterate quickly with confidence. We're making it easier to bring flags into your workflow with:

Read more

Dominik Ferber Alli Pope
https://vercel.com/changelog/mcp-server-support-on-vercel MCP server support on Vercel 2025-05-07T13:00:00.000Z

Model Context Protocol (MCP) is a way to build integrations for AI models.

Vercel now supports deploying MCP servers (which AI models can connect to) as well as MCP clients (AI chatbot applications which call the servers).

Get started with our Next.js MCP template today.

How is MCP different from APIs?

APIs allow different services to communicate with each other. MCP is slightly different.

Rather than thinking about MCP like a REST API, you can instead think about it like a tailored toolkit that helps an AI achieve a particular task. There may be multiple APIs and other business logic used behind the scenes for a single MCP tool.

If you are already familiar with tool-calling in AI, MCP is a way to invoke tools hosted on a different server.

MCP now supports a protocol similar to other web APIs, namely using HTTP and OAuth. This is an improvement from the previous stateful Server-Sent Events (SSE) protocol.

Deploying MCP servers to Vercel

To simplify building MCP servers on Vercel, we’ve published a new package, @vercel/mcp-adapter, which supports both the older SSE transport and the newer stateless HTTP transport.

The majority of MCP clients currently only support the SSE transport option. To handle state required for the SSE transport, you can integrate a Redis server through any provider in our marketplace like Upstash and Redis Labs.

We’ve already seen customers successfully deploying MCP servers in production. One customer has seen over 90% savings using Fluid compute on Vercel versus traditional serverless. Fluid enables you to have full Node.js or Python compatibility, while having a more cost effective and performant platform for AI inference and agentic workloads.

Get started with MCP

Vercel's AI SDK has built-in support for connecting your Node.js or Next.js apps to MCP servers.

We’re looking forward to future MCP servers built with the HTTP transport and starting to explore the latest developments like OAuth support.

Other Vercel projects like shadcn/ui are exploring ways to integrate MCP. If you have suggestions for MCP server use cases on Vercel, you can share your feedback in our community.

Read more

Andrew Qu Malte Ubl
https://vercel.com/changelog/bot-activity-and-crawler-insights-now-in-observability Bot activity and crawler insights now in Observability 2025-05-07T13:00:00.000Z

Vercel Observability now provides detailed breakdowns for individual bots and bot categories, including AI crawlers and search engines. Users across all plans can view this data in the Observability > Edge Requests dashboard.

Additionally, Observability Plus users can:

  • Filter traffic by bot category, such as AI

  • View metrics for individual bots

  • Break down traffic by bot or category in the query builder

Inspect bot and crawler activity in your Observability dashboard now.

Read more

Tobias Lins
https://vercel.com/changelog/flags-explorer-is-now-generally-available Flags Explorer is now generally available 2025-05-07T13:00:00.000Z

The Flags Explorer lets you override feature flags for your own session, without affecting colleagues, and without signing into your flag provider. This enables you to test features in production before they go live and keeps you in the flow.

This feature is now generally available for all customers. Hobby, Pro, and Enterprise plans include 150 overrides per month, with unlimited overrides available for $250 per month on Pro and Enterprise.

Teams that used Flags Explorer during the beta have 30 days to activate the new unlimited option before the 150 overrides per month limit takes effect. This can be done in the Vercel dashboard or directly through the Vercel Toolbar.

Additionally, the Flags SDK automatically respects overrides set by the Flags Explorer, no matter which adapter you're using.

Learn more about Flags Explorer.

Read more

Andy Schneider Dominik Ferber Aaron Morris Christopher Skillicorn Chris Widmaier
https://vercel.com/changelog/faster-builds-now-available-with-compute-upgrades-on-paid-plans Faster builds now available with compute upgrades on paid plans 2025-05-07T13:00:00.000Z

Projects with on-demand concurrent builds can now use enhanced build machines to improve build performance.

Available on all paid plans, these machines offer double the resources: 8 CPUs, 16 GB memory, and 58 GB disk. This reduces both build time and total build minutes used. Existing customers are already seeing up to 25% faster builds with no changes required.

Enhanced builds can be enabled per project and are billed per minute.

Enterprise customers can run all concurrent builds, including pre-allocated build slots and on-demand, on higher-spec machines.

Enable on-demand enhanced builds and learn more in our documentation.

Read more

Andrew Healey Marc Codina Segura Janos Szathmary Mariano Cocirio
https://vercel.com/blog/join-the-vercel-ai-accelerator Join the Vercel AI Accelerator 2025-05-06T13:00:00.000Z

The Vercel AI Accelerator is back. This year, we'll work with 40 teams building the future of AI. Over six weeks, participants get the tools, infrastructure, and support to create next-generation AI apps.

Applications are open now until May 17.

Read more

Alli Pope
https://vercel.com/changelog/session-tracing-now-available Track a request's full lifecycle with session tracing 2025-05-05T13:00:00.000Z

Session tracing is now available to all Vercel users, providing end-to-end visibility into the timing of each step in a request's lifecycle, from when it enters Vercel’s infrastructure to execution inside your Vercel Functions.

With session tracing you can:

  • Start tracing sessions on your deployments directly from the Vercel Toolbar, no setup required.

  • View spans for Vercel's routing, caching, middleware, and function layers as well as those instrumented in your code.

  • Share traces with teammates for faster debugging and optimization.

  • Use tracing alongside logs and metrics to debug, optimize, and improve iteration speed.

Session tracing is free to customers on all plans.

To get started, find Tracing in the Vercel Toolbar, or learn more in the docs.

Read more

Andrew Gadzik Tom Lienard wits Will Turner Casey Gowrie Luc Leray Darpan Kakadia Dima Voytenko Sam Saliba Gary Borton Jas Garcha
https://vercel.com/blog/how-v0-is-building-seo-optimized-sites-by-default How v0 is building SEO-optimized sites by default 2025-05-02T13:00:00.000Z

Building for the web goes beyond speed and aesthetics, discoverability matters just as much. While AI can accelerate web development, it often skips over performance, accessibility, or SEO best practices that matter for discoverability. With v0, you don’t have to compromise. Every interface you generate is fast, accessible, and SEO-optimized by default.

v0 integrates with Next.js and deploys to Vercel, giving you structured metadata, performance tuning, and Server Side Rendering (SSR). The result is better Core Web Vitals, pages that load quickly and return full HTML, making them easier for search engines to crawl and index.

Read more

Alli Pope
https://vercel.com/changelog/information-disclosure-in-flags-sdk-cve-2025-46332 Information disclosure in Flags SDK (CVE-2025-46332) 2025-05-02T13:00:00.000Z

Vercel discovered and patched an information disclosure vulnerability in the Flags SDK, affecting versions:

  • flags ≤ 3.2.0

  • @vercel/flags ≤ 3.1.1

This is being tracked as CVE-2025-46332. We have published an automatic mitigation for the default configuration of the Flags SDK on Vercel.

We recommend upgrading to [email protected] (or migrating from @vercel/flags to flags) to remediate the issue. Further guidance can be found in the upgrade guide.

Impact and analysis

A malicious actor could determine the following under specific conditions:

  • Flag names

  • Flag descriptions

  • Available options and their labels (e.g. true, false)

  • Default flag values

Flag providers were not accessible. No write access or additional customer data was exposed; the exposure is limited to the values noted above.

Automatic mitigation

Vercel implemented a network-level mitigation to prevent the default flags discovery endpoint at /.well-known/vercel/flags from being reachable, which automatically protects Vercel deployments against exploitation of this issue.

While uncommon, if you are exposing the flags discovery endpoint through custom paths, you can also implement a custom WAF rule to restrict access to these endpoints as a mitigation, for example when using:

  • Pages Router, as the original non-rewritten route would still be accessible, e.g. /api/vercel/flags

  • Microfrontends, as each application may use a distinct flags discovery endpoint

Recommendations

We recommend that all users upgrade to [email protected]. Flags Explorer will be disabled and show a warning notice until you upgrade to the latest version.

More information can be found in the upgrade guide.

Read more

Dominik Ferber Jack Wilson
https://vercel.com/changelog/serve-personalized-content-faster-with-vary-support Serve personalized content faster with Vary support 2025-05-02T13:00:00.000Z

Vercel now fully supports the HTTP Vary header, making it easier to cache personalized content across all plans with no configuration required.

The Vary header tells caches which request headers to include when generating cache keys. This allows Vercel’s application delivery network to store and serve different versions of a page based on headers like X-Vercel-IP-Country or Accept-Language, so users get fast, localized content without recomputation.
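As an illustrative sketch (the route, response body, and greeting logic are hypothetical), a handler can opt into per-country caching like this:

```typescript
// Sketch: vary the cached response by visitor country.
// X-Vercel-IP-Country is set by Vercel's proxy; we default it for local dev.
export async function GET(request: Request): Promise<Response> {
  const country = request.headers.get('x-vercel-ip-country') ?? 'US';
  const body = JSON.stringify({
    greeting: country === 'DE' ? 'Hallo' : 'Hello',
  });
  return new Response(body, {
    headers: {
      'Content-Type': 'application/json',
      // Allow shared caches to store the response...
      'Cache-Control': 'public, s-maxage=3600',
      // ...but keep a separate cached variant per country
      Vary: 'X-Vercel-IP-Country',
    },
  });
}
```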

By returning a Vary header that includes X-Vercel-IP-Country, your site caches and serves country-specific content.

A visitor from the United States receives the US-specific cached version, and visitors from other countries receive the version for their locale, with no recomputation required.

Learn more about caching personalized content in Vercel's application network documentation.

Read more

Luba Kravchenko Joe Haddad
https://vercel.com/blog/ios-developers-can-now-offer-commission-free-payments-on-web iOS developers can now offer commission-free payments on web 2025-05-01T13:00:00.000Z

Yesterday, a federal court made a decisive ruling in Epic Games v. Apple: Apple violated a 2021 injunction by continuing to restrict developers from linking to external payment methods, and by imposing a 27% fee when they did.

The ruling represents a major shift for native app developers.

Read more

Fernando Rojo
https://vercel.com/changelog/create-custom-waf-rules-directly-from-the-vercel-firewall-tab Create custom WAF rules directly from the Vercel Firewall tab 2025-05-01T13:00:00.000Z

You can now create custom WAF rules directly from the chart displayed on the Firewall tab of the Vercel dashboard.

When viewing your traffic grouped by a parameter (like IP address, user agent, or request path), you can now select "Create Custom Rule" within the actions menu of any displayed time series. This automatically generates an editable draft of the custom WAF rule that matches the selected parameter.

Once the WAF rule is saved and published, it's immediately propagated across our global network.

This feature is available to all users across all plans at no additional cost.

Learn more about the Vercel Firewall.

Read more

Casey Gowrie Malavika Tadeusz
https://vercel.com/changelog/optionally-disable-deployment_status-webhook-events-for-github-actions Optionally disable deployment_status webhook events for GitHub Actions 2025-05-01T13:00:00.000Z

You can now disable the deployment_status webhook event that Vercel sends to GitHub when Vercel is connected to your GitHub repository.

When deployment_status events are enabled, GitHub's pull request activity will create a log with a status event for every deployment. While this can keep your team better informed, it can also create noisy event logs for repositories with many deployment events, especially in monorepos with many projects.

Disabling these events prevents repeated messages from cluttering your GitHub PR's event history, giving you a cleaner, more focused view of your pull request activity. The Vercel GitHub comment containing links to your preview deployments will continue to be posted as before.

The deployment_status event is most often used as a trigger for GitHub Actions. We recommend migrating to repository_dispatch events to simplify workflows with richer Vercel deployment information.

Learn more in the documentation.

Read more

Erika Rowland Tom Knickman
https://vercel.com/changelog/checks-api-support-added-for-marketplace-integration-providers Checks API support added for Marketplace integration providers 2025-04-30T13:00:00.000Z

Providers building native integrations for the Vercel Marketplace can now use the Checks API to deliver deeper functionality for their users.

With Vercel's Checks API, you can define and run custom tests and assertions after every deployment, then surface actionable results directly in the Vercel dashboard.

As a testing provider, you can implement checks such as reliability tests (e.g. API availability, runtime errors), performance tests (e.g. response time thresholds, load simulation), or Web Vitals (e.g. layout shift). This helps developers catch real-world issues earlier in their workflow, powered by your integration.

When building your integration, keep these best practices in mind:

  • Offer minimal or no-configuration solutions so developers can easily run checks

  • Provide a guided onboarding experience from installation to first results

  • Display clear, actionable outcomes directly in the Vercel dashboard

  • Document ways to extend or customize checks for advanced users

Learn more in the Checks API documentation.

Read more

Fabio Benedetti Dima Voytenko Hedi Zandi Justin Kropp
https://vercel.com/changelog/protection-against-react-router-and-remix-vulnerabilities-cve-2025-43864 Protection against React Router and Remix vulnerabilities 2025-04-26T13:00:00.000Z

Security researchers reviewing the Remix web framework have discovered two high-severity vulnerabilities in React Router. Vercel proactively deployed mitigation to the Vercel Firewall and Vercel customers are protected.

CVE-2025-43864 and CVE-2025-43865 enable an external party to modify the response using certain request headers, which can lead to cache-poisoning Denial of Service (DoS). CVE-2025-43865 also enables vulnerabilities such as stored Cross-Site Scripting (XSS).

Impact and analysis

When we learned about the vulnerability, we started analyzing the impact to the Vercel platform. Here are our findings and recommendations:

  • We were able to reproduce the vulnerability and demonstrate that cache poisoning is trivial, including stored Cross Site Scripting (XSS) injections

  • The only precondition is that the customer used an impacted version of Remix / React Router (v7.0.0 branch prior to version v7.5.2) and Cache-Control headers

  • The impact can extend to any visitor of the application after the cache is poisoned, regardless of authentication state or any other request headers

  • Vercel customers using React Router between v7.0.0 and v7.5.1 were impacted before our Firewall mitigation

  • We have deployed mitigations for attacks by stripping the X-React-Router-Spa-Mode and X-React-Router-Prerender-Data headers from the request in the Vercel Firewall. New requests are now protected across all deployments on the Vercel platform. We confirmed our mitigation approach with the Remix / React Router team.

  • In addition to mitigating future requests, we have preemptively purged CDN response caches on our network out of caution.

Both issues have been patched in React Router 7.5.2. We recommend updating to the latest version and redeploying.

If you are using additional layers of caching, including Cloudflare or other CDNs, we recommend purging those caches separately. Thank you to zhero for disclosing the vulnerability.

Read more

Casey Gowrie Ethan Shea
https://vercel.com/changelog/improved-experience-for-managing-project-domains Improved experience for managing project domains 2025-04-25T13:00:00.000Z

We’ve redesigned the Project Domains page with faster search, smoother navigation, and clearer visibility into your domain configurations.

Faster Browsing and Cleaner Overviews

Navigating and understanding your domain setup is now quicker and more direct:

  • Live Search: Start typing in the search bar, and your domain list will filter as you type without needing an exact match.

  • Infinite Scroll: We've replaced the "View More" button with smooth, infinite scrolling so you can browse without interruptions.

  • Cleaner View: Key information like associated Redirects and Environments is now displayed inline within the domain list, giving you a comprehensive overview at a glance without needing to click into individual domain details.

Streamlined Configuration and Setup

Configuring DNS and adding new domains is now more focused and user-friendly:

  • Focused DNS Configuration in Modals: We’ve moved the DNS configuration instructions into a modal. This allows you to focus solely on configuring the domain you’ve added.

  • Guided Full-Page Add Flow: Adding a new domain is now a clearer, step-by-step process with our new full-page add flow. We guide you through the necessary configurations to ensure a correct setup from the start.

  • Smarter Domain Validation: We’ve added better input validation, improved error messages around adding wildcard domains, and clearer guidance around adding www and apex domains.

To learn more about managing Domains on Vercel, read the docs.

Read more

Rhys Sullivan Meg Bird
https://vercel.com/changelog/pro-customers-can-now-deploy-faster-with-on-demand-concurrency-builds Pro customers can now deploy faster without build queues 2025-04-25T13:00:00.000Z

When multiple team members deploy to Vercel at once, builds are queued by default. Now, you can remove these queues, enabling your builds to start immediately.

This is available to both Pro and Enterprise customers with new per-minute pricing and can be applied in the following ways:

  • Manually, per deployment, for urgent builds

  • Automatically, at the project level, to avoid queues by default

Concurrent build slots remain available for teams with steady, high-volume workloads.

Learn about on-demand concurrent builds and enable them on your project.

Read more

Mariano Cocirio Ali Smesseim Luke Phillips-Sheard Balazs Varga Harpreet Arora Janos Szathmary
https://vercel.com/changelog/pricing-for-on-demand-concurrent-builds-reduced-by-over-50-percent Pricing for on-demand concurrent builds reduced by over 50% 2025-04-25T13:00:00.000Z

Pricing for on-demand concurrent builds, which allow deployments to bypass build queues, has been reduced by more than 50%. Usage increments have also been lowered from 10 minutes to 1 minute.

On-demand concurrent builds are available to both Pro and Enterprise customers, and complement existing build slots with the following recommendations:

  • Use on-demand for bursty workloads or priority deploys

  • Use slots for large, frequent builds with predictable volume

This change also applies to all customers using Enhanced On-demand builds, which allocate more memory to build compute for faster deployment times.

Learn about on-demand concurrent builds and enable them on your project.

Read more

Mariano Cocirio Ali Smesseim Luke Phillips-Sheard Balazs Varga Harpreet Arora Janos Szathmary
https://vercel.com/changelog/updates-to-vercel-toolbar-shortcuts Updates to Vercel Toolbar shortcuts 2025-04-24T13:00:00.000Z

You can now customize keyboard shortcuts for the Vercel Toolbar. Replace default shortcuts for hiding and opening the Toolbar Menu, and add shortcuts for frequently used tools.

To configure shortcuts, find Keyboard Shortcuts under Preferences in the Toolbar Menu. The browser extension is needed to customize shortcuts for hiding the toolbar and opening the Toolbar Menu.

The default shortcut to show and hide the Toolbar Menu is changing to reduce conflicts with sites that have their own Cmd+K menus.

  • Mac: changing from K to ^ (control)

  • Windows: changing from Ctrl K to Ctrl

Learn more about the Vercel Toolbar.

Read more

wits Christopher Skillicorn
https://vercel.com/blog/one-click-bot-protection-now-in-public-beta Bot Protection: One-click managed ruleset now in public beta 2025-04-23T13:00:00.000Z

The Vercel Web Application Firewall (WAF) inspects billions of requests every day to block application-layer threats, such as cross-site scripting, path traversal, and application DDoS attacks. While we already inspect and block malicious bot traffic, we wanted to provide better, more precise controls to fine-tune your application security.

Today, we're launching the Bot Protection managed ruleset, free for all users on all plans. With a single click, you can protect your application from bot attacks.

Read more

Malavika Tadeusz Liz Hurder
https://vercel.com/changelog/bot-protection-is-now-in-public-beta Bot Protection is now in public beta 2025-04-23T13:00:00.000Z

Vercel Web Application Firewall now includes a new Bot Protection managed ruleset, available in public beta for all users.

Bot Protection helps reduce automated traffic from non-browser sources and allows you to respond based on two action choices:

  • Log Only Action: Logs identified bot traffic in the Firewall tab without blocking requests

  • Challenge Action: Serves a browser challenge to traffic from non-browser sources. Verified bots are automatically excluded

To avoid disrupting legitimate automated traffic that's not already covered by Verified Bots, you can configure custom WAF rules using the bypass action for specific requests.

To enable the ruleset:

  1. In your project dashboard, navigate to the Firewall tab and select Configure

  2. Under Bot Management, navigate to Bot Protection

  3. Select Log or Challenge

  4. Select Review Changes and review the changes to be applied

  5. Select Publish to apply the changes to your production deployment

Bot Protection complements Vercel's existing mitigations, which already block common threats like DDoS attacks, low quality traffic, and spoofed traffic. It adds an extra layer of protection for any automated traffic that is not clearly malicious.

During this public beta period, we’ve set up a thread on the Vercel Community where you can share your feedback, feature requests, and experiences with Bot Protection.

Learn more about the Bot Protection managed ruleset and the Vercel Firewall.

Edit: During the beta period, we renamed the Bot Filter managed ruleset to Bot Protection.

Read more

Sage Abraham Casey Gowrie Yanick Bélanger Joe Haddad Marco Cornacchia Malavika Tadeusz
https://vercel.com/changelog/prisma-joins-the-vercel-marketplace Prisma joins the Vercel Marketplace 2025-04-23T13:00:00.000Z

Prisma is now available as a storage provider on the Vercel Marketplace, offering Prisma Postgres, a serverless database optimized for fullstack and edge applications.

With automated account creation, integrated billing through Vercel, and a generous free tier, developers can now get started with Prisma Postgres in just a few clicks, no separate signup required.

With the Prisma native integration, Vercel users get:

  • A high-performance Postgres database with zero cold starts

  • Automatic scaling with built-in global caching and connection pooling

  • Visual data management and AI-powered performance suggestions

Get started with Prisma on the Vercel Marketplace. Available to customers on all plans.

Read more

Hedi Zandi Justin Kropp Alex Martin Dima Voytenko Jake Uskoski Chris Tate
https://vercel.com/changelog/node-js-vercel-functions-now-support-request-cancellation Node.js Vercel Functions now support request cancellation 2025-04-23T13:00:00.000Z

Vercel Functions using Node.js can now detect when a request is cancelled and stop execution before completion. This includes actions like navigating away, closing a tab, or hitting stop on an AI chat to terminate compute processing early.

This reduces unnecessary compute, wasted token generation, and data sent that the user never sees.

You can listen for cancellation using Request.signal.aborted or the abort event:
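For instance, a hypothetical streaming route handler can check the signal between units of work (the loop below stands in for real incremental work such as token generation):

```typescript
// Sketch: stop producing output as soon as the client cancels the request.
export async function GET(request: Request): Promise<Response> {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      for (let i = 0; i < 3; i++) {
        // request.signal.aborted flips to true when the client disconnects
        if (request.signal.aborted) {
          controller.close();
          return;
        }
        controller.enqueue(encoder.encode(`chunk ${i}\n`));
      }
      controller.close();
    },
  });
  // The abort event is also useful for one-time cleanup
  request.signal.addEventListener('abort', () => {
    // e.g. cancel upstream fetches or stop token generation here
  });
  return new Response(stream);
}
```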

If you're using the AI SDK, forward the abortSignal to your stream:

Learn more about cancelling Function requests.

Read more

Craig Andrews Mariano Cocirio
https://vercel.com/changelog/fluid-compute-is-now-the-default-for-new-projects Fluid compute is now the default for new projects 2025-04-23T13:00:00.000Z

New Vercel projects now run on Fluid compute by default.

This update follows Fluid’s general availability, its adoption across large-scale production apps, and all v0.dev deployments shipping with Fluid enabled by default.

Fluid compute reuses existing instances before spawning new ones, cutting costs by up to 85% for high-concurrency workloads. It combines the efficiency of servers with the flexibility of serverless:

  • Concurrent requests per function

  • Scale from zero to infinity

  • Minimal cold starts

  • Usage-based, pay as you go

  • Full Node.js and Python support

  • No infrastructure to manage

  • Background tasks with waitUntil

Enable Fluid for your existing projects, and learn more in our blog and documentation.

Read more

Tom Lienard Doug Parsons Florentin Eckl Mariano Cocirio
https://vercel.com/changelog/cve-2025-32421 CVE-2025-32421 2025-04-22T13:00:00.000Z

A low severity cache poisoning vulnerability was discovered in Next.js.

Summary

This affects versions >14.2.24 through <15.1.6 as a bypass of the previous CVE-2024-46982. The issue occurs when an attacker exploits a race condition between two requests: one containing the ?__nextDataRequest=1 query parameter and another with the x-now-route-matches header.

Some CDN providers may cache a 200 OK response even in the absence of explicit cache-control headers, enabling a poisoned response to persist and be served to subsequent users.

Affected Versions

  • Next.js versions >14.2.24 through <15.1.6

Impact

This vulnerability allows an attacker to poison the CDN cache by injecting the response body from a non-cacheable data request (?__nextDataRequest=1) into a normal request that retains cacheable headers, such as Cache-Control: public, max-age=300.

No backend access or privileged escalation is possible through this vulnerability.

This issue was verified using automated tooling that repeatedly triggers the race condition. Successful exploitation depends on precise timing and the presence of a vulnerable CDN configuration. A Python-based proof of concept script was shared by the reporter and used to validate this behavior on live targets prior to the patch.

Patches

This issue was patched in 15.1.6 and 14.2.24 by stripping the x-now-route-matches header from incoming requests.

Vercel Platform Mitigation

Applications hosted on Vercel's platform are not affected by this issue, as the platform does not cache responses based solely on 200 OK status without explicit cache-control headers.

Workarounds

For self-hosted Next.js deployments unable to upgrade immediately, you can mitigate this vulnerability by:

  • Stripping the x-now-route-matches header from all incoming requests at your CDN

  • Setting cache-control: no-store for all responses under risk
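For example, if an nginx reverse proxy sits in front of your self-hosted deployment (an assumption; adapt this to whatever CDN or proxy layer you run), setting the header to an empty value removes it before it reaches Next.js:

```nginx
# Strip the attacker-controlled header before proxying to Next.js.
# In nginx, proxy_set_header with an empty value drops the header entirely.
location / {
    proxy_set_header x-now-route-matches "";
    proxy_pass http://nextjs_upstream;  # hypothetical upstream name
}
```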

We strongly recommend only caching responses with explicit cache-control headers.

Credit

Thank you to Allam Rachid (zhero;) for the responsible disclosure. They were awarded as part of our bug bounty program.

Read more

Ty Sbano
https://vercel.com/blog/becoming-an-ai-engineering-company Becoming an AI engineering company 2025-04-18T13:00:00.000Z

In today's rapidly evolving tech landscape, AI has moved from research labs to everyday tools with stunning speed. I wanted to share my perspective, not only as a CTO at Vercel, but as an engineer who's seen a few revolutions over the past 30 years.

Read more

Malte Ubl
https://vercel.com/changelog/protection-against-react-router-vulnerability-cve-2025-31137 Protection against React Router vulnerability CVE-2025-31137 2025-04-17T13:00:00.000Z

Security researchers reviewing the Remix web framework have recently discovered a high-severity vulnerability in React Router that allows URL manipulation through the Host / X-Forwarded-Host headers.

Our investigation determined that Vercel and our customers are unaffected:

  • We use query parameters as part of the cache key, which protects against cache poisoning driven by the _data query parameter.

  • The @vercel/remix adapter uses X-Forwarded-Host similarly to the Express adapter, but it is not possible for an end user to send X-Forwarded-Host to a Function hosted on Vercel.

A patch has been issued and released in Remix 2.16.3 / React Router 7.4.1. We recommend customers update to the latest version.

Read more about CVE-2025-31137.

Read more

Casey Gowrie
https://vercel.com/changelog/lower-pricing-for-fast-data-transfer Lower pricing for Fast Data Transfer 2025-04-17T13:00:00.000Z

Today we are lowering the price of Fast Data Transfer (FDT) for Vercel regions in Asia Pacific, Latin America, and Africa by up to 50%.

The new FDT regional pricing is rolling out for all Pro and Enterprise plans:

  • All new Pro and Enterprise users will be charged the new price moving forward.

  • For existing Pro users, the new pricing applies starting today.

  • For existing Enterprise users, it will apply at the start of the next billing cycle (typically monthly).

Vercel Region: old price per GB → new price per GB

  • Cape Town, South Africa (cpt1): $0.39 → $0.28

  • Hong Kong (hkg1): $0.30 → $0.16

  • Mumbai, India (bom1): $0.33 → $0.20

  • Osaka, Japan (kix1): $0.31 → $0.16

  • Sao Paulo, Brazil (gru1): $0.44 → $0.22

  • Seoul, South Korea (icn1): $0.47 → $0.35

  • Singapore (sin1): $0.30 → $0.16

  • Sydney, Australia (syd1): $0.32 → $0.16

  • Tokyo, Japan (hnd1): $0.31 → $0.16
Learn more about Fast Data Transfer or review your FDT usage on the Usage page.

Read more

Harpreet Arora Malavika Tadeusz Shar Dara
https://vercel.com/changelog/enhanced-builds-now-have-double-the-compute Enhanced Builds now have double the compute 2025-04-17T13:00:00.000Z

Enhanced Builds now offer double the compute capacity, further improving performance for large codebases and CPU-intensive builds.

Available to Enterprise customers, Enhanced Builds are designed for teams working with monorepos or frameworks that run tasks in parallel—like dependency resolution, transpilation, or static generation.

Customers already using Enhanced Builds are seeing, with no action required, up to 25% reductions in build times.

Learn more in our documentation or speak to your Vercel account team to enable Enhanced Builds.

Read more

Andrew Healey Marc Codina Segura Mariano Cocirio
https://vercel.com/blog/life-of-a-request-application-aware-routing Life of a Vercel request: Application-aware routing 2025-04-15T13:00:00.000Z

Routing is a fundamental part of delivering applications, but it’s often treated as an afterthought—tacked onto the caching layer and configured through complex YAML or manual click-ops. This can introduce friction for teams, increase the risk of misconfigurations, and slow down deployments, especially as applications grow in complexity.

Vercel takes a different approach: routing is built into the platform as an application-aware gateway that understands your codebase. This unlocks a range of capabilities that simplify development by reducing configuration overhead, minimizing latency, and enabling more advanced architectures.

The gateway has full context of your deployments, domains, and logic. It supports standard routing and custom rules, but goes beyond reverse proxying by interpreting application logic in real time to make smarter decisions, like skipping unnecessary compute.

Here’s how Vercel routes requests—and why it makes building performant, complex apps easier.

Read more

Dan Fein
https://vercel.com/blog/update-on-spain-and-laliga-blocks-of-the-internet Update on Spain and LALIGA blocks of the internet 2025-04-15T13:00:00.000Z

A Spanish court has granted LALIGA the power to block IP addresses associated with unauthorized football streaming—without distinguishing between infringing and non-infringing services. As a result, legitimate, unrelated websites that people depend on are now inaccessible in Spain.

Read more

Malte Ubl Matheus Fernandes
https://vercel.com/blog/migrating-grep-from-create-react-app-to-next-js Migrating Grep from Create React App to Next.js 2025-04-14T13:00:00.000Z

Grep is extremely fast code search. You can search over a million repositories for specific code snippets, files, or paths. Search results need to appear instantly without loading spinners.

Originally built with Create React App (CRA) as a fully client-rendered Single-Page App (SPA), Grep was fast—but with CRA now deprecated, we wanted to update the codebase to make it even faster and easier to maintain going forward.

Here's how we migrated Grep to Next.js—keeping the interactivity of a SPA, but with the performance improvements from React Server Components.

Read more

Ethan Niser Kevin Corbett
https://vercel.com/changelog/vercel-observability-is-now-route-aware-for-sveltekit-apps Vercel Observability is now route-aware for SvelteKit apps 2025-04-14T13:00:00.000Z

SvelteKit routes with dynamic segments—like /blog/[slug]—are now individually recognized and surfaced by Vercel Observability. This replaces the previous behavior where all dynamic routes appeared under a single /fn entry.

This is available with version 5.7.0 of @sveltejs/adapter-vercel. Upgrade to unlock improved observability for your SvelteKit projects.

Learn more about Vercel Observability.

Read more

Tobias Lins Rich Harris
https://vercel.com/changelog/legacy-build-image-is-being-deprecated Legacy build image is being deprecated on September 1, 2025 2025-04-10T13:00:00.000Z

Node.js 18 (LTS support ends April 30, 2025) and the Vercel legacy build image will be deprecated on September 1, 2025. If you are still using the legacy build image on this date, new builds will display an error.

What changes between the legacy build image and latest build image?

  • The minimum version of Node.js is now 20.x

  • The Python toolchain version is now 3.12

  • The Ruby toolchain version is now 3.3.x

How do I know if I am still using the legacy build image?

Will my existing deployments be affected?

Existing deployments will not be affected. However, the Node.js version will need to be updated on your next deployment.

How can I see if my projects are affected?

You can see which projects are affected by this deprecation by running the following commands:

How do I upgrade?

To upgrade with the dashboard, visit the Build and Deployment settings for your project and upgrade the version.

To upgrade with code, use the engines field in package.json:
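As a hedged sketch (20.x is the new minimum on the latest build image; pick any supported version), the engines field might look like:

```json
{
  "engines": {
    "node": "20.x"
  }
}
```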

This date coincides with the previously announced deprecation of Node.js 18 on the Vercel platform. Learn more about differences between build images.

Read more

Anthony Shew Ali Smesseim
https://vercel.com/blog/introducing-chatbot Introducing Chatbot Template 2025-04-09T13:00:00.000Z

Update: Chat SDK has been renamed to Chatbot template, and a new Chat SDK is now available to provide a unified language for chat bots across Slack, Teams, GitHub, and Discord.

The AI SDK powers incredible applications across the web, and today we're announcing the Chatbot—a best-in-class, production-ready template for building conversational AI applications like ChatGPT or Claude artifacts.

Read more

Jared Palmer Jeremy Philemon
https://vercel.com/changelog/grok-3-now-available-on-vercel-marketplace Grok 3 now available on Vercel Marketplace 2025-04-09T13:00:00.000Z

xAI's latest and most powerful Grok 3 models are now available through the Vercel Marketplace, bringing state-of-the-art AI capabilities to your Vercel projects.

To get started, you can use the AI SDK xAI provider in your project:

Then, install the xAI Marketplace Integration with Vercel CLI (or from the dashboard):

Once you've accepted the terms, you'll be able to use Grok models from within your project, with no additional steps necessary.

To help you get started, we've also made a ready-to-deploy Next.js xAI starter template. To learn more about xAI on Vercel, read our announcement and the documentation.

Read more

Hedi Zandi Dima Voytenko Walter Korman Justin Kropp Fabio Benedetti Alex Martin René-Pier Deshaies-Gélinas Jake Uskoski
https://vercel.com/blog/expanding-observability-on-vercel Expanding observability on Vercel 2025-04-08T13:00:00.000Z

The Vercel Marketplace adds new integrations from Sentry, Checkly, and Dash0. You can now use the tools you already trust to monitor, measure, and debug your apps. No custom setup. No change to how you build or deploy.

These tools connect directly through the Vercel Marketplace with integrated billing, single sign-on, and access to provider dashboards, giving you deep visibility without the setup overhead.

Read more

Hedi Zandi Alli Pope
https://vercel.com/changelog/automatic-mitigation-of-crawler-delay-via-skew-protection Automatic mitigation of Google and Bing crawl delay, via Vercel’s Skew Protection 2025-04-08T13:00:00.000Z

Google and Bing web crawlers occasionally crawl a document, but render it up to several weeks later using a headless browser. This delay between document crawl and assets download (which happens during render) can cause indexing failures if the website has been re-deployed since the crawl.

Vercel now automatically protects against such indexing failures for projects that have Skew Protection enabled.

This was achieved by extending the maximum age for Skew Protection to 60 days for requests coming from major search engine bots, such as Googlebot and Bingbot. This means that assets deployed up to 60 days ago will still be accessible to search engines when they render your document.

Regardless of the maximum age configured in the dashboard, Pro and Enterprise accounts using Skew Protection will automatically be protected from this delay, thereby improving SEO.

Learn more about Skew Protection and enable it in your project. Also, check out our SEO research on how Google handles JavaScript throughout the indexing process, which provides a deeper dive into the search rendering process.

Read more

Steven Salat Malte Ubl
https://vercel.com/changelog/sentry-checkly-and-dash0-join-the-vercel-marketplace Sentry, Checkly, and Dash0 join the Vercel Marketplace 2025-04-08T13:00:00.000Z

New native integrations from Sentry, Checkly, and Dash0 are now available on the Vercel Marketplace, making it easier to monitor, debug, and optimize your applications—all in one place.

  • Sentry: Real-time error tracking and performance monitoring for faster issue resolution

  • Checkly: End-to-end monitoring and synthetic checks for your frontend and APIs

  • Dash0: Log management and structured observability, built with a developer-first experience. Dash0 also supports Native Log Drains, allowing you to stream logs from your Vercel projects to external logging systems for deeper insights and centralized monitoring

This launch introduces Log Drain support for native integrations—a capability that was previously only available to connectable accounts.

These integrations offer frictionless onboarding, single sign-on, and integrated billing through Vercel, making it easy to get started in just a few clicks.

Explore the new observability integrations.

Read more

Hedi Zandi Fabio Benedetti Dima Voytenko Justin Kropp René-Pier Deshaies-Gélinas
https://vercel.com/blog/protectd-evolving-vercels-always-on-denial-of-service-mitigations Protectd: Evolving Vercel’s always-on denial-of-service mitigations 2025-04-07T13:00:00.000Z

Securing web applications is core to the Vercel platform. It’s built into every request, every deployment, every layer of our infrastructure. Our always-on Denial-of-Service (DoS) mitigations have long run by default—silently blocking attacks before they ever reach your applications.

Last year, we made those always-on mitigations visible with the release of the Vercel Firewall, which allows you to inspect traffic, apply custom rules, and understand how the platform defends your deployments.

Now, we’re introducing Protectd, our next-generation real-time security engine. Running across all deployments, Protectd reduces mitigation times for novel DoS attacks by over tenfold, delivering faster, more adaptive protection against emerging threats.

Let's take a closer look at how Protectd extends the Vercel Firewall by continuously mapping complex relationships between traffic attributes, analyzing, and learning from patterns to predict and block attacks.

Read more

Casey Gowrie Joe Haddad
https://vercel.com/changelog/trigger-github-actions-with-enriched-deployment-data-from-vercel Trigger GitHub Actions with enriched deployment data from Vercel 2025-04-07T13:00:00.000Z

You can now trigger GitHub Actions workflows in response to Vercel deployment events with enriched data using repository_dispatch events. These events are sent from Vercel to GitHub, enabling more flexible, cost-efficient CI workflows, and easier end-to-end testing for Vercel deployments.

Previously, we recommended using deployment_status events, but these payloads were limited and required extra parsing or investigation to understand what changed.

With repository_dispatch, Vercel sends custom JSON payloads with full deployment context—allowing you to reduce GitHub Actions overhead and streamline your CI pipelines.
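As a hedged sketch of the receiving side, a workflow can subscribe to these dispatches. The event type name and payload field below are illustrative assumptions; check Vercel's documentation for the actual values sent:

```yaml
# .github/workflows/deployment-e2e.yml
on:
  repository_dispatch:
    # Illustrative type name; use the event types documented by Vercel
    types: [deployment.succeeded]

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      # repository_dispatch payloads are exposed under client_payload;
      # the `url` field here is a hypothetical example
      - run: echo "Testing deployment at ${{ github.event.client_payload.url }}"
```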

We recommend migrating to repository_dispatch for a better experience. deployment_status events will continue to work for backwards compatibility.

Read more

Erika Rowland Tom Knickman
https://vercel.com/changelog/llama-4-is-now-available-on-vercel-marketplace Llama 4 is now available on Vercel Marketplace 2025-04-05T13:00:00.000Z

Meta’s latest and most powerful Llama 4 models are now available through the Vercel Marketplace via Groq.

To get started for free, install the Groq integration in the Vercel dashboard or add Groq to your existing projects with the Vercel CLI:

You can then use the AI SDK Groq provider with Llama 4:

For a full demo, check out the official Groq chatbot template (which now uses Llama 4) or compare Llama 4 against other models side-by-side on our AI SDK Playground. To learn more, visit our AI documentation.

Read more

Walter Korman
https://vercel.com/changelog/run-and-share-custom-queries-in-observability-plus Run and share custom queries in Observability Plus 2025-04-04T13:00:00.000Z

Observability Plus customers can now create and share custom queries directly from the Observability dashboard—making it easier to investigate specific metrics, routes, and application behavior without writing code.

The new query interface lets you:

  • Filter by route to focus on specific pages and metrics

  • Use advanced filtering, with auto-complete—no query language needed

  • Analyze charts in the context of routes and projects

  • Share queries instantly via URL or Copy button

This new querying experience builds on the Monitoring dashboard, helping you stay in context as you drill deeper into your data.

To try it out, open your Observability dashboard and select Explore query arrows on any chart or the query builder from the ellipsis menu.

Learn more about running queries in Observability and its available metrics.

Read more

Julia Shi Damien Simonin Feugas Timo Lins
https://vercel.com/blog/how-paige-grew-revenue-by-22-with-shopify-next-js-and-vercel How PAIGE grew revenue by 22% with Shopify, Next.js, and Vercel 2025-04-03T13:00:00.000Z

PAIGE, a leading denim and apparel retailer, faced significant technical complexity due to their existing ecommerce architecture. Seeking a faster and more reliable online experience, they reimagined their ecommerce strategy by adopting a simpler headless tech stack—one powered by Shopify, Next.js, and Vercel—that ultimately boosted their Black Friday revenue by 22% and increased conversion rates by 76%.

Read more

Alina Weinstein
https://vercel.com/changelog/vercel-secure-compute-now-supports-multiple-environments Vercel Secure Compute now supports multiple environments 2025-04-03T13:00:00.000Z

Teams using Vercel Secure Compute can now associate each project environment—Production, Preview, and custom—with a distinct Secure Compute network, directly from the project settings. This simplifies environment-specific network isolation within a single project.

To connect your project's environments to Secure Compute:

  1. Navigate to your project's Secure Compute settings

  2. For every environment you want to connect to Secure Compute:

    • Select an active network

    • Optionally, select a passive network to enable failover

    • Optionally, enable builds to include the project's build container in the network

  3. Click Save to persist your changes

Learn more about Secure Compute.

Read more

Miroslav Simulcik Meg Bird Bel Curcio
https://vercel.com/changelog/2fa-is-now-available Two-Factor Authentication (2FA) is now available 2025-04-03T13:00:00.000Z

Users can now secure their accounts using Two-Factor Authentication (2FA) with Time-based One-Time Passwords (TOTP), commonly provided by authenticator apps like Google Authenticator or Authy. Your current Passkeys (WebAuthn keys) can also be used as second factors. 2FA adds an extra security layer to protect your account even if the initial login method is compromised.

To enable 2FA:

  1. Navigate to Authentication in Account Settings and enable 2FA

  2. Log in using your existing method (email OTP or Git provider) as your first factor

  3. Complete authentication with a TOTP authenticator as your second factor
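For illustration only (this is what an authenticator app computes, not Vercel code), TOTP codes follow RFC 6238: an HMAC-SHA1 over a 30-second time counter, dynamically truncated to a short numeric code. A minimal sketch:

```typescript
import { createHmac } from "node:crypto";

// RFC 6238 TOTP sketch: SHA-1, 30-second time steps, as used by most
// authenticator apps. Illustrative only; not Vercel's implementation.
function totp(secret: Buffer, unixSeconds: number, digits = 6): string {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(unixSeconds / 30)));
  const hmac = createHmac("sha1", secret).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
  const code =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];
  return (code % 10 ** digits).toString().padStart(digits, "0");
}

// RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
// yields "94287082" with 8 digits.
```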

Important information:

  • Passkey logins (WebAuthn) are inherently two-factor and won't prompt for additional verification

  • Team-scoped SAML SSO logins delegate authentication responsibility to your identity provider (IdP) and won't require an additional factor within Vercel

Visit your account settings to enable 2FA today, or check out our documentation to learn more.

Read more

Enric Pallerols Meg Bird Bel Curcio
https://vercel.com/changelog/cve-2025-30218 CVE-2025-30218 2025-04-02T13:00:00.000Z

In the process of remediating CVE-2025-29927, we examined other possible exploits of Middleware. We independently verified this low-severity vulnerability in parallel with two reports from independent researchers.

Summary

To mitigate CVE-2025-29927, Next.js validated the x-middleware-subrequest-id which persisted across multiple incoming requests:

However, this subrequest ID is sent with all requests, even if the destination is not the same host as the Next.js application.

Initiating a fetch request to a third-party within Middleware will send the x-middleware-subrequest-id to that third party.

Impact

While exploitation of this vulnerability is unlikely, since an attacker would need control of the third party, we want to be proactive. We were already planning to remove this recursion-prevention logic from Middleware—it was not carried over in the newer Middleware work supporting the Node.js runtime—and this disclosure expedited our efforts to bring parity between runtimes.

Vercel customers are protected with mitigations already implemented within our platform environment. We still encourage teams to update to the latest Next.js patch version or their chosen backport. Other infrastructure providers which host Next.js applications are not impacted by this, as it is specific to Vercel's implementation of recursion protection.

Remediation

This advisory was published in alignment with our new internal process for disclosing vulnerabilities in OSS packages, based on our postmortem of CVE-2025-29927. We’ve patched 15.x and offered backports for versions 12.x through 14.x, making an exception to our newly published LTS policy.

We’ve also worked proactively with new partners to Next.js for early disclosure. If you are an infrastructure provider and want to work with us, please email [email protected].

Credit

Thank you to Jinseo Kim (kjsman) and ryotak for the responsible disclosure. These researchers were awarded as part of our bug bounty program.

Read more

Ty Sbano
https://vercel.com/blog/the-no-nonsense-guide-to-composable-commerce The no-nonsense guide to composable commerce 2025-04-01T13:00:00.000Z

Composable commerce projects frequently become overly complex, leading to missed objectives and unnecessary costs. At Vercel, we take a no-nonsense approach to composable commerce that's solely focused on business outcomes. Architecture should serve the business, not the other way around. Ivory tower architectures disconnected from clear business goals inevitably lead to projects plagued by runaway costs. Here are five truths we stand by when it comes to composable commerce:

Read more

Malte Ubl
https://vercel.com/changelog/attack-challenge-mode-now-allows-verified-bots-and-vercel-cron-jobs Attack Challenge Mode now allows verified bots and Vercel cron jobs 2025-04-01T13:00:00.000Z

Verified webhook providers—including Stripe and PayPal—are now automatically allowed in Attack Challenge Mode, ensuring uninterrupted payment processing. Well-behaved bots from major search engines, such as Googlebot, and analytics platforms are also supported.

Vercel Cron Jobs are now exempt from challenges when running in the same account. Like other trusted internal traffic, they bypass Attack Challenge Mode automatically.

To block specific known bots, create a custom rule that matches their User Agent. Known bots are validated to be authentic and cannot be spoofed to bypass Attack Challenge Mode.

Learn more about Attack Challenge Mode and how Vercel maintains its directory of legitimate bots.

Read more

Malavika Tadeusz Sage Abraham Adrien Thebo Casey Gowrie Joe Haddad
https://vercel.com/changelog/yarn-2-dependency-caching-now-supported Yarn 2+ dependency caching now supported 2025-03-31T13:00:00.000Z

Vercel now caches dependencies for projects using Yarn 2 and newer, reducing install times and improving build performance. Previously, caching was only supported for npm, pnpm, Bun, and Yarn 1.

To disable caching, set the environment variable VERCEL_FORCE_NO_BUILD_CACHE with a value of 1 in your project settings.

If you're using Yarn 4, enable Corepack, as recommended by Yarn.

Visit the Build Cache documentation to learn more.

Read more

Austin Merrick
https://vercel.com/changelog/flags-sdk-3-2 Flags SDK 3.2 2025-03-31T13:00:00.000Z

The Flags SDK 3.2 release adds support for precomputed feature flags in SvelteKit, making it easier to experiment on marketing pages while keeping them fast and avoiding layout shift.

Precomputed flags evaluate in Edge Middleware to decide which variant of a page to show. This keeps pages static, resulting in low global latency as static variants can be served through the Edge Network.

Precompute handles the combinatorial explosion of using multiple feature flags statically. Generate different variants of a page at build time, rely on Incremental Static Regeneration to build only the specific combinations requested on demand, and more.
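To make the combinatorics concrete, here is a small illustrative calculation (not part of the Flags SDK API):

```typescript
// With n independent boolean flags, a fully static approach would need
// 2^n page variants.
function variantCount(flagCount: number): number {
  return 2 ** flagCount;
}

// Four flags already mean 16 variants; precomputing the flag decision in
// Edge Middleware and letting ISR build combinations on demand avoids
// generating all of them up front.
```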

We also improved the Flags SDK documentation by splitting it across different frameworks and explicitly listing all providers that have adapters for the Flags SDK.

Learn more about the Flags SDK with SvelteKit and the precompute pattern.

Read more

Simon Holthausen Dominik Ferber
https://vercel.com/blog/postmortem-on-next-js-middleware-bypass Postmortem on Next.js Middleware bypass 2025-03-25T13:00:00.000Z

Last week, we published CVE-2025-29927 and patched a critical severity vulnerability in Next.js. Here’s our post-incident analysis and next steps.

Read more

Ty Sbano
https://vercel.com/changelog/vercel-firewall-proactively-protects-against-vulnerability-with-middleware Protection against Next.js CVE-2025-29927 2025-03-22T13:00:00.000Z

A security vulnerability in Next.js was responsibly disclosed, which allows malicious actors to bypass authorization in Middleware when targeting the x-middleware-subrequest header.

Vercel customers are not affected. We still recommend updating to the patched versions. Learn more about CVE-2025-29927.

Read more

Aaron Brown
https://vercel.com/blog/ai-sdk-4-2 AI SDK 4.2 2025-03-21T13:00:00.000Z

The AI SDK is an open-source toolkit for building AI applications with JavaScript and TypeScript. Its unified provider API allows you to use any language model and enables powerful UI integrations into leading web frameworks such as Next.js and Svelte.

Read more

Lars Grammel Jared Palmer Nico Albanese
https://vercel.com/changelog/flags-sdk-now-supports-openfeature Flags SDK now supports OpenFeature 2025-03-21T13:00:00.000Z

The Flags SDK adapter for OpenFeature allows using any Node.js OpenFeature provider with the Flags SDK. Pick from a wide range of flag providers, while benefiting from the Flag SDK's tight integration into Next.js and SvelteKit.

OpenFeature is an open specification that provides a vendor-agnostic, community-driven API for feature flagging that works with your favorite feature flag management tool or in-house solution. OpenFeature exposes various providers through a unified API.

The Flags SDK sits between your application and the source of your flags, helping you follow best practices and keep your website fast. Use the Flags SDK OpenFeature adapter in your application to load feature flags from all compatible Node.js OpenFeature providers, including:

  • AB Tasty

  • Bucket

  • Cloudbees

  • Confidence by Spotify

  • ConfigCat

  • DevCycle

  • Environment Variables Provider

  • FeatBit

  • flagd

  • Flipt

  • GO Feature Flag

  • GrowthBook

  • Hypertune

  • Kameleoon

  • LaunchDarkly

  • PostHog

  • Split

View the OpenFeature adapter or clone the template to get started.

Read more

Dominik Ferber
https://vercel.com/blog/xai-and-vercel-partner-to-bring-zero-friction-ai-to-developers xAI and Vercel partner to bring zero-friction AI to developers 2025-03-20T13:00:00.000Z

Vercel provides the tools and infrastructure to build AI-native web applications. We're partnering with xAI to bring their powerful Grok models directly to Vercel projects through the Vercel Marketplace—and soon v0—with no additional signup required.

To help you get started, xAI is introducing a new free tier through Vercel to enable quick prototyping and experimentation. These Grok models now power our official Next.js AI chatbot template with the AI SDK.

This is a part of our ongoing effort to make using AI frictionless on Vercel.

Read more

Jared Palmer
https://vercel.com/changelog/xai-joins-the-vercel-marketplace xAI joins the Vercel Marketplace 2025-03-20T13:00:00.000Z

xAI's Grok models are now available in the Vercel Marketplace, making it easy to integrate conversational AI into your Vercel projects.

  • Get started with xAI's free plan—no additional signup required through the Marketplace

  • Access Grok's large language models (LLMs) directly from your Vercel projects

  • Simplify authentication and API key management through automatically configured environment variables

  • Pay only for what you use with integrated billing through Vercel

To get started, you can use the AI SDK xAI provider in your project:

Then, install the xAI Marketplace Integration with Vercel CLI (or from the dashboard):

Once you've accepted the terms, you'll be able to use Grok models from within your project, with no additional steps necessary.

To help you get started, we've also made a ready-to-deploy Next.js xAI starter template. To learn more about xAI on Vercel, read our announcement and the documentation.

Read more

Hedi Zandi Dima Voytenko Walter Korman Justin Kropp Fabio Benedetti Alex Martin René-Pier Deshaies-Gélinas Jake Uskoski
https://vercel.com/changelog/lockfile-aware-deployment-skipping-for-monorepos Lockfile-aware deployment skipping for monorepos 2025-03-20T13:00:00.000Z

Vercel now maps dependencies in your package manager’s lockfile to applications in your monorepo. Deployments only occur for applications using updated dependencies.

This feature is based on Turborepo's lockfile analysis, supporting the package managers listed as stable in Turborepo's Support Policy.

Previously, any change to the lockfile would redeploy all applications in the monorepo since it was treated as a shared input. Now, Vercel inspects the lockfile’s contents to determine which applications have dependency changes, further reducing potential queue times.

Learn more about skipping unaffected projects in monorepos.

Read more

Dimitri Mitropoulos Tom Knickman Chris Olszewski
https://vercel.com/changelog/vercel-firewall-protects-against-the-samlstorm-vulnerability Vercel Firewall protects against the SAMLStorm vulnerability 2025-03-18T13:00:00.000Z

We have deployed a proactive security update to the Vercel Firewall, protecting against a recently disclosed vulnerability in the xml-crypto package, dubbed SAMLStorm (CVE-2025-29774 and CVE-2025-29775). This vulnerability, which affects various SAML implementations, could allow attackers to bypass authentication mechanisms.

What This Means for Vercel Customers

  • Automatic protection with the Vercel Firewall: Vercel Firewall automatically mitigates this risk for you, but updating xml-crypto is still recommended

  • Update xml-crypto: If you're using xml-crypto package 6.0.0 and earlier, or a package that depends on xml-crypto, update to 6.0.1, 3.2.1, or 2.1.6 for the patched versions

  • We'll continue to monitor for new developments and provide updates as necessary

See the SAMLStorm report for more details on the vulnerability, and reach out to Vercel Support if you have questions.

Read more

Aaron Brown Casey Gowrie Sage Abraham
https://vercel.com/changelog/groq-fal-and-deepinfra-join-the-vercel-marketplace Groq, fal, and DeepInfra join the Vercel Marketplace 2025-03-18T13:00:00.000Z

The Vercel Marketplace now has an AI category for tools to integrate AI models and services directly into Vercel projects.

Groq, fal, and DeepInfra are available as first-party integrations, allowing users to:

  • Seamlessly connect and experiment with various AI models to power generative applications, embeddings, and more

  • Deploy and run inference with high-performance AI models, optimized for speed and efficiency

  • Leverage single sign-on and integrated billing through Vercel, including new prepaid options for better cost control

With prepaid plan options, users can now manage AI costs more predictably by purchasing credits upfront from a model provider. These credits can be used across any model offered by that provider.

Explore the new AI category, read the docs, and get started with Groq, fal, and DeepInfra on the Vercel Marketplace, available to users on all plans.

You can also explore the most popular models from each provider in the AI SDK playground.

Read more

Hedi Zandi Dima Voytenko Justin Kropp Fabio Benedetti Walter Korman Alex Martin Shu Uesugi René-Pier Deshaies-Gélinas Mitul Shah Jake Uskoski
https://vercel.com/changelog/reduced-log-drains-costs-with-smaller-billable-increments Reduced Log Drains costs with smaller billable increments 2025-03-17T13:00:00.000Z

We’ve updated Log Drains pricing on all Pro and Enterprise plans, reducing the charge increments.

Data transferred for Log Drains will be billed at $0.50 per 1GB, instead of the previous $10 per 20GB, providing more precise usage tracking and better cost efficiency.
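As a hedged illustration of why the smaller increment lowers costs, assuming usage rounds up to the billable increment (the per-GB rate itself is unchanged):

```typescript
// Cost under a given billable increment: usage is rounded up to whole
// increments, each charged at the increment's price.
function logDrainsCost(
  gbTransferred: number,
  incrementGb: number,
  pricePerIncrement: number
): number {
  return Math.ceil(gbTransferred / incrementGb) * pricePerIncrement;
}

// 25 GB of log data in a billing cycle:
const oldCost = logDrainsCost(25, 20, 10); // 2 increments x $10 = $20.00
const newCost = logDrainsCost(25, 1, 0.5); // 25 increments x $0.50 = $12.50
```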

Learn more about Log Drains.

Read more

Harpreet Arora Andrew Barba Chris Widmaier
https://vercel.com/changelog/vercel-marketplace-integrations-now-available-in-v0 Vercel Marketplace integrations now available in v0 2025-03-14T13:00:00.000Z

Users of v0—our collaborative AI assistant used to design, iterate, and scale full-stack applications—can now leverage integrations from the Vercel Marketplace, starting with Upstash, Neon, and Supabase.

Install directly from the project sidebar or within v0’s chat interface. When added, these integrations redirect you to the Vercel Marketplace where you can configure environment variables, available to both Vercel and v0.

Explore an example generation.

Read more

Max Leiter Ishaan Dey Aryaman Khandelwal
https://vercel.com/changelog/faster-domain-aliasing-for-large-scale-multi-tenant-applications Faster domain aliasing for large-scale multi-tenant applications 2025-03-14T13:00:00.000Z

Bulk aliasing for multi-tenant applications now runs significantly faster, reducing total aliasing time by up to 95%.

Multi-tenant applications on Vercel let a single project serve many customers behind the scenes. These applications are often fronted by hundreds or thousands of domains. Previously, aliasing—the process of pointing a domain to a different deployment—was a slow process that added significant overhead to deployments.

This optimization is now live for all customers and has led to dramatic improvements, like:

  • App with 13,254 domains: ~10min → 28 seconds

  • App with 23,743 domains: 8min 37secs → 26 seconds

Learn more about multi-tenant applications on Vercel.

Read more

Mark Glagola
https://vercel.com/blog/jeanne-dewitt-grosser-joins-vercel-as-coo Jeanne DeWitt Grosser joins Vercel as COO 2025-03-13T13:00:00.000Z

When I started Vercel, my vision was simple: make building for the web more accessible and more powerful. That belief has fueled Vercel’s growth, empowering developers to bring their biggest ideas to life.

Today, we’re welcoming Jeanne DeWitt Grosser, former Chief Business Officer at Stripe, as Vercel’s Chief Operating Officer to help further this mission. Vercel is building the foundation to power the next billion developers. Achieving this vision requires strong leadership and operational excellence. As COO, Jeanne will lead our go-to-market function.

Read more

Guillermo Rauch
https://vercel.com/blog/personalization-strategies-that-power-ecommerce-growth Personalization strategies that power ecommerce growth 2025-03-07T13:00:00.000Z

Personalization works best when it’s intentional. Rushing into it without the right approach can lead to higher costs, slower performance, and poor user experience. The key is to implement incrementally, with the right tools, while maintaining performance.

When personalization is implemented effectively, it drives real business results, returning $20 for every $1 spent and driving 40% more revenue.

Let's look at what personalization is, how to implement it correctly, and why Next.js and Vercel achieve optimal outcomes.

Read more

Collier Kirkland
https://vercel.com/changelog/increased-hobby-usage-limits-for-image-optimization Increased Hobby usage limits for Image Optimization 2025-03-07T13:00:00.000Z

We've increased Image Optimization included usage for Hobby teams:

  • Image Transformations: from 3K to 5K per month

  • Image Cache Reads: from 180K to 300K per month

  • Image Cache Writes: from 60K to 100K per month

Learn more about Image Optimization pricing and its recent price reduction.

Read more

Steven Salat Harpreet Arora
https://vercel.com/changelog/overview-page-in-observability Overview page in Observability 2025-03-06T13:00:00.000Z

Vercel Observability now includes an overview page that provides a high-level view of your application's performance.

This new dashboard aggregates key metrics from Edge Requests, Fast Data Transfer, and Vercel Functions, giving you instant insights into request and data transfer volumes, as well as function performance.

Each metric also serves as a starting point for deeper analysis, with one-click access to their dedicated dashboards for more detailed insights.

Try it in your Observability dashboard.

Read more

Tobias Lins Timo Lins
https://vercel.com/changelog/vercel-firewall-rule-builder-now-supports-or-for-rule-condition-groups Vercel Firewall rule builder now supports `OR` for rule condition groups 2025-03-05T13:00:00.000Z

The Vercel Firewall now supports using an OR operator to link condition groups within a custom WAF rule.

Previously, customers could only use an AND operator to join condition groups. This update adds support for OR, allowing customers to create more complex WAF rules.

Learn more about the Vercel Firewall or navigate to your Firewall tab to customize rules.

Read more

Yanick Bélanger Sage Abraham Malavika Tadeusz
https://vercel.com/blog/how-fluid-compute-works-on-vercel How Fluid compute works on Vercel 2025-03-03T13:00:00.000Z

Fluid compute is Vercel’s next-generation compute model designed to handle modern workloads with real-time scaling, cost efficiency, and minimal overhead. Traditional serverless architectures optimize for fast execution, but struggle with requests that spend significant time waiting on external models or APIs, leading to wasted compute.

To address these inefficiencies, Fluid compute dynamically adjusts to traffic demands, reusing existing resources before provisioning new ones. At the center of Fluid is Vercel Functions router, which orchestrates function execution to minimize cold starts, maximize concurrency, and optimize resource usage. It dynamically routes invocations to pre-warmed or active instances, ensuring low-latency execution.

By efficiently managing compute allocation, the router prevents unnecessary cold starts and scales capacity only when needed. Let's look at how it intelligently manages function execution.

Read more

Mariano Cocirio Collier Kirkland
https://vercel.com/blog/using-the-ai-sdk-to-build-sitecore-streams-ai-powered-brand-aware-assistant Using the AI SDK to build Sitecore Stream's AI-powered brand aware assistant 2025-03-03T13:00:00.000Z

Sitecore—a leading digital experience platform—wanted to create a transformative AI tool that would help marketers connect more deeply with their brand assets, driving both consistency and creativity. Using the AI SDK, they launched Sitecore Stream—a dynamic, AI-powered brand assistant that empowers marketers to interact with their brand content in a visually interactive and conversational way.

Read more

Alli Pope
https://vercel.com/changelog/automatic-pnpm-v10-support Automatic pnpm v10 support 2025-02-28T13:00:00.000Z

Vercel now supports pnpm v10.

New projects whose pnpm-lock.yaml file specifies lockfileVersion: '9.0' will automatically use pnpm v10 for Install and Build Commands. Existing projects will continue to use pnpm v9 for backwards compatibility, since pnpm v9 also produces lockfileVersion: '9.0'.

Check your build logs to see which version a deployment uses. If you'd like to manually upgrade or downgrade your version, use Corepack.
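Corepack pins the package manager through the packageManager field in package.json; a minimal sketch (the version number is illustrative):

```json
{
  "packageManager": "pnpm@10.4.1"
}
```

With this field set, Vercel and Corepack-enabled local environments use the pinned version instead of relying on the lockfile heuristic.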

Visit the package managers documentation to learn more.

Read more

Austin Merrick Sean Massa
https://vercel.com/changelog/improvements-to-vercel-firewall-system-bypass-rules Improvements to Vercel Firewall system bypass rules 2025-02-28T13:00:00.000Z

System bypass rules allow Pro and Enterprise customers to configure firewall rules to skip Vercel system mitigations, including DDoS protection, for specific IPs and CIDR ranges. Although we strongly recommend against disabling protections, customers—particularly ones that deploy a proxy in front of Vercel—may experience traffic issues that can be mitigated by deploying system bypass rules.

Improvements to the system bypass rules give customers additional control over how the rules are deployed, including:

  • Expanded support beyond production domains to preview domains

  • Added support for single domain rules for preview deployment URLs and aliases

  • Expanded project-scoped bypass rules to include all domains connected to a project

  • Increased limits for system bypass rules for Pro to 25 and Enterprise to 100 (from 3 and 5 respectively)

Learn more about the Vercel Firewall.

Read more

Sage Abraham
https://vercel.com/changelog/fast-data-transfer-for-rewrites-between-a-teams-projects-is-now-free Fast Data Transfer for rewrites between your team's projects is now free 2025-02-27T13:00:00.000Z

External rewrites between projects within the same team now use Fast Data Transfer only for the destination request. This change makes Fast Data Transfer for the original request free.

Commonly used as a reverse proxy or for microfrontend architectures, rewrites can be configured in vercel.json, middleware, or next.config.ts to route requests between the same or separate Vercel projects without changing the URL shown to the user.
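As a sketch, a vercel.json rewrite that proxies a path to another project in the same team might look like this (the destination hostname is hypothetical):

```json
{
  "rewrites": [
    {
      "source": "/docs/:path*",
      "destination": "https://docs-project.vercel.app/docs/:path*"
    }
  ]
}
```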

Usage for external rewrites to the same team:

  • Fast Data Transfer for the original and destination request have been optimized and consolidated into a single stream, reducing overall transfer.

  • Each external rewrite triggers a full request lifecycle, including routing and Web Application Firewall checks, ensuring security policies are enforced per project, and counts as a separate Edge Request.

Learn about rewrites and monitor your Fast Data Transfer usage and observability.

Read more

Mark Knichel Harpreet Arora Malavika Tadeusz
https://vercel.com/changelog/statsig-joins-the-vercel-marketplace Statsig joins the Vercel Marketplace 2025-02-27T13:00:00.000Z

The Vercel Marketplace now has an Experimentation category to allow developers to work with feature flagging and experimentation providers in Vercel projects.

Statsig—a modern feature management, experimentation, and analytics platform—is now available as a first-party integration in this new category, so users can:

  • Connect Statsig with your Vercel projects directly from the Vercel Marketplace

  • Leverage integrated billing through Vercel

  • Sync your Statsig experiments into Edge Config for ultra-low latency

  • Manage and roll out features progressively, run A/B tests, and track real-time results

Additionally, you can use the Flags SDK to load experiments and flags from Statsig using the newly released @flags-sdk/statsig provider.

Explore the template or get started with Statsig on the Vercel Marketplace, available to users on all plans.

Read more

Hedi Zandi Dominik Ferber Aaron Morris Andy Schneider Fabio Benedetti Chris Widmaier Justin Kropp
https://vercel.com/changelog/ip-address-details-added-in-the-vercel-firewall-dashboard IP address details added in the Vercel Firewall dashboard 2025-02-27T13:00:00.000Z

The Vercel Firewall dashboard now displays enriched IP address data, including the autonomous system (AS) name, AS number (ASN), and geolocation on hover.

This information helps identify the origin of an attack, determine the owner of an IP address, and create targeted custom rules to block malicious traffic.

Learn more about the Vercel Firewall.

Read more

Manuel Muñoz Solera Yanick Bélanger Malavika Tadeusz
https://vercel.com/changelog/middleware-now-supports-node-js Middleware now supports Node.js 2025-02-26T13:00:00.000Z

Middleware support for the Node.js runtime is now available, providing full Node.js support for authentication, personalization, and more—using familiar APIs.

Middleware continues to be deployed globally on Vercel, regardless of the runtime used. We are first releasing support for Node.js Middleware in Next.js 15.2.

This experimental feature requires the Next.js canary channel. Upgrade to next@canary and enable the nodejs experimental flag in your config to use it:

You must also specify the Node.js runtime in your middleware file:
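The two steps above can be sketched as follows; the experimental flag name is an assumption based on the Next.js 15.2 canary release and may change before the feature is stable:

```typescript
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    // Assumed flag name for enabling Node.js middleware on next@canary
    nodeMiddleware: true,
  },
};

export default nextConfig;

// middleware.ts
import { NextResponse } from "next/server";

export function middleware() {
  return NextResponse.next();
}

export const config = {
  runtime: "nodejs", // opt this middleware into the Node.js runtime
};
```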

Deploy now with Next.js 15.2.

Read more

Gal Schlezinger JJ Kasper Seiya Nuta Mariano Cocirio Javi Velasco
https://vercel.com/changelog/granular-branch-matching-for-git-configuration-in-vercel-json Granular branch matching for Git configuration in vercel.json 2025-02-25T13:00:00.000Z

Vercel now supports glob patterns (like testing-*) in the git.deploymentEnabled field, giving you more control over branch deployments.

Previously, you could disable deployments for specific branches by explicitly naming them. Now, you can use patterns to match multiple branches at once.

For example, the configuration below prevents deployments on Vercel if the branch begins with internal-.
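A configuration matching that description might look like this sketch, following the documented vercel.json git.deploymentEnabled schema (glob pattern keys mapping to false disable deployments for matching branches):

```json
{
  "git": {
    "deploymentEnabled": {
      "internal-*": false
    }
  }
}
```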

Learn more about Git configuration.

Read more

Tom Knickman
https://vercel.com/changelog/changes-to-supported-tld-registrations Changes to supported TLD registrations 2025-02-25T13:00:00.000Z

We’ve updated our list of supported Top-Level Domains (TLDs) registrations, adding new options and removing select ones as we refine our domain offerings.

Newly supported TLD registrations

We now support 66 additional TLDs, including:

  • Generic domains (e.g. .page, .food, and .hosting)

  • Professional domains (e.g. .lawyer, .phd, and .inc)

  • Lifestyle domains (e.g. .beauty, .living, and .lifestyle)

  • Interest-based domains (e.g. .guitars, .yachts, and .watches)

TLD registrations no longer supported

We have removed registration support for select TLDs, including:

  • Various country-code TLDs (ccTLDs, e.g. .at, .lu, .ma)

  • Regional TLDs (e.g. .berlin, .wales, .istanbul)

  • Multiple compound TLDs (e.g. .com.co, .org.pl, .co.nz)

Future plans for TLD registration support

We’re continuing to improve our domain offerings by:

  • Enhancing support for country-code TLDs (ccTLDs), with plans to reintroduce select options.

  • Expanding our portfolio with additional generic TLDs (gTLDs).

These changes take effect immediately. Existing registrations, renewals, and services for deprecated TLDs remain unaffected.

Read more

Dillon Mulroy Mark Glagola Meg Bird Rhys Sullivan Anders Hagström
https://vercel.com/changelog/one-click-linking-from-usage-to-vercel-observability-dashboards One-click linking from Usage to Vercel Observability dashboards 2025-02-25T13:00:00.000Z

Metrics on the Usage dashboard now offer one-click access to corresponding Vercel Observability dashboards, making it easier to dive deeper into team and project usage.

This new linking is available today for:

  • Vercel Functions

  • Edge Network

  • Image Optimization

  • Incremental Static Regeneration

  • Builds (when viewing per project)

Try it from your Usage dashboard and learn more about Vercel Observability.

Read more

Damien Simonin Feugas
https://vercel.com/blog/integrating-vercel-and-sitecore-for-2x-faster-development-times-and-111 Integrating Vercel and Sitecore for 2x faster development times and 111% higher conversions 2025-02-24T13:00:00.000Z

Avanade is the world’s leading Microsoft expert, delivering AI-driven solutions for cloud, data analytics, cybersecurity, and ERP.

The team at Avanade embarked on a comprehensive transformation, ultimately adopting Next.js, Vercel, and Sitecore XM Cloud to establish a modern, composable system capable of delivering highly responsive experiences to global clients. The migration replaced monolithic systems whose tightly coupled components and manual deployments had hurt performance and slowed feature rollouts.

Read more

Alli Pope
https://vercel.com/changelog/new-monorepo-projects-now-skip-builds-with-unchanged-code-by-default New monorepo projects now skip builds with unchanged code by default 2025-02-24T13:00:00.000Z

Previously, we added opt-in support for skipping builds with unchanged code in monorepos to reduce build queueing.

This behavior is now the default for new projects. To enable deployment skipping in an existing project, visit the Build and Deployment settings for the project.

Additionally, this setting has been added to the Vercel provider for Terraform in 2.10.0.

Learn more about skipping deployments.

Read more

Mitch Vostrez Matthew Binshtok
https://vercel.com/changelog/observability-for-edge-requests-now-includes-more-traffic-parameters Observability for Edge Requests now includes more traffic parameters 2025-02-24T13:00:00.000Z

We’ve expanded the Edge Request dashboard in Vercel Observability to show additional request data by:

  • User agent

  • IP address

  • JA4

  • Referrer

  • Hostname

Available on all plans, these insights help you monitor traffic patterns and identify potential threats, which you can address using Vercel Firewall.

Route-level data is available to Observability Plus customers.

View your Edge Request dashboard and learn more about Vercel Observability.

Read more

Fabio Benedetti Tobias Lins
https://vercel.com/blog/vercel-security-roundup-faster-defenses-and-better-visibility-for-your-apps Vercel security roundup: Faster defenses and better visibility for your apps 2025-02-21T13:00:00.000Z

Every second, Vercel blocks attacks before they reach your applications—keeping businesses online and developers focused on shipping, not security incidents.

Vercel’s security capabilities combine real-time DDoS mitigation, a powerful Web Application Firewall (WAF), and seamless SIEM integrations to provide always-on protection without added complexity.

Here’s what happened in the last quarter.

Read more

Liz Hurder
https://vercel.com/changelog/consolidated-build-and-deployment-settings Consolidated Build and Deployment settings 2025-02-21T13:00:00.000Z

We’ve simplified the Project Settings page, bringing all build customization options under a unified Builds and Deployment section.

Vercel framework-defined infrastructure automatically detects settings for many frontend frameworks, but you can still customize build options to fit your needs.

Learn more about project settings and how to configure a build.

Read more

Mariano Cocirio Balazs Varga Luke Phillips-Sheard
https://vercel.com/changelog/sync-projects-with-vercel-related-projects Sync projects with @vercel/related-projects 2025-02-20T13:00:00.000Z

The new @vercel/related-projects package helps sync deployment information across separate Vercel projects, ensuring your applications always reference the latest preview or production deployment URLs without manual updates or environment variable changes.

Previously, developers had to manually enter deployment URLs, manage connection strings, or use environment variables to keep the projects communicating effectively. Now, this data is automatically available and updated at both build and runtime.

For example, a monorepo containing:

  • A frontend Next.js project that fetches data from an API

  • A backend Express.js API project that serves the data

Related Projects can now ensure that each preview deployment of the frontend automatically references the corresponding preview deployment of the backend, avoiding the need for hardcoded values when testing changes that span both projects.

Related Projects are linked using a Vercel project ID. You can find your project ID in the project Settings page in the Vercel dashboard.

Learn more about linking related projects.

Read more

Tom Knickman Mark Knichel
https://vercel.com/changelog/npm-i-flags npm i flags 2025-02-20T13:00:00.000Z

The Flags SDK—our open source library for using feature flags in Next.js and SvelteKit applications—is now available under the new package name flags.

The new name signals our commitment to open source and the independence of the package from any specific entity or platform. The SDK’s framework-first approach aims to simplify usage, avoid client-side flag evaluation, and improve user experience by eliminating layout shifts.

We are working on adapters with partners like Statsig, Optimizely, and LaunchDarkly to ensure a seamless integration with the Flags SDK.

Until now, each provider established their own approach to using feature flags in frameworks like Next.js, which led to duplicate efforts across the industry and drift in implementations. Going forward, the Flags SDK will help all feature flag and experimentation providers benefit from its tight integration to frameworks, while retaining their unique capabilities.

If you are using @vercel/flags, update to version 3.1.1 and switch your imports and package.json entry to flags.

Learn more in our redesigned documentation and examples.

Read more

Dominik Ferber Aaron Morris Andy Schneider Mitul Shah Delba de Oliveira Manuel Muñoz Solera Chris Widmaier
https://vercel.com/changelog/improved-traffic-visibility-on-firewall-overview-page Improved traffic visibility on Firewall overview page 2025-02-20T13:00:00.000Z

The Vercel Firewall overview page now shows improved visibility into your traffic and the Firewall status. Navigate to your Firewall page to see:

  • Status of the system firewall

  • A warning banner if a reverse proxy is inhibiting Vercel's ability to protect your site

  • Tabbed view for easier traffic filtering

  • Rules displayed below the chart with better readability

The Vercel Firewall automatically mitigates DDoS attacks for all Vercel deployments. You can further secure your site with custom rules and IP blocking, and by turning on Attack Challenge Mode when under high-volume attacks.

Learn more about the Vercel Firewall.

Read more

Marco Cornacchia Yanick Bélanger Sage Abraham Malavika Tadeusz
https://vercel.com/changelog/new-observability-dashboard-for-image-optimization New Observability dashboard for Image Optimization 2025-02-19T13:00:00.000Z

Vercel Observability now includes a dedicated dashboard for Image Optimization, providing deeper insights into image transformations and efficiency.

This update follows the introduction of a new pricing model, and includes:

  • Transformation insights: View formats, quality settings, and width adjustments.

  • Optimization analysis: Identify high-frequency transformations to help inform caching strategies.

  • Bandwidth savings: Compare transformed images against their original sources to measure bandwidth reduction and efficiency.

  • Image-specific views: See all referrers and unique variants of an optimized image in one place.

This dashboard is available to customers on all plans and is compatible with both the new and legacy pricing models.

View your Image Optimization dashboard and learn more about new pricing changes and Image Optimization.

Read more

Ethan Shea Timo Lins
https://vercel.com/changelog/deployment-integration-actions-for-marketplace-integrations Deployment integration actions for Marketplace integrations 2025-02-19T13:00:00.000Z

Marketplace integration providers can now register integration actions for deployments, allowing for automated resource-side tasks such as database branching, environment variable overrides, and readiness checks.

When a user deploys a project with a connected Marketplace integration that has configured actions, the deployment pauses and waits for all integration actions to complete successfully. This ensures the deployed resources are properly set up before the deployment proceeds. Users will also receive helpful suggestions within the integration about which actions are available and should be executed.

Learn more about integration actions.

Read more

Dima Voytenko Hedi Zandi Justin Kropp
https://vercel.com/changelog/vercel-observability-for-functions-now-offers-at-a-glance-key-insights Observability for Vercel Functions now offers a quick-view of key insights 2025-02-18T13:00:00.000Z

Observability's Vercel Functions dashboard now shows quick-view tiles with key metrics, such as:

  • Active compute model, like Fluid compute, which enhances efficiency, minimizes cold starts, and optimizes performance

  • Compute saved with Fluid compute enabled

  • Average memory usage for your functions

  • P75 Time to First Byte (TTFB) for performance monitoring

  • Cold start frequency to track optimization impact

These insights are available for all plans.

Learn more about Observability and Fluid compute.

Read more

Timo Lins Tobias Lins
https://vercel.com/changelog/faster-transformations-and-reduced-pricing-for-image-optimization Faster transformations and reduced pricing for Image Optimization 2025-02-18T13:00:00.000Z

We’ve optimized our Image Optimization infrastructure, including:

  • 60% faster transformations

  • New, opt-in reduced pricing

Previously, usage was measured by the number of unique source images ($5 per 1K source images). You can now opt into usage based on transformations with regional pricing, starting from:

  • Image Transformations: $0.05 per 1K image transformations

  • Image Cache Reads: $0.40 per 1M cache read units

  • Image Cache Writes: $4.00 per 1M cache write units

This new pricing model is opt-in through your project settings.

  • There are no changes to existing customers using Image Optimization

  • New projects for existing customers will also have no changes

  • New customers will start on the new pricing today

Pro and self-serve Enterprise customers can view the projected cost difference when enabling in settings. All Enterprise customers can also reach out to their account team to discuss new pricing.

Hobby customers have been moved to the new model's included allotments.

Learn more about Image Optimization pricing.

Read more

Steven Salat Ethan Shea Agustin Falco Harpreet Arora Malavika Tadeusz Timo Lins
https://vercel.com/changelog/additional-options-for-sharing-deployments-externally Additional options for sharing deployments externally 2025-02-14T13:00:00.000Z

You can now share deployments with external collaborators. Previously, invitations, access requests, and shareable links were limited to the preview URL for a branch or custom aliases.

The share modal—accessible by selecting Share on a deployment page or from the Vercel Toolbar menu—now allows sharing the specific deployment you are on or the always up-to-date preview URL for the branch.

Read more about sharing deployments.

Read more

George Karagkiaouris Kit Foster Christopher Skillicorn
https://vercel.com/changelog/automated-dns-configuration-with-domain-connect Automated DNS configuration with Domain Connect 2025-02-14T13:00:00.000Z

Vercel now supports Domain Connect, an open standard that simplifies DNS configuration. With one click, you can set up your domain without manually copying DNS records—saving time and reducing errors.

Cloudflare-managed domains are supported today, with more providers coming soon.

To get started: Add a new domain to your Vercel project, and Vercel will detect if your domain qualifies for setup through Domain Connect, prompting you to proceed automatically or configure it manually.

We're also implementing Domain Connect as a DNS provider, enabling external services to configure Vercel Domains just as easily.

Learn more about Vercel domains.

Read more

Rhys Sullivan Meg Bird Mark Glagola Dillon Mulroy
https://vercel.com/changelog/support-for-react-router-v7 Support for React Router v7 2025-02-13T13:00:00.000Z

Vercel now supports React Router v7 applications when used as a framework:
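The configuration snippet referenced here likely enabled the Vercel preset; the sketch below assumes the @vercel/react-router package and its vercelPreset export (both package and export names are assumptions):

```typescript
// react-router.config.ts
import type { Config } from "@react-router/dev/config";
import { vercelPreset } from "@vercel/react-router/vite";

export default {
  ssr: true, // server-render on Vercel Functions
  presets: [vercelPreset()],
} satisfies Config;
```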

This includes support for server-rendered React Router applications using Vercel's Fluid compute. Further, the Vercel preset intelligently splits application bundles across Vercel Functions, and supports custom server entry points.

Deploy React Router to Vercel or learn more about React Router on Vercel.

Read more

Nathan Rajlich
https://vercel.com/blog/bridging-the-gap-between-design-and-code-with-v0 Bridging the gap between design and code with v0 2025-02-12T13:00:00.000Z

Speakeasy specializes in building developer-focused SDKs—to help developers build their products. They adopted v0 to bridge the workflow from design to code, using it to accelerate rapid prototyping and reduce implementation time.

Read more

Alli Pope
https://vercel.com/changelog/manage-multiple-vercel-function-regions-in-the-dashboard Manage multiple Vercel Function regions in the dashboard 2025-02-12T13:00:00.000Z

Pro and Enterprise plans can now select multiple regions for Vercel Functions directly from the dashboard. This update simplifies configuration by removing the need to define regions in vercel.json.

Multi-region support is available for all Vercel Functions and supports Vercel's implementation of Fluid compute, which encourages a dense global compute model that positions dynamic functions closer to your data.

Visit your project’s Settings tab to customize your regions or learn more about configuring regions for Vercel Functions.

Read more

Florentin Eckl Mariano Cocirio
https://vercel.com/changelog/redeploy-without-leaving-project-settings Redeploy without leaving project settings 2025-02-12T13:00:00.000Z

When updating project settings, such as environment variables, Vercel will now automatically prompt you to redeploy.

A toast notification will appear when you change any settings that require a redeploy to take effect. After clicking Redeploy, you can track the progress of your deployment.

Learn more about project settings.

Read more

Jhey Tompkins
https://vercel.com/changelog/split-tgz-is-now-the-default-cli-archive-deployment-behavior Split-tgz is now the default CLI archive deployment behavior 2025-02-11T13:00:00.000Z

Archive deployments are useful for deploying large projects with thousands of files from the CLI.

We previously released the split-tgz archive deployment as a new archive option: vercel deploy --archive=split-tgz. This new capability offered up to 30% faster uploads and avoided file upload size limits.

We’ve confirmed split-tgz’s stability and made it the default behavior for tgz. The separate split-tgz option is now deprecated, since its functionality and benefits now power the default tgz option.

Learn more about CLI archive deployments.

Read more

Austin Merrick Trek Glowacki Nathan Rajlich Jeff See
https://vercel.com/changelog/vercel-database-templates-now-support-any-marketplace-provider Vercel database templates now support any marketplace provider 2025-02-11T13:00:00.000Z

We’ve updated our database starter templates to support selecting any Postgres or Redis provider available in the Vercel Marketplace when deploying.

These templates are now provider-agnostic, allowing developers to seamlessly integrate alternative database and key-value store solutions while maintaining the same developer experience.

Check out the documentation to learn how to deploy your own.

Read more

Fabio Benedetti Hedi Zandi
https://vercel.com/changelog/enhanced-firewall-data-now-available-in-monitoring Enhanced firewall data now available in Monitoring 2025-02-07T13:00:00.000Z

Monitoring now has better firewall support, offering insights into your firewall rules:

  • Filter blocked requests by actions and custom firewall rules

  • More fields are now displayed when available:

    • IP Country

    • User Agent

    • Route

    • Request Path

    • Region

These metrics are available for all Observability Plus and Monitoring customers.

Monitoring recently became part of Observability Plus.

Read more

Ethan Shea
https://vercel.com/changelog/faster-deploy-times-for-large-builds Faster deploy times for large builds 2025-02-06T13:00:00.000Z

We optimized the deploy step of the build process to reduce build times by 2.8 seconds at P99, 760ms at P75, and 410ms on average.

For customers with a large number of Vercel Functions (100+), builds are more than 50 seconds faster. Several customers have time savings of over 2 minutes.

Check out the documentation to learn more about builds.

Read more

Andrew Healey
https://vercel.com/changelog/new-execution-duration-limit-for-edge-functions New execution duration limit for Edge Functions 2025-02-06T13:00:00.000Z

Starting on March 1st, 2025, we will begin the rollout of a new execution duration limit of 300 seconds for Vercel Functions using the Edge runtime.

Previously, Edge Functions had no fixed timeout for streaming responses, leading to unpredictable behavior based on system resources and traffic. With this update, Edge Functions will consistently allow streaming responses for up to 300 seconds, including post-response tasks like waitUntil().

Learn more about Vercel Functions using the Edge runtime.

Read more

Shohei Maeda Kiko Beats
https://vercel.com/changelog/deployment-pages-now-display-key-configuration-settings Deployment pages now display key configuration settings 2025-02-05T13:00:00.000Z

Project Overview and Deployment Details pages now include a Deployment Configuration section under the deployment card.

Expand to view snapshots of Fluid Compute, Function CPU, Deployment Protection, Skew Protection, and Secure Compute settings.

This section is available for all new deployments moving forward. It will appear on your Project Overview page after your next production deployment.

Read more

Michael Wenzel Manuel Muñoz Solera Sam Saliba Gary Borton Henry Heffernan
https://vercel.com/changelog/dark-mode-expanded-search-and-more-in-grep Dark mode, expanded search, and more repositories in Grep 2025-02-05T13:00:00.000Z

We've made improvements to Grep, our tool for quick code search.

  • You can now search across 1,000,000 public git repositories

  • The app has been rebuilt with Next.js 15, improving performance with Partial Prerendering

  • Support for dark mode

Try Grep today.

Read more

Dan Fox Ethan Niser
https://vercel.com/blog/introducing-fluid-compute Introducing Fluid compute 2025-02-04T13:00:00.000Z

While dedicated servers provide efficiency and always-on availability, they often lead to over-provisioning, scaling challenges, and operational overhead. Serverless computing improves this with auto-scaling and pay-as-you-go pricing, but can suffer from cold starts and inefficient use of idle time.

It’s time for a new, balanced approach. Fluid compute evolves beyond serverless, trading single-invocation functions for high-performance mini-servers. This model has helped thousands of early adopters maximize resource efficiency, minimize cold starts, and reduce compute costs by up to 85%.

Read more

Mariano Cocirio
https://vercel.com/changelog/vercel-functions-can-now-run-on-fluid-compute Vercel Functions can now run on Fluid compute 2025-02-04T13:00:00.000Z

Vercel Functions can now run on Fluid compute, bringing improvements in efficiency, scalability, and cost effectiveness. Fluid is now available for all plans.

What’s New

  • Optimized concurrency: Functions can handle multiple requests per instance, reducing idle time and lowering compute costs by up to 85% for high-concurrency workloads

  • Cold start protection: Fewer cold starts with smarter scaling and pre-warmed instances

  • Optimized scaling: Functions scale before instances, moving beyond the traditional 1:1 invocation-to-instance model

  • Extended function lifecycle: Use waitUntil to run background tasks after responding to the client

  • Runaway cost protection: Detects and stops infinite loops and excessive invocations

  • Multi-region execution: Requests are routed to the nearest of your selected compute regions for better performance

  • Node.js and Python support: No restrictions on native modules or standard libraries

Enable Fluid today or learn more in our blog and documentation.

Read more

Mariano Cocirio Dan Fein Tom Lienard Doug Parsons Florentin Eckl Javi Velasco Angela Zhang Mike Curtis Tiago Ventura Loureiro Nanda Syahrasyad
https://vercel.com/changelog/enterprise-teams-can-now-ship-faster-without-build-queues Enterprise teams can now ship faster without build queues 2025-01-31T13:00:00.000Z

On-demand concurrent builds automatically and dynamically scale builds, increasing build capacity and shipping velocity.

Starting today, new projects in Enterprise teams will use on-demand concurrency by default to eliminate build queue bottlenecks. You can turn this feature on for existing projects at any time, either for a single urgent build or by enabling it at the project level.

You are charged for on-demand concurrency based on the number of 10-minute build slots required for the builds to proceed, as explained in usage and limits.

Check out the documentation to learn more about on-demand concurrent builds.

Read more

Mariano Cocirio Janos Szathmary Andrew Healey Ali Smesseim Felix Haus Balazs Varga Luke Phillips-Sheard Marc Codina Segura
https://vercel.com/blog/isr-on-vercel-is-now-faster-and-more-cost-efficient ISR on Vercel is now faster and more cost-efficient 2025-01-30T13:00:00.000Z

When Next.js introduced Incremental Static Regeneration (ISR) in 2020, it changed how developers build for the web. ISR combines the speed of static generation with the flexibility of dynamic rendering, enabling sites to update content without requiring full rebuilds.

Vercel has supported ISR from day one, making it easy for teams at The Washington Post, Algolia, and Sonos to serve fresh content while keeping page loads fast.

Read more

Luba Kravchenko Malavika Tadeusz Greta Workman
https://vercel.com/changelog/incremental-static-regeneration-is-now-faster-and-cheaper Incremental Static Regeneration (ISR) is now faster and more cost-efficient 2025-01-30T13:00:00.000Z

Incremental Static Regeneration (ISR) enables you to update content in the background without needing to redeploy your application. You can scale CMS or content-backed applications to millions of pages without slow builds.

We've optimized our infrastructure to make ISR faster and more cost-efficient:

  • Smaller writes: ISR cache writes are now compressed by default, using fewer ISR write and read units (8KB chunks) per update and lowering Fast Origin Transfer (FOT) costs. Both reads and writes are now compressed.

  • Region-aware caching: The ISR cache is now available in all regions and automatically aligns with your functions' region. If your project spans multiple regions, the most cost-effective location is chosen automatically. This improves performance, especially for traffic outside North America, and regional pricing applies.

Redeploy your project to apply these updates or learn more about ISR.

Update: The rollout of this change completed on February 5th, 2025 around 8am PST.

Read more

Luba Kravchenko Kelly Davis Harpreet Arora
https://vercel.com/changelog/edge-function-metrics-now-available-in-monitoring Edge Function metrics now available in Monitoring 2025-01-30T13:00:00.000Z

Monitoring now includes three new metrics for Edge Functions, providing a comprehensive view of your Edge Function activity and performance.

These metrics are available for all Observability Plus and Monitoring customers.

Monitoring recently became part of Observability Plus.

Read more

Tobias Lins
https://vercel.com/changelog/filter-for-your-own-requests-in-logs Filter for your own requests in Logs 2025-01-29T13:00:00.000Z

You can now filter logs to display only requests made from your browser. This simplifies debugging by isolating your requests in high-traffic environments. It matches your IP address and User Agent to incoming requests.

Visit your project's Logs tab and toggle the user filter to get started or learn more about runtime logs.

Read more

Timo Lins Tobias Lins
https://vercel.com/changelog/clients-blocked-by-persistent-actions-now-receive-a-403-forbidden-response Clients blocked by persistent actions now receive a 403 Forbidden response 2025-01-28T13:00:00.000Z

Starting today, when the Vercel Web Application Firewall (WAF) blocks a client with a persistent action, it will respond with a 403 Forbidden status instead of failing silently. This change makes it clear that the connection is being intentionally denied.

Persistent actions in the WAF help reduce edge request load and stop malicious traffic earlier, cutting down unnecessary processing for your applications.

Learn more about persistent actions.

Read more

Casey Gowrie Sage Abraham
https://vercel.com/blog/working-with-figma-and-custom-design-systems-in-v0 Working with Figma and custom design systems in v0 2025-01-27T13:00:00.000Z

v0’s ability to import existing Figma files allows designers and developers to bridge the gap between design tools and AI-driven development. This feature extracts context from Figma files, along with any supplementary visuals, and passes them into v0's generation process.

Read more

Siddharth Sharma Alli Pope
https://vercel.com/blog/mitigating-denial-of-wallet-risks-with-vercel Mitigating Denial of Wallet risks with Vercel 2025-01-24T13:00:00.000Z

Unlike traditional cyberattacks that target code or infrastructure vulnerabilities, Denial of Wallet (DoW) attacks focus on draining a service's operational budget.

At Vercel, we're building controls and anomaly detection to help you defend against these threats and protect your applications.

Read more

Ty Sbano
https://vercel.com/changelog/project-settings-are-now-searchable Project settings are now searchable 2025-01-24T13:00:00.000Z

You can now search within project settings in the Vercel Dashboard, making it easier to quickly find a specific setting.

To get to your project settings:

  1. Select a project from your Team Overview page

  2. Select the Settings tab.

Learn more about project settings.

Read more

Kostyantyn Voytenko Christopher Skillicorn Gary Borton
https://vercel.com/changelog/firefox-extension-for-vercel-toolbar Firefox extension for Vercel Toolbar 2025-01-24T13:00:00.000Z

The Vercel Toolbar extension is now available for Firefox, in addition to Chrome.

With this extension you can use the Vercel Toolbar on your production deployments, set preferences for when the toolbar appears and activates, and drag and release to add a screenshot of a selected area to a comment.

Install the Firefox extension from the Firefox Browser Add-ons page to get started, or visit our documentation to learn more about the Vercel Toolbar and browser extensions.

Read more

George Karagkiaouris Christopher Skillicorn
https://vercel.com/changelog/preview-firewall-status-and-web-analytics-from-project-overview Preview your site's Firewall status and Web Analytics from the Project Overview 2025-01-24T13:00:00.000Z

The project overview page now shows a preview of your production traffic and firewall status.

The Vercel Firewall automatically mitigates DDoS attacks for all Vercel deployments. You can further secure your site with custom rules and IP blocking, and by turning on Attack Challenge Mode when under high-volume attacks. On the project overview page you'll see the status of the firewall, requests blocked and challenged in the past 24 hours, and a warning if a reverse proxy is inhibiting Vercel's ability to protect your site.

Vercel Web Analytics gives you insight into your site's visitors and traffic. When the feature is enabled, you'll see your site's traffic on the project overview page.

Learn more about Vercel Web Analytics and Vercel Firewall.

Read more

Michael Wenzel Manuel Muñoz Solera Marco Cornacchia Sam Saliba
https://vercel.com/blog/vercel-acquires-tremor Vercel acquires Tremor to invest in open source React components 2025-01-22T13:00:00.000Z

Tremor is an open source library built on top of React, Tailwind CSS, and Radix. It consists of 35 unique components and 300 blocks that can be copy-pasted to build visually rich and interactive dashboards. The Tremor community has seen impressive growth with over 16,000 stars, 300,000 monthly downloads, and 5,500,000 installs to date.

Today, Tremor and its cofounders Severin Landolt and Christopher Kindl are joining Vercel’s Design Engineering team where they'll be working on UI components for the Vercel Dashboard, v0, and more.

This acquisition strengthens our commitment to open source and providing developers with the best tools for building exceptional user interfaces.

Read more

Tom Occhino
https://vercel.com/changelog/self-serve-domain-renewals-and-redemptions-now-available Self-serve domain renewals and redemptions now available 2025-01-21T13:00:00.000Z

Self-serve domain renewals and redemptions are now available in the Vercel dashboard.

Previously limited to automatic renewals to ensure uninterrupted service, domains can now be manually renewed at your convenience with just a few clicks, directly from the dashboard.

Domain renewals

To renew your domains directly:

  1. Navigate to your team's Domains tab

  2. Click the three dots next to the domain you want to renew

  3. Select Renew

Additionally, you may click the Renew Domain button on any domain detail page.

Domain redemptions

For expired domains with a redemption period (typically 30 days), you can now recover them directly in the dashboard:

  • Start the redemption process on the domain detail page

  • A redemption fee will be applied, depending on the domain registry

Read our domains renewal documentation for more information.

Read more

Dillon Mulroy Rhys Sullivan Meg Bird Anders Hagström
https://vercel.com/blog/ai-sdk-4-1 AI SDK 4.1 2025-01-20T13:00:00.000Z

The AI SDK is an open-source toolkit for building AI applications with JavaScript and TypeScript. Its unified provider API allows you to use any language model and enables powerful UI integrations into leading web frameworks such as Next.js and Svelte.

Read more

Lars Grammel Jared Palmer Nico Albanese Walter Korman
https://vercel.com/changelog/claim-deployments Claim Deployments now available for fast and secure deployment transfers 2025-01-20T13:00:00.000Z

Multi-tenant platforms, like AI agents and visual building apps, can now easily transfer deployment ownership directly to users or teams.

How it works:

  • Deployment creation: Any third party can create a new deployment using the Vercel CLI or the Vercel API: POST /files and POST /deployments

  • Initiate transfer: A Vercel API endpoint is then used to generate a claim-deployment URL for that deployment.

  • User confirms their team: The user selects their Vercel team and completes the transfer.

Check out our documentation to learn more.

Read more

Ana Jovanova Bel Curcio Christopher Skillicorn Marc Greenstock
https://vercel.com/changelog/node-js-18-is-being-deprecated Node.js 18 is being deprecated on September 1, 2025 2025-01-16T13:00:00.000Z

Following the Node.js 18 end of life on April 30, 2025, we are deprecating Node.js 18 for Builds and Functions on September 1, 2025.

Will my existing deployments be affected?

No, existing deployments with Serverless Functions will not be affected.

When will I no longer be able to use Node.js 18?

On September 1, 2025, Node.js 18 will be disabled in project settings. Existing projects using Node.js 18 for Functions will display an error when a new deployment is created.

How can I upgrade my Node.js version?

You can configure your Node.js version in project settings or through the engines field in package.json.
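For example, the engines field in package.json can pin a newer runtime (the version value below is illustrative):

```json
{
  "engines": {
    "node": "22.x"
  }
}
```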

How can I see which of my projects are affected?

You can see which of your projects are affected by this deprecation with:
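One hedged way to check is to query the public Vercel REST API for projects pinned to Node.js 18. The endpoint path and the nodeVersion field below are assumptions based on the public projects API and may differ by API version:

```typescript
type ProjectsResponse = { projects: { name: string; nodeVersion?: string }[] };

// fetchImpl is injectable so this sketch can be exercised without network access.
async function findNode18Projects(
  token: string,
  fetchImpl: typeof fetch = fetch,
): Promise<string[]> {
  const res = await fetchImpl('https://api.vercel.com/v9/projects', {
    headers: { Authorization: `Bearer ${token}` },
  });
  const data = (await res.json()) as ProjectsResponse;
  // Keep only projects whose Functions are pinned to Node.js 18
  return data.projects
    .filter((p) => p.nodeVersion === '18.x')
    .map((p) => p.name);
}
```

Pass a personal or team access token to list affected project names.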

Read more

Ali Smesseim Trek Glowacki
https://vercel.com/changelog/cli-archive-deployments-are-now-up-to-30-faster-with-split-tgz-archive CLI archive deployments are now up to 30% faster with split-tgz archive option 2025-01-16T13:00:00.000Z

The archive option was introduced for CLI deployments hitting rate limits like the limit on the maximum amount of files. Prebuilt deployments commonly use archive uploads as they generate thousands of files at build time.

Previously, archive deployments were always compressed into one large file with the only existing --archive option, tgz. Deployments using tgz may hit the file size upload limit. Additionally, uploading one large archive file is slower than uploading multiple file parts.

The beta split-tgz format resolves these issues by splitting large archives into smaller parts. split-tgz avoids the static file upload limit and uploads large prebuilt projects up to 30% faster.

Example usage: vercel deploy --archive=split-tgz

Learn more about CLI deployments.

Read more

Austin Merrick Trek Glowacki Nathan Rajlich Jeff See
https://vercel.com/changelog/audit-logs-with-siem-integration-now-generally-available Audit logs with SIEM integration now generally available 2025-01-16T13:00:00.000Z

Audit logs are now generally available for Enterprise customers, and can be integrated with SIEMs for real-time export.

Audit logs provide an auditable trail of key events and changes within your Vercel team. With an immutable record, you can track who performed an action, what was done, and when—with access to up to 90 days of historical data.

Enterprise customers can also configure a real-time audit log stream to their existing Security Information and Event Management (SIEM) tools, such as Datadog or Splunk. Additionally, logs can be sent to durable object storage solutions like Amazon S3, Google Cloud Storage, or a custom HTTP POST endpoint.

For more details, check out the Audit Log documentation or contact your account manager.

Read more

Javier Bórquez Luka Hartwig Dom Busser Harpreet Arora
https://vercel.com/changelog/buns-text-lockfile-is-now-supported-with-zero-configuration Bun's text lockfile is now supported with zero configuration 2025-01-16T13:00:00.000Z

Projects using Bun's new text bun.lock lockfile can now be deployed to Vercel with zero configuration.

While Vercel already supports Bun's binary bun.lockb lockfile, Bun v1.1.39 introduces a new text-based lockfile with bun install --save-text-lockfile. Bun plans to make this the default in v1.2.

Learn more about package managers supported by Vercel.

Read more

Austin Merrick Sean Massa Trek Glowacki
https://vercel.com/changelog/mux-joins-the-vercel-marketplace Mux joins the Vercel Marketplace 2025-01-16T13:00:00.000Z

The Vercel Marketplace has a new Video category for tools that allow developers to integrate video functionality into any project.

The first integration in the Video category is Mux, an API-first platform for video. With the first-party Mux integration, Vercel users can:

  • Add video streaming and playback capabilities with minimal setup

  • Access real-time video performance data and analytics

  • Leverage integrated billing through Vercel

Get started with Mux on the Vercel Marketplace, available to customers on all plans.

Read more

Hedi Zandi Fabio Benedetti Justin Kropp
https://vercel.com/changelog/flags-sdk-3-0 Flags SDK 3.0 2025-01-16T13:00:00.000Z

The Flags SDK is a library that gives developers tools to use feature flags in Next.js and SvelteKit applications.

The Flags SDK version 3.0 adds:

  • Pages Router support, so feature flags can be used in both the App Router and Pages Router

  • New adapters architecture that allows the SDK to integrate with various data sources and feature flag providers

  • A new identify concept that allows you to establish an evaluation context for your feature flags. With this addition, you can tailor flags and experiments for individual users or groups

With this release, the repository is now open source and under the MIT License, providing more transparency and allowing for community contributions and integrations.

Check out the new Flags SDK documentation with updated examples to learn more.
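The identify concept can be illustrated with a self-contained sketch. This mirrors the idea, not the SDK's actual API: an evaluation context is established once, and each flag's decide function receives that context to tailor its value per user or group.

```typescript
// Hypothetical standalone sketch of the identify/decide pattern.
// The real Flags SDK wires the context through framework adapters instead.
type Context = { userId: string; plan: 'free' | 'pro' };

function defineFlag<T>(options: { key: string; decide: (ctx: Context) => T }) {
  return (ctx: Context): T => options.decide(ctx);
}

const showNewDashboard = defineFlag({
  key: 'new-dashboard',
  // Tailor the flag to the evaluation context established by identify
  decide: (ctx) => ctx.plan === 'pro',
});

console.log(showNewDashboard({ userId: 'u1', plan: 'pro' }));  // true
console.log(showNewDashboard({ userId: 'u2', plan: 'free' })); // false
```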

Read more

Dominik Ferber Andy Schneider Aaron Morris
https://vercel.com/changelog/upgraded-pci-dss-version-3-2-1-to-4-0 Upgraded PCI DSS version 3.2.1 to 4.0 2025-01-15T13:00:00.000Z

We have completed our Self-Assessment Questionnaire Attestation of Compliance (SAQ-D AOC) for Service Providers under PCI DSS v4.0.

A copy of our PCI DSS compliance documentation can be obtained through our Trust Center. For additional information about our SAQ-D AOC report or Responsibility Matrix, please contact us.

Learn how we support ecommerce customers who require PCI compliance for payment processing.

Read more

Ty Sbano Kacee Taylor Aaron Brown
https://vercel.com/changelog/bounce-rate-support-in-web-analytics Bounce rate support in Web Analytics 2025-01-15T13:00:00.000Z

You can now see the bounce rate of your visitors in Web Analytics.

With bounce rate, you can see the percentage of visitors who view a single page without navigating any further.

When filtering for a route or path, the bounce rate adapts and shows how many users bounced on a specific page.
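As a quick illustration of the metric itself (not Web Analytics code), bounce rate is simply single-page sessions as a share of total sessions:

```typescript
// Illustrative calculation: the percentage of sessions that viewed a
// single page without navigating further.
function bounceRate(singlePageSessions: number, totalSessions: number): number {
  if (totalSessions === 0) return 0; // avoid division by zero
  return (singlePageSessions / totalSessions) * 100;
}

console.log(bounceRate(40, 160)); // 25
```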

Learn more about filtering in Web Analytics.

Read more

Tobias Lins Timo Lins Damien Simonin Feugas
https://vercel.com/changelog/the-vercel-toolbar-is-now-more-compact-and-dynamic The Vercel Toolbar is now more compact and dynamic 2025-01-14T13:00:00.000Z

The Vercel Toolbar has a new compact design, making it easier to access the tools you use most.

  • Compact design: The toolbar is now smaller and only runs when you click or tap to activate it or when visiting from a link that contains a comment thread, draft link, or flag override

  • Shortcuts: Your most recently used tools will pin to the top of your menu for easy access

  • Visit with Toolbar: When visiting projects and deployments from the dashboard, you'll see a "Visit" button that gives you the option to load the toolbar upon opening

  • Browser extension controls: Users with the browser extension enabled can control when the toolbar is active or hidden under "Preferences" in the toolbar menu

Learn more about the Vercel Toolbar and its features.

Read more

George Karagkiaouris wits Christopher Skillicorn Sam Saliba Gary Borton
https://vercel.com/changelog/python-support-added-to-in-function-concurrency-beta Python support added to in-function concurrency beta 2025-01-14T13:00:00.000Z

Python is now supported in the ongoing in-function concurrency public beta.

In-function concurrency optimizes functions to handle multiple invocations simultaneously, improving resource efficiency. By reusing active instances instead of creating new ones, it reduces idle compute time and associated costs.

In-function concurrency is particularly beneficial for workloads with external API or database calls, such as AI models, where functions often sit idle while waiting for responses.

The in-function concurrency public beta is available to Pro and Enterprise customers using Standard or Performance Function CPU, and can be enabled through your dashboard. Real-time tracking of resource savings is available in Observability.

Learn more in our blog post and documentation, or get started with our template by enabling In-function concurrency in your project settings.

Read more

Tom Lienard
https://vercel.com/changelog/improved-log-visibility-for-function-durations-and-memory Improved log visibility for function durations and memory 2025-01-13T13:00:00.000Z

Logs now indicate when Vercel Functions reach (or near) their maximum duration or memory allocation for each request.

Logs also include quick links to configure function maximum duration, CPU & memory, region, and Node.js Version directly from requests.

View your project's logs.

Read more

Timo Lins
https://vercel.com/changelog/improvement-to-how-dates-display-in-the-dashboard Improvement to how dates display in the dashboard 2025-01-13T13:00:00.000Z

Dates across the dashboard now provide more precision.

  • For the first three days, dates are displayed as relative (e.g. "10m ago")

  • After three days, they switch to absolute values (e.g. "Jan 3")

This builds on a recent update where hovering over dates reveals more information, including the exact timestamp.

Read more

wits
https://vercel.com/blog/transforming-how-you-work-with-v0 Transforming how you work with v0 2025-01-10T13:00:00.000Z

With v0, Vercel's AI-powered pair programmer, anyone can participate in prototyping, building on the web, or expressing new ideas.

While v0 was initially created by developers for developers, now v0's capabilities extend far beyond coding, offering benefits to professionals across various industries. Let's explore how v0 can enhance productivity and creativity in different roles.

Read more

Alli Pope
https://vercel.com/changelog/updated-logging-limits-for-vercel-functions Updated logging limits for Vercel Functions 2025-01-09T13:00:00.000Z

The runtime log limits for Vercel Functions have been increased, allowing for significantly larger log entries. These updates replace the previous 4KB-per-line restriction, and they are now live for all projects.

The runtime log limits are now:

  • Log line size: Up to 256KB per log line.

  • Log line count: Up to 256 individual log lines per request.

  • Total log size per request: Up to 1MB (sum of all log lines in a single request).

Learn more about our logs in our documentation.

Read more

Craig Andrews
https://vercel.com/changelog/requesters-public-ip-postal-code-now-available-in-vercel-functions Requester's public IP postal code now available in Vercel Functions 2025-01-09T13:00:00.000Z

The x-vercel-ip-postal-code header is now part of Vercel’s geolocation capabilities, providing the postal code associated with the requester’s public IP address. This complements existing headers like x-vercel-ip-country, x-vercel-ip-city, and x-vercel-ip-country-region.

The x-vercel-ip-postal-code header is accessible in Vercel Functions, including Edge Middleware. Here's a TypeScript example:
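A minimal sketch reading the header with the Web-standard Request/Response APIs (the handler shape below is illustrative):

```typescript
// Read the geolocation header set by Vercel's edge network on each request.
export function GET(request: Request): Response {
  const postalCode = request.headers.get('x-vercel-ip-postal-code') ?? 'unknown';
  return Response.json({ postalCode });
}
```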

Postal codes are also available via the @vercel/functions package:
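A hedged sketch of the helper-based approach, assuming the @vercel/functions geolocation helper exposes a postalCode field (the exact return shape may vary by package version):

```typescript
import { geolocation } from '@vercel/functions';

// Sketch only: geolocation(request) derives location fields, including the
// postal code, from the request's Vercel geolocation headers.
export function GET(request: Request): Response {
  const { postalCode } = geolocation(request);
  return Response.json({ postalCode: postalCode ?? 'unknown' });
}
```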

For more information on headers and geolocation, see Vercel’s request header documentation.

Read more

Shohei Maeda
https://vercel.com/changelog/ai-enhanced-search-for-next-js-documentation AI-enhanced search for Next.js documentation 2025-01-08T13:00:00.000Z

You can now get AI-assisted answers to your questions from the Next.js docs search:

  • Use natural language to ask questions about the docs

  • View recent search queries and continue conversations

  • Easily copy code and markdown output

  • Leave feedback to help us improve the quality of responses

Start searching with the ⌘K (or Ctrl+K on Windows) menu on nextjs.org/docs.

Read more

Jhey Tompkins Gaspar Garcia
https://vercel.com/blog/salesforce-incremental-migration Headless Salesforce: An incremental migration from monolith to composable 2025-01-07T13:00:00.000Z

For ecommerce teams running Salesforce Commerce Cloud, the platform's monolithic design can feel like a double-edged sword. While its out-of-the-box capabilities promise rapid deployments, they often hinder frontend flexibility and innovation. But what if you could unlock a new level of performance—without risking your core business?

That’s exactly what a global sportswear brand achieved. Their headless Salesforce migration strategy halved their load times, cut cart abandonment by 28%, and increased mobile conversion rates by 15%. All without a disruptive, big-bang migration.

Here’s how they did it and how you can too.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/runtime-logs-can-now-be-filtered-by-request-type-and-vercel-resource Runtime logs can now be filtered by request type and Vercel resource 2025-01-07T13:00:00.000Z

The "Contain Types" filter in runtime logs has been replaced by two new filters for better clarity:

  1. Resource: Filters which infrastructure resource within the Vercel Edge Network was used to serve the request. Examples include Serverless Functions, Edge Cache, and Edge Middleware

  2. Request Type: Filters which framework-defined mechanism or rendering strategy was used by the request. Examples include API routes, Incremental Static Regeneration (ISR), and cron jobs

These updates provide more granular insights into how your requests are processed. Both filters are available on all plans starting today.

Learn more about how Vercel processes requests.

Read more

Luc Leray Timo Lins
https://vercel.com/changelog/speed-insights-usage-can-now-be-viewed-by-project Speed Insights usage can now be viewed by Project 2025-01-07T13:00:00.000Z

You can now view your Speed Insights traffic broken down by project in the Usage tab.

Learn more about Speed Insights.

Read more

Julia Shi
https://vercel.com/changelog/python-vercel-functions-now-have-streaming-enabled-by-default Python Vercel Functions now have streaming enabled by default 2025-01-06T13:00:00.000Z

Streaming is now enabled by default for all Vercel Functions using the Python runtime, completing the rollout plan announced last year. Python functions can now send data to the client as it’s generated, rather than waiting for the entire response—particularly beneficial for use cases like AI applications and real-time updates.

The VERCEL_FORCE_PYTHON_STREAMING environment variable is no longer necessary, as streaming is now applied automatically in your new deployments.

With streaming responses, the runtime log format and frequency have been updated.

For more details, visit our documentation or get started with our template.

Read more

Tom Lienard Mariano Cocirio
https://vercel.com/blog/building-the-black-friday-cyber-monday-live-dashboard Building the Black Friday-Cyber Monday live dashboard 2024-12-24T13:00:00.000Z

This year, we built a Black Friday-Cyber Monday (BFCM) dashboard to celebrate the success of our customers through the busy retail weekend. The dashboard gave a real-time look inside Vercel's infrastructure, showing live metrics for deployments, requests, blocked traffic, and more.

Building a data-heavy, real-time dashboard with a good user experience comes with challenges. Let's walk through how we overcame them.

Read more

Nanda Syahrasyad
https://vercel.com/changelog/free-vercel-remote-cache Vercel Remote Cache is now free 2024-12-20T13:00:00.000Z

Vercel Remote Cache is now free for all plans, resulting in immediate savings for over 43,000 existing teams.

Vercel Remote Cache speeds up developer and CI workflows by storing build outputs and logs for your team's Turborepo or Nx tasks, ensuring you never do the same work twice.

Fees accrued for usage prior to the change must be paid. Going forward, users will not see usage fees for Remote Cache. Your use of Remote Cache remains subject to our Fair use Guidelines.

Learn more about Vercel Remote Cache.

Read more

Anthony Shew Tom Knickman Harpreet Arora
https://vercel.com/changelog/vercel-firewall-now-supports-bypassing-system-mitigations-for-specific-ips Vercel Firewall now supports bypassing system mitigations for specific IPs 2024-12-20T13:00:00.000Z

Pro and Enterprise customers can now configure firewall rules to bypass system mitigations, including DDoS protection, for specific IPs and CIDR ranges.

We strongly recommend against bypassing protections. However, if the protections are blocking legitimate traffic, this feature provides a break-glass option. This may be particularly applicable if you have a proxy in front of Vercel that provides its own DDoS protection and may interfere with Vercel's.

To configure system bypass rules:

  1. Navigate to the Firewall in the Vercel dashboard

  2. Click Configure at the top right to access the configuration page

  3. Use the System Bypass Rules section at the bottom to specify the IP address or CIDR range to bypass mitigations for your production domains

Pro customers can set up to 3 bypass rules and Enterprise customers can set up to 5.

Learn more about Vercel Firewall's automatic DDoS mitigation.

Read more

Sage Abraham Marco Cornacchia Malavika Tadeusz
https://vercel.com/blog/optimizing-secure-builds-with-hive-and-secure-compute Optimizing secure build infrastructure with Secure Compute 2024-12-18T13:00:00.000Z

In our previous blog post, we introduced Hive, the internal codename for Vercel’s low-level compute platform, powering all of our builds. However, some builds come with unique security requirements. For these, Hive integrates seamlessly with Vercel's Secure Compute, which enables teams to securely connect with their backends through private connections without compromising performance.

Since moving Secure Compute to Hive, provisioning times have dropped from 90 seconds to 5 seconds and build performance has improved by an average of 30%, delivering both speed and reliability for even the most sensitive workloads.

Read more

Mariano Cocirio Guðmundur Bjarni Ólafsson
https://vercel.com/blog/the-rise-of-the-ai-crawler The rise of the AI crawler 2024-12-17T13:00:00.000Z

AI crawlers have become a significant presence on the web. OpenAI's GPTBot generated 569 million requests across Vercel's network in the past month, while Anthropic's Claude followed with 370 million. For perspective, this combined volume represents about 20% of Googlebot's 4.5 billion requests during the same period.

After analyzing how Googlebot handles JavaScript rendering with MERJ, we turned our attention to these AI assistants. Our new data reveals how OpenAI’s ChatGPT, Anthropic’s Claude, and other AI tools crawl and process web content.

We uncovered clear patterns in how these crawlers handle JavaScript, prioritize content types, and navigate the web, which directly impact how AI tools understand and interact with modern web applications.

Read more

Giacomo Zecchini Alice Alexandra Moore Malte Ubl Ryan Siddle
https://vercel.com/blog/technical-audits Technical audits: Optimizing cost, performance, and productivity 2024-12-12T13:00:00.000Z

Every 100ms of latency can cost ecommerce applications up to 8% in sales conversion. At scale, this can cost millions in revenue.

Complexity compounds as applications grow, making these performance issues harder to diagnose and fix. Audits help teams navigate these challenges systematically.

This article covers strategies we've developed across hundreds of real-world audits.

Read more

Dom Sipowicz Luis Alvarez Lorenzo Palmes Storni Alice Alexandra Moore
https://vercel.com/changelog/vercel-observability-is-now-generally-available Vercel Observability is now generally available 2024-12-12T13:00:00.000Z

Vercel Observability is now available to all Vercel customers, delivering framework-aware insights to optimize infrastructure and application performance.

Included with all plans, Observability offers visibility—at both the team and project levels—into key metrics aligned with your app's architecture, such as:

  • Vercel Functions usage: Invocations, durations, and error rates

  • In-function concurrency: Resource and cost-savings for customers with in-function concurrency enabled

  • External API requests: Outgoing API calls by hostname

  • Edge Requests: Request volumes by routes, including dynamic routes like /blog/[slug]

  • Fast Data Transfer: Path-level insights on requests by incoming, outgoing, and total data transfer

  • Builds: Resource usage and build-step latency

  • ISR caching: Route-level read and write usage, and total function duration during revalidation

In addition to the above, customers on Pro and Enterprise plans can upgrade to Observability Plus for:

  • Extended, 30-day retention

  • Full access to all data fields and aggregated latency stats

  • Monitoring for advanced querying

  • Path-level compute analytics

Pricing for Observability Plus starts at $10/month, with pro-rated on-demand usage at $1.20 per million events.

Monitoring is now part of Observability Plus. Existing Monitoring users benefit from the new lower rate of $1.20 per million events without taking action, and can migrate to Observability Plus for access to the complete suite.

Learn more about Vercel Observability.

Read more

Tobias Lins Dom Busser Chris Widmaier Ethan Shea Damien Simonin Feugas Caleb Boyd Timo Lins Amy Burns
https://vercel.com/changelog/monitoring-pricing-reduced-up-to-87 Monitoring pricing reduced up to 87% 2024-12-12T13:00:00.000Z

Monitoring pricing has been reduced to $1.20 per million events. This new pricing is effective immediately, with no action required.

Monitoring is also now part of Observability Plus, which can be enabled in Vercel Observability—now generally available. This enhanced suite builds on the Monitoring query engine, offering deeper insights into request handling, caching, compute, and build infrastructure.

To ensure uninterrupted workflows, both Monitoring and Observability remain visible in the dashboard for current Monitoring users.

Read more about Monitoring for existing users, Observability, and Observability Plus.

Read more

Ethan Shea Tobias Lins Dom Busser Caleb Boyd Harpreet Arora Damien Simonin Feugas Timo Lins Amy Burns Chris Widmaier
https://vercel.com/blog/extra-space-storages-build-times-became-17x-faster-with-vercel Extra Space Storage's build times became 17x faster with Vercel 2024-12-11T13:00:00.000Z

As the largest self-storage company in the U.S., Extra Space Storage manages over 3,800 stores nationwide. Delivering a consistent, high-quality digital experience to their customers is essential, and their engineering team recognized the need for faster iteration and more stability in their customer acquisition channels—public websites and kiosks.

However, their legacy architecture was creating bottlenecks, impacting time-to-market for new features, and slowing down development. By partnering with Vercel, Extra Space Storage was able to achieve their vision of improving their DevOps processes for their website and enable quicker customer feedback.

Read more

Greta Workman
https://vercel.com/changelog/custom-environments-are-now-available-on-vercel Custom Environments are now available on Vercel 2024-12-11T13:00:00.000Z

Custom Environments are now available on Vercel. With this feature, you can define an additional pre-production environment, such as staging or QA, directly within the Vercel dashboard, without relying on external workarounds or multiple projects.

This functionality allows you to reshape your release workflow by separating it from code management. Environments can now operate independently of branches, offering greater flexibility for specific organizational workflows, targeted deployments, and managing multiple development environments across teams.

Customers on the Pro plan have the ability to configure one Custom Environment and customers on the Enterprise plan can configure up to 12 from the dashboard.

Learn more about Custom Environments.

Read more

Mariano Cocirio Trek Glowacki Cody Brouwers Mitch Vostrez Jeff See Paulo Guarnier De Mitri Amy Burns Sean Massa Henry Heffernan
https://vercel.com/changelog/vercel-firewall-now-stops-ddos-attacks-up-to-40x-faster Vercel Firewall now stops DDoS attacks up to 40x faster 2024-12-10T13:00:00.000Z

The Vercel Firewall—enabled by default on all plans—now features upgraded network analysis powered by real-time stream processing of web traffic. This enhancement stops volumetric DDoS attacks 40x faster and low-and-slow attacks 10x faster.

By blocking malicious traffic and mitigating DDoS attacks earlier, the Firewall further reduces costs by preventing threats from reaching your applications and backends.

This improvement is live for all Vercel customers today with no action required.

Learn more about how Vercel Firewall protects your apps.

Read more

Casey Gowrie Joe Haddad
https://vercel.com/blog/vercel-and-aws-partner-on-ai-tools-and-experiences Vercel and AWS partner on AI tools and experiences 2024-12-09T13:00:00.000Z

Last week at AWS re:Invent 2024, the Vercel team met with thousands of builders in the Developer Solutions Zone, celebrated v0's launch on AWS Marketplace, and hosted hundreds of customers and partners with various event activations. Now, we're taking our AWS Partnership further:

Vercel has been selected for a Strategic Collaboration Agreement (SCA) with AWS—to deliver the next generation of AI-enabled developer tooling and experiences.

This collaboration underscores the value of Vercel and AWS together as a one-stop shop for teams building AI experiences.

Read more

Shriya Hahn
https://vercel.com/changelog/introducing-the-vercel-typescript-sdk Introducing the Vercel TypeScript SDK 2024-12-09T13:00:00.000Z

We’ve published a TypeScript-native SDK for working with the Vercel API.

This SDK includes:

  • Full type safety for accessing the Vercel REST API with Zod schemas

  • New documentation for every function, argument, and type

  • Better tree-shaking support with optional standalone functions

  • Intuitive error handling and detailed error messages

  • Configurable retry strategies (including backoffs)

This SDK can be used to automate every part of Vercel’s platform including:

  • Deployment automation and management

  • Project creation and configuration

  • Domain management

  • Team and user administration

  • Environment variable management

  • Logs and monitoring

  • Integration configuration

View the docs or explore the repo.

Read more

Lee Robinson Ismael Rumzan
https://vercel.com/changelog/nile-and-motherduck-join-the-vercel-marketplace Nile and MotherDuck join the Vercel Marketplace 2024-12-09T13:00:00.000Z

Nile and MotherDuck are now available as first-party integrations on the Vercel Marketplace.

You can integrate Nile's database services or leverage MotherDuck's data analysis capabilities directly from the Vercel dashboard, complete with integrated billing and CLI provisioning.

Get started with the Vercel Marketplace, available to customers on all plans.

Read more

Hedi Zandi
https://vercel.com/blog/life-of-a-request-securing-your-apps-traffic-with-vercel Life of a Vercel request: Securing your app's traffic with Vercel 2024-12-05T13:00:00.000Z

In any given week, Vercel Firewall blocks over one billion malicious connections—proactively safeguarding your app before the first request arrives. Defining access rules ensures your infrastructure scales only for legitimate traffic, keeping resources secure and associated costs in check.

With Vercel, application protection is integrated into every step of the request lifecycle. It starts with the platform-wide Vercel Firewall—active by default for all users—and extends to Deployment Protection and the Web Application Firewall (WAF) which give you granular security control and defense-in-depth.

Read more

Dan Fein
https://vercel.com/changelog/lower-prices-for-domains-on-vercel Lower prices for domains on Vercel 2024-12-05T13:00:00.000Z

We have lowered the prices for purchasing domains by up to 50%.

Vercel offers hundreds of top-level domains (TLDs) for purchase, including the most popular TLDs like .com and our most recent addition of .ai domains.

Vercel automatically configures and manages nameservers and SSL certificates for your domain, with fast domain search and automatic DNS setup for easy deployment of your next idea.

Buy your first domain or explore all supported domains.

Read more

Dillon Mulroy Harpreet Arora
https://vercel.com/changelog/runtime-logs-now-show-event-sequences-for-vercel-requests Runtime logs now show event sequences for Vercel requests 2024-12-04T13:00:00.000Z

Runtime logs now offer a request-centric interface to streamline debugging and provide deeper traffic insights:

  • Request anatomy UI: Visualize each request’s lifecycle in a sequential view, from the firewall through middleware and function execution. Gain a full picture of how your app processes traffic at every stage.

  • Improved log viewer: A full-width design enhances readability, grouping all log lines for a request into one panel, including middleware and function invocations.

These are available on all plans starting today.

Learn more about how Vercel processes requests.

Read more

Julia Shi Luc Leray Tobias Lins Timo Lins
https://vercel.com/blog/black-friday-cyber-monday-2024-recap Billions of dollars, billions of requests: Black Friday-Cyber Monday 2024 2024-12-03T13:00:00.000Z

The Black Friday-Cyber Monday (BFCM) stakes are high. Billions of dollars are on the line with consumers racing to save money over the biggest shopping days of the year.

This year, Vercel celebrated the success of our customers by building a live dashboard showing activity across the platform for BFCM.

Read more

Dan Fein
https://vercel.com/blog/retailer-sees-10m-increase-in-sales-on-vercel Retailer sees $10M increase in sales on Vercel 2024-11-27T13:00:00.000Z

Founded 30 years ago, this top global retailer has established itself as a leader in the sportswear and apparel industry. With a diverse product range that includes athletic performance gear, footwear, accessories, and casual apparel, the company is renowned for its commitment to innovation and quality. Listed on the NYSE, the retailer reported revenue of almost $6 billion in 2024 and employs approximately 16,000 people worldwide. Despite a challenging retail environment, it continues to excel in ecommerce, posting 3% growth in direct-to-consumer revenue to $2.3 billion, with ecommerce accounting for 41% of this segment.

Read more

Alina Weinstein
https://vercel.com/changelog/temporarily-disable-vercel-firewall-system-ddos-mitigations Temporarily disable Vercel Firewall system DDoS mitigations 2024-11-27T13:00:00.000Z

Pro and Enterprise customers now have the ability to temporarily disable all automatic system mitigations, including DDoS mitigations, by the Vercel Firewall.

We strongly recommend against disabling protections. However, if the protections appear to be blocking legitimate traffic, this feature offers a break-glass option. This may be particularly applicable if you have a proxy in front of Vercel that provides its own DDoS protection and may interfere with Vercel's.

To temporarily disable system mitigations, visit the Firewall tab within the Vercel dashboard and click the ellipsis menu at the top right to access additional options. Once you confirm that you would like to temporarily disable all system mitigations, all traffic to your project will bypass Vercel Firewall system DDoS mitigations for a period of 24 hours.

Vercel Firewall's system defenses are automatically enabled for all projects on all plans, mitigating billions of malicious connection attempts every week, and preventing resource abuse. Customers must exercise extreme caution when disabling automated defenses as no attack will be blocked.

Please note that you are responsible for all usage fees incurred when using this feature, including illegitimate traffic that may otherwise have been blocked.

Learn more about Vercel Firewall's automatic DDoS mitigation.

Read more

Sage Abraham Malavika Tadeusz
https://vercel.com/blog/from-minutes-to-seconds-how-meter-accelerates-delivery-with-vercel-and-next From minutes to seconds: How Meter accelerates delivery with Vercel and Next.js 2024-11-26T13:00:00.000Z

Meter provides a full-stack networking solution that makes it easy for any business, organization, or school—of any size—to get access to the internet. They have two application layers built on top of their vertically integrated technical architecture: Meter Command, a generative UI for IT and Networking teams, and Meter Dashboard, their main web interface. Meter’s adoption of Vercel has enhanced performance, simplified workflows, and empowered their team to iterate rapidly—not only across Command and Dashboard, but throughout their interconnected stack of hardware, software, and operations.

Choosing Vercel for speed and integrations

Prior to migrating, Meter’s Dashboard product was hosted through various AWS solutions, with long build times and limited visibility into changes. When evaluating options, Meter's team prioritized fast iteration, speed of deployment, and the seamless integration that Vercel provides for both frontend and backend processes.

Monorepo management made easier

The team implemented a two-phase migration of Dashboard to Vercel, first transferring over core components and then integrating additional features. Challenges such as managing remote caching and consolidating to a monorepo were handled with Vercel’s support for React, Vite builds, and previews for every feature branch.

Today, all of Meter's deployed assets live in a unified repository, enabling easier code management and collaboration across teams. Improved build times—down from over 10 minutes to less than a minute—and Vercel’s flexible rollback capabilities have increased the reliability and scalability of their deployments.

Moving and shipping faster

Since the Dashboard migration, the team has noted substantial benefits in CI/CD iteration speed, which helps them quickly push and review code in a production-like environment. Vercel's integrated git workflow allows for daily production pushes, enabling faster feature releases and reduced need for manual QA.

Meter built Command on Next.js and Vercel. Command enables Meter users to get information about their networks, take action, and create custom, real-time software—all in natural language and at the speed of a web search.

Vercel and Next.js allow for rapid iteration on the frontend and easily sync with the backend data processing that powers these interactions via Next.js' API Routes. The engineering team can focus on refining the model architecture that powers the product’s backend without worrying about underlying infrastructure details.

The ability to push changes quickly, view updates immediately on their dev site, and iterate efficiently has been transformative for the team working on Command. Vercel ensures that Command remains performant by maintaining a clear separation between client-side and server-side logic, while still allowing seamless communication between the two.

Get started with Vercel

Meter’s engineering team has observed a marked increase in performance, scalability, and user experience. With every feature branch previewed and reviewed before going live, the team has found a reliable process for maintaining high standards across their products.

As Meter continues to refine its vertically integrated hardware, firmware, and software stack, the streamlined workflow and increased speed on Vercel will enable them to deliver even more powerful products to their customers and partners.

Read more

Alli Pope
https://vercel.com/blog/how-notion-powers-rapid-and-performant-experimentation How Notion powers rapid and performant experimentation 2024-11-25T13:00:00.000Z

Notion is a connected workspace that allows users to write, plan, and organize, all enhanced with built-in AI. With a platform as flexible as Notion, the challenge for their website team lies in communicating the vast range of use cases—from personal projects like planning trips to enterprise-level tasks like managing company documentation. That’s a huge total addressable market that attracts many millions of diverse visitors to their website every week. As these numbers continue to rapidly grow and personas expand, Notion needed a website capable of rapid iteration and experimentation to help their message resonate with more people.

Read more

Alli Pope
https://vercel.com/changelog/node-js-22-lts-is-now-available Node.js 22 LTS is now generally available for builds and functions 2024-11-22T13:00:00.000Z

Starting today, Node.js version 22 is available as the runtime for your builds and functions leveraging Node. To use version 22, go to Project Settings > General > Node.js Version and select 22.x. This is also the default version for new projects.
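Besides the dashboard setting, the Node.js major version can also be pinned in package.json via the engines field, which Vercel reads when building:

```json
{
  "engines": {
    "node": "22.x"
  }
}
```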

The current version used by Vercel is 22.11.0. Minor and patch releases are applied automatically, so only the major version (22.x) is guaranteed.

Read our Node.js runtime documentation to learn more.

Read more

Nathan Rajlich Austin Merrick Sean Massa Tom Lienard Janos Szathmary Guðmundur Bjarni Ólafsson
https://vercel.com/changelog/streaming-is-now-supported-in-vercel-functions-for-the-python-runtime Streaming is now supported in Vercel Functions for the Python runtime 2024-11-22T13:00:00.000Z

Streaming is now supported and will soon be enabled by default in Vercel Functions for the Python runtime, allowing functions to send data to the client as it’s generated rather than waiting for the full response. This is particularly useful for AI applications.

This change will be rolled out progressively. Starting today, it will apply to all new projects and will take effect for all existing projects on January 5, 2025. On this date, projects using Log Drains will be migrated, and streaming responses will impact the format and frequency of runtime logs.

To enable streaming as the default for your Vercel Functions using Python, add the VERCEL_FORCE_PYTHON_STREAMING=1 environment variable in your project. Streaming will then be enabled on your next production deployment.
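Conceptually, a streaming function yields chunks of the response as they are produced instead of buffering the whole body. A framework-agnostic Python sketch of the pattern (this is not Vercel's exact handler signature—see the Python streaming documentation for that):

```python
def generate_chunks(tokens):
    """Yield pieces of a response as they become available."""
    for token in tokens:
        # In a real AI app, each chunk arrives from the model incrementally
        # and is flushed to the client immediately instead of being buffered.
        yield token + " "

# The client can start rendering as soon as the first chunk arrives.
streamed = "".join(generate_chunks(["Hello", "from", "a", "streaming", "function"]))
print(streamed.strip())  # Hello from a streaming function
```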

For more information, read the Python streaming documentation or get started with our template.

Read more

Tom Lienard Mariano Cocirio
https://vercel.com/blog/life-of-a-vercel-request-navigating-the-edge-network Life of a Vercel request: Navigating the Edge Network 2024-11-21T13:00:00.000Z

Vercel’s framework-defined infrastructure provisions cloud resources while providing full transparency, from the initial build to every incoming request. Developers can track how static assets are distributed globally, functions handle ISR revalidation, and resources manage routing, server-side rendering, and more.

As users visit your app, granular metrics reveal which resources were leveraged to serve their request. This series unpacks the Vercel Edge Network and associated resource allocation, exploring each stage of a request, and how Vercel streamlines the process.

With a clear understanding of these metrics and optimization strategies, you can deliver better user experiences while improving resource consumption and reducing costs.

Read more

Dan Fein
https://vercel.com/blog/vercel-acquires-grep Vercel acquires Grep to accelerate code search 2024-11-20T13:00:00.000Z

Grep allows developers to quickly search code across over 500,000 public git repositories. With the acquisition, founder Dan Fox will also be joining Vercel’s AI team to continue building Grep to enhance code search for developers.

Read more

Jared Palmer Dan Fox
https://vercel.com/changelog/pro-customers-can-now-configure-up-to-3-regions-for-vercel-functions Pro customers can now configure up to 3 regions for Vercel Functions 2024-11-20T13:00:00.000Z

Pro customers can now set up to three regions for their Vercel Functions, enabling compute to run closer to distributed data sources for faster responses and improved performance. When multiple Vercel Function regions are configured, user requests that require compute will be routed to the closest specified region.

Previously, functions for Pro customers were restricted to a single region. Increasing to three regions enables:

  • Global low latency

  • High compute density, leading to higher cache hit rates and fewer cold starts

  • Compatibility with standard database replication, such as Postgres read replicas

This also adds an extra layer of redundancy, complementing the built-in multi-Availability Zone redundancy of Vercel Functions.

To configure additional regions, add a regions property to your vercel.json.
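For example, a minimal vercel.json sketch (region IDs are illustrative—choose the three closest to your data):

```json
{
  "regions": ["iad1", "fra1", "syd1"]
}
```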

Redeploy your project for the changes to take effect. Learn more about configuring regions.

Read more

Tom Lienard Mariano Cocirio
https://vercel.com/changelog/vercel-blob-now-supports-file-upload-progress Vercel Blob now supports file upload progress 2024-11-20T13:00:00.000Z

Vercel Blob can now track file upload progress, enabling a better user experience when uploading files.

With the latest @vercel/blob package, you can use the new onUploadProgress callback to display progress during file uploads. In the Dashboard, you'll also see the upload progress for your files.
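The mechanics of such a callback can be sketched in a self-contained way. Here, simulateUpload is a hypothetical stand-in for the real upload() call in @vercel/blob; the documented callback receives loaded and total bytes plus a percentage:

```typescript
// Hypothetical helper illustrating the shape of a progress callback;
// in @vercel/blob, onUploadProgress is an option passed to upload().
type ProgressEvent = { loaded: number; total: number; percentage: number };

function simulateUpload(
  chunkSizes: number[],
  onUploadProgress: (event: ProgressEvent) => void,
): void {
  const total = chunkSizes.reduce((sum, size) => sum + size, 0);
  let loaded = 0;
  for (const size of chunkSizes) {
    loaded += size; // a chunk finished uploading
    onUploadProgress({
      loaded,
      total,
      percentage: Math.round((loaded / total) * 100),
    });
  }
}

const seen: number[] = [];
simulateUpload([512, 512, 1024], (event) => seen.push(event.percentage));
console.log(seen); // [ 25, 50, 100 ]
```

In a UI, the callback typically drives a progress bar by writing the percentage into component state.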

Try it out or learn more about Vercel Blob.

Read more

Vincent Voyer Luis Meyer
https://vercel.com/changelog/skew-protection-is-now-enabled-by-default-for-new-projects Skew Protection is now enabled by default for new projects 2024-11-19T13:00:00.000Z

Skew Protection eliminates version differences between web clients and servers—available for Pro and Enterprise customers. Starting today, new projects will have Skew Protection enabled by default.

Existing projects will not be changed; however, you can manually enable Skew Protection in the project's settings.

Skew Protection ensures client-side code matches the server-side code for the corresponding deployment for a period of time or until a hard page refresh. This protects from version mismatch errors when creating a new deployment, such as file name changes from hashed bundles or even post backs from Server Actions.

Learn more about Skew Protection.

Read more

Steven Salat
https://vercel.com/blog/ai-sdk-4-0 AI SDK 4.0 2024-11-18T13:00:00.000Z

The AI SDK is an open-source toolkit for building AI applications with JavaScript and TypeScript. Its unified provider API allows you to use any language model and enables powerful UI integrations into leading web frameworks such as Next.js and Svelte.

Read more

Lars Grammel Jared Palmer Nico Albanese Walter Korman
https://vercel.com/blog/vercel-partner-program-updates Accelerating partner success: Vercel’s new Partner Program benefits 2024-11-15T13:00:00.000Z

At Vercel, we believe in the power of partnership and collaboration to drive innovation and mutual success. One in two sales and project deliveries happens in collaboration with our partners. Last month, over 35 partners sponsored and supported Next.js Conf—our annual open-source conference—where over 1,000 people gathered in San Francisco and tens of thousands online from around the world. From championing an open web, supporting industry alliances, to developing joint features that enhance customer and user experiences, we're achieving more together.

Read more

Jen Chang
https://vercel.com/changelog/neon-now-available-on-vercel-marketplace Neon now available on Vercel Marketplace 2024-11-15T13:00:00.000Z

Neon joins the Vercel Marketplace with its Postgres solution, offering integrated billing and automated account provisioning, directly from the Vercel Dashboard.

This integration replaces the existing Vercel Postgres, allowing new users to immediately create Neon databases right from Vercel Marketplace.

In the coming months, we’ll begin a zero-downtime migration for all existing stores, requiring no action from users and with no change in pricing. Current Vercel Postgres users will retain uninterrupted access to their databases and can continue creating new stores with Vercel Postgres until the migration is complete, after which new store creation will shift to the Neon Marketplace integration.

The Vercel Marketplace is available to customers on all plans.

Get started with Neon on Vercel.

Read more

Hedi Zandi Fabio Benedetti Adrian Cooney Justin Kropp
https://vercel.com/changelog/web-analytics-now-has-route-support Web Analytics now has route support 2024-11-15T13:00:00.000Z

With the 1.4.0 release of @vercel/analytics, you can see route-level insights when you filter in Web Analytics. This update includes:

  • Support for frontend frameworks: Dynamic route segments are now supported in frameworks like Next.js, SvelteKit, and Remix with the latest version of the package

  • Advanced filtering: Apply filters based on routes to see page views and custom events per defined route

This feature is available to all Web Analytics customers.

Learn more about filtering in Web Analytics.

Read more

Damien Simonin Feugas Tobias Lins
https://vercel.com/changelog/vercel-now-supports-one-click-bluesky-dns-configuration Vercel now supports one-click Bluesky DNS configuration 2024-11-14T13:00:00.000Z

Bluesky is now a preset DNS option for domains, simplifying the process to set your Bluesky handle to a Vercel domain. Upon updating your domain's DNS, you will need to visit Bluesky settings to complete domain verification.
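Under the hood, the preset creates the TXT record Bluesky checks for handle verification. Sketched below with a placeholder DID—copy your actual value from Bluesky's settings:

```
_atproto.example.com.  TXT  "did=did:plc:<your-did>"
```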

Read our Bluesky domain guide for a complete walkthrough or learn more about DNS Presets.

Read more

Dillon Mulroy
https://vercel.com/blog/life-of-a-vercel-request-what-happens-when-a-user-presses-enter Life of a Vercel request: What happens when a user presses enter 2024-11-13T13:00:00.000Z

When developers push code, Vercel’s framework-defined infrastructure analyzes the codebase and intelligently provisions cloud resources. When requests come in, Vercel’s infrastructure instantly routes them to the nearest data center over a high-speed, low-latency network, delivering a response right back to the user.

Vercel handles all of this behind the scenes. But understanding how your framework code powers the infrastructure—from deployment to request handling—gives you insight into how Vercel’s components work together, and enables you to further optimize user experiences.

Here’s how Vercel manages requests at every stage.

Read more

Dan Fein
https://vercel.com/changelog/introducing-vercel-firewall-notifications Introducing Vercel Firewall DDoS mitigation notifications 2024-11-12T13:00:00.000Z

You can now receive alerts when the Vercel Firewall detects and automatically mitigates a DDoS attack on your Vercel project. With these alerts, you can get notified immediately when traffic to your application is blocked or challenged, so that you can review attack traffic and take any further action in a timely manner.

To get started, you can either set up a webhook to get notified through a defined HTTP endpoint, or use the Vercel Slack app to receive notifications in your Slack workspace.

These alerts are available on Pro and Enterprise plans.

Learn more about Vercel Firewall.

Read more

Sage Abraham Dany Volk
https://vercel.com/blog/vercel-named-a-visionary-in-2024-gartner-magic-quadrant-for-cloud Vercel named a Visionary in 2024 Gartner® Magic Quadrant™ for Cloud Application Platforms 2024-11-08T13:00:00.000Z

The Frontend Cloud is designed for developers and teams that care deeply about user experiences. Whether you're serving billions of users or building your first project, the Frontend Cloud helps you remove friction from the development and delivery process. This allows you to focus on building your product instead of managing and configuring the infrastructure required to make it work.

Read more

Malte Ubl Paul Staelin
https://vercel.com/changelog/improvements-to-vercel-secure-compute-builds-provisioning-time Improvements to Vercel Secure Compute builds provisioning time 2024-11-08T13:00:00.000Z

Provisioning time for Vercel Secure Compute builds has decreased from 1-2 minutes to under 5 seconds—a 20x speed improvement.

These builds require provisioning of build containers with custom configurations for each customer’s security needs. Now, this tailored container-generation process is significantly faster, reducing overall deployment times.

Additionally, builds are consolidated closer to the Secure Compute regions such as Frankfurt, Sao Paulo, Oregon, N. Virginia, Sydney, or Ireland, enhancing efficiency even further.

Learn more about Vercel Secure Compute.

Read more

Guðmundur Bjarni Ólafsson Gargi Sharma Carlos Galdino Miroslav Simulcik Marc Greenstock
https://vercel.com/blog/motortrend-shifting-into-overdrive-with-vercel MotorTrend: Shifting into overdrive with Vercel 2024-11-07T13:00:00.000Z

MotorTrend—a Warner Bros. Discovery company and the world’s leading media company on all things automotive—needed a digital experience as powerful as the vehicles they showcase. Bogged down by a legacy tech stack, their development team faced frustratingly long build times and a cumbersome release process. They knew a complete redesign wasn't the answer—they needed a platform upgrade.

Read more

Dan Fein
https://vercel.com/changelog/next-js-ai-chatbot-template-3-0 Next.js AI Chatbot Template 3.0 2024-11-07T13:00:00.000Z

The Next.js AI Chatbot template has been updated to use Next.js 15, React 19, and the Auth.js for Next beta.

The template's UI has also been redesigned to include a model switcher to make it easier for users to experiment with different models. The new side-by-side UI keeps your users' output and chat messages on screen simultaneously.

Try the demo or deploy your own.

Read more

Jeremy Philemon Shadcn
https://vercel.com/changelog/track-build-metrics-and-resource-consumption-with-observability Track build metrics and resource consumption with Observability 2024-11-07T13:00:00.000Z

Users in the limited beta of Observability can now get additional metrics related to builds, including:

  • Build time: Track build duration over time and spot regressions toward longer builds faster.

  • Memory and disk usage: See how efficiently your builds use resources by viewing average memory and disk usage.

  • Build steps: Explore P50 and P90 durations of each build step.

Observability is in limited beta for current Monitoring users and can be accessed from the new Observability tab in your Vercel projects.

Read more

Andrew Healey Balazs Varga Felix Haus Janos Szathmary Tobias Lins Timo Lins Mariano Cocirio
https://vercel.com/blog/break-the-news-not-the-site Break the news, not the site: Leading news organizations upgrade their infrastructure ahead of the election 2024-10-31T13:00:00.000Z

When major political developments unfold, millions rush to news websites, putting immense pressure on digital infrastructure. With global audiences, slow-loading websites or crashes during a major event can be catastrophic for a news organization.

Read more

Alina Weinstein
https://vercel.com/blog/a-deep-dive-into-hive-vercels-builds-infrastructure A deep dive into Vercel’s build infrastructure 2024-10-30T13:00:00.000Z

Vercel has a new low-level untrusted and ephemeral compute platform—designed to give us the control needed to securely and efficiently manage and run builds. Since November 2023, this compute platform, internally codenamed "Hive", has powered Vercel’s builds, enabling key improvements like enhanced build machines and a 30% improvement in build performance.

The platform operates on the fundamental assumption that we’re executing potentially malicious code on multi-tenant machines, requiring it to be safe, reliable, performant, and cost-effective. It’s architected to handle multiple use cases and can be composed in different ways depending on what’s needed. Most recently, Hive allowed us to reduce provisioning times for Secure Compute customers from 90 seconds to 5 seconds, while also improving their build speeds.

We built Hive because we needed finer control and more granular management to continuously improve Vercel’s infrastructure, to meet the growing demands of our customers and to fulfill our vision of delivering the best development experience in the world.

Read more

Mariano Cocirio Guðmundur Bjarni Ólafsson
https://vercel.com/changelog/improved-support-for-pnpm-corepack-and-monorepos Improved support for pnpm, Corepack, and monorepos 2024-10-30T13:00:00.000Z

We've improved the experience of deploying projects using pnpm, Corepack, and Turborepo together.

Previously, combinations of these tools could result in unexpected behavior or complex build errors. Clear error and warning messages have been added, explaining how to fix problems when incompatibilities exist.

For example, a project with Corepack enabled, specifying pnpm in the packageManager field, and a lockfile of version 6.0 would previously see the warning: Ignoring not compatible lockfile. Now, this is handled with a clearer error message: Detected lockfile "6.0" which is not compatible with the intended corepack package manager "pnpm@<version>". Update your lockfile or change to a compatible corepack version.

Additionally, each package previously had to define its own packageManager. The root package.json#packageManager is now detected in monorepo projects with Corepack enabled and applied to all packages.
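
As an illustration, a root manifest along these lines (the project name and pnpm version below are hypothetical) would now apply to every package in the monorepo:

```json
{
  "name": "my-monorepo",
  "private": true,
  "packageManager": "pnpm@9.12.0"
}
```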

Read more

Austin Merrick
https://vercel.com/changelog/view-advanced-function-metrics-with-observability View advanced function metrics with Observability 2024-10-30T13:00:00.000Z

Users in the limited beta of Observability can now view advanced insights for serverless Vercel Functions. Explore low-level metrics about function execution, including:

  • CPU throttle and memory usage: Understand CPU usage and memory consumption and see if you could benefit from upgrading the function to more resources

  • Time to First Byte (TTFB): See how quickly your function responds to requests by sending the first bytes of the response

  • Function start type: View cold start and pre-warmed function invocation rates

Observability is in limited beta for current Monitoring users and can be accessed from the new Observability tab in your Vercel projects.

Read more

Tobias Lins Timo Lins Ethan Shea
https://vercel.com/changelog/filter-by-custom-date-ranges-in-speed-insights Filter by custom date ranges in Speed Insights 2024-10-29T13:00:00.000Z

You can now filter by custom date ranges in Speed Insights. Select any custom time period in the date range picker, or drag across the graph to quickly focus on a specific period.

Learn more about Speed Insights or enable Speed Insights for your project.

Read more

Damien Simonin Feugas Timo Lins
https://vercel.com/changelog/device-type-support-and-improved-breakdowns-in-web-analytics Device type support and improved breakdowns in Web Analytics 2024-10-28T13:00:00.000Z

You can now inspect and filter device types in Vercel Web Analytics, and apply filters to view page views and custom events for each device.

Additionally, we updated the overview to show percentages instead of absolute numbers, making the overall distribution easier to see. You can still explore the total numbers by expanding a panel with the "View all" button.

These features are available to Web Analytics users on all plans.

Check out our documentation to learn more.

Read more

Timo Lins
https://vercel.com/blog/recap-next-js-conf-2024 Recap: Next.js Conf 2024 2024-10-25T13:00:00.000Z

Our fifth annual Next.js Conf finished yesterday, where we shared our research and upcoming improvements to the framework, as well as what's new in the community and Next.js ecosystem. Over 1,000 people in the Next.js community gathered in San Francisco and tens of thousands around the world watched online to see what's new with Next.js.

Read more

Lee Robinson Delba de Oliveira
https://vercel.com/blog/whats-new-in-svelte-5 What's new in Svelte 5 2024-10-23T13:00:00.000Z

With its compiler-first approach, fine-grained reactivity, and ability to integrate with any JavaScript project, Svelte stands apart from other frameworks.

At Vercel, we're big fans of Svelte—deeply invested in its success and constantly working to make our platform the best place to build and deploy Svelte apps.

With the arrival of Svelte 5, let's explore what makes this release exciting.

Read more

Alice Alexandra Moore Rich Harris
https://vercel.com/blog/maximizing-outputs-with-v0-from-ui-generation-to-code-creation Maximizing outputs with v0: From UI generation to code creation 2024-10-23T13:00:00.000Z

v0 is a powerful tool for generating high-quality UIs and code, and it's also an educational asset for designing and creating on the web. It leverages deep integrations with libraries and modern frameworks like Next.js and React. Whether you're looking to scaffold a new project, fetch data, or create 3D graphics, v0 is designed to meet all your frontend development needs.

To get the highest quality generations, you need to be able to craft input prompts to guide v0 well. The better you guide v0 and understand its strengths, the more accurate and relevant the responses you'll get.

In this post, we’ll look at how you can get the most out of v0’s core features, UI generation abilities, code generation, and developer support.

Read more

Alli Pope Aryaman Khandelwal
https://vercel.com/changelog/openid-connect-federation-now-generally-available OpenID Connect (OIDC) Federation now generally available 2024-10-23T13:00:00.000Z

Vercel's OpenID Connect (OIDC) Federation is now generally available. Strengthen your security by replacing long-lived environment variable credentials with short-lived, RSA-signed JWTs for builds and Vercel Functions.

Use Vercel’s OIDC Identity Provider (IdP) to issue tokens for cloud providers and services like AWS, Azure, Firebase, and Salesforce.

With general availability, we are also introducing a new Team Issuer mode, which mints OIDC tokens with a URL unique to your team. This allows you to configure your cloud environment with stricter zero trust configurations.

To enable Vercel OIDC, update your project's security settings and integrate it using the @vercel/functions package. If you're already using Vercel OIDC, we recommend opting into Team Issuer mode in those settings.

Check out the documentation and blog post to learn more.

Read more

Marc Greenstock Bel Curcio Christopher Skillicorn
https://vercel.com/blog/bnp-paribas-open-serving-up-scores-and-experiences-in-real-time-with-work BNP Paribas Open: Serving up scores and experiences in real time with Work & Co and Vercel 2024-10-22T13:00:00.000Z

The prestigious BNP Paribas Open, held annually in Indian Wells, California, attracts top tennis talent and a global audience. To match its world-class status, the tournament required a significant digital upgrade, enabling dynamic, real-time tracking and engagement with hundreds of players for their fanbase of millions.

Read more

Alina Weinstein
https://vercel.com/blog/how-vercel-adopted-microfrontends How Vercel adopted microfrontends 2024-10-22T13:00:00.000Z

Vercel's main website, once a single large Next.js application, serves both our website visitors and our logged-in dashboard users. But as Vercel grew, this setup revealed opportunities for improvement. Build times grew, dependency management became more intricate, and workflows needed optimization. Minor changes triggered full builds, affecting isolated development and CI pipelines.

It was clear a change was needed.

Read more

Mark Knichel Dan Fein Brian Emerick
https://vercel.com/changelog/choose-ip-visibility-for-log-drains Choose IP visibility for Log Drains 2024-10-22T13:00:00.000Z

Since IP addresses can be considered personal information under certain data privacy laws, we're giving you the ability to configure whether Vercel forwards IP addresses to your Log Drains. Now, similar to Monitoring, you can disable this forwarding in the security settings of your Team.

This can be done by Owner and Admin roles on Pro and Enterprise plans.

Read more

Julia Shi Luc Leray Chris Widmaier
https://vercel.com/changelog/upstash-joins-the-vercel-marketplace Upstash joins the Vercel Marketplace 2024-10-22T13:00:00.000Z

Upstash has joined the Vercel Marketplace with three of its core products: KV, Vector, and QStash. These services offer integrated billing, automated account provisioning, and direct access to the Upstash console from within the Vercel Dashboard.

This integration replaces Vercel KV, and in the coming months, we will begin a zero-downtime migration for all existing stores with no action required and no change in price.

Existing Vercel KV users will continue to have full access to their current stores without any changes during the transition, and new stores can be created via the Upstash Marketplace Integration.

Get started with the Vercel Marketplace, available to customers on all plans.

Read more

Hedi Zandi Fabio Benedetti Justin Kropp
https://vercel.com/blog/eval-driven-development-build-better-ai-faster Eval-driven development: Build better AI faster 2024-10-17T13:00:00.000Z

AI changes how we build software. In combination with developers, it creates a positive feedback loop where we can achieve better results faster.

However, traditional testing methods don't work well with AI's unpredictable nature. As we've been building AI products at Vercel, including v0, we've needed a new approach: eval-driven development.

This article explores the ins and outs of evals and their positive impact on AI-native development.

Read more

Malte Ubl Alice Alexandra Moore Ido Pesok
https://vercel.com/blog/v0-plans-for-teams v0 plans for teams are here 2024-10-15T13:00:00.000Z

Last October we introduced v0—a generative user interface system powered by natural language and AI. Users generated over four million designs, creating everything from sophisticated dashboards to polished marketing pages.

Now, v0 is like having an expert programmer sitting next to you. It's an assistant that specializes in web technologies and frameworks to help you generate functional code and UI from best practices, migrate or debug existing code, or learn to code for the first time.

Starting today, v0 is available to teams of all sizes, with plans designed to help you collaborate and scale securely. v0 Team and v0 Enterprise plans offer security features like SSO and, for Enterprise, the ability to opt out of data training, while helping you share and reuse knowledge and generations across your whole team.

Read more

Jared Palmer Jueun Grace Yun Aryaman Khandelwal
https://vercel.com/changelog/updated-default-retention-policy-for-canceled-deployments Updated default retention policy for canceled deployments 2024-10-12T13:00:00.000Z

We recently introduced the ability to manage deployment retention—a way to automatically clean up your project's deployments after a set period of time.

Starting November 18, 2024, we are changing the default retention policy for canceled deployments only. Canceled deployments will be automatically deleted after 30 days for all projects, unless your project has a custom deployment retention setting. There will be no impact to deployments that are not canceled.

Learn more about Deployment Retention.

Read more

Brooke Mosby
https://vercel.com/changelog/deployment-protection-now-supports-protected-rewrites Deployment Protection now supports protected rewrites 2024-10-11T13:00:00.000Z

We've improved how Vercel Authentication handles rewrites between protected deployments. If you have access to the deployment and it belongs to the same team as the original deployment, we now automatically grant access to the rewritten deployment.

Previously, when rewriting to a protected deployment, Vercel Authentication would redirect through vercel.com to authenticate the user, causing the rewrite to become a redirect.

Automatic access between protected rewrites is only applicable if you are already authenticated with Vercel on the original deployment. This new behavior does not apply to rewrites when you authenticate using Shareable Links, Protection Bypass for Automation, or Password Protection.

Read more about Deployment Protection in our docs.

Read more

Kit Foster
https://vercel.com/blog/add-3d-to-your-web-projects-with-v0-and-react-three-fiber Add 3D to your web projects with v0 and React Three Fiber 2024-10-10T13:00:00.000Z

React Three Fiber (R3F) is a powerful React renderer for three.js that simplifies building 3D graphics using React's component-based architecture. Whether you're building complex environments, animations, or interactive scenes, R3F makes it accessible—even if you're not an expert at math or physics.

With R3F support in v0, our AI-powered development assistant, you can incorporate 3D designs in your projects by chatting with v0 using natural language. Let's explore how to use v0 and R3F to create interactive 3D scenes to elevate your web designs.

Read more

Alli Pope
https://vercel.com/blog/how-emburse-increased-site-performance-by-4x-with-vercel How Emburse increased site performance by 4x with Vercel 2024-10-10T13:00:00.000Z

Emburse manages travel and expense for over 12 million users in 120 countries. They were operating a legacy stack and needed to modernize, so they partnered with Rangle, a leading digital transformation consultancy and Vercel Expert.

Together with Rangle, Emburse implemented Vercel, Next.js, and Sanity, significantly improving the site’s performance and speed while addressing key concerns for their marketing team.

Read more

Alli Pope
https://vercel.com/changelog/improved-monorepo-support-in-recent-previews Improved monorepo support in Recent Previews 2024-10-10T13:00:00.000Z

When you make a commit to a monorepo, the Recent Previews section on your team overview page will now show an expandable row containing preview, source, and deployment links for all deployments triggered by your commit, across all projects.

Recent Previews gives you easy access to the previews you have viewed or deployed recently. Learn more in the dashboard overview documentation.

Read more

Michael Wenzel wits Christopher Skillicorn Gary Borton Sam Saliba
https://vercel.com/blog/leveraging-vercel-and-the-ai-sdk-to-deliver-a-seamless-ai-powered-experience Leveraging Vercel and the AI SDK to deliver a seamless, AI-powered experience as a solo founder 2024-10-09T13:00:00.000Z

ChatPRD is an AI co-pilot designed for product managers, enabling them to write product requirements documents, brainstorm roadmaps, and improve overall efficiency around product work. As a solo founder, Claire Vo built ChatPRD from the ground up. In just nine months, the platform has garnered 20,000 users and is now focusing on expanding its features to support team collaboration.

Central to this rapid growth and development has been the AI SDK on Vercel.

Read more

Alli Pope
https://vercel.com/blog/how-chatbase-scaled-rapidly-with-vercels-developer-experience-and-ai-sdk How Chatbase scaled rapidly with Vercel's developer experience and AI SDK 2024-10-09T13:00:00.000Z

Chatbase helps companies build chat-based AI agents that are trained on their data to chat with users and perform tasks. It handles customer support, sales, lead generation, and more.

From the beginning, they prioritized building a platform that could move fast in the competitive market. To achieve this, they chose Vercel and Next.js as the tech stack for their app and marketing website, along with Vercel's AI SDK, which enabled them to quickly iterate and deliver AI-driven features.

By prioritizing iteration speed, Chatbase grew to 500K monthly visitors and $4M ARR in 1.5 years. Vercel's developer experience (DX) allows the team to focus on innovation, not infrastructure. The AI SDK enables rapid implementation of custom chats and model optimizations.

Read more

Alli Pope
https://vercel.com/blog/how-supabase-increased-signups-through-the-vercel-marketplace How Supabase increased signups through the Vercel Marketplace 2024-10-07T13:00:00.000Z

The Supabase integration on the Vercel Marketplace provided a frictionless onboarding experience, ensuring zero loss in fidelity or developer experience. Developers can easily set up and manage Supabase databases directly from the Vercel CLI or dashboard.

Since the launch, Supabase has seen a notable increase in new signups, as the Marketplace has become their largest partner channel. Based on Supabase’s increased business and early success, we’re excited for what’s next and the future of Vercel Marketplace.

Read more

Hedi Zandi
https://vercel.com/blog/ledgers-solution-to-traffic-spike-stability-with-vercel Navigating Web3 dynamism: Ledger's solution to traffic spike stability with Vercel 2024-10-04T13:00:00.000Z

In the world of crypto, market surges and unexpected events create unpredictable traffic spikes, like Gunna wearing a diamond-encrusted Ledger at the Met Gala.

For Ledger—a leading provider of hardware wallets—capturing this no-notice interest is crucial for an online presence that may see traffic fluctuate from 1 to 5 million users monthly. Navigating the dynamism of the crypto market requires a technical infrastructure as resilient and secure as Ledger’s hardware products.

Read more

Alina Weinstein
https://vercel.com/changelog/vercel-terraform-provider-now-supports-vercel-firewall Vercel Terraform Provider now supports Vercel Firewall resources 2024-10-04T13:00:00.000Z

The Vercel Terraform Provider now allows you to customize and control the Vercel Firewall through Infrastructure as Code (IaC).

Key resources and their capabilities include:

For example, to create a new rule that challenges requests where the user_agent contains curl:
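
A sketch of such a rule follows. The resource and attribute names here are assumptions based on typical provider conventions, not the confirmed schema, so consult the Vercel Terraform Provider documentation for the exact shape:

```hcl
# Sketch only: resource and attribute names below are assumptions.
resource "vercel_firewall_config" "example" {
  project_id = vercel_project.example.id

  rules {
    rule {
      name = "Challenge curl"
      condition_group {
        conditions {
          type  = "user_agent"
          op    = "inc" # i.e. "contains"
          value = "curl"
        }
      }
      action {
        action = "challenge"
      }
    }
  }
}
```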

Get started with the Terraform provider for Vercel today. If you already have Terraform installed, upgrade by running:
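
Assuming the standard Terraform workflow (the original upgrade command is not preserved here), this amounts to raising the provider's version constraint in your required_providers block and re-initializing:

```shell
terraform init -upgrade
```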

Read more

Sage Abraham
https://vercel.com/blog/serverless-servers-node-js-with-in-function-concurrency Serverless servers: Efficient serverless Node.js with in-function concurrency 2024-10-03T13:00:00.000Z

We’re sharing a first look at a new version of Vercel Functions with support for in-function concurrency that brings the best of servers to serverless functions.

We’ve been testing this new version with customers and are seeing a 20%-50% reduction in compute usage and respective cost reduction without latency impact.

It’s a serverless product optimized specifically for interactive workloads such as server-rendering of web pages, APIs, and AI applications. Vercel Functions continue to offer native Node.js support with accelerated cold-start performance based on V8 bytecode and instance pre-warming for production workloads.

Read more

Malte Ubl
https://vercel.com/changelog/in-function-concurrency-now-in-public-beta In-function concurrency now in public beta 2024-10-03T13:00:00.000Z

In-function concurrency is now in public beta, and allows a single function instance to handle multiple invocations concurrently, improving resource utilization by taking advantage of idle time in existing function instances.

Traditionally, serverless architecture maps one function instance to a single invocation. With in-function concurrency, overlapping invocations can increase efficiency by 20%-50%, reducing gigabyte-hours and lowering costs.

As part of the beta, we’re limiting the number of concurrent invocations per instance and will gradually increase the limit based on feedback. Note: this capability may increase latency for purely CPU-bound workloads.

In-function concurrency public beta is available for all Pro and Enterprise customers using Standard or Performance Function CPU. You can enable it through your dashboard and track resource savings in real time.

Read our blog post and documentation for more information.

Read more

Doug Parsons Tom Lienard Craig Andrews Javi Velasco Florentin Eckl Mariano Cocirio
https://vercel.com/blog/vercel-waf-upgrade-brings-persistent-actions-rate-limiting-and-api-control Vercel WAF upgrade brings persistent actions, rate limiting, and API control 2024-10-02T13:00:00.000Z

At Vercel Ship, we introduced the new Web Application Firewall (WAF), an application-layer firewall that complements our platform-wide firewall. This enables our customers to implement custom or managed rulesets, such as protection against the OWASP Top 10 risks.

Since its release, Vercel’s WAF has blocked billions of malicious requests, demonstrating its resilience and reliability across a wide variety of use cases, from small startups to large enterprise deployments.

Read more

Dan Fein
https://vercel.com/changelog/application-aware-observability-in-limited-beta Application-aware Observability now in limited beta 2024-10-02T13:00:00.000Z

We're beginning to roll out new Observability capabilities which will give enhanced insights into your application's performance and behavior in the Vercel dashboard.

This will provide detailed analytics across functions, data transfer, caching, and API requests to bring further observability to Vercel's framework-defined infrastructure.

New insights include:

  • Vercel Functions and external API requests: Monitor function behavior and external requests, including invocations, durations, and error rates

  • Vercel Edge Network: Track data transfer, ISR usage, and edge requests with detailed insights into cache success, revalidations, and geo-based performance

Observability is now in limited beta for current Monitoring customers and can be accessed from the new Observability tab in your Vercel projects.

For advanced platform observability, explore our integrations with Sentry, Datadog, Honeycomb, and more.

Read more

Ethan Shea Timo Lins Tobias Lins Malavika Tadeusz Chris Widmaier Dom Busser
https://vercel.com/changelog/vercel-waf-rate-limiting-now-generally-available Vercel WAF rate limiting now generally available 2024-10-02T13:00:00.000Z

Vercel Web Application Firewall (WAF) rate limiting is now generally available, giving you precise control over request volumes to your applications.

With over 15 parameters, including target path, headers, method, and cookies, you can define the business logic for rate limiting. Then, apply a rate-limiting algorithm tied to IP, JA4 digest, headers, or user agent to control the frequency of matching traffic within your set limits.

When paired with persistent actions, rate limiting can help reduce resource abuse across Edge Requests, Middleware, Data Transfer, and Function execution.

Rate limiting with a fixed-window algorithm is available today for Pro customers, with an additional token-bucket algorithm available to Enterprise customers. Pricing for rate limiting is regional, starting at $0.50 per 1 million allowed requests.

Add rate limiting using a template or read the rate limiting documentation to learn more.

Read more

Dany Volk Joseph Collins Andrew Barba Kevin Rupert
https://vercel.com/changelog/vercel-waf-now-supports-persistent-actions Vercel WAF now supports persistent actions 2024-10-02T13:00:00.000Z

Vercel Web Application Firewall (WAF) now supports persistent actions to block repeat offenders who trigger firewall rules.

These persistent actions enforce specific responses—such as blocking—against clients for a defined period, ranging from 1-60 minutes. While active, these actions prevent unnecessary processing by blocking requests earlier in their lifecycle, reducing edge request load.

You can apply persistence to existing rules for actions like deny, challenge, and rate-limiting, adding an extra layer of control to your firewall logic.

Learn more about persistent actions.

Read more

Andrew Barba Brooke Mosby Joseph Collins Sudais Moorad Kevin Rupert
https://vercel.com/changelog/streaming-now-enabled-by-default-for-all-node-js-vercel-functions Streaming now enabled by default for all Node.js Vercel Functions 2024-10-01T13:00:00.000Z

Streaming is now enabled by default for all Vercel Functions running on Node.js for Pro and Enterprise teams, marking the final step in the plan we published on July 8th, 2024. This means Vercel Functions can now send data to the client as it’s generated, instead of waiting for the entire response.
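
As an illustration, a Node.js function can now return a streamed response like the sketch below, which uses the Web-standard Response and ReadableStream APIs; the route handler name and chunk contents are hypothetical:

```typescript
// Sketch of a streaming function handler. The platform can forward each
// enqueued chunk to the client as it is produced, instead of buffering
// the full response.
export function GET(): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      // In a real handler these chunks would arrive incrementally,
      // e.g. tokens from an LLM or rows from a database.
      for (const chunk of ["Hello, ", "streaming ", "world"]) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/plain" },
  });
}
```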

The VERCEL_FORCE_NODEJS_STREAMING environment variable is no longer required—streaming is now applied automatically upon deployment.

Logging changes: Streaming responses will alter the format and frequency of your runtime logs. If you are using Log Drains, check that your ingestion pipeline can process the new log format and increased log frequency.

Read our blog post and documentation for more information.

Read more

Craig Andrews Javi Velasco Kiko Beats Mariano Cocirio
https://vercel.com/blog/accelerating-developer-velocity-and-creating-high-impact-web-teams Accelerating developer velocity and creating high-impact web teams 2024-09-27T13:00:00.000Z

High-performing web teams focus on innovation, not infrastructure. Vercel's framework-defined infrastructure accelerates developer velocity, allowing teams to build faster and more efficiently.

By handling the heavy lifting of cloud infrastructure, Vercel's Frontend Cloud empowers teams to focus on coding user interfaces and logic. This enables them to meet the six core principles of web delivery: speed, global presence, scalability, reliability, security, and dynamism. With infrastructure managed, teams can streamline workflows and collaboration, delivering impactful products.

Read more

Dan Fein
https://vercel.com/changelog/vercel-now-supports-ai-domains Purchase .ai domains directly on Vercel 2024-09-27T13:00:00.000Z

You can now purchase .ai domains on Vercel. Secure your new domain and use the Vercel AI SDK to create an application, or check out our AI templates for inspiration.

For a comprehensive list of all supported domains, check out the documentation here.

Read more

Dillon Mulroy Kylie Czajkowski Kunal Jain
https://vercel.com/changelog/vpc-peering-now-available-as-self-service-for-vercel-secure-compute VPC Peering now available as self-service for Vercel’s Secure Compute 2024-09-25T13:00:00.000Z

We’ve introduced major improvements to Secure Compute, now offering self-service capabilities for Enterprise customers, along with increased flexibility and control over Secure Compute networks.

Key updates include:

  • VPC Peering management: VPC peering connections can be initiated from AWS and accepted directly through the Vercel dashboard. Pending connections are clearly displayed for review and approval.

  • Failover regions: Failover regions can now be configured in the dashboard. If the active region is unavailable, Vercel will switch to a network in the failover region.

  • Improved UI for networks: A new networks page provides detailed information and configuration which includes peering connections, IP addresses, and projects.

This update simplifies network management and enhances secure, self-service connections between Vercel and AWS environments. Check out the documentation to learn more.

Read more

Miroslav Simulcik Meg Bird Marc Greenstock Bel Curcio
https://vercel.com/blog/preventing-infrastructure-abuse-with-vercel-firewall Preventing infrastructure abuse with Vercel Firewall 2024-09-24T13:00:00.000Z

In any given week, Vercel Firewall blocks around 1 billion suspicious TCP connections, with some days seeing upwards of 7 billion malicious HTTP requests. Vercel's platform is designed to automatically mitigate DDoS attacks, blocking thousands of these threats every day to keep your site secure and operational without user intervention. Vercel is built to minimize disruptions and safeguard your resources from unnecessary costs by serving only legitimate traffic.

Read more

Dan Fein Tom Lienard
https://vercel.com/blog/ai-sdk-3-4 AI SDK 3.4 2024-09-20T13:00:00.000Z

The AI SDK is an open-source toolkit for building AI applications with JavaScript and TypeScript. Its unified provider API allows you to use any language model and enables powerful UI integrations into leading web frameworks such as Next.js and Svelte.

Read more

Lars Grammel Jared Palmer Jeremy Philemon Nico Albanese
https://vercel.com/blog/from-cdns-to-frontend-clouds From CDNs to Frontend Clouds 2024-09-20T13:00:00.000Z

Web apps today are judged on six core principles: speed, global presence, scalability, dynamism, reliability, and security. Users and businesses now expect excellence in all six categories, making them non-negotiable.

As user experiences have become more engaging and dynamic, the limitations of Content Delivery Networks (CDNs) and Infrastructure as Code (IaC)—once the industry standards for application delivery—are becoming increasingly apparent.

CDNs, designed for static content, fail to meet the demands of modern, interactive and real-time web applications. At the same time, while IaC enables programmatic management, its use for web applications often leads to building undifferentiated tooling and systems rather than dedicating those resources to more bespoke, complex infrastructure challenges.

These technologies have not kept pace with the evolving web, leading to an emerging and compelling solution: frontend clouds that abstract away complex infrastructure, enabling next-generation content, experiences, and application delivery. This shift allows developers to focus on what truly matters—innovating to enhance web applications, drive business value, and delight users.

Read more

Malte Ubl Dan Fein
https://vercel.com/changelog/improvements-to-vercel-toolbar Improvements to Vercel Toolbar: Shrinking when inactive, removal of avatars, and more 2024-09-19T13:00:00.000Z

We have made a number of improvements to ensure the Vercel Toolbar is there when you need it but stays out of the way when you don't:

  • Shrinks when inactive: The shrunken toolbar shows the comment, share (or flags), and menu icons. The full toolbar will show on hover. You can turn off shrinking in the toolbar menu under Preferences.

  • No longer shows avatars: We removed the avatars of team members who viewed the deployment. This reduces visual noise and quiets the toolbar's presence in the network tab.

  • Normal text selection: The toolbar no longer adds its own highlight or opens the thread editor when text is selected. It shows a small comment indicator which can be clicked to start drafting a comment on the selected text.

To quickly toggle the toolbar, you can use ⌘ + . on macOS or Ctrl + . on Windows. You can also drag it to a different side of your page, and it will remember that position next time.

See the documentation for more information about the Vercel Toolbar.

Read more

wits Gary Borton Sam Saliba Christopher Skillicorn
https://vercel.com/changelog/install-marketplace-integrations-from-the-vercel-cli Install Marketplace Integrations from the Vercel CLI 2024-09-19T13:00:00.000Z

You can now install integrations from the Vercel Marketplace directly through the Vercel CLI.

The Vercel Marketplace offers native integrations that allow you to use provider products—currently Supabase, Redis, and EdgeDB—directly from the Vercel dashboard without leaving the platform or creating separate accounts.

Running the vc i command will:

  • Install the integration (e.g. vc i supabase to install Supabase)

  • Automatically provision resources as part of the integration installation, as required by the provider products

  • Surface enhanced error messages in the terminal for troubleshooting any installation issues

Check out the documentation to learn more.

Read more

Luka Hartwig Hedi Zandi
https://vercel.com/changelog/content-link-can-now-be-used-with-contentful Content Link can now be used with Contentful 2024-09-18T13:00:00.000Z

With Content Link, previously known as Visual Editing, you can click-to-edit content on your Vercel site, with a direct link to exactly where your content lives in your CMS.

This functionality is now available for Pro and Enterprise customers who use Contentful as their CMS and are on Contentful's Premium plan. To see all supported CMSs, visit our docs.

When enabled, Contentful’s APIs return source maps for certain visual fields that have links to the correct field in Contentful, such as rich text, description fields, and lists. Markdown is currently not supported.

Check out the documentation to get started.

Read more

wits Sam Saliba
https://vercel.com/blog/managing-275-thousand-pages-and-8-million-assets-with-isr Managing 275 thousand pages and 8 million assets at top speed with ISR 2024-09-17T13:00:00.000Z

As the world’s leading in-person car auction enterprise, Mecum Auction Company has sold some of the most famous vehicles in the world. And while their digital platform had capably evolved over the years, it was hitting its limit, hindering their ability to create listings quickly. With the help of digital agency Americaneagle.com, Mecum adopted a new, composable stack—giving them confidence that their website would be fast, performant, and reliable.  

Read more

Alli Pope
https://vercel.com/blog/isr-a-flexible-way-to-cache-dynamic-content ISR: A flexible way to cache dynamic content 2024-09-16T13:00:00.000Z

Incremental Static Regeneration (ISR) is a caching strategy that combines the perks of multiple rendering techniques to bring users dynamic content at cached speeds.

This guide explores how ISR fits into your overall caching strategy, how it can benefit your architecture, and how to implement it effectively. We'll compare ISR to traditional caching methods and provide real-world examples to illustrate its advantages.
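At its core, ISR is a stale-while-revalidate strategy: serve the cached page immediately, and if it is older than its revalidation window, regenerate it in the background. A minimal, framework-free sketch of the idea (the names and structure here are illustrative, not the Next.js implementation):

```typescript
// Illustrative stale-while-revalidate cache: the core idea behind ISR.
type Entry = { html: string; renderedAt: number };

function makeIsrCache(revalidateMs: number, render: () => string) {
  const cache = new Map<string, Entry>();
  return (path: string): string => {
    const hit = cache.get(path);
    const now = Date.now();
    if (hit && now - hit.renderedAt < revalidateMs) {
      return hit.html; // fresh: serve straight from the cache
    }
    if (hit) {
      // Stale: serve the old page now, regenerate in the background.
      queueMicrotask(() => cache.set(path, { html: render(), renderedAt: Date.now() }));
      return hit.html;
    }
    // First request for this path: render synchronously and cache it.
    const entry = { html: render(), renderedAt: now };
    cache.set(path, entry);
    return entry.html;
  };
}

const serve = makeIsrCache(60_000, () => `<p>rendered at ${Date.now()}</p>`);
console.log(serve("/blog") === serve("/blog")); // true: the second hit is served from cache
```

Visitors always get a cached-speed response; only the background regeneration pays the rendering cost.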

Read more

Alice Alexandra Moore
https://vercel.com/blog/summer-internship-at-vercel Deploying dreams: An inside look at a summer internship with Vercel 2024-09-13T13:00:00.000Z

Hello! I’m Aryan. I am currently a student at UC Berkeley, studying Electrical Engineering and Computer Science (EECS). This summer, I had the opportunity to be an intern at Vercel. It’s been an unforgettable experience. As my internship comes to a close and I head back to school, I want to share a behind-the-scenes look at what an internship at Vercel is like.

Read more

Aryan Vichare
https://vercel.com/blog/whats-new-in-react-19 What’s new in React 19 2024-09-04T13:00:00.000Z

React 19 is near. The React Core Team announced a React 19 release candidate (RC) this past April. This major version brings several updates and new patterns, aimed at improving performance, ease of use, and developer experience.

Many of these features were introduced as experimental in React 18, but they will be marked as stable in React 19. Here’s a high-level look at what you need to know to be ready.

Read more

Michael Novotny
https://vercel.com/changelog/sign-up-or-sign-in-with-a-one-time-password-otp Sign up or sign in with a One-Time Password (OTP) 2024-09-04T13:00:00.000Z

Sign up or sign in with email using a one-time password. This update offers three key benefits:

  • Enhanced security: 6-digit OTPs are single-use and short-lived, significantly reducing the risk of unauthorized access through intercepted or stolen credentials.

  • Flexible sign-up/sign-in: Start on one device or window, finish on another. No need to keep the original tab open.

  • No magic links: Avoid the hassle of clicking email links — enter the OTP to complete the process.
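The single-use and short-lived properties above can be illustrated with a minimal sketch. This is not Vercel's implementation, and the five-minute lifetime is an assumption of the example:

```typescript
import { randomInt } from "node:crypto";

// Minimal single-use, short-lived OTP store (illustrative only).
const TTL_MS = 5 * 60_000; // assumed 5-minute validity window
const pending = new Map<string, { code: string; expiresAt: number }>();

function issueOtp(email: string): string {
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0"); // 6 digits
  pending.set(email, { code, expiresAt: Date.now() + TTL_MS });
  return code;
}

function verifyOtp(email: string, code: string): boolean {
  const entry = pending.get(email);
  pending.delete(email); // single-use: any verification attempt consumes the code
  return !!entry && entry.code === code && Date.now() < entry.expiresAt;
}

const code = issueOtp("user@example.com");
console.log(verifyOtp("user@example.com", code)); // true
console.log(verifyOtp("user@example.com", code)); // false: already consumed
```

Because the code is consumed on first use and expires quickly, an intercepted code has a very narrow window of usefulness.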

Sign up today to get started.

Read more

Balázs Orbán Christopher Skillicorn Marc Greenstock Bel Curcio
https://vercel.com/blog/transforming-customer-support-with-ai-how-vercel-decreased-tickets Transforming customer support with AI: How Vercel decreased tickets by 31% 2024-09-03T13:00:00.000Z

McKinsey's latest AI survey shows 65% of organizations now regularly use AI — nearly double from just ten months ago, with many using it to increase efficiency in critical areas like customer support.

At Vercel, we integrated AI into our support workflow. Our AI agent reduced human-handled tickets by 31%, allowing us to maintain high support standards while serving a growing customer base.

Read more

Alina Weinstein
https://vercel.com/changelog/restrict-repository-deployments-to-specific-teams Restrict repository deployments to specific teams 2024-09-02T13:00:00.000Z

Enterprise customers can now restrict deployment permissions for their entire Git scope, for example their GitHub organization, to their Vercel teams.

Restricted deployment permissioning ensures that all repositories in the protected Git scope can only be deployed by authorized teams on Vercel.

This protects against intentional and accidental deployments of protected repositories.

Learn more in our documentation.

Read more

Christopher Skillicorn Kit Foster Bel Curcio
https://vercel.com/changelog/rest-api-for-the-vercel-firewall REST API for the Vercel Firewall 2024-09-02T13:00:00.000Z

The Vercel Firewall now has a REST API, with the ability to:

  • Read the current Firewall configuration

  • Create and manage custom rules

  • Enable or disable Attack Challenge Mode

  • Control and manage a list of blocked IP addresses

Learn more and get started today with the REST API documentation.

Read more

Sage Abraham Andrew Barba
https://vercel.com/blog/enhancing-security-of-backend-connectivity-with-openid-connect Enhancing security of backend connectivity with OpenID Connect 2024-08-28T13:00:00.000Z

In 2014, the OpenID Foundation introduced a new standard for authenticating people online, known as OpenID Connect (OIDC). This standard was initially created to simplify the authentication process for users, providing a streamlined and secure way to log into various services. Today, Vercel leverages OIDC to enhance the security of backend connectivity, enabling developers to replace long-lived credentials with more secure, temporary tokens.

Read more

Dan Fein Marc Greenstock
https://vercel.com/blog/introducing-the-vercel-marketplace Introducing the Vercel Marketplace 2024-08-28T13:00:00.000Z

Last year, we added storage solutions to our platform, introducing our first-party Blob and Edge Config, as well as partner solutions like Postgres by Neon and KV by Upstash. We heard your feedback—you want more providers and different types of integrations.

Today, we’re launching the first version of the Vercel Marketplace. It supports storage solutions from Supabase, Redis, and EdgeDB, at the same price as going direct. These integrations come with features like integrated billing, direct connections to provider consoles, and more.

Read more

Hedi Zandi Tom Occhino
https://vercel.com/changelog/integrated-billing-for-supabase-redis-and-edgedb Integrated billing for Supabase, Redis, and EdgeDB 2024-08-28T13:00:00.000Z

Vercel now has native integrations with Supabase, Redis, and EdgeDB.

Start for free or purchase storage at the same price as going direct. Our new storage add-ons include integrated billing, direct access to provider consoles, and more.

In the coming months, we will begin a zero-downtime migration for Vercel Postgres and KV to our new marketplace. Postgres will transition to our Neon integration, and KV will transition to our Upstash integration. No action is required on your part.

Get started with the Vercel Marketplace, available to customers on all plans.

Read more

Hedi Zandi Dima Voytenko Dom Busser Fabio Benedetti Adrian Cooney Marc Greenstock Luka Hartwig Justin Kropp
https://vercel.com/changelog/lower-pricing-for-log-drains Lower pricing for Log Drains 2024-08-28T13:00:00.000Z

Log Drains recently became generally available with a new usage-based pricing model.

For the past three months, customers have been able to monitor their Log Drains usage on the dashboard, sample traffic, and reconfigure sources as needed.

Based on your feedback, we've reduced the price of Log Drains by increasing the included data transfer by 300%—from 5GB to 20GB. Log Drains will cost $10 per 20GB (previously $10 per 5GB) at the start of your next billing cycle.

You can view your current Log Drains usage on the Usage page.

Read more

Luc Leray Tobias Lins Chris Widmaier
https://vercel.com/changelog/configure-retention-periods-for-deployments Configure retention periods for deployments 2024-08-23T13:00:00.000Z

You can now configure the retention period for deployments through the dashboard and CLI.

For example, canceled and errored deployments might be set to 30 days retention, while production deployments might be set to 1 year. Recently deleted deployments are shown in your project settings and can be instantly restored within 30 days of deletion.

Learn more in our documentation.

Read more

Brooke Mosby Pranathi Peri
https://vercel.com/blog/devolver-ships-game-websites-73-faster-with-vercel Devolver ships game websites 73% faster with Vercel 2024-08-21T13:00:00.000Z

As publishers of leading independent games, the team at Devolver is never short on work. But as a small engineering team, they felt limited by their clunky infrastructure and were spending more time on system management than they needed. With Vercel, the Devolver team was able to cut time spent on system management and configuration by more than half, allowing them to bring game websites to life 73% faster. Soon after adopting Vercel, the team was even able to launch five websites during a 30-minute press conference without any issues.

Read more

Alli Pope
https://vercel.com/changelog/bytecode-caching-for-serverless-functions-by-default Bytecode caching for Serverless Functions by default 2024-08-21T13:00:00.000Z

We recently introduced bytecode caching—an experimental feature built on our new Rust-based core for Vercel Functions—designed to drastically reduce start times during increasingly rare cold starts. Even when cold starts do occur, their impact is now minimal and barely noticeable.

After validating the stability and performance improvements of bytecode caching, the feature is now stable and the default for all Node.js 20+ Vercel Functions.

This change reduces global cold start times by up to 60%, exceeding our initial benchmarks and observations. The improvement is particularly significant for functions that load a large amount of JavaScript, with smaller functions experiencing less impact.

Bytecode caching is automatically enabled for all functions running on Node.js 20 and using CommonJS (e.g., Next.js). Additionally, we're working to extend this support to include ESM for broader compatibility. Learn more in our blog post.

Read more

Javi Velasco Tom Lienard Mariano Cocirio
https://vercel.com/changelog/view-logs-over-time-with-new-time-series-chart View logs over time with new time series chart 2024-08-21T13:00:00.000Z

You can now visualize Runtime Logs with a time series chart.

  • Observe the distribution of info, warning, and error logs over time

  • Analyze and understand your application's behavior more effectively

  • Use the drag-to-select feature to filter logs for specific time ranges

Learn more about Runtime Logs.

Read more

Darpan Kakadia Tobias Lins Timo Lins
https://vercel.com/blog/using-the-ai-sdk-to-fix-edge-case-errors-in-our-code Using the AI SDK to fix edge-case errors in our code 2024-08-15T13:00:00.000Z

Recently, there was an issue affecting our customers when trying to purchase a domain containing non-English characters. This problem became apparent when these domain purchases consistently failed, creating a significant roadblock for users wanting to expand their online presence with internationalized domain names (IDNs).

Read more

Rickey McGregor Dillon Mulroy
https://vercel.com/blog/how-to-build-scalable-ai-applications How to build scalable AI applications 2024-08-12T13:00:00.000Z

In today's AI-driven landscape, your business's competitive edge lies in how effectively you integrate AI into your product and workflows.

This guide focuses on three critical aspects of building scalable AI applications:

Read more

Alice Alexandra Moore
https://vercel.com/blog/update-regarding-vercel-service-disruption-on-august-7-2024 Update regarding Vercel service disruption on August 7, 2024 2024-08-09T13:00:00.000Z

On August 7, 2024, Vercel's Edge Middleware and Edge Functions experienced a significant outage affecting many customers. We sincerely apologize for the service disruption.

Vercel’s platform is designed to minimize the risk of global downtime. As standard practice, we use staggered rollouts for both code and configuration changes. Every aspect of our infrastructure is designed to gracefully fail over to the next available region in the event of an incident, with no single point of failure across infrastructure components. However, on Wednesday, an upstream provider for a subset of our compute infrastructure went into a globally erroneous configuration state.

This event tested our infrastructure's resilience and how we respond to a global provider failure. Let’s break down what happened, how we responded, and the steps we’re taking to eliminate this as a possible failure mode.

Read more

Guillermo Rauch
https://vercel.com/blog/vercel-ai-sdk-3-3 Vercel AI SDK 3.3 2024-08-06T13:00:00.000Z

The Vercel AI SDK is a toolkit for building AI applications with JavaScript and TypeScript. Its unified API allows you to use any language model and provides powerful UI integrations into leading web frameworks such as Next.js and Svelte.

Read more

Lars Grammel Jared Palmer Jeremy Philemon Nico Albanese
https://vercel.com/blog/how-to-integrate-ai-into-your-business How to integrate AI into your business 2024-08-06T13:00:00.000Z

Implementing AI in your business can be challenging due to the rapid pace of change, the complexity of integration, and the need for specialized skills.

This guide helps leaders identify and evaluate AI use cases. We'll also show you how Vercel's Frontend Cloud and AI SDK can speed up your AI projects. Companies like Tome, Chick-fil-A, Chatbase, Runway, and Suno are already using these tools to bring AI into their apps and workflows.

Read more

Hugo Charré Alice Alexandra Moore
https://vercel.com/changelog/filter-by-custom-date-ranges-in-web-analytics Filter by custom date ranges in Web Analytics 2024-08-06T13:00:00.000Z

You can now choose custom date ranges in Web Analytics. Select any custom time period in the date range picker, or drag across the graph to quickly focus on a specific period.

Learn more about Web Analytics or enable Web Analytics for your project.

Read more

Timo Lins
https://vercel.com/changelog/improved-live-mode-in-runtime-logs Improved Live Mode in Runtime Logs 2024-08-05T13:00:00.000Z

You can now toggle live streaming for Runtime Logs to update every ~5 seconds without clearing existing logs or manual refreshes.

Runtime logs capture crucial information from server-side rendering, API routes, Vercel Functions, and more. For advanced use cases, you can export logs to external endpoints or integrations using Log Drains.

Learn more about Runtime Logs.

Read more

Luc Leray Timo Lins Darpan Kakadia
https://vercel.com/blog/protecting-your-app-and-wallet-against-malicious-traffic Protecting your app (and wallet) against malicious traffic 2024-08-02T13:00:00.000Z

Let's explore how to block traffic with the Firewall, set up soft and hard spend limits, apply code-level optimizations, and more to protect your app against bad actors.

If you’re on our free tier, you don’t need to worry. When your app passes the included free usage, it is automatically paused and never charged.

Read more

Lee Robinson
https://vercel.com/changelog/improved-user-experience-for-account-settings Improved user experience for Account Settings 2024-08-02T13:00:00.000Z

We've revamped Account Settings with a new, intuitive navigation structure, breaking it down into three sections: Overview, Activity, and Settings.

The Overview page now offers a quick snapshot of your teams and domains, including the option to request access to teams you're not part of.

The Activity page presents a chronological list of events for the last 12 months. 

The Settings page consolidates all user-specific options, including authentication, billing, and access tokens. 

This streamlined layout aims to enhance clarity and simplify account management for all users.

Read more

Nanda Syahrasyad Kylie Czajkowski Pranathi Peri
https://vercel.com/blog/beyond-menu-scaling-with-hypertune-and-vercel Achieving feature rollouts with ultra-low latency and zero impact to conversion 2024-08-01T13:00:00.000Z

Beyond Menu is a popular food delivery service in the US that connects restaurants and diners. Their Next.js app is deployed on Vercel and serves millions of hungry visitors every month.

To scale their development, they decided to adopt feature flags for gradual rollouts, instant rollbacks, A/B testing, trunk-based development and easier collaboration both internally and with beta users.

They knew they needed to evaluate feature flags and A/B tests on both the server and the client. And since they used the App Router, the solution needed to work with React Server Components, Client Components and different rendering modes like static, dynamic and partial prerendering.

At Beyond Menu, every millisecond impacts conversion, so they turned to Vercel's Edge Config and Hypertune for seamless feature flag management without layout shifts.

Read more

Alli Pope
https://vercel.com/blog/how-google-handles-javascript-throughout-the-indexing-process How Google handles JavaScript throughout the indexing process 2024-07-31T13:00:00.000Z

Understanding how search engines crawl, render, and index web pages is crucial for optimizing sites for search engines. Over the years, as search engines like Google change their processes, it’s tough to keep track of what works and doesn’t—especially with client-side JavaScript.

Read more

Giacomo Zecchini Alice Alexandra Moore Ryan Siddle Malte Ubl
https://vercel.com/changelog/performance-improvements-and-setting-update-for-the-vercel-toolbar Performance improvements and setting update for the Vercel Toolbar 2024-07-30T13:00:00.000Z

The Vercel Toolbar now loads up to 10x faster and uses hardware acceleration for smoother interactions. The toolbar has features like comments and feature flags, and developer tools like the interaction timing tool for optimizing INP.

You can also now toggle toolbar visibility based on the environment (preview or production) for your team or project from the dashboard. This option is under the "Vercel Toolbar" section in general settings.

When the toolbar is on, individual users can still hide and unhide it using the keyboard shortcut ⌘ + . (Mac) or Ctrl + . (Windows), or disable it for their session with the option in the toolbar menu (☰).

Learn more about toolbar settings and functionality.

Read more

George Karagkiaouris wits
https://vercel.com/changelog/fasthtml-htmx-python-vercel FastHTML projects can now be deployed with zero configuration 2024-07-29T13:00:00.000Z

You can now deploy FastHTML Python projects on Vercel with zero configuration.

FastHTML is a next-generation Python web framework for fast, scalable web applications with minimal, compact code. It builds on popular foundations like ASGI and HTMX. You can now deploy FastHTML with Vercel CLI or by pushing new changes to your Git repository.

Deploy the FastHTML template or run vercel init fasthtml in your terminal to get started.

Read more

Nathan Rajlich Sean Massa
https://vercel.com/blog/flags-as-code-in-next-js Flags as code in Next.js 2024-07-26T13:00:00.000Z

We recently introduced a new Flags SDK that lets you use feature flags in Next.js and SvelteKit, and works with any feature flag provider, or with no flag provider at all. It's not meant to compete with other feature flag providers. Instead, it's a tool that sits between your application and the source of your flags, helping you follow best practices for feature flags and experiments while keeping your website fast.

Follow along below to get started with the Flags SDK, beginning with a simple feature flag to more sophisticated cases, discussing tradeoffs along the way.
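The pattern is "flags as code": declare a flag once with a decide function, then call it wherever it's needed. A self-contained sketch of the shape (the real SDK layers overrides, caching, and Vercel Toolbar integration on top of this):

```typescript
// Simplified "flag as code" pattern; illustrative, not the SDK source.
type FlagDefinition<T> = { key: string; decide: () => T | Promise<T> };

function flag<T>(def: FlagDefinition<T>): (() => Promise<T>) & { key: string } {
  // Evaluation happens at call time, so the decision can use request context.
  const evaluate = async () => def.decide();
  return Object.assign(evaluate, { key: def.key });
}

// Declared once, imported and called anywhere in the app.
const showBanner = flag({ key: "show-banner", decide: () => false });

showBanner().then((enabled) => console.log(showBanner.key, enabled));
```

Keeping the decision logic behind a single declared function is what lets tooling discover, override, and log every flag consistently.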

Read more

Dominik Ferber
https://vercel.com/blog/elkjops-digital-transformation-with-next-js-and-vercel Elkjøp's Digital Transformation: Powering Retail Innovation with Next.js and Vercel 2024-07-24T13:00:00.000Z

With over $1B in revenue flowing through their digital properties, Elkjøp (Elgiganten), Nordic subsidiary of Currys PLC and leading consumer electronics retailer in the region, knew their digital presence needed to reflect their in-store commitment to innovation and excellence. Their previous ecommerce platform, built on Angular and self-hosted on Kubernetes, had become a source of frustration for both customers and internal teams. Slow page loads, SEO struggles, and inefficient developer experience were impacting the bottom line and hindering their ability to deliver the exceptional online shopping experience their customers deserved.

Read more

Greta Workman
https://vercel.com/changelog/instantly-redirect-traffic-using-custom-vercel-firewall-rules Instantly redirect traffic using custom Vercel Firewall rules 2024-07-24T13:00:00.000Z

You can now redirect requests to a new page using custom Firewall rules, adding to the existing challenge and block actions.

Publishing custom rules does not require a new deployment and will instantly propagate across the global Vercel Edge Network. Therefore, using custom rule redirects in moderation could provide a fast alternative to Edge Network redirects, particularly in emergency situations.

Firewall redirects execute before Edge Network configuration redirects (e.g. vercel.json or next.config.js) are evaluated.

Custom rules are available for free on all plans.

Read more

Andrew Barba Joseph Collins
https://vercel.com/changelog/improvements-to-command-line-logs Improvements to command line logs 2024-07-24T13:00:00.000Z

Vercel CLI v35 introduces new commands to access deployment and runtime logs:

  • vercel deploy --logs deploys and shows build logs

  • vercel inspect --logs shows build logs for an existing deployment

  • vercel logs now follows runtime logs of an existing deployment

You can now use the --json option to stream logs as JSON. This makes it easier to parse and filter logs using tools like jq.
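JSON-formatted log lines can be filtered programmatically on any field. A sketch of the idea in TypeScript, using a simplified line shape (the actual CLI output schema has more fields):

```typescript
// Filter JSON log lines by level; field names simplified for illustration.
const lines = [
  '{"level":"info","message":"listening on :3000"}',
  '{"level":"error","message":"upstream timeout"}',
];

const errors = lines
  .map((line) => JSON.parse(line) as { level: string; message: string })
  .filter((log) => log.level === "error")
  .map((log) => log.message);

console.log(errors); // ["upstream timeout"]
```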

To use these features, update to the latest version of the Vercel CLI:

Read more

Damien Simonin Feugas Julia Shi
https://vercel.com/blog/turbopack-moving-homes Turbopack updates: Moving homes 2024-07-23T13:00:00.000Z

Turbopack is a new JavaScript/TypeScript bundler we’ve been cooking at Vercel. Building on 10+ years of learnings from webpack, we want to build a bundler that can be used with many frameworks.

We’re moving the Turbopack codebase into the Next.js repository—and wanted to share an update on our progress with Turbopack so far, as well as where we’re headed.

Read more

Benjamin Woodruff Anthony Shew Tim Neutkens
https://vercel.com/blog/how-to-choose-the-best-rendering-strategy-for-your-app How to choose the best rendering strategy for your app 2024-07-23T13:00:00.000Z

Web rendering has evolved from simple server-rendered HTML pages to highly interactive and dynamic applications, and there are more ways than ever to present your app to users.

Static Site Generation (SSG), Server-Side Rendering (SSR), Client-Side Rendering (CSR), Incremental Static Regeneration (ISR), and experimental Partial Prerendering (PPR) have all been developed to optimize performance, SEO, and user experience in various situations.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/automatically-skip-unnecessary-deployments-in-monorepos Automatically skip unnecessary deployments in monorepos 2024-07-22T13:00:00.000Z

Vercel now automatically skips builds for unchanged code in your monorepo.

Projects without changes in their source code (or the source code of internal dependencies) will be skipped, reducing build queuing and improving the time to deployment for affected projects.

This feature is powered by Turborepo, and works with any monorepo using workspaces. For more advanced customization, like canceling builds based on branches, you can configure an Ignored Build Step.

Learn more about skipping unaffected projects.

Read more

Tom Knickman Gaspar Garcia Mehul Kar Nicholas Yang Dimitri Mitropoulos
https://vercel.com/changelog/longer-history-available-in-speed-insights Longer history available in Speed Insights 2024-07-22T13:00:00.000Z

We've increased the viewable history in Speed Insights for all plan types:

  • Hobby: Now 7 days (up from 24 hours)

  • Pro: Now 30 days (up from 7 days)

  • Enterprise: Now 90 days (up from 30 days)

Measure your site's performance over longer periods, at no additional cost.

Learn more about Speed Insights or enable Speed Insights for your project.

Read more

Timo Lins Damien Simonin Feugas Tobias Lins
https://vercel.com/changelog/improvements-to-support-center Improvements to Support Center 2024-07-19T13:00:00.000Z

The Support Center now has an improved design to make it easier to understand the state of your support cases. You can now find cases by:

  • Searching the subject lines

  • Filtering by status

  • Sorting by Last Updated, Date Created, and Severity

Support Center is available to Pro and Enterprise customers.

Read more

Aryan Vichare John Phamous Matthew Sweeney
https://vercel.com/changelog/new-utilities-to-work-with-vercel-functions New utilities to work with Vercel Functions 2024-07-17T13:00:00.000Z

@vercel/functions now includes new utilities:

  • geolocation: Returns location information of the incoming request

  • ipAddress: Returns the IP address of the incoming request

  • getEnv: Returns system environment variables from Vercel
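Conceptually, these helpers read information that Vercel's proxy attaches to each incoming request as headers. A self-contained sketch of what an IP-address helper does (an illustration of the mechanism, not the package source):

```typescript
// Illustration: a proxied client IP is read from forwarding headers.
function ipAddressOf(headers: Map<string, string>): string | undefined {
  // x-forwarded-for may hold a comma-separated chain; the client is the first hop.
  return headers.get("x-forwarded-for")?.split(",")[0]?.trim();
}

const headers = new Map([["x-forwarded-for", "203.0.113.7, 10.0.0.1"]]);
console.log(ipAddressOf(headers)); // "203.0.113.7"
console.log(ipAddressOf(new Map())); // undefined
```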

Install the latest package to use these methods today:

Learn more in the documentation.

Read more

Kiko Beats
https://vercel.com/changelog/improved-cdn-performance Improved CDN Performance 2024-07-16T13:00:00.000Z

We've improved our Edge Network performance by increasing the initial TCP congestion window by 300%. This enhancement allows sending more data in the initial and subsequent round-trips, resulting in faster page loads for websites of all sizes.

End users will experience significant speed improvements when first loading any site hosted on Vercel, with many sites seeing up to 3x faster initial page loads. The larger initial congestion window allows data transfer to ramp up more quickly, reaching higher speeds in fewer round-trips. This optimization is particularly beneficial for high-latency connections, such as those on mobile devices.

This performance upgrade is available immediately for all Vercel customers across all plans, with no action required. Your sites will automatically benefit from these improvements without any changes needed on your part.

Read more

Casey Gowrie Joe Haddad
https://vercel.com/changelog/fast-origin-transfer-is-now-automatically-compressed Fast Origin Transfer is now automatically compressed 2024-07-15T13:00:00.000Z

We’ve improved Fast Origin Transfer—our Edge Network’s ability to transfer data from every region globally to the origin—to be compressed by default.

Fast Origin Transfer is incurred when using any of Vercel’s compute products, like Functions, Middleware, and Incremental Static Regeneration (ISR). Starting today, all data transfer between edge regions and the origin location is now automatically compressed. This matches the behavior of Fast Data Transfer.

Learn more about Fast Origin Transfer and how to optimize.

Read more

Tom Lienard Craig Andrews Doug Parsons
https://vercel.com/changelog/log-drains-now-support-the-vercel-firewall Log Drains now support the Vercel Firewall 2024-07-15T13:00:00.000Z

You can now drain Vercel Firewall actions to external providers through Log Drains.

Requests denied by the Vercel Firewall will be drained with the firewall source. This includes the following events:

  • Requests blocked by a Custom Rule

  • Requests blocked by Challenge Mode

  • Requests blocked by Managed Rules (e.g. OWASP CRS)

  • Requests blocked by an IP Rule

If a rule is set to log or bypass, requests will not be sent to Log Drains. Firewall actions are also surfaced in Monitoring.

Learn more about Log Drains.

Read more

Andrew Barba Joseph Collins Sage Abraham
https://vercel.com/changelog/vercel-firewall-now-supports-localized-challenge-pages Vercel Firewall now supports localized challenge pages 2024-07-10T13:00:00.000Z

The Vercel Firewall now localizes the challenge page text to 22 different languages.

Challenges are automatically served for malicious traffic or when defined through custom rules. The updated page also features a new design, which supports light and dark mode.

Learn more about the Vercel Firewall.

Read more

Andrew Barba Joseph Collins Sage Abraham Kevin Rupert Evil Rabbit
https://vercel.com/changelog/oidc-federation-now-available-in-beta OpenID Connect (OIDC) Federation now available in Beta 2024-07-09T13:00:00.000Z

Vercel now supports OpenID Connect (OIDC) Federation, enabling you to enhance your security by replacing long-lived environment variable credentials with short-lived, RSA-signed JWTs for external requests in both builds and Vercel Functions.

You can now leverage Vercel's OIDC Identity Provider (IdP) to issue short-lived tokens for cloud providers such as AWS, Azure, GCP, and more.

Enable OIDC in your project's security settings and leverage the @vercel/functions package for integration with third-party providers, like this:
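For the AWS case, a sketch of a function that exchanges the Vercel-issued OIDC token for temporary AWS credentials via @vercel/functions. The environment variable names and bucket are assumptions of this example:

```typescript
import { awsCredentialsProvider } from "@vercel/functions/oidc";
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// Exchange the Vercel-issued OIDC token for temporary AWS credentials.
const s3 = new S3Client({
  region: process.env.AWS_REGION,
  credentials: awsCredentialsProvider({
    roleArn: process.env.AWS_ROLE_ARN!, // IAM role configured to trust Vercel's OIDC IdP
  }),
});

export async function GET() {
  const result = await s3.send(
    new ListObjectsV2Command({ Bucket: process.env.BUCKET_NAME }),
  );
  return Response.json({ keys: result.Contents?.map((o) => o.Key) ?? [] });
}
```

No long-lived AWS access keys are stored anywhere in the project; the credentials exist only for the duration of the request.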

Learn more about OpenID Connect Federation in the documentation.

Read more

Marc Greenstock Bel Curcio Christopher Skillicorn
https://vercel.com/changelog/improvements-to-runtime-logs Improvements to Runtime Logs 2024-07-08T13:00:00.000Z

Runtime logs now have improved filtering and visibility of request details:

  • Query Params Visibility: View query parameters for each request directly in the UI.

  • Request ID Filtering: Filter logs by request ID using the new filter icon next to each ID.

These improvements are available to all Vercel customers.

Read more

Julia Shi
https://vercel.com/blog/understanding-vercel-functions Understanding Vercel Functions 2024-07-05T13:00:00.000Z

Vercel Functions run code in response to user traffic without the need to manage your own infrastructure, provision servers, or manage hardware.

Read more

Lee Robinson
https://vercel.com/blog/vercel-functions-streaming-to-be-framework-agnostic Function streaming to be framework-agnostic on Vercel 2024-07-04T13:00:00.000Z

In 2023, Vercel Functions added support for streaming HTTP responses.

This feature has been enabled for frameworks like Next.js (App Router), SvelteKit, Remix, and more. We've been progressively rolling out streaming to more frameworks over the past two years, and we're beginning to roll out streaming for all functions and compatible frameworks.

Read more

Javi Velasco Mariano Cocirio Tom Lienard Craig Andrews Doug Parsons
https://vercel.com/changelog/easier-toolbar-setup-for-sveltekit-and-other-vite-based-frameworks Easier toolbar setup for SvelteKit and other Vite-based frameworks 2024-07-04T13:00:00.000Z

Vite-based frameworks such as SvelteKit, Remix, Nuxt, or Astro can now more easily integrate with the Vercel Toolbar in both local and production environments. The Toolbar enables you to comment on deployments, toggle feature flags, view draft content from a CMS, and more.

The updated @vercel/toolbar package offers a Vite plugin and client-side function for injection and configuration, and can be integrated like this:
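A sketch of the two pieces, based on the package's documented Vite integration; check the documentation for the exact import paths for your framework:

```typescript
// vite.config.ts: register the toolbar's Vite plugin.
import { defineConfig } from "vite";
import { vercelToolbar } from "@vercel/toolbar/plugins/vite";

export default defineConfig({
  plugins: [vercelToolbar()],
});

// Client entry: mount the toolbar at runtime (e.g. only outside production).
// import { mountVercelToolbar } from "@vercel/toolbar/vite";
// mountVercelToolbar();
```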

Check out the documentation to learn more.

Read more

Simon Holthausen
https://vercel.com/changelog/vercel-functions-to-enable-streaming-by-default Streaming to be enabled by default for all Node.js Vercel Functions 2024-07-04T13:00:00.000Z

Streaming will soon be enabled by default for all Node.js Vercel Functions.

This change will be effective for Hobby accounts starting July 8th, 2024; and for Pro and Enterprise accounts starting October 1st, 2024.

To enable streaming as the default immediately for all your Vercel Functions, set the VERCEL_FORCE_NODEJS_STREAMING environment variable in your project to true. Streaming will be enabled on your next deployment.
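With streaming enabled, a function can flush its response body incrementally using the web-standard ReadableStream API (available in Node.js 18+). A minimal sketch of a handler body:

```typescript
// Minimal streaming handler using web-standard APIs (Node.js 18+).
// In a route handler this function would be exported as GET.
function GET(): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      // Each enqueued chunk is flushed to the client as it becomes available.
      controller.enqueue(encoder.encode("hello "));
      controller.enqueue(encoder.encode("world"));
      controller.close();
    },
  });
  return new Response(stream, { headers: { "content-type": "text/plain" } });
}

GET().text().then((body) => console.log(body)); // "hello world"
```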

Streaming responses from functions will change the format and frequency of your runtime logs. If you are using Log Drains, you should ensure that your ingestion pipeline can handle the new format and increased frequency.

Check out this blog post and our streaming documentation for more details.

Read more

Javi Velasco Kiko Beats Craig Andrews
https://vercel.com/changelog/new-webhook-for-promotion-events New deployment promotion event 2024-07-03T13:00:00.000Z

Get notified after a deployment promotion by subscribing to the new deployment.promoted event through a webhook.

A promotion is the act of assigning your production domains to a deployment, so it starts serving your production traffic. This new event will notify you when:

  • Deployments are automatically promoted and domains are assigned (default)

  • Deployments are explicitly promoted from the CLI, API, or Dashboard.
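A webhook endpoint can branch on the event's type field. A minimal dispatch sketch with a simplified payload shape (the real payload carries more fields than shown here):

```typescript
// Dispatch on the webhook event type; payload shape simplified for illustration.
type VercelWebhookEvent = {
  type: string;
  payload: { deployment?: { id: string } };
};

function handleWebhook(event: VercelWebhookEvent): string | null {
  if (event.type !== "deployment.promoted") return null;
  const id = event.payload.deployment?.id ?? "unknown";
  return `Deployment ${id} is now serving production traffic`;
}

console.log(
  handleWebhook({ type: "deployment.promoted", payload: { deployment: { id: "dpl_123" } } }),
);
```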

Learn more about promotions or see the full list of events.

Read more

Mark Knichel Mariano Cocirio
https://vercel.com/changelog/sveltekit-now-supported-in-vercel-flags SvelteKit now supported in @vercel/flags 2024-07-03T13:00:00.000Z

Vercel is extending its newly introduced approach to working with feature flags to SvelteKit, with the v2.6.0 release of @vercel/flags.

With @vercel/flags/sveltekit you can now implement feature flags in your SvelteKit application code and call them as functions. Using this pattern automatically respects overrides set by the Vercel Toolbar, and integrates with our Developer Experience Platform features like Web Analytics and Runtime Logs.

Learn more about Vercel feature flags with SvelteKit in our documentation and deploy your own SvelteKit app with feature flags here.

Read more

Dominik Ferber
https://vercel.com/changelog/inspect-your-deployment-source-and-build-output-files Inspect your deployment source and build output files 2024-07-01T13:00:00.000Z

The UI for inspecting your deployment source and build output files is improved. Use the deployment Source tab to see what goes into a deployment and what gets created from the build process.

Read more

John Phamous Rohan Taneja
https://vercel.com/changelog/spend-management-now-pauses-production-deployments-by-default Spend Management now pauses production deployments by default 2024-06-27T13:00:00.000Z

Based on your feedback, Spend Management now pauses production deployments by default when your set amount is reached.

Spend Management allows you to receive notifications, trigger a webhook, and pause projects when metered usage exceeds the set amount within the current billing cycle. This stops you from incurring further costs from your production deployments.

  • You'll receive realtime notifications when your spending approaches and exceeds the set amount. For further control, you can continue to use a webhook in addition to automatic project pausing

  • This includes Web and Email notifications at 50%, 75%, and 100%. You can also receive SMS notifications when your spending reaches 100%

  • Hobby customers will have their projects automatically paused when reaching the included free tier limits and do not need Spend Management

Check out our documentation to learn more.

Read more

Lee Robinson Matthew Sweeney
https://vercel.com/changelog/openai-will-not-support-the-hong-kong-region-hkg1-for-functions OpenAI will not support the Hong Kong region (hkg1) for Functions 2024-06-27T13:00:00.000Z

Vercel customers making API requests to OpenAI from Functions in Hong Kong (hkg1) may have received an email from OpenAI identifying API traffic from a region that OpenAI does not currently support.

OpenAI will take additional steps to block API traffic from unsupported countries and territories on July 9. We understand this block will include Functions in the Hong Kong region on Vercel. While the majority of functions do not execute in this region, Edge Functions may require updates to the execution region.

You can prevent this change from affecting your deployments by specifying allowed regions for your functions and excluding Hong Kong. Changing regions requires redeploying your application.

Learn more about OpenAI's supported regions.

Read more

Lee Robinson
https://vercel.com/changelog/performance-and-usability-improvements-for-vercel-blob-storage Performance and usability improvements for Vercel Blob storage 2024-06-27T13:00:00.000Z

We've improved the performance and experience of the Vercel Blob file browser:

  • Faster blob deletion through parallelized deletions

  • Faster page transitions and back navigation for deep-linked pages

  • Delete all blobs at once with an easy utility to empty your store

  • Easier access to URLs with a new copy button directly on each row

Try it out or learn more about Vercel Blob.

Read more

Luis Meyer Vincent Voyer
https://vercel.com/changelog/v0-themes v0 Themes 2024-06-25T13:00:00.000Z

v0 now supports themes.

You can create custom themes from prompts, modify individual design tokens, and switch between different themes for your generations. For example, try out our theme for Windows 95. v0 supports all default Shadcn UI themes.

Try out v0 today and build your own theme.

Read more

Jared Palmer Aryaman Khandelwal Jorge Zreik Shu Ding Shadcn Max Leiter Jueun Grace Yun Pranathi Peri
https://vercel.com/changelog/amazon-bedrock-provider-for-the-vercel-ai-sdk-now-available Amazon Bedrock Provider for the Vercel AI SDK now available 2024-06-21T13:00:00.000Z

The Vercel AI SDK now supports Bedrock through a new official provider. To use the provider, install the relevant package:

You can then use the provider with all AI SDK Core methods. For example, here's how you can use it with generateText:
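
As a hedged sketch of the AI SDK provider pattern this entry describes (the package name and model id below are from memory; verify them in the documentation):

```typescript
// npm install ai @ai-sdk/amazon-bedrock
import { generateText } from 'ai';
import { bedrock } from '@ai-sdk/amazon-bedrock';

const { text } = await generateText({
  // The model id is illustrative; use any Bedrock-supported model.
  model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
  prompt: 'Write a haiku about serverless functions.',
});
console.log(text);
```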

For more information, please see the documentation. Thanks Jon Spaeth for contributing this feature!

Read more

Lars Grammel
https://vercel.com/blog/introducing-vercel-ai-sdk-3-2 Introducing Vercel AI SDK 3.2 2024-06-18T13:00:00.000Z

We’ve been listening to your feedback and working hard to expand the capabilities of the AI SDK while improving its existing functionality. Today, we’re launching AI SDK 3.2.

Read more

Lars Grammel Jared Palmer Aryaman Khandelwal
https://vercel.com/changelog/cohere-provider-for-the-vercel-ai-sdk-now-available Cohere Provider for the Vercel AI SDK now available 2024-06-17T13:00:00.000Z

The Vercel AI SDK now supports Cohere through a new official provider. To use the provider, install the relevant package:

You can then use the provider with all AI SDK Core methods. For example, here's how you can use it with generateText:

For more information, please see the documentation.

Read more

Lars Grammel
https://vercel.com/changelog/reposition-the-vercel-toolbar Change the default position of your Vercel Toolbar 2024-06-14T13:00:00.000Z

You can now reposition the Vercel Toolbar by dragging it to any corner of your page. It will snap into place and persist across deployments until you move it again.

Read more

wits
https://vercel.com/blog/getting-started-with-ai-advice-from-the-experts-at-vercel-ship Getting started with AI: Advice from the experts at Vercel Ship 2024-06-13T13:00:00.000Z

At our annual end-user conference, Vercel Ship, we hosted a panel discussion on AI for enterprise teams featuring Paige Bailey (Google), Sunny Madra (Groq), Miqdad Jaffer (OpenAI), and moderated by Sabrina Halper (Tomorrow Talk). The panel of experts shared how customers are leveraging AI technologies to:

Read more

Alina Weinstein
https://vercel.com/blog/demystifying-inp-new-tools-and-actionable-insights Demystifying INP: New tools and actionable insights 2024-06-12T13:00:00.000Z

In March 2024 Interaction to Next Paint (INP) became part of Google’s Core Web Vitals, a set of metrics reporting on user experience of web pages based on field data, and used in Google’s search ranking.

Read more

Malte Ubl
https://vercel.com/changelog/azure-ai-provider-for-the-vercel-ai-sdk-now-available Azure AI Provider for the Vercel AI SDK now available 2024-06-12T13:00:00.000Z

The Vercel AI SDK now supports Azure AI services through a new official provider. To use the provider, install the relevant package:

You can then use the provider with all AI SDK Core methods. For example, here's how you can use it with generateText:

For more information, please see the documentation.

Read more

Lars Grammel
https://vercel.com/changelog/html-element-attribution-in-speed-insights HTML element attribution in Speed Insights 2024-06-12T13:00:00.000Z

Speed Insights now shows which HTML elements are causing low scores, helping you identify performance issues on your site. Supported metrics include:

  • Interaction to Next Paint (INP)

  • Cumulative Layout Shift (CLS)

  • Largest Contentful Paint (LCP)

  • First Input Delay (FID)

This feature is available to all customers using Speed Insights.

Get started with Speed Insights

Read more

Damien Simonin Feugas Timo Lins
https://vercel.com/blog/frameio-never-drop-the-illusion Never drop the illusion: How Frame.io builds fluid user experiences 2024-06-11T13:00:00.000Z

When Hollywood giants and global brands collaborate on video, they demand a seamless, high-performing experience — and Frame.io, an Adobe company, delivers.

Read more

Dan Fein
https://vercel.com/changelog/csv-export-in-web-analytics CSV Export in Web Analytics 2024-06-11T13:00:00.000Z

You can now export Web Analytics data as CSV. The aggregated data includes information about unique visitors and page views for the selected data set.

This feature is available to all customers using Web Analytics.

Enable Web Analytics

Read more

Timo Lins
https://vercel.com/changelog/account-owned-domains-now-visible-in-team-scope-domains-tab Account-owned domains now visible in team-scope domains tab 2024-06-10T13:00:00.000Z

To give users more clarity on the domains owned across both Teams and accounts, Team Owners can now see account-owned domains in the same tab as their Team domains, providing more visibility into the domains you own across your Teams and account.

Learn more about domains on Vercel in the documentation.

Read more

Nanda Syahrasyad Kylie Czajkowski Pranathi Peri
https://vercel.com/changelog/vercel-functions-now-have-faster-and-fewer-cold-starts Vercel Functions now have faster and fewer cold starts 2024-06-05T13:00:00.000Z

Vercel's infrastructure now keeps a minimum of one function instance warm for production environments on paid plans. This improves startup times for apps with relatively low traffic.

This builds on our recent improvements to make Vercel Functions start up even faster, by powering them with Rust and adding support for bytecode caching.

Get started with Vercel Functions.

Read more

Joe Haddad Gal Schlezinger
https://vercel.com/changelog/improved-security-with-automation-testing-now-available-on-all-plans Improved security with automation testing now available on all plans 2024-06-04T13:00:00.000Z

You can now more easily run end-to-end tests against deployments protected by Vercel Authentication.

All plans can now create a secret value to bypass authentication, which can then be set as an HTTP header (or query parameter) named x-vercel-protection-bypass.

The automation bypass enables you to protect your project's deployments with Vercel Authentication while still providing access to external services like Checkly and Playwright for your CI/CD e2e testing.
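
In a test setup, the bypass can be a small helper that stamps the header onto outgoing request headers (the helper name is ours; only the header name comes from this entry):

```typescript
// Adds the protection-bypass header so e2e tests can reach deployments
// protected by Vercel Authentication.
type HeaderMap = Record<string, string>;

export function withProtectionBypass(headers: HeaderMap, secret: string): HeaderMap {
  return { ...headers, 'x-vercel-protection-bypass': secret };
}
```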

See how to use Protection Bypass for Automation.

Read more

Kit Foster Natalie Altman
https://vercel.com/changelog/vercel-is-now-certified-under-the-eu-us-data-privacy-framework-dpf Vercel is now certified under the EU-US Data Privacy Framework (DPF) 2024-06-04T13:00:00.000Z

We've achieved certification under the DPF to further strengthen our commitment to privacy at Vercel.

  • Commitment to protecting personal data: The DPF provides a reliable mechanism for transferring personal data from the EU, UK, and Switzerland to the U.S. in compliance with applicable privacy laws.

  • Vercel’s privacy practices: This internationally recognized certification, along with our ISO 27001 certification, gives you additional validation when assessing Vercel.

  • Supporting customer workflows: You have an additional mechanism that may help support your legal and compliance obligations when sending customer personal data to Vercel.

To view our public listing, visit the Data Privacy Framework website.

Read more

Kacee Taylor
https://vercel.com/blog/mintlify-scaling-a-powerful-documentation-platform-with-vercel Mintlify: Scaling a powerful documentation platform with Vercel 2024-06-03T13:00:00.000Z

Mintlify, a platform for public documentation, is a toolkit for developers to write, maintain and host documentation. The platform offers a flexible solution that can be used out of the box or customized to fit specific needs, enabling developers to create help guides, tutorials, and API references.

Read more

Alina Weinstein
https://vercel.com/blog/introducing-bytecode-caching-for-vercel-functions Introducing bytecode caching for Vercel Functions 2024-06-03T13:00:00.000Z

We recently shipped a new Rust-based core for Vercel Functions to improve startup times.

Today, we are announcing a new experimental feature to further reduce startup latency for large applications, resulting in up to 27% faster cold starts.

Introducing bytecode caching

One of the slowest parts of a cold start is loading and compiling the JavaScript source code. Before executing the code, it needs to be parsed and compiled into bytecode, which is then directly executed by the V8 virtual machine or compiled into machine code by V8's just-in-time compiler (JIT).

This conversion to bytecode must happen when a JavaScript file is executed for the first time, but it introduces latency.

What if we could cache this step and re-use it later on subsequent cold starts?

That's exactly how bytecode caching works. The first execution will produce a bytecode cache, and successive executions and cold starts will re-use and optimize the cache. This can improve the cold start duration by transparently eliminating the compilation step.
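
The same V8 mechanism is exposed in Node.js through vm.Script, which makes for a small illustration of the idea (this is the underlying primitive, not Vercel's implementation):

```typescript
import { Script } from 'node:vm';

// First compile: produce a V8 code cache (bytecode) for this source.
const source = 'const add = (a, b) => a + b; add(2, 3);';
const first = new Script(source);
const cache = first.createCachedData();

// Second compile: hand the cache back so V8 can skip parse/compile work.
const second = new Script(source, { cachedData: cache });
const cacheWasReused = !second.cachedDataRejected;
```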

We initially tested with three different Next.js applications, each loading a different amount of JavaScript. Each application received a cold start every 15 minutes. We compared the startup duration before and after bytecode caching.

  • The first application’s main page is 250 kB. The average TTFB went from 873ms to 764ms (-12%) and the billed duration from 330ms to 137ms (-58%)

  • The second application’s main page is 550 kB. The average TTFB went from 1017ms to 869ms (-15%) and the billed duration from 463ms to 214ms (-54%)

  • The third application’s main page is 800 kB. The average TTFB went from 1548ms to 1130ms (-27%) and the billed duration from 866ms to 453ms (-48%)

These cold start improvements grow as applications increase in size. With bytecode caching, your functions both start faster and have a lower billed duration.

Technical details

The v8-compile-cache npm package is widely known in the ecosystem, but can’t easily be used with serverless platforms like Vercel. The file system is ephemeral and fresh during a cold start.

We developed our own bytecode caching implementation that overcomes this limitation and allows all subsequent cold starts to use the produced bytecode cache. This system can continuously improve the cache as more traffic is sent to the function.

For example, assume you have two routes /home and /blog. Your framework lazy-loads those routes from two JavaScript chunks. When a user hits /home for the first time, the bytecode generated by this first chunk is cached and re-used for future cold starts. But when a user then hits /blog, it produces a separate bytecode cache (since this chunk was lazy-loaded).

Vercel Functions will intelligently merge together all bytecode chunks, regardless of when or where they were created. This results in faster cold starts as your application gets more traffic.

Try bytecode caching

We’ve been using bytecode caching on our internal Vercel projects for the past month.

If you want to try experimental bytecode caching, you will need to use Node.js 20 and a framework that compiles to CommonJS (for example, Next.js). We plan to use the new option available in Node.js 22 to support ES Modules in the future.

You can opt in by adding the following environment variable in your project's settings, then re-deploying your application: USE_BYTECODE_CACHING=1. This improvement only applies to production deployments.

Learn more about Vercel Functions or get started building your first application.

Read more

Javi Velasco Tom Lienard Gal Schlezinger Jimmy Lai
https://vercel.com/blog/vercel-ship-2024 Vercel Ship 2024 recap 2024-05-24T13:00:00.000Z

Vercel Ship 2024 was all about the power of the frontend cloud, highlighting the integrations, ecosystem, and teams building the web's best products.

Read more

Morgane Palomares
https://vercel.com/blog/introducing-the-vercel-waf Introducing the Vercel Web Application Firewall 2024-05-23T13:00:00.000Z

In any given week, Vercel blocks around 1 billion suspicious TCP connections, with some days seeing upwards of 7 billion malicious requests. The Vercel Firewall has been silently mitigating DDoS and Layer 3/4 attacks, but it's been operating as a black box with limited transparency.

Read more

Andrew Barba
https://vercel.com/blog/feature-flags Shipping safer and smarter: Integrating feature flags deeper in the Vercel workflow 2024-05-23T13:00:00.000Z

Feature flags help teams to release with confidence, safely roll out changes, and test efficiently, improving collaboration and accelerating development cycles. If you use tools like LaunchDarkly, Statsig, Split, or Optimizely to create feature flags, we're making integrating them into your Vercel workflows as easy as possible.

Read more

Dominik Ferber Andy Schneider Aaron Morris
https://vercel.com/blog/introducing-new-developer-tools-in-the-vercel-toolbar Introducing new developer tools in the Vercel Toolbar 2024-05-23T13:00:00.000Z

Vercel’s Frontend Cloud is all about giving you and your team the tools to prioritize the user experience—so you can focus on what makes your product great and quickly iterate together with your team.

Read more

Sam Saliba Alli Pope
https://vercel.com/changelog/log-drains-are-now-generally-available Log Drains are now generally available 2024-05-23T13:00:00.000Z

Vercel Log Drains are now generally available—send runtime and build logs from Vercel to third-party services.

What’s new?

Since we introduced Log Drains, we've added the ability to filter by different environments, define a sampling rate, transport logs in either the JSON or NDJSON format, and more.
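
NDJSON is simply one JSON object per line, so an ingestion endpoint can parse a drain payload like this (the field names are illustrative, not the exact drain schema):

```typescript
// Parse an NDJSON log-drain body: one JSON-encoded log entry per line.
interface DrainLog {
  message: string;
  [key: string]: unknown;
}

export function parseNdjson(body: string): DrainLog[] {
  return body
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as DrainLog);
}
```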

New Usage Based Billing

  • Usage of Log Drains costs $10 per 5GB of data transfer; all logs sent to a third-party accrue Log Drain usage automatically.

  • Existing Pro customers have three additional months free before billing begins. You can view the exact date based on your billing cycle in the dashboard.

  • Log Drains are only available on Pro and Enterprise plans. Existing Hobby customers may continue to use Log Drains as configured, but no further usage or configuration is available.

How can I check my Log Drain usage?

You can view your existing Log Drain usage on the Usage page.

Read more

Chris Widmaier Darpan Kakadia Amy Burns Natalie Altman
https://vercel.com/changelog/use-the-vercel-toolbar-in-production Use the Vercel Toolbar in Production with the Chrome Extension or the toolbar menu 2024-05-23T13:00:00.000Z

You can now get the toolbar in your production environment without any configuration by installing the Vercel Chrome Extension and ensuring that you are signed in to your team on Vercel.com. You can also enable the toolbar for your production domains by selecting Enable Vercel Toolbar in Production in the toolbar menu and choosing the domain you'd like to enable it on. For more advanced usage, it is still possible to use the toolbar's npm package.

This allows you and your team to use all the features of the Vercel Toolbar, like comments, flags, and tools like accessibility audit and interaction timing, in production.

Learn more about the features of the toolbar and adding it to your environments in the documentation.

Read more

wits Gary Borton Sam Saliba
https://vercel.com/changelog/declaring-feature-flags-in-code Declaring feature flags in code 2024-05-23T13:00:00.000Z

We’re introducing a new approach for working with feature flags. This approach allows declaring feature flags in code and calling them as functions. Flags implemented using this pattern can automatically respect overrides set by the Vercel Toolbar, and integrate with our Developer Experience Platform features like Web Analytics and Runtime Logs.

The pattern avoids common pitfalls of client-side feature flag and experimentation usage, such as flashing the wrong experiment, loading spinners, layout shift, and jank. It works with any feature flag provider and even custom setups.
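
The shape of the pattern, sketched in plain TypeScript rather than the actual @vercel/flags API (declareFlag and its context shape are our illustration):

```typescript
// A flag is declared once, in code, then called like a function.
// Attaching the key as metadata lets tooling (toolbar, analytics) find it.
function declareFlag<T>(key: string, decide: (ctx: { visitorId: string }) => T) {
  return Object.assign((ctx: { visitorId: string }) => decide(ctx), { key });
}

const showNewCheckout = declareFlag('show-new-checkout', ({ visitorId }) =>
  visitorId.endsWith('7') // a simple deterministic bucket, for illustration
);
```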

The pattern further allows for optionally precomputing certain feature flags in Middleware. Middleware can then route visitors to statically generated pages tailored to their specific combination of feature flags and experiments.

This even works when multiple flags are present on the page, which typically suffers from combinatorial explosion. Precomputing is great for experimentation on marketing pages as it allows keeping them completely static with ultra-low TTFB, no layout shift, and no flashing of the wrong experiment.

We have implemented this new feature flags design pattern for Next.js in @vercel/flags/next, and we are releasing an implementation for SvelteKit soon.

Check out our documentation to learn more.

Read more

Dominik Ferber Andy Schneider Aaron Morris
https://vercel.com/changelog/observe-your-feature-flags-with-the-vercel-dx-platform Observe your feature flags with the Vercel DX platform 2024-05-23T13:00:00.000Z

The Vercel DX Platform now has a deep understanding of the feature flags you use and create in third-party providers. This beta integration provides better insights into your applications and streamlines your development workflow.

  • Web Analytics integration: Break down page views and custom events by feature flags in Web Analytics, gaining granular insights into user interactions.

  • Enhanced Log visibility: The platform now displays feature flags in logs, making it easier to understand the conditions under which errors occur.

  • reportValue: Reports an evaluated feature flag from the server for runtime logs and Custom Analytics Events (server-side).

  • <FlagValues />: Surfaces a feature flag's value on the client for usage in Analytics.

These features have universal compatibility with any feature flag provider you're already using, like LaunchDarkly, Statsig, or Split, or custom setups.

This update lets users on all plans leverage existing feature flag workflows within the Vercel platform and ship safely with more confidence.

Check out the documentation to learn more.

Read more

Dominik Ferber Andy Schneider Timo Lins Tobias Lins Chris Widmaier
https://vercel.com/changelog/protect-against-owasp-risks-with-the-vercel-firewall Protect against OWASP risks with the Vercel Firewall 2024-05-23T13:00:00.000Z

Enterprise customers can now protect against the top OWASP risks.

The Vercel Firewall now supports the OWASP Core Ruleset for Enterprise, enabling Vercel to log, block, or challenge traffic that matches these rules.

In addition to new custom rules, customers can also ensure they remain protected against the biggest risks for web applications with new OWASP Top 10 protection. For example, this ruleset includes automatic protection against SQL injection attacks.

Contact sales to see a demo and learn more.

Read more

Andrew Barba Sage Abraham Natalie Altman Dan Fein
https://vercel.com/changelog/block-rate-limit-and-challenge-traffic-with-the-vercel-firewall Block, rate limit, and challenge traffic with the Vercel Firewall 2024-05-23T13:00:00.000Z

The Vercel Firewall now allows you to create custom rules to log, block, challenge, or rate limit (beta) traffic. The Firewall is available on all plans for free.

You can define custom rules to handle incoming traffic:

  • Rules can be based on 15+ fields, including request path, user agent, IP address, JA4 fingerprint, geolocation, and HTTP headers.

  • Firewall configuration changes propagate within 300ms globally. If you make a mistake, you can instantly rollback to previous rules.
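
Conceptually, a custom rule pairs match conditions with an action. A toy evaluator (the rule shape is ours, not Vercel's configuration schema):

```typescript
// First matching rule wins; no match means the request passes through.
type Action = 'log' | 'block' | 'challenge' | 'rateLimit';

interface Rule {
  pathPrefix?: string;
  country?: string;
  action: Action;
}

export function evaluate(req: { path: string; country: string }, rules: Rule[]): Action | null {
  for (const rule of rules) {
    if (rule.pathPrefix && !req.path.startsWith(rule.pathPrefix)) continue;
    if (rule.country && req.country !== rule.country) continue;
    return rule.action;
  }
  return null;
}
```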

You can now see requests automatically protected by the Firewall, as well as manage custom rules for the WAF. You can also access managed rulesets, including our first ruleset available for Enterprise to mitigate the OWASP core risks.

Learn more about the WAF and available configuration options. Contact us if you want to try our private beta for rate limiting.

Read more

Andrew Barba Joseph Collins Dany Volk Sage Abraham Natalie Altman Kevin Rupert Ismael Rumzan Dan Fein
https://vercel.com/changelog/accessibility-tool Uncover accessibility issues on your deployments from the Vercel Toolbar 2024-05-22T13:00:00.000Z

Accessibility Audit now runs in the background for you everywhere you have the Vercel Toolbar. You can view compliance with Web Content Accessibility Guidelines 2.0 level A and AA rules for the page you are on from the toolbar menu. Rules are shown grouped by impact as defined by Deque's axe.

You can also turn on recording to keep track of issues that turn up as you move around a page. This feature is available to all Vercel users.

See the Accessibility Audit documentation to learn more.

Read more

George Karagkiaouris wits Christopher Skillicorn Gary Borton Sam Saliba
https://vercel.com/changelog/options-allowlist OPTIONS Allowlist 2024-05-21T13:00:00.000Z

The OPTIONS Allowlist improves the security of deployments on Vercel by limiting CORS preflight OPTIONS requests to specified paths.

Before the OPTIONS Allowlist, all OPTIONS requests to deployments bypassed Deployment Protection in compliance with CORS specifications.

The new OPTIONS Allowlist feature is available on all plans.

Learn more about the OPTIONS Allowlist.

Read more

Kit Foster Brooke Mosby Kevin Rupert
https://vercel.com/changelog/interaction-timing-tool Understand Interaction to Next Paint (INP) with the Vercel Toolbar 2024-05-21T13:00:00.000Z

The Vercel Toolbar now helps you investigate your site's Interaction to Next Paint (INP).

This new Core Web Vital, which impacts Google search ranking as of March 2024, is now available in the toolbar menu under Interaction Timing. As you interact with your site, this tool measures input delay, processing times, and rendering delay and allows you to inspect in detail how these are affecting each interaction's latency.

This tool can also notify you as you navigate your site of any interactions that take more than 200ms, the upper limit for a good INP score. These toasts can be configured in Preferences under the toolbar menu.
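
The 200ms threshold marks the boundary of a "good" INP score; the standard Core Web Vitals buckets can be expressed as:

```typescript
// INP buckets as published for Core Web Vitals:
// <= 200 ms is good, <= 500 ms needs improvement, above that is poor.
export function inpRating(ms: number): 'good' | 'needs-improvement' | 'poor' {
  if (ms <= 200) return 'good';
  if (ms <= 500) return 'needs-improvement';
  return 'poor';
}
```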

Learn more about the Vercel Toolbar and INP.

Read more

wits
https://vercel.com/changelog/inspect-open-graph-data-with-the-vercel-toolbar Inspect Open Graph data with the Vercel Toolbar 2024-05-20T13:00:00.000Z

The Vercel Toolbar can now show a preview of how the page will look when shared on social media.

After selecting "Open Graph" from the toolbar menu, you can see how your images and metadata will display on X, Slack, Facebook, and LinkedIn. The toolbar also provides information if any metadata is missing on your page, which could affect the display of social cards.

Learn more about the Vercel Toolbar.

Read more

George Karagkiaouris Christopher Skillicorn
https://vercel.com/changelog/aggregate-and-visualize-traffic-data-with-monitoring Aggregate and visualize traffic data with Monitoring 2024-05-17T13:00:00.000Z

You can now select an aggregation when analyzing data in Vercel Monitoring. This change provides more visibility to make it easier to analyze your application.

The following new aggregations are now available, in addition to sums and counts.

  • Average values

  • Per second sums and counts

  • Minimum and maximum values

  • 75th, 90th, 95th and 99th percentiles

  • Percentages of the overall values

These aggregations can be used with any visualize setting, for analyzing data transfer, function duration, function execution, and request counts. Enterprise customers can also access data at five-minute granularity when viewing 24 hours of data or less.
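
As a reference for the percentile aggregations, here is the nearest-rank method (Monitoring's exact interpolation may differ):

```typescript
// Nearest-rank percentile: p in (0, 100], values need not be pre-sorted.
export function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```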

Learn more in our documentation about Monitoring.

Read more

Ethan Shea
https://vercel.com/blog/securing-data-in-your-next-js-app-with-okta-and-openfga Securing data in your Next.js app with Okta and OpenFGA 2024-05-16T13:00:00.000Z

Modern Next.js applications can have large codebases operating across multiple environments, including client components running in the browser, Server Actions executing on the server, and more.

Read more

Sam Bellen
https://vercel.com/changelog/waituntil-is-now-available-for-vercel-functions waitUntil is now available for Vercel Functions 2024-05-10T13:00:00.000Z

You can now use waitUntil by importing @vercel/functions in your Vercel Functions, regardless of the framework or runtime you use.

The waitUntil() method enqueues an asynchronous task to be performed during the lifecycle of the request. It doesn't block the response; the function stays alive until the task completes.

It's useful for anything that can be done after the response is sent, such as logging, sending analytics, or updating a cache, without blocking the response.
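
The contract can be sketched in plain TypeScript (a simulation of the behavior, not the @vercel/functions implementation):

```typescript
// Simulated waitUntil: collect background tasks; the platform keeps the
// function alive until all of them settle, after the response is sent.
const pending: Promise<unknown>[] = [];

export function waitUntil(task: Promise<unknown>): void {
  pending.push(task); // never blocks the response
}

const events: string[] = [];
async function recordAnalytics(): Promise<void> {
  events.push('analytics recorded');
}

export async function handler(): Promise<string> {
  waitUntil(recordAnalytics()); // fire-and-forget from the handler's view
  return 'response sent'; // returned immediately
}
```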

The package is supported in Next.js (including Server Actions), Vercel CLI, and other frameworks, and can be used with the Node.js and Edge runtimes.

Learn more in the documentation.

Read more

Kiko Beats Javi Velasco
https://vercel.com/blog/how-vercel-helped-desenio-future-proof-their-business How Vercel helped Desenio future-proof their business 2024-05-09T13:00:00.000Z

The merger of two of the world's largest affordable art providers, Desenio and The Poster Store, gave their developers the chance to modernize their application architecture, improve their entire process, and dismantle the monolithic approach that made for long deployment times and slow iteration. Thanks to Vercel, they went from duplicate pipelines to a unified workflow—resulting in faster builds, a 37% lower bounce rate, 48% longer sessions, and a 34% improvement in site conversions. 

Read more

Alli Pope
https://vercel.com/blog/7-ai-features-you-can-add-to-your-app-today 7 AI features you can add to your app today 2024-05-09T13:00:00.000Z

Imagine a customer finding the perfect item on your website in seconds—not because they know the jargon to search, but because your search bar understands what they're looking for.

That level of convenience wasn't possible a year ago. Even getting close was a huge hassle. But now, thanks to advancements in AI and large language models (LLMs) like OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude, businesses without dedicated AI teams are rolling out impressive features in record time.

And Vercel is here to help speed that process up. Let’s take a look at what’s possible.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/vercel-functions-for-hobby-can-now-run-up-to-60-seconds Vercel Functions for Hobby can now run up to 60 seconds 2024-05-09T13:00:00.000Z

Based on your feedback, Hobby customers can now run functions for up to 60 seconds.

Starting today, all new deployments can increase the maximum duration of functions on the free tier from 10 seconds to 60 seconds. If you need more than 60 seconds, you can upgrade to Pro, which supports up to 5 minutes.

Check out our documentation to learn more.

Read more

Malte Ubl Amy Burns
https://vercel.com/changelog/access-groups-now-generally-available-on-enterprise-plans Access groups now generally available on Enterprise plans 2024-05-07T13:00:00.000Z

Enterprise customers can now manage access to critical Vercel projects across many Vercel users more easily than ever with Access Groups.

Access Groups allow team administrators to create a mapping between team members and groups of Vercel projects. Users added to an Access Group will automatically be assigned access to the Projects connected to that Access Group, and will be given the default role of that group, making onboarding easier and faster than ever for new Vercel Team members.

For customers who use a third-party Identity Provider, such as Okta, Access Groups can automatically sync with their provider, making it faster to start importing users to Vercel without creating manual user group mappings (Vercel is SCIM compliant).

For example, you can have a Marketing Engineering Access Group, which has a default project role of "Developer". When a new member is added to the Marketing Engineering group, they will automatically be assigned the Developer role, and access to all Projects assigned to that group.

This builds on our advanced access controls, like project level access controls and deployment protection. Learn more about Access Groups or contact us for a demo of our access security features.

Read more

Javier Bórquez Enric Pallerols Christopher Skillicorn Bel Curcio Natalie Altman Angela Zhang Jhey Tompkins
https://vercel.com/changelog/recommend-branch-based-feature-flag-overrides Recommend branch based feature flag overrides 2024-05-07T13:00:00.000Z

You can now recommend feature flag overrides for specific branches in order to equip your team and quickly share work in development.

The Vercel Toolbar will suggest flag overrides to team members working on the branch locally or when visiting a branch Preview Deployment. This extends the recently announced ability to view and override your application's feature flags from Vercel Toolbar, currently in beta.

As part of this change, we’ve improved the onboarding for setting up and integrating feature flags into the toolbar.

Learn more about the Vercel Toolbar and feature flags.

Read more

Dominik Ferber wits Christopher Skillicorn
https://vercel.com/changelog/python-3-12-and-ruby-3-3-are-now-available Python 3.12 and Ruby 3.3 are now available 2024-05-06T13:00:00.000Z

Starting today, new Python Builds and Functions will use version 3.12 and new Ruby Builds and Functions will use version 3.3.

If you need to continue using Python 3.9 or Ruby 3.2, ensure you have 18.x selected for the Node.js Version in your project settings to use the older build image.

For Python 3.9, ensure your Pipfile and corresponding Pipfile.lock have python_version set to 3.9 exactly. Similarly, for Ruby 3.2, make sure ruby "~> 3.2.x" is defined in the Gemfile.
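For reference, pinning Python this way means the Pipfile carries an exact requirement, for example:

```toml
[requires]
python_version = "3.9"
```

The Gemfile equivalent is a ruby version constraint such as ruby "~> 3.2.0".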

Check out the documentation to learn more about our supported runtimes.

Read more

Janos Szathmary Sean Massa Nathan Rajlich Balazs Varga Guðmundur Bjarni Ólafsson Felix Haus
https://vercel.com/blog/vercel-functions-are-now-faster-and-powered-by-rust Vercel Functions are now faster—and powered by Rust 2024-05-03T13:00:00.000Z

Vercel Functions run code on demand without the need to manage your own infrastructure, provision servers, or upgrade hardware—and are now powered by Rust under the hood.

Read more

Tom Lienard Seiya Nuta Gal Schlezinger Javi Velasco Craig Andrews
https://vercel.com/blog/how-dub-grew-to-3000-active-domains-with-vercels-multi-tenant-saas-toolkit How Dub grew to 3,000 active domains with Vercel’s multi-tenant SaaS toolkit 2024-05-03T13:00:00.000Z

Dub is an open-source link management platform that helps marketing teams create marketing campaigns, link sharing features, and referral programs. Currently, Dub boasts over 3,000 active domains, growing at a remarkable 25% month-over-month rate.

Read more

Alina Weinstein
https://vercel.com/blog/vercel-ai-sdk-3-1-modelfusion-joins-the-team Vercel AI SDK 3.1: ModelFusion joins the team 2024-05-02T13:00:00.000Z

Today, we're releasing the AI SDK 3.1, with ModelFusion joining our team.

This release brings us one step closer to delivering a complete TypeScript framework for building AI applications. It is organized into three main parts:

Read more

Jared Palmer Lars Grammel
https://vercel.com/blog/vercel-supports-hipaa-compliance Vercel supports HIPAA compliance 2024-05-01T13:00:00.000Z

Vercel is committed to providing a secure and reliable platform for hosting websites and applications—across all industries. But this can be challenging with industry-specific regulations, especially for healthcare organizations and entities that process protected health information (PHI).

Read more

Kacee Taylor
https://vercel.com/changelog/accounts-can-now-have-multiple-email-addresses Accounts can now have multiple email addresses 2024-04-30T13:00:00.000Z

You can now add multiple email addresses to your Vercel account.

For example, both your personal email and work email can be attached to the same Vercel account. All verified emails attached to your account can be used to log in. You can mark an email as "primary" on your account, which makes it the destination for account and project notifications.

Learn more in our documentation.

Read more

Bel Curcio Meg Bird Kit Foster Enric Pallerols Miroslav Simulcik
https://vercel.com/changelog/faster-build-times-with-optimized-uploads Faster build times with optimized uploads 2024-04-30T13:00:00.000Z

We've optimized our build process to reduce upload times by 15% on average for all customers.

For customers with large builds (10,000 outputs or more), upload times have decreased by 50%. This results in a time saving of up to 5 minutes per build for several customers.

Learn more about builds in our documentation.

Read more

Felix Haus Guðmundur Bjarni Ólafsson Andrew Healey Janos Szathmary
https://vercel.com/changelog/legacy-environment-variable-secrets-sunset-reminder Reminder of legacy environment variable secrets sunset 2024-04-29T13:00:00.000Z

This is a reminder of the upcoming legacy secrets deprecation. On May 1st, 2024, secrets will be automatically converted to sensitive Environment Variables for Preview and Production environments. Secrets attached to Development environments will not be migrated.

  • Existing legacy secrets will be automatically converted. You do not need to manually take action for non-development values. Read below to view your impacted projects.

  • All Environment Variables remain securely encrypted. The majority of Vercel workloads have already moved away from the legacy secrets functionality.

Why are legacy secrets being sunset?

Our legacy secrets were encrypted values scoped to your entire team and could only be managed through the CLI. Based on your feedback, we have since:

When will I no longer be able to use secrets?

On May 1st, 2024, secrets will be removed from Vercel CLI:

  • Existing secrets added to the Preview and Production environments will be converted to Sensitive Environment Variables

  • Existing secrets added to the Development environment will not be migrated for your security. If you have a secret shared between all environments, including Development, it will not be migrated. These values must be manually migrated.

How can I migrate to Sensitive Environment Variables?

Secrets for Preview and Production environments will be automatically migrated.

For secrets which contain the Development environment, you should create new Sensitive Environment Variables, as these values will not be automatically migrated for your security. If you need to share Environment Variables across projects, you can make them shared.

How can I understand if I’m affected?

To list projects using secrets that will be automatically converted, run:

Read more

Ana Jovanova Marc Greenstock Bel Curcio Angela Zhang
https://vercel.com/changelog/vercel-terraform-provider-v1-9 Vercel Terraform Provider v1.9 2024-04-29T13:00:00.000Z

The Vercel Terraform Provider allows you to create, manage and update your Vercel projects, configuration, and settings through infrastructure-as-code.

You can now control significantly more Vercel resources through Terraform:

Learn how to get started with the Terraform provider for Vercel. If you already have Terraform set up, upgrade by running:

Read more

Doug Parsons
https://vercel.com/blog/how-vercel-helped-tonies-expand-into-new-markets How Vercel helped Tonies expand into new markets and improve conversion rates 2024-04-26T13:00:00.000Z

Tonies, creators of the smart audio system for children, sought to expand into new markets, but it became clear that their existing platform couldn't support this growth. In response, they undertook a strategic transition to a new frontend platform powered by Vercel's Frontend Cloud and Contentful's CMS.

Read more

Alli Pope
https://vercel.com/changelog/faster-defaults-for-vercel-function-cpu-and-memory Faster defaults for Vercel Function CPU and memory 2024-04-26T13:00:00.000Z

The default CPU for Vercel Functions will change from Basic (0.6 vCPU/1GB Memory) to Standard (1 vCPU/1.7GB Memory) for new projects created after May 6th, 2024. Existing projects will remain unchanged unless manually updated.

This change helps ensure consistent function performance and faster startup times. Depending on your function code size, this may reduce cold starts by a few hundred milliseconds.

While increasing the function CPU can increase costs for the same duration, it can also make functions execute faster. If functions execute faster, you incur less overall function duration usage. This is especially important if your function runs CPU-intensive tasks.

This change will be applied to all paid plan customers (Pro and Enterprise); no action is required.

Check out our documentation to learn more.

Read more

Shohei Maeda Tobias Lins Tom Lienard Brian Emerick
https://vercel.com/changelog/improved-infrastructure-pricing-is-now-active-for-new-customers Improved infrastructure pricing is now active for new customers 2024-04-25T13:00:00.000Z

Earlier this month, we announced our improved infrastructure pricing, which is active for new customers starting today.

Billing for existing customers begins between June 25 and July 24. For more details, please reference the email with next steps sent to your account. Existing Enterprise contracts are unaffected.

Our previous combined metrics (bandwidth and functions) are now more granular, and have reduced base prices. These new metrics can be viewed and optimized from our improved Usage page.

These pricing improvements build on recent platform features to help automatically prevent runaway spend, including hard spend limits, recursion protection, improved function defaults, Attack Challenge Mode, and more.

Read more

Guillermo Rauch
https://vercel.com/changelog/improved-team-onboarding-experience Improved team onboarding experience 2024-04-24T13:00:00.000Z

It’s now easier to join a team on Vercel. New team members no longer need to re-enter their email, or create a Hobby team or Pro trial. Team invite emails now lead to a sign up page customized for the team. This includes simplified sign up options that reflect the team's SSO settings.

You can invite new team members under "Members" in your team settings. Learn more about managing team members in the documentation.

Read more

Tom Bremer Zach Ward Meg Bird Kylie Czajkowski Sam Saliba
https://vercel.com/blog/latency-numbers-every-web-developer-should-know Latency numbers every frontend developer should know 2024-04-23T13:00:00.000Z

Web page load times and responsiveness to user actions are primary drivers of user satisfaction in web apps, and both are often dominated by network latency.

Latency itself is a function of the user's connection to the internet (Wi-Fi, LTE, 5G), the distance to the server the user is connecting to, and the quality of the network in between.

While the latency numbers may seem low by themselves, they compound quickly. For example, a network waterfall of depth 3 on a 300ms link leads to a total latency of 900ms. Technologies like React Server Components can move network waterfalls to the server where the same request pattern might be 100 times as fast.
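That compounding can be sketched in a few lines. The helper below is illustrative only: each level of a waterfall must wait for the previous response before issuing its own request, so latencies add up per level.

```typescript
// Sequential request waterfalls multiply per-request latency:
// level N cannot start until level N-1 has resolved.
function waterfallLatency(roundTripMs: number, depth: number): number {
  return roundTripMs * depth;
}

const onClient = waterfallLatency(300, 3); // 3 dependent requests over a 300ms link
const onServer = waterfallLatency(3, 3);   // same pattern over a ~3ms datacenter link

console.log(onClient); // 900 (ms)
console.log(onServer); // 9 (ms), roughly 100x faster
```

Moving the same waterfall server-side, as React Server Components do, shrinks the per-hop latency rather than the number of hops.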

Read more

Malte Ubl
https://vercel.com/blog/how-global-retail-brands-cut-development-time-from-months-to-1-week How Global Retail Brands cut development time from months to 1 week with Vercel 2024-04-18T13:00:00.000Z

Global Retail Brands (GRB) is one of Australia’s fastest-growing retailers, specializing in homeware and kitchenware goods with over 250 physical stores throughout the country. GRB is known for its flagship brands such as House, MyHouse, House Bed & Bath, Baccarat, and Robins Kitchen.

Read more

Alli Pope
https://vercel.com/changelog/upcoming-change-in-lets-encrypt-chain-of-trust Upcoming change in Let's Encrypt Chain of Trust 2024-04-18T13:00:00.000Z

Important: This change does not impact customers currently using custom certificates issued by commercial CAs and using them on Vercel via the custom certificate feature.

Vercel uses Let's Encrypt as its certificate authority (CA) to auto-provision TLS certificates to enable secure connections by default. When using custom domains in your Vercel app, traffic between clients and Vercel Edge Network is encrypted and protected using the auto-provisioned Let's Encrypt certificate.

As planned, on September 30th, 2024, the current Let's Encrypt cross-signed DST Root CA X3 root certificate issued by IdenTrust will expire and no longer be available. Considering the small proportion of internet users with older devices today, Let's Encrypt has decided to officially sunset this cross-signed certificate chain. This change has been planned by Let's Encrypt over the past few years, under their mission of providing safe and secure communication to everyone who uses the Web. You can read more about this change in their blog post.

After September 30th, 2024, clients accessing your websites hosted on Vercel must be able to trust the latest ISRG Root X1 root certificate from their local trust store. Modern operating systems and browsers trust this certificate, and it should not cause any noticeable impacts on your users. However, some older devices, such as Android 7.0 or earlier, may be unable to trust the new chain by default.

You can check more details about this change and review remedy options in our public announcement on the GitHub community forum.

Read more

Shohei Maeda Mark Glagola
https://vercel.com/blog/building-an-interactive-3d-event-badge-with-react-three-fiber Building an interactive 3D event badge with React Three Fiber 2024-04-17T13:00:00.000Z

In this post, we’ll look at how we made the dropping lanyard for the Vercel Ship 2024 site, diving into the inspiration, tech stack, and code behind the finished product.

Read more

Paul Henschel
https://vercel.com/blog/releasing-safe-and-cost-efficient-blue-green-deployments Releasing safe and cost-efficient blue-green deployments 2024-04-12T13:00:00.000Z

Blue-green deployments are a great way to mitigate the risks associated with rolling out new software versions.

Read more

Malte Ubl
https://vercel.com/changelog/ai-enhanced-search-for-vercel-documentation AI-enhanced search for Vercel documentation 2024-04-11T13:00:00.000Z

You can now get AI-assisted answers to your questions directly from the Vercel docs search:

  • Use natural language to ask questions about the docs

  • View recent search queries and continue conversations

  • Easily copy code and markdown output

  • Leave feedback to help us improve the quality of responses

Start searching with the ⌘K (or Ctrl+K on Windows) menu on vercel.com/docs.

Read more

Jhey Tompkins Rich Haines Christopher Skillicorn
https://vercel.com/blog/creating-a-robust-platform-for-documentation-with-next-js-and-vercel Creating a robust platform for documentation with Next.js and Vercel 2024-04-10T13:00:00.000Z

Teleport, an open-core platform for secure infrastructure access, sought to unify and enhance their website and documentation. They needed a framework that could support dynamic content, provide a smooth developer experience, and ultimately provide a robust and up-to-date resource for their customers.

Read more

Alli Pope
https://vercel.com/changelog/gemini-ai-chatbot-with-generative-ui-support Gemini AI Chatbot with Generative UI support 2024-04-10T13:00:00.000Z

The Gemini AI Chatbot template is a streaming-enabled, Generative UI starter application. It's built with the Vercel AI SDK, Next.js App Router, and React Server Components & Server Actions.

This template features persistent chat history, rate limiting to prevent abuse, session storage, user authentication, and more.

The Gemini model used is models/gemini-1.0-pro-001; however, the Vercel AI SDK makes it easy to swap in another LLM provider (like OpenAI, Anthropic, Cohere, Hugging Face, or LangChain) with just a few lines of code.

Try the demo or deploy your own.

Read more

Jared Palmer Shu Ding Shadcn Jeremy Philemon Max Leiter
https://vercel.com/blog/composable-ai-for-ecommerce-hands-on-with-vercels-ai-sdk Composable AI for ecommerce: Hands-on with Vercel’s AI SDK 2024-04-09T13:00:00.000Z

Imagine you have a great idea for an AI-powered feature that will transform your ecommerce storefront—but your existing platform stands in the way of innovating and shipping. Legacy platforms come with slow and costly updates, and you're beholden to your vendor's roadmap.

With composable architecture, that all changes. You can choose and seamlessly integrate all the best tools, shipping your ideas with maximum efficiency.

At Vercel, we believe composable should include AI. We want it to be as straightforward as possible within the JavaScript ecosystem to develop AI features that enrich your users’ digital experiences.

Read more

Malte Ubl
https://vercel.com/blog/how-ruggable-saw-more-organic-clicks-by-optimizing-their-frontend How Ruggable saw 300% more organic clicks by optimizing their frontend architecture 2024-04-08T13:00:00.000Z

Ecommerce brands today face immense pressure to stay agile and innovate continuously. Recognizing the need to optimize site performance, enhance SEO, boost conversions, and improve developer experience, Ruggable, a leading online rug retailer, embarked on a digital transformation with Vercel and Contentful.

Read more

Alli Pope
https://vercel.com/blog/improved-infrastructure-pricing Improved infrastructure pricing 2024-04-04T13:00:00.000Z

Based on your feedback, we're updating how we measure and charge for usage of our infrastructure products.

Read more

Guillermo Rauch
https://vercel.com/changelog/hostname-support-in-web-analytics Hostname support in Web Analytics 2024-04-04T13:00:00.000Z

You can now inspect and filter hostnames in Vercel Web Analytics.

  • Domain insights: Analyze traffic by specific domains. This is beneficial for per-country domains, or for building multi-tenant applications.

  • Advanced filtering: Apply filters based on hostnames to view page views and custom events per domain.

This feature is available to all Web Analytics customers.

Learn more in our documentation about filtering.

Read more

Timo Lins Tobias Lins
https://vercel.com/blog/design-engineering-at-vercel Design Engineering at Vercel 2024-03-29T13:00:00.000Z

Design Engineer is a new role that is gaining popularity—a role that is both confusing and exciting. Expectations for what good software looks and feels like have never been higher, and Design Engineers are a core part of exceeding those expectations.

Read more

Glenn Hitchcock Henry Heffernan John Phamous Rauno Freiberg Yasmin Pessoa
https://vercel.com/blog/demant-achieves-global-scalability-and-30x-faster-response-times-with-vercel Demant achieves global scalability and 30x faster response times with Vercel 2024-03-29T13:00:00.000Z

Demant, a prominent hearing healthcare and technology group, has been dedicated to improving people's health and hearing since 1904.

Read more

Alli Pope
https://vercel.com/changelog/node-js-v20-lts-is-now-generally-available Node.js v20 LTS is now generally available 2024-03-25T13:00:00.000Z

Node.js 20 is now fully supported for Builds and Vercel Functions. You can select 20.x in the Node.js Version section on the General page in the Project Settings. The default version for new projects is now Node.js 20.

Node.js 20 offers improved performance and introduces new core APIs to reduce the dependency on third-party libraries in your project.

The exact version currently used by Vercel is 20.11.1, and minor and patch releases will be applied automatically. Therefore, only the major version (20.x) is guaranteed.
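If you prefer pinning the version in code rather than the dashboard, Vercel also reads the engines field of package.json; a minimal sketch:

```json
{
  "engines": {
    "node": "20.x"
  }
}
```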

Read the documentation for more.

Read more

Felix Haus Janos Szathmary Sean Massa Nathan Rajlich Andrew Healey Gargi Sharma Guðmundur Bjarni Ólafsson
https://vercel.com/blog/protecting-ai-apps-with-vercel-and-kasada Protecting AI apps from bots and bad actors with Vercel and Kasada 2024-03-22T13:00:00.000Z

The growing popularity of AI applications, and the relatively high cost of the LLM calls that power them, means AI apps have emerged as an incredibly high-value target for bots and bad actors.

Read more

Malte Ubl
https://vercel.com/blog/revolutionizing-video-editing-on-the-web-with-next-js-and-vercel Revolutionizing video editing on the web with Next.js and Vercel 2024-03-20T13:00:00.000Z

Ozone set out to revolutionize video editing by embracing the web.

Read more

Alli Pope
https://vercel.com/changelog/skew-protection-is-now-generally-available Skew Protection is now generally available 2024-03-19T13:00:00.000Z

Last year, we introduced Vercel's industry-first Skew Protection mechanism and we're happy to announce it is now generally available.

Skew Protection solves two problems with frontend applications:

  1. If users try to request assets (like CSS or JavaScript files) in the middle of a deployment, Skew Protection enables truly zero-downtime rollouts and ensures those requests resolve successfully.

  2. Outdated clients are able to call the correct API endpoints (or React Server Actions) when new server code is published from the latest deployment.

Since the initial release of Skew Protection, we have made the following improvements:

  • Skew Protection can now be managed through the UI in the advanced Project Settings

  • Pro customers now default to 12 hours of protection

  • Enterprise customers can get up to 7 days of protection

Skew Protection is now supported in SvelteKit (v5.2.0 of the Vercel adapter), previously supported in Next.js (stable in v14.1.4), and more frameworks soon. Framework authors can view a reference implementation here.

Learn more in the documentation to get started with Skew Protection.

Read more

Steven Salat JJ Kasper Malte Ubl
https://vercel.com/changelog/next-js-ai-chatbot-2-0 Next.js AI Chatbot 2.0 2024-03-19T13:00:00.000Z

The Next.js AI Chatbot template has been updated to use AI SDK 3.0 with React Server Components.

We've included Generative UI examples so you can quickly create rich chat interfaces beyond just plain text. The chatbot has also been upgraded to the latest Next.js App Router and Shadcn UI.

Lastly, we've simplified the default authentication setup by removing the need to create a GitHub OAuth application prior to initial deployment. This will make it faster to deploy and also easier for open source contributors to use Vercel Preview Deployments when they make changes.

Try the demo or deploy your own.

Read more

Jared Palmer Shadcn Lars Grammel Jeremy Philemon Shu Ding Max Leiter
https://vercel.com/blog/leonardo-ai-performantly-generates-4-5-million-images-daily-with-next-js-and-vercel Leonardo generates 4.5M images daily with Next.js and Vercel 2024-03-18T13:00:00.000Z

Generating more than 4.5 million images a day, Leonardo.ai merges artificial intelligence with creativity to transform content creation across industries like gaming, marketing, and design.

Read more

Alli Pope
https://vercel.com/blog/from-wordpress-monolith-to-vercel-personio-elevates-site-performance WordPress monolith to Vercel: How Personio elevated site performance and efficiency 2024-03-18T13:00:00.000Z

As Europe's leading all-in-one HR solution for small and midsized organizations, Personio is committed to the highest standards of both user experience and application security.

Read more

Alli Pope
https://vercel.com/changelog/prioritize-production-deployments-to-build-before-queued-preview Prioritize production builds available on all plans 2024-03-15T13:00:00.000Z

To accelerate the production release process, customers on all plans can now prioritize changes to the Production Environment over Preview Deployments.

With this setting configured, any Production Deployment changes will skip the line of queued Preview Deployments and go to the front of the queue.

You can also increase your build concurrency limits to give you the ability to start multiple builds at once. Additionally, Enterprise customers can also contact sales to purchase enhanced build machines with larger memory and storage.

Check out our documentation to learn more.

Read more

Felix Haus Mariano Cocirio
https://vercel.com/changelog/manage-your-vercel-functions-cpu-and-memory-in-the-dashboard Manage your Vercel Functions CPU and memory in the dashboard 2024-03-11T13:00:00.000Z

You can now configure Function CPU from the project settings page, where you can change your project’s default memory, and by extension CPU. Previously, this could only be changed in vercel.json.

The memory configuration of a function determines how much memory and CPU the function can use while executing. This new UI makes it clearer that increasing memory also increases vCPU, which can result in better performance depending on the workload type.

Existing workloads (that have not modified vercel.json) use the cost-effective Basic option. Increasing function CPU increases the cost for the same duration, but may result in a faster function, which can make the change cost-neutral (or even a price decrease in some cases).
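In vercel.json, the same setting is the memory key under functions. A sketch (the glob pattern and the 1769 MB value, which corresponds to roughly 1 vCPU on the Standard tier, are illustrative; check the current docs for accepted values):

```json
{
  "functions": {
    "api/**/*.ts": {
      "memory": 1769
    }
  }
}
```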

Check out the documentation to learn more.

Read more

Shohei Maeda Tiago Ventura Loureiro Brian Emerick Justin Kropp Sam Becker Ismael Rumzan
https://vercel.com/changelog/improved-hard-caps-for-spend-management Improved hard caps for Spend Management 2024-03-08T13:00:00.000Z

Pro customers can now automatically pause all projects when a spend amount is reached.

Spend Management allows you to receive notifications, trigger a webhook, and now immediately pause projects when metered usage exceeds the set amount within the current billing cycle. This stops you from incurring further costs from your production deployments.

  • You'll receive realtime notifications when your spending approaches and exceeds the set amount. For further control, you can continue to use a webhook in addition to automatic project pausing

  • This includes Web and Email notifications at 50%, 75%, and 100%. You can also receive SMS notifications when your spending reaches 100%

Check out our documentation to learn more.

Read more

Arian Daneshvar Christopher Skillicorn Marc Brakken Saranya Desetty Amy Burns
https://vercel.com/blog/8-advantages-of-composable-commerce 8 advantages of composable commerce 2024-03-07T13:00:00.000Z

A monolithic ecommerce platform, where your commerce data and user-facing storefront are bundled into one provider, can help you get your business off the ground. But as your customer base expands and your strategies become more sophisticated, you may be bumping into some of the rough edges of your provider.

If you crave blazing-fast site performance, personalized experiences, and the freedom to adapt without vendor lock-in, Vercel and Next.js offer a compelling, composable solution for your storefront’s unlimited global growth.

Here are the benefits composable commerce can offer.

Read more

Alice Alexandra Moore
https://vercel.com/blog/toolbar-feature-flags Introducing feature flag management from the Vercel Toolbar 2024-03-06T13:00:00.000Z

Using feature flags to quickly enable and disable product features is more than just a development technique; it's a philosophy that drives innovation and ensures that only the best, most performant features reach your users.

However, when working on a new feature you need to leave your current browser tab, sign into your flag provider, switch the flag to the value you need for development—all while coordinating and communicating this change with teammates. This adds a lot of overhead and disrupts your work.

Today, we’re making that workflow easier by adding the ability for team members to override your application’s feature flags right from the Vercel Toolbar.

You can manage flags set in any provider including LaunchDarkly, Optimizely, Statsig, Hypertune, or Split—and additionally you can integrate any other provider or even your own custom flag setup. By creating overrides for your flags from the toolbar, you can stay in the flow and improve your iteration speed.

Read more

Dominik Ferber
https://vercel.com/changelog/support-for-remix-with-vite Support for Remix with Vite 2024-03-06T13:00:00.000Z

Vercel now supports deploying Remix applications using Vite.

We've collaborated with the Remix team to add Server Bundles to Remix. Vercel will now detect Remix projects using Vite and optimize them using our new Vite preset (@vercel/remix/vite).

This preset enables additional features for Remix on Vercel, such as:

  • Streaming SSR: Dynamically stream content with both Node.js and Edge runtimes

  • API Routes: Easily build your serverless API with Remix and a route loader

  • Advanced Caching: Use powerful cache headers like stale-while-revalidate

  • Data Mutations: Run actions inside Vercel Functions
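For example, the advanced-caching bullet maps onto Remix's standard headers export on a route module. A minimal sketch (the route and the cache values are illustrative):

```typescript
// Remix route `headers` export: Vercel's Edge Network may cache the
// response for 1 second, then serve it stale for up to 59 seconds
// while revalidating in the background.
export function headers(): Record<string, string> {
  return {
    "Cache-Control": "s-maxage=1, stale-while-revalidate=59",
  };
}
```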

Deploy Remix to Vercel or learn more in the docs.

Read more

Nathan Rajlich
https://vercel.com/changelog/view-and-override-feature-flags-from-the-vercel-toolbar View and override feature flags from the Vercel Toolbar 2024-03-06T13:00:00.000Z

You can now view and override your application's feature flags from the Vercel Toolbar.

This means you can override flags provided by LaunchDarkly, Optimizely, Statsig, Hypertune, Split, or your custom setup without leaving your Vercel environment.

Vercel can now query an API Route defined in your application to find out about your feature flags, and will pick up their values by scanning the DOM for script tags. From there you can create overrides from the Vercel Toolbar, per session, for shorter feedback loops and improved QA and testing. Additionally, the overrides will be stored in an optionally encrypted cookie so your application can respect them.
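As a rough illustration of what such an endpoint might return (the exact route path and response contract are assumptions here; the toolbar documentation defines the real protocol):

```typescript
// Hypothetical flag-definition payload an app could serve so the
// toolbar knows which flags exist and which override values to offer.
type FlagOption = { value: boolean; label: string };
type FlagDefinition = { description: string; options: FlagOption[] };

export function getFlagDefinitions(): Record<string, FlagDefinition> {
  return {
    "new-checkout": {
      description: "Enables the redesigned checkout flow",
      options: [
        { value: false, label: "Off" },
        { value: true, label: "On" },
      ],
    },
  };
}
```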

This functionality is currently in beta and available to users on all plans.

Check out the documentation to learn more.

If you're a feature flag provider and interested in integrating with the Vercel Toolbar, contact us.

Read more

Dominik Ferber Andy Schneider Sam Becker Christopher Skillicorn Aaron Morris Chris Widmaier George Karagkiaouris Jhey Tompkins
https://vercel.com/blog/ai-sdk-3-generative-ui Introducing AI SDK 3.0 with Generative UI support 2024-03-01T13:00:00.000Z

Last October, we launched v0.dev, a generative UI design tool that converts text and image prompts to React UIs and streamlines the design engineering process.

Today, we are open sourcing v0's Generative UI technology with the release of the Vercel AI SDK 3.0. Developers can now move beyond plaintext and markdown chatbots to give LLMs rich, component-based interfaces.

Read more

Jared Palmer Shu Ding Max Leiter Shadcn Lars Grammel Jeremy Philemon
https://vercel.com/blog/the-resiliency-of-the-frontend-cloud The Frontend Cloud: Powering resiliency for global web applications 2024-02-29T13:00:00.000Z

Modern web apps are global, omni-channel, and fast. Above all else, they must be available at all times. Every second of website downtime translates to lost revenue and eroded customer trust.

Leveraging Vercel's Frontend Cloud allows you to:

Read more

Alice Alexandra Moore
https://vercel.com/changelog/prevent-malicious-traffic-with-attack-challenge-mode-for-vercel-firewall Prevent malicious traffic with Attack Challenge Mode for the Vercel Firewall 2024-02-29T13:00:00.000Z

Vercel Firewall protects your application from DDoS attacks.

Spikes in high volumes of traffic sometimes indicate malicious activity on your site. Customers can now quickly lock down traffic and further protect against DDoS attacks by challenging requests, minimizing the chance that malicious bots get through.

Attack Challenge Mode is now available for all Vercel customers at no additional cost. When temporarily enabled, all visitors to your site will see a challenge screen before they are allowed through.

Learn how to enable Attack Challenge Mode and protect your site.

Read more

Natalie Altman Joseph Collins Andrew Barba Kevin Rupert Christopher Skillicorn Amy Burns
https://vercel.com/blog/deploy-safely-on-vercel-without-merge-queues Deploying safely on Vercel without merge queues 2024-02-26T13:00:00.000Z

To prevent issues from reaching production, repositories often enable settings that keep the main branch green whenever code is merged. When many developers are contributing code, such as in a monorepo, this often slows developer productivity.

If branches must be synced to HEAD before merging, developers may have to update their branch multiple times before they can merge their code, re-running many of the same checks unnecessarily. A merge queue can alleviate this pain, but it can also slow productivity by testing each developer's commits serially before merge, even when the commits are unrelated.

With Vercel, you can ensure the safety of production and developers can merge quickly, without using a merge queue.

Read more

Mark Knichel Sean Massa
https://vercel.com/blog/effortless-high-availability-for-dynamic-frontends Effortless high availability for dynamic frontends 2024-02-21T13:00:00.000Z

Vercel’s Frontend Cloud is designed for high availability from the ground up, with robustness against large-scale regional cloud outages at every layer of our architecture.

This includes making it extraordinarily easy for our customers to run the compute they deploy to Vercel in the same highly resilient architecture. Concretely speaking, this can make the difference between downtime or smooth operation during major sales events such as Black Friday.

Read more

Malte Ubl
https://vercel.com/changelog/vercel-otel-1-3-0 @vercel/otel 1.3.0 2024-02-16T13:00:00.000Z

Vercel and Next.js provide increased observability of your applications through OpenTelemetry.

v1.3.0 of @vercel/otel now provides custom resource and operation names for Datadog to satisfy its cardinality requirements. You can group related URL paths for a given span to reduce cardinality and associated usage.

For example, /products/hoodie can be mapped to /products/[name].
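The mapping itself is a path-to-template collapse. A minimal TypeScript sketch of the idea (the patterns and function name are hypothetical illustrations, not the @vercel/otel API):

```typescript
// Collapse concrete URL paths into route templates so Datadog sees one
// resource per route rather than one per product, keeping cardinality low.
const routeTemplates: Array<[RegExp, string]> = [
  [/^\/products\/[^/]+$/, "/products/[name]"],
  [/^\/users\/\d+$/, "/users/[id]"],
];

function toResourceName(path: string): string {
  for (const [pattern, template] of routeTemplates) {
    if (pattern.test(path)) return template;
  }
  return path; // already low-cardinality; leave as-is
}
```

With a mapping like this, /products/hoodie and /products/jacket both report as the single resource /products/[name].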

Learn more in our documentation or start using the package with Next.js.

Read more

Dima Voytenko JJ Kasper Andrew Gadzik Gary Borton
https://vercel.com/changelog/lowering-default-serverless-function-timeout-in-enterprise-projects No action required: Lowering default function timeout in new Enterprise projects 2024-02-16T13:00:00.000Z

The default Vercel Function timeout for all new Enterprise projects will be reduced to 15 seconds on February 20th. This change helps prevent unintentional function usage unless you explicitly opt into a longer duration (up to 15 minutes).
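For functions that do need more time, the opt-in is per function. A sketch of raising the limit in vercel.json (the glob pattern and value here are illustrative):

```json
{
  "functions": {
    "api/**/*.ts": {
      "maxDuration": 300
    }
  }
}
```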

Existing Enterprise projects will not have their defaults changed.

Check out our documentation to learn more.

Read more

Florentin Eckl Tiago Ventura Loureiro
https://vercel.com/blog/evolving-vercel-functions Evolving Vercel Functions 2024-02-14T13:00:00.000Z

We’ve been building a new foundation for compute, built on top of Vercel’s Managed Infrastructure, for the past year.

Read more

Lee Robinson
https://vercel.com/blog/vercel-wpp-creativity-enabled-by-technology Vercel + WPP: World-class creativity enabled by technology 2024-02-14T13:00:00.000Z

Today, we've announced our strategic partnership with WPP, a world leader in communications, experience, commerce, and technology. 

Through the years, brands have entrusted Vercel and WPP’s global network of agencies to help them modernize their digital experience with the best creative and the best technologies. Together, we serve leading organizations like The International Olympic Committee, James Hardie, Fluor, and Country Road Group.

Read more

Malte Ubl
https://vercel.com/changelog/utm-parameter-support-in-web-analytics UTM parameter support in Web Analytics 2024-02-14T13:00:00.000Z

UTM parameters are now available in Vercel Web Analytics, enabling detailed insights into marketing campaign effectiveness directly from the dashboard.

  • Visibility into campaign performance: Analyze traffic by specific campaigns, mediums, sources, content, and terms using UTM parameters.

  • Advanced filtering: Apply filters based on UTM parameters for deeper insights into the impact of your marketing campaigns.

  • Historical UTM data: Start analyzing past campaigns immediately with historical data automatically included.
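As an example, a campaign link tagged with all five UTM parameters (values hypothetical) looks like:

```
https://example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_launch&utm_content=hero_cta&utm_term=running-shoes
```

Each parameter then appears as its own filterable dimension in the Web Analytics dashboard.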

This feature is available to Pro customers with Web Analytics Plus and Enterprise customers.

Read more

Timo Lins Tobias Lins
https://vercel.com/blog/finishing-turborepos-migration-from-go-to-rust Finishing Turborepo's migration from Go to Rust 2024-02-12T13:00:00.000Z

We've finished porting Turborepo, the high performance JavaScript and TypeScript build system, from Go to Rust. This lays the groundwork for better performance, improved stability, and powerful new features.

Read more

Nicholas Yang Anthony Shew
https://vercel.com/blog/ai-integrations Introducing AI Integrations on Vercel 2024-02-08T13:00:00.000Z

Today, we’re launching nine new AI integrations for Vercel from leading AI companies.

We’ve also created a new model playground where you can try dozens of models instantly to generate text, images, audio, and more right in your dashboard.

Building the future with AI

Vercel is the product infrastructure for AI applications.

From chatbots that augment customer service flows, to recommendation systems with semantic search, Retrieval Augmented Generation (RAG), and generative image services—companies can build better product experiences faster than ever before with AI.

We've partnered with our first cohort of AI providers to speed up your product development process.

"We're excited to partner with Vercel on bringing the latest state of the art open source machine learning models to more AI Engineers. We believe that AI should be easy to run and integrate into any web application." — Replicate Software Engineer, Charlie Holtz

Connecting to models with the AI SDK

After you've integrated with an AI provider, you can quickly start using the model in your frontend application with the Vercel AI SDK. The SDK is like an ORM for any AI model you want to use, whether for text, images, or (soon) audio.

For example, if you want to use the Perplexity API with Next.js, it only takes the following code to stream back responses to your frontend:

Learn more about the AI SDK or follow the instructions after connecting to your provider of choice.

Get Started Today

The future of application development is intelligent, intuitive, and immersive. With Vercel's AI Integrations, you're not just building applications; you're crafting experiences that anticipate and adapt to user needs in real-time.

If you’re an AI company or developer keen to join our AI Integrations, you can create your own integration.

Check out the new tab in your Vercel dashboard and add AI to your app today.

Read more

Jared Palmer
https://vercel.com/changelog/ai-integration-and-playground-in-the-vercel-dashboard AI Integrations and playground in the Vercel Dashboard 2024-02-08T13:00:00.000Z

You can now incorporate AI models and services from industry-leading providers into your Vercel projects with a single click.

  • AI tab: Seamlessly integrate with 3rd-party AI providers and vector databases.

  • Playground: In-dashboard playground to explore and experiment with models and preview their outputs.

Check out the documentation to get started.

Read more

Jared Palmer Max Leiter Jueun Grace Yun Nanda Syahrasyad Mariana Castilho Hedi Zandi Rich Haines Kylie Czajkowski
https://vercel.com/blog/pci-compliance-for-ecommerce-teams PCI compliance for ecommerce 2024-02-07T13:00:00.000Z

At Vercel, we strive to provide the best support for ecommerce customers worldwide. As a part of this work, we want to ensure that we provide support for our customers to comply with the Payment Card Industry Data Security Standard (PCI-DSS).

In accordance with Vercel's shared responsibility model, this post will walk you through our recommended approach using an iframe to process payments—creating a secure conduit between your end users and your payment provider.

Read more

Aaron Brown
https://vercel.com/changelog/recent-preview-deployments-now-displayed-in-the-dashboard Recent Preview Deployments now displayed in the dashboard 2024-02-07T13:00:00.000Z

Preview deployments you have recently viewed or deployed are now accessible from the Recent Previews column on your dashboard.

Each recent preview includes a link to the deployment's page in Vercel and a link to the PR or source on your git provider's site when available.

Learn more in the dashboard overview documentation.

Read more

Michael Wenzel Mariana Castilho Sam Becker Christopher Skillicorn Sam Saliba Jhey Tompkins
https://vercel.com/changelog/invite-collaborators-to-view-and-comment-on-your-deployments Invite collaborators to view and comment on your deployments 2024-02-06T13:00:00.000Z

You can now invite emails or team members to view a deployment through the share menu. All invitees will receive an email with a link to the deployment and have access to comment if comments are enabled.

The share menu will display who currently has access to a given deployment. Users with sufficient permissions will also be able to revoke access.

Visit the documentation to learn more about all options for sharing deployments.

Read more

George Karagkiaouris Sam Saliba Christopher Skillicorn Kostyantyn Voytenko Amy Burns Rich Haines
https://vercel.com/changelog/sensitive-environment-variables-are-now-available Sensitive environment variables are now available 2024-02-01T13:00:00.000Z

You can now add sensitive Environment Variables to your projects for added security of secret values like API keys.

While all Environment Variables are encrypted, sensitive values can only be decrypted during builds. This replaces our legacy secrets implementation, which is being sunset.

Get started using Sensitive Environment Variables through the dashboard or with Vercel CLI 33.4.

Read more

Ana Jovanova Marc Greenstock Bel Curcio Angela Zhang
https://vercel.com/changelog/legacy-environment-variable-secrets-are-being-sunset Legacy environment variable secrets are being sunset 2024-02-01T13:00:00.000Z

Legacy secrets are being sunset in favor of Sensitive Environment Variables, which are now shareable across projects.

  • Existing legacy secrets will be automatically converted. You do not need to manually take action for non-development values. Read below to view your impacted projects.

  • All Environment Variables remain securely encrypted. The majority of Vercel workloads have already moved off the legacy secrets functionality.

On May 1st, 2024, secrets will be automatically converted to sensitive Environment Variables for Preview and Production environments. Secrets attached to Development environments will not be migrated.

Why are legacy secrets being sunset?

Our legacy secrets were encrypted values that were global to your entire team and could only be managed through the CLI. Based on your feedback, we have since:

When will I no longer be able to use secrets?

On May 1st, 2024, secrets will be removed from Vercel CLI:

  • Existing secrets added to the Preview and Production environments will be converted to Sensitive Environment Variables

  • Existing secrets added to the Development environment will not be migrated for your security. If you have a secret shared between all environments, including Development, it will not be migrated. These values must be manually migrated.

How can I migrate to Sensitive Environment Variables?

Secrets for Preview and Production environments will be automatically migrated.

For secrets which contain the Development environment, you should create new Sensitive Environment Variables, as these values will not be automatically migrated for your security. If you need to share Environment Variables across projects, you can make them shared.

How can I understand if I’m affected?

To list projects using secrets that will be automatically converted, run:

Read more

Ana Jovanova Marc Greenstock Bel Curcio Angela Zhang
https://vercel.com/blog/how-streaming-helps-build-faster-web-applications How streaming helps build faster web applications 2024-01-31T13:00:00.000Z

Streaming is the key to fast and dynamic web applications.

When streaming, you can progressively send UI from server to client, without needing to wait until all of your data has been loaded. This helps your customers see content immediately, like your main call to action to add an item to the cart.

Read more

Lee Robinson
https://vercel.com/changelog/switch-between-branches-directly-from-deployments Switch between branches directly from deployments 2024-01-31T13:00:00.000Z

You can now switch between branches directly from the Vercel Toolbar.

Access the command menu through the toolbar or ⌘K (Ctrl+K on Windows) and select branch switcher. You’ll see your team’s branches sorted to highlight those with recent activity or unread comments. Then, select a branch to switch to that deployment.

Learn more about the command menu and other features of the toolbar.

Read more

George Karagkiaouris Gary Borton Christopher Skillicorn
https://vercel.com/changelog/instrument-and-trace-applications-with-the-opentelemetry-collector Instrument and trace applications with the OpenTelemetry collector 2024-01-31T13:00:00.000Z

Vercel and Next.js provide increased observability of your applications through OpenTelemetry.

v1.0 of @vercel/otel now supports:

  • Support for Node.js and Edge runtimes

  • Telemetry context propagation, including W3C Trace Context

  • Fetch API instrumentation with context propagation

  • Support and auto-configuration for the Vercel OTEL collector

  • Enhanced metadata reporting

  • Sampling support

  • Custom tracing exporter support

  • Batched trace exporting

Learn more in our documentation or start using the package with Next.js.

Read more

Dima Voytenko JJ Kasper Andrew Gadzik Gary Borton
https://vercel.com/changelog/public-environment-variables-now-display-a-warning Public environment variables now display a warning 2024-01-29T13:00:00.000Z

When creating and editing Environment Variables on Vercel, you can now see hints warning you about potentially leaking secret values to the public. This applies to all frameworks that use a prefix to mark an environment variable as safe for client-side use, such as:

  • Next.js

  • Create React App

  • Vue.js

  • Nuxt

  • Gridsome

  • Gatsby

  • SvelteKit

  • Vite
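The convention these frameworks share can be sketched in a few lines of TypeScript (the prefix list is abbreviated and the helper is hypothetical):

```typescript
// Only variables whose names carry a framework's public prefix (for example
// NEXT_PUBLIC_ in Next.js or VITE_ in Vite) should ever reach client code;
// everything else is assumed to be a server-side secret.
const PUBLIC_PREFIXES = ["NEXT_PUBLIC_", "VITE_", "PUBLIC_"];

function isSafeForClient(name: string): boolean {
  return PUBLIC_PREFIXES.some((prefix) => name.startsWith(prefix));
}
```

The dashboard warning fires in the opposite direction: a value that looks like a secret but carries a public prefix is flagged before it can ship to the browser.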

Learn more about Environment Variables.

Read more

John Phamous Sam Becker
https://vercel.com/changelog/improved-resiliency-for-vercel-functions-with-failover-support Improved resiliency for Vercel Functions with inter-region failover support 2024-01-26T13:00:00.000Z

Vercel Functions can now automatically failover to the next healthy region.

Vercel's Edge Network is resilient to regional outages by automatically rerouting traffic to static assets. Vercel Functions also have multiple availability zone redundancy by default. We are now enhancing this further with support for multi-region redundancy for Functions.

In the event of a regional outage, traffic directed to your Vercel Functions using the Node.js runtime will be automatically re-routed to the next healthy region, ensuring continuous service delivery and uptime without manual intervention.

Failover regions are also supported through Vercel Secure Compute, which allows you to create private connections between your databases and other private infrastructure.

You can configure which regions to fail over to in your vercel.json file. For example, you might want to fall back to several different regions, or to specific regions within a country.
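A sketch of what that configuration might look like; the key name and region IDs here are assumptions, so consult the documentation for the exact field:

```json
{
  "functionFailoverRegions": ["iad1", "sfo1"]
}
```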

Enterprise teams can enable this feature in their project settings. If you are not on Enterprise, get in touch to upgrade and enable function failover.

Read more

Casey Gowrie Tristan Siegel Abdel Sabbah Yanick Bélanger
https://vercel.com/changelog/webhooks-are-now-generally-available Webhooks are now generally available 2024-01-25T13:00:00.000Z

Webhooks allow you to get notified through a defined HTTP endpoint about certain deployment or project events that happened on the Vercel platform.

Webhooks are now available for all Pro and Enterprise customers. You can create a maximum of 20 webhooks per account.

Check out our documentation to create your first webhook.

Read more

Adrian Cooney Fabio Benedetti Florentin Eckl Chris Widmaier
https://vercel.com/changelog/protect-your-edge-config-with-a-json-schema Protect your Edge Config with a JSON schema 2024-01-22T13:00:00.000Z

You can now protect your Edge Config with a JSON schema. Use schema protection to prevent unexpected updates that may cause bugs or downtime.

Edge Config is a low latency data store accessed from Vercel Functions or Edge Middleware. It is ideal for storing experimentation data like feature flags and A/B testing cohorts, as well as configuration data for Middleware routing rules like redirects or blocklists.

To protect an Edge Config with a schema:

  • Select the Storage tab in the dashboard and then create or select your Edge Config

  • Toggle the Schema button to open the schema editing tab. Enter your JSON schema into the editor, and Vercel will use the schema to validate your data in real-time

  • Click Save. This will save changes to both the schema and data
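For example, a minimal schema that allows only a single string-valued `greeting` key (the key name is hypothetical) would be:

```json
{
  "type": "object",
  "properties": {
    "greeting": { "type": "string" }
  },
  "additionalProperties": false
}
```

With this in place, an update that adds an unexpected key or a non-string `greeting` is rejected before it can cause bugs or downtime.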

Check out the documentation to learn more.

Read more

Aaron Morris Dominik Ferber Andy Schneider Chris Widmaier
https://vercel.com/blog/how-core-web-vitals-affect-seo How Core Web Vitals affect SEO 2024-01-19T13:00:00.000Z

Core Web Vitals influence how your application's pages rank on Google. Here, we'll dive into what they are, how they’re measured, and how your users and search ranking are impacted by them.

Read more

Malte Ubl Alice Alexandra Moore
https://vercel.com/changelog/improved-domains-page Improved domains page 2024-01-19T13:00:00.000Z

The domains page for your team now features faster search and a refreshed design.

It's now easier to filter domains based on their renewal status, as well as see options to configure, transfer, move, or delete individual domains.

Check out the documentation to learn more.

Read more

John Phamous Christopher Skillicorn
https://vercel.com/changelog/convert-comments-to-github-issues Convert comments to GitHub Issues 2024-01-19T13:00:00.000Z

With the GitHub Issues integration, you can now convert comments to GitHub Issues.

For teams who use GitHub Issues and Projects to track work, this allows comments to fit into your existing workflow. Collect and discuss feedback in-context on your deployment with comments and then convert those threads to issues to manage and track that work.

Your converted issues will contain the full thread, attached images and screenshots, and a link back to the thread within the preview.

GitHub Issues is part of our growing collection of integrations for comments which includes Slack, Linear, and Jira, now available to all Vercel users.

Check out the documentation to learn more.

Read more

wits Christopher Skillicorn Amy Burns Sam Saliba
https://vercel.com/changelog/5tb-file-transfers-with-vercel-blob-multipart-uploads Up to 5 TB file transfers with Blob multipart uploads 2024-01-17T13:00:00.000Z

Vercel Blob now supports storing files up to 5 TB with multipart uploads.

When using multipart: true, put() and upload() will progressively read and chunk data, upload it in parts, and retry if there are issues.

Network throughput is maximized without consuming excessive memory. Multipart uploads support retrying streams (Node.js streams and the Web Streams API), a unique feature among file upload APIs.

Check out the documentation to learn more.

Read more

Vincent Voyer Luis Meyer
https://vercel.com/blog/architecting-reliability-stripes-black-friday-site Architecting a live look at reliability: Stripe's viral Black Friday site 2024-01-16T13:00:00.000Z

In 2023, businesses processed more than $18.6 billion on Stripe over Black Friday and Cyber Monday (BFCM).

This year, just 19 days before Black Friday, Stripe asked a question: "What if?" What if they opened up Stripe's core metrics and gave a detailed look into their core business, reliability, and the reach of their products?

In response, employees from across the company came together to construct a real-time, publicly accessible microsite that dynamically showcased Stripe's reliability, transaction volumes, global activity, and more, during BFCM—and they showcased it all on Vercel.

By leaning on Vercel's framework-defined infrastructure, the Stripe team was able to focus on design, performance, and reliability rather than on architecting a dynamic cache system from scratch in 19 days.

Stripe built a live experience in record time, allowing viewers to see never-before-seen real-time transaction data. By harnessing Vercel's robust infrastructure and cutting-edge technologies like Next.js, SWR, and ISR, the result was a flawlessly performing microsite.

Optimizing for the unique challenges of a viral real-time microsite

Stripe needed to strike a balance between a fast initial page load and responsive user interactions while effectively managing the application's dynamic components.

At first, they considered a per-client WebSocket approach to handle the real-time updates. However, given the tight timelines and the expected load, they opted for a solution of polling a backend cache while frequently rehydrating that cache—SWR facilitated real-time interactions on the client side, while ISR ensured that dynamic content updates occurred without directly querying the backend for every request.

Let's break it down further:

  • getStaticProps (gSP): Fetched static data during build time, ensuring that the essential data required for the initial page load is pre-fetched and rendered.

  • Stale-While-Revalidate (SWR): Managed the real-time data and interactions on the client side, allowing the application to display the latest data to users while simultaneously triggering a background revalidation process. This ensured that the data remained up-to-date without causing excessive load on the backend.

  • Incremental Static Regeneration (ISR): Automated revalidation of the static content from gSP and SWR, propagated and seeded throughout the entire Vercel Edge Network automatically.

Stripe chose a one-second max-age cache, enabling the application to deliver static pages with minimal backend queries, ensuring the system's capability to efficiently handle millions of requests during peak times by updating content in the background while users accessed static pages.
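The gSP + ISR half of the pattern can be sketched as a Next.js Pages Router data function (the metrics shape and fetch helper are hypothetical); `revalidate: 1` is what implements the one-second max-age strategy:

```typescript
type Metrics = { transactionsPerMinute: number };

// In the real site this would poll the isolated metrics endpoint.
async function fetchMetrics(): Promise<Metrics> {
  return { transactionsPerMinute: 93304 };
}

export async function getStaticProps() {
  const metrics = await fetchMetrics();
  return {
    props: { metrics },
    // Serve the cached page and regenerate it in the background
    // at most once per second.
    revalidate: 1,
  };
}
```

Every visitor gets a cached static page; at most one background regeneration per second hits the backend, no matter how many millions of requests arrive.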

This combined strategy effectively decreased the backend load by redirecting traffic to Vercel's cache network and utilizing ISR for background data generation. The result was a seamless user experience, enabling the real-time update of the counter without sacrificing overall performance.

Ensuring reliability and uncompromising safety

Stripe is known for their unwavering dedication to their infrastructure's reliability. Across BFCM, Stripe handled a peak volume of 93,304 transactions per minute, while maintaining API uptime greater than 99.999%. This campaign had to not only showcase that reliability but also operate with the strictest of security measures to ensure no risk entered into the Stripe ecosystem.

Everything isolated: API-first approach to securing the core business

First, the team ensured complete isolation of the microsite’s data sources from core Stripe infrastructure. This deliberate separation served as a safeguard, guaranteeing that even in the face of any issues, any impact would be confined to an isolated endpoint.

Fallback Strategy: Navigating the Unknown

In the realm of real-time data streaming, where novel metrics were being presented for the first time, the team acknowledged the need for a robust fallback strategy. Should a metric fail to update or cease to provide results, meticulous planning was put in place to detect and mitigate those scenarios.

By teaming up with Vercel, Stripe's innovative BFCM microsite provided a unique, live insight into Stripe's operational reliability, showcasing an impressive handling of high transaction volumes while ensuring exceptional API uptime. In 19 days, the rapid and successful execution of this project not only emphasizes Stripe's role as a leader in innovation, but also establishes a new benchmark for efficiency in the fintech sector.

Read more

Greta Workman
https://vercel.com/changelog/metrics-for-outgoing-requests Metrics for outgoing requests 2024-01-16T13:00:00.000Z

You can now see all outgoing requests for a selected log in Runtime Logs.

Logs now display the status, duration, URL, and a trace for each request. Request metrics work with every request on Vercel, so all frameworks are supported. This makes it easier to debug latency and caching inside your Vercel Functions or when calling databases.

This release also includes various quality-of-life improvements in the Logs UI.

Request metrics are free while in beta and only available to Pro and Enterprise customers.

Read more

Darpan Kakadia Timo Lins Tobias Lins Javi Velasco Kiko Beats
https://vercel.com/changelog/pinecone-integration-now-available-for-vector-databases Pinecone integration now available for vector databases 2024-01-16T13:00:00.000Z

You can now use the Pinecone integration to create vector databases for your AI applications. Vector databases enable augmenting LLMs with the ability to retrieve additional knowledge (RAG) from your provided sources.

This integration is available for users on all plans.

Check out the integration to get started.

Read more

Jared Palmer
https://vercel.com/changelog/vercel-firewall-proactively-protects-against-vulnerability-in-the-clerk-sdk Vercel Firewall proactively protects against vulnerability in the Clerk SDK 2024-01-12T13:00:00.000Z

A security vulnerability in the @clerk/nextjs SDK was recently identified by the Clerk team, which allowed malicious actors to act on behalf of other users.

The Clerk team has already released a patch with the latest version. Please check the public announcement by the Clerk team for more details.

While we still recommend updating to the latest version of the Clerk SDK, Vercel has taken proactive measures on our Firewall to protect our customers on all plans.

We will continue efforts to proactively protect Clerk + Next.js deployments on Vercel through the Vercel Firewall, regardless of which version of Clerk's Next.js SDK is running.

Read more

Shohei Maeda Aaron Brown
https://vercel.com/changelog/login-with-passkey-is-now-supported Login with passkey is now supported 2024-01-11T13:00:00.000Z

You can now use passkeys to log in to Vercel, authenticating with touch, facial recognition, a device password, or a PIN. Passkeys provide a simple and secure authentication option.

How do I use passkeys on Vercel?

  1. Under the Authentication page of Account Settings you will find a passkey button

  2. Click the passkey button to add a new passkey

  3. Select the authenticator of preference and follow the instructions

  4. The new passkey will appear in your list of login connections

  5. You are now able to log in with passkeys

Learn more in our documentation.

Read more

Ana Jovanova Natalie Altman Bel Curcio Christopher Skillicorn
https://vercel.com/changelog/2024-01-account-changes Easier transitions between hobby and pro 2024-01-11T13:00:00.000Z

Your Vercel personal account will soon automatically become a free team.

What is changing?

  • Your personal projects will live under a new free team with the slug {username}s-projects.

  • Future auto-generated deployment URLs will end with {username}s-projects.vercel.app.

  • Your original username remains unchanged and can be viewed in your account settings.

  • Existing deployments will not be affected.

Your free Vercel experience will remain unchanged. Upgrading and downgrading will now be easier, as they will no longer require transferring projects.

When will my account be updated?

We'll start rolling out this change today, and it might take some time before it's your turn. Once it's your turn, in most cases, the update should happen instantly.

For a small number of accounts with thousands of projects or deployments, you may temporarily see a message displayed on the dashboard. During this update period, you will have read access to your personal account projects and resources, but not write access.

Who can I contact if something seems off?

Please contact us using the “Account Management” case type if you have questions about this change or notice something wrong with your account.

Read more

Elliott Johnson Shu Uesugi Kylie Czajkowski
https://vercel.com/blog/common-mistakes-with-the-next-js-app-router-and-how-to-fix-them Common mistakes with the Next.js App Router and how to fix them 2024-01-08T13:00:00.000Z

After talking to hundreds of developers and looking at thousands of Next.js repositories, I've noticed ten common mistakes when building with the Next.js App Router.

This post will share why these mistakes can happen, how to fix them, and some tips to help you understand the new App Router model.

Read more

Lee Robinson
https://vercel.com/changelog/https-dns-records-are-now-supported-in-vercel-dns HTTPS DNS records are now supported in Vercel DNS 2024-01-08T13:00:00.000Z

You can now create HTTPS DNS records in Vercel DNS.

The new HTTPS DNS record type, which builds on the SVCB (Service Binding) record, was recently published as RFC 9460.

This record type is designed for the HTTP protocol to improve client performance and privacy in establishing secure connections. The record can include additional information about the target server, such as supported ALPN protocols (e.g., HTTP/2 or HTTP/3), which can eliminate the need for protocol negotiation or upgrade between client and server, minimizing round trips.
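In RFC 9460 presentation format, such a record advertising HTTP/2 and HTTP/3 support might look like this (domain and TTL hypothetical):

```
example.com. 300 IN HTTPS 1 . alpn="h2,h3"
```

The `1 .` denotes priority 1 with the target being the record's own name; the `alpn` parameter tells clients they can attempt HTTP/2 or HTTP/3 immediately.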

Since the HTTPS record type is still a new standard, not all HTTP clients can support it. Learn more in our documentation.

Read more

Shohei Maeda
https://vercel.com/blog/forrester-total-economic-impact-vercel-ROI Forrester Total Economic Impact™ study: Vercel delivered a 264% ROI 2024-01-04T13:00:00.000Z

Inefficient developer workflows. Poor user experience. Sluggish site performance. These are common woes that customers come to Vercel to alleviate. They result in costs that affect your team’s day-to-day workflow and impact your organization’s bottom line.

But stakeholders still want to know the answer to a simple question: What will the quantifiable ROI on Vercel be?

This is what you can tell them: By migrating to Vercel, businesses saw:

  • A three-year 264% ROI

  • $9.53M in quantifiable benefits

I’m pleased to debut the findings of The Total Economic Impact™ of Vercel’s Frontend Cloud, a commissioned study conducted by Forrester Consulting.

Research overview: Quantifiable benefits totaling $9.53M

To calculate these financial models, Forrester took a multistep approach to evaluating Vercel’s impact. This included interviews with Vercel customers to gather data on benefits, costs, and risks. From this, they created a composite organization based on the interviewed companies that deployed Vercel’s Frontend Cloud.

Like many of you, their goals were to:

  • Improve user experience

  • Improve developer experience

The research outlines a set of quantifiable advantages amounting to a three-year net present value of $9.53M, within each of these objectives:

Improved developer experience

  • Developers spend 90% less time managing frontend infrastructure on Vercel

  • Developers spend 80% less time building and deploying code on Vercel

  • Using Vercel, developers release four times more major website enhancements, improving website performance by up to 90%

Improved user experience

  • Higher customer conversion rates on Vercel generate $2.6 million in incremental profits

  • Higher website traffic on Vercel generates $7.7 million in incremental profits

Build, scale, and secure in 2024

Every day, I have the privilege of speaking with Vercel customers reaping the benefits of the Frontend Cloud. That’s why I’m delighted, but not surprised, to see the results of this study.

I can’t wait to see more customers build, scale, and secure a faster, more personalized web in 2024 and beyond—and I hope this study sheds light on why your organization should join us!

Read more

Paul Staelin
https://vercel.com/changelog/improvements-to-deployment-summaries Improvements to deployment summaries 2023-12-22T13:00:00.000Z

Deployment summaries help you understand how changes in your frontend application code lead to managed infrastructure on Vercel. We've improved the output with the following:

  • Static assets now display a leading / for output paths

  • File sizes are now vertically aligned for easier visual scanning

  • Next.js Metadata outputs are properly categorized under static assets

  • Partial Prerendering now displays the generated Vercel Functions

Read more in our documentation or learn more about framework-defined infrastructure.

Read more

John Phamous Sam Becker
https://vercel.com/blog/the-developer-experience-of-the-frontend-cloud The developer experience of the Frontend Cloud 2023-12-21T13:00:00.000Z

In a large team, creating new code should never be scary. Finding where to place code shouldn't be difficult. And deploying new code certainly shouldn't break anything.

Ideally, your codebase feels transparent: easy to create, adjust, and monitor.

The Frontend Cloud offers a complete Developer Experience (DX) Platform, so you don't have to spend so much developer time curating and maintaining systems that can be easily automated.

Instead, you get centrally-located and collaborative tooling—Git-based workflows with automatic staging environments and more—where you can easily leverage the self-serve tools in front of you that just work by default.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/restrict-access-with-ip-blocking-by-cidr-range Restrict access with IP blocking by CIDR range 2023-12-21T13:00:00.000Z

Vercel Firewall protects your application from DDoS attacks and unauthorized access.

Enterprise teams now have increased security with the ability to control traffic and restrict access through static IP addresses or entire network CIDR ranges.

Stay secure by blocking entire subnets, or restricting access from untrusted networks, to prevent attacks against your applications.

Learn more in our documentation or contact our sales team to upgrade to Enterprise.

Read more

Natalie Altman Tristan Siegel
https://vercel.com/changelog/view-upload-and-delete-blob-files-in-the-dashboard View, upload, and delete Blob files in the dashboard 2023-12-21T13:00:00.000Z

You can now manage your Vercel Blob files from the dashboard using the new file browser:

  • View individual files or folders

  • Upload new files, including support for drag-and-drop and bulk uploads

  • Delete individual files or in bulk

Try it out or learn more about Vercel Blob.

Read more

Vincent Voyer Luis Meyer
https://vercel.com/blog/aws-reinvent-2023-iteration-velocity AWS re:Invent 2023: Iteration velocity is the solution to all software problems 2023-12-20T13:00:00.000Z

Recently the Vercel Team had the pleasure of sponsoring AWS re:Invent 2023. This year we attended as an official part of the AWS Marketplace, which makes it possible to onboard and build on Vercel in just a few clicks.

While at re:Invent, I was able to share my thoughts on The Frontend Cloud, Generative UI, and the keys to a highly iterative team. Here’s a look at my talk.

The shift to the frontend—the final frontier of differentiation

The current state of web development requires a focus on the frontend—because the frontend is where you differentiate from your competition. It’s where you meet your customers, users, and partners, and become the leader in your market.

Composable architecture for better speed, UX, and AI adoption

But many teams are stuck with a monolithic system, bogged down by configuration and left with less room for innovation. These stacks are:

  • Inflexible

  • Expensive

  • Difficult to scale

This is especially true in the age of AI, which is changing the tech landscape and necessitating that businesses remain nimble.

Imagine your website is built on a legacy stack, and you want to add an AI chatbot. In a managed monolith, you're beholden to that vendor's roadmap. AI capabilities must be added to the entire app before anyone can use it—in other words, frontend devs are left waiting for your vendor to make changes. Meanwhile, in composable patterns, frontend devs can directly integrate AI services and get going whenever they need.

The answer to this problem: Implement a composable architecture—build a composable frontend with complete control over the user and developer experience while plugging in your preferred APIs, and don’t be held back from important user features.

A composable solution is integral to iteration velocity.

Iterate faster and make better decisions with the Frontend Cloud

One of my firmest beliefs is that iteration velocity can solve all software problems.

That’s because, as software engineers, we know that we are going to make mistakes. So we must iterate, experiment, and fix things. Quickly.

The velocity component is about speed and direction—it means moving faster and making better decisions each step of the way.

Here’s how the Frontend Cloud helps you iterate faster at every step of your software lifecycle:

Develop your application

Preview your site after you build it

  • Preview deployments: Get an automatic direct, immutable deployment with every commit push, that you can share with stakeholders

  • Comments: Figma-like live feedback on your Preview deployments, actionable through Slack, Linear and Jira integrations

  • Fast builds: Speedy builds at each step of the Preview process, to maintain team velocity

Ship your site into production

  • Continuous Deployment: Deploy quickly in production, automatically via your favorite Git provider

  • Edge Config: Push your app's configuration to the global edge, so you can run A/B tests and other experiments as close as possible to your users

  • Instant Rollback: We all make mistakes, so instantly roll back to any previous version of your app in a second or less

Leverage the Frontend Cloud to enhance your time-to-market and optimize decision-making at every step—that is, supercharge your iteration velocity.

Generative UI helps you take that first step

Iteration velocity matters at every step of the way, and our new product v0 comes in at the very first step. v0 makes website creation as simple as describing your ideas. We’ve dubbed it Generative UI—combining the best practices of frontend development with the potential of generative AI.

v0 builds the first iteration of your application, similar to how ChatGPT does.

Here’s how it works:

  • Describe the interface you want to build

  • v0 produces code using open-source tools like React, Tailwind CSS, and shadcn/ui

  • Select an iteration and keep editing in v0

  • When you're ready, copy and paste that code into your app and develop from there

It doesn’t replace the entire process, but rather it gets you started in seconds—another piece of your iteration-velocity toolkit, betting on the power of simple text prompts.

Configure less. Create more.

One of my favorite parts of my job is watching brands achieve peak iteration velocity and solve software problems with tools like these.

Another iteration velocity hack: With partnerships like AWS Marketplace, Vercel helps users take advantage of best-in-class AWS infrastructure with zero configuration, making a composable architecture cost-effective, flexible, and secure.

By going composable and leveraging tools like the Frontend Cloud and v0, teams can access peak iteration velocity, and lead with their frontend into an AI-first world.

Read more

Malte Ubl
https://vercel.com/changelog/improved-log-drain-filtering Improved Log Drain filtering 2023-12-20T13:00:00.000Z

Log Drains now support the following options through the dashboard and API:

  1. Filtering based on environment (production or preview)

  2. Configuring a sample size to reduce the throughput

Learn more in our documentation.

Read more

Chris Widmaier Julia Shi Darpan Kakadia
https://vercel.com/changelog/stage-and-manually-promote-deployments-to-production Stage and manually promote deployments to production 2023-12-19T13:00:00.000Z

You can now control when domains are assigned to deployments, enabling the manual promotion of production deployments to serve traffic.

When a new deployment is created (with our Git Integrations, CLI, or REST API), Vercel will automatically apply any custom domains configured for the project.

You can now create staged deployments that do not assign domains, which can later be promoted to production and serve traffic. This is helpful for custom workflows and having multiple production environments for QA or testing.

From the dashboard

  • Disable the assignment of domains for your production branch in your Git project settings.

  • Find your deployment from the list of all deployments and use the right menu to select Promote to Production.

From the CLI

  • vercel --prod --skip-domain

  • vercel promote [deployment-id or url]

Learn more in our documentation.

Read more

Sean Massa Sam Becker Mariano Cocirio Chris Barber Trek Glowacki
https://vercel.com/changelog/revert-and-pin-deployments-with-instant-rollback Revert and pin deployments with Instant Rollback 2023-12-19T13:00:00.000Z

Instant Rollback enables you to quickly revert to a previous production deployment, making it easier to fix breaking changes.

You can now choose to prevent the automatic assignment of production domains when rolling back. Reverted deployments will not be replaced by new production deployments until you manually promote a new deployment.

Learn more in our documentation.

Read more

Sean Massa Sam Becker Mariano Cocirio Chris Barber Trek Glowacki
https://vercel.com/changelog/manually-create-deployments-by-commit-or-branch-in-the-dashboard Manually create deployments by commit or branch in the dashboard 2023-12-19T13:00:00.000Z

You can now initiate new deployments directly from the dashboard using a git reference. This approach is helpful when git providers have service interruptions with webhook delivery.

To create a deployment from a git branch or SHA:

  1. From the dashboard, select the project you'd like to create a deployment for.

  2. Select the Deployments tab. On the Deployments page, select Create Deployment from the three-dot menu to the right of the Deployments header.

Depending on how you would like to deploy, enter the following:

  1. Targeted Deployments: Provide the unique ID (SHA) of a commit to build a deployment based on that specific commit.

  2. Branch-Based Deployments: Provide the full name of a branch when you want to build the most recent changes from that specific branch.

Finally, select Create Deployment and Vercel will build and deploy your commit or branch.

When the same commit appears in multiple branches, Vercel will prompt you to choose the appropriate branch configuration. This choice is crucial as it affects settings like environment variables linked to each branch.

Learn more in our documentation.

Read more

Felix Haus Balazs Varga Sam Becker Mariano Cocirio
https://vercel.com/changelog/improved-build-compute-performance-for-enterprise-customers Improved build compute performance for Enterprise customers 2023-12-12T13:00:00.000Z

Enterprise customers now have faster build compute infrastructure by default.

Builds are now 15% faster than Pro by median, and 7% faster than the previous Enterprise build infrastructure. Additionally, Enterprise customers can now purchase enhanced build machines with larger memory and storage.

Learn more about builds or contact us to upgrade to Enterprise.

Read more

Guðmundur Bjarni Ólafsson Andrew Healey Gargi Sharma Janos Szathmary Mariano Cocirio
https://vercel.com/changelog/vercel-functions-now-scale-12x-faster-for-high-volume-requests Vercel Functions now scale 12x faster for high-volume requests 2023-12-11T13:00:00.000Z

Vercel Functions now scale 12x faster for high-volume requests on paid plans:

  • The default concurrency quota has increased to 30,000

  • Scale out by 1,000+ concurrency every 10 seconds automatically

  • Ideal for unpredictable traffic or flash sales

Learn more about automatic concurrency scaling with Vercel Functions.

Read more

Joe Haddad
https://vercel.com/changelog/unified-documentation-search-across-vercel-next-js-and-turborepo Unified documentation search across Vercel, Next.js, and Turborepo 2023-12-08T13:00:00.000Z

Searching across the Vercel documentation is now faster and more intuitive with a redesigned ⌘+K menu that includes:

  • Cross-platform search: Search across Vercel, Next.js, and Turborepo documentation sites from the Vercel docs or dashboard.

  • Customized search results: Filter your search by choosing a specific platform (Vercel, Next.js, or Turborepo) or viewing all results combined.

  • Most relevant results: Quickly view the three most relevant results to your query, ensuring you get the best matches instantly.

You can access the menu by pressing ⌘+K on macOS or Ctrl+K on Windows and Linux from the Vercel documentation site, or from the dashboard with Shift+D.

Check out the documentation to learn more.

Read more

Jhey Tompkins Rich Haines
https://vercel.com/changelog/improved-speed-insights-experience Improved Speed Insights experience 2023-12-07T13:00:00.000Z

Speed Insights measures site performance and helps you understand areas for improvement. This includes Core Web Vitals like First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift, and more.

The Speed Insights experience has been improved to include:

  • Support for all frontend frameworks: You can now use Speed Insights with any framework using the new @vercel/speed-insights package. This includes supporting dynamic route segments in frameworks like SvelteKit and Remix.

  • First-party data ingestion: Data will now be processed directly through your own domain, eliminating the third-party domain lookup.

  • Updated scoring criteria: All previous and future metrics measured by Speed Insights are now updated with new weights, based on the latest guidance from Core Web Vitals and Lighthouse.

  • UI improvements: You can now view performance data by region. Displayed metrics now default to p75 (the experience of the fastest 75% of your users).

  • Time to First Byte (TTFB): This metric is now measured, providing visibility into how quickly the server responds to the first request.

  • Advanced customization: New tools to intercept requests and set sample rates on a per-project basis.

Speed Insights is available on all plans. Learn more about upgrading to the new package and see how to take advantage of the new features.
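To make the p75 default above concrete, here is a hedged sketch of a nearest-rank percentile computation over sample timings (the values are illustrative, and this is not how Speed Insights is implemented internally):

```typescript
// Nearest-rank percentile: the smallest sample such that at least
// a fraction q of all samples are less than or equal to it.
function percentile(values: number[], q: number): number {
  const s = [...values].sort((a, b) => a - b);
  const k = Math.ceil(q * s.length) - 1;
  return s[k];
}

// p75 of four hypothetical page-load times (ms): 75% of users
// experienced this value or better.
console.log(percentile([120, 180, 240, 900], 0.75)); // → 240
```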

Read more

Tobias Lins Timo Lins Damien Simonin Feugas Chris Widmaier
https://vercel.com/blog/introducing-conformance Introducing Conformance and Code Owners: Move fast, don't break things 2023-12-05T13:00:00.000Z

As organizations grow, it can become hard to sustain fast release cycles without diminishing code health and letting errors slip into production. It shouldn't be this way. We should be able to move fast without breaking things—making quick updates and innovating while retaining great performance, security, and accessibility.

Today, we're releasing new features to Vercel's Developer Experience Platform to help Enterprise teams ship higher quality code, with the same velocity even as teams and codebases scale.

  • Conformance: Automate detection of critical issues early in the development lifecycle and prevent them from reaching production.

  • Code Owners: Find who is responsible for the code and make sure that code changes are reviewed by the right people, every time.

  • A reimagined dashboard experience: A workspace to surface code health insights, help with cross-team collaboration, and ensure a better onboarding experience for new team members.

Conformance: Out-of-the-box static analysis

Our Conformance tooling runs static analysis checks over your codebase to find critical issues before merging—allowing you to move quickly without compromising quality. It automatically checks for issues that may result in performance, security, or quality problems in your production applications.

Conformance rules span multiple files, instead of verifying each file individually, providing a holistic perspective on your codebase. It also adds frontend specific context to issues, classifies and tags issues, as well as assigns a severity with granular ownership of both rules and rule violation exceptions.

By providing a high-level score and tracking issues in the dashboard, you get a barometer for assessing accumulated technical debt. Much like a performance budget, this score becomes invaluable in understanding when and where to prioritize tasks. Specifically, you can allowlist a specific number of issues before going to production, then track your progress as you remove them from the allowlist and burn down the remaining issues to improve code health.

Conformance was built by the creators of Next.js and Turborepo. By codifying decades of their combined experience crafting highly performant websites, along with deep knowledge of the framework ecosystem, we're able to go beyond catching errors toward actually optimizing your application.

You can run Conformance within your CI/CD systems or locally to:

  • Next.js: Use guardrails crafted by the inventors of Next.js to catch common issues that can happen in Next.js applications. For example, detect when getServerSideProps is not needed, as there's no use of the context parameter and it could be static generated.

  • Performance: Catch issues that negatively affect the performance of your website. For example, prevent blocking serial asynchronous calls in your applications.

  • Code health: Set general rules that can prevent things from negatively affecting your codebase or code health. For example, require that a workspace package that uses TypeScript files has configured TypeScript correctly for that workspace.

  • Security: Act as a first layer of threat detection for security vulnerabilities. For example, require that important security headers are set correctly for Next.js apps and contain valid directives.
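As an illustration of the first Next.js guardrail above, here is a hedged sketch of the anti-pattern and its static fix. The function names are standard Next.js data-fetching exports; the page props are hypothetical:

```typescript
// Anti-pattern Conformance can flag: getServerSideProps never reads
// the request context, so nothing here is per-request and the page
// could be statically generated instead.
export async function getServerSideProps() {
  return { props: { greeting: 'hello' } };
}

// The static equivalent, which avoids a server render on every request:
export async function getStaticProps() {
  return { props: { greeting: 'hello' } };
}
```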

Accelerate innovation, reduce time spent on bugs

Deploying bad code has an outsized impact on a team's velocity.

Debugging alone can take away a year's worth of valuable developer time. Conformance strategically places guardrails to redirect brainpower towards creation, rather than time-consuming error detection. By proactively resolving potential issues, Conformance frees developers from unnecessary dependencies, leading to increased productivity and allowing them to channel their efforts into the projects and features that will improve end-customer experiences.

Code Owners: Framework-defined ownership

As your company grows, you need a code ownership system that grows with you.

Code Owners works with your Git integration, ensuring code reviews with smart reviewer assignments, and an escalation protocol that ensures appropriate individuals review your code and escalate concerns when needed.

Code Owners mirrors the structure of your organization. This means Code Owners who are higher up in the directory tree act as broader stewards over the codebase and are the fallback if owners files go out of date, such as when developers switch teams. And, with Modifiers your organization can tailor your code review process. For example, you can assign reviews in a round-robin style, based on who's on call, or to the whole team.

All while elevating application security

Security remains at the forefront of every feature we release. Creating security rules with Conformance and Code Owners brings your security team into the development process. Conformance catches issues that could become security vulnerabilities, like unsafe usage of cookies in your application, before they make it to production. Similarly, Code Owners ensures no one on your team becomes a security vulnerability.

Using the features together, you can define an allowlist file for Security rules and then assign your Security team as the code owner of that file. Whenever someone tries to add something new to the list, the Security team needs to approve it.

A reimagined dashboard experience for monorepos

When you start using Conformance, you'll also see a redesigned dashboard within vercel.com that gives developers and leadership team members an overall view of project health. At a glance, any team member can see global code health, Conformance scores, and the teams responsible for those repositories. This means you can understand problem areas and investigate errors by seeing all of your allowlisted performance, security, or code-quality errors.

Move fast, don’t break things

Conformance and Code Owners are a major step forward in providing developers with the tools and resources they need to build better, more efficient applications.

Today, Conformance and Code Owners are Generally Available on Vercel for Enterprise teams.

Read more

Brody McKee Cody Brouwers
https://vercel.com/changelog/conformance-and-code-owners-are-now-generally-available-for-enterprise-teams Conformance and Code Owners are now generally available for Enterprise teams 2023-12-05T13:00:00.000Z

Today, we're releasing new features to Vercel's Developer Experience Platform to help Enterprise teams ship higher quality code, faster—even as teams and codebases scale:

Conformance: Maintain high-quality code standards across projects in your codebase.

  • Conformance CLI: Run Conformance in your CI/CD systems to block the merge of new code, or run it locally to catch issues before even committing them.

  • Custom Rules: Add organization-specific rules to ensure codebase consistency.

Code Owners: Integrate with your Git client for streamlined code reviews and smart reviewer assignments.

  • Reviewer Assignments: Intelligent code review assignments based on your organization's structure.

  • Modifiers: Customize your review process to fit your team's needs. Assign reviews in a round-robin style, based on who's on call, or to the whole team.

A reimagined dashboard experience: When you start using Conformance, you’ll see a reengineered workspace to surface code health insights, aid cross-team collaboration, and ensure a better onboarding experience for new team members.

Check out the documentation to learn more or contact us to get started.

Read more

Mark Knichel Brody McKee Cody Brouwers Mariano Cocirio Pearl Latteier Gaspar Garcia Justin Vitale Christopher Skillicorn
https://vercel.com/blog/the-user-experience-of-the-frontend-cloud The user experience of the Frontend Cloud 2023-12-04T13:00:00.000Z

The world's best websites load before you've finished this sentence.

Those websites can't be static, but serving performance and personalization to a global user base has historically been complex.

The primary goal of Vercel's Frontend Cloud is to collect industry-best practices into one easy-to-use workflow, integrating new and better solutions as they come.

In this article, we'll look at why speed and personalization matter to your business, and how the Frontend Cloud gives you abundant options for both.

Read more

Alice Alexandra Moore
https://vercel.com/blog/guide-to-fast-websites-with-next-js-tips-for-maximizing-server-speeds Guide to fast websites with Next.js: Tips for maximizing server speeds and minimizing client burden 2023-11-29T13:00:00.000Z

Tinloof is an agency obsessed with delivering fast websites, such as the one for jewelry brand Jennifer Fisher, which went from a Shopify theme to a modern Next.js website that instantly loads with 80% less JavaScript.

Read more

Seif Ghezala
https://vercel.com/changelog/faster-and-more-reliable-managed-infrastructure Faster and more reliable Managed Infrastructure 2023-11-29T13:00:00.000Z

We've upgraded our Managed Infrastructure resulting in up to 45% faster routing at p99 and reliability improvements for all plans.

When a request is made to a Vercel-managed site, traffic is routed to the nearest Edge Network region with our Anycast routing. Vercel processes the request, identifies the deployment to serve, and instantly retrieves related metadata about the requested deployment.

Now with optimized metadata retrieval and routing, this performance enhancement benefits all workloads. Responses to static resources are then fetched from storage, or dynamic content is generated through Vercel Functions, based on the routing details from the deployment metadata.

These infrastructure improvements benefit all existing and new deployments. Deploy now or learn more about Vercel's Managed Infrastructure.

Read more

Brooke Mosby
https://vercel.com/changelog/node-js-16-deprecation Node.js 16 is being deprecated on January 31, 2025 2023-11-29T13:00:00.000Z

Following the Node.js 16 end of life on September 11, 2023, we are deprecating Node.js 16 for Builds and Functions on January 31, 2025.

Will my existing deployments be affected?

No, existing deployments with Serverless Functions will not be affected.

When will I no longer be able to use Node.js 16?

On January 31, 2025, Node.js 16 will be disabled in project settings. Existing projects using 16 as the version for Functions will display an error when a new deployment is created.

How can I upgrade my Node.js version?

You can configure your Node.js version in project settings or through the engines field in package.json.
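As a minimal sketch, pinning a supported Node.js version through the engines field might look like this in package.json (18.x is shown only as an example version):

```json
{
  "engines": {
    "node": "18.x"
  }
}
```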

How can I see which of my projects are affected?

You can see which of your projects are affected by this deprecation.

Read more

Lee Robinson
https://vercel.com/blog/commerceui-headless-shopify-nextjs The power of headless: Ecommerce success with Next.js, Vercel, and Shopify 2023-11-28T13:00:00.000Z

Translating designer brand experiences to the digital world requires putting complete control in the hands of the developer. A lack of ability to fine-tune performance optimizations and application decisions often limits UI possibilities.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/upgrading-ruby-v2-7-to-v3-2 Upgrading Ruby v2.7 to v3.2 2023-11-22T13:00:00.000Z

Ruby v3.2 is now generally available and is the new default runtime version for Ruby-based Builds and Serverless Functions. Additionally, Ruby v2.7 will be discontinued on December 7th, 2023.

  • Existing deployments that use Ruby v2.7 will continue to work

  • New deployments will use Ruby v3.2 by default, or if ruby "~> 3.2.x" is defined in the Gemfile

  • After December 7th, 2023, new deployments that define ruby "~> 2.7.x" in the Gemfile will no longer build

Only the minor version (3.2) is guaranteed, meaning we will always use the latest patch version available within the minor range.
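A minimal Gemfile sketch pinning the new default, using the version constraint quoted above (the source line and comment are illustrative):

```ruby
# Gemfile
source "https://rubygems.org"

# Pin to Ruby 3.2; the latest patch release within 3.2 will be used.
ruby "~> 3.2.x"
```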

Read the documentation for more.

Read more

Nathan Rajlich Ethan Arrowood Sean Massa
https://vercel.com/blog/the-foundations-of-the-frontend-cloud The foundations of the Frontend Cloud 2023-11-21T13:00:00.000Z

Core web app decisions tend to center the backend, due to its complexity and impact over huge swaths of the business.

However, frontends have grown far more important and complex in their own right. When not prioritized, the intricate infrastructure around them can quickly spin out of control, dragging teams into untold amounts of tech debt.

As decoupled architecture becomes more common, developers are turning to the Frontend Cloud to automate away the behind-the-scenes hassles of creating and growing dynamic websites.

Instead of managing infrastructure as a separate step of the development process, the Frontend Cloud provisions global infrastructure for you, based on your existing application code.

This approach to web development massively increases developer velocity, allowing your team to experiment safely and meet shifting market demands. Teams of all sizes can effortlessly scale global apps while maintaining the highest possible bars for performance, personalization, and security.

You can think of the backend cloud as your cost center and the Frontend Cloud as your profit center.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/convert-comments-on-deployments-to-jira-issues Convert Comments on deployments to Jira issues 2023-11-21T13:00:00.000Z

Comments on your deployments can now be converted into Jira issues. This makes it easy to take action on feedback in the workflows your team is already using.

You can name your issue and select the project and issue type without leaving your deployment. Issues retain the full thread history with any attached images and include a link back to where the comment was left.

Jira is part of our growing collection of integrations for comments which includes Slack and Linear, available to Pro and Enterprise users as well as Hobby users with public git repositories.

Check out the documentation to learn more.

Read more

wits Shaquil Hansford Christopher Skillicorn
https://vercel.com/blog/how-to-scale-a-large-codebase How to scale a large codebase 2023-11-16T13:00:00.000Z

Scaling a codebase is an integral, and inevitable, part of growing a software company.

You may have heard many terms thrown around as answers — monoliths, monorepos, micro frontends, module federation, and more.

At Vercel, we’ve helped thousands of large organizations evolve their codebases, and we have an opinion on the optimal way to build software.

Read more

Lee Robinson
https://vercel.com/changelog/nodejs-20 Node.js v20 LTS is now available in beta 2023-11-16T13:00:00.000Z

As of today, Node.js version 20 can be used as the runtime for Builds and Serverless Functions. Select 20.x in the Node.js Version section on the General page in the Project Settings. The default version remains Node.js 18.

Node.js 20 introduces several new features including:

  • New experimental permission model

  • Synchronous import.meta.resolve

  • Stable test runner

  • Performance updates to V8 JavaScript Engine and Ada (URL Parser)

Node.js 20 is faster and introduces new core APIs eliminating the need for some third-party libraries in your project. Support for Node.js 20 on Vercel is currently in beta.

The exact version used by Vercel is 20.5.1 and will automatically update minor and patch releases. Therefore, only the major version (20.x) is guaranteed.

Read the documentation for more.

Read more

Ethan Arrowood Nathan Rajlich Janos Szathmary Guðmundur Bjarni Ólafsson
https://vercel.com/changelog/vercel-cron-jobs-are-now-generally-available Vercel Cron Jobs are now generally available 2023-11-15T13:00:00.000Z

Vercel Cron Jobs let you run scheduled jobs for things like data backups or archives, triggering updates to third-party APIs, sending email and Slack notifications, or any task you need to run on a schedule.

By using a specific syntax called a cron expression, you can define the frequency and timing of each task. Cron Jobs work with any frontend framework and can be defined in vercel.json. You can use them to run your Serverless Functions and Edge Functions.
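As a sketch of that configuration, a vercel.json defining one daily job might look like this (the /api/backup path is hypothetical; the cron expression means 5:00 AM UTC every day):

```json
{
  "crons": [
    {
      "path": "/api/backup",
      "schedule": "0 5 * * *"
    }
  ]
}
```

On each scheduled run, Vercel invokes the function served at the given path.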

During the beta, we made Cron Jobs more secure by providing an environment variable with the name CRON_SECRET. We also added support for Deployment Protection and Instant Rollback.

Cron Jobs are now included for customers on all plans. Per account, users on the Hobby plan will have access to 2 Cron Jobs, users on the Pro plan will have access to 40 Cron Jobs, and users on the Enterprise plan will have access to 100 Cron Jobs.

Check out our documentation or deploy an example with Cron Jobs.

Read more

Andy Schneider Chris Widmaier Vincent Voyer
https://vercel.com/changelog/automatically-detect-and-replay-layout-shifts Automatically detect and replay layout shifts from the Vercel Toolbar 2023-11-14T13:00:00.000Z

Vercel can now automatically detect and replay layout shifts on your deployments from the Vercel Toolbar.

Layout shifts are reported and notified through the Toolbar. Each reported shift includes a summary of what caused the shift and how many elements it affected. Additionally, you replay and animate the shift to see it again.

The Toolbar is automatically added to all Preview Deployments, but can also be used in localhost and in production (likely behind your own staff authentication checks) when using the @vercel/toolbar package.

Check out the documentation to learn more.

Read more

wits Sam Saliba Christopher Skillicorn Jhey Tompkins
https://vercel.com/blog/partial-prerendering-with-next-js-creating-a-new-default-rendering-model Partial prerendering: Building towards a new default rendering model for web applications 2023-11-09T13:00:00.000Z

At this year’s Next.js Conf, we discussed the developer and user experience challenges of global delivery of dynamic web applications. How can we fetch data without expensive waterfalls and also deliver content directly from the edge?

The answer to all of these current challenges: Partial Prerendering (PPR).

PPR combines ultra-quick static edge delivery with fully dynamic capabilities and we believe it has the potential to become the default rendering model for web applications, bringing together the best of static site generation and dynamic delivery.

Today, you can try an experimental preview of PPR with Next.js 14 on Vercel or visit our demo for a first impression of PPR.

Read more

Sebastian Markbåge Malte Ubl
https://vercel.com/changelog/vercel-has-proactively-protected-against-a-vulnerability-in-the-sentry-next Vercel Firewall proactively protects against vulnerability in the Sentry Next.js SDK 2023-11-09T13:00:00.000Z

A security vulnerability was discovered that affects Sentry’s Next.js SDK, which made it possible to exploit Sentry’s Tunnel feature to establish Server-Side Request Forgery (SSRF) attacks.

The Sentry team has already released a patch with the latest version 7.77.0.

While we still recommend updating to the latest version of the Sentry SDK, Vercel has taken proactive measures on our firewall to protect our customers.

We will continue to proactively protect all Sentry + Next.js deployments on Vercel through the Vercel Firewall, regardless of Sentry's Next.js SDK version running.

Read more

Matheus Fernandes Shohei Maeda
https://vercel.com/changelog/backups-now-available-for-vercel-edge-config Backups now available for Vercel Edge Config 2023-11-08T13:00:00.000Z

Vercel Edge Config is our global low-latency data store for feature flags, experiments, and configuration metadata. Now, backups of your Edge Config are automatically created with every update to an Edge Config's items. You can restore backups from the Storage tab in your Vercel dashboard.

Customers on all plans can take advantage of backups. Hobby customers have 7 days of backup retention, Pro customers have 90 days of backup retention, and Enterprise customers have 365 days of backup retention.

Check out the documentation to learn more.

Read more

Aaron Morris Dominik Ferber Amy Burns
https://vercel.com/changelog/report-on-out-of-memory-or-disk-space More detailed report on out of memory or disk space errors on builds 2023-11-08T13:00:00.000Z

You will now see more information in the build logs when your build fails due to either exhausting the available memory (OOM) or disk space (ENOSPC).

In the case of OOM, your build logs will confirm the event. For ENOSPC situations, detailed information on disk space allocation is provided.

Check out our documentation to learn more.

Read more

Peter van der Zee Felix Haus
https://vercel.com/changelog/favorite-teams-and-projects-to-appear-in-your-dashboard Favorite teams and projects to appear in your dashboard 2023-11-07T13:00:00.000Z

We recently introduced an improved project and team switcher on Vercel, including the option to favorite projects.

Now, favorited projects will also appear in your dashboard overview, and you can easily add and remove them from the context menu.

Read more

Christopher Skillicorn John Phamous
https://vercel.com/blog/building-the-most-ambitious-sites-on-the-web-with-vercel-and-next-js-14 Building the most ambitious sites on the Web with Vercel and Next.js 14 2023-11-06T13:00:00.000Z

At this year's Next.js Conf, thousands of community members tuned in to learn about updates to the framework thousands of developers deploy with everyday. Among the announcements were:

Read more

Guillermo Rauch
https://vercel.com/blog/building-secure-and-performant-web-applications-on-vercel Building secure and performant web applications on Vercel 2023-11-06T13:00:00.000Z

Web Apps are the ultimate dynamic use-case on the Web. As opposed to websites, web apps typically require or facilitate user-to-data interactions. Applications like customer-facing dashboards, support portals, internal employee apps, and much more require up-to-date, personalized information delivered in a performant and secure way.

Vercel's Frontend Cloud offers support for deploying complex and dynamic web applications with managed infrastructure so you have control and flexibility without having to worry about configuration and maintenance—and yes, this means everything required to serve your App.

Read more

Alli Pope Matt Jared
https://vercel.com/changelog/deployment-protection-is-now-enabled-by-default-for-new-projects Deployment Protection is now enabled by default for new projects 2023-11-02T13:00:00.000Z

Deployment Protection is now enabled by default for all new projects, and our full set of protection options is now generally available.

Deployment Protection includes a series of features that ensure you can keep your Vercel deployments secure. Secure your Preview and Production deployments with:

  • Vercel Authentication: Restricts access to your deployments to only Vercel users with suitable access rights. Vercel Authentication is available on all plans.

  • Password Protection: Restricts access to your deployments to only users with the correct password. Password Protection is available on the Enterprise plan, or as a paid add-on for Pro plans.

  • Trusted IPs: Restricts access to your deployments to only users with the correct IP address. Trusted IPs is available in addition to Vercel Authentication and available as a part of the Enterprise plan.

To configure existing deployments with Deployment Protection, you can use this migration guide. For all new deployments, Deployment Protection with Vercel Authentication is now enabled by default.

Check out the documentation to learn more.

Read more

Kit Foster Balazs Varga Natalie Altman
https://vercel.com/changelog/protect-past-production-deployments-with-deployment-protection Protect past Production Deployments with Deployment Protection 2023-11-02T13:00:00.000Z

Ensure your past production deployments remain secure by enabling Standard Protection as the default setting for your deployments. With Standard Protection, Vercel Authentication or Password Protection will ensure that all of your preview and production deployments remain secure.

Migrating existing deployments to use Standard Protection will protect both preview and generated production URLs. Standard Protection restricts access to the production generated deployment URL

Learn more about migrating to Standard Protection in our documentation.

Deployment Protection is available on all plans.

Read more

Kit Foster Balazs Varga Natalie Altman
https://vercel.com/changelog/trusted-ips-is-now-generally-available-for-enterprise-customers Trusted IPs for Deployment Protection is now Generally Available 2023-11-02T13:00:00.000Z

Trusted IPs are a feature of Deployment Protection that allow you to limit access to your deployments by IP address. Configure Trusted IPs in addition to Vercel Authentication to ensure only your team members can access and make changes to your deployments.

For customers who rely on a VPN or additional proxy, Trusted IPs ensure you can restrict access to your deployments to only users behind the VPN.

You can configure Trusted IPs by specifying a list of IPv4 addresses, or by CIDR ranges.

Trusted IP for Deployment Protection is only available for customers on the Enterprise plan.

Check out the documentation to learn more.

Read more

Kit Foster Balazs Varga Natalie Altman
https://vercel.com/blog/understanding-cookies Understanding cookies 2023-11-01T13:00:00.000Z

Cookies are small pieces of data stored by web browsers on a user's device at the request of web servers. They are sent back unchanged by the browser each time it accesses that server. Cookies allow the server to "remember" specific user information, facilitating functionalities like maintaining user sessions, remembering preferences, and tracking user behavior.

Read more

Lydia Hallie
https://vercel.com/changelog/next-js-14 Next.js 14 on Vercel 2023-10-26T13:00:00.000Z

Next.js 14 is fully supported on Vercel. Build data-driven, personalized experiences for your visitors with Next.js, and automatically deploy to Vercel with optimizations, including:

  • Streaming: The Next.js App Router natively supports streaming responses. Display instant loading states and stream in units of UI as they are rendered. Streaming is possible for Node.js and Edge runtimes—with no code changes—with Vercel Functions.

  • React Server Components: Server Components allow you to define data fetching at the component level, and easily express your caching and revalidation strategies. On Vercel, this is supported natively with Vercel Functions and the Vercel Data Cache, a new caching architecture that can store both static content and data fetches.

  • React Server Actions: Server Actions enable you to skip manually writing APIs and instead call JavaScript functions directly for data mutations. On Vercel, Server Actions use Vercel Functions.

  • Partial Prerendering (Experimental): A new compiler optimization for dynamic content with a fast initial static response based on a decade of research and development into server-side rendering (SSR), static-site generation (SSG), and incremental static revalidation (ISR).

Additionally in Next.js 14 you will find:

  • Turbopack: 5,000 tests passing for App & Pages Router with 53.3% faster local server startup and 94.7% faster code updates with Fast Refresh.

  • Forms and mutations: The user experience is improved when the user has a slow network connection, or when submitting a form from a lower powered device.

  • Metadata: Blocking and non-blocking metadata are now decoupled. Only a small subset of metadata options are blocking, and we ensured non-blocking metadata will not prevent a partially prerendered page from serving the static shell.

  • Logging: More verbose logging around fetch caching can be enabled.

  • create-next-app: There is now an 80% smaller function size for a basic create-next-app application.

  • Memory management: Enhanced memory management is available when using edge runtime in development.

Check out our documentation to learn more.

Read more

Tim Neutkens Delba de Oliveira Tobias Koppers JJ Kasper Jimmy Lai Luba Kravchenko Agustin Falco Nabeel Sulieman
https://vercel.com/blog/how-we-optimized-package-imports-in-next-js How we optimized package imports in Next.js 2023-10-13T13:00:00.000Z

In the latest version of Next.js, we've made improvements to optimize package imports, improving both local dev performance and production cold starts, when using large icon or component libraries or other dependencies that re-export hundreds or thousands of modules.

This post explains why this change was needed, how we've iterated towards our current solution, and what performance improvements we've seen.

Read more

Shu Ding
https://vercel.com/changelog/vercel-postgres-is-now-available-for-pro-users Vercel Postgres is now generally available for Hobby and Pro users 2023-10-13T13:00:00.000Z

Vercel Postgres, our serverless SQL database, is now available for Hobby and Pro users.

During the beta period, we reduced cold start times to 100-200ms and fixed several bugs around handling connections. Usage prices have also been lowered from the beta:

  • Total storage:

    reduced 60% from $0.30/GB to $0.12/GB

  • Written data:

    reduced 4% from $0.10/GB to $0.096/GB

  • Data transfer: reduced 55% from $0.20/GB to $0.09/GB

Billing will begin on October 19th and Pro users have the following usage included:

  • 1 database then $1.00 USD per additional database

  • 100 hours of compute time per month then $0.10 USD per additional compute-hour

  • 512 MB total storage then $0.12 USD per additional GB

  • 512 MB written data per month then $0.096 USD per additional GB

  • 512 MB data transfer per month then $0.09 USD per additional GB

If you were a beta participant and want to opt out of using Vercel Postgres, you can backup your database and delete it.

Check out the documentation to learn more.

Read more

Edward Thomson Adrian Cooney Shaquil Hansford Dom Busser Fabio Benedetti
https://vercel.com/blog/teklas-ecommerce-evolution-harnessing-flexibility-with-vercel-and-medusa Tekla's ecommerce evolution: harnessing flexibility with Vercel and Medusa 2023-10-11T13:00:00.000Z

With Vercel and Medusa at the helm of their frontend stack, Copenhagen-based bedding brand Tekla can handle high traffic while providing fast, personalized digital experiences to their customers.

Agilo, a digital design and development agency, wants to provide the best solutions possible for their clients. When the ecommerce brand Tekla turned to the agency for additional development support, Agilo came with a plan. By upgrading Tekla’s composable setup, the agency provided Tekla with enough speed and reliability to handle their growing traffic volume and deliver personalized digital experiences.

Read more

Alli Pope
https://vercel.com/blog/announcing-v0-generative-ui Announcing v0: Generative UI 2023-10-11T13:00:00.000Z

A few weeks ago, we introduced v0: a product that makes website creation as simple as describing your ideas. We call it Generative UI—combining the best practices of frontend development with the potential of generative AI.

The interest in v0 has been incredible, with 100,000 people registering for the waitlist in just three weeks. Today, we’re transitioning v0 from Alpha to Beta, rolling out access to 5,000 additional users, and introducing subscription plans for those who want to unlock the full v0 feature set.

Read more

Jared Palmer
https://vercel.com/changelog/strengthening-vercels-infrastructure-against-http-2-rapid-reset-attacks Strengthening Vercel's Infrastructure against HTTP/2 Rapid Reset Attacks 2023-10-11T13:00:00.000Z

At Vercel, we consistently monitor and update our security protocols to address emerging threats. A new vulnerability, known as the HTTP/2 Rapid Reset Attack (CVE-2023-44487), has the potential to disrupt HTTP/2-enabled web servers.

Rapid Reset is a vulnerability possible in the HTTP/2 protocol involving quickly initiating and canceling streams. It can be used to launch large denial-of-service attacks, negatively affecting performance and availability.

We've taken proactive steps to refine our infrastructure and strengthen our defenses. Our improved system can now more efficiently handle the HTTP/2 Rapid Reset Attack.

An essential component of our defense strategy is inline network traffic monitoring, where we identify malicious TCP connections and terminate them. Limiting abuse over a single connection has enabled Vercel to protect against HTTP/2 Rapid Reset Attack.

Combining our existing system with new improvements, all applications on Vercel are even further resistant to the HTTP/2 Rapid Reset Attack.

We want to assure you that your web assets are protected against the HTTP/2 Rapid Reset Attack. We're committed to consistently improving our security measures in response to new threats to ensure safety and reliability for all users.

Read more

Abdel Sabbah Casey Gowrie Joe Haddad
https://vercel.com/blog/images-on-the-web Images on the web 2023-10-10T13:00:00.000Z

Images are the most popular resource type on the web, yet understanding the nuances of various image formats and their technical attributes can be challenging.

Read more

Lydia Hallie
https://vercel.com/changelog/comments-now-available-in-vercels-slack-integration Comments now available in Vercel's Slack integration 2023-10-06T13:00:00.000Z

Vercel's Slack integration now includes Comments. Once the Vercel Slack app is installed, you can subscribe to messages in a channel about all Comments made on your team's deployments or Comments made on specific projects.

If you configured the Slack app before October 4th, 2023, the updated app requires new permissions. You must reconfigure the app to subscribe to new Comment threads and link new channels.

You will get a Slack message for each new Comment, and replies in Slack will automatically appear in the Comment thread on your deployment. You can also log in to the integration with your Vercel account to get DMs about comments relevant to you.

Install the integration in our marketplace, or visit the documentation to learn more.

Read more

George Karagkiaouris Christopher Skillicorn Shaquil Hansford Sam Saliba
https://vercel.com/changelog/track-server-side-custom-events-with-vercel-web-analytics Track server-side custom events with Vercel Web Analytics 2023-10-06T13:00:00.000Z

Vercel Web Analytics now supports tracking custom events on the server-side, in addition to existing support for client-side tracking.

Events can now be tracked from Route Handlers, API Routes, and Server Actions when using Next.js (or other frameworks like SvelteKit and Nuxt) through the track function.

Custom event tracking is available for Pro and Enterprise users.

Check out the documentation to learn more.

Read more

Chris Widmaier Tobias Lins
https://vercel.com/blog/introducing-spend-management-realtime-usage-alerts-sms-notifications Introducing Spend Management 2023-10-05T13:00:00.000Z

Serverless infrastructure can instantly and infinitely scale. While powerful, this has had tradeoffs. An unforced error or traffic spike could cause an unexpected bill.

Read more

Lee Robinson
https://vercel.com/changelog/spend-management-now-available-for-pro-users Spend Management now available for Pro users 2023-10-05T13:00:00.000Z

Today, we'll begin rolling out Spend management on the Pro plan for the Billing and Owner roles. You can recieve notifications and trigger webhooks when you pass a given spend amount on metered resources like Functions. The actions you can take are:

When your spending approaches or exceeds the set limit, you'll receive realtime notifications to help you stay in control. This includes Web and Email notifications at 50%, 75%, and 100%. Additionally, you can also receive SMS notifications when your spending reaches 100%.

Setting a spend amount does not mean your project with pause automatically. To programmatically take action based on your set amount, you can use a webhook to pause your project, or even put your site into maintenance mode.

Check out our documentation to learn more.

Read more

Chloe Tedder Cindy Wu Saranya Desetty Christopher Skillicorn Amy Burns Marc Brakken
https://vercel.com/blog/understanding-the-samesite-cookie-attribute Understanding the SameSite cookie attribute 2023-10-02T13:00:00.000Z

Navigating the web safely while ensuring user privacy is a top priority. When working with cookies, it’s important to ensure they are secure and serve their intended purpose without compromising user privacy.

One key attribute to consider is SameSite, which dictates when and how cookies are sent in cross-site requests.

Read more

Lydia Hallie
https://vercel.com/changelog/command-menu-now-available-in-deployments Command Menu now available in Deployments 2023-10-02T13:00:00.000Z

You can now use ⌘K (or Ctrl+K on Windows) to open the Command Menu on any deployment where the Vercel Toolbar is enabled, including production and localhost. You can use Cmd + Shift + K if you're viewing a deployment of a website that has its own ⌘K menu.

Users can now navigate between a deployment and other Vercel pages relevant to the project directly through the menu.

Check out our documentation to learn more.

Read more

wits Shaquil Hansford Gary Borton Christopher Skillicorn Sam Saliba
https://vercel.com/blog/understanding-csrf-attacks Understanding CSRF attacks 2023-09-29T13:00:00.000Z

Cross-Site Request Forgery (CSRF) is an attack that tricks users into executing unwanted actions on a web application where they're currently authenticated.

Read more

Lydia Hallie
https://vercel.com/changelog/exceeding-included-image-optimization-usage-no-longer-pauses-deployments Exceeding included Image Optimization usage no longer pauses deployments 2023-09-28T13:00:00.000Z

Based on your feedback, rather than pausing a deployment when exceeding the included Image Optimization usage, Vercel will now only pause optimization for additional source images.

  • Your existing images and all traffic will not be affected

  • Additional source images will throw a 402 status code when optimizing, triggering the onError callback (if provided) and showing the alt text instead of the image 

Check out our documentation to learn more.

Read more

Steven Salat
https://vercel.com/changelog/comments-are-now-visible-in-your-dashboard-notifications Comments are now visible in your dashboard notifications 2023-09-28T13:00:00.000Z

You can now receive and view Comment notifications in the Vercel dashboard.

Notifications for new Comments are shown in the dashboard with a counter on the bell icon. You can quickly resolve Comments there or filter by specific pages, branches, or authors.

Check out our documentation to learn more.

Read more

Andrew Gadzik Shaquil Hansford Gary Borton Christopher Skillicorn Mariana Castilho Sam Saliba George Karagkiaouris
https://vercel.com/changelog/hints-now-available-when-creating-environment-variables Hints now available when creating Environment Variables 2023-09-27T13:00:00.000Z

When creating and editing Environment Variables on Vercel, you can now see hints that will warn you of potential typos in the name. This includes issues like:

  • New line characters

  • Tabs

  • Spaces

  • New line

  • Carriage return

  • Vertical tab

  • Form feed

  • Non-breaking space

  • Non-breaking space (fixed width)

  • Zero-width space

  • Zero-width non-joiner

  • Zero-width joiner

  • Line separator

  • Paragraph separator

  • Narrow non-breaking space

  • Medium mathematical space

  • Ideographic space

  • Zero-width no-break space

Learn more about Environment Variables.

Read more

John Phamous Christopher Skillicorn Sam Becker
https://vercel.com/blog/first-input-delay-vs-interaction-to-next-paint First Input Delay (FID) vs. Interaction to Next Paint (INP) 2023-09-26T13:00:00.000Z

As of March 2024, Interaction to Next Paint (INP) will replace the First Input Delay (FID) as a new Core Web Vital.

Read more

Lydia Hallie
https://vercel.com/blog/optimizing-web-fonts Optimizing web fonts 2023-09-26T13:00:00.000Z

Web fonts are vital to branding and user experience. However, the inconsistent rendering of these fonts while they're being fetched from the server can cause unintended shifts in layout.

Read more

Lydia Hallie
https://vercel.com/changelog/vercel-toolbar-now-available-to-use-collaboration-features-in-production @vercel/toolbar available to use collaboration features in production 2023-09-22T13:00:00.000Z

Comments and other collaboration features are available in all Preview Deployments on Vercel. Now, you can enable them in Production Deployments and localhost by injecting the Vercel toolbar on any site with our @vercel/toolbar package.

By using the @vercel/toolbar npm package you and your team can leave feedback with Comments, take advantage of Draft Mode to view unpublished CMS content, or use Visual Editing on your production application.

This package is available to users on all plans and is our first step in bringing the Vercel Toolbar into your production sites.

Check out the documentation to learn more.

Read more

George Karagkiaouris Shaquil Hansford
https://vercel.com/blog/how-whop-improved-their-real-experience-score-by-200-with-the-next-js-app How Whop improved their Real Experience Score by 200% with the Next.js App Router 2023-09-21T13:00:00.000Z

Whop, an online marketplace for digital products, recognized the importance of having a seamless developer and end-user experience and aimed to transform their platform with a modern tech stack.

To achieve this, they focused on migrating from Ruby on Rails to Next.js, quickly followed by the incremental adoption of App Router for even better page speed and developer experience.

Read more

Alli Pope
https://vercel.com/blog/why-vercel-and-next-js-are-the-perfect-fit-for-this-global-fashion-media Why Vercel and Next.js are the perfect fit for this global fashion media group 2023-09-21T13:00:00.000Z

L’Officiel Inc. is a century-old fashion media group representing 10 renowned publications in more than 80 countries. Despite its global reach, the brand has a small team that maintains its 30 web properties, while also developing new features and working on special projects sold to clients. 

Read more

Alli Pope
https://vercel.com/changelog/serverless-functions-can-now-run-up-to-5-minutes Serverless Functions can now run up to 5 minutes 2023-09-20T13:00:00.000Z

Based on your feedback, we’re improving Serverless Functions as follows:

  • Pro customers can now run longer functions for up to 5 minutes.

  • Pro customers default function timeout will be reduced to 15 seconds on October 1st.

These changes help prevent unintentional function usage, unless explicitly opted into the longer function duration.

Beginning October 1st, all new projects will receive a default timeout of 15 seconds. In addition, any projects that have not had functions run for more than 15 seconds will have their default timeouts reduced to 15 seconds.

To avoid unexpected timeouts, any projects that have had functions running for longer than 15 seconds (less than 1% of traffic) will not have their defaults changed.

Existing defaults still apply for Hobby and Enterprise customers.

Check out our documentation to learn more.

Read more

Edward Thomson Florentin Eckl Amy Burns Alli Pope
https://vercel.com/changelog/support-for-remix-v2 Support for Remix v2 2023-09-19T13:00:00.000Z

Vercel now supports Remix v2. Deploy your Remix application on Vercel with advanced support for:

  • Streaming SSR: Dynamically stream content with both Node.js and Edge runtimes

  • API Routes: Easily build your serverless API with Remix and a route loader

  • Advanced Caching: Use powerful caching headers like stale-while-revalidate

  • Data Mutations: Run actions inside Serverless and Edge Functions

Deploy our Remix template to get started.

Read more

Nathan Rajlich
https://vercel.com/changelog/create-search-presets-for-your-runtime-logs Create search presets for your Runtime Logs 2023-09-18T13:00:00.000Z

You can now create and save presets of your commonly used filters for all of your Runtime Logs searches. You can save presets to either My Project Presets (related to your personal account) or Team Project Presets. Personal presets can only be viewed and edited by the user who created them.

This feature is available to users on all plans.

Check out our documentation to learn more.

Read more

Kevin Rupert Uche Nkadi Julia Shi
https://vercel.com/changelog/vercel-blob-is-now-in-public-beta-for-hobby-and-pro-customers Vercel Blob is now in public beta for Hobby and Pro customers 2023-09-18T13:00:00.000Z

Vercel Blob is a fast, easy, and efficient solution for storing files in the cloud, perfect for large files, like videos.

The Vercel Blob works with any framework. It can be securely called from Edge and Serverless Functions and returns an immutable URL that can be exposed to visitors or put into storage.

This feature is now in public beta and available for all Hobby and Pro customers.

Check out our documentation to learn more.

Read more

Edward Thomson Vincent Voyer Fabio Benedetti
https://vercel.com/changelog/new-project-access-controls-for-enterprise-customers New project access controls for Enterprise customers 2023-09-18T13:00:00.000Z

Today, we’re introducing more ways for Enterprise customers to have control over which members of their Vercel team have access to certain projects for increased security.

The new team level role Contributor, has restricted access to make changes at the project level, and only has access to the projects to which they’ve been assigned. This role can be useful for agencies and contractors working on a limited project basis.

Additionally, we’ve introduced new Project level roles: Project Admin, Project Developer, and Project Viewer. Project level roles are assigned to a team member on a project level and are only valid for the project they are assigned to.

Check out the documentation to learn more.

Read more

Ana Jovanova Javier Bórquez Miroslav Simulcik Rich Haines Natalie Altman Hector Simpson Enric Pallerols Balazs Varga Marc Greenstock
https://vercel.com/blog/vercel-iso-27001-security Vercel achieves ISO 27001:2013 certification to further strengthen commitment to security 2023-09-12T13:00:00.000Z

Today, we’re excited to announce our achievement of the ISO 27001:2013 (ISO 27001) certification. This further strengthens our commitments to security in Vercel’s Frontend Cloud.

Read more

Ty Sbano
https://vercel.com/blog/improving-developer-workflow How to create an optimal developer workflow 2023-09-12T13:00:00.000Z

Software engineers strive to build experiences that delight and engage customers, but there are plenty of workflow roadblocks that can stand in the way of shipping great software quickly.

In this blog, we'll break down the costs of poor developer experience and share some tactics that can help promote a healthy development workflow. 

Read more

Lindsey Simon Mark Knichel
https://vercel.com/changelog/improved-error-messages-for-failed-or-canceled-builds Improved error messages for failed or canceled builds 2023-09-12T13:00:00.000Z

Failed or canceled builds now have better feedback clearly displayed on the Vercel dashboard in the deployment details page.

The following build failures now have more helpful error messages:

  • An invalid vercel.json configuration

  • Canceled builds due to the ignore build step

  • A newer commit in the branch triggering a more up-to-date deployment

Check out our documentation to learn more.

Read more

Felix Haus Peter van der Zee
https://vercel.com/changelog/vercel-has-now-achieved-the-iso-27001-2013-certification Vercel has now achieved the ISO 27001:2013 certification 2023-09-12T13:00:00.000Z

We have achieved the ISO 27001:2013 certification to further strengthen our commitment to security at Vercel.

  • We're committed to keeping your data safe: ISO 27001 provides a framework for establishing, implementing, operating, monitoring, reviewing, and maintaining information security controls.

  • You can verify Vercel’s security practices: You have additional validation to assess Vercel with this globally recognized certification, along with our SOC 2 Type 2 attestation.

  • We're committed to compliance: As part of our adherence to ISO 27001, we’ll continue with ongoing surveillance audits.

Learn more about security at Vercel.

Read more

Ty Sbano Kacee Taylor Aaron Brown
https://vercel.com/blog/hydrow How the at-home workout sensation, Hydrow, cut authoring times from weeks to minutes 2023-09-11T13:00:00.000Z

In 2022, Hydrow, celebrated for its personal rowing machines and immersive workout content, was in search of a seamless digital experience for its users.

Shopify Liquid and WordPress offer robust capabilities, but Hydrow required more custom, dynamic content capabilities. 

Read more

Alice Alexandra Moore Alli Pope
https://vercel.com/changelog/bun-install-is-now-supported-with-zero-configuration Bun install is now supported with zero configuration 2023-09-11T13:00:00.000Z

Projects using Bun as a package manager can now be deployed to Vercel with zero configuration.

Like yarn, npm, and pnpm, Bun acts as a package manager focused on saving disk space and boosting installation speed. Starting today, Projects that contain a bun.lockb file will automatically run bun install as the default Install Command using bun@1.

This change impacts the build phase but not runtime. Therefore, Serverless Functions will not use the Bun runtime yet.

Check out the documentation to learn more.

Read more

Steven Salat Sean Massa Ethan Arrowood Chris Barber Trek Glowacki Nathan Rajlich Mariano Cocirio
https://vercel.com/blog/how-we-continued-porting-turborepo-to-rust Using Zig in our incremental Turborepo migration from Go to Rust 2023-09-08T13:00:00.000Z

We’ve been porting Turborepo, the high-performance build system for JavaScript and TypeScript, from Go to Rust. We talked about how we started the porting process, so now let’s talk about how we began porting our two main commands: run and prune.

Read more

Nicholas Yang
https://vercel.com/blog/incremental-migrations Why all application migrations should be incremental 2023-08-30T13:00:00.000Z

In 2023, there are few software projects that are true greenfield endeavors. Instead, migrations of existing systems are the new normal. Migrations done wrong can introduce substantial business and timeline risks into any software project. An incremental migration strategy can minimize those risks while pulling forward validation of business impact.

Vercel’s product is designed to support incremental migration from the ground up. In this post you'll get a high-level overview of incremental migration strategies and considerations.

Read more

Malte Ubl
https://vercel.com/changelog/hypertune-integration-available-for-low-latency-experimentation Hypertune integration available for low latency experimentation 2023-08-24T13:00:00.000Z

You can now use the Hypertune integration to initialize the Hypertune SDK from Vercel Edge Config with zero latency. This allows you to access your feature flags and run A/B tests with no performance impact to your applications.

This integration is available for users on all plans.

Check out the integration to get started.

Read more

Dominik Ferber
https://vercel.com/blog/deploying-at-the-speed-of-on-demand-streaming Deploying at the speed of on-demand streaming 2023-08-23T13:00:00.000Z

With many other streaming services to choose from, standing out in the crowd—or on users’ screens—requires speed and innovation. German-based platform Joyn knows the challenge well and relies on Vercel to automate and accelerate its development workflow.

Read more

Alli Pope
https://vercel.com/blog/vercel-ai-accelerator-demo-day Vercel AI Accelerator Demo Day 2023-08-23T13:00:00.000Z

Earlier this week, we held Demo Day for the Vercel AI Accelerator program. 28 talented AI teams each had 3 minutes to show off the impressive demos they built over the program's 6 weeks.

Watch the demo day recording.

Read more

Hassan El Mghari Lee Robinson
https://vercel.com/changelog/improved-user-experience-for-vercel-documentation Improved user experience for Vercel documentation 2023-08-23T13:00:00.000Z

We've redesigned and improved the Vercel documentation with:

  • Updated navigation: Navigation is now separated by product categories. You can quickly view all products in a category by hovering over the navigation item.

  • Customization: You can use the global frameworks toggle to show code examples with your favorite framework.

  • All Products page: You can now see all Vercel products on a single documentation page.

  • Improved mobile design: The new mobile-friendly navigation enables you to discover and read easily when you’re on the go.

Get started with the Vercel documentation today.

Read more

Ismael Rumzan Amy Burns Glenn Hitchcock Rich Haines Meg Bird
https://vercel.com/blog/how-sonos-amplified-their-devex Developing at the speed of sound: How Sonos amplified their DevEx 2023-08-17T13:00:00.000Z

As the world’s leading sound experience company with a 20-year legacy of innovation and over 3,000 patents, Sonos understands the importance of a robust digital presence that reflects the brand’s cutting-edge ethos. 

However, for years, the high costs and slow builds of their web infrastructure hindered developers from making critical site updates. The solution: a transition to a headless, composable architecture using Vercel and Next.js.

The switch resulted in a remarkable 75% improvement in build times, empowering developers to innovate with ease and confidence.

Read more

Greta Workman
https://vercel.com/blog/konobos-empowers-industry-giant-to-deploy-50-faster Konabos empowers an industry giant to deploy 50% faster with a composable stack 2023-08-15T13:00:00.000Z

When American Bath Group realized their team’s productivity was being interrupted by inefficient collaboration, they turned to the full-service digital agency Konabos for help. Konabos supported American Bath Group in moving away from their monolithic setup—the cause of their lagging dev velocity—in favor of a composable stack composed of Vercel, Next.js, and Kontent.AI. Now with Vercel’s streamlined deployments and infrastructure, American Bath Group can deploy 50% faster, shorten review cycles, and enjoy a better developer experience.

Read more

Alli Pope
https://vercel.com/changelog/hydrogen-2-remix-vercel Hydrogen 2 projects can now be deployed with zero configuration 2023-08-14T13:00:00.000Z

Vercel now supports and automatically optimizes your Hydrogen 2 projects as of Vercel CLI v31.2.3. When you import a new project, Vercel will detect Hydrogen and configure the right settings for optimal performance — including using Vercel Edge Functions for server-rendering pages.

Deploy the Hydrogen template or run the vercel init hydrogen-2 command in your terminal to get started.

Read more

Nathan Rajlich
https://vercel.com/changelog/prioritize-production-deployments-to-build-before-any-queued-preview Prioritize Production deployments to build before any queued Preview 2023-08-14T13:00:00.000Z

Enterprise customers are now able to configure builds of their Production deployments to begin before any builds of their Preview deployments.

With this setting configured, any Production Deployment changes will skip the line of queued Preview Deployments, so they're ready as soon as possible.

You can also increase your build concurrency limits to give you the ability to kick off multiple builds at once.

Read more in our documentation.

Read more

Felix Haus Mariano Cocirio
https://vercel.com/changelog/commenting-on-dns-records-is-now-available Commenting on DNS records is now available 2023-08-11T13:00:00.000Z

You can now leave comments when creating a new DNS record in Vercel. You can also edit comments on existing DNS records.

New records created by an Email Preset will include a comment explaining why a record was added.

Check out our documentation to learn more about DNS records.

Read more

John Phamous Christopher Skillicorn
https://vercel.com/blog/algolia-cuts-build-times-in-half-with-isr-using-next-js-on-vercel Algolia cuts build times in half with ISR using Next.js on Vercel 2023-08-09T13:00:00.000Z

Algolia helps users across industries create dynamic digital experiences through search and discovery. With a constant addition of new features and pages on their website and blog, their technical team of five needed to improve their development cycle. By adopting Next.js on Vercel, Algolia reduced build times by 50% while making it easier to collaborate across teams.

Read more

Greta Workman
https://vercel.com/changelog/support-center-on-pro Support Center is now available for Pro customers 2023-08-08T13:00:00.000Z

Pro customers can now create and view support cases on the Vercel dashboard.

The Vercel Support Center allows you to create support cases, view their statuses, and receive any messages from our Customer Success team. All cases are securely stored to safeguard your data.

Check out the documentation on Support Center to learn more.

Read more

Baruch Hen Amy Burns Cody Brouwers Brody McKee Nanda Syahrasyad Pearl Latteier Okiki Ojo Sarvani Pandyaram Holden Altaffer
https://vercel.com/blog/introducing-next-js-commerce-2-0 Introducing Next.js Commerce 2.0 2023-08-07T13:00:00.000Z

Today, we’re excited to introduce Next.js Commerce 2.0.

Read more

Michael Novotny Lee Robinson
https://vercel.com/blog/understanding-react-server-components Understanding React Server Components 2023-08-01T13:00:00.000Z

React Server Components (RSCs) extend React beyond a pure rendering library, incorporating data fetching and remote client-server communication into the framework itself.

Below, we’ll walk you through why RSCs needed to be created, what they do best, and when to use them. We'll also touch on how Next.js eases and enhances the RSC implementation details through the App Router.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/share-your-preview-urls-immediately Share your Preview URLs immediately 2023-08-01T13:00:00.000Z

Preview URLs are now shareable at the beginning of the build process, instead of after the build finishes.

The Preview URL in the video above is surfaced both in the dashboard and in the Vercel Bot comments attached to the pull request. Customers on all plans will now see this functionality automatically on all new builds.

Check out the documentation to learn more.

Read more

Felix Haus Mariano Cocirio Sam Becker John Phamous
https://vercel.com/blog/washington-post-next.js-vercel-engineering-at-the-speed-of-breaking-news Engineering a site at the speed of breaking news 2023-07-27T13:00:00.000Z

Many Vercel and Next.js users deal with large swaths of data. But few wrangle data in the way The Washington Post Elections Engineering team does.

Knowing their platform must be fast and visually compelling—all while handling constant updates from thousands of federal, state, and local elections—The Post moved to Next.js and Vercel for the 2022 US midterm elections.

Read more

Greta Workman
https://vercel.com/changelog/improved-performance-for-vercel-postgres-from-edge-functions Improved performance for Vercel Postgres from Edge Functions 2023-07-27T13:00:00.000Z

The Vercel Postgres SDK has significantly improved performance for Postgres queries from Vercel Edge Functions.

The @vercel/postgres package has been updated to use the latest version of Neon’s Serverless driver which adds support for SQL-over-HTTP when you use the sql template literal tag. Simple queries that do not require transactions now complete in ~10ms—up to a 40% speed increase.
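To give a sense of why a tagged template enables SQL-over-HTTP, a `sql` tag can split a query into its literal text and its interpolated values, so the values can travel as bound parameters in a single request. The following is a hypothetical mini version of such a tag, not the actual @vercel/postgres implementation:

```typescript
// Hypothetical sketch of a `sql` tagged template. It separates the query's
// literal text from the interpolated values so they can be sent as bound
// parameters in one request body.
function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  // Join the literal parts, replacing each interpolation with $1, $2, ...
  const text = strings.reduce((acc, part, i) => acc + `$${i}` + part);
  return { text, values };
}

const userId = 42;
const query = sql`SELECT * FROM users WHERE id = ${userId}`;
// query.text   → 'SELECT * FROM users WHERE id = $1'
// query.values → [42]
```

Because the values never get spliced into the query string, the server can both parameterize safely and execute the whole thing in one round trip.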

You do not need to make any changes to your queries to see these improvements; simply update to the latest version of @vercel/postgres.

Read more

Vincent Voyer Edward Thomson
https://vercel.com/changelog/disable-git-integration-comments Disable Git Integration comments from the dashboard 2023-07-26T13:00:00.000Z

We've added new options to the "Connected Git Repository" settings in the Vercel dashboard. It's now possible to configure whether the Vercel bot comments on:

  • Pull Requests

  • Production Commits

These settings are available for all connected repositories, not just GitHub repositories.

Previously, the github.silent setting in vercel.json could disable comments, but it did not offer this more granular control. We will be deprecating that option on Monday, September 25, 2023. There is no action required at this time to prepare for the deprecation. Until that date, if you set that option in your vercel.json file, we will continue to read it and update the configuration in the dashboard accordingly.
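For reference while migrating to the dashboard settings, the github.silent option named above is set in vercel.json like this:

```json
{
  "github": {
    "silent": true
  }
}
```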

Read more

Max Leiter Kevin Rupert wits
https://vercel.com/changelog/split-integration-for-low-latency-experimentation Split integration for low latency experimentation 2023-07-26T13:00:00.000Z

The Split integration syncs your existing Split feature flags into Vercel Edge Config to help you safely launch releases and experiments.

With near-zero latency storage provided by Vercel Edge Config, your feature flags are immediately available to SDKs within the Vercel network. This improves performance and load experience when deploying features and experiments, all while Split keeps your data up to date.

This integration is available to users on all plans.

Check out the integration to get started.

Read more

Dominik Ferber
https://vercel.com/blog/introducing-react-tweet Introducing React Tweet 2023-07-25T13:00:00.000Z

Introducing react-tweet – embed tweets into any React application with a single line of code, without sacrificing performance.

Read more

Luis Alvarez Lee Robinson Steven Tey
https://vercel.com/blog/examine How Vercel helped this popular health database increase free trials by 284% 2023-07-25T13:00:00.000Z

Examine is the Web’s largest database of nutrition and supplement research—empowering their users with scientific data to inform healthier lives. 

Prior to adopting Vercel’s Frontend Cloud, their five-person dev team was struggling with a pile of tech debt, brought on by their monolithic architecture setup.

Read more

Kelsey Dillon
https://vercel.com/changelog/improved-dashboard-navigation Improved dashboard navigation 2023-07-25T13:00:00.000Z

The dashboard navigation has received a visual update. You can now see the project icon in the navigation, and to save space, the mobile version shows only the name of the scope you are currently in.

Check out the documentation to learn more.

Read more

Glenn Hitchcock Emil Kowalski
https://vercel.com/blog/how-turborepo-is-porting-from-go-to-rust How Turborepo is porting from Go to Rust 2023-07-21T13:00:00.000Z

In a previous blog post, we talked about why we are porting Turborepo, the high-performance build system for JavaScript and TypeScript, from Go to Rust. Now, let's talk about how.

Today, our porting effort is in full swing, moving more and more code to Rust. But when we were starting out, we had to make sure that porting was feasible for us to accomplish. A migration from one language to another is no small task and there's a lot of research to do up front to ensure that the end goal is attainable.

Here’s how we started the process, validated our current porting strategy, and made the call to port Turborepo to Rust.

Read more

Nicholas Yang Anthony Shew
https://vercel.com/blog/how-react-18-improves-application-performance How React 18 Improves Application Performance 2023-07-19T13:00:00.000Z

React 18 has introduced concurrent features that fundamentally change the way React applications can be rendered. We'll explore how these latest features impact and improve your application's performance.

Read more

Lydia Hallie
https://vercel.com/blog/iterating-from-design-to-deploy Iterating from design to deploy: the shape of future builders 2023-07-13T13:00:00.000Z

In a world of accelerating digital innovation, we need tools that transform the web development landscape. In his recent Figma Config keynote, Guillermo Rauch spoke about how we at Vercel enable builders—non-developers included—to tighten the cycle of design and deploy.

Below, we’ll dive behind the scenes of the talk and give you tangible ways to try out Vercel’s Frontend Cloud.

Read more

Alasdair Monk Alice Alexandra Moore
https://vercel.com/blog/ai-accelerator-participants Meet the Vercel AI Accelerator Participants 2023-07-12T13:00:00.000Z

Today, we’re announcing the participants of Vercel’s AI Accelerator—a program for the brightest AI builders and early-stage startups.

We're thrilled to include both prominent builders and rising startups solving interesting or impactful problems, like using AI for cancer detection or transforming how academic research is made available.

We received over 1500 applications from talented startups and individuals and accepted 40, which is less than 3% of applications. The 40 accepted participants are presented below.

Read more

Hassan El Mghari Lee Robinson
https://vercel.com/changelog/billing-role-on-pro Pro teams now have an included team seat for Billing 2023-07-06T13:00:00.000Z

Pro teams can now assign the Billing Role to a single user.

The Billing Role allows that user to view invoices and edit payment settings, and provides read-only access to all projects on a team. Pro teams can add one free team seat with the Billing Role. Enterprise customers can add multiple billing team seats.

Check out the documentation to learn more.

Read more

Ana Jovanova
https://vercel.com/blog/platforms-starter-kit Introducing the Vercel Platforms Starter Kit 2023-07-05T13:00:00.000Z

Today, we are excited to launch the all-new Vercel Platforms Starter Kit — a full-stack Next.js template for building multi-tenant applications with custom domains, built with App Router, Vercel Postgres, and the Vercel Domains API.

Read more

Steven Tey
https://vercel.com/changelog/improved-experience-for-configuring-ignored-builds Improved experience for configuring ignored builds 2023-07-05T13:00:00.000Z

You no longer have to write your own commands when configuring your project's Ignored Build Step. We've looked at the most commonly used scenarios to create presets for an easier experience:

  • Choose a preset to use common configurations of Vercel customers

  • Select a Node or Bash script from your repository

  • Write an arbitrary Bash script using the "Custom" option
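As one concrete illustration of the "Custom" option, a common pattern is to skip builds when a particular directory has not changed, using vercel.json's ignoreCommand property (the path here is illustrative):

```json
{
  "ignoreCommand": "git diff --quiet HEAD^ HEAD ./apps/web"
}
```

Under Vercel's convention, an exit code of 0 (no relevant changes) cancels the build, while a non-zero exit code lets it continue; see the Ignored Build Step documentation for the exact behavior.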

Check the documentation to learn more about Ignored Build Step.

Read more

John Phamous Sam Becker Peter van der Zee
https://vercel.com/blog/edge-config-and-launch-darkly Expanding the experimentation ecosystem with Edge Config and LaunchDarkly 2023-06-27T13:00:00.000Z

We're excited to announce a new LaunchDarkly integration to bring low latency, global feature flags to your favorite frontend framework.

Feature flags help your team safely release new code and experiment with changes. Vercel Edge Config helps you instantly read configuration data globally, making it a perfect match for feature flag and experimentation data.

Read more

Dominik Ferber Alli Pope
https://vercel.com/blog/incrementally-adopting-next-js-at-one-of-europes-fastest-growing-brands Incrementally adopting Next.js at one of Europe's fastest growing brands 2023-06-23T13:00:00.000Z

While reMarkable, pioneers of the next-generation paper tablet, can credit much of their initial success to their original website, they knew they’d need to improve key elements of their stack and workflow to reach new heights. The team opted for a composable stack—composed of Sanity, Next.js, and Vercel—to meet the needs of their developers while empowering their content creators to deliver truly delightful digital experiences.

Read more

Kiana Lewis
https://vercel.com/blog/an-introduction-to-streaming-on-the-web An Introduction to Streaming on the Web 2023-06-22T13:00:00.000Z

The ability to process data as it streams has always been a fundamental concept in computer science. JavaScript developers had access to streaming through XMLHttpRequest, but it wasn't until 2015 that it was accessible natively through the Fetch API.

Web streams provide a standardized way to continuously send or receive data asynchronously across network connections. They bring the power of streaming to the web, enabling developers to handle large data sets through "chunks", deal with congestion control (backpressure), and create highly efficient and responsive applications.

Leveraging web streams in your web apps can enhance the performance and responsiveness of your UIs. The immediate data processing allows for real-time updates and interactions, providing a seamless user experience with quicker load times, more up-to-date information, and a smoother, more interactive interface.

Due to their increasing popularity, the Web Streams API has become a cornerstone of many major web platforms, including web browsers, Node.js, and Deno. In this blog post, we’ll look at what web streams are, how they work, the advantages they bring to your website, streaming on Vercel, and tools built around web streams that we can use today.
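To make the "chunks" idea concrete, here is a minimal, self-contained sketch using the standard Web Streams API (global in modern browsers and Node.js 18+); the function names are illustrative:

```typescript
// Produce a ReadableStream that emits text chunks, then consume it
// incrementally with a reader, much as a client would while rendering.
function makeStream(chunks: string[]): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(chunk);
      controller.close(); // signal end-of-stream
    },
  });
}

async function readAll(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let result = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;  // stream closed
    result += value;  // process each chunk as it arrives
  }
  return result;
}

readAll(makeStream(['Hello', ', ', 'streams!'])).then((text) => {
  console.log(text); // "Hello, streams!"
});
```

In a real application, each `value` could be rendered or forwarded as soon as it arrives rather than accumulated, which is where the responsiveness gains come from.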

Read more

Lydia Hallie
https://vercel.com/blog/neo-financial How Neo Financial cut time spent on infrastructure admin by 50% 2023-06-22T13:00:00.000Z

Neo Financial is a next-generation banking app and Canada’s fastest-growing financial services company. They’re leveraging Vercel's frontend cloud to enhance their web development process, boost performance, and meet industry security standards—all while saving on resources.

Read more

Alli Pope
https://vercel.com/blog/enhanced-content-management-for-headless-cmses Enhanced content management for your headless CMS 2023-06-22T13:00:00.000Z

Today we’re excited to announce updates to Draft Mode, making it easier to see your latest content changes before they’re published.

Draft Mode goes hand in hand with Visual Editing, our real-time content editing feature for websites using headless Content Management Systems (CMSes). When you make changes through Visual Editing, you can guarantee that your edits will show up the next time the page is viewed in Draft Mode.

Read more

wits Steven Salat
https://vercel.com/changelog/improvements-and-fixes Improvements and fixes 2023-06-22T13:00:00.000Z
  • Draft Mode: Users on any plan can now enable Draft Mode from the Vercel toolbar. When you do so, the toolbar color changes to purple to indicate you are viewing draft content.

  • Skew Protection: You can now implement Skew Protection to eliminate version skew between web clients and servers on Next.js version 13.4.7 or newer. The Skew Protection platform primitive is available to all frameworks.

  • Storage transfers: When Hobby users upgrade to Pro, their stores will be transferred to the new team.

  • Configured Ignored Build Step script: When rebuilding or promoting a deployment in a project with an Ignored Build Step script, you can now explicitly skip that script, forcing the build to happen.

  • System environment variables: VERCEL_BRANCH_URL with the generated Git branch URL has been added to the system env vars to access a deployment’s Git branch alias from within their code.

  • Faster deployment times: Projects with Edge Functions now deploy faster: by 2 seconds on average, 9 seconds in slow cases, and up to 20 seconds in the slowest case.

  • Git metadata: You can now see Git metadata for deployments when there are unstaged changes.

  • Vercel CLI: v30.2.3 was published with updates to dependencies for Node and Remix.

Read more

Luc Leray Chris Barber Sean Massa wits Steven Salat
https://vercel.com/blog/version-skew-protection Introducing Skew Protection 2023-06-21T13:00:00.000Z

Have you ever seen a 404 for requests from old clients after a deployment? Or gotten a 500 error because the client didn’t know that a new server deployment changed an API? We're introducing a generic fix for this problem space.

Vercel customers are deploying over 6 million times per month, making their businesses more successful one commit at a time. But since the dawn of the distributed computing age, each system deployment has introduced the risk of breakage: When client and server deployments aren’t perfectly in sync, and they won’t be, then calls between them can lead to unexpected behavior.

We call this issue version skew. In the worst case, version skew can break your app, and in the best case, it leads to substantial extra engineering effort as software changes crossing system boundaries must be backward and forward-compatible.

Today, we're introducing Skew Protection for deployments, a novel mechanism to eliminate version skew between web clients and servers. This technology will substantially reduce the errors users observe as new deployments are rolled out. It will also increase developer productivity, since you no longer need to worry about backward and forward compatibility of your API changes. Skew Protection is available today for everyone in Next.js and SvelteKit, with Nuxt and Astro support coming soon.

Read more

Malte Ubl
https://vercel.com/blog/feature-complete-sveltekit New features for SvelteKit: Optimize your application with ease 2023-06-20T13:00:00.000Z

Svelte has made a name for itself in the world of web development frameworks, thanks to its unique approach of converting components into optimized JavaScript modules. This innovative way of rendering apps eliminates the overhead found in traditional frameworks, leading to more performant and efficient applications.

With the release of SvelteKit 1.0, developers can leverage the power of fullstack Svelte without worrying about breaking changes. Furthermore, SvelteKit continues to evolve, offering a robust set of features and seamless integration with various deployment environments, including Vercel.

Vercel, using framework-defined infrastructure (FDI), has embraced SvelteKit, recently adding support for per-route configuration for Serverless and Edge Functions, Incremental Static Regeneration (ISR), and easier compatibility with a range of Vercel products. In this article, we'll explore how to make your apps more performant, scalable, and user friendly.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/azure-cosmosdb-integration-now-available Azure CosmosDB integration now available 2023-06-20T13:00:00.000Z

Our integration with Azure CosmosDB is now available. With this integration you can easily create Vercel applications with a Cosmos DB database already configured, enabling developers to get the benefits of serverless architecture with a versatile and high-performance NoSQL database.

This feature is available to customers on all plans.

Install the integration or deploy a template with Azure CosmosDB.

Read more

Alex Hawley
https://vercel.com/changelog/markdown-support-for-comments-on-preview-deployments Markdown support for comments on Preview Deployments 2023-06-20T13:00:00.000Z

With the ability to comment on Preview Deployments, anyone added to your projects can comment directly on copy, components, and interactions. Now with support for markdown, you can format your comments with lists, bold text, links, quotes, and more.

You can trigger these by using inline characters:

  • * for bold (or Ctrl/Cmd+B)

  • _ for italics (or Ctrl/Cmd+I)

  • ~ for strikethrough (or Ctrl/Cmd+Shift+X)

  • ` for code (or Ctrl/Cmd+E)

  • > and space to start a quote

  • - or * or 1. plus space to start a list

  • Tab or Shift+Tab to change indentation

The toolbar also has buttons for the basic inline styles, and clicking a link opens a new popup for editing its text and URL.

Check out the documentation to learn more.

Read more

George Karagkiaouris Glenn Hitchcock
https://vercel.com/blog/from-idea-to-acqusition-how-potion-shipped-4k-sites-on-vercel From idea to acquisition: How Potion.so shipped 4,000+ sites on Vercel 2023-06-15T13:00:00.000Z

Potion.so is a Notion-to-website builder powered by Next.js and Vercel. Founder and sole employee Noah Bragg leverages the Platforms Starter Kit and Vercel's Edge Network to serve 4,000 custom domains and over 100,000 pageviews.

In June 2023, Potion was acquired for $300,000.

Read more

Steven Tey
https://vercel.com/blog/introducing-the-vercel-ai-sdk Introducing the Vercel AI SDK 2023-06-15T13:00:00.000Z

Over the past 6 months, AI companies like Scale, Jasper, Perplexity, Runway, Lexica, and Jenni have launched with Next.js and Vercel. Vercel helps accelerate your product development by enabling you to focus on creating value with your AI applications, rather than spending time building and maintaining infrastructure.

Today, we're launching new tools to improve the AI experience on Vercel.

  • Vercel AI SDK: Easily stream API responses from AI models

  • Chat & Prompt Playground: Explore models from OpenAI, Hugging Face, and more

The Vercel AI SDK

The Vercel AI SDK is an open-source library designed to help developers build conversational, streaming, and chat user interfaces in JavaScript and TypeScript. The SDK supports React/Next.js and Svelte/SvelteKit, with support for Nuxt/Vue coming soon.

To install the SDK, run npm install ai in your terminal.

You can also view the source code on GitHub.

Built-in LLM Adapters

Choosing the right LLM for your application is crucial to building a great experience. Each has unique tradeoffs, and can be tuned in different ways to meet your requirements.

Vercel’s AI SDK embraces interoperability, and includes first-class support for OpenAI, LangChain, and Hugging Face Inference. This means that regardless of your preferred AI model provider, you can leverage the Vercel AI SDK to create cutting-edge streaming UI experiences.

Streaming First UI Helpers

The Vercel AI SDK includes React and Svelte hooks for data fetching and rendering streaming text responses. These hooks enable real-time, dynamic data representation in your application, offering an immersive and interactive experience to your users.

Building a rich chat or completion interface now takes just a few lines of code thanks to the useChat and useCompletion hooks.

Stream Helpers and Callbacks

We've also included callbacks for storing completed streaming responses to a database within the same request. This feature allows for efficient data management and streamlines the entire process of handling streaming text responses.
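One way to picture the callback mechanism (a hypothetical sketch of the idea, not the SDK's actual internals): pass the streamed text through a TransformStream that forwards chunks to the client untouched while buffering them, then fire a callback with the full text when the stream closes:

```typescript
// Forward chunks to the consumer unchanged while accumulating them,
// invoking `onCompletion` with the full text once the stream ends.
// Names here are illustrative, not the AI SDK's real API surface.
function withCompletionCallback(
  source: ReadableStream<string>,
  onCompletion: (fullText: string) => void,
): ReadableStream<string> {
  let buffer = '';
  return source.pipeThrough(
    new TransformStream<string, string>({
      transform(chunk, controller) {
        buffer += chunk;           // remember the chunk for the callback
        controller.enqueue(chunk); // pass it through to the client
      },
      flush() {
        onCompletion(buffer); // stream finished: e.g. save to a database
      },
    }),
  );
}
```

Because the callback fires within the same request, the completed response can be persisted without a second round trip or a separate job.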

Edge & Serverless ready

Our SDK is integrated with Vercel products like Serverless and Edge Functions. You can deploy AI applications that scale instantly, stream generated responses, and are cost-effective.

With framework-defined infrastructure, you write application code in frameworks like Next.js and SvelteKit using the AI SDK, and Vercel converts this code into global application infrastructure.

Chat & Prompt Playground

In late April, we launched an interactive online prompt playground play.vercel.ai with 20 open source and cloud LLMs.

The playground provides a valuable resource for developers to compare various language model results in real-time, tweak parameters, and quickly generate Next.js, Svelte, and Node.js code.

Today, we’ve added a new chat interface to the playground so you can simultaneously compare chat models side-by-side. We’ve also added code generation support for the Vercel AI SDK. You can now go from playground to chat app in just a few clicks.

What’s Next?

We'll be adding more SDK examples in the coming weeks, as well as new templates built entirely with the AI SDK. Further, as new best practices for building AI applications emerge, we’ll lift them into the SDK based on your feedback.

Read more

Jared Palmer Shu Ding Max Leiter
https://vercel.com/changelog/vercel-kv-is-now-generally-available Vercel KV is now generally available for Hobby and Pro customers 2023-06-15T13:00:00.000Z

Vercel KV, our durable Redis database that enables you to store and retrieve JSON data, is now generally available.

This feature is available for Hobby and Pro users. Hobby users get 1 database, 30,000 requests per month, 256 MB of total storage, and 256 MB of data transfer per month. Pro users get 1 database, 150,000 requests per month, 512 MB of total storage, and 512 MB of data transfer per month. Billing will begin on June 20th.

On-demand pricing for Pro users has also been lowered, with total storage reduced 17% from $0.30/GB to $0.25/GB and data transfer reduced 50% from $0.20/GB to $0.10/GB.

To see your usage, visit the Usage page in the Dashboard. If you want to stop using Vercel KV, you can stop querying the database and delete it. If you're an Enterprise company interested in using Vercel KV, you can contact us to get started.

Check out the documentation to learn more about Vercel KV.

Read more

Adrian Cooney Fabio Benedetti Edward Thomson Dom Busser
https://vercel.com/blog/vercel-ai-accelerator Introducing Vercel's AI Accelerator 2023-06-14T13:00:00.000Z

Today, we’re announcing Vercel’s AI Accelerator – a program for the brightest AI builders and early stage startups. Over a span of 6 weeks, we aim to empower 40 of the industry's top innovators to create and develop next-generation AI apps.

Applications are open for two weeks – apply today.

Read more

Hassan El Mghari Lee Robinson
https://vercel.com/changelog/visual-editing-can-now-be-used-with-datocms Visual Editing can now be used with DatoCMS 2023-06-12T13:00:00.000Z

Visual Editing from Vercel allows you to click-to-edit content on your site, with a direct link to exactly where your content lives in your CMS.

This functionality is now available for Enterprise customers using DatoCMS as their CMS. DatoCMS is now the fourth CMS to adopt content source-mapping technology that enables Visual Editing from a headless CMS with zero code changes to your website.

Check out the documentation to learn more or contact us for access.

Read more

wits
https://vercel.com/changelog/vercel-extension-for-azure-devops-now-available Vercel extension for Azure DevOps now available 2023-06-08T13:00:00.000Z

Customers using Azure DevOps can now use our extension from the Visual Studio Marketplace to get their deployments triggered automatically whenever they make a new commit or create a new pull request. This makes Azure pipeline development much easier and creates a better integration with other commonly used Azure products, like Azure Key Vault.

The extension can create a comment on the pull requests, containing crucial information about the deployment status and the Preview URL, to help track deployments better.

This feature is available to customers on all plans.

Check out the documentation to learn more or view the extension to get started.

Read more

Ethan Arrowood Mariano Cocirio Ismael Rumzan
https://vercel.com/blog/visual-editing-meets-markdown Visual Editing meets Markdown 2023-06-06T13:00:00.000Z

We're excited to share that TinaCMS now supports Visual Editing in Vercel Preview Deployments.

The TinaCMS team is on a mission to bring visual editing to the headless CMS in a way that works for developers. So when we had the opportunity to collaborate with Vercel on this, we didn't hesitate and the results are stunning.

Read more

Scott Gallant
https://vercel.com/changelog/visual-editing-can-now-be-used-with-tinacms Visual Editing can now be used with TinaCMS 2023-06-06T13:00:00.000Z

Visual Editing allows you to click-to-edit content on your Vercel site, with a direct link to exactly where your content lives in your CMS.

This functionality is now possible for Enterprise customers using Tina as their CMS. TinaCMS is now the third CMS to adopt content source-mapping technology that enables Visual Editing from a headless CMS with zero code changes to your website.

Check out the documentation to learn more or contact us for access.

Read more

wits
https://vercel.com/blog/designing-the-vercel-virtual-product-tour Designing the Vercel virtual product tour 2023-06-02T13:00:00.000Z

If you've tried a new tech tool recently, this experience might sound familiar: you visit the website, skim the homepage content, but still struggle to understand what the tool will do for you.

The Vercel virtual product tour is a key resource for prospective teams to interactively understand what Vercel can offer. It takes the breadth of information about Vercel and breaks the product down into the most relevant parts.

First, we’ll talk about why we designed the tour the way we did. Then, for the technically curious, we’ll walk through some of the most interesting hows.

Read more

Alice Alexandra Moore Carmel Schetrit Jueun Grace Yun Yasmin Pessoa Elijah Cobb
https://vercel.com/blog/10-years-of-react Celebrating 10 Years of React 2023-05-29T13:00:00.000Z

Today marks a significant milestone for frontend development.

May 29th is the 10th anniversary of React, a project that has transformed the web industry and reshaped the way we build digital experiences.

A huge congratulations and thank you to the team at Meta, who through their stewardship and relentless innovation, have created and maintained one of the most successful open source projects of all time.

Read more

Guillermo Rauch
https://vercel.com/changelog/more-flexible-environment-variables-in-edge-functions-and-middleware More flexible Environment Variables in Edge Functions and Middleware 2023-05-24T13:00:00.000Z

You now have more flexible access and improved limits for environment variables from Edge Functions and Middleware:

  • The max environment variable size is now 64KB instead of 5KB, same as Serverless Functions.

  • Other than the reserved names, there are no additional restrictions on environment variable names.

  • Accessing process.env is no longer restricted to be statically analyzable. This means that, for example, you can now compute variable names such as process.env[`${PREFIX}_SECRET`].

Check out the documentation to learn more.

Read more

Javi Velasco Gal Schlezinger
https://vercel.com/changelog/integrate-remix-session-storage-with-your-vercel-kv-database Integrate Remix session storage with your Vercel KV database 2023-05-22T13:00:00.000Z

The release of @vercel/remix v1.16.0 introduces a new function, createKvSessionStorage(), which allows you to integrate your Remix session storage with your Vercel KV database in a few lines of code.
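
A sketch of what wiring this up might look like (cookie settings are illustrative; check the documentation for the exact options):

```typescript
// Sketch: Remix session storage backed by Vercel KV via
// @vercel/remix v1.16.0's createKvSessionStorage().
import { createKvSessionStorage } from '@vercel/remix';

export const sessionStorage = createKvSessionStorage({
  cookie: {
    name: '_session',
    secrets: [process.env.SESSION_SECRET!], // assumes SESSION_SECRET is set
    sameSite: 'lax',
  },
});

// The familiar Remix session helpers, now persisted in Vercel KV.
export const { getSession, commitSession, destroySession } = sessionStorage;
```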

Upgrade to @vercel/remix v1.16.0 to get started.

Check out the documentation to learn more about storage with Vercel KV.

Read more

Nathan Rajlich
https://vercel.com/changelog/node-js-14-and-16-are-being-deprecated Node.js 14 and 16 are being deprecated 2023-05-19T13:00:00.000Z

Vercel is announcing the deprecation of Node.js 14 and 16, which will be discontinued on August 15, 2023 and January 31, 2025, respectively. Node.js 14 reached official end of life on April 30, 2023. Node.js 16 will reach official end of life on September 11, 2023.

On August 15th 2023, Node.js 14 will be disabled in the Project Settings and existing Projects that have Node.js 14 selected will render an error whenever a new Deployment is created. The same error will show if the Node.js version was configured in the source code.

On January 31 2025, Node.js 16 will be disabled in the Project Settings and existing Projects that have Node.js 16 selected will render an error whenever a new Deployment is created. The same error will show if the Node.js version was configured in the source code.

While existing Deployments with Serverless Functions will not be affected, Vercel strongly encourages upgrading to Node.js 18 or Node.js 20 to ensure you receive security updates (using either engines in package.json or the General page in the Project Settings).
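
For example, the Node.js version can be pinned with the engines field in package.json (version range illustrative):

```json
{
  "engines": {
    "node": "18.x"
  }
}
```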

Check out the documentation as well.

Read more

Sean Massa
https://vercel.com/changelog/improved-experience-for-moving-between-your-teams-and-projects Improved experience for moving between your teams and projects 2023-05-18T13:00:00.000Z

An improved project and team switcher within Vercel is now available for all users:

  • Quickly navigate through your projects, without having to switch teams first

  • Choose favorite projects across your teams that you can access quickly

  • Switch between projects without losing context. For example, if you're viewing Web Analytics, you can change projects while remaining on the same view.

  • Keyboard friendly navigation

Read more

Timo Lins John Phamous
https://vercel.com/changelog/visual-editing-can-now-be-used-with-builder-io Visual Editing can now be used with Builder.io 2023-05-18T13:00:00.000Z

With Visual Editing you can click-to-edit content on your Vercel site, with a direct link to exactly where your content lives in your CMS.

This functionality is now possible for Enterprise customers using Builder.io as their CMS. Builder.io is now the second CMS to adopt content source-mapping technology that enables Visual Editing from a headless CMS with zero code changes to your website.

Check out the documentation to learn more or contact us for access.

Read more

wits
https://vercel.com/blog/vercel-sanity-innovating-on-a-faster-collaborative-web Vercel + Sanity: Innovating on a faster, more collaborative Web 2023-05-17T13:00:00.000Z

We’re excited to announce a strategic partnership with Sanity—the Composable Content Cloud. Sanity is the modern content management system companies use to meet the realities of ever-increasing content complexity and customer expectations.

Giving developers the tools to create at the moment of inspiration is core to both Vercel and Sanity’s DNA. From enabling Sanity Studio to be embedded in a Next.js app to the recent co-development of Visual Editing, we aim to challenge the status quo through joint innovation.

The composability at every layer of Sanity's content stack, combined with Vercel’s Frontend Cloud, results in the industry’s leading web architecture for next-generation apps—trusted by organizations like Loom, Morning Brew, and Takeda.

Future-proof with zero tradeoffs

When you go composable with Vercel and Sanity, you’re future-proofing your business without sacrificing content creativity or iteration velocity. 

From measurable metrics—like page load speed and time-to-first-byte—to intangible team collaboration improvements, your developer and user experience alike will reap dividends from this joint solution.

Cloud-native, composable Web stacks are becoming the go-to solution for innovative businesses. Decoupling the frontend and backend removes limitations set by monolithic platforms and frees up developers to build dynamic user experiences that convert, including: 

  • An optimal developer workflow for the fastest release velocity 

  • Peak user experience for performance and SEO

  • An editing environment that's decoupled from the content backend, allowing for total customization to optimize the editor experience

Take Loom: a video communication platform supporting seamless collaboration—and a joint customer of Vercel and Sanity. Their team opted to migrate from a monolithic tech stack to a composable one with Sanity, Next.js, and Vercel. 

Vercel's Frontend Cloud enables the entire Loom team with the features they each need to ship the highest quality work. Meanwhile, their marketers are self-serving content creation and updates in Sanity, the team's CMS. “The engineering team can work on a completely separate feature or implement new designs using Next.js—all while sharing our work throughout the process,” says Tatiana Mac, a senior software engineer at Loom.

Visual Editing: Vercel and Sanity’s joint solution for content collaboration

With a monolithic architecture, modifying content directly on the server is often facilitated by a “what you see is what you get” (WYSIWYG) editor. In a composable tech stack, content management is decoupled from the overall system. This separation can make it difficult for authors to quickly find and replace content on their website.

That’s why Vercel and Sanity released Visual Editing, which gives you a one-click path from your frontend to your content's home in Sanity Studio—so you can edit the source no matter where it may be reused.

This means that anyone can visually edit content and experience faster iteration—from developers to marketers.

Read more

Guillermo Rauch
https://vercel.com/changelog/automatic-recursion-protection-for-vercel-serverless-functions Automatic recursion protection for Vercel Functions 2023-05-11T13:00:00.000Z

Vercel now has automatic recursion protection for Vercel Functions.

This provides safety against your code inadvertently triggering itself repeatedly, incurring unintentional usage. Recursion protection supports using the http module or fetch in the Node.js runtime for Serverless Functions, both for user-defined code and dependencies. Requests using the bare Socket constructor are not protected against recursion.

Recursion protection is available free on all plans. It does not require any code changes in your application, but does require a new deployment. Outbound requests now include the x-vercel-id header of the request that originated the new fetch.

We’re continuing to invest in platform improvements to help developers understand and monitor usage and avoid unintended usage on the Vercel platform.

Read more

Javi Velasco Gal Schlezinger Seiya Nuta
https://vercel.com/changelog/instant-rollback-is-now-available-to-revert-deployments Instant Rollback is now generally available to revert deployments 2023-05-11T13:00:00.000Z

With Instant Rollback you can quickly revert to a previous production deployment, making it easier to fix breaking changes.

Instant Rollback is now generally available for all Vercel users. Hobby users can roll back to the previous production deployment. Pro and Enterprise users can roll back to any eligible deployment.

Check out the documentation to learn more.

Read more

Ernest Delgado Mariano Cocirio Sam Becker
https://vercel.com/blog/what-is-vercel What does Vercel do? 2023-05-10T13:00:00.000Z

Vercel builds a frontend-as-a-service product—they make it easy for engineers to deploy and run the user-facing parts of their applications.

Read more

Justin Gage
https://vercel.com/blog/nuxt-on-vercel Improved support for Nuxt on Vercel 2023-05-05T13:00:00.000Z

We've been partnering with Nuxt to further integrate the framework with Vercel and support all Vercel products. Nuxt on Vercel now supports:

Read more

Steph Dietz
https://vercel.com/blog/authentication-for-the-frontend-cloud Authentication for the frontend cloud 2023-05-05T13:00:00.000Z

We’re in the midst of the next big platform shift. Last generation we moved from server rooms to the cloud, and today we’re moving from the traditional, backend-oriented cloud to a new frontend cloud.

The frontend cloud is characterized by performance: It enables both faster application development and faster end-user interactions. Each element is critical to success in today’s ultra-competitive software market.

At Clerk, we build authentication for the frontend cloud. We saw the need arise as frameworks and hosts tailored to the frontend cloud grew in popularity, especially Next.js and Vercel. Legacy authentication tools were not built frontend-first, and their technical architecture usually undermines the goal of speeding up end-user interactions, since they’re slow at the edge.

Read more

Colin Sidoti
https://vercel.com/changelog/introducing-vercel-data-cache Introducing the Vercel Data Cache: Optimized caching for React Server Components 2023-05-04T13:00:00.000Z

Vercel Data Cache is now available to give you framework-defined caching and propagation infrastructure to handle responses from React Server Components.

Data Cache is a globally distributed, ephemeral cache accessible from both serverless and edge runtimes, allowing you to cache data granularly in the region in which your function executes, with different treatments depending on the type of response:

  • Dynamic data is re-fetched with every execution

  • Static data is cached and revalidated either by time-based or on-demand revalidation

This feature is currently supported for the Next.js App Router and is available for users on all plans.

Check out our documentation and usage limits to learn more.

Read more

Casey Gowrie Luba Kravchenko JJ Kasper Alasdair Monk Tristan Siegel Amy Burns
https://vercel.com/changelog/next-js-13-4 Next.js 13.4 on Vercel 2023-05-04T13:00:00.000Z

The Next.js App Router, now stable in Next.js 13.4, is supported out-of-the-box on Vercel, with pre-configured, global, framework-defined infrastructure.

Build data-driven, personalized experiences for your visitors with Next.js, and automatically deploy to Vercel with optimized, global performance.

  • Nested Routes and Layouts: Easily share UI between routes while preserving state and avoiding expensive re-renders. On Vercel, your layouts and pages can be configured to deploy as Edge Functions, delivering substantial SEO and performance improvements.

  • Streaming: The Next.js App Router natively supports streaming responses. Display instant loading states and stream in units of UI as they are rendered. Streaming is possible for Node and Edge runtimes—with no code changes—with Vercel Functions.

  • React Server Components: Server Components allow you to define data fetching at the component level, and easily express your caching and revalidation strategies. On Vercel, this is supported natively with Vercel Functions and Vercel Data Cache, a new caching architecture that can store both static content and data fetches.

  • Support for Data Fetching: With granular caching, Next.js allows you to choose from static or dynamic data at the fetch level. On Vercel, the Data Cache is automatically shared across deployments, speeding up build times and improving performance.

  • Built-in SEO Support: With the Metadata API, easily customize your page for sharing on the web, compatible with streaming.

Additionally in Next.js 13.4 you will find:

  • Turbopack (Beta): Your local dev server, faster and with improved stability.

  • Server Actions (Alpha): Mutate data on the server with zero client JavaScript.

Check out our documentation to learn more.

Read more

Tim Neutkens Delba de Oliveira Tobias Koppers JJ Kasper Jimmy Lai
https://vercel.com/blog/visual-editing Visual Editing: Click-to-edit content for headless CMSes 2023-05-03T13:00:00.000Z

Adding collaborative comments to Vercel Previews Deployments was our first step towards bringing the workflow of Google Docs and Figma to web development.

Today, we're bringing content editing to your Preview Deployment interface with Visual Editing.

Anyone can visually edit content and experience faster iteration—from developers to marketing teams.

Read more

Malte Ubl
https://vercel.com/blog/vercel-spaces Quality software at scale with Vercel Spaces 2023-05-03T13:00:00.000Z

As companies and codebases grow, it becomes hard to sustain a fast release cycle without letting errors slip into production. It shouldn't be this way. We should be able to move fast without breaking things—making quick updates while retaining great performance, security, and accessibility.

Today, we're introducing Vercel Spaces, the biggest evolution of Vercel's workflow yet: powerful tools and conventions designed to integrate with your monorepo setup and help you scale efficiently while retaining quality.

With Vercel Spaces, you'll find insights on your development workflows, code health and build logs, and brand new functionality to boost efficiency and remove blockers with Conformance, Code Owners, and Vercel Runs. These products, currently available in early private beta for Enterprises, can be used with Vercel regardless of where you host your application.

Read more

Mark Knichel Gaspar Garcia
https://vercel.com/blog/vercel-security Introducing Vercel Firewall and Vercel Secure Compute 2023-05-02T13:00:00.000Z

Finding the right balance between developer experience and robust enterprise-grade security can be challenging. Developers want tools that streamline workflows and enhance productivity, while organizations prioritize security measures to protect sensitive data and meet compliance standards. At Vercel, we believe you can have the best of both worlds—exceptional developer experience and top-tier security.

Read more

Ty Sbano
https://vercel.com/changelog/custom-firewall-rules-for-ip-blocking Custom firewall rules for IP blocking 2023-05-02T13:00:00.000Z

As a part of Vercel Firewall, you can now create custom rules to block specific IP addresses. By restricting access to your applications or websites based on the IP addresses of incoming requests, you can block malicious actors from viewing your site—preventing unauthorized access or unwanted traffic.

This feature is available for Enterprise teams on Vercel. Contact us to get started, or check out the documentation to learn more.

Read more

Tristan Siegel Geovani de Souza John Phamous Casey Gowrie Okiki Ojo Justin Vitale Kaitlyn Carter
https://vercel.com/blog/vercel-storage Introducing storage on Vercel 2023-05-01T13:00:00.000Z

Data is an integral part of the web. As JavaScript and TypeScript frameworks make it easier than ever to server-render just-in-time data, it's time to make databases a first-class part of Vercel's frontend cloud.

Read more

Edward Thomson Adrian Cooney Jueun Grace Yun Vincent Voyer Elijah Cobb Hector Simpson Fabio Benedetti
https://vercel.com/changelog/vercel-blob Introducing Vercel Blob 2023-05-01T13:00:00.000Z

Vercel Blob is a fast, easy, and efficient solution for storing files in the cloud.

The Vercel Blob API works with any framework. It can be securely called from Edge and Serverless Functions and returns an immutable URL that can be exposed to visitors or put into storage.

Vercel Blob is in private beta. Join the waitlist to get early access in the coming weeks.

Check out our documentation to learn more.

Read more

Vincent Voyer Edward Thomson Dom Busser Fabio Benedetti Hector Simpson
https://vercel.com/changelog/vercel-postgres Introducing Vercel Postgres 2023-05-01T13:00:00.000Z

Vercel Postgres is a serverless PostgreSQL database, designed to integrate with Vercel Functions and any frontend framework.

Vercel Postgres is available for Hobby and Pro users during the public beta.

Check out our documentation or get started with a template:

Read more

Elijah Cobb Jueun Grace Yun Edward Thomson Kylie Czajkowski Hector Simpson
https://vercel.com/changelog/vercel-kv Introducing Vercel KV 2023-05-01T13:00:00.000Z

Vercel KV is a serverless, durable Redis database, making it easy to implement features like rate limiting, session management, and application state management.

The Redis-compatible SDK works from Edge or Serverless Functions and scales with your traffic. KV stores are single region by default, but can be replicated to multiple regions for distributed workloads.
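
The rate-limiting use case can be sketched as follows; isRateLimited is a hypothetical helper, and the snippet assumes @vercel/kv is installed with the project's KV credentials configured:

```typescript
// Hypothetical per-IP rate limiter built on the Redis-compatible SDK.
import { kv } from '@vercel/kv';

export async function isRateLimited(ip: string, limit = 10): Promise<boolean> {
  const key = `rate:${ip}`;
  const count = await kv.incr(key); // atomic Redis INCR
  if (count === 1) {
    await kv.expire(key, 60); // start a 60-second window on first hit
  }
  return count > limit;
}
```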

Vercel KV is available for Hobby and Pro users during the public beta.

Check out our documentation to learn more.

Read more

Adrian Cooney Fabio Benedetti Edward Thomson Dom Busser Hector Simpson
https://vercel.com/changelog/open-graph-link-sharing-inspector Inspect and validate Open Graph metadata for enhanced link sharing 2023-04-28T13:00:00.000Z

You can now inspect and validate Open Graph (OG) images and metadata for your Vercel deployments directly from the dashboard, without needing to use third-party tools.

View link previews for social platforms like Twitter, Slack, Facebook, and LinkedIn, and optimize your site for sharing on the web. The OG inspector also provides context-aware suggestions for routes in your application based on the deployment output. Protected routes using Deployment Protection can also be inspected.

Try it now by visiting the Open Graph tab on a Vercel Deployment.

Read more

Samuel Foster Kevin Rupert Elijah Cobb Meg Bird Tom Bremer
https://vercel.com/changelog/detailed-deployment-summaries Detailed deployment summaries 2023-04-25T13:00:00.000Z

Deployment summaries now have more detailed views of the infrastructure primitives provisioned by Vercel at build time.

With this improved summary, you can track how changes in your frontend application code lead to specific changes in the build's output—the runtimes, regions, paths, and more—that Vercel uses or creates when deploying your application on our Global Edge Network.

Read more in our documentation or learn more about framework-defined infrastructure.

Read more

Andrew Healey Sam Becker Emil Kowalski Sean Massa Chris Barber John Phamous Mariano Cocirio
https://vercel.com/changelog/turborepo-run-summary-is-now-available Turborepo Run Summary is now available 2023-04-24T13:00:00.000Z

Turborepo Run Summary helps you visualize and debug your Turborepo tasks.

Easily view all tasks that ran as part of your deployment, complete with their execution time and cache status, as well as a snapshot of the data that turbo used to construct the cache key. Easily compare deployments to quickly identify the root cause of cache misses, and eliminate slow builds.

Turborepo Run Summary is now available on all plans for everyone using [email protected] or newer.

Check out our documentation to learn more.

Read more

Sam Becker Mehul Kar Tom Knickman Greg Soltis Jared Palmer
https://vercel.com/blog/vercel-web-analytics-is-now-generally-available Vercel Web Analytics is now generally available 2023-04-19T13:00:00.000Z

Vercel Web Analytics is generally available for insights on your top pages, top referrers, and user demographics such as countries, operating systems, browser information, and more. We're also excited to announce new functionality, filtering and custom events.

Web Analytics is available on all plans and custom events are available for Pro and Enterprise users.

Read more

Chris Widmaier Timo Lins
https://vercel.com/changelog/custom-events-now-available-for-web-analytics Custom events now available for Web Analytics 2023-04-19T13:00:00.000Z

Vercel Web Analytics now allows Pro and Enterprise users to track custom events in frontend applications. With custom events, you can measure user actions, like newsletter signups or which CTAs your customers click most, to dive deeper into understanding your users' journey.

Custom events are included in the Pro and Enterprise plans. Pro users can track up to 2 keys max per custom event object, while Enterprise users can track a custom amount. Pro users can also add on Web Analytics Plus for $50/month to get 8 keys.

To start tracking custom events, please upgrade to the new 1.0.0 release of our npm package @vercel/analytics.
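
A sketch of what a custom event call might look like with the 1.0.0 release (the event name and properties here are illustrative):

```typescript
// Hypothetical custom event tracked with @vercel/analytics v1.
import { track } from '@vercel/analytics';

export function onNewsletterSignup() {
  // Pro plans allow up to 2 keys per custom event object.
  track('newsletter_signup', { location: 'footer', plan: 'free' });
}
```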

Check out our documentation to learn more.

Read more

Chris Widmaier Doug Parsons Tobias Lins Timo Lins
https://vercel.com/changelog/web-analytics-is-now-generally-available Web Analytics is now generally available 2023-04-19T13:00:00.000Z

Vercel Web Analytics, previously known as Audiences, is now available to all users for insights on your top pages, top referrers, and user demographics such as countries, operating systems, and browser information.

Web Analytics now also includes custom event tracking to dive deeper into your users' interactions, such as button clicks, form submissions, and conversions.

Speed Insights is still a separate product that allows you to track application performance through your Real Experience Score.

Web Analytics is available on all plans with the Hobby plan getting 2,500 events per month, the Pro plan getting 25,000 events per month, and custom amounts for Enterprise. Pro users can also add on Web Analytics Plus for $50/month to get 500k included events.

Check out the documentation to learn more.

Read more

Chris Widmaier Tobias Lins Doug Parsons Timo Lins
https://vercel.com/blog/core-construction Building towards operational excellence at CORE Construction 2023-04-18T13:00:00.000Z

Balancing the intricacies of construction management requires a keen focus on efficiency and innovation. For CORE Construction, a technology-driven construction management company, that focus takes shape in a set of standards within a program called Operational Excellence (OPEX). To reach the pinnacle of OPEX, the CORE team relies on its developers to deliver innovative solutions that push the boundaries of the industry.

Read more

Greta Workman
https://vercel.com/changelog/usage-notification-settings-is-now-generally-available Usage notification settings is now generally available 2023-04-17T13:00:00.000Z

Pro users can now configure notification settings by category, set specific preferences, and choose when to be notified, helping you personalize your experience and avoid unexpectedly high usage bills.

  • Each category will have associated thresholds and dollar values. You are now able to select multiple thresholds and be notified only when they are reached.

  • You can choose not to receive usage notifications for one of the categories. However, we do not recommend this, because you will no longer be aware when you are approaching additional charges and may be surprised by a large overage invoice.

Check out our documentation to learn more.

Read more

Chloe Tedder Kevin Rupert Saranya Desetty
https://vercel.com/blog/making-commerce-ui-a-trusted-partner-for-global-ecommerce-brands Making Commerce-UI a trusted partner for global ecommerce brands 2023-04-14T13:00:00.000Z

Commerce-UI is a boutique agency focused on composable eCommerce that specializes in creating optimized, performant experiences for design-driven brands. Thanks to Next.js and Vercel, they’re able to help clients—like viral sensation Lift Foils—discover headless commerce, while providing a seamless shopping experience to users around the world.

Read more

Alli Pope
https://vercel.com/blog/incremental-migration-from-wordpress-for-a-dev-first-approach Incremental migration from WordPress for a dev-first approach 2023-04-14T13:00:00.000Z

Navigating the agency world can be complicated, with each agency claiming to offer the most innovative solutions. Enter Gearbox, a five-person team that crafts stunning sites and apps while empowering their clients to retain complete control over their brand. The secret to their success: a "dev-first approach" that sets them apart from typical marketing and design-focused competitors.

Flexible enough to power both their small team and larger brands, Next.js and Vercel are Gearbox's go-to solutions—even for clients who may not be ready to transition to a headless stack all at once.

Features to power an incremental migration

When he first founded Gearbox, CEO Troy McGinnis served his clients using WordPress, the tech he was accustomed to from his previous agency. After two years of working around security, maintenance, and performance issues with WordPress, the Gearbox team endeavored to move to a headless tech stack.

They initially chose Gatsby as their frontend framework. But “Gatsby and WordPress were still too restrictive. Gatsby was falling off the radar. That’s when we found Next.js and Vercel. We loved it from the get-go, and with all the recent features, it just keeps getting better and better.”

Going headless at the edge

Gearbox uses Edge Middleware to incrementally adopt Next.js. One advantage is managing blog redirects to maintain SEO. With Edge Middleware, blog requests are rewritten under the hood to an existing WordPress blog, filtering traffic and pointing to other services and websites, all while masking the URL and without a complicated server setup. Eventually, the team will move everything off that WordPress blog; in the meantime, this approach lets them migrate incrementally in phases and manage scope accordingly.
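
A sketch of this pattern with Next.js Edge Middleware (the hostname and path prefix are illustrative, not Gearbox's actual setup):

```typescript
// Sketch: rewriting legacy blog paths to an existing WordPress host
// while keeping the original URL visible to visitors.
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl;
  if (pathname.startsWith('/blog')) {
    // Serve WordPress content under the hood; the URL is masked.
    return NextResponse.rewrite(new URL(pathname, 'https://legacy-blog.example.com'));
  }
  return NextResponse.next();
}

// Only run the middleware for blog routes.
export const config = { matcher: ['/blog/:path*'] };
```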

Security, streamlined

The Gearbox team deeply values the security of their clients’ apps and sites. Security vulnerabilities are common with legacy WordPress sites, so the team often pushes clients to migrate to Vercel with this in mind. 

McGinnis shares, “We actually just had a client run a penetration test with their legacy WordPress stack, and it failed within a few hours. The first thing that popped into my head was, ‘if this was on Vercel and Next.js, we wouldn’t be having this problem.' We’d be locked down on security, and Vercel would be handling all this for us.”

With Vercel, sites are secure by default: requests are handled in isolation and content is replicated globally, ensuring stability by design.

Getting to production faster

The Gearbox team also loves Vercel Preview Deployments. “I was just raving to our team that it’s so nice to have Preview URLs. Previously we would have to pull down the database from the server to be able to view development content, or hook it up to another database and then pull down files, and it was all very time consuming even just to review code,” says McGinnis, “whereas with Vercel and our Preview URLs, it’s just there. It’s so integrated into our workflow.”

A revamped workflow for continued success

Gearbox has been all-in on Vercel and Next.js for almost three years now. They have a growing and diverse client base, from breweries to government agencies. Even if their clients can’t switch to a composable tech stack all at once, they experience the benefits of Next.js and Vercel with each increment of their migration. 

“It has completely reshaped how we approach development workflow,” says McGinnis. 

Read more

Kelsey Dillon
https://vercel.com/changelog/convert-comments-on-preview-deployments-to-linear-issues Convert Comments on Preview Deployments to Linear issues 2023-04-13T13:00:00.000Z

Comments on Vercel Preview Deployments can now be converted to Linear issues directly from the comment. This makes it easy to take action on feedback in the workflows your team is already using.

This feature is available to Pro and Enterprise users as well as Hobby users with public repos.

To get started, check out our Vercel Linear integration in the Vercel Integrations Marketplace. You can select the Add Integration button from within the Marketplace, then select which Team and project the integration should be scoped to.

Check out our documentation to learn more.

Read more

P.B. To Gary Borton Becca Zandstein
https://vercel.com/blog/wunderman-thompson-composable-workflow Containing multi-site management within a single codebase 2023-04-12T13:00:00.000Z

Wunderman Thompson, a global digital agency, specializes in helping brands create and manage their digital presence.

Their teams based in Europe often serve multiple countries and languages, catering to the needs of various portfolio brands, each with its own unique identity.

To tackle these challenges, Wunderman Thompson uses the principles of Atomic Design, a headless CMS, a monorepo workflow, and Vercel's serverless platform. This approach cuts development time by a factor of 10 and costs by a factor of 25 compared to their former method of PHP servers and WordPress monoliths.

In this guide, we'll discuss the importance of choosing the right framework for an efficient developer workflow, and walk you through how to use these techniques to create your own efficient design system deployed on Vercel and the headless CMS of your choice.

Read more

Alice Alexandra Moore
https://vercel.com/changelog/improve-infrastructure-security-with-vercel-secure-compute Improved infrastructure security with Vercel Secure Compute 2023-04-12T13:00:00.000Z

Vercel Secure Compute enables creating private connections between Vercel Serverless Functions and your backend cloud, like databases inside VPCs and other private infrastructure.

Enterprise customers can now opt into deploying to an isolated private network with dedicated IP addresses. This includes all production and preview traffic. Additionally, builds can be placed in specific regions and isolated from other customers.

Secure Compute improves the security and compliance of your deployments on Vercel. For more information about Vercel Secure Compute on Enterprise, or if you require support for VPC peering or VPN connections, please contact our sales team.

Read more

Miroslav Simulcik Enric Pallerols Hector Simpson Joe Haddad Simon Wijckmans Yanick Bélanger
https://vercel.com/blog/how-vercel-helps-mmm-page-manage-over-30-000-custom-domains How Vercel helps mmm.page manage over 30,000 sites 2023-04-07T13:00:00.000Z

mmm.page was founded to provide anyone with the tools to create their own website, regardless of their technical know-how. With fast and early success, having the whole platform as a single page application on Amazon Simple Storage Service (Amazon S3) became untenable as the user base grew into the tens of thousands. That’s why they turned to Vercel. Thanks to Server-Side Rendering (SSR), ease of deployment, and support for custom domains, Vercel makes it simple to manage mmm.page’s scale, monetize their offerings, and continue to innovate.

Read more

Alli Pope Steven Tey
https://vercel.com/blog/vercel-edge-config-is-now-generally-available Vercel Edge Config is now generally available 2023-04-06T13:00:00.000Z

Configuration data is used to control everything from A/B testing and feature flags to advanced rewrites and bespoke request-blocking rules. However, traditional solutions for managing configuration data can be slow, unreliable, and difficult to scale, which can negatively impact the user experience, latency, and overall effectiveness of your website.

Today we’re announcing the general availability of Edge Config, a new solution for managing configuration data in the cloud designed to be ultra-fast, reliable, and scalable, which can help improve your website's performance and efficiency.

Read more

Dominik Ferber Andy Schneider
https://vercel.com/changelog/edge-config-is-now-generally-available Edge Config is now generally available 2023-04-06T13:00:00.000Z

With Vercel Edge Config you can perform ultra-low latency reads from Vercel Edge Middleware, Edge Functions, and Serverless Functions. Edge Config gives you a place to store your experimentation data like feature flags and A/B testing cohorts, and configuration data for middleware routing rules like redirects or blocklists.
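
The read pattern can be sketched in a few lines. This is a self-contained stand-in, not the real client: the `get` helper mirrors the shape of the `@vercel/edge-config` client but reads from a plain object here, and the keys and values are examples.

```typescript
// Example data of the kind Edge Config stores: feature flags and
// middleware routing rules such as blocklists.
const store: Record<string, unknown> = {
  "checkout-redesign": true,        // feature flag
  "blocked-ips": ["203.0.113.7"],   // middleware blocklist
};

// Mirrors the async read shape of the Edge Config client.
async function get<T>(key: string): Promise<T | undefined> {
  return store[key] as T | undefined;
}

// A middleware-style check against the blocklist.
async function shouldBlock(ip: string): Promise<boolean> {
  const blocked = (await get<string[]>("blocked-ips")) ?? [];
  return blocked.includes(ip);
}
```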

Edge Config is now available for all Vercel users. Hobby users can perform 50,000 reads per month. Pro and Enterprise users can perform 1,000,000 reads per month, with additional usage available on-demand for an additional charge.

Check out the documentation to learn more.

Read more

Dom Busser Dominik Ferber Andy Schneider Edward Thomson
https://vercel.com/blog/vercel-joins-aws-marketplace Powering a serverless Web: Vercel joins AWS Marketplace 2023-04-05T13:00:00.000Z

AWS and Vercel have always had a shared vision: accelerating innovation through the power of serverless computing—and helping customers win big in the process.

Read more

Kevin Van Gundy
https://vercel.com/changelog/improved-node-js-compatibility-for-edge-middleware-and-edge-functions Improved Node.js compatibility for Edge Middleware and Edge Functions 2023-04-03T13:00:00.000Z

Vercel Edge Middleware and Edge Functions now support more Node.js modules. You may want to make use of these modules directly, but many of these low-level APIs are pieces of core functionality that other modules depend on. Adding support for these APIs expands the compatibility of existing npm packages.

The following APIs are now supported:

  • AsyncLocalStorage: Support for maintaining data for an invocation between different asynchronous execution contexts, which allows you to pass state to the context even when the function is hot and module context is preserved.

  • EventEmitter: A flexible API to build event-driven systems that serves as a core building block for communication between libraries that control I/O and listeners that process data when events occur.

  • Buffer: The most common way of handling binary data in Node.js, available globally or importable from buffer.

  • assert: A set of assertion functions for validating invariants and logical rules, useful for explicitly testing assumptions in code paths that run in Edge Functions.

  • util.promisify and util.callbackify: A helper function to convert a callback-style function signature into one that returns a Promise, and a helper function to convert a function that returns a Promise into one that accepts a callback.

  • util.types: A set of functions to validate that objects are of a given type.

You can take advantage of these additional APIs in Edge Middleware and Edge Functions with your next deployment. To deploy Edge Functions on Vercel you can get started with any framework or one of our templates.
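
As a small illustration of two of the newly supported APIs, the sketch below uses EventEmitter for event-driven flow and util.promisify to wrap a callback-style function of the kind many older npm packages expose. The `lookup` function and its values are hypothetical.

```typescript
import { EventEmitter } from "node:events";
import { promisify } from "node:util";

// A callback-style function, in the (err, value) signature promisify expects.
function lookup(key: string, cb: (err: Error | null, value?: string) => void) {
  cb(null, `value-for-${key}`);
}
// Converts the callback signature into one that returns a Promise.
const lookupAsync = promisify(lookup);

// Collects every "data" event emitted on the given emitter.
function collect(emitter: EventEmitter): string[] {
  const seen: string[] = [];
  emitter.on("data", (chunk: string) => seen.push(chunk));
  return seen;
}
```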

Read more

Javi Velasco Gal Schlezinger
https://vercel.com/changelog/git-lfs-support Git LFS support 2023-04-03T13:00:00.000Z

Vercel now supports Git LFS, enabling you to store large files using your Git client of choice.

Git LFS is free on all plans and can be toggled on and off in your project settings. Storing files may consume additional bandwidth with the Git provider that hosts them (GitHub, GitLab, Bitbucket, etc.).

Check out our documentation to learn more.

Read more

Gargi Sharma Mariano Cocirio
https://vercel.com/blog/managing-major-traffic-spikes-during-ticket-drops-with-vercel Managing major traffic spikes during ticket drops with Vercel 2023-03-31T13:00:00.000Z

Managing the complex scaling needs of an online ticketing platform can be challenging. For the French end-to-end ticketing solution Shotgun, ticket drops used to involve stress around scaling and server provisioning. As a company dedicated to providing artists and their fans with the best service, Shotgun now relies on Vercel for seamless launches. So when they drop tickets for artists like Billie Eilish, the team can rest assured their site can handle the traffic.

Read more

Kiana Lewis
https://vercel.com/blog/vercel-edge-google-optimize Replacing Google Optimize with the Vercel Edge Network 2023-03-30T13:00:00.000Z

Since 2012, Google Optimize has been helping web builders experiment with their UIs to find the most effective patterns for their applications. However, Google has announced that Optimize will be sunset on September 30, 2023.

Vercel is ready to help you build a platform to continue your research with higher performance, more control, and better data by leveraging the edge.

Read more

Anthony Shew
https://vercel.com/blog/streaming-for-serverless-node-js-and-edge-runtimes-with-vercel-functions Streaming for Serverless Node.js and Edge Runtimes with Vercel Functions 2023-03-28T13:00:00.000Z

Vercel recently became the first serverless computing provider to offer stable support for HTTP response streaming in both Node.js (Lambda) and Edge runtimes. This capability helps developers architect high-performance web applications with a focus on speed, scalability, and efficient resource usage.

Let’s take a look at how Vercel enables streaming for serverless Node.js environments, and how this capability can significantly boost your web app's performance and user experience.
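
The core pattern can be sketched with the web-standard ReadableStream, which both the Edge and Node.js runtimes expose. This is a minimal, hedged example; the chunk contents are illustrative.

```typescript
// Streams each chunk to the client as soon as it is enqueued, instead
// of buffering the full response body before sending anything.
function streamResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
  return new Response(body, { headers: { "Content-Type": "text/plain" } });
}
```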

Read more

Lydia Hallie
https://vercel.com/blog/nextjs-next-font Custom fonts without compromise using Next.js and `next/font` 2023-03-28T13:00:00.000Z

As web developers, we know the importance of typography in design. Custom fonts can set the tone for a website and enhance its overall aesthetic. However, using custom fonts can often create issues with website performance and user experience.

One of the biggest issues with custom fonts is the Cumulative Layout Shift (CLS) that occurs when a font takes too long to load. These Flashes of Unstyled Content (FOUC) can alter the positioning of elements already on the page and make it difficult to navigate. CLS and FOUC can also impact an application's search engine ranking.

On Vercel’s external websites, we used to solve these problems with the workarounds that we’ll talk about below. However, with the release of Next.js 13, we switched to next/font, which cut down on complex code, client-side JavaScript, and layout shift.

Read more

wits Alice Alexandra Moore
https://vercel.com/changelog/automatic-pnpm-v8-support Automatic pnpm v8 support 2023-03-28T13:00:00.000Z

Vercel now supports pnpm v8. For deployments whose pnpm-lock.yaml file specifies lockfileVersion: '6.0', Vercel will automatically use pnpm v8 for install and build commands.

To upgrade your project to pnpm v8, run pnpm install -g pnpm@8 locally and then re-run pnpm install to generate the new pnpm-lock.yaml file. After updating, create a new deployment for the changes to take effect.

If you want to specify an exact version of pnpm in your Vercel project, enable Corepack (experimental).

Check out the documentation to learn more.

Read more

Ethan Arrowood Steven Salat
https://vercel.com/blog/zero-cls-experiments-nextjs-edge-config How to build zero-CLS A/B tests with Next.js and Vercel Edge Config 2023-03-23T13:00:00.000Z

A/B testing and experiments help you build a culture of growth. Instead of guessing what experiences will work best for your users, you can build, iterate, and adapt with data-driven insights to produce the most effective UI possible.

In this article, you'll learn how we built a high-performance experimentation engine for vercel.com using Next.js and Vercel Edge Config, allowing our developers to create experiments that load instantly with zero Cumulative Layout Shift (CLS) and a great developer experience.
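
The heart of a zero-CLS experiment is deciding the variant deterministically before render (in middleware or on the server), so no client-side swap ever shifts the layout. The sketch below illustrates that idea; the hash function and 50/50 split are illustrative, not Vercel's implementation.

```typescript
// A simple, deterministic string hash (illustrative only).
function hash(s: string): number {
  let h = 0;
  for (const ch of s) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

// The same visitor always lands in the same bucket for a given
// experiment, so the variant can be rendered server-side with no flicker.
function assignCohort(visitorId: string, experiment: string): "A" | "B" {
  return hash(`${experiment}:${visitorId}`) % 2 === 0 ? "A" : "B";
}
```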

Read more

Elijah Cobb Samuel Foster Anthony Shew
https://vercel.com/blog/vercel-remix-integration-with-edge-functions-support Remix without limits 2023-03-22T13:00:00.000Z

We are excited to announce our advanced Remix integration, including support for:

Read more

Nathan Rajlich Ethan Arrowood
https://vercel.com/changelog/advanced-remix-integration-with-streaming-ssr-and-multi-runtime-support Advanced Remix integration with streaming SSR and multi-runtime support 2023-03-22T13:00:00.000Z

Deploy your Remix application on Vercel with advanced support for:

  • Streaming SSR: Dynamically stream content with both Node.js and Edge runtimes

  • API Routes: Easily build your serverless API with Remix and a route loader

  • Advanced Caching: Use powerful caching headers like stale-while-revalidate

  • Data Mutations: Run actions inside Serverless and Edge Functions
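
The caching support above can be sketched in the shape of a Remix route's `headers` export. This is a hedged example; the durations are illustrative.

```typescript
// In a Remix route module this would be `export function headers()`.
function headers(): Record<string, string> {
  return {
    // Serve from cache for 1s; for the next 59s, serve the stale copy
    // while revalidating in the background.
    "Cache-Control": "s-maxage=1, stale-while-revalidate=59",
  };
}
```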

Check out our blog to learn more about how Vercel enhances the Remix experience.

Read more

Nathan Rajlich Ethan Arrowood
https://vercel.com/changelog/march-2023 Improvements and Fixes 2023-03-09T13:00:00.000Z
  • AWS credentials in Serverless functions: You can now add environment variables with the AWS_ prefix like AWS_ACCESS_KEY_ID or AWS_REGION via the dashboard.

  • Framework specific documentation: There is a new Vercel docs section dedicated to frameworks such as Next.js, SvelteKit, Astro, Create React App, and Gatsby.

  • Vercel CLI: v28.16.13 was released with an upgrade to Turbo version 1.8.3, improved Remix support with an upgrade to @remix-run/dev version 1.14.0, support for Astro V2, and more.

  • Improved date picker: The new date picker in the Usage tab includes natural language parsing, presets, and shortcuts.

  • Vercel Cron Jobs: Framework authors can now create Cron Jobs via the crons property of the Build Output API configuration, and end users can create them via the crons property of vercel.json.

Read more

Steven Salat Craig Andrews John Phamous Shaquil Hansford Vincent Voyer
https://vercel.com/blog/framework-defined-infrastructure Framework-defined infrastructure 2023-03-07T13:00:00.000Z

Infrastructure as code (IaC) is the industry-standard practice for provisioning infrastructure in a repeatable and reliable way. Framework-defined infrastructure (FdI) is an evolution of IaC, where the deployment environment automatically provisions infrastructure derived from the framework and the applications written in it.

The best way to understand it is that a build-time program parses the source code written to a framework, understands the intent behind the code, and then automatically generates the IaC configuration needed to run the software. This means more predictable, lower cost, and lower risk DevOps through truly serverless—dare we say, infrastructureless—architecture.

In this article, we’ll explain how framework-defined infrastructure fits into modern views of infrastructure definition and automation. We’ll then show examples of how framework-defined infrastructure improves the experience of developing in open-source frameworks.

Read more

Malte Ubl
https://vercel.com/blog/turborepo-migration-go-rust Why Turborepo is migrating from Go to Rust 2023-03-07T13:00:00.000Z

Read more

Anthony Shew Jared Palmer Greg Soltis Nathan Hammond Nicholas Yang
https://vercel.com/blog/introducing-monitoring Introducing Vercel Monitoring 2023-03-06T13:00:00.000Z

We’re excited to share some new additions to our observability suite: Monitoring, now generally available for Pro and Enterprise teams, and Logs, now available to users on all plans. These tools give teams on Vercel the ability to quickly identify and resolve issues before they become major problems with an aggregated view of web traffic and performance data.

Read more

Uche Nkadi Mariano Cocirio
https://vercel.com/changelog/monitoring-is-now-available-to-view-traffic-and-performance-data-for Monitoring is now available to view traffic and performance data for improved observability 2023-03-06T13:00:00.000Z

Monitoring helps you detect and diagnose issues in your web applications by surfacing errors, traffic, and performance data. You can leverage example queries like Bandwidth by Project or Requests by Bot or create your own to quickly resolve issues and optimize your projects.

Monitoring is available for Pro and Enterprise users with 30 days data retention for Pro and 90 days data retention for Enterprise.

Check out our documentation to learn more.

Read more

John Phamous Arian Daneshvar Caleb Boyd Kevin Rupert Maedah Batool Meg Bird Cami Cano Uche Nkadi
https://vercel.com/changelog/activity-date-filtering-now-available Activity date filtering now available 2023-03-06T13:00:00.000Z

You are now able to filter the Activity of your team on Vercel based on a specific date. This makes it easier to find the actions your team has taken.

Check out the documentation to learn more.

Read more

John Phamous Simon Wijckmans
https://vercel.com/changelog/add-real-time-analytics-to-your-application-with-tinybird-integration Add real-time analytics to your project with the Tinybird integration 2023-03-03T13:00:00.000Z

The Tinybird integration lets you build a real-time backend for your Vercel projects in minutes. Developers now have instant access to the same data engine and APIs that Vercel uses to ingest and display billions of data points in real-time.

Try out the integration for your own data-driven application.

Read more

Noor Al-Alami Cami Cano Peter Saulitis
https://vercel.com/changelog/enhanced-logs-ui-to-search-inspect-and-share-application-logs Enhanced Logs to search, inspect, and share runtime logs 2023-03-01T13:00:00.000Z

Search, inspect, and share runtime logs for any deployment or project with our enhanced logs experience. This gives you the ability to quickly identify the root cause of persistent errors and customer issues.

All plans can access and search logs from the Vercel dashboard. Hobby and Pro customers have 1 hour of log retention.

Check out the documentation to learn more.

Read more

Darpan Kakadia Vincent Voyer Naoyuki Kanezawa Julia Shi Kevin Rupert Dom Busser Cami Cano Mariano Cocirio
https://vercel.com/blog/your-guide-to-headless-commerce Your guide to headless commerce 2023-02-27T13:00:00.000Z

Adopting a headless, or composable, commerce architecture helps ensure your digital storefront is high-performing, scalable, and driving more conversions each year. Leading ecommerce brands are choosing to go headless to stay competitive.

Let’s get back to basics and explore what headless commerce is, how it compares to monolithic commerce, and what you should do once you've made the migration to outpace your competitors and reach your KPIs.

Read more

Kiana Lewis Peter Saulitis
https://vercel.com/blog/a-better-developer-experience-makes-building-cruise-critic-more-efficient Optimizing performance for over 6M monthly visitors at CruiseCritic 2023-02-24T13:00:00.000Z

The web is any traveler’s first stop when it comes to planning vacations. Cruise Critic (a subsidiary of Tripadvisor) knows just how essential review sites are to today’s traveler; the company serves six million visitors every month. Growing traffic meant that Cruise Critic needed to evaluate their application stack as well as their development workflow in order to scale. 

Read more

Greta Workman
https://vercel.com/blog/vercel-cache-api-nextjs-cache Vercel Data Cache: A progressive cache, integrated with Next.js 2023-02-23T13:00:00.000Z

Before today, developers had to choose between either fully static or fully dynamic pages.

With Next.js 13.2, we’re excited to announce the Next.js Cache (beta) and the brand-new Vercel Data Cache (beta). This enables caching only part of your page as static data, while fully dynamically rendering the rest of your application, including accessing real-time and personalized data.

Read more

Casey Gowrie Luba Kravchenko JJ Kasper Sebastian Markbåge Lee Robinson
https://vercel.com/blog/from-monolith-to-composable-equipping-a-financial-services-ipo Moving from monolithic WordPress to composable gives Plenti total freedom 2023-02-23T13:00:00.000Z

Plenti is a technology-led consumer lending and investment company that helps borrowers bring their big ideas to life. Established in Australia in 2014, Plenti has funded over $900m of loans to over 55,000 borrowers and has attracted over 22,000 registered investors. 

Consumers hold financial services providers to high standards, so a Vercel and Next.js frontend was instrumental in transforming their brand and giving their users a trustworthy experience. 

Because of this, migrating from their WordPress-based monolithic stack and launching a composable Next.js frontend on Vercel was Plenti’s top tech priority as they approached their rebrand and IPO. 

And they did this all with a one-developer team.

Total flexibility: Next.js as a cornerstone of Plenti’s rebrand

Previously called RateSetter Australia, the Plenti team knew they needed to update their tech stack when they set off to IPO and rebrand in 2020. As part of the rebrand, everything on their website was redesigned, refactored, and rewritten. 

Plenti Software Engineer Matt Milburn managed the tech stack migration, saying “I tried to weigh every option possible, but Next.js ended up being the obvious choice. It just looks so nice, and gets you going so fast. You start building what you want to build right away, with zero configuration.” 

Milburn adds, “I’m happy we are no longer messing with our old monolithic WordPress stack. It’s total freedom.”

Everything optimized with Vercel API and Analytics

One of Milburn’s favorite features is the Vercel API, a REST API that gives you programmatic access to the capabilities of the entire Vercel platform, including Integrations. He used the Vercel API to build a plugin for Strapi, Plenti’s headless CMS provider, which triggers deployments automatically.

Milburn also values Vercel Analytics, particularly the Real Experience Score (RES) feature. “RES helps us narrow down a variety of optimizations we can make proactively,” he says. With RES, Plenti is able to collect web vitals from the actual devices their visitors are using, providing a true measure of how users actually experience what Plenti builds.

Zero-configuration empowers a team of one

But perhaps the most impressive part of Plenti’s successful composable frontend is that it’s managed by a single software engineer: Milburn. “My team is just me…I credit that to using Next.js and Strapi. I can manage both, all on my own,” he says. 

And when it comes to harnessing the power of Vercel in tandem with Next.js, he concludes, “using Next.js and Vercel together? Of course. They make the thing. It’s zero-configuration.”

  • See how Vercel Analytics gives you better insights for peak performance

  • Why are innovators going composable (also known as headless)? Get the guide. 

  • Want to ensure a great developer and user experience by going composable with Next.js? Get in touch.

Read more

Kelsey Dillon
https://vercel.com/blog/nextjs-seo-playbook The Next.js SEO Playbook: Ranking higher with Next.js on Vercel 2023-02-23T13:00:00.000Z

Search engine optimization (SEO) lets customers find and trust you more easily. And yet, improving your application's SEO is too often an opaque process. When SEO success feels like magic, you’re not always sure how to repeat or better it.

In this article, we'll demystify SEO, clarifying some of the most technical aspects of the process. SEO doesn't have to be a chore, and with Next.js on Vercel as your solid foundation, you're ready to build up best practices, seeing exponential gains in the process.

So, let’s start from the beginning.

Read more

Alice Alexandra Moore Thom Crowe
https://vercel.com/blog/how-makeswift-improved-ci-speed-by-65-with-turborepo How Makeswift improved CI speed by 65% with Turborepo 2023-02-22T13:00:00.000Z

Trusted by companies like Caterpillar and Render, Makeswift prides itself on providing easy visual or no-code Next.js site builders for their clients. When their small team began struggling with lengthy build times and a subpar dev experience, they turned to Turborepo. After adopting Turborepo, Makeswift improved overall CI pipeline time by 65%.

Read more

Kiana Lewis Anthony Shew
https://vercel.com/blog/cron-jobs Introducing Vercel Cron Jobs 2023-02-22T13:00:00.000Z

Vercel Cron Jobs can be used with Vercel Functions to:

Read more

Vincent Voyer Andy Schneider
https://vercel.com/blog/how-a-global-agency-built-a-web-innovation-engine-in-two-months How a global agency built a web innovation engine in two months 2023-02-22T13:00:00.000Z

If you’ve experienced a new technology for the first time at an exhibit or event, the talented technologists at Globacore, an award-winning digital agency based in Toronto, might have introduced you.

Globacore specialized in creating interactive experiences for physical spaces, like trade shows and offices, that stretched the limits of technology and human imagination for global brands like Acura, the IEEE, Volkswagen, and Samsung.

“We catch people’s attention with cutting-edge technology that they’ve never seen, much less experienced,” says Dave Boyle, Head of Development at Globacore.

Read more

Peter Saulitis
https://vercel.com/blog/how-vercel-and-next-js-keep-rippling-on-their-rising-path-to-success How Vercel and Next.js keep Rippling on their rising path to success 2023-02-22T13:00:00.000Z

After going from $13M to $100M in revenue in two years, HR platform Rippling needed a frontend stack as fast and flexible as its innovative solutions.

As they scaled to over 600 pages, engineer Robert Schneiderman realized that a fullstack WordPress solution wouldn't be able to handle their stakeholders' rapid iteration needs while maintaining the performance their customers require. By leveraging Next.js and Vercel alongside their WordPress headless CMS, Rippling was able to build a solution that kept developers, content creators, and customers happy.

As the company grows, teams across Rippling are empowered to make the changes they need. Over 90% of site changes are deployed by stakeholders immediately, giving Schneiderman the freedom to keep improving Rippling’s site performance and user experience. 

Read more

Greta Workman
https://vercel.com/changelog/run-scheduled-jobs-with-vercel-cron-jobs-and-vercel-functions Run scheduled jobs with Vercel Cron Jobs and Vercel Functions 2023-02-22T13:00:00.000Z

Vercel Cron Jobs enable you to run scheduled jobs for automating backups and archiving, sending email and Slack notifications, and more. Cron jobs can be used for any task you need to run on a schedule.

By using a specific syntax called a cron expression, you can define the frequency and timing of each task. Cron jobs are supported in Serverless Functions, Edge Functions, and the Build Output API.

Vercel Cron Jobs are available in public beta. Check out the documentation to get started.
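
For example, a vercel.json using the crons property might look like the following. The /api/daily-backup endpoint is a hypothetical route in your project, and the cron expression schedules it daily at 05:00 UTC:

```json
{
  "crons": [
    {
      "path": "/api/daily-backup",
      "schedule": "0 5 * * *"
    }
  ]
}
```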

Read more

Vincent Voyer Andy Schneider Luc Leray George Karagkiaouris Maedah Batool Garrett Tolbert
https://vercel.com/blog/how-indent-delivers-secure-access-with-next.js-and-vercel How Indent delivers secure access with Next.js and Vercel 2023-02-17T13:00:00.000Z

Indent is a security company that enables teams to perform critical business operations faster and more securely. They help organizations like HackerOne, Modern Treasury, and PlanetScale manage temporary access to cloud infrastructure for engineering teams and admin escalation for IT and security teams.

One of the key selling points for their customers is an easy-to-use experience for everyone at a company to request, approve, and revoke access from Slack or their web dashboard. Indent turned to Next.js to provide unparalleled developer experience and a performant end-user experience for their application and public-facing website.

Read more

Alli Pope
https://vercel.com/blog/nextjs-app-router-data-fetching Less code, better UX: Fetching data faster with the Next.js 13 App Router 2023-02-10T13:00:00.000Z

There's plenty to be excited about with the launch of Next.js 13, from the release of the automatically self-hosted @next/font to the highly-optimized next/image component. Today, we'll talk about the app directory, and how React Server Components and nested layouts save time for developers and users alike when it comes to fetching data and serving it on Vercel.

Read more

Alice Alexandra Moore Ariel Kanter
https://vercel.com/blog/runway-enables-next-generation-content-creation-with-ai-and-vercel Runway enables next-generation content creation with AI and Vercel 2023-02-10T13:00:00.000Z

Runway is an applied AI research company providing next-generation creation tools to users around the world. As a small company that prioritizes speed and innovation, every second counts.

Read more

Kiana Lewis Steven Tey
https://vercel.com/changelog/deployment-logs-filtering-now-available Deployment logs filtering now available 2023-02-10T13:00:00.000Z

You can now apply filters to your deployment logs. For failed builds, the logs will automatically filter to errors. The heuristics used to detect error and warning logs have also been improved.

Read more

John Phamous Sam Becker
https://vercel.com/blog/from-newsletter-to-global-media-brand-with-a-headless-frontend From newsletter to global media brand with a frontend cloud 2023-02-09T13:00:00.000Z

Read more

Kelsey Dillon
https://vercel.com/blog/navigating-tradeoffs-in-large-scale-website-migrations Navigating tradeoffs in large-scale website migrations 2023-02-09T13:00:00.000Z

“Why migrate a perfectly functioning website to a new framework? Will the end user benefit from all this, or is it just to satisfy the development team?”

We recently helped a client work through this decision process during a redesign of their entire web experience.

Read more

Julian Benegas Jose Rago
https://vercel.com/changelog/improved-web-notifications-now-generally-available Improved web notifications are now generally available 2023-02-09T13:00:00.000Z

Web notifications, previously in a public beta, are now generally available. This release includes two improvements:

  • Deployment failure notifications will not land in your web inbox if you have already navigated to the deployment's page.

  • You can now opt out of domain configuration notifications. While we do not recommend this action to avoid any application downtime, you are able to set this as a preference.

Check out the documentation to learn more.

Read more

Aaron Morris Caleb Boyd Liv Carman Kevin Rupert Sarvani Pandyaram Becca Zandstein
https://vercel.com/changelog/new-relic-integration-now-supports-traces-from-opentelemetry New Relic Integration now supports traces from OpenTelemetry 2023-02-09T13:00:00.000Z

The New Relic integration now supports OpenTelemetry traces using Vercel’s new OpenTelemetry Collector. The collector allows users to send traces from Serverless Functions in just a few clicks.

The New Relic integration now includes:

  1. Vercel Log Drains

  2. OpenTelemetry traces (for Serverless Functions)

  3. A pre-configured dashboard to analyze traces

Install the integration today or learn more about how to use OpenTelemetry on Vercel.

Read more

Cami Cano Darpan Kakadia Fabio Benedetti Noor Al-Alami Craig Andrews Marc Greenstock Dom Busser Damien Simonin Feugas
https://vercel.com/blog/vercel-remote-cache-turbo Faster iteration with Turborepo and Vercel Remote Cache 2023-02-07T13:00:00.000Z

Your software delivery is only as fast as the slowest part of your toolchain. As you and your teams work towards optimizing your deployment pipelines, it's important to make sure the speed of your continuous integration (CI) automations keep pace with your developers.

Read more

Anthony Shew
https://vercel.com/changelog/redeploy-or-promote-cli-deployments-from-the-dashboard Redeploy or promote CLI deployments from the dashboard 2023-02-04T13:00:00.000Z

You can now redeploy or promote all deployments to production from the Vercel dashboard, no matter if you create them with the CLI or through the Git integration.

Read the documentation to learn more.

Read more

Chris Barber wits Mariano Cocirio
https://vercel.com/changelog/see-deployment-status-and-comments-on-active-branches See deployment status and comments on active branches 2023-02-03T13:00:00.000Z

The Active Branches view for Deployments has an updated design which is now filtered on open branches with deployments. This update allows you to access your work in progress, surfaces useful data including deployment status and comments metadata, and also gives you a quick link to your Preview Deployment.

Check out the documentation to learn more.

Read more

George Karagkiaouris Gary Borton Malte Ubl Christopher Skillicorn Becca Zandstein
https://vercel.com/changelog/refreshed-deployment-link-design-in-vercel-dashboard Refreshed deployment link design in Vercel dashboard 2023-02-02T13:00:00.000Z

We've improved the experience of accessing Preview Deployments in the dashboard.

When using Vercel with our supported Git integrations (GitHub, GitLab, Bitbucket, or any Git provider through our API and CLI), a unique Preview Deployment is created for every git push to your project.

Based on your feedback, we've updated the design to better highlight the branch URL – an always up-to-date version of your code with every new commit you push. In addition, we now better surface the related git metadata, including the git branch and commit.

Check out the documentation to learn more about Git integrations.

Read more

wits Kevin Rupert
https://vercel.com/blog/super-serves-thousands-of-domains-on-one-project-with-next-js-and-vercel Super serves thousands of domains from a single codebase with Next.js and Vercel 2023-02-01T13:00:00.000Z

Super is the easiest way to create a website using nothing but Notion. In less than a minute, Super allows you to build a sleek, easy-to-manage site with instant page loads, SEO optimization, and zero code. 

CEO and Founder Jason Werner switched to Next.js and Vercel from Gatsby and Netlify early on, and has never looked back. “Because Vercel is the creator and maintainer of Next.js, I know the hosting solution and features will always be perfectly integrated with the framework. It just pairs so well,” says Werner.

Werner uses Vercel’s API to let his users add or remove custom domains on their Super projects. With the API, he is also able to detect any configuration changes in his users' domains and update them in real time.

Read more

Alli Pope Steven Tey
https://vercel.com/blog/gpt-3-app-next-js-vercel-edge-functions Building a GPT-3 app with Next.js and Vercel Edge Functions 2023-02-01T13:00:00.000Z

The field of artificial intelligence continues to take the world by storm. Huge strides have been made in text and image generation through tools like ChatGPT, GPT-3, DALL-E, and Stable Diffusion. It’s spawned a wave of exciting AI startups, many of which we’re seeing built with Vercel and Next.js.

One of the most exciting developments in the AI space is GPT-3, a cutting-edge natural language processing model developed by OpenAI. With its ability to understand and generate human-like text, GPT-3 has the potential to disrupt how we perform many of our tasks.

In this blog post, we’re going to break down how to build GPT-3 Apps with OpenAI, Next.js, and Vercel Edge Functions. We’ll do this by building twitterbio.com—first with serverless functions, then rebuilding it with Edge Functions and streaming to showcase the speed and UX benefits. By the end, you should be able to build your own GPT-3-powered applications.

The frontend

The Next.js frontend consists of a few elements:

  • A text box for users to copy their current bio or write a few sentences about themselves

  • A dropdown where they can select the tone of the bio they want to be generated

  • A submit button for generating their bio, which when clicked calls an API route that uses OpenAI’s GPT-3 model and returns two generated bios

  • Two containers to display the generated bios after we get them back from the API route

Here’s what the code for our index page looks like. We have a few pieces of state that correspond to the elements mentioned above. We’re also defining a prompt—like ChatGPT, we need to send a prompt to GPT-3 to instruct it to generate the new bios. Finally, we ask GPT-3 to generate two bios clearly labeled (so we can parse them correctly) using the user-provided bio and vibe as context.
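As a minimal sketch of that prompt construction (the variable names and exact wording are assumptions for illustration, not the original source):

```typescript
// Hypothetical example inputs; in the app these come from React state
// bound to the bio text box and the vibe dropdown.
const bio = "Senior engineer who ships TypeScript all day";
const vibe = "Funny";

// Ask GPT-3 for two bios labeled "1." and "2." so the response is easy
// to parse on the client, using the user-provided bio and vibe as context.
const prompt = `Generate 2 ${vibe} Twitter bios, clearly labeled "1." and "2.", each under 160 characters, based on this context: ${bio}`;
```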

The rest of our index page comprises the UI elements themselves: our text box, dropdown, submit button, and two containers on the bottom that we display when we get the generated bios. There's also some loading logic for the button to show a loading indicator when clicked.

In addition to the UI elements and the loading logic, we have a generateBio function that’s called when the user clicks the submit button. This sends a POST request to our /api/generate API route with the prompt in the body.

We get the generated bios back from the API route, save them to the generatedBios state, then display them to the user. Because we asked GPT-3 to return the text in a specific numbered format, we can split it on the “2.” to show the user the two bios separated nicely into containers as seen below.
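Putting those pieces together, a sketch of the client-side flow might look like this; the function names and the exact label cleanup are assumptions, not the original source:

```typescript
// Hypothetical sketch of generateBio and the "2." split described above.
async function generateBio(prompt: string): Promise<[string, string]> {
  // POST the prompt to our API route and wait for the full response
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const text: string = await res.json();
  return splitBios(text);
}

// GPT-3 was asked to label the bios "1." and "2.", so splitting on "2."
// separates them; we then strip the leading "1." label from the first.
function splitBios(text: string): [string, string] {
  const [first, second = ""] = text.split("2.");
  return [first.replace(/^\s*1\./, "").trim(), second.trim()];
}
```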

The backend

A great advantage of using Next.js is being able to handle both our frontend and backend in a single application. We can spin up an API route just by creating a file called generate.ts in our api folder. Let’s take a look at our /api/generate API Route.

We get the prompt from the request body that’s passed in on the frontend, then construct a payload to send to OpenAI. In this payload, we specify some important information like the exact model (GPT-3) and how many tokens we want OpenAI to respond with (a token is approximately 4 characters). In this case, we’re limiting the max tokens because Twitter bios have a character constraint.

After the payload is constructed, we send it in a POST request to OpenAI, await the result to get back the generated bios, then send them back to the client as JSON.
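A rough sketch of that payload-and-POST flow, assuming the OpenAI Completions API of the era (the model name and parameter values here are illustrative, not the exact source):

```typescript
// Hypothetical sketch of the /api/generate serverless logic described above.
function buildPayload(prompt: string) {
  return {
    model: "text-davinci-003", // illustrative GPT-3 completions model
    prompt,
    temperature: 0.7,
    max_tokens: 200, // a token is ~4 characters; bios must stay short
    n: 1,
  };
}

async function generateBios(prompt: string): Promise<string> {
  // POST the payload to OpenAI and await the complete (non-streamed) result
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildPayload(prompt)),
  });
  const json = await res.json();
  // Return the generated text to be sent back to the client as JSON
  return json.choices?.[0]?.text ?? "";
}
```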

There we have it! We built the first version of our application. Feel free to check out the code and demo for this approach.

Limitations of the serverless function approach

While this serverless approach works, there are some limitations that make edge a better fit for this kind of application:

  1. If we’re building an app where we want to await longer responses, such as generating entire blog posts, responses will likely take over 10 seconds which can lead to serverless timeout issues on Vercel’s Hobby tier. Vercel's Pro tier has a 60-second timeout which is usually enough for GPT-3 applications.

  2. Waiting several seconds before seeing any data isn't a good user experience. Ideally, we want to incrementally show the user data as it’s being generated—similar to how ChatGPT works.

  3. The responses may take even longer due to the cold start that’s present in serverless lambda functions.

Thankfully, there is a better way to build this application that addresses all three of these problems: Vercel Edge Functions with streaming. Edge Functions may not always be the answer, especially if you're relying on specific Node.js libraries that are not edge compatible. In this case, however, they will work great.

Let’s explore what Edge Functions are and how we can migrate our app to use them for faster generations and a better user experience.

Edge Functions vs. Serverless Functions

You can think of Edge Functions as serverless functions with a more lightweight runtime. They have a smaller code size limit, smaller memory, and don’t support all Node.js libraries. So you may be thinking—why would I want to use them?

Three answers: speed, UX, and longer timeouts.

  1. Because Edge Functions use a smaller edge runtime and run very close to users on the edge, they’re also fast. They have virtually no cold starts and are significantly faster than serverless functions.

  2. They allow for a great user experience, especially when paired with streaming. Streaming a response breaks it down into small chunks and progressively sends them to the client, as opposed to waiting for the entire response before sending it.

  3. Edge Functions have a timeout of 25 seconds and even longer when streaming, which far exceeds the timeout limit for serverless functions on Vercel’s Hobby plan. Using these can allow you to get past timeout issues when using AI APIs that take longer to respond. As an added benefit, Edge Functions are also cheaper to run.

To see a demo of Serverless vs Edge Functions in action, check out the video below, specifically from 4:05 to 4:40.

Edge Functions with streaming

Now that we understand the benefits and cost-effectiveness of using Edge Functions, let’s refactor our existing code to use them. Let’s start with our backend's API route.

The first thing we do is define a config variable and set the runtime to "edge". This is all you need to define this API route as an Edge Function. We also added an extra variable to our payload, stream: true, to make sure OpenAI streams in chunks back to the client.
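Those two changes can be sketched as follows, assuming a Next.js Pages API route; the model name and parameter values are illustrative:

```typescript
// Hypothetical sketch: the rest of the route mirrors the serverless version.
export const config = { runtime: "edge" }; // marks this API route as an Edge Function

const payload = {
  model: "text-davinci-003",
  prompt: "Generate 2 Funny Twitter bios...",
  temperature: 0.7,
  max_tokens: 200,
  stream: true, // ask OpenAI to send the completion back in chunks
};
```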

Finally, the last major change to this file is to define the stream variable after specifying the payload. We used a helper function, OpenAIStream, to enable us to incrementally stream responses to the client as we get data from OpenAI.

Let’s take a look at the helper function we used. It sends a POST request to OpenAI with the payload, similar to how we did it in the serverless version, but this is where the similarities stop. We create a stream to continuously parse the data we’re receiving from OpenAI, all while waiting for the [DONE] token to be sent since this signifies the end. When this happens, we close the stream.
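A minimal sketch of such a helper, assuming the Completions API emits server-sent `data: {...}` lines terminated by `data: [DONE]` (a production version would use a real SSE parser, since events can be split across network chunks):

```typescript
// Hypothetical sketch of the streaming helper described above.
function extractToken(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const data = line.slice(6);
  if (data === "[DONE]") return null; // end-of-stream sentinel
  try {
    return JSON.parse(data).choices?.[0]?.text ?? null;
  } catch {
    return null; // ignore partial or non-JSON lines in this sketch
  }
}

async function OpenAIStream(payload: object): Promise<ReadableStream<Uint8Array>> {
  // Same POST as the serverless version, but now we parse the body as it arrives
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(payload),
  });
  const encoder = new TextEncoder();
  const decoder = new TextDecoder();
  return new ReadableStream({
    async start(controller) {
      for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
        for (const line of decoder.decode(chunk).split("\n")) {
          if (line === "data: [DONE]") {
            controller.close(); // [DONE] signifies the end of the stream
            return;
          }
          const token = extractToken(line);
          if (token) controller.enqueue(encoder.encode(token));
        }
      }
      controller.close();
    },
  });
}
```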

In our frontend, the only code that changes is our generateBio function. Specifically, we define a reader using the native web API, getReader(), and progressively add data to our generatedBio state as it’s streamed in.
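As a sketch of that read loop (the onChunk callback stands in for appending to the generatedBio React state; names are illustrative):

```typescript
// Hypothetical sketch of the streaming read inside generateBio.
async function readStream(res: Response, onChunk: (text: string) => void) {
  const reader = res.body!.getReader(); // native web Streams API
  const decoder = new TextDecoder();
  let done = false;
  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    if (value) onChunk(decoder.decode(value)); // progressively surface each chunk
  }
}
```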

We’ve now refactored our app to use Edge Functions with streaming, making it faster and greatly improving the user experience by incrementally displaying data as it comes in.

Resources

We hope this walkthrough helps you build incredible GPT-3 powered applications. We’ve already seen several sites built with this template such as Rephraser, GenzTranslator, and ChefGPT—some of which have thousands of users. Visit the Twitter Bio site to see everything we talked about in action, check out our other AI templates, or start optimizing prompts across various models with Vercel's AI Playground.

Read more

Hassan El Mghari
https://vercel.com/blog/behind-the-scenes-of-vercels-infrastructure Behind the scenes of Vercel's infrastructure: Achieving optimal scalability and performance 2023-01-27T13:00:00.000Z

Vercel's platform provides speed, reliability, and the convenience of not having to worry about setting up and maintaining your own infrastructure. But what exactly goes on behind the scenes when we deploy our projects to Vercel, and what happens when you make a request to a site on the platform?

This post will go behind the scenes, explaining how Vercel builds and deploys serverless applications for maximum scalability, performance, and fast iterations.

Read more

Lydia Hallie
https://vercel.com/changelog/support-center Create and view support cases on the Vercel dashboard 2023-01-27T13:00:00.000Z

Enterprise customers can now submit support cases using the Vercel Support Center. The Support Center allows you to create and view all support cases, their statuses, and any messages from our Customer Success team in your dashboard. All cases are securely stored to safeguard your data.

Check out the documentation on Support Center to learn more.

Read more

Jarryd McCree Baruch Hen Amy Burns Cody Brouwers Brody McKee Nanda Syahrasyad Pearl Latteier Okiki Ojo Sarvani Pandyaram
https://vercel.com/changelog/deployment-environment-filtering-now-available Deployment environment filtering now available 2023-01-27T13:00:00.000Z

You can now filter your Deployments on Vercel by their environment type in addition to filtering by branches, making it easier to navigate to your production and/or preview deployments.

Check out the documentation to learn more.

Read more

P.B. To Kevin Rupert Becca Zandstein
https://vercel.com/blog/how-plex-6x-their-impressions-deploying-next-js-on-vercel How Plex 6x their impressions deploying Next.js on Vercel 2023-01-26T13:00:00.000Z

In 2021, Plex set out to create a new unified foundation to build their web experiences for years to come.

Read more

Alli Pope
https://vercel.com/changelog/domain-renewal-status-filtering Domain renewal status filtering now available 2023-01-26T13:00:00.000Z

You can now filter your account's Domains based on renewal status. This makes it easier to quickly determine which of your domains is set to expire soon or has an upcoming renewal.

Check out the documentation to learn more.

Read more

Travis Arnold Kevin Rupert Becca Zandstein Tori Russell
https://vercel.com/blog/deploying-ai-applications Deploying AI-driven apps on Vercel 2023-01-25T13:00:00.000Z

AI is transforming how we build and communicate on the Web—nowhere seen more clearly than on Vercel. A Stable Diffusion search engine, a suite of AI-powered visual editing tools, and even a rejection generator are just a few of the new projects keeping us amazed.

Whether you’re just starting out with AI or have experience in the field, let's explore how AI teams are building new projects, faster on Vercel.

Read more

Alice Alexandra Moore Hassan El Mghari Steph Dietz Steven Tey
https://vercel.com/changelog/filter-analytics-traffic-data Filter Analytics traffic data 2023-01-25T13:00:00.000Z

It's now possible to drill down into Vercel Analytics for a deeper understanding of your website traffic with the ability to filter traffic data by specific values.

Filtering can help answer questions like “Where did visitors who viewed your pricing page come from?”, “What content do people from Austria view the most?”, and “What pages do visitors coming from GitHub look at?”.

Check out our documentation to learn more.

Read more

Timo Lins Chris Widmaier Tobias Lins Doug Parsons
https://vercel.com/blog/how-supabase-elevated-their-developer-experience-with-turborepo How Supabase elevated their developer experience with Turborepo 2023-01-24T13:00:00.000Z

Supabase is an open-source alternative to Firebase that provides all the backend features you need to ship a project in a weekend. Their growing 60-person development team has been using Next.js on Vercel from the beginning to quickly ship their documentation, marketing site, and dashboard to thousands of users. Yet with a user base that continues to grow, the team is ready to ship even faster.

Read more

Alli Pope Anthony Shew
https://vercel.com/changelog/january-2023 Improvements and fixes 2023-01-23T13:00:00.000Z
  • Image Optimization: Source images for Vercel Image Optimization can now be viewed on the Usage tab.

  • Vercel CLI: Shipped v28.12.7 with improved Gatsby support.

  • Python Runtime for Vercel Functions: Improved documentation and examples for using the Python Runtime for Vercel Serverless Functions.

  • Edge Functions: Improved source map resolution and filtering for more readable and actionable errors.

  • Docs search: Improved search in docs by making CMD+K the default, enhancing the accuracy and relevance of search results, and including path-based recommendations.

  • Changes to .vercelignore: Created a .vercelignore file in the "root directory" to fix a bug that caused deployments sourced from git to not properly resolve the .vercelignore when a "root directory" has been set.

Read more

Jarryd McCree Steven Salat Dominik Ferber Chris Barber Gal Schlezinger Ethan Arrowood Ismael Rumzan
https://vercel.com/changelog/top-paths-in-the-usage-tab-are-now-generally-available Top Paths in the Usage tab are now generally available 2023-01-23T13:00:00.000Z

Top Paths are now available for free on all Vercel plans. With Top Paths, filters can be applied to query a specific date range or project, making it easier to understand your team's resource usage down to specific projects or Edge Functions.

You can click the Explore button to expand the section to a full page, allowing your team to see more paths as well as providing the ability to download a CSV file and share the view with other Team members.

We’re continuing to improve observability of projects on Vercel, now with enhanced visibility into usage. This builds on the release of Monitoring and improved Logs, which are now available for Enterprise teams.

Check out our documentation to learn more.

Read more

Jarryd McCree Uche Nkadi Arian Daneshvar John Phamous Valerie Downs Christopher Skillicorn
https://vercel.com/changelog/improved-support-for-gatsby-sites Improved support for Gatsby sites 2023-01-23T13:00:00.000Z

Gatsby sites on Vercel can now take advantage of powerful new features, including:

  • Server-Side Rendering (SSR): Render dynamic content, on-demand.

  • Deferred Static Generation (DSG): Generate static pages in the background on new requests, using the same infrastructure as Incremental Static Regeneration.

  • Native API Routes: Create functions inside the api directory to instantly scaffold new API Routes.

For Gatsby v4+ sites deployed to Vercel, we will automatically detect Gatsby usage and install the new @vercel/gatsby-plugin-vercel-builder plugin. Gatsby v5 sites require Node.js 18, the current default version used for new Projects.

Get started using Gatsby with our updated template.

Read more

Ethan Arrowood Nathan Rajlich Lydia Hallie
https://vercel.com/blog/react-wrap-balancer Improving readability with React Wrap Balancer 2023-01-19T13:00:00.000Z

Titles and headings on websites play a crucial role in helping users understand the content and context of a webpage. Unfortunately, these elements can often be difficult to read due to typographical anti-patterns, such as a single hanging word on the last line.

To tidy up these "widows and orphans," React Wrap Balancer reduces the content wrapper to the minimum possible width before an extra line break is required. As a result, the lines of text stay balanced and legible, especially when the content is lengthy.

Read more

Shu Ding Emil Kowalski Alice Alexandra Moore
https://vercel.com/changelog/configurable-webhooks Configurable webhooks 2023-01-19T13:00:00.000Z

Pro and Enterprise customers no longer need to create integrations to use webhooks. They are now configurable at the account level in the dashboard.

Check out the documentation to learn more.

Read more

Cami Cano Adrian Cooney Fabio Benedetti Chris Widmaier Florentin Eckl Sam Becker
https://vercel.com/changelog/configurable-log-drains Configurable Log Drains 2023-01-19T13:00:00.000Z

Pro and Enterprise customers can now configure Log Drains in the dashboard, without needing to create an integration. Third-party logging integrations will continue to be supported.

Check out the documentation to learn more.

Read more

Cami Cano Fabio Benedetti Adrian Cooney Florentin Eckl Sam Becker
https://vercel.com/changelog/changes-to-vercel-image-optimizations Changes to Vercel Image Optimization 2023-01-18T13:00:00.000Z

Today, we resumed charging for Image Optimization overages at a lower rate of $5 per 1,000 images for Pro and Enterprise teams. Optimizing images improves the end-user experience of your site and decreases bandwidth.

Teams only incur overages after the included limits have been reached. For example, Pro plans include 5,000 source images per billing period. Source image usage reset to zero at 9am Pacific Time on January 18th, 2023.

You can view your current source image count in your usage dashboard. Further, you can always disable optimization for a given project using this guide.

Read more

Jarryd McCree
https://vercel.com/blog/delivering-ai-analysis-faster-with-the-vercel-workflow Delivering AI analysis faster with the Vercel workflow 2023-01-17T13:00:00.000Z

Viable is an AI company that analyzes customer feedback and presents insights to businesses to improve products and services. With just six engineers, they’ve already processed 3.8 million data points for businesses like Latch, Uber, and AngelList.

Read more

Alli Pope
https://vercel.com/blog/how-vercel-enables-wunderman-thompson-to-launch-global-brands How Vercel enables Wunderman Thompson to launch global brands 2023-01-17T13:00:00.000Z

Wunderman Thompson unlocks the potential of international brands through strategic, digital-led growth. As Web technologies rapidly evolve, the agency looks to Vercel to lay a consistent foundation for dynamic websites.

“Normally, it’s not easy to sleep when you launch a website,” says Rodrigo Barona, Engineering and Design Manager at Wunderman Thompson. “But now, it’s not my business. It’s Vercel’s.”

By handling the intricacies of a global, edge-ready network, integrations for the most popular stacks, and even live on-page collaboration, Vercel lets Wunderman Thompson focus on what it does best: “moving at the speed of culture.”

Read more

Alice Alexandra Moore
https://vercel.com/blog/sanity-edge-middleware Sanity balances experimentation and performance with Vercel Edge Middleware 2023-01-13T13:00:00.000Z

The Sanity Composable Content Cloud enables teams to create better digital experiences—unleashing editor creativity while reducing engineering headaches. When it comes to their own marketing site, Sanity has similarly high standards, which is why they rely on Vercel and Next.js. With Edge Middleware and Serverless Functions, Vercel makes it simple for Sanity’s developers to collaborate between teams, create and manage experiments, and empower their users to dream big with pre-built templates.

Read more

Grace Madlinger
https://vercel.com/blog/edge-functions-enable-read-cv-to-deliver-profiles-globally-with-near-zero Edge Functions enable Read.cv to deliver profiles globally, with near-zero latency 2023-01-13T13:00:00.000Z

For Read.cv, showing is better than telling. The professional networking platform helps users add a more personal touch to the typical work profile—all made possible with Vercel and Edge Functions.

Read more

Greta Workman
https://vercel.com/blog/hashnode-runs-the-fastest-blogs-on-the-web-with-vercel Hashnode runs the fastest blogs on the web with Vercel 2023-01-13T13:00:00.000Z

Hashnode, a blogging platform for the developer community built using Next.js, was born from the fundamental idea that developers should own the content they publish. A key component of that ownership is publishing articles on a custom domain—a feature the Hashnode team spent hours monitoring and maintaining themselves. That’s when they turned to Vercel. 

Read more

Greta Workman
https://vercel.com/blog/helping-swells-merchants-provide-unparalleled-ecommerce-experiences Helping Swell’s merchants provide unparalleled ecommerce experiences 2023-01-13T13:00:00.000Z

Swell, a platform on Vercel, enables anyone to spin up their own ecommerce website using its headless, API-first backend. For them, Vercel and Next.js provide both the flexibility and accessibility they need to power their users’ storefronts around the world. The benefits are twofold: not only do Vercel and Next.js provide game-changing tools and features for the Swell team, but they ensure Swell’s merchants can create the fastest sites and the best shopping experiences for their customers. 

Read more

Greta Workman
https://vercel.com/blog/vercel-sitecore-partnership Vercel + Sitecore: Partnering on a composable future 2023-01-12T13:00:00.000Z

Today, we've announced a strategic partnership with Sitecore, a leading Digital Experience Platform (DXP) and Content Hub, to deliver an end-to-end composable solution for building and deploying dynamic web experiences.

Combining customer data and AI to deliver personalized experiences and offering a powerful CMS to create and manage content across channels and devices, Sitecore is an ideal solution for today’s connected, omnichannel digital experience.

Read more

Guillermo Rauch
https://vercel.com/blog/the-turbopack-vision The Turbopack vision 2023-01-11T13:00:00.000Z

The Turbopack team and I were excited to announce Turbopack's alpha release at Next.js Conf and we've been even more energized by the progress we've made since then.

Last month, I had the opportunity to take the stage at React Day Berlin to share more about the future plans for Turbopack.

Read more

Tobias Koppers
https://vercel.com/blog/kidsuper-innovates-with-next.js Building a global streetwear label with Next.js 2023-01-10T13:00:00.000Z

KidSuper is a Brooklyn-based cult streetwear label and hybrid art brand with strong ties to the music, sports, and tech communities. From collaborating with the likes of Puma and Nike to co-designing Louis Vuitton's 2023 menswear collection, founder Colm Dillane and CTO Adham Foda are known worldwide for their boundary-pushing approach to fashion.

The brand went viral in 2011 after Mac Miller wore their apparel on the cover of iTunes, and the duo knew they’d eventually require a tech solution that could keep up with their creativity. They needed to branch out from their Shopify-managed storefront and go headless, allowing them to bring their vision to life.

Read more

Greta Workman
https://vercel.com/blog/building-a-fast-animated-image-gallery-with-next-js Building a fast, animated image gallery with Next.js 2023-01-09T13:00:00.000Z

We held our biggest ever Next.js Conference on October 25, 2022 with over 110,000 registered developers, 55,000 online attendees, and hundreds attending in person in San Francisco. We had photographers on site to capture the experience and we wanted to share the photos they took with the community.

Instead of just sharing photos with a Google Drive link, we thought it’d be a good idea to showcase these 350+ amazing photos in an image gallery that was fast, functional, and beautiful. We ended up building our own and open-sourcing the code, making it easy for anyone to build their own image gallery.

In this blog post, we’re going to share the techniques we used to build a performant image gallery site that can handle hundreds of large images and deliver a great user experience.

Read more

Hassan El Mghari
https://vercel.com/changelog/link-and-share-build-logs Link and share Build Logs 2023-01-03T13:00:00.000Z

You can now link to specific Build Log lines or groups of Build Logs to help point to a specific section of the output when sharing.

By clicking on the timestamp on the left side of the build log line, you can highlight a line that stays highlighted when you copy the URL and share it with a colleague. The page will automatically scroll to the correct line when sharing linked logs.

Check out the documentation to learn more.

Read more

Dominik Ferber
https://vercel.com/changelog/vitepress-projects-can-now-be-deployed-with-zero-configuration VitePress projects can now be deployed with zero configuration 2023-01-03T13:00:00.000Z

Vercel now automatically optimizes your VitePress projects. When importing a new project, it will detect VitePress and configure the right settings for optimal performance — including automatic immutable HTTP caching headers for JavaScript and CSS assets.

Deploy the VitePress template to get started.

Read more

Lee Robinson
https://vercel.com/blog/turborepo-remote-cache-nextjs-publish-times-80-percent Turbocharging Next.js: How Remote Caching decreased publish times by 80% 2022-12-22T13:00:00.000Z

Next.js lets developers iterate on their projects faster—but we want to iterate on Next.js itself faster, too.

This year, Next.js surpassed 4 million npm downloads for the first time. With over 2,400+ contributors, the core team here at Vercel must craft a developer experience to keep up with such a vast community to develop, test, build, and publish Next.js.

Next.js had another first this year: introducing Rust to its core. While adding Rust brings greatly improved performance for developers using Next.js, the tradeoff was an increase in CI time to publish new releases due to the prolonged process of building Rust binaries.

Until implementing Turborepo Remote Caching dropped publish times by 80%.

Read more

JJ Kasper Anthony Shew
https://vercel.com/blog/optimize-your-nextjs-site How to optimize your Next.js site: Tips from industry leaders 2022-12-21T13:00:00.000Z

Optimizing your Next.js site for performance and efficiency can be complicated, but a good developer toolkit can help. Hear from some of the experts from this year’s Next.js Conf to see how you can best use React Server Components, the latest in web UI, powerful layouts, and more to create a world-class website.  

Read more

Hassan El Mghari Kiana Lewis
https://vercel.com/blog/making-live-reviews-a-reality-enhanced-preview-experience Enhanced Preview experience 2022-12-20T13:00:00.000Z

When teams can easily share and comment on work in progress, big ideas happen faster. Today, we’re bringing that capability to all teams on Vercel with the ability to comment on Preview Deployments. Now, collaborating on websites and applications is as seamless as working on a Google Doc or Figma file. 

Preview Deployments provide a shareable, production-quality URL for your website, while commenting enables real-time feedback in the context of the product you’re building. The result: dramatically faster iteration cycles and higher quality input from developers, designers, product managers, stakeholders, and more.

Read more

Malte Ubl Becca Zandstein
https://vercel.com/changelog/comments-on-preview-deployments-are-now-generally-available Comments on Preview Deployments are now generally available 2022-12-20T13:00:00.000Z

Comments on Vercel Preview Deployments are now generally available, giving you a centralized review workflow for rapid iteration. We have also added full support for GitLab and Bitbucket integrations in addition to GitHub.

All Pro and Enterprise teams will have the ability to use comments on their Preview Deployments, by default, for free.

Check out the documentation to learn more.

Read more

Malte Ubl Becca Zandstein Gary Borton Christopher Skillicorn George Karagkiaouris Nate Wienert Alli Pope
https://vercel.com/blog/vercel-at-afrotech-2022 Vercel at AfroTech 2022: An immersive experience 2022-12-19T13:00:00.000Z

Last month, Vercel had the privilege of sponsoring AfroTech Conference 2022—the place for all things Black in tech and Web3. Our team was joined by the likes of Google, Meta, and Tesla in the expo hall—so we knew that we needed to find ways to stand out, engage with the community, and attract top talent. 

This was our approach. 

An immersive experience with _OFCOLOR

Vercel’s mission is to give every developer the power to create at the moment of inspiration. From creatives to technologists, we envision a world where everyone can contribute to the web development process. And you’ve probably noticed that art and design play a major role in everything that we do.

Because of this mission, we brought leaders from Black At Vercel, our new Employee Resource Group (ERG), to partner with local Austin arts alliance _OFCOLOR. We produced an immersive experience called All Kinds of Black In Tech, featuring a photo exhibit of Black tech workers, interactive product demos, and a live DJ. It wasn’t your average networking event.

One voice, all roles, same team

Representation matters—across our whole business. “When I joined Vercel last year, I was amazed to see how many people of color there were on the All Hands company call. It absolutely blew me away. From software engineers to product directors, there were people of color all over Vercel,” said Jeremy Jefferson, Black at Vercel Founder and Leader. 

With this in mind, we ensured that our AfroTech crew on the ground included folks from all areas of the business, including sales, data, engineering, marketing, and product. 

This was also the perfect opportunity for Black At Vercel to meet and strategize. “While we strive to never be siloed, it’s not every day that you get to work face-to-face with folks from all areas of the business. Our number one goal is to help build our diversity network across the organization. Networking is vital for people of color, especially for those of us working remotely,” says Jeremy. 

Telling our story, attracting top talent

At the end of the day, we went to AfroTech to connect with top talent. We spent three days showcasing product demos, reviewing resumes, and performing interviews. We talked to over 1,500 attendees at our booth, conducted interviews, and led a workshop with computer science and MBA students at Huston-Tillotson University, a local historically Black college (HBCU). 

But most critically, we got to tell the Vercel story to an important audience. 

Maybe someday soon, one of our future engineers or data scientists will come up with an algorithm on how to measure those seemingly intangible metrics. For now, we can say with assurance that we aim to be a place where anyone, inclusive of identity, can contribute to our mission of making the Web faster. 

Read more

Mayokia Fowler
https://vercel.com/blog/protecting-deployments Deployment Protection: Added security controls now available on all plans 2022-12-19T13:00:00.000Z

Today we're thrilled to announce added privacy controls across all plans, including the ability to secure your Preview Deployments behind Vercel Authentication with just one click.

Read more

Kit Foster Balazs Varga George Karagkiaouris Hector Simpson Malte Ubl
https://vercel.com/changelog/protected-preview-deployments-available-on-all-plans Protected Preview Deployments available on all plans 2022-12-19T13:00:00.000Z

You can now make Preview Deployments private for free, across all plans.

  • Shareable Links: Share private Preview Deployments with external collaborators without the need to log in. See docs for limits.

  • Vercel Authentication: Team members can log in with their Vercel account to access secure previews.

Password Protection is now Advanced Deployment Protection, at the same price of $150/mo. Pro and Enterprise customers can add on Advanced Deployment Protection.

Enterprise customers will also have access to audit logs, allowing them to track who generated a given Shareable Link, at what time, and from which device.

Check out the documentation to learn more.

Read more

Kit Foster Balazs Varga Dominik Weber Simon Wijckmans Hector Simpson Malte Ubl Maedah Batool George Karagkiaouris
https://vercel.com/blog/building-a-powerful-notification-system-for-vercel-with-knock-app Building a powerful notification system for Vercel with Knock 2022-12-16T13:00:00.000Z

One of the main benefits of building with Next.js is the ease of leveraging APIs and components to quickly integrate with best-of-breed, backend technology.

Today we released our new notification system as a public beta, made possible with the help of our integration partner Knock, their powerful API, and robust component library.

This post will cover how we chose and implemented Knock for our notification center, and how you can use Knock to build notifications into your own application.

Read more

Becca Zandstein Aaron Morris
https://vercel.com/blog/edge-config-public-beta Introducing Edge Config: Globally distributed, instant configuration 2022-12-15T13:00:00.000Z

Last month we announced the limited availability of Vercel Edge Config, an ultra-low latency data store for near-instant reads of configuration data.

Edge Config is now available in public beta, alongside integrations with Statsig and HappyKit for A/B testing and feature flags.

Read more

Dominik Ferber Dom Busser Andy Schneider Sam Becker
https://vercel.com/blog/edge-functions-generally-available Vercel Edge Functions are now generally available 2022-12-15T13:00:00.000Z

Access to fast, global compute can give developers more flexibility to build rich experiences, regardless of their users' physical location or network speed. Vercel's Edge Functions aim to bring this capability into every developer's toolkit for building on the Web.

This past summer, alongside our GA of Edge Middleware, we released Edge Functions to Public Beta. During our beta period, our Edge Network has seen over 30 billion Edge Function invocations.

Since launching, we’ve made Edge Functions faster, more flexible, and capable of even larger workloads:

Read more

Javi Velasco Damien Simonin Feugas Marc Greenstock Craig Andrews Gal Schlezinger Seiya Nuta Kiko Beats Edward Thomson Angela Zhang Shaquil Hansford Amy Burns
https://vercel.com/changelog/edge-config-is-now-in-public-beta Edge Config is now in public beta 2022-12-15T13:00:00.000Z

Edge Config allows you to distribute data to our Edge Network without needing to perform a deployment. Instead, your data is actively replicated to all our regions before it’s requested. This means your data will always be available instantly, with most lookups returning in 5 ms or less, and 99% of reads returning under 15 ms.

Edge Config is now available in public beta—you can get started with our new Statsig and HappyKit integrations, Edge Config SDK, or by using one of our examples.

Check out the documentation to learn more.

Read more

Dominik Ferber Edward Thomson Noor Al-Alami Ismael Rumzan Dom Busser Andy Schneider Shaquil Hansford Javi Velasco Sam Becker
https://vercel.com/changelog/new-notification-controls-available-in-public-beta Get notified on build failures and more with new notification controls 2022-12-15T13:00:00.000Z

The new notification experience, now in public beta, allows you to configure alerts to land in your email and/or the Vercel dashboard for all notification types. Additionally, you can now configure your notification settings to group misconfigured domains into a single notification on desktop, preventing an influx of unwanted or repetitive emails.

Start configuring your notifications more granularly to increase the signal for:

  • Usage

  • Domain alerts

  • Team invites

  • Deployment failures

Check out the documentation to learn more.

Read more

Becca Zandstein Kevin Rupert Amy Burns Aaron Morris Liv Carman Sarvani Pandyaram Caleb Boyd
https://vercel.com/changelog/edge-functions-are-now-generally-available Edge Functions are now generally available 2022-12-15T13:00:00.000Z

During the beta period, we’ve made Edge Functions faster, more flexible, and more compatible with Node.js:

We’re excited to announce that beginning today, Edge Functions are now generally available so you can start executing more efficiently with instant “cold starts” to your functions for a better end-user experience.

Check out the documentation to learn more.

Read more

Edward Thomson Nathan Rajlich Sean Massa Javi Velasco Kiko Beats Damien Simonin Feugas Marc Greenstock Angela Zhang
https://vercel.com/blog/announcing-sveltekit-auth Announcing SvelteKit Auth: Bringing NextAuth.js to all frameworks 2022-12-14T13:00:00.000Z

NextAuth.js, the most popular authentication library for Next.js applications with almost 300,000 npm downloads per week, is growing to support the entire ecosystem of frontend frameworks.

Today, we’re excited to announce SvelteKit Auth (experimental), the first officially supported framework outside of Next.js, built on top of the new decoupled @auth/core library. This new package marks the larger move to Auth.js, providing authentication for the Web with any framework you like.

Get started with our new SvelteKit Authentication Template.

Read more

Balázs Orbán
https://vercel.com/blog/using-sveltekit-1-0-on-vercel Using SvelteKit 1.0 on Vercel 2022-12-14T13:00:00.000Z

SvelteKit is a new framework for building web applications that is gaining popularity among developers for its simplicity and performance. Built on top of Svelte (like Next.js for React), SvelteKit simplifies creating and deploying web applications. Server-side rendering, routing, code-splitting, and adapters for different serverless platforms are just a few of its out-of-the-box features.

Deploy SvelteKit 1.0 today, or continue reading to learn about the improvements to the framework over the past year and the benefits of deploying SvelteKit projects on Vercel.

What is SvelteKit?

SvelteKit is built around the Svelte framework, a modern JavaScript compiler that allows developers to write efficient and lightweight code. Instead of using runtime frameworks to stack user interfaces on top of the DOM, Svelte compiles components at build time down to a native JavaScript bundle. This results in fast web apps with small bundle sizes.

SvelteKit solves many common issues faced by web developers by providing an intuitive experience that takes care of tedious configuration and boilerplate code. Additionally, instead of retrieving the entire application on initial load, SvelteKit makes it easy to split your code into reusable chunks that can be quickly loaded on demand, allowing for snappy user and developer experiences alike.

SvelteKit extends Svelte by adding:

  • Server-side rendering (SSR), which can improve the performance and SEO of your application

  • Easy generation of static sites, which can be useful for blogs, marketing sites, and other types of content-heavy websites

  • TypeScript support

  • Hot Module Replacement, allowing you to update your application in real-time without losing state or refreshing the page

SvelteKit Features

SvelteKit is great for building applications of all sizes, with a fluid developer experience to match. It doesn't compromise on SEO, progressive enhancement, or the initial load experience, but unlike traditional server-rendered apps, navigation is instantaneous. SvelteKit comes with an abundance of out-of-the-box features, making it the recommended way to build Svelte applications. Let’s take a look:

  • Directory-based Router:

SvelteKit includes a directory-based router that updates the page contents after intercepting navigations. This means that the folder structure of the /src/routes folder becomes the route structure of our application. For example, /src/routes/+page.svelte creates the root route, and /src/routes/about/+page.svelte creates an /about route. To learn more about routing in SvelteKit, check out Vercel’s Beginner SvelteKit course.

  • Layouts:

    If you need an element displayed on multiple pages of an application, such as a header or a footer, you can use layouts. To create layouts in SvelteKit, add a file called +layout.svelte in the /routes folder. You can add whatever markup, styles, and behavior you want to this file, and it will be applied to all pages in the app. You can even make nested and grouped layouts to target only specific routes.

  • The load function:

SvelteKit has a unique way of loading page data using the load function. All +page.svelte files can have a sibling +page.js file that exports a load function. The returned value of this is available to the page via the data prop. The load function runs on both the client and server, but you can rename the file to +page.server.js to make it run on the server only.

  • Layout Data:

    All +layout.svelte files can also have a sibling +layout.js file that loads data using the load function. In addition to the layout it ‘belongs’ to, data returned from layout load functions is also available to all child +layout and +page files.

  • Endpoints:

As well as pages, you can define routes with a +server.js file (also referred to as an 'API route' or an 'endpoint'), which gives you full control over the response. These files export functions corresponding to HTTP verbs that take a RequestEvent argument and return a Response object.

  • Adapters:

    An adapter is a plugin that takes your app as input during build and generates output suitable for deployment on a specific platform. By default, projects are configured to use @sveltejs/adapter-auto, which detects your production environment and automatically selects the appropriate adapter for you.
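A default setup wiring up adapter-auto might look like this (a minimal sketch of a standard svelte.config.js):

```javascript
// svelte.config.js — adapter-auto detects the deployment platform (e.g. Vercel)
// at build time and selects the matching adapter for you.
import adapter from '@sveltejs/adapter-auto';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    adapter: adapter(),
  },
};

export default config;
```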

To learn more about SvelteKit's features in depth, check out Vercel's free Beginner SvelteKit course.

What’s changed in the past 12 months

As the Svelte team, including the core team members at Vercel, worked hard to prepare the stable SvelteKit 1.0 release, some necessary breaking changes had to be made. If you’ve used SvelteKit in the past, it may look quite different today. Let’s go over the most notable changes made to SvelteKit in the last year.

New Directory-based Routing

Changing SvelteKit’s file-based routing is by far one of the biggest updates made to SvelteKit. Previously, any file added to the routes directory would automatically create a route at that name. For example, creating the page routes/about.svelte would automatically create a page at /about, and routes/index.svelte would create our root page. Now, all routes are directory based and the old index.svelte has been replaced by +page.svelte. This new convention ensures that you are deliberately creating a route, and eliminates the need for underscores to colocate files. With this new convention, the page at the route /about will be routes/about/+page.svelte and our root page will be routes/+page.svelte.

Old File-based routing

New Directory-based routing
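
The two conventions map onto each other roughly like this (routes are illustrative):

```
# Old (file-based)            # New (directory-based)
src/routes/index.svelte   →   src/routes/+page.svelte
src/routes/about.svelte   →   src/routes/about/+page.svelte
```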

Learn more about SvelteKit's routing.

New Layouts System

With the new routing changes came major changes to the layouts system as well. Instead of naming our layout files __layout.svelte, we now name them +layout.svelte, similar to our pages. Previously, we could have multiple layouts in a single directory using named layouts, referenced with an @ in the filename, which we have since said goodbye to 👋.

In addition to the changes, a new grouped layouts convention was added. This allows us to share layouts within group directories, which are folders wrapped in parentheses. Group directories do not affect the pathname of nested routes, but act as a root route for layouts. To learn more about SvelteKit’s layouts, check out the Beginner SvelteKit course.

Old layouts system

New layouts system

Learn more about SvelteKit's layouts.

Loading Data

Previously, the load function was exported from a page component’s context="module" script block, and the returned data became available to the page as props. A page calling the load function would look something like this:
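
A minimal sketch of the old pattern (simplified; the returned data is illustrative):

```javascript
// Pre-1.0: exported from a <script context="module"> block in the page component.
// The returned `props` object became the component's props.
export function load() {
  return {
    props: {
      greeting: 'hello', // available in the page as `export let greeting`
    },
  };
}
```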

Now, SvelteKit has completely gotten rid of context="module", and the load function has moved into its own +page.js file. Our +page.svelte automatically receives the data returned from +page.js by declaring the strongly-typed data prop. Just like before, the load function runs on both the client and server. If you only want it to run on the server, use the .server extension (+page.server.js). Loading data into a page the new way looks like this:
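
A minimal sketch of the new pattern (simplified; the data is illustrative):

```javascript
// src/routes/+page.js — whatever load returns becomes the `data` prop of the
// sibling +page.svelte (declared there as `export let data`).
export function load() {
  return {
    greeting: 'hello', // illustrative data
  };
}

// Rename this file to +page.server.js to run load on the server only.
```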

Learn more about SvelteKit's data fetching.

Server routes (Endpoints)

Previously, to create an endpoint, you would add a .js (or .ts) file somewhere into src/routes, and include the data type it was meant to return as part of the name of that file. For example: if you wanted to return some data as JSON at the path /api/about.json, you could simply add an about.json.js file into your routes folder like this:
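
A minimal sketch of the old convention (the response shape is illustrative):

```javascript
// src/routes/api/about.json.js — pre-1.0, lowercase verb functions returned
// a plain object describing the response.
export function get() {
  return {
    status: 200,
    body: { name: 'SvelteKit' }, // serialized as JSON because of the .json.js name
  };
}
```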

Now, rather than adding server routes directly in the routes directory, we add them in the /routes/api directory. The new way of creating an endpoint is more similar to creating a page. Instead of simply adding the file about.json.js within this directory, we add a +server.js file within an about folder like this:
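
A minimal sketch of the new convention (SvelteKit's json helper is inlined here so the sketch is self-contained):

```javascript
// src/routes/api/about/+server.js — verb functions are now capitalized and must
// return a proper Response. SvelteKit's `json` helper builds one for you; its
// behavior is shown inline for clarity.
const json = (data) =>
  new Response(JSON.stringify(data), {
    headers: { 'content-type': 'application/json' },
  });

export function GET() {
  return json({ name: 'SvelteKit' });
}
```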

In addition to this change, server routes must now return a proper Response object. Thankfully, SvelteKit has a json function available that will do this for us by simply importing it, then wrapping whatever data we return in that function call. Lastly, the HTTP verb functions accepted by a server route must now be capitalized.

Migrating an old project? Check out the Migration Guide.

SvelteKit on Vercel

Vercel is a cloud platform for deploying and hosting web applications. Using Vercel in conjunction with SvelteKit creates a dream stack, offering several improvements:

  • Vercel provides a zero-configuration platform for deploying and hosting SvelteKit apps, making it easy to get your app up and running quickly.

  • Vercel recently launched Edge Functions, which allow you to run JavaScript code on their globally-distributed edge network. SvelteKit supports Vercel Edge Functions, meaning you can serve your users dynamically-rendered pages at the same speed you would serve static files from a CDN, drastically improving the performance and scalability of your SvelteKit app.

  • Vercel offers Vercel Analytics in the dashboard to help you understand the performance of your application based on real visitor data. With the Vercel Analytics API, you can now use Vercel Analytics with SvelteKit.

  • Vercel provides built-in support for server-side rendering (SSR) and static site generation (SSG), which can improve the performance and SEO of your SvelteKit app.

  • Vercel offers seamless integrations with popular development tools and services, such as GitHub, GitLab, and Visual Studio Code, making it easy to integrate your SvelteKit app into your existing workflow.

  • Vercel provides a powerful, intuitive interface for managing and monitoring your SvelteKit app, allowing you to see how your app is performing and make updates and changes as needed.

Overall, Vercel can provide a number of benefits when used with SvelteKit, making it easier to deploy, host, and manage your SvelteKit app. Whether you're a small team building a simple web app or a large organization with complex, mission-critical applications, Vercel can help you get the most out of your SvelteKit app.

Community

One of the key reasons for SvelteKit’s growing popularity is the inclusive community that has formed around it. Svelte Society, the community-run Svelte network, has become the home of all things related to Svelte and SvelteKit.

This community encourages participation from developers of all skill levels, and there are plenty of opportunities for beginners to get involved and learn from more experienced members. In addition to Svelte Society, there is also a network for women and non-binary people interested in Svelte called Svelte Sirens. These communities are all active on forums and social media, and there are regular events where SvelteKit developers can connect with each other.

But the SvelteKit ecosystem is more than just documentation and a supportive community. There are also many tools and resources available to use while building SvelteKit applications. These include templates, starter kits, and other helpful resources that can make it even easier to get started with SvelteKit. Here are some of our favorites:

The SvelteKit ecosystem is constantly growing and evolving. We’ve already got some awesome companies using SvelteKit on Vercel to do some amazing things! Check some of them out:

Get started with SvelteKit 1.0

Get started with SvelteKit on Vercel by deploying one of our SvelteKit templates in seconds, or begin learning with Vercel’s free Beginner SvelteKit course!

Read more

Steph Dietz
https://vercel.com/changelog/vuepress-projects-can-now-be-deployed-with-zero-configuration VuePress projects can now be deployed with zero configuration 2022-12-13T13:00:00.000Z

Vercel now automatically optimizes your VuePress projects. When importing a new project, it will detect VuePress and configure the right settings for optimal performance — including automatic immutable HTTP caching headers for JavaScript and CSS assets.

Deploy the VuePress template to get started.

Read more

Lee Robinson
https://vercel.com/blog/from-idea-to-100-million-views-instafest-music-festival-application From idea to 100 million views: Building a viral application for your personal music festival 2022-12-12T13:00:00.000Z

Instafest allows users to quickly create a festival poster from their top Spotify, Apple Music, and Last.fm artists. Anshay Saboo, a Computer Science student at USC, used Next.js and Vercel to launch Instafest fast and scale to 500,000 new users per hour, gaining millions of users and going viral on Twitter, TikTok, and more.

Read more

Lee Robinson
https://vercel.com/blog/migrating-a-large-open-source-react-application-to-next-js-and-vercel Migrating a large, open-source React application to Next.js and Vercel 2022-12-08T13:00:00.000Z

If your company started building with React over 5 years ago, chances are you implemented your own custom solution or internal framework. Many engineers and teams want to explore technologies like Next.js and Vercel. However, some don't know where to get started because it's so far from their current reality or they don't see how supporting a custom framework is holding them back.

As a coding exercise, we wanted to show what this could look like by migrating a large, open-source React application to use Next.js.

We managed to remove 20,000+ lines of code and 30+ dependencies, all while improving the local iteration speed of making changes from 1.3s to 131ms. This post will share exactly how we incrementally adopted Next.js and Vercel to rebuild the BBC website.

Read more

Michael Novotny
https://vercel.com/changelog/get-instant-observability-with-the-new-relic-integration Get instant observability with the New Relic integration 2022-12-07T13:00:00.000Z

New Relic is a fullstack application monitoring platform used by the world's top development teams.

With the integration, you can stream function, error, and build logs from your Vercel projects to a pre-built dashboard in New Relic. This allows you to observe key metrics like cache hit rate, 4xx and 5xx error log counts, and the performance of your Serverless Functions for rapid troubleshooting and optimization.

Try out the integration for instant observability.

Read more

Cami Cano Darpan Kakadia Noor Al-Alami Dom Busser
https://vercel.com/blog/aws-and-vercel-accelerating-innovation-with-serverless-computing AWS and Vercel: Accelerating innovation with serverless computing 2022-12-06T13:00:00.000Z

Last week, I joined Holly Mesrobian, AWS VP of Serverless Compute, on stage at AWS re:Invent in Las Vegas. We discussed our shared vision of accelerating innovation with serverless computing, and how Vercel has leveraged AWS Lambda over the years.

Making the Web faster in development and production

I’m passionate about digital transformation, and what it means for our customers—and their customers. We pride ourselves on creating the ultimate experience for developers and their users alike. 

As you would expect from a developer-first platform, it all starts with pushing code to the cloud, while ensuring the workflow is optimized for developer productivity.

To deliver world-class sites in production, we've turned AWS Lambda into an edge-first compute layer. We've also added globally distributed caching, which can be automatically purged from any data source, whether it's a database like DynamoDB or a composable commerce platform like Sitecore, BigCommerce, or Salesforce Commerce Cloud. With this model, our customers get optimal performance and infinite scale—at a fraction of the cost and overhead of manually provisioning servers or configuring a litany of cloud services.

Because of these powerful DX and UX elements, Holly and I agree that the easiest, fastest, most effective way to modernize is to go serverless. 

Take The Washington Post

As leaders in digital content production, their engineering team needed to "match the speed of The Post's formidable newsroom," according to their Director of Newsroom Engineering, Jeremy Bowers. They started out by using Next.js and Vercel to collaborate and launch code quickly using Preview Deployments, establishing an internal engine for innovation that enabled "lightning-fast turnaround on developing new features."

The team realized the benefits of the serverless model extended to production, and when coupled with Vercel’s Edge Network, provided the optimal performance and scale to meet high-traffic moments. That’s why they chose Vercel as the frontend for their US Midterm Elections Results pages.

The Washington Post handled this high-traffic moment flawlessly, making it “the smoothest election night anyone could remember,” says Jeremy.

Get started with serverless

Vercel helps users take advantage of best-in-class AWS infrastructure with zero configuration. Our customers are transforming their digital presence through their frontend—and accelerating the world's adoption of serverless technology.

I am constantly in awe of our customers’ achievements with this platform, and I can’t wait to see how they'll continue to drive innovation on the Web. 

Read more

Guillermo Rauch
https://vercel.com/changelog/updated-permissions-for-developer-role Updated permissions for Developer role 2022-12-05T13:00:00.000Z

Starting December 5, 2022, Enterprise plan users in the Developer role will be able to initiate a build when committing to the main branch of a Git project. This change ensures closer alignment with the security controls in your Git environment.

Currently, users assigned the Developer role can initiate a build by committing to the main branch of a project only if they are promoted to the Project Administrator role.

We suggest Enterprise plan customers review their policies in their Git environment to manage how users commit to the main branch.

Learn more about Enterprise team roles and permissions.

Read more

Simon Wijckmans
https://vercel.com/changelog/instant-rollback-public-beta-cli Instant Rollback public beta now available in the CLI 2022-12-02T13:00:00.000Z

You can now use Instant Rollback from the CLI to quickly revert to a previous production deployment, helping you prevent regressions in your site’s availability. Now available in beta on all plans.

Check out the documentation to learn more.

Read more

Chris Barber
https://vercel.com/blog/datocms-builds-60-faster-with-a-streamlined-workflow DatoCMS builds 60% faster with a streamlined workflow 2022-11-30T13:00:00.000Z

DatoCMS provides over 25,000 businesses with a headless CMS built for the modern Web. Since their users rely on them for speed and innovation, they needed to find a fix fast when build times grew and complexity increased on their static CDN. By switching to Next.js on Vercel, the team was able to cut build times by 60% while achieving both a better developer experience and simpler infrastructure.

Read more

Greta Workman
https://vercel.com/blog/scale-unifies-design-and-performance-with-next-js-and-vercel How Scale AI unifies design and performance with Next.js and Vercel 2022-11-30T13:00:00.000Z

Scale is a data platform company serving machine learning teams at places like Lyft, SAP, and Nuro. It might come as a surprise to learn that they do all this with only three designers. Their secret to scaling fast: Vercel and Next.js.

Read more

Greta Workman
https://vercel.com/blog/how-vercel-helped-justincase-technologies-cut-their-build-time-in-half How Vercel helped justInCase Technologies cut their build time in half 2022-11-30T13:00:00.000Z

justInCase Technologies’ development team needed a platform that would allow them to deliver a faster user experience without sacrificing developer experience. They struggled with their cloud platform’s infrastructure, with GitHub previews on a previous solution often getting stuck on the queued stage and failing. Not only were builds slow, they were also unreliable.

Once they made the switch to Vercel, they no longer faced preview failures. With 50% faster builds, they now save 72 hours of developer time per month.

Read more

Greta Workman
https://vercel.com/blog/loom-headless-with-nextjs With Next.js, Vercel, and Sanity, Loom empowers every team to iterate 2022-11-30T13:00:00.000Z

Loom, a video communication platform, helps teams create easy-to-use screen recordings to support seamless collaboration. Loom places high value on developer experience, but never wants to sacrifice user experience. Going headless with Next.js on Vercel, they can achieve both. By leaning on best-of-breed tools, all seamlessly embedded in their frontend, Loom's developers empower stakeholders, while the engineering team continues to bring new features to market.

Read more

Greta Workman
https://vercel.com/blog/edge-config-ultra-low-latency-data-at-the-edge Edge Config: Ultra-low latency data at the edge 2022-11-23T13:00:00.000Z

Today, we're introducing Edge Config: an ultra-low latency data store for configuration data.

Globally distributed on Vercel's Edge Network, this new storage system gives you near-instant reads of your configuration data from Edge Middleware, Edge Functions, and Serverless Functions. Edge Config is already being used by customers to manage things like A/B testing and feature flag configuration data.

Edge Config is now generally available. Check out the documentation or deploy it on Vercel.

Read more

Dominik Ferber Dom Busser Edward Thomson Andy Schneider Jimmy Lai Doug Parsons
https://vercel.com/changelog/new-integrations-to-extend-your-vercel-workflow New integrations to extend your Vercel workflow 2022-11-21T13:00:00.000Z

We are excited to announce our integration marketplace has nine new additions:

  • Highlight: send sourcemaps to Highlight for better debugging

  • Inngest: run Vercel functions as background jobs or Cron jobs

  • Knock: add Knock’s notification system to your application

  • Novu: add real-time notifications to your app

  • Sitecore XM Cloud: deploy to Vercel from Sitecore’s headless CMS

  • Svix: add Svix’s webhook service to your application

  • TiDB Cloud: connect your app to a TiDB Cloud cluster

  • Tigris Data: connect a Tigris database to your Vercel project

  • Zeitgeist: manage deployments from mobile

The integration marketplace allows you to extend and automate your workflow by integrating with your favorite tools.

Explore these integrations and more at our Integrations Marketplace.

Read more

Cami Cano Noor Al-Alami
https://vercel.com/changelog/node-js-18-lts-is-now-available Node.js 18 LTS is now available 2022-11-18T13:00:00.000Z

As of today, version 18 of Node.js can be selected in the Node.js Version section on the General page in the Project Settings. Newly created projects will default to this version.

The new version introduces several new features including:

  • ECMAScript RegExp Match Indices

  • Blob

  • fetch

  • FormData

  • Headers

  • Request

  • Response

  • ReadableStream

  • WritableStream

  • import test from 'node:test'

Node.js 18 includes substantial improvements to align the Node.js runtime with the Edge Runtime, including alignment with Web Standard APIs.
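Several of the APIs listed above are available as globals with no imports. A minimal sketch, runnable as-is on Node.js 18:

```javascript
// Node.js 18 exposes Web Standard APIs as globals, no imports needed.

// Blob and FormData, previously browser-only
const blob = new Blob(['hello world'], { type: 'text/plain' });
console.log(blob.size); // 11

const form = new FormData();
form.append('framework', 'next');
console.log(form.get('framework')); // next

// ECMAScript RegExp match indices via the `d` flag
const match = /(?<greeting>hello)/d.exec('hello world');
console.log(match.indices.groups.greeting); // [ 0, 5 ]

// fetch, Request, Response, and the stream classes are also global;
// a network call is omitted here to keep the sketch self-contained.
```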

The exact version used today is 18.12.1, and minor and patch releases will be applied automatically. Therefore, only the major version (18.x) is guaranteed.

Read the documentation for more.

Read more

Steven Salat Guðmundur Bjarni Ólafsson Ethan Arrowood Chris Barber Nathan Rajlich
https://vercel.com/changelog/faster-builds-with-improved-caching Faster builds with improved caching 2022-11-18T13:00:00.000Z

By optimizing how we retrieve the build cache, p90 deployments (build times for 90% of users, excluding the slowest 10% of outliers) are now 15 seconds faster.

The improvement mainly affects large applications, where builds can be up to 45 seconds faster.

Check out the documentation to learn more.

Read more

Peter van der Zee
https://vercel.com/changelog/bulk-upload-now-available-for-environment-variables Bulk upload now available for Environment Variables 2022-11-17T13:00:00.000Z

You can now more easily add Environment Variables to your projects using bulk upload. Import a .env file or paste multiple environment variables directly into the UI.
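A pasted block uses standard .env syntax, one KEY=value pair per line. A sketch with illustrative values:

```
DATABASE_URL=postgres://user:password@host:5432/mydb
STRIPE_SECRET_KEY=sk_test_placeholder
NEXT_PUBLIC_APP_URL=https://example.com
```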

Check out the documentation to learn more.

Read more

Baruch Hen Alasdair Monk Ismael Rumzan Valerie Downs Jarryd McCree
https://vercel.com/changelog/import-turborepo-nx-and-rush-monorepos-with-zero-configuration Import Turborepo, Nx, and Rush monorepos with zero configuration 2022-11-15T13:00:00.000Z

You can now import your Turborepo, Nx, and Rush projects to Vercel without configuration.

Try it now by importing a new project or cloning an example project. The generated configuration can be seen by expanding the "Build and Output Settings" section. We have also shipped an Nx guide and template to help you get started quickly.

Read more

Andrew Gadzik Chloe Tedder Tom Knickman
https://vercel.com/changelog/november-2022 Improvements and fixes 2022-11-14T13:00:00.000Z

With your feedback, we've shipped dozens of bug fixes and small feature requests to improve your product experience.

  • Vercel CLI: 28.5.0 was released with improved vc build monorepo support.

  • Build without cache via env: It's now possible to force a build through Git that skips the build cache by setting the VERCEL_FORCE_NO_BUILD_CACHE environment variable in your project settings.

  • Environment variables: Each deployment on Vercel can now support up to 1000 environment variables instead of only 100.

  • Vercel dashboard UI: The primary and secondary navigation bars are now full width so that each page UI has the option to maintain a max-width or take advantage of the whole viewport.

  • Vercel menu component: The menu dropdown in your dashboard is now slightly more compact on desktop with an improved animation, which increases contrast and gives you higher information density.

  • Improved code in Vercel docs: Code blocks now include file location as a header.

  • Improved visuals in Vercel docs: We now support dynamic dark and light mode screenshots.

Read more

Christopher Skillicorn Rich Haines Sean Massa Kevin Rupert
https://vercel.com/changelog/share-environment-variables-across-your-team-and-projects Share environment variables across your Team and Projects 2022-11-07T13:00:00.000Z

For all Teams on the Pro and Enterprise plans, you can now securely create environment variables at the team level and assign them to one or more projects. When a shared environment variable is updated, the value is updated for all projects to which it is linked.

Read the documentation to learn more.

Read more

Baruch Hen Jarryd McCree Valerie Downs Ismael Rumzan Alasdair Monk
https://vercel.com/changelog/emoji-reactions-now-available-in-preview-deployment-comments Emoji reactions now available in Preview Deployment comments 2022-11-04T13:00:00.000Z

You can now add emoji reactions when using comments in Preview Deployments.

With emoji reactions, you can signal boost any comment without adding noise to threads.

To access your Slack workspace custom emojis in a Preview Deployment, install the Vercel Slack Beta app and connect your Vercel account to Slack.

Check out the documentation to learn more about comments in Preview Deployments.

Read more

George Karagkiaouris Gary Borton Malte Ubl Christopher Skillicorn Nate Wienert Becca Zandstein
https://vercel.com/blog/using-vercel-comments-to-improve-the-next-js-13-documentation Using Vercel comments to improve the Next.js 13 documentation 2022-11-03T13:00:00.000Z

Writing documentation is a collaborative process—and feedback should be too. With the release of Next.js 13, we looked to the community to ensure our docs are as clear, easy to digest, and comprehensive as possible.

To help make it happen, we enabled the new Vercel commenting feature (beta) on the Next.js 13 docs. With 2,286 total participants, 509 discussion threads, and 347 resolved issues so far, our community-powered docs are on track to be the highest quality yet.

Visit beta.nextjs.org/docs to give it a try.

Read more

Delba de Oliveira Anthony Shew
https://vercel.com/blog/turbopack Introducing Turbopack 2022-10-25T13:00:00.000Z

Vercel's mission is to provide the speed and reliability innovators need to create at the moment of inspiration. Last year, we focused on speeding up the way Next.js bundles your apps.

Each time we moved from a JavaScript-based tool to a Rust-based one, we saw enormous improvements. We migrated away from Babel, which resulted in 17x faster transpilation. We replaced Terser, which resulted in 6x faster minification to reduce load times and bandwidth usage.

There was one hurdle left: webpack. Webpack has been downloaded over 3 billion times. It’s become an integral part of building the web, but it's time to go faster.

Today, we’re launching Turbopack: a high-performance bundler for React Server Components and TypeScript codebases.

Read more

Tobias Koppers Jared Palmer
https://vercel.com/blog/vercel-acquires-splitbee Vercel acquires Splitbee to expand first-party analytics 2022-10-25T13:00:00.000Z

The future of web analytics is real-time and privacy-first. Today, we're excited to announce our acquisition of Splitbee—bringing more analytics capabilities to all Vercel customers.

Along with the acquisition of Splitbee, we're adding top pages, top referring sites, and demographics to Vercel Analytics—available now. With Analytics, you can go beyond performance tracking and experience the same journey as your users with powerful insights tied to real metrics.

Read more

Kathy Korevec Timo Lins Tobias Lins
https://vercel.com/changelog/enhanced-audience-metrics-now-available-in-vercel-analytics Enhanced audience metrics now available in Vercel Analytics 2022-10-25T13:00:00.000Z

With the acquisition of Splitbee, Vercel Analytics now has privacy-friendly, first-party audience analytics.

Measure page views and understand your audience breakdown, including referrers and demographics—available now in Beta.

Check out the documentation to get started.

Read more

Kathy Korevec Timo Lins Tobias Lins Doug Parsons Chris Widmaier
https://vercel.com/changelog/instant-rollback-public-beta-available-to-revert-deployments Instant Rollback public beta available to revert deployments 2022-10-25T13:00:00.000Z

With Instant Rollback, you can quickly revert to a previous production deployment, making it easier to fix breaking changes. Now available in Beta for everyone.

Check out the documentation to learn more.

Read more

Sam Becker Adrian Bettridge-Weise Tori Russell Liv Carman Arian Daneshvar Kathy Korevec Becca Zandstein Maedah Batool
https://vercel.com/blog/building-an-interactive-webgl-experience-in-next-js Building an interactive WebGL experience in Next.js 2022-10-21T13:00:00.000Z

WebGL is a JavaScript API for rendering 3D graphics within a web browser, giving developers the ability to create unique, delightful graphics, unlike anything a static image is capable of. By leveraging WebGL, we took what would have been a static conference signup page and turned it into the immersive Next.js Conf registration page.

In this post, we will show you how to recreate the centerpiece for this experience using open-source WebGL tooling—including a new tool created by Vercel engineers to address performance difficulties around 3D rendering in the browser.

Read more

Paul Henschel Anthony Shew
https://vercel.com/blog/regional-execution-for-ultra-low-latency-rendering-at-the-edge Regional execution for ultra-low latency rendering at the edge 2022-10-20T13:00:00.000Z

As we work to make a faster Web, increasing speed typically looks like moving more towards the edge—but sometimes requests are served fastest when those computing resources are close to a data source.

Today, we’re introducing regional execution of Edge Functions to address this. Regional execution allows you to specify the region your Edge Function executes in, so you can run your functions near your data to avoid high-latency waterfalls, while still taking advantage of the fast cold start times of Edge Functions and ensuring your users have the best experience possible.

Read more

Edward Thomson Gal Schlezinger Meno Abels
https://vercel.com/changelog/regional-edge-functions-are-now-available Vercel Edge Functions can now be regional or global 2022-10-19T13:00:00.000Z

Vercel Edge Functions can now be deployed to a specific region.

By default, Edge Functions run in every Vercel region globally. You can now deploy Edge Functions to a specific region, which allows you to place compute closer to your database. This keeps latency low due to the close geographical distance between your Function and your data layer.

Check out the documentation to learn more.
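In a Next.js API route, the region can be pinned through the exported config object. A minimal sketch, assuming the Edge runtime; the file path and the region ID iad1 are illustrative:

```javascript
// pages/api/data.js: a sketch of a regional Edge Function.
export const config = {
  runtime: 'experimental-edge', // Edge runtime (named 'edge' in later Next.js versions)
  regions: ['iad1'],            // pin execution to one region, close to the database
};

export default async function handler(request) {
  // VERCEL_REGION reports where the function actually ran
  const region = process.env.VERCEL_REGION ?? 'dev';
  return new Response(JSON.stringify({ region }), {
    headers: { 'content-type': 'application/json' },
  });
}
```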

Read more

Edward Thomson Gal Schlezinger Malte Ubl Sean Massa
https://vercel.com/blog/nextjs-conf-2022-iterate-scale-deliver Next.js Conf 2022: Iterate, scale, and deliver a great UX 2022-10-18T13:00:00.000Z

On October 25, at 10:30am PT, nearly 90,000 viewers will tune in virtually to see what’s new for React and Next.js developers, while hearing over 25 experts share how they use Next.js to iterate, scale, and deliver amazing UX. Register for Next.js Conf 2022 today to join them live and see what’s coming.

Whether you’re part of a small team or an enterprise, take a sneak peek at what's in store for the most anticipated developer experience of the year.

Read more

Hassan El Mghari
https://vercel.com/changelog/explore-bot-traffic-data-now-in-monitoring-beta Explore bot traffic data, now in Monitoring Beta 2022-10-12T13:00:00.000Z

Monitoring now lets you explore traffic data that comes from known and unknown bots. You can group the traffic data by public_ip, user_agent, asn, and bot_name to efficiently debug issues related to traffic coming from real users or bots.

Three new example queries have been added to help you get started:

  1. Requests by IP Address

  2. Requests by Bot/Crawler

  3. Requests by User Agent

Check out the documentation to learn more.

Read more

John Phamous Gaspar Garcia
https://vercel.com/changelog/improved-logs-available-as-public-beta-for-enterprise-teams Improved logs available as public beta for Enterprise Teams 2022-10-11T13:00:00.000Z

Improved logs are now in public beta for all Enterprise accounts. This improvement allows you to search, inspect, and share your organization's runtime logs, either at a project or team level.

The new UI consolidates and streamlines error handling and debugging. Enterprise users can now search runtime logs from all deployments directly from the Vercel dashboard. Vercel retains log data for 10 days and will continue to increase retention throughout the beta period. For longer log storage, you can use Log Drains.

Read the documentation to learn more.

Read more

Vincent Voyer Darpan Kakadia Kevin Rupert Meg Bird Naoyuki Kanezawa Mariano Cocirio Maedah Batool
https://vercel.com/blog/introducing-vercel-og-image-generation-fast-dynamic-social-card-images Introducing OG Image Generation: Fast, dynamic social card images at the Edge 2022-10-10T13:00:00.000Z

We’re excited to announce Vercel OG Image Generation – a new library for generating dynamic social card images. This approach is 5x faster than existing solutions by using Vercel Edge Functions, WebAssembly, and a brand new core library for converting HTML/CSS into SVGs.

Try it out in seconds.

Read more

Shu Ding Steven Salat Shu Uesugi
https://vercel.com/blog/improving-the-accessibility-of-our-nextjs-site Improving the accessibility of our Next.js site 2022-09-30T13:00:00.000Z

Read more

John Phamous Max Leiter Zach Ward Anthony Shew
https://vercel.com/blog/serving-millions-of-users-on-the-new-mrbeast-storefront How the world’s biggest YouTuber served millions of users on Vercel 2022-09-29T13:00:00.000Z

How do you build a site to support peak traffic, when peak traffic means a fanbase of over 100 million YouTube subscribers? In this guest post, Julian Benegas, Head of Development at basement.studio, walks us through balancing performance, entertainment, and keeping "the buying flow" as the star of the show for MrBeast's new storefront.

Read more

Julian Benegas Jose Rago
https://vercel.com/changelog/september-2022-papercuts Improvements and Fixes 2022-09-29T13:00:00.000Z

With your feedback, we've shipped bug fixes and small feature requests to improve your product experience.

  • Vercel CLI: v28.4.5 was released with bug fixes and improved JSON parsing.

  • A new system environment variable: VERCEL_GIT_PREVIOUS_SHA is now available in the Ignored Build Step, allowing scripts to compare changes against the SHA of the last successful deployment for the current project and branch.

  • Vercel dashboard navigation: We’ve made it easier to navigate around the dashboard with the Command Menu. You can now search for a specific setting and get linked right to it on the page.

  • More granular deployment durations: The total duration time shown in the deployment tab on the Vercel dashboard now includes all three steps (building, checking, and assigning domains), and the timestamp next to each step is no longer rounded up.

  • Transferring projects: When transferring a project, the current team is always shown in the dropdown, disabled, with a "Current" label. This prevents transferring a project to the team it is already in and keeps the current team context visible.

  • Improved deployment logs: Log lines that start with npm ERR! are now highlighted in red.

  • CLI docs revamp: The Vercel CLI docs have moved and now include release phases and plan call-outs.

  • Build environment updates: Node.js updated to v16.16.0, npm updated to v8.11.0, pnpm updated to v7.12.2.
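The VERCEL_GIT_PREVIOUS_SHA variable above can power an Ignored Build Step directly. A sketch of such a step, where ./apps/web is an illustrative path; git diff --quiet exits 0 (cancel the build) when nothing under that path has changed since the last successful deployment, and non-zero (proceed) otherwise:

```
git diff --quiet "$VERCEL_GIT_PREVIOUS_SHA" HEAD -- ./apps/web
```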

Read more

Tom Knickman John Phamous Steven Salat Rich Haines Max Leiter
https://vercel.com/changelog/easily-access-vercel-brand-assets-and-guidelines Easily access Vercel Brand Assets and Guidelines 2022-09-28T13:00:00.000Z

You can now copy the SVGs for the Vercel logo and wordmark or open the brand guidelines by right clicking on the Vercel logo no matter where you are in the platform. The SVGs are ready for you to use in code or in your favorite design app.

Read more

John Phamous Christopher Skillicorn
https://vercel.com/changelog/improved-monorepo-support-with-increased-projects-per-repository Improved monorepo support with increased Projects per repository 2022-09-27T13:00:00.000Z

To help your monorepo grow, we have updated the number of projects that you are able to add from a single git repository for both Pro and Enterprise plans.

Pro users can attach up to 60 projects (up from 10) per Git repository, and the Enterprise limit has more than doubled.

Check out the documentation to learn more.

Read more

Tom Knickman Becca Zandstein Jared Palmer Nathan Hammond
https://vercel.com/blog/introducing-commenting-on-preview-deployments Introducing Commenting on Preview Deployments 2022-09-22T13:00:00.000Z

Vercel aims to encourage innovation through collaboration. We've enabled this from the start by making it easy to see your code staged on live environments with Preview Deployments. Today, we’re taking a step toward making Preview Deployments even more collaborative with new commenting capabilities now in Public Beta. By bringing everyone into the development process with comments on Previews and reviewing your UI on live, production-grade infrastructure, you deliver expert work faster.

Read more

Malte Ubl Becca Zandstein
https://vercel.com/changelog/commenting-on-previews-is-now-in-public-beta Commenting on Previews is now in Public Beta 2022-09-22T13:00:00.000Z

With comments, teams can give collaborative feedback directly on copy, components, interactions, and more right in your Preview Deployments.

PR owners, comment creators, and thread participants can review and collaborate on real UI with comments, screenshots, and notifications, all synchronized with Slack.

Check out the documentation to learn more or opt-in to start using comments now.

Read more

Christopher Skillicorn Becca Zandstein Malte Ubl Kathy Korevec George Karagkiaouris Gary Borton Nate Wienert Emil Kowalski Shaquil Hansford Alli Pope
https://vercel.com/changelog/add-elastic-scalability-to-your-backend-with-cockroach-labs Add elastic scalability to your backend with Cockroach Labs 2022-09-21T13:00:00.000Z

Combine CockroachDB Serverless with Vercel Serverless functions in under a minute to build apps faster and scale your entire backend elastically with the new Cockroach Labs integration, now in beta.

Try out the integration.

Read more

Noor Al-Alami Cami Cano
https://vercel.com/changelog/vercel-remote-cache-sdk-is-now-available Vercel Remote Cache SDK is now available 2022-09-19T13:00:00.000Z

Remote Caching is an advanced feature that build tools like Turborepo use to speed up execution by caching build artifacts and outputs in the cloud. With Remote Caching, artifacts can be shared between team members in both local and CI environments—ensuring you never need to recompute work that has already been done.

With the release of the Vercel Remote Cache SDK, we're making the Vercel Remote Cache available to everyone. Through Vercel's Remote Caching API, teams can leverage this advanced primitive without worrying about hosting, infrastructure, or maintenance.

In addition to Turborepo, which ships with Vercel Remote Cache support by default, we're releasing plugins for Nx and Rush.

Check out our examples to get started.

Read more

Gaspar Garcia Tom Knickman Jared Palmer
https://vercel.com/changelog/search-domains-on-the-vercel-dashboard Search domains on the Vercel dashboard 2022-09-15T13:00:00.000Z

You can now search your list of domains in the Domains tab on the Vercel dashboard to instantly find what you're looking for.

The search bar improves discoverability for teams that manage long lists of domains across multiple projects.

Check out the documentation to learn more.

Read more

Kathy Korevec Tori Russell wits Kevin Rupert
https://vercel.com/blog/next-js-layouts-rfc-in-5-minutes Next.js Layouts RFC in 5 minutes 2022-09-14T13:00:00.000Z

The Next.js team at Vercel released the Layouts RFC a few months ago outlining the vision for the future of routing, layouts, and data fetching in the framework. The RFC is detailed and covers both basic and advanced features.

This post will cover the most important features of the upcoming Next.js changes landing in the next major version that you should be aware of.

Read more

Lee Robinson
https://vercel.com/blog/using-the-latest-next-js-12-3-features-on-vercel Using the latest Next.js 12.3 features on Vercel 2022-09-13T13:00:00.000Z

When we created Next.js in 2016, we set out to make it easier for developers to create fast and scalable web applications, and over the years, Next.js has become one of the most popular React frameworks. We’re excited to release Next.js 12.3 which includes Fast Refresh for .env files, improvements to the Image Component, and updates to upcoming routing features.

While these Next.js features work out of the box when self-hosting, Vercel natively supports and extends them, allowing teams to improve their workflow and iterate faster while building and sharing software with the world.

Let’s take a look at how these new Next.js features are enhanced on Vercel.

Read more

Lee Robinson Delba de Oliveira
https://vercel.com/changelog/enterprise-customers-can-now-export-audit-logs Enterprise customers can now export audit logs 2022-09-12T13:00:00.000Z

Customers on the Enterprise plan can now export up to 90 days of Audit Logs to a CSV file.

Audit Logs allow team owners to track important events that occurred on their team including who performed an action, what action was taken, and when it was performed.

Check out the documentation to learn more.

Read more

Kit Foster Ana Jovanova Simon Wijckmans Balazs Varga Valerie Downs Dominik Weber Javier Bórquez Jarryd McCree Andy Schneider Maedah Batool
https://vercel.com/blog/building-a-viral-application-to-visualize-train-routes Building a viral application to visualize train routes 2022-09-10T13:00:00.000Z

When inspiration struck Benjamin Td to visualize train routes across Europe, he created a Next.js application on Vercel that very moment. To his surprise, the project generated over a million views, reaching the top of Hacker News and going viral on Twitter.

Read more

Lee Robinson
https://vercel.com/blog/introducing-the-vercel-templates-marketplace Introducing the Vercel Templates Marketplace 2022-09-09T13:00:00.000Z

We are excited to announce the launch of the Vercel Templates Marketplace.

Read more

Steven Tey
https://vercel.com/blog/curve-fitting-for-charts-better-visualizations-for-vercel-analytics Curve fitting for charts: better visualizations for Vercel Analytics 2022-09-09T13:00:00.000Z

Read more

Shu Ding
https://vercel.com/blog/ab-testing-with-nextjs-and-vercel How to run A/B tests with Next.js and Vercel 2022-09-09T13:00:00.000Z

Running A/B tests is hard.

We all know how important it is for our business: it helps us understand how users are interacting with our products in the real world.

However, many A/B testing solutions run on the client side, which introduces layout shift as variants are dynamically injected after the initial page load. This negatively impacts your website's performance and creates a subpar user experience.

To get the best of both worlds, we built Edge Middleware: code that runs before serving requests from the edge cache. This enables developers to perform rewrites at the edge to show different variants of the same page to different users.

Today, we'll take a look at a real-world example of how we used Edge Middleware to A/B test our new Templates page.
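The core of a middleware-based A/B test is a deterministic bucketing function plus a rewrite rule. A framework-free sketch of that logic; the function names and the /templates-b path are illustrative, not Vercel APIs:

```javascript
import { createHash } from 'node:crypto';

// Map a visitor id to a stable variant: the same visitor always
// lands in the same bucket, so pages render consistently.
function chooseBucket(visitorId, variants = ['a', 'b']) {
  const digest = createHash('sha256').update(visitorId).digest();
  return variants[digest[0] % variants.length];
}

// Rewrite variant-B traffic to a parallel route. Because this is a
// rewrite (not a redirect), the URL in the browser never changes.
function rewritePath(pathname, bucket) {
  return bucket === 'b' && pathname === '/templates'
    ? '/templates-b'
    : pathname;
}

console.log(chooseBucket('visitor-123')); // stable per visitor id
console.log(rewritePath('/templates', 'b')); // /templates-b
```

In Edge Middleware, chooseBucket would run against a cookie or a freshly generated id before the cache is consulted, and the resulting path would feed the middleware's rewrite response.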

Read more

Steven Tey
https://vercel.com/blog/nextjs-conf-2022 At Next.js Conf 2022, learn to build better and scale faster 2022-09-02T13:00:00.000Z

We’re excited to announce the third annual Next.js Conf on October 25, 2022. Claim your ticket now.

Read more

Hank Taylor Kathy Korevec
https://vercel.com/changelog/new-configuration-overrides-available-per-deployment New configuration overrides available per-deployment 2022-09-02T13:00:00.000Z

It's now easier to test out a new framework, package manager, or other build tool without disrupting the rest of your project. We've added support for configuration overrides on a per-deployment basis powered by six new properties for vercel.json.

The six supported settings are:

  • framework

  • buildCommand

  • outputDirectory

  • installCommand

  • devCommand

  • ignoreCommand

Check out the documentation to learn more.
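All six settings sit at the top level of vercel.json and override the project defaults for that deployment only. A sketch with illustrative values:

```json
{
  "framework": "nextjs",
  "buildCommand": "next build",
  "outputDirectory": ".next",
  "installCommand": "pnpm install",
  "devCommand": "next dev --port $PORT",
  "ignoreCommand": "git diff --quiet HEAD^ HEAD ./"
}
```

Because vercel.json travels with the code, a branch that edits the file is deployed with the new settings without changing the project settings used by other branches.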

Read more

Ethan Arrowood Steven Salat Nathan Rajlich
https://vercel.com/blog/sza-integral-create-at-the-moment-of-inspiration How SZA and Integral Studio create at the moment of inspiration 2022-08-29T13:00:00.000Z

Read more

Greta Workman Grace Madlinger
https://vercel.com/blog/introducing-support-for-webassembly-at-the-edge Introducing support for WebAssembly at the Edge 2022-08-26T13:00:00.000Z

We've been working to make it easier for every developer to build at the Edge, without complicated setup or changes to their workflow. Now, with support for WebAssembly in Vercel Edge Functions, we've made it possible to compile and run Vercel Edge Functions with languages like Rust, Go, C, and more.

Read more

Edward Thomson Gal Schlezinger
https://vercel.com/changelog/intelligent-ignored-builds-using-turborepo Intelligent ignored builds using Turborepo 2022-08-26T13:00:00.000Z

When deployed on Vercel, Turborepo now supports building only affected projects via the new turbo-ignore npm package, saving time and helping teams stay productive.

turbo-ignore leverages the Turborepo dependency graph to automatically determine whether each app, or one of its dependencies, has changed and needs to be deployed.

Try it now by setting npx turbo-ignore as the Ignored Build Step for each project within your monorepo.

Check out the documentation to learn more.

Read more

Tom Knickman Steven Salat Andrew Healey Jared Palmer Nathan Hammond
https://vercel.com/changelog/august-2022-papercuts Improvements and fixes 2022-08-22T13:00:00.000Z

With your feedback, we've shipped dozens of bug fixes and small feature requests to improve your product experience.

  • Vercel CLI: v28 was released with new commands and bug fixes.

  • Integrations: Team Owners can now transfer ownership of integrations installed on a Team to another member. This helps prevent disruption of work when a member leaves a Team.

  • Domain emails: Domain email notifications are now only sent to account owners. This includes domain transfer, expiration, and renewal emails.

  • Incremental Static Regeneration logs: Function logs from Incremental Static Regeneration now appear in the Vercel.com console, making it easier to understand when your pages are revalidated and monitor the usage of your revalidation functions.

  • Usage summaries: Usage summaries for Hobby accounts are now available in Account Settings → Billing.

  • Branch URLs on mobile: The deployment overview now includes a popover that lists branch URLs so that you can easily access them on your mobile device.

Read more

Rich Harris Steven Salat Sean Massa Lee Robinson
https://vercel.com/changelog/new-help-and-guides-pages-on-the-vercel-docs New help and guides pages on the Vercel docs 2022-08-22T13:00:00.000Z

Vercel's help page allows you to search documentation, find framework communities, or submit a case with our success team. The new guides page enables you to filter and search through hundreds of learning resources.

Check out the help and guides pages to learn more.

Read more

Rich Haines Ismael Rumzan Kevin Rupert Elijah Cobb Samuel Foster
https://vercel.com/changelog/monitoring-is-in-public-beta-for-enterprise-teams Monitoring is in public beta for Enterprise Teams 2022-08-17T13:00:00.000Z

The Monitoring tab is now in public beta for all Enterprise accounts. This new feature allows you to visualize, explore, and monitor your usage and traffic data. Using the query editor, you can create custom queries to gain greater insight into your data, allowing you to more efficiently debug issues and optimize all of the projects on your Vercel Team.

Check out the documentation to learn more.

Read more

Gaspar Garcia Jared Palmer John Phamous Hector Simpson Jarryd McCree Maedah Batool
https://vercel.com/changelog/vercel-cli-v28 Vercel CLI v28 is now available 2022-08-12T13:00:00.000Z

Version 28.0.0 of Vercel CLI is now available. Here are some of the key improvements made within the last couple of months:

  • If you have a Git provider repository configured, Vercel CLI will now ask if you want to connect it to your Project during vercel link setup. [28.0.0] (Note: this functionality was reverted in [28.1.4])

  • A new command vercel git allows you to set up deployments via Git from Vercel CLI. Get started by running vercel git connect in a directory with a Git repository. [27.1.0]

  • Previously, Vercel CLI deployments did not include Git metadata, even if you had a Git repository set up. Now, Git metadata is sent in deployments created via Vercel CLI. [25.2.0]

  • Now, when you run vercel env pull, if changes were made to an existing .env* file, Vercel CLI will list the variables that were added, changed, and removed. [27.3.0]

  • vercel ls and vercel project ls were visually overhauled, and vc ls is now scoped to the currently-linked Project. [28.0.0]

Notable changes

  • Dropped support for Node.js 12 [25.0.0]

  • Removed vercel billing command [28.0.0]

  • Removed auto clipboard copying in vercel deploy [27.0.0]

  • Deprecated --confirm in favor of --yes to skip prompts throughout Vercel CLI [27.4.0]

  • Added support for Edge Functions in vercel dev [25.2.0]

  • Added support for importing .wasm in vercel dev [27.3.0]

Note this batch of updates includes breaking changes. Check out the full release notes to learn more.

Read more

Matthew Stanciu Nathan Rajlich Sean Massa Chris Barber Steven Salat
https://vercel.com/changelog/view-projects-grouped-by-git-repository-with-list-view View projects grouped by Git repository with list view 2022-08-11T13:00:00.000Z

You can now view projects on the dashboard grouped by their repository with list view.

List view improves the experience for teams using monorepos or a large number of projects. Projects are sorted by date and displayed as a list. You can use the toggle to switch between the card or list view for displaying projects, with your preference saved across devices.

Check out the documentation to learn more.

Read more

Shaziya Bandukia Ernest Delgado Jared Palmer Becca Zandstein Christopher Skillicorn
https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast How we made the Vercel Dashboard twice as fast 2022-08-09T13:00:00.000Z

We want to keep the Vercel Dashboard fast for every customer, especially as we add and improve features. Aiming to lift our Core Web Vitals, our Engineering Team took the Lighthouse score for our Dashboard from 51 to 94.

We were able to confirm that our improvements had a real impact on our users over time using Vercel Analytics, noting that our Vercel Analytics scores went from 90 to 95 on average (desktop). Let’s review the techniques and strategies we used so you can make a data-driven impact on your application.

Read more

Shu Ding Anthony Shew
https://vercel.com/blog/improving-interaction-to-next-paint-with-react-18-and-suspense Improving INP with React 18 and Suspense 2022-08-09T13:00:00.000Z

Updated January 18, 2024.

Interaction to Next Paint (INP) measures your site’s responsiveness to user interactions on the page. The faster your page responds to user input, the better.

On March 12, 2024, INP will officially replace First Input Delay (FID) as the third Core Web Vital.

This post will help you understand why INP is a better way to measure responsiveness than FID and how React and Next.js can improve INP. You'll be prepared for updates to Core Web Vitals, which impact search rankings, as INP moves from experimental to stable. We have a separate post on understanding the metric and further optimization of INP.

Read more

Lee Robinson
https://vercel.com/changelog/vercel-analytics-support-for-interaction-to-next-paint-experimental Vercel Analytics support for Interaction to Next Paint (Experimental) 2022-08-09T13:00:00.000Z

Vercel Analytics now supports measuring Interaction to Next Paint (INP).

INP measures your site’s responsiveness to user interactions on the page. The faster your page responds to user input, the better. INP is an experimental metric that aims to measure responsiveness more accurately than First Input Delay (FID).

Try Vercel Analytics today to start measuring your performance.

Read more

Lee Robinson
https://vercel.com/changelog/instantly-transfer-domains-to-new-projects Instantly transfer domains to new projects 2022-08-05T13:00:00.000Z

Domains already in use can now be transferred directly to a new project on Vercel.

Previously, domains had to be removed from a project before being added to a new one. With this update, if you attempt to move a live domain to a new project, a prompt will appear offering to move the in-use domain and all associated redirects to the selected project.

Check out the documentation as well.

Read more

Justin Vitale
https://vercel.com/changelog/16x-larger-environment-variable-storage-up-to-64kb 16x larger Environment Variable storage up to 64KB 2022-08-04T13:00:00.000Z

You can now use a total of 64KB of Environment Variables for each of your Deployments on Vercel. This means you can add large values for authentication tokens, JWTs, or certificates without worrying about storage size.

Deployments using Node.js, Python, and Ruby support the larger 64KB environment variable limit.
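Since the limit applies to the combined size of all variables, it can help to estimate the total before deploying. A minimal sketch, assuming the limit covers the byte length of every name and value (the exact accounting Vercel applies may differ slightly):

```typescript
// Sketch: estimate whether a set of environment variables fits in the
// 64KB total, counting the UTF-8 bytes of every NAME=value pair.
const LIMIT_BYTES = 64 * 1024;

function envSizeBytes(env: Record<string, string>): number {
  return Object.entries(env).reduce(
    (total, [name, value]) =>
      total + Buffer.byteLength(`${name}=${value}`, "utf8"),
    0
  );
}

function fitsInLimit(env: Record<string, string>): boolean {
  return envSizeBytes(env) <= LIMIT_BYTES;
}
```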

Check out the documentation as well.

Read more

Craig Andrews Mariano Cocirio
https://vercel.com/blog/hashnode-runs-faster-blogs-on-the-web-with-vercel Hashnode runs the fastest blogs on the web with Vercel 2022-08-03T13:00:00.000Z

Hashnode, a blogging platform for the developer community built using Next.js, was born from the fundamental idea that developers should own the content they publish. A key component of that ownership is publishing articles on a custom domain—a feature the Hashnode team spent hours monitoring and maintaining themselves. That’s when they turned to Vercel. 

Read more

Greta Workman
https://vercel.com/changelog/enhanced-geolocation-information-available-for-vercel-functions Enhanced geolocation information for Vercel Functions 2022-08-03T13:00:00.000Z

Requests received by Serverless and Edge Functions are now enriched with headers containing information about the visitor's timezone.

As an example, a request from Tokyo now arrives with its timezone header populated.

These headers are automatically added for all new and existing Vercel Functions on all plans; no code or configuration change is needed.
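For illustration, here is a minimal sketch of consuming the timezone header. The header name x-vercel-ip-timezone is taken from Vercel's geolocation documentation; for a request from Tokyo its value would be Asia/Tokyo.

```typescript
// Read the visitor's timezone from the enriched request headers and
// format a timestamp in their local time. The header name follows
// Vercel's geolocation docs; a request from Tokyo carries
// x-vercel-ip-timezone: Asia/Tokyo.
function localTimeFor(headers: Headers, now: Date = new Date()): string {
  const timeZone = headers.get("x-vercel-ip-timezone") ?? "UTC";
  return new Intl.DateTimeFormat("en-US", { timeZone, timeStyle: "long" }).format(now);
}

// Headers as the Edge Network would enrich them for a Tokyo visitor.
const tokyo = new Headers({ "x-vercel-ip-timezone": "Asia/Tokyo" });
```

In an Edge Function, request.headers is already a standard Headers object, so you can pass it to a helper like this directly.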

Check out the documentation as well.

Read more

Naoyuki Kanezawa Matheus Fernandes Luc Leray
https://vercel.com/changelog/improved-accuracy-for-vercel-analytics-charts Improved accuracy for Vercel Analytics charts 2022-08-03T13:00:00.000Z

It's now easier to visualize performance trends over time with Vercel Analytics.

Individual Core Web Vital data points are now displayed as a scatter plot with a trend line showing the estimation curve. This line is shown when there are more than 100 data points for the currently selected date and time window. The performance delta is calculated based on the estimation curve instead of the first and last data points for improved accuracy.

Check out the documentation to learn more.

Read more

Shu Ding
https://vercel.com/blog/build-your-own-web-framework Build your own web framework 2022-07-28T13:00:00.000Z

Have you ever wondered what it takes to build your own web framework that also deploys to edge and serverless infrastructure? What features does a modern framework need to support, and how can we ensure that these features allow us to build a scalable, performant web application?

Read more

Lydia Hallie
https://vercel.com/changelog/role-based-access-control-now-generally-available-on-enterprise-plans Role-based Access Control now generally available on Enterprise Plans 2022-07-28T13:00:00.000Z

Role-based access controls are now available to all Enterprise customers, including:

  • Viewer: Read-only access

  • Billing: View invoices and edit billing settings

  • Developer: Grant elevated permissions per-project

Check out the documentation to learn more.

Read more

Jarryd McCree Ana Jovanova Miroslav Simulcik Dominik Weber Enric Pallerols Balazs Varga Valerie Downs Christopher Skillicorn Andy Schneider Maedah Batool
https://vercel.com/changelog/filter-checks-by-status-for-enhanced-troubleshooting Filter Checks by status for enhanced troubleshooting 2022-07-26T13:00:00.000Z

You can now filter Checks by status to show which failures are causing performance regressions. Install the Checkly Integration to add auto-generated Web Vitals monitoring to your deployments and prevent performance regressions.

To build your own deployment validation and status checks, view the Checks API documentation.

Read more

Darpan Kakadia Cami Cano Amy Burns Sam Becker
https://vercel.com/blog/build-output-api Announcing the Build Output API 2022-07-21T13:00:00.000Z

We believe the Web is an open platform for everyone, and strive to make Vercel accessible and available no matter how you choose to build for the Web.

Today we’re introducing the Build Output API, a file-system-based specification that allows any framework to build for Vercel and take advantage of Vercel’s infrastructure building blocks like Edge Functions, Edge Middleware, Incremental Static Regeneration (ISR), Image Optimization, and more.

Read more

Lee Robinson Sean Massa Nathan Rajlich Greta Workman Steven Salat Jeff Escalante
https://vercel.com/changelog/new-build-and-deploy-capabilities-in-vercel-cli New build and deploy capabilities in Vercel CLI 2022-07-21T13:00:00.000Z

Vercel’s Build Output API is now generally available. This API allows any framework, including your own custom-built solution, to take advantage of Vercel’s infrastructure building blocks including Edge Middleware, Edge Functions, Incremental Static Regeneration, Image Optimization, and more.

This specification also allows us to introduce two new commands to Vercel CLI:

  • vercel build: Build a project locally or in your own CI environment

  • vercel deploy --prebuilt: Deploy a build output directly to Vercel without sending source code through Vercel's build system

Read more about the Build Output API announcement on the blog. For framework authors, explore the Build Output API examples.
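As a sketch, a framework targeting the specification writes its output to a .vercel/output directory. The layout below follows the Build Output API structure, with the inline annotations as a rough guide rather than a complete reference:

```
.vercel/output/
├── config.json              # { "version": 3, "routes": [ ... ] }
├── static/                  # assets served directly from the Edge Network
│   └── index.html
└── functions/
    └── api/hello.func/      # one .func directory per Serverless or Edge Function
        ├── .vc-config.json  # runtime settings for this function
        └── index.js
```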

Read more

Nathan Rajlich Steven Salat Sean Massa
https://vercel.com/changelog/expiration-dates-now-available-for-access-tokens Expiration dates now available for Access Tokens 2022-07-20T13:00:00.000Z

You can now set an expiration date on all newly created Access Tokens. Setting an expiration date is highly recommended and is a standard security practice that helps keep your account secure. You can select from a default list of expiration dates ranging from 1 day to 1 year. Expired tokens can be viewed on the tokens page.

Check out the documentation to learn more.

Read more

Balazs Varga Valerie Downs Dominik Weber Jarryd McCree
https://vercel.com/changelog/corepack-experimental-is-now-available Corepack (experimental) is now available 2022-07-14T13:00:00.000Z

Corepack allows you to use a specific package manager version (pnpm, yarn, npm) in your Project. Starting today, you can enable experimental Corepack support.

Enable Corepack by adding packageManager to your package.json file and ENABLE_EXPERIMENTAL_COREPACK=1 as an Environment Variable in your Project. Corepack is experimental and not subject to semantic versioning rules. Breaking changes or removal may occur in any future release of Node.js.
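Concretely, enabling it takes one field plus the environment variable mentioned above (the project name and pnpm version here are illustrative):

```json
{
  "name": "my-app",
  "packageManager": "pnpm@7.5.0"
}
```

With ENABLE_EXPERIMENTAL_COREPACK=1 set in the Project's Environment Variables, builds will use exactly the package manager version declared in packageManager.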

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/changelog/osaka-japan-is-now-available-on-the-edge-network Osaka (Japan) is now available on the Edge Network 2022-07-13T13:00:00.000Z

We're excited to introduce Osaka (Japan) as our second region in Japan. Using Vercel Analytics, we saw a 12% reduction in end-user Time To First Byte (TTFB) in Japan and 15% in South Korea.

Static files and function responses are now automatically cached in the Osaka region. You can now select this region for running Serverless Functions in your Project Settings. Edge Middleware will now also run in this region, resulting in improved performance for surrounding countries.

Check out the documentation as well.

Read more

Joe Haddad Casey Gowrie Matheus Fernandes
https://vercel.com/changelog/enhanced-observability-on-the-usage-tab Enhanced observability on the Usage tab now in public beta 2022-07-13T13:00:00.000Z

The Usage tab makes it easier to understand your Team's resource usage, down to specific projects and Serverless Functions. Today, we've improved this functionality with a new section called Top Paths that displays the paths that are consuming the most resources in your Team. This functionality allows you to optimize your website by providing enhanced insights into bandwidth, requests, and invocations consuming the most resources over time.

With Top Paths, filters can be applied to query a specific date range or project. Clicking the Explore button expands the section to a full page, allowing your Team to see more paths as well as providing the ability to download a CSV file and share the view with other Team members.

This functionality is now available on all plans in public beta. To learn more, check out our documentation.

Read more

John Phamous Christopher Skillicorn Valerie Downs Jarryd McCree
https://vercel.com/changelog/improved-alerting-for-slack-integration Improved alerting for Slack Integration 2022-07-11T13:00:00.000Z

You can now get alerted on specific environments with the Slack Integration. This helps reduce notifications by allowing you to send Preview and Production deployment alerts to different channels.

Try out the Slack Integration today.

Read more

Chris Widmaier
https://vercel.com/changelog/connect-hasura-apis-to-your-app-instantly Connect Hasura APIs to your app, instantly 2022-07-11T13:00:00.000Z

You can now easily connect your Hasura GraphQL and REST APIs to your Vercel projects for streamlined fullstack development. Deploy data-rich applications faster and at global scale on Vercel with a Hasura Cloud backend.

Try out the integration and connect your APIs.

Read more

Cami Cano Darpan Kakadia Chris Widmaier Noor Al-Alami
https://vercel.com/changelog/hydrogen-projects-can-now-be-deployed-with-zero-configuration Hydrogen projects can now be deployed with zero configuration 2022-07-08T13:00:00.000Z

Vercel now automatically optimizes your Hydrogen projects. When importing a new project, it will detect Hydrogen and configure the right settings for optimal performance — including using Vercel Edge Functions for server-rendering pages.

Deploy the Hydrogen template to get started.

Read more

Nathan Rajlich
https://vercel.com/changelog/connect-your-postgres-db-faster-with-thin-integration Connect your Postgres DB faster with Thin integration 2022-07-07T13:00:00.000Z

Thin provides a fast, full-featured API backend on top of your Postgres DB with a Git-like workflow and end-to-end type safety. Connect your Vercel project to a Thin backend in a few clicks.

Try out the integration and create your backend.

Read more

Noor Al-Alami Darpan Kakadia Chris Widmaier Cami Cano
https://vercel.com/changelog/cape-town-south-africa-is-now-available-on-the-edge-network Cape Town (South Africa) is now available on the Edge Network 2022-07-07T13:00:00.000Z

We're excited to introduce Cape Town (South Africa) as a new region on the Edge Network. Using Vercel Analytics, we saw a 50% reduction in end-user Time To First Byte (TTFB) in South Africa.

Static files and function responses are now automatically cached in the Cape Town region. You can now select this region for running Serverless Functions in your Project Settings. Edge Middleware will now also run in this region, resulting in improved performance for surrounding countries.

Check out the documentation as well.

Read more

Joe Haddad Casey Gowrie Matheus Fernandes
https://vercel.com/changelog/vercel-analytics-api-is-now-available-for-all-frameworks Vercel Analytics API is now available for all frameworks 2022-07-01T13:00:00.000Z

Vercel Analytics helps you understand the performance of your application based on real visitor data. With the Vercel Analytics API, you can now use Vercel Analytics with any framework.

We currently have zero-configuration Analytics support for Next.js, Nuxt, and Gatsby. Now, any website can monitor its Core Web Vitals with Vercel Analytics by using the API directly. Get started today with the SvelteKit or Create React App starters, which have been updated to include support for the Vercel Analytics API.

Check out the documentation as well.
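As a sketch of what direct API usage looks like, the helper below builds the report payload for one metric. It is modeled on the web-vitals example from Vercel's documentation; the endpoint URL and field names (dsn, event_name, speed) are taken from that example and should be verified against the current docs.

```typescript
// Build the payload for reporting one Core Web Vital to the Analytics
// API. Endpoint and field names follow Vercel's documented web-vitals
// example; treat them as assumptions to verify against the docs.
const VITALS_URL = "https://vitals.vercel-insights.com/v1/vitals";

interface Metric {
  id: string;   // unique ID for this metric instance
  name: string; // e.g. "LCP", "FID", "CLS"
  value: number;
}

function vitalsBody(dsn: string, metric: Metric, href: string): URLSearchParams {
  return new URLSearchParams({
    dsn,                            // your project's Analytics ID
    id: metric.id,
    page: new URL(href).pathname,
    href,
    event_name: metric.name,
    value: metric.value.toString(),
    speed: "4g",                    // connection speed hint (static here)
  });
}
```

In the browser you would then send the payload with navigator.sendBeacon(VITALS_URL, body), falling back to fetch with keepalive: true so the report survives page unload.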

Read more

Lee Robinson
https://vercel.com/blog/vercel-edge-middleware-dynamic-at-the-speed-of-static Vercel Edge Middleware: Dynamic at the speed of static 2022-06-28T13:00:00.000Z

Since we announced Middleware last October, we’ve seen 80% month-over-month growth and over 30 billion requests routed through Edge Middleware on Vercel during public beta. Customers like Vox Media, Hackernoon, Datastax, and HashiCorp are using Edge Middleware to have complete control over routing requests in their Next.js applications.

With the release of Next.js 12.2, Vercel Edge Middleware for Next.js is now generally available (GA) for all customers. Edge Middleware is also available for all frameworks—now available in public beta along with a suite of other edge-first tools.

Read more

Greta Workman Lee Robinson
https://vercel.com/changelog/vercel-edge-functions-are-now-in-public-beta Vercel Edge Functions are now in public beta 2022-06-28T13:00:00.000Z

Edge Functions are now in public beta. Edge API Routes, which use Edge Functions, enable you to create high-performance APIs for use with any frontend framework. These functions use the same standard Web APIs as Edge Middleware, like Request, Response, and fetch.
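A minimal handler built only on those standard Web APIs might look like the sketch below. In a real project this would live in an API route file and be the default export; the route and names here are illustrative.

```typescript
// Minimal Edge API Route-style handler using only standard Web APIs
// (Request, Response, URL). Route and names are illustrative.
function greetingFor(url: string): string {
  const name = new URL(url).searchParams.get("name") ?? "world";
  return `Hello, ${name}!`;
}

function handler(req: Request): Response {
  return new Response(JSON.stringify({ greeting: greetingFor(req.url) }), {
    headers: { "content-type": "application/json" },
  });
}

// A request like GET /api/hello?name=Vercel produces
// {"greeting":"Hello, Vercel!"}.
const res = handler(new Request("https://example.com/api/hello?name=Vercel"));
```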

Check out the documentation to get started.

Read more

Javi Velasco Malte Ubl Amy Burns
https://vercel.com/changelog/vercel-edge-middleware-is-now-generally-available Vercel Edge Middleware is now generally available 2022-06-28T13:00:00.000Z

Edge Middleware for Next.js is now generally available on all plans. Edge Middleware allows you to run code on Vercel's Edge Network before the request is processed on your site. Middleware runs on the Vercel Edge network and can be used to handle A/B testing, geolocation, authentication, and more.

Learn how to get started with Vercel Edge Middleware.

Read more

Javi Velasco
https://vercel.com/blog/introducing-the-edge-runtime Introducing the Edge Runtime 2022-06-21T13:00:00.000Z

Vercel’s mission is to enable developers to build dynamic, global applications.

To enable every framework to build for the edge, we’re releasing edge-runtime: a toolkit for developing, testing, and defining the runtime web APIs for edge infrastructure.

Read more

Lee Robinson
https://vercel.com/changelog/headless-commerce-with-the-swell-integration Headless commerce with the Swell integration 2022-06-16T13:00:00.000Z

Swell is a headless ecommerce platform designed for performance and customization. You can now easily connect a Swell storefront to your Vercel project and deploy the next big thing.

Try out the integration and connect your storefront.

Read more

Cami Cano Darpan Kakadia Chris Widmaier
https://vercel.com/changelog/sunsetting-the-oauth2-integration-entrypoint Sunsetting the OAuth2 integration entrypoint 2022-06-14T13:00:00.000Z

With the introduction of API Scopes, the OAuth2 entrypoint is being sunset. Integrations using the OAuth2 entrypoint have until December 31, 2022 to migrate to the External mode installation flow to avoid any disruptions.

Check out the documentation for instructions.

Read more

Chris Widmaier Cami Cano
https://vercel.com/changelog/faster-and-more-reliable-global-propagation Faster and more reliable global propagation 2022-06-14T13:00:00.000Z

We've upgraded our infrastructure, resulting in significant performance and reliability improvements for all plans. Vercel's Edge infrastructure is now 70% faster at p99 for cache purges and configuration updates, serving over 25B requests per week.

Purges now propagate globally in ~300ms, regardless of the region the event originated from. These improvements impact all parts of the Vercel platform.

Deploy now to try our improved infrastructure.

Read more

Joe Haddad Jason Hoch Deniz Kusefoglu
https://vercel.com/blog/mongodb-and-vercel-from-idea-to-global-fullstack-app-in-seconds MongoDB and Vercel: from idea to global fullstack app in seconds 2022-06-13T13:00:00.000Z

Last week, I had the pleasure of joining Sahir Azam, MongoDB’s Chief Product Officer, on stage at MongoDB World in New York City. We announced the Vercel and MongoDB integration—and shared our vision for enabling developers to create at the moment of inspiration.

Read more

Guillermo Rauch
https://vercel.com/changelog/mongodb-atlas-integration Fullstack serverless with MongoDB Atlas integration 2022-06-07T13:00:00.000Z

The MongoDB Atlas integration allows you to connect a new or existing free Atlas database to your Vercel project in seconds. Go from idea to global application in a few clicks with serverless frontend and backend infrastructure.

Try out the integration or jumpstart your development with the MongoDB Starter.

Read more

Chris Widmaier Darpan Kakadia Cami Cano
https://vercel.com/changelog/enhanced-security-with-new-api-scopes-for-integrations Enhanced security with new API scopes for integrations 2022-06-06T13:00:00.000Z

Vercel Integrations now have improved API scopes to allow or restrict access to specific features of your Vercel account.

All new integrations are required to select which API scopes they need access to, and existing integrations are required to add scopes by July 31, 2022.

Check out the documentation as well.

Read more

Cami Cano Chris Widmaier Doug Parsons Florentin Eckl Darpan Kakadia
https://vercel.com/changelog/log-drains-now-support-log-source-selection Log drains now support log source selection 2022-06-02T13:00:00.000Z

Log drains can now be configured to send over select Vercel logs to providers for reduced log management costs and simplified monitoring setup. This new functionality is currently available for the Datadog integration.

Choose to send logs from the following sources:

  • static

  • lambda

  • edge

  • build

  • external

For information on log sources and how to add them to your integration, check out the documentation.

Read more

Darpan Kakadia Cami Cano
https://vercel.com/changelog/vercel-remote-cache-is-now-generally-available Vercel Remote Cache is now generally available 2022-06-01T13:00:00.000Z

Vercel Remote Cache is now generally available on all plans. Vercel Remote Cache can store and distribute build artifacts to make builds faster across teams and CI. This functionality is automatically provisioned for Turborepos on Vercel.

For more information and pricing check out the documentation.

Read more

Gaspar Garcia Greg Soltis Becca Zandstein Jared Palmer Valerie Downs Tom Knickman Meg Bird Amy Burns
https://vercel.com/changelog/increased-security-with-the-developer-and-billing-roles Better security with the developer and billing roles 2022-05-31T13:00:00.000Z

Enterprise Teams can now assign the developer and billing roles to users:

  • Developer Role: Allows Team owners to grant elevated permissions to users on a project-by-project basis. Developers can create deployments but are prevented from performing sensitive actions such as viewing production environment variables or promoting deployments to production.

  • Billing Role: Allows users to view invoices and edit billing settings while also providing read-only access to all Projects on a Team.

The developer and billing roles are now in public beta. Learn more about roles and permissions of Team members.

Read more

Jarryd McCree Ana Jovanova Miroslav Simulcik Dominik Weber Enric Pallerols Balazs Varga Valerie Downs Andy Schneider
https://vercel.com/changelog/node-js-12-is-being-deprecated Node.js 12 is being deprecated 2022-05-20T13:00:00.000Z

Following the release of Node.js 16 last week, Vercel is announcing the deprecation of Node.js 12, which reached its official end of life on April 30th 2022.

On October 3rd 2022, Node.js 12 will be disabled in the Project Settings and existing Projects that have Node.js 12 selected will render an error whenever a new Deployment is created. The same error will show if the Node.js version was configured in the source code.

While existing Deployments with Serverless Functions using the Node.js 12 runtime will not be affected, we strongly encourage upgrading to Node.js 16 to ensure you receive security updates (using either engines in package.json or the General page in the Project Settings).
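For example, pinning the runtime with the engines field mentioned above is a small addition to package.json:

```json
{
  "engines": {
    "node": "16.x"
  }
}
```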

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/changelog/astro-projects-can-now-be-deployed-with-zero-configuration Astro projects can now be deployed with zero configuration 2022-05-19T13:00:00.000Z

Vercel now automatically optimizes your Astro projects. When importing a new project, it will detect Astro and configure the right settings for optimal performance — including automatic immutable HTTP caching headers for JavaScript and CSS assets.

Deploy the Astro template to get started.

Read more

Lee Robinson
https://vercel.com/changelog/automatic-pnpm-v7-support Automatic pnpm v7 Support 2022-05-12T13:00:00.000Z

Vercel now supports pnpm v7. For deployments with a pnpm-lock.yaml file with version: 5.4, Vercel will automatically use pnpm v7 for install and build commands.

To upgrade your project to pnpm v7, run pnpm install -g pnpm@7 locally and then re-run pnpm install. After updating, create a new deployment!

Check out the documentation as well.

Read more

Ethan Arrowood Steven Salat
https://vercel.com/changelog/faster-builds-for-everyone Faster builds for everyone 2022-05-12T13:00:00.000Z

All Vercel customers will now experience faster build times.

We’ve made improvements to the Vercel infrastructure for customers across all plans:

  • 11 seconds faster (average)

  • 40 seconds faster (large projects)

  • 105 seconds faster (extra-large projects)

Deploy a template to get started.

Read more

Andrew Healey Luc Leray
https://vercel.com/changelog/hobby-customers-can-now-select-their-preferred-region-for-serverless Hobby customers can now select their preferred region for Serverless Functions 2022-05-12T13:00:00.000Z

Deploying your Serverless Functions in a region close to your data can greatly improve performance by reducing latency.

Previously, Hobby customers could only choose US East (iad1) regardless of where their data was hosted. Starting today, all plans can co-locate their Functions with their data for lower latency when server-rendering with hybrid frameworks like Next.js, SvelteKit, and more, or when using API Routes.

Check out the documentation as well.

Read more

Lee Robinson Matheus Fernandes
https://vercel.com/changelog/may-2022-papercuts Improvements and fixes 2022-05-10T13:00:00.000Z

With your feedback, we've shipped dozens of bug fixes and small feature requests to improve your product experience. Here are some of our most recent.

  • Vercel CLI: Vercel CLI v24.2.1 is now available, which includes support for Node.js 16.

  • Terms of Service (ToS): The Vercel ToS has been updated to provide additional information about what using a beta product or feature means.

  • Documentation improvements: The documentation for integrations now more clearly shows how to use webhooks, Log Drains, and Checks.

  • Improvements for URL filters: When you select a time period on the usage tab, the filter is persisted in the URL which makes sharing that specific view with your team easier.

  • Design improvements for the usage tab: If limits have been exceeded on the usage tab, the color of the usage meter changes to better indicate the current state.

  • Project settings improvements: Transferring or deleting a project is now available on the general settings for that project.

Let us know other opportunities where we can improve your experience with Vercel.

Read more

Steven Salat Becca Zandstein Tom Knickman Kylie Czajkowski
https://vercel.com/changelog/node-js-16-lts-is-now-available Node.js 16 LTS is now available 2022-05-09T13:00:00.000Z

As of today, version 16 of Node.js can be selected in the Node.js Version section on the General page in the Project Settings (newly created Projects will default to the new version).

The new version introduces several new features including:

  • ECMAScript RegExp Match Indices

  • AbortController

  • AggregateError

  • Array.prototype.at()

  • require('crypto').webcrypto

  • require('timers/promises')

  • fs.cp()

The exact version used today is 16.14.0; minor and patch releases will be applied automatically, so only the major version (16.x) is guaranteed.

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/changelog/self-serve-delegation-of-subdomains Self-serve delegation of subdomains 2022-05-02T13:00:00.000Z

If you host multiple subdomains on Vercel across separate accounts, you can now verify ownership of those subdomains in a self-serve manner via the Vercel Dashboard and API. Adding a subdomain to a project no longer requires the apex domain. Ownership is established via a token that is generated when the subdomain is added to a project and published in the domain owner’s DNS records. This change makes it easier to share domains for Platforms, teams, and collaborators on Vercel.

To learn more, check out the UI docs or REST API docs to add a domain to a project and verify that domain if needed.
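For illustration, the published verification record looks roughly like the TXT record below. The record name, TTL, and value format are a sketch; the dashboard shows the exact record to add for your domain.

```
_vercel.example.com.  300  IN  TXT  "vc-domain-verify=app.example.com,<generated-token>"
```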

Read more

Mark Glagola Casey Gowrie Joe Haddad Agustin Falco Ethan Arrowood
https://vercel.com/changelog/faster-build-times-for-monorepos Faster builds for monorepos 2022-04-29T13:00:00.000Z

New and existing monorepos deployed to Vercel will experience faster builds.

Vercel now automatically caches node_modules recursively when installing dependencies during the build process. ENABLE_ROOT_PATH_BUILD_CACHE=1 will be set as a default environment variable on all new and existing monorepo projects. For large monorepos, this can decrease build times by minutes.

Check out the docs as well.

Read more

Ethan Arrowood Steven Salat Jared Palmer
https://vercel.com/changelog/python-3-6-is-being-deprecated Python 3.6 is being deprecated 2022-04-29T13:00:00.000Z

Following the release of Python 3.9, Vercel is deprecating support for Python 3.6 which reached end of life last year.

On July 18th 2022, new deployments targeting Python 3.6 will fail with an error message. Existing deployments will be unaffected.

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/blog/how-hashicorp-developers-iterate-faster-with-isr How HashiCorp developers iterate faster with Incremental Static Regeneration 2022-04-26T13:00:00.000Z

Incremental Static Regeneration (ISR) dramatically reduces build times, allowing developers to deliver faster changes and better site performance. With Next.js 12.1, we’ve now introduced on-demand ISR, our most requested feature by developers shipping large-scale projects.

Bryce Kalow, a senior web engineer at HashiCorp, met with us to explain how HashiCorp's engineers use ISR and on-demand ISR to iterate quickly—while maintaining flexible sites and apps.

Read more

Bryce Kalow Greta Workman
https://vercel.com/changelog/axiom-is-joining-the-vercel-marketplace Axiom is joining the Vercel marketplace 2022-04-26T13:00:00.000Z

The Axiom integration enables you to monitor the health and performance of your Vercel deployments by ingesting all your request, function, and web vitals data. Use Axiom's pre-built dashboard for an overview across all your Vercel logs and vitals, drill down to specific projects and deployments, and get insight on how functions are performing with a single click.

Try out the integration and start streaming your logs.

Read more

Cami Cano Darpan Kakadia
https://vercel.com/changelog/deployment-filters-for-project-dashboard Deployment filters for project dashboard 2022-04-25T13:00:00.000Z

Find your essential deployments instantly with status and branch filters.

From the deployments tab, you can search for branches matching deployments you’re interested in. You can also filter by status, with canceled deployments filtered out automatically, making the view more useful at a glance. Both status and branch filters are persisted via the URL, so you can quickly share context with your team.

Check out the project dashboard documentation for more information.

Read more

Christopher Skillicorn Becca Zandstein Mark Glagola Tom Knickman Nate Wienert
https://vercel.com/changelog/deploy-to-vercel-from-terraform Deploy to Vercel from Terraform 2022-04-21T13:00:00.000Z

With the Vercel Provider, now verified in the Terraform Registry, you can configure and deploy Vercel projects alongside your back-end services from Terraform’s infrastructure as code (IaC) software tool. By codifying cloud infrastructure and frontend deployment into a single workflow, your team can provision, preview, and ship applications faster.

Check out our guide to get started.

Read more

Doug Parsons Cami Cano Maedah Batool Ismael Rumzan
https://vercel.com/changelog/improved-formatting-for-pull-request-comments Improved formatting for Pull Request comments 2022-04-14T13:00:00.000Z

We've updated the design of comments from Vercel on pull requests.

Vercel automatically deploys your projects and creates Preview Deployments when integrated with GitHub, GitLab, and Bitbucket. The updated comment design makes it easier to see deployment statuses and quickly navigate to Preview Deployments. The table design improves the monorepo experience where multiple Vercel deployments are shown in a single pull request.

Check out the documentation as well.

Read more

Ernest Delgado Becca Zandstein
https://vercel.com/changelog/april-2022-papercuts Improvements and fixes 2022-04-14T13:00:00.000Z

With your feedback, we've shipped dozens of bug fixes and small feature requests to improve your product experience. Here are some of our most recent.

  • Design papercuts: We’ve polished hover states, loading states, padding and margins for dashboard elements, table scrolling on mobile, and other subtle mobile styling improvements.

  • Copy & paste build logs easily: You can now easily copy the entire build output to your clipboard.

  • Accessibility improvements: vercel.com and nextjs.org have improved color contrast, better semantic HTML elements, and other small fixes.

  • Improved integration installations: Searching for teams is now more accurate while installing integrations.

  • Public project deployment author attribution: When viewing deployments for a public project, we now show the author of the deployment in the navbar instead of the currently logged-in user.

Let us know other opportunities where we can improve your experience with Vercel.

Read more

Shaziya Bandukia Ernest Delgado
https://vercel.com/changelog/increased-security-with-view-only-permissions Increased security with view-only permissions 2022-04-13T13:00:00.000Z

Enterprise users can now be assigned a viewer role, providing increased security with view-only permissions. The viewer role enables members to view and collaborate on projects while preventing them from editing any team or project settings.

Viewer role is in public beta. Learn more about roles and permissions of Team members.

Read more

Ana Jovanova Andy Schneider Balazs Varga Valerie Downs Christopher Skillicorn Jarryd McCree Enric Pallerols Miroslav Simulcik Dominik Weber
https://vercel.com/changelog/projects-using-pnpm-can-now-be-deployed-with-zero-configuration Projects using pnpm can now be deployed with zero configuration 2022-03-22T13:00:00.000Z

Projects using pnpm can now be deployed to Vercel with zero configuration. Vercel is also now sponsoring pnpm to further package manager innovation.

Like Yarn and npm, pnpm is a package manager focused on saving disk space and boosting installation speed by utilizing symlinks. Starting today, Projects that contain a pnpm-lock.yaml file will automatically run pnpm install as the default Install Command using the latest version of pnpm.

Check out the documentation as well.

Read more

Ethan Arrowood Jared Palmer Steven Salat
https://vercel.com/changelog/filters-are-persisted-for-vercel-analytics Filters are persisted for Vercel Analytics 2022-03-22T13:00:00.000Z

Filters in Vercel Analytics are now persisted as URL parameters, so you can share and bookmark a specific filter state.

Check out the documentation as well.

Read more

Shaziya Bandukia Ernest Delgado
https://vercel.com/blog/upgrading-nextjs-for-instant-performance-improvements Upgrading Next.js for instant performance improvements 2022-03-17T13:00:00.000Z

Since the release of Next.js, we’ve worked to introduce new features and tools that drastically improve application performance, as well as overall developer experience. Let’s take a look at what a difference upgrading to the latest version of Next.js can make.

Read more

Lydia Hallie
https://vercel.com/changelog/access-tokens-can-now-be-scoped-to-teams Access tokens can now be scoped to teams 2022-03-15T13:00:00.000Z

Access tokens used in the CLI and for authenticating APIs can now be scoped to specific Teams.

This improvement provides additional security and controls for those extending the Vercel platform through our CLI or our APIs.

Check out the documentation as well.

Read more

Gaspar Garcia
https://vercel.com/blog/monorepos Monorepos are changing how teams build software 2022-03-03T13:00:00.000Z

The largest software companies in the world use monorepos. But historically, adopting a monorepo at anything other than Facebook or Google scale was difficult, time-consuming, and often filled with headaches.

Since Turborepo joined Vercel, we've seen development teams of all sizes adopt Turborepo for faster builds, saving over 200 days' worth of time by remotely caching their deployments on Vercel.

Read more

Lee Robinson
https://vercel.com/changelog/remote-cache-api-for-monorepo-build-tools-is-now-in-public-beta Remote Cache API for monorepo build tools is now in public beta 2022-03-01T13:00:00.000Z

Monorepo build systems like Turborepo are able to leverage Vercel's infrastructure to remotely cache build artifacts using our Remote Cache API. Turborepo uses this API out-of-the-box to store build artifacts and make builds more efficient.

This API is now available as a public beta. To learn more about using the Remote Cache API please read our documentation here.

Read more

Jared Palmer Gaspar Garcia Greg Soltis Becca Zandstein
https://vercel.com/changelog/visualize-time-saved-using-turborepo-with-remote-caching Visualize time saved using Turborepo with Remote Caching 2022-02-28T13:00:00.000Z

When using monorepo build tools like Turborepo, Vercel automatically caches build artifacts remotely for faster, more efficient builds. The usage dashboard now highlights time saved for your team's projects using Remote Caching. You can visualize data based on whether the cache was local or remote, as well as per project.

Check out the documentation to get started with Remote Caching.

Read more

Jared Palmer Gaspar Garcia Greg Soltis Becca Zandstein
https://vercel.com/changelog/enterprise-customers-can-now-transfer-projects Enterprise customers can now transfer projects 2022-02-24T13:00:00.000Z

Enterprise customers can now transfer projects to other Vercel accounts.

This makes it easier for Enterprise teams to move projects between different accounts when ownership changes.

Check out the documentation as well.

Read more

Matthew Sweeney
https://vercel.com/changelog/next-js-12-1 Next.js 12.1 is now available 2022-02-17T13:00:00.000Z

We're excited to release one of our most requested features with Next.js 12.1:

  • On-demand ISR (Beta): Revalidate pages using getStaticProps instantly.

  • Expanded Support for SWC: styled-components, Relay, and more.

  • next/jest Plugin: Zero-configuration Jest support using SWC.

  • Faster Minification with SWC (RC): 7x faster minification than Terser.

  • Self-Hosting Improvements: ~80% smaller Docker images.

  • React 18 & Server Components (Alpha): Improved stability and support.

  • Developer Survey: Help us improve Next.js with your feedback.

Starting today when deployed to Vercel, on-demand revalidation propagates globally in ~300ms when pushing pages to the edge. Read the 12.1 blog post to learn more.
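On-demand revalidation is triggered from an API route that calls the new revalidate method on the response object. Here is a minimal sketch assuming the 12.1 beta API, where the method shipped as res.unstable_revalidate; the REVALIDATE_SECRET guard and the /blog/post-1 path are illustrative assumptions, not part of the feature:

```javascript
// pages/api/revalidate.js — a sketch of on-demand ISR as introduced in
// Next.js 12.1 (the method shipped as res.unstable_revalidate during
// the beta). The REVALIDATE_SECRET check is an illustrative guard, not
// part of the API itself.
async function handler(req, res) {
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    // Regenerate the static page for this path on the next request.
    await res.unstable_revalidate('/blog/post-1');
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).send('Error revalidating');
  }
}

module.exports = handler;
```

Calling this route after publishing new content regenerates the page without waiting for a timed revalidation interval.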

Read more

Peter Yoakum
https://vercel.com/changelog/integrations-are-now-shown-in-activity-log Integrations are now shown in Activity Log 2022-02-17T13:00:00.000Z

Installing, uninstalling, and changing permissions of Integrations is now shown in the Activity Log.

Check out the Activity Log to see integrations data.

Read more

Kathy Korevec Dominik Ferber Cami Cano
https://vercel.com/changelog/vercel-cli-v24 Vercel CLI v24 is now available 2022-02-17T13:00:00.000Z

Version 24 of the Vercel CLI has been released, including many improvements and bug fixes, as well as the new vercel bisect command:

  • Added new command vercel bisect: Inspired by the git bisect command, this new command helps identify in which Deployment a bug was introduced.

  • Added support for the --project flag in vercel link.

  • Removed support for single file deployments.

  • vercel dev is now stable (no longer in beta).

  • Refactored most of the CLI source code to TypeScript.

This is a major version bump and includes some breaking changes, most of which are the final removal of features that have been deprecated for years. Read the full changelog carefully before updating.

Read more

Nathan Rajlich Steven Salat Lindsey Simon Greta Workman
https://vercel.com/changelog/remote-cache-usage-graphs Remote Cache Usage graphs 2022-02-16T13:00:00.000Z

Remote Caching for Monorepo tools now includes usage graphs for your team, including:

  • Total number of artifacts uploaded or downloaded

  • Total size of artifacts successfully uploaded or downloaded

Monorepo tools like Turborepo can now use Remote Caching on Vercel with zero configuration. Check out the documentation.

Read more

Jared Palmer Gaspar Garcia Greg Soltis Christopher Skillicorn
https://vercel.com/changelog/schema-autocomplete-and-validation-for-vercel-json Schema autocomplete and validation for vercel.json 2022-02-04T13:00:00.000Z

You can now add autocompletion, type checking, schema validation, and in-editor documentation to any vercel.json file.

Add https://openapi.vercel.sh/vercel.json as the $schema key at the top of your file. The schema file is autogenerated, similar to our automatic REST API documentation.
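For example, a vercel.json opting into the schema might look like this (the redirect rule is just an illustrative entry):

```json
{
  "$schema": "https://openapi.vercel.sh/vercel.json",
  "redirects": [
    { "source": "/old-path", "destination": "/new-path", "permanent": true }
  ]
}
```

With the $schema key in place, editors that understand JSON Schema will flag typos and suggest valid keys as you type.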

Check out the documentation as well.

Read more

Gal Schlezinger Javi Velasco
https://vercel.com/changelog/railway-integration-postgres-redis-mysql Connect your database with the Railway integration 2022-02-03T13:00:00.000Z

The Railway integration connects your Postgres, Redis, and MySQL databases hosted on Railway with your Vercel project. Instantly provision and deploy your backend infrastructure with Railway, then integrate it with your Vercel frontend in seconds.

Try out the integration and connect your database.

Read more

Chris Widmaier Cami Cano
https://vercel.com/blog/how-the-web-evolves The evolution of the Web: What we learned and where we’re going 2022-02-02T13:00:00.000Z

From open source to a more powerful edge, see our predictions for the future of frontend development—featuring experts in React, Next.js, Svelte, and more.

Read more

Guillermo Rauch Kathy Korevec
https://vercel.com/changelog/unlimited-custom-domains-for-all-pro-teams Unlimited custom domains for all Pro teams 2022-01-24T13:00:00.000Z

You can now add unlimited Custom Domains to your project on a Pro team. This enables creators, entrepreneurs, and platforms on Vercel to create the next big thing. To help enable this, we've created a Platforms Starter Kit.

Try out the demo and create your own platform.

Read more

Lee Robinson
https://vercel.com/changelog/checkly-integration-and-checks-api-now-generally-available Checkly Integration and Checks API now generally available 2022-01-18T13:00:00.000Z

With the Vercel Checkly Integration, monitor the Core Web Vitals of your site on every build before it gets deployed so that your performance never degrades.

This integration can be installed from the Integration Marketplace or Status view, and comes with rich functionality out-of-the-box. You can now:

  • Run reliability and performance checks on preview and production

  • Automatically block your build when checks fail

  • Get deep insights such as web vitals and error logs

This Checkly Integration is built using our new Checks API which allows you to insert validation and status checks after a deployment is built but before it is released to production.

Read more

Cami Cano Christopher Skillicorn Chris Widmaier Brody McKee Doug Parsons Dominik Ferber
https://vercel.com/changelog/papercuts-small-feature-requests-and-bug-fixes Papercuts, small feature requests, and bug fixes 2021-12-23T13:00:00.000Z

With your feedback, we've shipped dozens of bug fixes and small feature requests to improve your product experience:

  • iCloud DNS Email Preset:

    Easily add DNS records to your domain to allow email forwarding.

  • Syntax Highlighting for Source Files:

    When viewing contents of a Deployment, source files now have syntax highlighting based on the code language.

  • Easier Domain Assignment:

    When adding new custom domains, you can now press Enter to save the input instead of clicking save.

  • Fixed Sticky Tabs in Safari:

    The dashboard navigation now properly hides items after scrolling in Safari.

  • Safer Domain Removal:

    When removing a domain from your project, we’ve removed the option to also remove all subdomains based on customer feedback.

  • Full Timestamps for Activity Log:

    When viewing the Activity Log for your team, you can now hover over the relative timestamp to view the full date and time.

  • Improved Team Member Search:

    When trying to add members to a large team, searching now happens server-side to prevent needing to click load more.

  • New Confirmation Modals:

    When modifying Git Fork Protection and Log and Source Protection, confirmation is now required to save changes.

  • Team Navigation shows Plan:

    Changing between your Personal Account and Teams now clearly shows whether the selected account is on the Hobby, Pro, or Enterprise plan, and whether billing is overdue.

Let us know other opportunities where we can improve your experience with Vercel.

Read more

Paco Coursey Christopher Skillicorn Lee Robinson Kathy Korevec Ana Jovanova Shu Uesugi
https://vercel.com/changelog/easily-manage-custom-nameservers-for-domains Easily manage custom nameservers for domains 2021-12-22T13:00:00.000Z

You can now more easily add custom nameservers to your Vercel hosted domain, allowing for delegation to other DNS providers. Add up to four nameservers at once, and revert to your previous settings if necessary.

Check out the documentation as well.

Read more

Sam Ko Holden Altaffer
https://vercel.com/changelog/log4j-vulnerability Vercel Security: Response to Log4j Vulnerability 2021-12-21T13:00:00.000Z

Recently, a series of security vulnerabilities were discovered in the popular logging utility Log4j.

As with any emerging threat, details are still coming to light and investigations are ongoing, but we have not presently identified any use of Log4j in our environment that would make us or our customers susceptible to the detailed exploit. Additionally, we're working with our third-party providers to ensure that they have patched or will patch instances of this vulnerability according to their highest criticality timelines.

Our internal teams are working closely with our external security services provider to monitor our environment. We will send follow-up communications to customers as, and when, appropriate.

Please feel free to reach out to us at [email protected] if you have any questions or concerns.

Read more

Guillermo Rauch
https://vercel.com/changelog/usage-overview-project-grouping Usage Overview project grouping 2021-12-16T13:00:00.000Z

To help provide better insight into your account's resource usage, the Usage Overview now gives you the ability to view data grouped by your top 4 projects, in addition to grouping the charts by count or ratio.

Check out the documentation here.

Read more

Kathy Korevec Christopher Skillicorn Cody Brouwers
https://vercel.com/blog/the-future-of-svelte-an-interview-with-rich-harris The future of Svelte, an interview with Rich Harris 2021-12-15T13:00:00.000Z

Svelte has been voted the most loved Web framework with the most satisfied developers.

In this 45-minute interview with Lee Robinson, hear Rich Harris, the creator of Svelte, talk about his plans for the future of the framework. Other topics include funding open source, SvelteKit 1.0, the Edge-first future, and more.

Read more

Lee Robinson Rich Harris
https://vercel.com/blog/supporting-the-future-of-react Supporting the Future of React 2021-12-14T13:00:00.000Z

React is one of the most popular ways to build user interfaces. Many of the world's largest enterprises and newest startups are building their online presence with it, pushing demand for React developers, improvements to React, and learning resources to an all-time high.

Read more

Guillermo Rauch
https://vercel.com/blog/vercel-acquires-turborepo Vercel acquires Turborepo to accelerate build speed and improve developer experience 2021-12-09T13:00:00.000Z

We're thrilled to announce our acquisition of Turborepo to join us on our mission to make the Web. Faster.

Read more

Guillermo Rauch Jared Palmer
https://vercel.com/changelog/turborepo-open-source-cli-and-remote-caching-on-vercel Turborepo Open-Source CLI and Remote Caching on Vercel 2021-12-09T13:00:00.000Z

Turborepo, a high-performance build system for JavaScript and TypeScript codebases, is joining Vercel. Starting today, the Turborepo CLI is open source and available for anyone to use.

Turborepo reduces build times by providing:

  • Intelligent caching

  • Content-aware hashing

  • Remote caching (beta)

  • Parallel execution

  • Task pipelines

Get started with Turborepo today.

Read more

Guillermo Rauch Jared Palmer
https://vercel.com/changelog/remix-projects-can-now-be-deployed-with-zero-configuration Remix projects can now be deployed with zero configuration 2021-11-25T13:00:00.000Z

Vercel now automatically optimizes your Remix projects. When importing a new project, it will detect Remix and configure the right settings for you — including automatic immutable HTTP caching headers for JavaScript and CSS assets.

Get started by deploying the Remix template or running npx create-remix@latest, selecting Vercel, and then deploying with npx vercel.

Check out the documentation as well.

Read more

Lee Robinson Leo Lamprecht
https://vercel.com/blog/vercel-funding-series-d-and-valuation Announcing $150M to build the end-to-end platform for the modern Web 2021-11-23T13:00:00.000Z

Our mission is to make the Web. Faster.

We're excited to announce $150 million in Series D funding at a valuation of over $2.5 billion. We'll use this funding to accelerate how we build the end-to-end platform for the modern Web.

Read more

Guillermo Rauch Kevin Van Gundy Ashley Mcenery
https://vercel.com/changelog/automatic-rest-api-documentation-with-openapi Automatic REST API documentation with OpenAPI 2021-11-23T13:00:00.000Z

Our REST API documentation is now automatically updated with changes to our API.

Developers can use our REST API to extend the Vercel platform and programmatically augment their workflows. After every change to our API repository, a new version of the documentation is automatically generated. This includes all API endpoints and the correct fields and types used. Starting today, more endpoints are listed and all parameters are now documented.

Check out the improved documentation.

Read more

Paco Coursey Nathan Rajlich Javi Velasco Ana Jovanova Shu Ding Naoyuki Kanezawa
https://vercel.com/changelog/python-3-9-is-now-available Python 3.9 is now available 2021-11-23T13:00:00.000Z

As of today, new Deployments using Python Serverless Functions will use version 3.9 and the legacy version 3.6 is being deprecated.

If you need to continue making Deployments using Python 3.6, ensure your Pipfile and corresponding Pipfile.lock have python_version set to 3.6 exactly.
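A Pipfile pinned this way would contain a requires section like the following sketch:

```toml
[requires]
python_version = "3.6"
```

Regenerating the lockfile (for example with pipenv lock) carries the pin into the corresponding Pipfile.lock.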

Python 3.6 will reach end of life in December 2021. Before we completely remove support, we will make another announcement with the exact sunset date.

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/changelog/ip-geolocation-now-available-for-all-plans IP Geolocation now available for all plans 2021-11-12T13:00:00.000Z

IP Geolocation is now available on all plans (Hobby, Pro, and Enterprise) for requests received by Edge and Serverless Functions.

In this Edge Functions example, you can determine a user's location.
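As a sketch of what such a function can do, the following Node.js Serverless Function reads the x-vercel-ip-country and x-vercel-ip-city request headers that Vercel sets on incoming requests; the api/geo.js path is illustrative:

```javascript
// api/geo.js — a minimal sketch: echo the visitor's location using
// Vercel's IP geolocation request headers.
function handler(req, res) {
  const country = req.headers['x-vercel-ip-country'] || 'unknown';
  const city = req.headers['x-vercel-ip-city'] || 'unknown';
  res.status(200).json({ country, city });
}

module.exports = handler;
```

The headers arrive pre-populated on every request, so no geolocation lookup is needed in your own code.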

Check out our documentation to learn more.

Read more

Matheus Fernandes Connor Davis
https://vercel.com/blog/vercel-welcomes-rich-harris-creator-of-svelte Vercel welcomes Rich Harris, creator of Svelte 2021-11-11T13:00:00.000Z

Today, we're excited to share that Rich Harris, the creator of Svelte, has joined Vercel to help us in our mission to make the Web. Faster.

Read more

Guillermo Rauch
https://vercel.com/changelog/edge-functions-are-now-available-in-public-beta Edge Functions are now available in Public Beta 2021-10-26T13:00:00.000Z

Edge Functions are now available in Public Beta. They allow developers to deliver fast, personalized content by serving the exact end-user experience they're imagining, every time. Edge Functions have instant cold boots, support streaming, and are deployed globally by default.

To get started with Edge Functions, create a _middleware.js file in the pages/ directory for Next.js 12+ or on the root of any Vercel project. Middleware enables Edge authentication, bot protection, feature flags, A/B testing, server-side analytics, logging, and more.
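The kind of per-request decision a middleware makes can be sketched framework-free. In a real project this logic would live in pages/_middleware.js and return NextResponse values; the /beta route and "flag" cookie below are purely hypothetical examples:

```javascript
// A framework-free sketch of middleware-style logic: inspect the
// request and decide whether to redirect or continue. The /beta
// feature gate and "flag" cookie are hypothetical examples.
function decide(request) {
  if (request.path.startsWith('/beta') && request.cookies.flag !== 'on') {
    return { action: 'redirect', to: '/' }; // gate the beta area
  }
  return { action: 'next' }; // let the request through
}

module.exports = decide;
```

Because this runs at the edge before the cache, the same pattern covers authentication, bot protection, and A/B bucketing.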

Read about deploying Edge Functions in the documentation, and check out the examples.

Read more

Kathy Korevec Javi Velasco Matheus Fernandes
https://vercel.com/changelog/frankfurt-germany-is-now-available-on-the-edge-network Frankfurt (Germany) is now available on the Edge Network 2021-09-21T13:00:00.000Z

The Edge just got even more powerful. We're excited to introduce Frankfurt (Germany) as a new region on the Edge Network, allowing you to serve up faster experiences to customers and visitors all over central Europe.

Static files and the responses from your Serverless Functions are now automatically cached in this region. Furthermore, Pro and Enterprise Teams can choose the region for running Serverless Functions on the respective page in the Project Settings.

Check out the documentation as well.

Read more

Matheus Fernandes
https://vercel.com/blog/at-next-js-conf-2021-lets-make-the-web-faster At Next.js Conf 2021, let’s make the Web. Faster. 2021-09-20T13:00:00.000Z

The next Web is faster, more collaborative, more personalized, and built by you. We’re throwing a party for Next.js as it turns 5—have you claimed your ticket?

Register now at nextjs.org/conf

Read more

Kathy Korevec Tim Neutkens
https://vercel.com/changelog/quickly-navigate-the-dashboard-with-shortcuts Quickly navigate the Dashboard with shortcuts 2021-09-09T13:00:00.000Z

The ⌘K shortcut on your keyboard now lets you navigate straight to a particular area of the dashboard and take important actions without using the dashboard menu.

Among many other ways of saving you time, the new Command Menu lets you:

  • Navigate to a specific Project, or even Deployments within that Project

  • Search the documentation for helpful information

  • Invite others to join your currently selected Vercel Team

  • Create new Projects from an existing Git repository or Template

Check out the documentation as well.

Read more

Rauno Freiberg Paco Coursey Rizwana Akmal Khan Christopher Skillicorn
https://vercel.com/changelog/request-access-to-teams-right-from-the-dashboard Request access to Teams right from the Dashboard 2021-08-26T13:00:00.000Z

To join a Team on Vercel, one of its owners can invite you from the Team Settings on the dashboard (or, in the case of SAML, add you to the respective Directory Sync provider).

In addition, the platform now also suggests Teams that you can request access to based on:

  • The Git namespaces (e.g. GitHub organizations) that any Git Login Connections associated with your Personal Account have access to.

  • The email domain associated with your Personal Account.

Suggested Teams will appear on a new Teams page in your Personal Account Settings (which also provides an overview of the Teams that you're already a part of) and in the scope selector on the top left of the dashboard.

Check out the documentation as well.

Read more

Brody McKee Rauno Freiberg Paco Coursey Christopher Skillicorn
https://vercel.com/changelog/sunsetting-ui-hooks-and-legacy-webhooks Sunsetting UI-Hooks and Legacy Webhooks 2021-08-20T13:00:00.000Z

As previously announced (on May 25th, 2021), Vercel will be removing UI Hooks for integrations.

UI Hooks have already become unavailable for newly created Integrations, but they will also be removed from all existing Integrations, meaning that:

  • Integrations with UI Hooks can't be installed anymore.

  • Integration UI Hooks will no longer be shown on the Dashboard.

  • The respective configuration field will be removed from the Integration Console.

  • The API endpoint /v1/integrations/configuration/:id/metadata will become unavailable.

Furthermore, we also deprecated the manual webhook creation through our API. See our previous announcement about this change. This means that:

  • The API endpoint /v1/integrations/webhooks will become unavailable.

  • The API endpoint /v1/integrations/webhooks/:id will become unavailable.

  • DELETE requests to the configured generic Webhook URL will no longer be sent.

Check the updated documentation to learn more about upgrading your Integration.

Read more

Chris Widmaier Kathy Korevec
https://vercel.com/changelog/projects-can-now-be-transferred-to-personal-accounts Projects can now be transferred to Personal Accounts 2021-08-12T13:00:00.000Z

As of today, you can transfer projects from Teams to Personal Accounts with no workflow interruptions or downtime.

This is useful when your project no longer requires the functionality offered by Vercel Teams, like working with collaborators.

Previously, it was only possible to transfer Vercel projects from Personal Accounts to Teams when a need for collaboration and more advanced functionality arose.

Check out the documentation as well.

Read more

Paco Coursey Nathan Rajlich Christopher Skillicorn
https://vercel.com/changelog/new-slack-integration New Slack Integration 2021-08-11T13:00:00.000Z

We’re happy to share that Vercel's new and improved Slack Integration is now ready for use.

It can be installed from the Integration Marketplace and comes with a number of advantages over the previous Slack Integration. You can now:

  • Choose specific events you’re interested in.

  • Send all events to one Slack channel or to multiple on a per-project basis.

  • Send events to private Slack channels.

If you're currently using the previous legacy Slack Integration (you can tell by opening the Integration's configuration view from your Personal Account or Team dashboard), please remove it and add the new one, as the legacy Integration will stop working on August 20th.

Furthermore, if you're using the legacy Integration, you will also receive a notification directly on your Slack channel about the next steps you can take to upgrade to its latest version.

Read more

Chris Widmaier
https://vercel.com/changelog/a-new-dashboard-overview-is-now-available A new dashboard overview is now available 2021-08-06T13:00:00.000Z

Your Vercel dashboard provides you with an overview of Projects available within your Personal Account or Team, and what their status is.

Today we're excited to announce a completely overhauled dashboard overview with a number of improvements. You can now...

  • Search for a particular Project.

  • See which change is currently available in Production for your Projects.

  • Easily find the Project you're looking for by framework icon or favicon.

  • Navigate to Projects via the cards and hover to quickly visit Production on desktop.

In addition, the header and activity stream were removed to help you get to what matters faster.

Read more

Paco Coursey George Karagkiaouris Christopher Skillicorn
https://vercel.com/changelog/version-7-of-npm-is-now-supported Version 7 of npm is now supported 2021-08-06T13:00:00.000Z

Vercel will now automatically detect whether your Project's dependencies were added with version 7 of the npm CLI, based on the presence of the latest lockfile format.

If detected, Vercel will automatically switch to using npm v7 to install your Project's dependencies within the Build Step.
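Detection keys off the lockfile format: a package-lock.json written by npm v7 declares lockfileVersion 2, as in this trimmed sketch:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {}
}
```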

This means that, among many bug fixes in the latest version of npm, your Deployments can now make use of npm v7's new features, such as workspaces and automatically installed peer dependencies.

Check out the full release notes and the documentation as well.

Read more

Nathan Rajlich Kaitlyn Carter
https://vercel.com/changelog/customizing-the-install-command-while-creating-projects Customizing the Install Command while creating Projects 2021-08-06T13:00:00.000Z

When importing a Git repository into Vercel, your Project's dependencies used to automatically be installed using either Yarn or npm, depending on your code. Selecting a different package manager such as pnpm was only possible after the Project was already deployed.

As of today, however, you can configure your custom Install Command even before the first Deployment for your new Project is created.

This also comes in handy for passing custom options to the yarn or npm install commands, since you can simply place the command of your choice in the "Install Command" field.

Check out the documentation as well.

Read more

Ana Jovanova
https://vercel.com/changelog/get-started-faster-with-the-new-project-creation-flow Get started faster with the new Project Creation Flow 2021-07-28T13:00:00.000Z

Creating a new project on the dashboard now takes you through a new, simpler flow:

  • New projects will default to your currently selected Personal Account or Team, instead of prompting you to pick one every time.

  • Everything now happens on a single view, instead of making you navigate through four different pages every time.

These improvements make it easier to import a Git repository or start fresh from a Template. Try it out by clicking the New Project button on your dashboard!

If you'd like to integrate our new Project Creation Flow into your own application or Git repository, check out the documentation as well.

Read more

Ana Jovanova Brody McKee Luc Leray Rauno Freiberg Naoyuki Kanezawa Christopher Skillicorn Evil Rabbit
https://vercel.com/changelog/sveltekit-projects-can-now-be-deployed-with-zero-configuration SvelteKit projects can now be deployed with zero configuration 2021-07-27T13:00:00.000Z

Vercel now automatically optimizes your SvelteKit projects. When importing a new project, it will detect SvelteKit and configure the right settings for you.

In addition, System Environment Variables are made available under the SVELTEKIT_ prefix by default and you can now easily start new SvelteKit projects from the dashboard.

Check out the documentation as well.

Read more

Lee Robinson
https://vercel.com/changelog/integrations-can-now-be-managed-more-efficiently Integrations can now be managed more efficiently 2021-07-16T13:00:00.000Z

Following our launch of Vercel's new Integration Marketplace, we're now making it easier to manage Integrations on the dashboard.

  • Before: Navigating to the "Integrations" tab of a Personal Account or Team would list all previously added Integrations and all their different configurations.

  • After: Every Integration is listed only once per Personal Account or Team, and per-project configurations are handled by the respective third party instead.

Legacy Integrations, as mentioned at the bottom of the dashboard, will disappear on August 20th. Integration authors will be able to upgrade them until then.

Check out our updated documentation to learn how to create your own Integration.

Read more

Shu Uesugi Chris Widmaier Christopher Skillicorn
https://vercel.com/changelog/removing-domains-is-now-much-easier Removing Domains is now much easier 2021-07-16T13:00:00.000Z

Starting today, removing Domains in the Project Settings now also optionally removes them globally, for all other Projects on your Personal Account or Team.

Before, this required navigating to the global Domains page on your Personal Account or Team.

Check out the documentation as well.

Read more

Rizwana Akmal Khan Matthew Sweeney
https://vercel.com/changelog/vite-projects-can-now-be-deployed-with-zero-configuration Vite projects can now be deployed with zero configuration 2021-07-13T13:00:00.000Z

Vercel now automatically optimizes your Vite projects. When importing a new project, it will detect Vite and configure the right settings for you.

In addition, System Environment Variables are made available under the VITE_ prefix by default and you can now easily start new Vite projects from the dashboard.

Check out the documentation as well.

Read more

Lee Robinson
https://vercel.com/changelog/saml-single-sign-on-and-directory-sync-now-fully-available SAML Single Sign-On and Directory Sync now fully available 2021-07-09T13:00:00.000Z

Enterprise Teams can now use their identity provider to log into and sign up to Vercel with SAML Single Sign-On. Popular identity providers such as Okta, Google, Auth0, OneLogin, and Azure are supported and can be configured in the Security section of your Team Settings.

For additional security, SAML can be enforced. This requires team members to authenticate with your identity provider for all interactions with the Vercel Team.

Lastly, the related Directory Sync feature allows Enterprise teams to automatically sync users from a directory provider (Okta, Google, Azure, and generic SCIM providers are supported), add or remove them from the Team, and issue Vercel Personal Accounts as needed.

Contact Sales to upgrade to an Enterprise plan, or check out the documentation.

Read more

Nathan Rajlich Paco Coursey Christopher Skillicorn
https://vercel.com/changelog/new-filters-and-metrics-available-on-the-usage-overview New filters and metrics available on the Usage Overview 2021-07-08T13:00:00.000Z

To provide better insight into the amount of resources a project uses on Vercel, new functionality was added to the Usage Overview:

  • Additional presets for the date range, and the ability to select any start and end date, allowing for a better understanding of how your usage has changed over time.

  • Usage metrics can now also be filtered by individual Serverless Functions, allowing you to identify which Functions don't perform optimally.

  • A new chart for Image Optimization shows the number of optimized source images, to provide insight into how Image Optimization is used across your projects.

Check out the documentation as well.

Read more

Andy Schneider Shu Ding Christopher Skillicorn
https://vercel.com/blog/welcoming-kathy-korevec-to-vercel-our-new-head-of-product Welcoming Kathy Korevec to Vercel, our new Head of Product 2021-07-07T13:00:00.000Z

Today, we’re excited to announce Kathy Korevec will be joining our leadership team at Vercel as Head of Product!

Read more

Guillermo Rauch
https://vercel.com/changelog/reworked-integrations-and-integrations-marketplace Reworked Integrations and Integrations Marketplace 2021-07-07T13:00:00.000Z

The Integrations Marketplace has been upgraded. With an improved design, it's now easier than ever to discover and install Integrations in one place. All listed Integrations have been reworked to maximize usability and deliver a stellar user experience.

Check out the new categories of Integrations on the Marketplace.

Check out our updated documentation to learn how to create your own Integration.

Read more

Chris Widmaier Shu Uesugi
https://vercel.com/blog/integrations-marketplace Supercharge your Vercel Projects with Integrations 2021-07-01T13:00:00.000Z

Today, we’re announcing our upgraded Integration Marketplace. We collaborated with partners to streamline installation and reduce configuration as much as possible, and gathered feedback from customers to increase visibility and confidence at every step of your development journey.

Read more

Chris Widmaier Shu Uesugi Jen Chang Bel Curcio
https://vercel.com/blog/series-c-102m-continue-building-the-next-web $102M to Continue Building the Next Web, Together 2021-06-23T13:00:00.000Z

Today, we’re happy to announce our Series C funding. This is a major milestone for our company, customers, and community in our mission to build a faster web, together.

Read more

Guillermo Rauch Kevin Van Gundy
https://vercel.com/blog/nextjs-special-event-recap Next.js 11, Next.js Live and more: A recap of Next.js Conf Special Edition 2021-06-22T13:00:00.000Z

Last week, over 65,000 members of the Next.js community tuned in to watch a special edition of Next.js Conf where we shared our progress toward building a faster web.

Missed it? Here's what you need to know.

Read more

Greta Workman Lee Robinson
https://vercel.com/changelog/routing-based-on-headers-and-query-string-parameters Routing based on Headers and Query String Parameters 2021-06-02T13:00:00.000Z

At Vercel, our goal is to provide you with the best performance for your web projects, while still allowing for the most flexibility possible when it comes to tailoring responses to users.

As part of these efforts, we're now launching a new sub property called has for rewrites, redirects, and headers (in vercel.json or next.config.js), which allows for routing conditionally based on the values of headers, cookies, and query string parameters.

Combined with features like SSG, ISR, or cached SSR, it can be used in cases like these:

  • Responding differently based on a cookie that was set in the visitor's browser (Cookie header) or the type of device the visitor is using (User-Agent header).

  • Responding differently based on the geographical location of the visitor (Geo-IP headers).

  • Redirecting users directly to their dashboard if they're logged in (Cookie header).

  • Redirecting old browsers to prevent serving unsupported pages (User-Agent header).
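For instance, the cookie-based dashboard redirect above could be sketched in vercel.json like this (the cookie name authToken and the paths are placeholder assumptions):

```json
{
  "redirects": [
    {
      "source": "/",
      "has": [{ "type": "cookie", "key": "authToken" }],
      "destination": "/dashboard",
      "permanent": false
    }
  ]
}
```

The same has array also accepts header and query conditions, so the User-Agent and query-string cases follow the same shape.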

Check out the documentation and Next.js announcement to learn more.

Read more

JJ Kasper Connor Davis
https://vercel.com/changelog/detailed-usage-metrics-for-personal-accounts Detailed Usage metrics for Personal Accounts 2021-05-28T13:00:00.000Z

The new Usage Overview on the dashboard that was recently added for Teams on the Pro and Enterprise plans is now also available for Personal Accounts on the Hobby plan.

It provides insight into the following metrics related to your own usage of the Vercel platform:

  • The Networking section helps you to optimize your Deployment's responses.

  • The Functions section lets you identify Serverless Functions that execute poorly.

  • The Builds section will help you improve the duration of your Builds.

  • The Analytics section provides information about how many Web Vital data points were collected for your Deployments.

Check out the documentation as well.

Read more

Andy Schneider Christopher Skillicorn
https://vercel.com/changelog/ui-hooks-for-integrations-will-be-deprecated UI Hooks for Integrations will be deprecated 2021-05-25T13:00:00.000Z

Since the launch of the Integration Marketplace, any newly submitted Integration was expected to provide UI Hooks for integrating its UI into the Vercel Dashboard.

Because this constraint required significant additional work from third-party services that already offered their Integration UI in their own dashboards, UI Hooks will now be deprecated in favor of allowing Integration authors to re-use existing interfaces outside Vercel.

UI Hooks have already become unavailable for newly created Integrations, but they will soon also be removed from all existing Integrations, meaning that:

  • Integration UI Hooks will no longer be shown on the Dashboard.

  • The respective configuration field will be removed from the Integration Console.

  • The API endpoint /v1/integrations/configuration/:id/metadata will become unavailable.

These changes will be applied on August 20th, 2021, the same date announced for the deprecation of old Integration Webhooks.

Check out the updated documentation to learn more about upgrading your Integration.

Read more

Chris Widmaier
https://vercel.com/changelog/surfacing-the-environment-of-deployments-and-domains Surfacing the Environment of Deployments and Domains 2021-05-05T13:00:00.000Z

The following Dashboard pages now display the Environment of Deployments and Domains:

  • The Deployment View

  • The Deployment List

  • The Domain List in your Project Settings

In the case of Domains and Environment Variables, the Environment reflects the Environment of the Deployments that they're assigned to.

This change will make it easier for you to determine which Deployments, Domains, and Environment Variables relate to each other.

Check out the documentation as well.

Read more

Andy Schneider Paco Coursey Christopher Skillicorn
https://vercel.com/changelog/git-fork-protection-can-now-be-disabled Git Fork Protection can now be disabled 2021-04-23T13:00:00.000Z

If you receive a pull request from a fork of your Git repository that includes a change to the vercel.json file or the Project has Environment Variables configured, Vercel will require authorization from you or a member of your Team to deploy the pull request.

This behavior protects you from accidentally leaking sensitive Project information.

If you're certain your Environment Variables do not contain sensitive information, you can now disable Git Fork Protection by visiting the Security section of your Project Settings.

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/changelog/environments-variables-per-git-branch Environments Variables per Git branch 2021-04-21T13:00:00.000Z

You can now add Environment Variables to a specific Git branch in the Preview Environment.

When you push to a branch, a combination of Preview Environment Variables and branch-specific variables will be used. Branch-specific variables will override other variables with the same name. This means you don't need to replicate all your existing Preview Environment Variables for each branch – you only need to add the values you wish to override.

Also, you no longer need to specify the type of Environment Variable (Plaintext, Secret, Provided by System) because all values are now encrypted. The new design is optimized for both security and convenience: you can easily view a value later by editing it in the UI, or run vercel env pull to fetch Development Environment Variables locally.

We previously introduced the Provided by System option as some frameworks need to map system variables like VERCEL_URL to framework prefixed variables like NEXT_PUBLIC_VERCEL_URL. You no longer need to configure this mapping because the prefixed variables are added automatically based on your Framework Preset.
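A rough sketch of that automatic mapping (the function and its behavior here are illustrative, not Vercel's actual implementation):

```javascript
// Illustrative only: mimic how system variables like VERCEL_URL get a
// framework prefix (e.g. NEXT_PUBLIC_ for Next.js) added automatically.
function withFrameworkPrefix(systemEnv, prefix = 'NEXT_PUBLIC_') {
  const result = { ...systemEnv };
  for (const [key, value] of Object.entries(systemEnv)) {
    if (key.startsWith('VERCEL_')) {
      result[prefix + key] = value;
    }
  }
  return result;
}

const env = withFrameworkPrefix({ VERCEL_URL: 'my-app.vercel.app' });
console.log(env.NEXT_PUBLIC_VERCEL_URL); // "my-app.vercel.app"
```

In practice the prefix depends on the Framework Preset selected for the project.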

Check out the documentation as well.

Read more

Steven Salat Ernest Delgado Leo Lamprecht Christopher Skillicorn
https://vercel.com/changelog/integration-webhooks-are-now-easier-to-configure Integration Webhooks are now easier to configure 2021-04-20T13:00:00.000Z

You can now specify a generic Webhook URL in your Integration settings. If your Integration relies on Webhooks, it will now be much easier to configure and use them. Each supported event will be delivered to that URL as a POST request.

It's been possible to manually list, create, and delete Webhooks via the Vercel API, but this API is deprecated and will be removed on August 20th, 2021.

We no longer support a Delete Hook URL that receives a DELETE request when an Integration Configuration is removed. If a Delete Hook URL was set, it has been carried over as a Webhook URL with "Integration Removed" turned on. At the moment, the Integration Removed event is sent as two requests (a POST and a DELETE). The DELETE requests will no longer be sent starting August 20th, 2021.

Check out the updated documentation and API reference to learn more.

Read more

Javi Velasco Chris Widmaier Shu Uesugi Leo Lamprecht Christopher Skillicorn
https://vercel.com/blog/core-web-vitals How Core Web Vitals Will Impact Google Rankings in 2021 2021-04-15T13:00:00.000Z

Beginning this June, Google will add Core Web Vitals to its Page Experience ranking signal. Google announced last year that changes were coming to the way its algorithm ranks pages, going beyond load speed, safe browsing, HTTPS, and mobile-friendliness.

Core Web Vitals evaluate speed, responsiveness, and visual stability of pages and prioritize the site in rankings based on the outcomes of these scores. This means your site performance has a direct impact on SEO and your business.

Read more

Christina Kopecky Lee Robinson
https://vercel.com/changelog/domains-can-now-easily-be-transferred-out Domains can now easily be transferred out 2021-04-02T13:00:00.000Z

Starting today, if you are looking to transfer out domains you purchased from or transferred into Vercel, you can access the authentication code for initiating the transfer to another registrar directly on your Domains overview.

Previously, this was only possible by contacting Support.

Check out the documentation as well.

Read more

Allen Hai Rizwana Akmal Khan
https://vercel.com/changelog/domains-now-include-their-www-counterpart Domains now include their `www` counterpart 2021-04-02T13:00:00.000Z

Adding a domain to a project will now also suggest adding its www counterpart. This ensures visitors can always access your site, regardless of whether they type www when entering the domain.

Using a www domain guarantees that the Vercel Edge Network can reliably and securely route incoming traffic as quickly as possible, so redirecting non-www to the www domain is recommended. Redirecting the other way works too if you prefer a cleaner URL.

Existing domains are not affected by this change, but we recommend ensuring that your project already has a www redirect in place.
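Redirects of this kind are typically configured when assigning domains in the dashboard, but a non-www-to-www redirect can also be sketched in vercel.json (example.com is a placeholder):

```json
{
  "redirects": [
    {
      "source": "/:path*",
      "has": [{ "type": "host", "value": "example.com" }],
      "destination": "https://www.example.com/:path*",
      "permanent": true
    }
  ]
}
```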

Check out the documentation as well.

Read more

Paco Coursey Christopher Skillicorn
https://vercel.com/changelog/the-build-cache-can-now-be-enabled-when-redeploying The build cache can now be enabled when redeploying 2021-03-22T13:00:00.000Z

To surface the default behavior and provide granular control, you can now find an option for including the build cache when redeploying an existing Deployment.

This update also comes with a refreshed UI for the redeploying functionality, which states more clearly which Domains will be applied to the new Deployment.

Check out the documentation as well.

Read more

Rauno Freiberg
https://vercel.com/changelog/faster-builds-with-per-branch-caching Faster builds with per-branch caching 2021-03-16T13:00:00.000Z

The Build Step now considers the current Git branch when reading and writing the cache.

The first push to a branch creates a Deployment without a branch-specific cache, so it reads from the Production branch's cache. Subsequent pushes to that branch read from its own branch-specific cache.

This means that Preview branches will no longer write to the Production branch's cache. This leads to faster builds because changing dependencies in one branch won't change the cache of another branch.
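The lookup order described above can be sketched as follows (illustrative logic only, not Vercel's implementation):

```javascript
// Illustrative only: prefer a branch-specific cache, falling back to the
// Production branch's cache on the first push to a new branch.
function pickCacheKey(branch, productionBranch, existingKeys) {
  const branchKey = `cache:${branch}`;
  return existingKeys.has(branchKey) ? branchKey : `cache:${productionBranch}`;
}

const caches = new Set(['cache:main']);
console.log(pickCacheKey('feature-x', 'main', caches)); // "cache:main" (first push)
caches.add('cache:feature-x');
console.log(pickCacheKey('feature-x', 'main', caches)); // "cache:feature-x"
```

Writes always go to the branch's own key, which is why one branch's dependency changes no longer invalidate another branch's cache.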

In addition, we no longer delete the build cache when a build fails. Instead, you can manually trigger a build without cache by using the "Redeploy" button on the Dashboard.

Our tests with a large Next.js app brought down incremental build times from 13 minutes to 4 minutes.

Check out the documentation as well.

Read more

Igor Klopov Steven Salat Luc Leray
https://vercel.com/changelog/ip-geolocation-for-serverless-functions IP Geolocation for Serverless Functions 2021-03-05T13:00:00.000Z

Requests received by Serverless Functions on Pro and Enterprise Teams are now enriched with headers containing information about the geographic location of the visitor:

  • X-Vercel-IP-Country – The 2-letter country code of the IP sending the request.

  • X-Vercel-IP-Country-Region – The ISO 3166-2 region code associated with the IP.

  • X-Vercel-IP-City – The city name associated with the IP.

As an example, a request from Tokyo now arrives with country, region, and city headers identifying Japan, the Tokyo prefecture, and the city of Tokyo.
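A minimal sketch of a Node.js Serverless Function reading these headers (the sample values JP, 13, and Tokyo are assumptions for illustration):

```javascript
// Extract the geolocation headers added by the platform.
function getGeo(headers) {
  return {
    country: headers['x-vercel-ip-country'],       // e.g. "JP"
    region: headers['x-vercel-ip-country-region'], // e.g. "13" (assumed: Tokyo)
    city: headers['x-vercel-ip-city'],             // e.g. "Tokyo"
  };
}

// A hypothetical request handler using it:
const handler = (req, res) => {
  const geo = getGeo(req.headers);
  res.end(`Hello from ${geo.city || 'somewhere'}, ${geo.country || 'unknown'}`);
};
```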

This feature is now automatically activated for all new and existing Serverless Functions on Pro and Enterprise Teams — no code or configuration change needed.

Check out the documentation as well.

Read more

Naoyuki Kanezawa Matheus Fernandes Luc Leray
https://vercel.com/blog/nuxt-analytics-on-vercel Nuxt Analytics on Vercel 2021-02-26T13:00:00.000Z

Since the last Next.js Conf, we have expanded Vercel analytics offerings to include Next.js and Gatsby. Today, we expand that offering to include Nuxt analytics, providing developers with their Real Experience Score through data from actual visitors.

Read more

Lee Robinson Joe Haddad Nathan Rajlich
https://vercel.com/changelog/every-push-now-receives-a-unique-url Every push now receives a new unique URL 2021-02-26T13:00:00.000Z

Today, we're announcing that every Git push and Vercel CLI invocation will result in a new unique URL and a new immutable Deployment.

Existing Deployments will no longer be re-used if you try to create a new one.

This change will likely not impact you in a meaningful way. On November 20th 2020, we enabled automatic System Environment Variables by default. If that option is enabled, a new immutable Deployment will already be created every time.

Vercel always strives to give you real-time feedback on every change you push. To this end, we are working on leveraging smart incremental computation techniques to avoid redoing work that’s already been done.

Read more

Luc Leray Leo Lamprecht
https://vercel.com/changelog/real-time-vercel-analytics Vercel Analytics are now real-time 2021-02-26T13:00:00.000Z

Vercel Analytics now updates your Real Experience Score in near real-time as visitors load your website:

  • Data is available seconds after enabling Analytics (down from ~30 minutes).

  • Immediately see updated metrics after a new Production Deployment (down from ~3 hours).

  • Enjoy a reactive dashboard experience, even when viewing data for long time spans.

  • Improved search, loading, and sorting for Pages/URLs for better page-by-page analysis.

In addition to near real-time updates, you can now adjust the score interval granularity to help you understand your Real Experience Score better than ever before.

Enable Vercel Analytics for your project today, or visit the documentation.

Read more

Joe Haddad JJ Kasper Matheus Fernandes Joe Cohen
https://vercel.com/changelog/nuxt-analytics-available-on-vercel-analytics Nuxt analytics available on Vercel Analytics 2021-02-26T13:00:00.000Z

Nuxt analytics are now available for all Nuxt projects on Vercel as part of Vercel Analytics, with zero configuration. This allows developers to understand the Real Experience Score for Nuxt projects.

To enable, after importing your Nuxt project:

  1. Open a Nuxt project in your Vercel dashboard.

  2. Select the "Analytics" tab and follow the flow.

No code changes are required, and options for self-hosted applications are available for enterprises.

Once deployed, your application will automatically report Core Web Vitals to Vercel.

Check out the documentation as well.

Read more

Lee Robinson Nathan Rajlich Joe Haddad
https://vercel.com/changelog/jekyll-deployments-are-now-15x-faster Jekyll deployments are now 15x faster 2021-02-25T13:00:00.000Z

Starting today, Jekyll dependencies from bundle install will be cached and used for subsequent Deployments. A "hello world" Jekyll application now builds 15x faster – down from 3 minutes to 11 seconds with cache.

You can verify that the build cache was used by viewing your Deployment's build logs.

Read more

Steven Salat
https://vercel.com/changelog/detailed-usage-metrics-are-now-available Detailed usage metrics are now available 2021-02-24T13:00:00.000Z

With the introduction of a new Usage overview, the dashboard now provides detailed insight into all the relevant usage metrics for your Team, and visualizes them as different charts:

  • The Networking section helps you ensure all responses are made as efficient as possible, split by cached and uncached responses.

  • The Functions section helps you track down misbehaving Serverless Functions through understanding requests that failed, timed out or were throttled.

  • The Builds section helps you ensure your Deployments spend the least amount of time possible in the Build Step.

Navigate to the new "Usage" tab available to Teams on the Pro or Enterprise plan and start optimizing your Projects today.

Check out the documentation as well.

Read more

Shu Ding Andy Schneider Joe Haddad Christopher Skillicorn Leo Lamprecht
https://vercel.com/blog/sophisticated-usage-dashboard Visualize Team Usage With Sophisticated Usage Dashboard 2021-02-23T13:00:00.000Z

Today, we are announcing an improvement to how usage metrics are delivered to developers and teams on Pro and Enterprise plans. 

Read more

Shu Ding Andy Schneider Joe Haddad Christopher Skillicorn Leo Lamprecht Christina Kopecky
https://vercel.com/changelog/domains-can-now-be-redirected-with-a-custom-status-code Domains can now be redirected with a custom status code 2021-02-18T13:00:00.000Z

You can now select a temporary or permanent status code for Domain Redirects.

There are some subtle differences between these status codes:

  • 307 Temporary Redirect: Not cached by client, method and body never changed.

  • 302 Found: Not cached by client, method may or may not be changed to GET.

  • 308 Permanent Redirect: Cached by client, method and body never changed.

  • 301 Moved Permanently: Cached by client, method may or may not be changed to GET.

We recommend using status code 307 or 308 to avoid the ambiguity of non-GET methods, which matters especially when your application needs to redirect requests for a public API.
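The same choice also exists for redirects defined in vercel.json, where a statusCode can be set explicitly (the paths here are placeholders):

```json
{
  "redirects": [
    { "source": "/old-blog/:slug", "destination": "/blog/:slug", "statusCode": 308 },
    { "source": "/beta", "destination": "/", "statusCode": 307 }
  ]
}
```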

Check out the documentation as well.

Read more

Steven Salat
https://vercel.com/blog/vercel-and-next-js-experts-help-teams-build-the-next-big-thing Vercel & Next.js Experts Help Teams Build the Next Big Thing 2021-02-16T13:00:00.000Z

In the past year, we have enabled enterprise companies like Airbnb, Harry Rosen, and Coravin to develop better websites that deliver tremendous business impact. We didn’t do this alone. Vercel's partners helped to make these mission-critical transformations a reality.  

Read more

Kevin Van Gundy Jen Chang Shu Uesugi Evil Rabbit
https://vercel.com/changelog/node-js-10-is-being-deprecated Node.js 10 is being deprecated 2021-02-09T13:00:00.000Z

Following the release of Node.js 14 last week, Vercel is announcing the deprecation of Node.js 10, which reaches its official end of life on April 30th 2021.

On April 20th 2021, Node.js 10 will be disabled in the Project Settings and existing Projects that have Node.js 10 selected will render an error whenever a new Deployment is created. The same error will show if the Node.js version was configured in the source code.

Serverless Functions of existing Deployments that are using Node.js 10 will be migrated to Node.js 12 on the date mentioned above.

If your Project is using Node.js 10 (which you've either defined in engines in package.json or on the General page in the Project Settings), we recommend upgrading it to the latest version (Node.js 14).
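For reference, pinning the version via package.json looks like this:

```json
{
  "engines": {
    "node": "14.x"
  }
}
```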

Need help migrating to Node.js 14? Let us know and we'll help you out.

Read more

Leo Lamprecht Steven Salat Matheus Fernandes
https://vercel.com/changelog/the-directory-listing-feature-will-be-disabled-for-older-projects The Directory Listing feature will be disabled for older projects 2021-02-08T13:00:00.000Z

Last month, Vercel announced that the Directory Listing feature could now be toggled directly from the Project Settings and that it would be disabled for newly created Projects.

For security reasons, and to prevent unexpected behavior for older Projects, the Directory Listing feature will be disabled for all Projects created before January 12th 2021, the release date of the respective Project Setting.

The change will be applied on March 8th 2021.

Because the Directory Listing feature allows access to the source code of a Deployment when no index file is present, it's safer to disable it by default. If you rely on it, however, you can turn the feature back on right afterwards.

Check out the documentation as well.

Read more

Leo Lamprecht
https://vercel.com/changelog/node-js-14-lts-is-now-available Node.js 14 LTS is now available 2021-02-04T13:00:00.000Z

As of today, version 14 of Node.js can be selected in the Node.js Version section on the General page in the Project Settings (newly created Projects will default to the new version).

Among other features, the new version introduces Diagnostic Reports, which can be logged to Log Drains (recommended) like so:
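A minimal sketch of generating a report on demand and writing it to stdout, where a configured Log Drain would pick it up (the summarized fields are a choice for illustration, not a requirement):

```javascript
// Generate an on-demand Diagnostic Report (Node.js 14) and log a summary.
const report = process.report.getReport();

// Log a compact JSON line rather than the full multi-kilobyte report.
console.log(JSON.stringify({
  nodeVersion: report.header.nodejsVersion,
  platform: report.header.platform,
  heapTotalBytes: report.javascriptHeap.totalMemory,
}));
```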

The new release also ships a newer V8, introducing JavaScript language features such as optional chaining (?.) and nullish coalescing (??).

The exact version used is 14.15.4 (changelog), but automatic updates will be applied for new minor and patch releases. Therefore, only the major version (14.x) is guaranteed.

Check out the documentation as well.

Read more

Steven Salat Leo Lamprecht
https://vercel.com/changelog/correcting-request-urls-with-python-serverless-functions Correcting Request URLs with Python Serverless Functions 2021-02-02T13:00:00.000Z

At the moment, the URLs of incoming requests to Python Serverless Functions deployed on Vercel are decoded automatically.

Because this behavior is not consistent with a "standalone" Python server, Vercel will stop decoding them for newly created Serverless Functions starting March 2nd, 2021. Existing Deployments will not be affected.

As an example, imagine a Python Serverless Function receiving a request whose URL ends in /hi%21:

  • With the incorrect behavior, self.path will be set to /hi!.

  • With the updated correct behavior, self.path will be set to /hi%21, which matches the behavior of the built-in HTTPServer class in Python.
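The underlying rule is plain percent-encoding, independent of Python; a quick JavaScript illustration:

```javascript
// %21 is the percent-escape for "!". A standards-following server hands the
// application the raw path and leaves any decoding to the application itself.
const rawPath = '/hi%21';
const decoded = decodeURIComponent(rawPath);

console.log(rawPath); // "/hi%21" – the raw, undecoded path
console.log(decoded); // "/hi!"  – the decoded value
```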

To try out this change, define a FORCE_RUNTIME_TAG Environment Variable for your project, set it to canary and create a new Deployment.

Read more

Nathan Rajlich
https://vercel.com/blog/transfer-vercel-projects-with-zero-downtime Transfer Vercel projects with zero downtime 2021-01-28T13:00:00.000Z

There is a new way to transfer projects on Vercel from Hobby accounts to Team plans or between two different Teams with zero downtime. This means Teams no longer need to redeploy projects that were deployed under a Hobby plan or a different Team.

Read more

Mark Glagola Paco Coursey Steven Salat Leo Lamprecht Christopher Skillicorn
https://vercel.com/changelog/projects-can-now-be-transferred-without-downtime Projects can now be transferred without downtime 2021-01-28T13:00:00.000Z

Personal Accounts on Vercel are great for hobby projects. Once you need to deploy more powerful sites, collaborate with other people and customize your workflow, Teams are the right way to go.

Migrating from a Personal Account to a Team previously wasn't possible without incurring downtime. It required removing Projects from their old location and deploying them again in the new one. As of today, it's only a matter of a few clicks on the dashboard.

Transferring projects without downtime is now as easy as navigating to the Advanced page in the Project Settings, following the flow and watching the magic happen.

All Deployments, Domains, Environment Variables, and any other configuration of your Project will automatically be moved to the target Team for you. Even optional features you might've enabled can be transferred over.

Check out the documentation as well.

Read more

Mark Glagola Paco Coursey Steven Salat Leo Lamprecht Christopher Skillicorn
https://vercel.com/blog/10-next-js-tips-you-might-not-know 10 Next.js tips you might not know 2021-01-26T13:00:00.000Z

Here are 10 little-known Next.js tips that could help you save time on your next project:

Read more

Lee Robinson Christina Kopecky
https://vercel.com/changelog/node-js-version-now-customizable-in-the-project-settings Node.js Version now customizable in the Project Settings 2021-01-22T13:00:00.000Z

For easy customization and in preparation for Node.js 14 LTS landing in the future, the General page in the Project Settings now contains a section for defining the Node.js version used in the Build Step and Serverless Functions.

Previously, defining an engines property in the package.json file was required to customize the Node.js version. However, this property will take precedence over the Project Setting.

Check out the documentation as well.

Read more

Steven Salat Leo Lamprecht
https://vercel.com/changelog/invoices-are-now-available-on-a-dedicated-page Invoices are now available on a dedicated page 2021-01-22T13:00:00.000Z

Invoices for your payments on Vercel were previously found in the Past Invoices section of the Usage tab in the Dashboard.

To make it easier to navigate, they have been moved to a dedicated Invoices page alongside Billing in the Personal Account and Team Settings.

Even though the Upcoming Invoice section has been removed with this change, the Billing page in the Personal Account and Team Settings now provides the same insight.

Read more

Andy Schneider Christopher Skillicorn Leo Lamprecht
https://vercel.com/changelog/urls-are-becoming-consistent URLs are becoming consistent 2021-01-20T13:00:00.000Z

A lot of feedback we've gathered has shown that the URLs Vercel currently provides you with are too complicated. As part of our strategy for making them simpler, we're starting with applying a consistent format on February 20th 2021:

  • Custom Domains and Automatic URLs ending in now.sh will instead end in vercel.app.

  • Automatic Deployment URLs like project-d418mhwf5.vercel.app will gain the slug of the owner Vercel scope to match Automatic Branch URLs: project-d418mhwf5-team.vercel.app.

  • Automatic Branch URLs like project-git-update.team.vercel.app will lose their second subdomain level in favor of a dash: project-git-update-team.vercel.app.

  • Automatic Project URLs like project.team.vercel.app and Automatic Team Member URLs like project-user.team.vercel.app will be adjusted like Automatic Branch URLs.
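The announced rewrite for these URLs can be sketched as follows (illustrative logic, using the document's own examples):

```javascript
// Collapse a second subdomain level into a dash:
// "project-git-update.team.vercel.app" -> "project-git-update-team.vercel.app"
function toConsistentUrl(hostname) {
  const parts = hostname.split('.');
  if (parts.length === 4 && parts[2] === 'vercel' && parts[3] === 'app') {
    return `${parts[0]}-${parts[1]}.vercel.app`;
  }
  return hostname; // already in the consistent format
}

console.log(toConsistentUrl('project-git-update.team.vercel.app'));
// "project-git-update-team.vercel.app"
```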

It is recommended not to rely on any of the Automatic URLs for Production use cases; use Custom Domains instead. If that's not possible, please ensure any program sending requests to these URLs supports 308 redirects, as modern browsers do.

Read more

Leo Lamprecht
https://vercel.com/blog/everything-about-react-server-components React Server Components with Next.js 2021-01-15T13:00:00.000Z

React Server Components allow developers to build applications that span the server and client, combining the rich interactivity of client-side apps with the improved performance of traditional server rendering.

In the upcoming Next.js major release, React developers will be able to use Server Components inside the app directory as part of the changes outlined by the Layouts RFC. This post will explore how Server Components will enable developers to create faster React applications.

Read more

Tim Neutkens Joe Haddad Lee Robinson Christina Kopecky
https://vercel.com/changelog/serverless-functions-are-now-deployed-to-us-east-by-default Serverless Functions are now deployed to US East by default 2021-01-14T13:00:00.000Z

Many Serverless Functions communicate with third-party services. Because most of these services are available in the US East region, deploying Serverless Functions to US West leads to slower response times.

For that reason (and to decrease the latency for requests arriving from Europe), newly created projects will default to the US East region (Washington, D.C., USA) instead of US West (San Francisco, USA) when deploying Serverless Functions.

Existing projects will be unaffected, but can be switched to the new default from the new "Serverless Functions" page in the Project Settings.

Check out the documentation as well.

Read more

Paco Coursey Steven Salat Nathan Rajlich Christopher Skillicorn Leo Lamprecht
https://vercel.com/changelog/failed-payments-can-now-be-retried Failed payments can now be retried 2021-01-12T13:00:00.000Z

If a payment failed for your Team because of an issue with your payment method, you previously had to reach out to our Support Team and ask them to retry it before you could create Deployments again.

Since that's quite a slow process, we instead added a button to the Billing page in your Team Settings that you can click to immediately issue a new charge. If that charge succeeds, your Team will be able to create Deployments again right after.

Changing the payment method will automatically issue a charge too, so this button is particularly helpful if you've fixed an issue with an existing payment method.

Read more

Andy Schneider Leo Lamprecht
https://vercel.com/changelog/listing-the-content-of-directories-can-now-be-toggled Listing the content of directories can now be toggled 2021-01-12T13:00:00.000Z

Until now, visiting a directory's path would list that directory's contents (provided it didn't contain an index file).

In cases where this was considered a security issue, turning off the Directory Listing required configuring a rewrite rule in vercel.json.

As of today, the Directory Listing is disabled for all newly created Projects and can be toggled on the "Advanced" page in the Project Settings.

Check out the documentation as well.

Read more

Naoyuki Kanezawa Luc Leray Christopher Skillicorn Leo Lamprecht
https://vercel.com/changelog/git-repositories-can-now-be-searched-for-and-imported-easily Git repositories can now be searched for and imported easily 2021-01-08T13:00:00.000Z

Importing a Git repository into Vercel used to require navigating to it on the Git provider of your choice, copying its URL from the address bar, pasting it in the project creation flow on Vercel and then following the steps.

Thanks to its most recent update, however, the project creation flow now renders a list of recommended Git repositories to import, annotated with icons for the frameworks used within them, and also lets you search for a particular one.

Cloning a Template is now also much easier than before, as they are presented on the same page as the recommended Git repositories.

Check out the documentation as well.

Read more

Ana Jovanova Luc Leray Naoyuki Kanezawa Christopher Skillicorn Leo Lamprecht
https://vercel.com/changelog/multiple-git-namespaces-per-personal-account-and-team Multiple Git namespaces per Personal Account and Team 2021-01-08T13:00:00.000Z

When connecting a Project on Vercel to a Git repository, the Git repository previously had to be located in the same Git scope as the Git repositories of all other Projects within that Personal Account or Team.

Now that this connection is defined at the Project level (see above) instead of being configured on the Personal Account or Team, this limitation is lifted. Additionally, problems with an active connection are now surfaced there too.

Every Personal Account or Team can now contain Vercel Projects that are connected to Git repositories located in various different Git scopes. This also means that, when importing one, Vercel no longer forces a certain destination Personal Account or Team.

Check out the documentation as well.

Read more

Shu Ding Javi Velasco Christopher Skillicorn Leo Lamprecht
https://vercel.com/changelog/environment-variables-can-now-be-filtered Environment Variables can now be filtered 2020-12-23T13:00:00.000Z

Previously, Environment Variables defined in the Project Settings were separated into different Environments using tabs in the UI. To make it easier to add them to multiple Environments at once and edit them in one place, they now live in a single list.

To let you still view only the Environment Variables you're interested in, we added a new search field, plus a select field on the right that filters the Environment Variables down to a specific Environment.

Read more

Ana Jovanova Leo Lamprecht
https://vercel.com/blog/three-improvements-to-vercel-project-creation-vercel-git-integration Three Improvements to Project Creation & Git Integration 2020-12-18T13:00:00.000Z

Projects are core to everything on your Vercel account. We’ve recently improved the developer experience by introducing three updates for projects. These apply to all users on Hobby, Pro, and Enterprise plans.

By improving how projects are created and connected to Git in Vercel, we expect a decrease in the time between project creation and deployment for all users and a reduction in complexity for some larger Vercel customers.

Read more

Leo Lamprecht Shu Ding Luc Leray Ana Jovanova Naoyuki Kanezawa Christopher Skillicorn
https://vercel.com/blog/series-b-40m-to-build-the-next-web $40M to Build the Next Web 2020-12-16T13:00:00.000Z

Today, we announce $40M in new funding to help everyone build the next web.

When responding to investors, we told them the stories of our customers, from independent developers to Fortune 10 companies, and the lessons we learned this year about how Next.js and Vercel help teams collaborate and move faster with greater flexibility.

Read more

Guillermo Rauch Kevin Van Gundy Chris Leishman
https://vercel.com/changelog/build-and-function-logs-now-render-ansi-color-codes-nicely Build and Function Logs now render ANSI color codes nicely 2020-12-03T13:00:00.000Z

If the logs that your source code or framework emit during the Build Step or within your Serverless Functions contain ANSI color codes for added clarity, Vercel previously printed those codes out verbatim in the respective views on the Dashboard.

As of today, however, all of those codes are automatically parsed within the Deployment View, which contains Build Logs on the main page, but also the logs for your Serverless Functions on the "Functions" tab.

In the example above, you can see that ANSI codes are now automatically rendered as the colors they are supposed to represent, which makes the text much easier to understand.

Read more

Nathan Rajlich Leo Lamprecht
https://vercel.com/changelog/system-environment-variables-are-now-available-by-default System Environment Variables are now available by default 2020-11-20T13:00:00.000Z

Previously, consuming values provided by the Vercel platform in your Environment Variables (like the URL of your Deployment) required adding System Environment Variables using the "Environment Variables" page in the Project Settings.

All new Projects created as of today, however, will automatically receive all System Environment Variables by default – without you having to expose them explicitly.

This setting can also be controlled from existing Projects, which means that you can easily opt into the new behavior for those as well.

Furthermore, the available System Environment Variables were revamped to have much more straightforward names and no longer differentiate between Git providers. For example, you can now use VERCEL_GIT_COMMIT_SHA to retrieve the Git commit SHA for GitHub, GitLab, and Bitbucket, instead of having to use several different System Environment Variables for that.
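
As a sketch, the unified variables can be read like any other environment variable at build or run time (the fallback value here is illustrative; the variables are only populated on Vercel):

```javascript
// Read the unified System Environment Variables; they are undefined outside
// a Vercel build or deployment, so fall back to a placeholder locally.
const sha = process.env.VERCEL_GIT_COMMIT_SHA || 'local';
const branch = process.env.VERCEL_GIT_COMMIT_REF || 'local';

console.log(`Deployed ${branch}@${sha}`);
```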

Check out the documentation as well.

Read more

Luc Leray Ana Jovanova Leo Lamprecht
https://vercel.com/changelog/projects-can-now-be-renamed Projects can now be renamed 2020-11-20T13:00:00.000Z

For all the Projects you've deployed on Vercel in the past, the platform either automatically selected a name for you based on the name of your Git repository or local directory, or you manually customized it before the Project was created.

Previously, it wasn't possible to change the name of a Project after it was created. As of today, however, you can do it directly from the "General" page in the Project Settings.

Changing the name of your Project causes no interruption to your or your Team's workflow, whether you're deploying from a Git repository or have linked the Project to a local directory using Vercel CLI.

Check out the documentation as well.

Read more

Paco Coursey Leo Lamprecht
https://vercel.com/changelog/dependencies-can-now-be-installed-with-a-custom-command Dependencies can now be installed with a custom command 2020-11-11T13:00:00.000Z

By default, Vercel automatically determines the right command for installing your project's code dependencies in the Build Step based on the Framework Preset configured for your project and the presence of certain files (like package-lock.json) in your source code.

As of today, you can customize the command that Vercel will run within the Build Step for installing your code dependencies.

In the new Install Command section within the Project Settings, you can now enter any command of your choice that will be run instead of having Vercel automatically determine the right one for you.

Check out the documentation as well.

Read more

Steven Salat Leo Lamprecht
https://vercel.com/changelog/auto-renewal-can-now-be-disabled-for-domains Auto Renewal can now be disabled for Domains 2020-11-10T13:00:00.000Z

Once you've purchased a Domain with Vercel or transferred it in from a different platform, Vercel will automatically make sure that your Domain is renewed every year – before it expires.

This way, you never have to worry about your projects becoming unavailable. Instead, you are automatically charged the renewal fee every year and your Domain continues working.

Previously, the only way to prevent a Domain from being renewed again (in the case that you don't want to continue using it, for example) was contacting our Support Team, who disabled auto renewal for you.

As of today, you can toggle the auto renewal behavior of a Domain right on the Dashboard by navigating to the "Domains" tab on your Personal Account or Team, clicking the Domain you're interested in and toggling the option on the top right.

Check out the documentation as well.

Read more

Paco Coursey Leo Lamprecht
https://vercel.com/blog/gatsby-analytics Vercel Analytics for Gatsby 2020-11-04T13:00:00.000Z

At Next.js Conf, we announced Next.js Analytics, providing developers with their Real Experience Score through data from actual visitors. Today we're expanding Vercel's analytics offerings to include Gatsby.

Read more

Lee Robinson Joe Haddad
https://vercel.com/blog/changelog-september-2020 September 2020 2020-09-01T13:00:00.000Z

Read more

https://vercel.com/blog/monorepos-are-changing-how-teams-build-software Monorepos 2020-08-28T13:00:00.000Z

Vercel now supports monorepos for improved flexibility at scale. From the same Git repository, you can set up multiple projects to be built and deployed in parallel.

Monorepos let your team use multiple programming languages and frameworks, collaborate better, and leverage microfrontend architectures.

Learn more about how monorepos are changing how teams build software.

Read more

Shu Ding Igor Klopov Steven Salat Javi Velasco Christopher Skillicorn Leo Lamprecht
https://vercel.com/blog/changelog-august-2020 August 2020 2020-08-01T13:00:00.000Z

Read more

https://vercel.com/blog/new-edge-dev-infrastructure Our new Edge and Dev infrastructure 2020-07-21T13:00:00.000Z

Vercel was born to help frontend teams succeed at scale. From the ideal developer experience on localhost, to the best performance for your end-user via our Global Edge Network.

Today we are introducing major end-to-end enhancements, starting with a realtime developer workflow (with Next.js and Vercel CLI) and finishing with serving pages up to 6x faster.

Read more

Matheus Fernandes Nathan Rajlich Tim Neutkens Joe Haddad Max Leiter
https://vercel.com/blog/custom-production-branch Custom production branch 2020-07-17T13:00:00.000Z

Up until now, after creating a new Project from a Git repository or one of our examples, all commits to its default branch were being deployed to Production.

Today we are introducing a new default for newly created Projects, as well as an easy way to customize it from your Project Settings.

Read more

Shu Ding Christopher Skillicorn Andy Schneider Leo Lamprecht
https://vercel.com/blog/nextjs-server-side-rendering-vs-static-generation Next.js: Server-side Rendering vs. Static Generation 2020-07-09T13:00:00.000Z

Next.js is a React framework that supports pre-rendering. Instead of having the browser render everything from scratch, Next.js can serve pre-rendered HTML in two different ways.

Read more

Lee Robinson
https://vercel.com/blog/changelog-july-2020 July 2020 2020-07-01T13:00:00.000Z

Read more

https://vercel.com/blog/dns-records-ui DNS Records UI 2020-06-23T13:00:00.000Z

Applying custom DNS Records to your Domains (for receiving emails, for example) has so far always required interacting with our advanced command-line interface.

From today, you'll be able to manage them directly from the Web UI and even insert presets for commonly used DNS Records.

Read more

Luc Leray Christopher Skillicorn Ana Jovanova Leo Lamprecht
https://vercel.com/blog/changelog-june-2020 June 2020 2020-06-01T13:00:00.000Z

Read more

https://vercel.com/blog/changelog-may-2020 May 2020 2020-05-01T13:00:00.000Z

Read more

https://vercel.com/blog/security-controls-protected-preview-deployments-passwords Protecting Deployments 2020-05-01T13:00:00.000Z

Pushing a change to your project results in a Preview Deployment. Then, once you're ready, merging it into master results in a Production Deployment with the domain of your choice.

Even though Preview Deployments receive a unique URL, they might still be accessed by anyone that finds out about the URL. Today, we're introducing two features for easily protecting them right from the Dashboard.

Read more

Naoyuki Kanezawa Christopher Skillicorn Joe Cohen Leo Lamprecht Connor Davis
https://vercel.com/blog/zeit-is-now-vercel ZEIT is now Vercel 2020-04-21T13:00:00.000Z

Read more

Guillermo Rauch
https://vercel.com/blog/environment-variables-ui Environment Variables UI 2020-04-14T13:00:00.000Z

If you are working on a sophisticated project, you might have found yourself wanting to configure different Environment Variables depending on the Environment your project is deployed to.

With today's release, we're making it possible to configure different Environment Variables for Production, Preview, and Development – right in the Dashboard.

Read more

Luc Leray Steven Salat Leo Lamprecht Christopher Skillicorn
https://vercel.com/blog/simpler-pricing Simpler Pricing 2020-04-08T13:00:00.000Z

Since the launch of our platform, we have always aimed to make our pricing model as simple as possible, and perfectly tailored to your needs.

Today, we are taking a giant leap towards that goal by introducing our new pricing plans for your personal account and teams.

Read more

Andy Schneider Shu Ding Max Rovensky Christopher Skillicorn Leo Lamprecht
https://vercel.com/blog/changelog-april-2020 April 2020 2020-04-01T13:00:00.000Z

Read more

https://vercel.com/blog/we-are-all-in-this-together We're All in This Together 2020-03-25T13:00:00.000Z

As a remote-first company, we're lucky to be minimally affected by recent events. Outside of our regular work, we're doing our best to support one another and our families — hosting virtual game nights, zoom hangouts, group meditation, and regular check-ins.

We also recognize that we have an opportunity — no, an obligation — to help our communities in any way we can. So today, we want to step aside from our typical product-focused content and highlight some recent projects from developers in our Next.js and Vercel community.

Our community has built 2,500+ COVID-19 related sites generating over 150 million requests in the past 72 hours — providing critical information and awareness, helping prevent further outbreaks, and giving us tools for keeping each other safe. This blog post is dedicated to these inspiring efforts.

Read more

Kevin Van Gundy Timothy Lorimer Matthew Sweeney Sarup Banskota
https://vercel.com/blog/canceling-ongoing-deployments Canceling Ongoing Deployments 2020-03-24T13:00:00.000Z

Sometimes you might find yourself having created a deployment that you don't need anymore, or that is causing other deployments to get queued behind it.

Previously, it was necessary to wait for such deployments to complete, and then delete them. As of today, however, you can immediately cancel deployments if they are no longer required.

Read more

Ana Jovanova Igor Klopov Joe Cohen Leo Lamprecht
https://vercel.com/blog/new-git-integration-settings New Git Integration Settings 2020-03-23T13:00:00.000Z

Creating a new project on Vercel is as simple as importing a Git repository from your favorite provider, whether that's GitHub, GitLab, or Bitbucket.

Once a project has been imported, the Git Integration connection can be edited in the blink of an eye. Today, we're making this process easier to understand and more reliable than before.

Read more

Luc Leray Christopher Skillicorn Leo Lamprecht
https://vercel.com/blog/refined-logging Refined Logging 2020-03-11T13:00:00.000Z

With the launch of Log Drains, we made it easy to pipe the invocation logs of your Serverless Functions or Static Files to a log inspection tool like LogDNA or Datadog.

Handing off this piece of your production workflow to a service dedicated to this purpose allowed us to tighten our focus around what we do best: Plug-and-play realtime logs.

Read more

Christopher Skillicorn Max Rovensky Leo Lamprecht
https://vercel.com/blog/changelog-march-2020 March 2020 2020-03-01T13:00:00.000Z

Read more

https://vercel.com/blog/advanced-project-settings Advanced Project Settings 2020-02-06T13:00:00.000Z

With the launch of Zero Config Deployments, we made setting up your projects as easy as importing a Git repository, and having every push and pull request deployed with Vercel. No configuration.

Today, we're extending this process to non-JavaScript projects (like Hugo sites) and giving you full control over your project's automatically configured settings.

Read more

Andy Schneider Luc Leray Shu Ding Christopher Skillicorn Leo Lamprecht
https://vercel.com/blog/support-form Get support from the dashboard 2020-02-03T13:00:00.000Z

Getting in touch with Vercel Support has always been straightforward. However, we wanted to make this even easier, providing direct access to Vercel Support from your dashboard and reducing the impact on your workflow. We are delighted to say that from today this is now possible, with the new Support Form.

Read more

Allen Hai Christopher Skillicorn Matthew Sweeney Leo Lamprecht
https://vercel.com/blog/changelog-february-2020 February 2020 2020-02-01T13:00:00.000Z

Read more

https://vercel.com/blog/log-drains Log Drains 2020-01-31T13:00:00.000Z

Inspecting logs for the Build Step, Runtime, and Edge Network traffic of a deployment can be crucial to pinpointing aspects of its behavior and understanding better where improvements can be made.

Today, we are thrilled to announce support for Log Drains: collect all of your logs using a service that specializes in storing app logs.

Read more

Naoyuki Kanezawa Joe Cohen
https://vercel.com/blog/changelog-january-2020 January 2020 2020-01-01T13:00:00.000Z

Read more

https://vercel.com/blog/our-first-online-conference backendlessConf_ 2019 2019-12-23T13:00:00.000Z

2019 has been an incredible year for Vercel. We announced zero-config, launched a new integrations platform, and even hosted a successful hackathon.

To end the year on a memorable note, we held our first-ever remote conference: backendlessConf_.

Read more

Sarup Banskota Giel Cobben Matthew Sweeney Max Rovensky Paco Coursey
https://vercel.com/blog/branch-domains Branch Domains 2019-12-20T13:00:00.000Z

After editing your project, previewing your changes with Vercel is only a matter of pushing a Git commit using our Git Integration, or by running a single command using our command-line interface.

Every Deployment created in either way receives a unique URL, yet you still might want to apply a Custom Domain for your Preview Deployments. Today, we are making this possible with Branch Domains.

Read more

Luc Leray Christopher Skillicorn Leo Lamprecht
https://vercel.com/blog/changelog-december-2019 December 2019 2019-12-01T13:00:00.000Z

Read more

https://vercel.com/blog/bitbucket Vercel for Bitbucket 2019-11-27T13:00:00.000Z

Bitbucket is popular among teams as the central place to plan projects, collaborate on code, test, and deploy — especially in combination with Jira and Trello.

Today, we are proud to announce our first-class Bitbucket integration, Vercel for Bitbucket.

Read more

Arunoda Susiripala Joe Cohen
https://vercel.com/blog/dashboard-redesign Dashboard redesign 2019-11-20T13:00:00.000Z

With the launch of Zero Config Deployments, Vercel made it easier than ever to deploy websites and applications. Now, we're bringing the simplicity of our developer experience to our web dashboard.

Creating new projects, importing existing code, managing domains, setting up redirects, inspecting deployments and functions, and managing teams has never been easier.

We are unveiling the next evolution of the Vercel Dashboard.

Read more

Evil Rabbit Shu Ding Paco Coursey Christopher Skillicorn Max Rovensky
https://vercel.com/blog/deploy-button Introducing the Deploy Button 2019-11-18T13:00:00.000Z

As the author of an open source project or framework, one of your key focuses is making it as easy as possible for users to get started with your creation.

With the help of today's feature release, you can now reduce this entire process down to the click of a single button: The Vercel Deploy Button.

Read more

Shu Ding Christopher Skillicorn Timothy Lorimer Paco Coursey Leo Lamprecht
https://vercel.com/blog/functions-tab Inspecting Serverless Functions 2019-11-18T13:00:00.000Z

After deploying a static frontend to Vercel, some projects might use Serverless Functions to supply it with data.

Creating Serverless Functions is as simple as adding an API directory in your project, and today inspecting them became just as comfortable with the new "Functions" tab from your Deployment Overview.

Read more

Max Rovensky Leo Lamprecht Christopher Skillicorn
https://vercel.com/blog/customizing-serverless-functions Customizing Serverless Functions 2019-11-12T13:00:00.000Z

When extending your project with Serverless Functions, you might find yourself in a situation where adjusting the default behavior is necessary.

Today, we are adding a new functions configuration property to allow you to do just this.
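
As a rough sketch of the shape this takes, a vercel.json using the functions property might look like the following (the glob pattern and limits shown here are illustrative, not defaults; see the documentation for the supported keys):

```json
{
  "functions": {
    "api/*.js": {
      "memory": 1024,
      "maxDuration": 10
    }
  }
}
```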

Read more

Andy Schneider Leo Lamprecht
https://vercel.com/blog/changelog-november-2019 November 2019 2019-11-01T13:00:00.000Z

Read more

https://vercel.com/blog/default-production-domain Default Production Domain 2019-10-31T13:00:00.000Z

When creating a new project, it's important that the road to sharing a working production URL of your newly deployed code is as short as possible, with the least amount of friction.

With today's announcement, we're ensuring exactly that.

Read more

Andy Schneider Shu Ding Christopher Skillicorn Luc Leray Leo Lamprecht
https://vercel.com/blog/redirecting-domains Redirecting Domains 2019-10-29T13:00:00.000Z

If you own multiple domains and would like to forward them to a single one, or redirect a subdomain like www to your apex domain, you previously had to create multiple deployments and set up Routes for each of them.

Now, you can accomplish the same, right from your dashboard.

Read more

Max Rovensky Christopher Skillicorn Andy Schneider Leo Lamprecht
https://vercel.com/blog/advanced-invoice-settings Advanced Invoice Settings 2019-10-02T13:00:00.000Z

If you are a business working with Vercel, you've probably found yourself in a situation where the invoices you've received from us are missing information required by your accounting department.

Today, we are changing this by providing you with ways to configure those missing fields.

Read more

Andy Schneider Leo Lamprecht
https://vercel.com/blog/changelog-october-2019 October 2019 2019-10-01T13:00:00.000Z

Read more

https://vercel.com/blog/wildcard-domains Introducing Wildcard Domains 2019-09-10T13:00:00.000Z

With Vercel, you can already deploy to HTTPS-enabled subdomains of your choice.

What if you could let customers choose those subdomains (like with Slack workspaces)? Today, we're making this possible with the introduction of Wildcard Domains!

Read more

Naoyuki Kanezawa Joe Cohen Allen Hai
https://vercel.com/blog/deploy-summary Deploy Summary Integration 2019-09-03T13:00:00.000Z

Today, we're introducing Deploy Summary, a Vercel integration to augment your workflow with our GitHub and GitLab integrations even further.

Deploy Summary analyzes your pull requests and merge requests, detects changed pages, and provides a detailed preview right next to your commits:

Read more

Luc Leray Leo Lamprecht
https://vercel.com/blog/zero-config Zero Config Deployments 2019-08-07T13:00:00.000Z

A few weeks ago, we introduced Vercel as the most powerful and scalable platform for static websites and serverless functions powered by any language or framework.

This came at the expense of writing vercel.json files. Today, we are introducing Zero Config, a conventional and completely backwards-compatible approach to deployment.

Read more

Leo Lamprecht Andy Schneider Guillermo Rauch
https://vercel.com/blog/introducing-deploy-hooks Introducing Deploy Hooks 2019-07-30T13:00:00.000Z

Thanks to our first-class GitHub and GitLab Integrations, you can simply push your code to deploy with Vercel. But what if you want to create a deployment based not on a change to your source code, but on another external event, such as an update to CMS content?

Starting today, you can deploy based on any event with Deploy Hooks.
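
A Deploy Hook is just a URL that accepts an HTTP POST, so any system that can make a request can trigger a build. A minimal sketch using the global fetch available in modern Node.js (the hook URL below is a hypothetical placeholder; Vercel generates the real one in your Project Settings):

```javascript
// Hypothetical Deploy Hook URL -- copy the real one from your Project Settings.
const HOOK_URL = 'https://api.vercel.com/v1/integrations/deploy/prj_example/abc123';

// POST to the hook to queue a new deployment; the URL itself acts as the
// secret, so no request body or extra authentication is required.
async function triggerDeploy() {
  const res = await fetch(HOOK_URL, { method: 'POST' });
  return res.ok;
}
```

A CMS would typically call such a hook from its own webhook settings whenever content is published.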

Read more

Javi Velasco Sarup Banskota
https://vercel.com/blog/node-10 Node.js 10 is Now Available 2019-06-25T13:00:00.000Z

With the release of Node.js 10, features like BigInt, a stable API for native addons, and several performance improvements have found their way into production.

Today, we are enabling Node.js 10 support for new serverless Node.js functions and Next.js applications deployed using Vercel.

Read more

Steven Salat Leo Lamprecht
https://vercel.com/blog/vercel-node-helpers Helpers for Serverless Node.js Functions 2019-06-19T13:00:00.000Z

Migrating to serverless Node.js functions or creating new ones can mean that some of the tools and frameworks you used previously are not suitable anymore.

With today's feature release, we want to solve this problem by providing you with a set of default helpers exposed within your Node.js function.
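
For illustration, a function using these helpers might look like the sketch below (req.query, res.status, and res.json are among the helpers; the route and response shape are made up for this example):

```javascript
// api/greet.js -- a serverless function using the request/response helpers.
const handler = (req, res) => {
  // req.query exposes the parsed query-string parameters.
  const name = req.query.name || 'world';
  // res.status and res.json are chainable response helpers.
  res.status(200).json({ greeting: `Hello, ${name}!` });
};

module.exports = handler;
```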

Read more

Luc Leray Leo Lamprecht
https://vercel.com/blog/hackathon-winners Vercel Hackathon Winners 2019-06-07T13:00:00.000Z

We kicked off June with the first-ever Vercel Hackathon, focused on creating integrations.

The event was a phenomenal success. Over 250 participants joined us from every corner of the world to submit high quality integrations that improve their workflow. After much deliberation, our judges finally have the results, and we are thrilled to announce the winners.

Read more

Sarup Banskota Arunoda Susiripala Alyssa Rose
https://vercel.com/blog/vercel-dev-windows Windows Support for `vercel dev` 2019-05-07T13:00:00.000Z

With the release of vercel dev, we provided developers with the first single-command development environment that can handle multiple services at once.

In order to open up this opportunity to an even wider range of users, we are very pleased to announce that vercel dev supports Windows.

Read more

Nathan Rajlich Leo Lamprecht
https://vercel.com/blog/serverless-pre-rendering Introducing Serverless Pre-Rendering (SPR) 2019-05-03T13:00:00.000Z

Static websites are fast. When you deploy static frontends to Vercel, we automatically serve them from every edge of our global Smart CDN network.

But static websites are also... static. Static site generators create all your pages during the build process — all of them, all at once. Ever had to quickly fix a typo in a page, only to wait minutes or hours for your change to go live?

Today, we are introducing Serverless Pre-Rendering, an industry-defining feature of our Smart CDN network that allows you to get the best of both worlds: the speed and reliability of static, and the versatility of dynamic data rendering.
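
Under the hood, this pattern comes down to edge caching with background revalidation. A minimal sketch of a function opting into it (the header values are illustrative):

```javascript
// Serve the cached copy from the edge for 1 second, then keep serving the
// stale copy while a fresh render happens in the background
// (the stale-while-revalidate caching pattern).
const handler = (req, res) => {
  res.setHeader('Cache-Control', 's-maxage=1, stale-while-revalidate');
  res.end(`Rendered at ${new Date().toISOString()}`);
};

module.exports = handler;
```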

Read more

Juan Campa Matheus Fernandes
https://vercel.com/blog/vercel-dev Introducing `vercel dev`: Serverless, on localhost 2019-04-30T13:00:00.000Z

Vercel was born out of the idea that deploying a website could be much simpler. You only have to run a single command: vercel – that is all.

With our GitHub and GitLab integrations, we enabled deploying on every git push, and teams to manage staging and production by simply merging pull requests.

Read more

Nathan Rajlich Leo Lamprecht Steven Salat Connor Davis Sophearak Tha Sarup Banskota
https://vercel.com/blog/automatic-ssl-with-vercel-lets-encrypt Automatic SSL with Vercel and Let's Encrypt 2019-04-16T13:00:00.000Z

Our Vercel platform enables you to deploy modern websites and applications without needing any complicated server configuration. Not only do we automatically configure DNS records for your domain, we also instantly issue and renew free wildcard SSL certificates, completely hands-free.

Historically, companies have spent thousands to get their websites HTTPS-enabled. Not to mention the whole process of issuance, download, re-upload, reconfigure, restart server with downtime — it's always enormously stressful and requires significant engineering resources.

Read more

Javi Velasco Mark Glagola Ana Jovanova Sarup Banskota
https://vercel.com/blog/auto-job-cancellation-for-vercel-github Auto Job Cancellation for Vercel for GitHub 2018-11-15T13:00:00.000Z

When you connect your GitHub organization to Vercel with Vercel for GitHub, we build and deploy your app for every Git push. We call such an event a job.

For a given branch, we process each job in a queue. If multiple jobs are waiting, we pick the latest one to build. Vercel for GitHub will always give you the deployment URL for the most recent commit.

Read more

Arunoda Susiripala
https://vercel.com/blog/next6-1 Next.js 6.1 2018-06-27T13:00:00.000Z

We are proud today to introduce the production-ready Next.js 6.1, featuring:

Read more

Tim Neutkens
https://vercel.com/blog/next6 Next.js 6 and Nextjs.org 2018-05-16T13:00:00.000Z

This year, the ZEIT Day Keynote started by highlighting our Open Source projects, including showing the metrics of Next.js. With over 25,000 stars on GitHub and over 10,000 websites already powered by it, we're amazed at its growth and love seeing the increasing number of projects depending on it.

Read more

Tim Neutkens Arunoda Susiripala
https://vercel.com/blog/next5-1 Next.js 5.1: Faster Page Resolution 2018-03-26T13:00:00.000Z

We are happy to introduce Next.js 5.1, which features support for environment configuration, phases, source maps, and new Next.js plugins.

Major performance improvements are introduced: resolving pages is 102x faster, and error pages are loaded more efficiently.

Read more

Tim Neutkens Arunoda Susiripala
https://vercel.com/blog/next5 Next.js 5: Universal Webpack, CSS Imports, Plugins and Zones 2018-02-05T13:00:00.000Z

We are very happy to introduce Next.js 5.0 to the world. It’s available on npm effective immediately.

Read more

Tim Neutkens Arunoda Susiripala
https://vercel.com/blog/next-canary Towards Next.js 5: Introducing Canary Updates 2017-11-15T13:00:00.000Z

On the heels of the announcements of canary releases for Hyper, Now CLI, and Now Desktop, we are glad to announce the immediate availability of a canary channel for Next.js.

In addition, we are excited to share some of the goals we are currently working on towards the release of Next.js 5!

Read more

Tim Neutkens Arunoda Susiripala
https://vercel.com/blog/next4 Next.js 4: React 16 and styled-jsx 2 2017-10-09T13:00:00.000Z

We are happy to introduce Next.js 4, which features support for React 16 and introduces a major upgrade for the default styling engine styled-jsx with support for dynamic styles.

Read more

Tim Neutkens Giuseppe Gurgone Arunoda Susiripala
https://vercel.com/blog/next3 Next.js 3.0 2017-08-08T13:00:00.000Z

We are very excited to announce the stable release of Next.js 3.0. Ever since our beta announcement, we have been using it to power vercel.com and have received lots of feedback and contributions from our community.

Let’s walk through what’s been improved and what’s altogether new, or fetch the latest version from npm!

New to Next.js? Next.js is a zero-configuration, single-command toolchain for React apps, with built-in server-rendering, code-splitting and more. Check out Learn Next.js to get started!

Read more

Arunoda Susiripala Tim Neutkens
https://vercel.com/blog/next3-preview Next 3.0 Preview: Static Exports and Dynamic Imports 2017-05-15T13:00:00.000Z

On the heels of our announcement of free static deployments earlier today, we are excited to introduce a beta release of the upcoming Next.js 3.0, featuring next export, dynamic components and various bugfixes.

Read more

Arunoda Susiripala
https://vercel.com/blog/next2 Next.js 2.0 2017-03-27T13:00:00.000Z

More than 3.1 million developers read our announcement post of Next.js. More than 110 contributors have submitted patches, examples or improved our documentation. Over 10,000 developers have starred us on GitHub.

Today, we are proud to introduce Next 2.0 to the world. What follows is a quick summary of every new feature and improvement we have made.

Read more

Arunoda Susiripala Naoyuki Kanezawa Tim Neutkens
https://vercel.com/blog/next Next.js 2016-10-25T13:00:00.000Z

We're very proud to open-source Next.js, a small framework for server-rendered universal JavaScript webapps, built on top of React, Webpack and Babel, which powers this very site!

Read more

Naoyuki Kanezawa Guillermo Rauch Tony Kovanen