fix(bun): Consume fetch response body to prevent memory leak #19927

Open
Karavil wants to merge 4 commits into getsentry:develop from Karavil:fix/bun-transport-consume-response-body
Conversation

@Karavil Karavil commented Mar 21, 2026

Summary

Bun's fetch retains the backing ArrayBuffer of unconsumed response bodies indefinitely. The Bun transport's makeFetchTransport calls fetch() but never consumes the response body, causing a memory leak when sending Sentry envelopes.

This is the same class of bug that was fixed for the Cloudflare transport in #18545. We improve on that fix by using void response.text().catch(() => {}) instead of await response.text(): the void approach drains the body asynchronously, adding zero latency to Sentry envelope sends. We don't need the response content; we just need to trigger the drain so Bun releases the backing ArrayBuffer.

Bun fetch memory leak: documented and ongoing

Bun has a well-documented history of fetch response bodies not being garbage collected. Despite multiple fixes across releases, the core problem persists for the simplest case (unconsumed bodies):

Merged fixes (proving the problem exists)

| PR | Description | Date |
| --- | --- | --- |
| oven-sh/bun#10933 | fix(fetch): allow Response to be GC'd before all request body received | Jun 2024 |
| oven-sh/bun#23313 | refactor(Response): isolate body usage (fetch memory leak fix) | 2025 |
| oven-sh/bun#25846 | fix(fetch): fix ReadableStream memory leak when using stream body | Jan 2026 |
| oven-sh/bun#25965 | fix(http): fix Strong reference leak in server response streaming | Jan 2026 |
| oven-sh/bun#27191 | fix: release ReadableStream Strong ref on fetch body cancel (260KB leaked per cancelled request) | Feb 2026 |

Still-open issues

| Issue | Description |
| --- | --- |
| oven-sh/bun#20912 | Large fetch request count causes RSS to grow to 2.46GB then crash |
| oven-sh/bun#27358 | RSS retention on Bun 1.3.9/1.3.10 with fetch + TLS |
| oven-sh/bun#10763 | Partially-read fetch response bodies leak when cancel() is not called |

Root cause (from #10763): Bun holds a strong reference to the ReadableStream backing the response body. If never consumed, the strong ref prevents GC.
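Given that root cause, consuming the body is not the only way to drop the strong reference: cancelling the stream also marks it as disturbed and lets it be released, per the analysis in oven-sh/bun#10763. A minimal sketch of that alternative (illustrative helper name, not part of this PR):

```javascript
// Illustrative alternative to reading the body: cancel the backing
// ReadableStream so the runtime can release it without buffering the
// contents. `discardBody` is a hypothetical helper, not SDK code.
function discardBody(response) {
  if (response.body) {
    // cancel() marks the stream as disturbed synchronously; swallow any
    // rejection since we only care about releasing the reference.
    void response.body.cancel().catch(() => {});
  }
}
```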

Improvement over the Cloudflare transport fix (#18545)

The Cloudflare transport fix in #18545 used await response.text(), which blocks the transport until the body is fully read. This adds unnecessary latency to every Sentry envelope send.

Our fix uses void response.text().catch(() => {}) instead:

  • No added latency: the transport returns immediately with status code and headers
  • Body still drains: the promise runs in the background, releasing the ArrayBuffer once complete
  • No async needed: the .then() callback stays synchronous, matching the original signature
  • Defensive .catch(() => {}): if the drain fails for any reason, it doesn't crash the transport

This is a strictly better pattern for any transport where we don't need the response body content.
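The pattern can be sketched as follows (illustrative names and a simplified return shape, not the actual @sentry/bun source):

```javascript
// Sketch of the fire-and-forget drain. The transport only needs the status
// and a couple of headers; the body is drained in the background so Bun can
// free its backing ArrayBuffer. `consumeResponse` is an illustrative name.
function consumeResponse(response) {
  // Kick off the read without awaiting it; .catch() swallows drain failures
  // so a broken body stream can never reject the transport's promise chain.
  void response.text().catch(() => {});
  return {
    statusCode: response.status,
    retryAfter: response.headers.get("Retry-After"),
  };
}
```

Because `text()` marks the body as consumed as soon as it is called, the caller gets status and headers back synchronously while the drain completes in the background.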

Production evidence (heap snapshots)

We observed this leak in production on a Bun service running @sentry/bun:

| Metric | Value |
| --- | --- |
| ArrayBuffers accumulated (4h) | 130,389 (1,055 MB) |
| ArrayBuffer size | ~8 KB each (Bun's Buffer.poolSize) |
| RSS growth | 776 MB (t=0) -> 2,693 MB (t=4.84h) |
| CPU: `get buffer` accessor | 5% (t=0) -> 47% (t=4h), GC thrashing |
| OOM kills | Every ~5h, hitting 4GB container limit |

After applying this fix, the leak was eliminated.

Reproduction

```js
// repro.js — run with: bun run repro.js
// Watch RSS grow unboundedly because response bodies are never consumed

const server = Bun.serve({
  port: 3456,
  fetch() {
    return new Response("x".repeat(8192));
  },
});

let i = 0;
setInterval(async () => {
  const res = await fetch("http://localhost:3456");
  // NOT consuming res.text() or res.arrayBuffer() — just discard
  void res;
  if (++i % 100 === 0) {
    Bun.gc(true);
    console.log(
      `Requests: ${i}, RSS: ${(process.memoryUsage.rss() / 1024 / 1024).toFixed(1)}MB`
    );
  }
}, 10);
```

Expected: RSS stays flat. Observed on Bun 1.3.x: RSS grows linearly.


This PR was generated with the assistance of Claude Code and reviewed by a human.

Bun's fetch implementation retains the backing ArrayBuffer of unconsumed
response bodies indefinitely. This causes a memory leak when sending
many Sentry envelopes, as each response's ArrayBuffer accumulates in
memory.

This applies the same fix that was made for the Cloudflare transport in
getsentry#18545 — consuming the response body with response.text() after
extracting the needed headers.

In production, this leak manifests as ~8KB ArrayBuffers accumulating at
~8/sec (one per envelope), leading to OOM kills after ~5 hours on
containers with 4GB memory limits.
@github-actions github-actions bot (Contributor) commented Mar 21, 2026

Semver Impact of This PR

🟢 Patch (bug fixes)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Deps

  • Bump mongodb-memory-server-global from 10.1.4 to 11.0.1 by dependabot in #19888
  • Bump stacktrace-parser from 0.1.10 to 0.1.11 by dependabot in #19887

Bug Fixes 🐛

Core

  • Do not overwrite user provided conversation id in Vercel by nicohrubec in #19903
  • Return same value from startSpan as callback returns by s1gr1d in #19300

Deps

  • Bump next to 15.5.14 in nextjs-15 and nextjs-15-intl E2E test apps by chargome in #19917
  • Bump socket.io-parser to 4.2.6 to fix CVE-2026-33151 by chargome in #19880

Other

  • (bun) Consume fetch response body to prevent memory leak by Karavil in #19927
  • (cloudflare) Forward ctx argument to Workflow.do user callback by Lms24 in #19891
  • (craft) Add missing mainDocsUrl for @sentry/effect SDK by bc-sentry in #19860
  • (nestjs) Add node to nest metadata by chargome in #19875
  • (serverless) Add node to metadata by nicohrubec in #19878

Internal Changes 🔧

Deps Dev

  • Bump qunit-dom from 3.2.1 to 3.5.0 by dependabot in #19546
  • Bump @react-router/node from 7.13.0 to 7.13.1 by dependabot in #19544

Other

  • (astro) Re-enable server island tracing e2e test in Astro 6 by Lms24 in #19872
  • (ci) Fix "Gatbsy" typo in issue package label workflow by chargome in #19905
  • (lint) Resolve oxlint warnings by isaacs in #19893
  • (node-integration-tests) Remove unnecessary file-type dependency by Lms24 in #19824
  • (remix) Replace glob with native recursive fs walk by roli-lpci in #19531
  • (sveltekit) Replace recast + @babel/parser with acorn by roli-lpci in #19533
  • Add external contributor to CHANGELOG.md by javascript-sdk-gitflow in #19925
  • Add external contributor to CHANGELOG.md by javascript-sdk-gitflow in #19909

🤖 This preview updates automatically when you update the PR.

Use `void response.text().catch(() => {})` instead of `await response.text()`
to avoid adding latency to Sentry envelope sends. We don't need the body
content, just need to trigger the drain so Bun can free the ArrayBuffer.
@Karavil Karavil closed this Mar 21, 2026
@Karavil Karavil reopened this Mar 21, 2026
@Karavil Karavil (Author) commented Mar 21, 2026

@s1gr1d @JPeer264 — This applies the same response body consumption fix you shipped for the Cloudflare transport in #18545, now for the Bun transport. We ran into this in production: 130K leaked ArrayBuffers (1GB) causing OOM kills every 5h.

One improvement over the Cloudflare fix: we use void response.text().catch(() => {}) instead of await response.text(), so the body drains asynchronously without adding latency to envelope sends. Happy to switch to await if you prefer consistency across transports.

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

The `void response.text().catch(() => {})` pattern only handles async
promise rejections. If `response.text` is not a function, a synchronous
TypeError is thrown before `.catch()` is reached, rejecting the entire
promise chain and preventing the transport from returning status/headers.

Wrap in try/catch to handle both:
- try/catch: synchronous TypeError (response.text not a function)
- .catch(): async rejection (body read fails mid-stream)
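A sketch of the defensive wrapping the bot suggests (illustrative, not a committed patch):

```javascript
// Illustrative sketch of the defensive drain suggested above, not the
// committed patch: try/catch guards against a synchronous throw (e.g. if
// response.text is not a function), while .catch() handles async failures.
function drainResponseBody(response) {
  try {
    // Async rejections (e.g. the body read failing mid-stream) land here.
    void response.text().catch(() => {});
  } catch {
    // Synchronous throw: ignore it so the transport can still return
    // status and headers to the caller.
  }
}
```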
