
Consolidate PR workflows into single CI workflow#11531

Merged
loks0n merged 6 commits into 1.8.x from ci/single-pr-workflow
Mar 12, 2026

Conversation

@loks0n
Member

@loks0n loks0n commented Mar 12, 2026

Summary

  • Consolidate linter.yml, static-analysis.yml, tests.yml, benchmark.yml, check-dependencies.yml, and pr-scan.yml into a single ci.yml workflow
  • Structured job naming convention:
    • Checks / Format, Checks / Analyze, Checks / Dependencies, Checks / Image
    • Tests / Unit, Tests / E2E / General, Tests / E2E / {db} ({mode}) / {service}, Tests / E2E / Abuse ({mode}), Tests / E2E / Screenshots ({mode})
    • Benchmark
  • Shared Docker image build between tests and benchmark (no duplicate builds)
  • Replace Trivy custom PR comment with SARIF upload to GitHub Security tab
  • Upgrade osv-scanner to reusable-pr workflow v2.3.3 (drop unused merge_group trigger)
  • Update actions to latest versions ([email protected], github-script@v8, upload-artifact@v7, stale@v10)
  • Remove no-op pull_request trigger from ai-moderator
  • Fix format job git checkout HEAD^2 to only run on pull_request events
  • Fix mislabeled "CodeQL" step name (actually runs PHPStan)
  • Add Docker Hub login to benchmark job
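The bullets above can be sketched as a single workflow skeleton. This is a hedged illustration of the naming convention and shared-image wiring, not the actual ci.yml: the job IDs, step bodies, and image tag are assumptions.

```yaml
name: CI

on:
  pull_request:
  workflow_dispatch:

jobs:
  format:
    name: Checks / Format          # surfaces as "CI / Checks / Format" in the Actions UI
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 2
      # Merge commits only exist for pull_request events (the HEAD^2 fix above)
      - if: github.event_name == 'pull_request'
        run: git checkout HEAD^2

  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      # Build once; tests and benchmark reuse this image instead of rebuilding
      - run: docker build -t appwrite-dev .

  unit:
    name: Tests / Unit
    runs-on: ubuntu-latest
    needs: setup
    steps:
      - run: echo "run unit tests against the shared image"

  benchmark:
    name: Benchmark
    runs-on: ubuntu-latest
    needs: setup
    steps:
      - run: echo "run benchmark against the shared image"
```

The `name:` fields give the slash-separated display names; a matrix job would extend the pattern with expressions such as `Tests / E2E / ${{ matrix.db }}`.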

Kept separate

  • ai-moderator.yml — multi-event triggers (issues, discussions, comments)
  • cleanup-cache.yml — PR closed lifecycle
  • codeql-analysis.yml — push to master + schedule
  • nightly.yml — scheduled security scan

Test plan

  • Verify CI workflow triggers correctly on PR
  • Confirm all job names display correctly in GitHub Actions UI
  • Validate checks, tests, and benchmark jobs complete successfully
  • Verify Trivy results appear in GitHub Security tab

🤖 Generated with Claude Code

Merge linter, static-analysis, tests, and benchmark workflows into ci.yml
with structured job naming (Checks / Format, Tests / E2E / ..., etc.).
Shared Docker image build between tests and benchmark. Update actions to
latest versions.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@coderabbitai
Contributor

coderabbitai bot commented Mar 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

This pull request removes several standalone GitHub Actions workflows (benchmark.yml, linter.yml, static-analysis.yml, check-dependencies.yml, pr-scan.yml), removes the pull_request trigger from ai-moderator.yml, and updates stale.yml to use actions/stale@v10. It also significantly rewrites .github/workflows/ci.yml: renaming it to "CI", changing the concurrency grouping, adding top-level jobs (dependencies, security/image, format, analyze, benchmark), renaming and reorganizing the build, unit, matrix, and E2E jobs, and integrating benchmark execution, artifact handling, and PR comment logic into the CI workflow.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'Consolidate PR workflows into single CI workflow' accurately summarizes the primary change: consolidating multiple workflow files into a single ci.yml file. |
| Description check | ✅ Passed | The description comprehensively explains the consolidation objectives, structural changes, job naming conventions, and actions updates directly related to the changeset. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


@github-actions

github-actions bot commented Mar 12, 2026

Security Scan Results for PR

Docker Image Scan Results

Package Version Vulnerability Severity
imagemagick 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-c++ 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-c++ 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-dev 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-dev 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-heic 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-heic 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-jpeg 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-jxl 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-jxl 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-libs 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-libs 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-tiff 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-tiff 7.1.2.13-r0 CVE-2026-27798 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25897 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-25898 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-25968 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-25971 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-25983 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-25986 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-25987 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-26284 CRITICAL
imagemagick-webp 7.1.2.13-r0 CVE-2026-24481 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-24485 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25794 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25795 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25796 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25798 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25799 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25965 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25966 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25967 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25969 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25970 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25985 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25988 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-25989 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-26066 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-26283 HIGH
imagemagick-webp 7.1.2.13-r0 CVE-2026-27798 HIGH
libecpg 18.1-r0 CVE-2026-2004 HIGH
libecpg 18.1-r0 CVE-2026-2005 HIGH
libecpg 18.1-r0 CVE-2026-2006 HIGH
libecpg 18.1-r0 CVE-2026-2007 HIGH
libecpg-dev 18.1-r0 CVE-2026-2004 HIGH
libecpg-dev 18.1-r0 CVE-2026-2005 HIGH
libecpg-dev 18.1-r0 CVE-2026-2006 HIGH
libecpg-dev 18.1-r0 CVE-2026-2007 HIGH
libheif 1.20.2-r1 CVE-2025-68431 HIGH
libpng 1.6.54-r0 CVE-2026-25646 HIGH
libpng-dev 1.6.54-r0 CVE-2026-25646 HIGH
libpq 18.1-r0 CVE-2026-2004 HIGH
libpq 18.1-r0 CVE-2026-2005 HIGH
libpq 18.1-r0 CVE-2026-2006 HIGH
libpq 18.1-r0 CVE-2026-2007 HIGH
libpq-dev 18.1-r0 CVE-2026-2004 HIGH
libpq-dev 18.1-r0 CVE-2026-2005 HIGH
libpq-dev 18.1-r0 CVE-2026-2006 HIGH
libpq-dev 18.1-r0 CVE-2026-2007 HIGH
postgresql18-dev 18.1-r0 CVE-2026-2004 HIGH
postgresql18-dev 18.1-r0 CVE-2026-2005 HIGH
postgresql18-dev 18.1-r0 CVE-2026-2006 HIGH
postgresql18-dev 18.1-r0 CVE-2026-2007 HIGH
zlib 1.3.1-r2 CVE-2026-22184 CRITICAL
zlib-dev 1.3.1-r2 CVE-2026-22184 CRITICAL

Source Code Scan Results

🎉 No vulnerabilities found!

@greptile-apps
Contributor

greptile-apps bot commented Mar 12, 2026

Greptile Summary

This PR consolidates four separate CI workflow files (linter.yml, static-analysis.yml, tests.yml, benchmark.yml) into a single ci.yml workflow, adds structured job naming, shares the Docker image build between all test and benchmark jobs to eliminate duplicate builds, and bumps several actions to their latest major versions.

Key changes:

  • linter.yml merged as Checks / Format job; static-analysis.yml merged as Checks / Analyze job
  • benchmark.yml merged as Benchmark job, now depending on the shared setup job instead of building its own image
  • actions/github-script bumped from v7 → v8; actions/upload-artifact bumped from v6 → v7; actions/stale bumped from v9 → v10
  • Concurrency group renamed from ${{ github.workflow }}-${{ github.ref }} to ci-${{ github.ref }} to avoid stale group names after the rename
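The renamed concurrency group from the last bullet would read roughly as follows in ci.yml; the cancel-in-progress setting is an assumption for illustration, not something stated in the diff:

```yaml
concurrency:
  # The "ci-" prefix stays stable even if the workflow's display name changes again
  group: ci-${{ github.ref }}
  cancel-in-progress: true  # assumed: cancel superseded runs on the same ref
```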

Issues found:

  • The format job unconditionally executes git checkout HEAD^2, which assumes a pull-request merge commit is checked out. This will always fail when the workflow is triggered via workflow_dispatch, because no merge commit exists in that context. The original linter.yml was scoped to [pull_request] only, so this is a regression introduced by merging into the combined workflow.
  • CACHE_KEY uses github.event.pull_request.head.sha, which is empty for workflow_dispatch triggers. This results in the key appwrite-dev-, causing potential cache collisions between concurrent manual runs and possible reuse of stale images.

Confidence Score: 3/5

  • Safe to merge for normal PR workflows, but workflow_dispatch triggers will fail on the format job and may produce unreliable caches.
  • The consolidation is well-structured and the pull_request path is functionally equivalent to the old separate workflows. However, two logic bugs affect the workflow_dispatch trigger path: the git checkout HEAD^2 step will error out (breaking the format job), and the empty CACHE_KEY can cause cross-run cache collisions. Both are straightforward to fix but should be addressed before merging.
  • ci.yml — specifically the format job's git checkout HEAD^2 step (line 32) and the workflow-level CACHE_KEY env var (line 10).

Important Files Changed

Filename Overview
.github/workflows/ci.yml New consolidated CI workflow (renamed from tests.yml) adding format, analyze, and benchmark jobs; contains two logic bugs: git checkout HEAD^2 in the format job will fail on workflow_dispatch, and CACHE_KEY resolves to an empty string for non-pull_request triggers.
.github/workflows/benchmark.yml Deleted — benchmark logic migrated into ci.yml's benchmark job with the shared Docker image cache, eliminating a duplicate build.
.github/workflows/linter.yml Deleted — linter logic migrated into ci.yml's format job; migration is functionally equivalent for pull_request events but introduces a regression for workflow_dispatch.
.github/workflows/static-analysis.yml Deleted — CodeQL and locale-check logic migrated into ci.yml's analyze job; no issues introduced.
.github/workflows/stale.yml Minor update: bumps actions/stale from v9 to v10; no logic changes.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    PR[pull_request / workflow_dispatch] --> F[Checks / Format]
    PR --> A[Checks / Analyze]
    PR --> M[Tests / Matrix]
    PR --> S[Setup - Build & Cache Docker Image]

    S --> U[Tests / Unit]
    S --> EG[Tests / E2E / General]
    S & M --> ES[Tests / E2E / Service matrix]
    S & M --> EA[Tests / E2E / Abuse matrix]
    S & M --> ESC[Tests / E2E / Screenshots matrix]
    S --> B[Benchmark]

    style F fill:#f9a,stroke:#c66
    style B fill:#adf,stroke:#36a

Comments Outside Diff (1)

  1. .github/workflows/ci.yml, line 10 (link)

    Empty cache key on workflow_dispatch

    CACHE_KEY is built from github.event.pull_request.head.sha, which is only populated for pull_request events. For a workflow_dispatch run this expression evaluates to an empty string, producing the key appwrite-dev-. Two consequences:

    1. All concurrent manual-dispatch runs share the same key, so they can race and overwrite each other's cached Docker image.
    2. A stale image cached by a prior PR run (that happened to produce the same bare suffix) could be silently reused.

    Consider falling back to github.sha when the pull-request context is absent so that every triggered run gets a unique, meaningful key.
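The fallback suggested above could be expressed in the workflow-level env block like this (the IMAGE value and key prefix follow the comment; the `||` expression is the proposed fix, not the current code):

```yaml
env:
  IMAGE: appwrite-dev
  # pull_request runs use the PR head SHA; workflow_dispatch falls back to github.sha
  CACHE_KEY: appwrite-dev-${{ github.event.pull_request.head.sha || github.sha }}
```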

Last reviewed commit: aecca2f

        with:
          fetch-depth: 2

      - run: git checkout HEAD^2
Contributor


git checkout HEAD^2 fails on workflow_dispatch

The format job unconditionally runs git checkout HEAD^2, which assumes the checked-out commit is a GitHub-generated merge commit (as is the case for pull_request events). When the workflow is triggered via workflow_dispatch, GitHub checks out the branch HEAD directly — there is no merge commit, so HEAD^2 does not exist and the step will fail with a fatal error.

The original linter.yml was only triggered on [pull_request], so this was never an issue there. Merging it into the combined ci.yml — which also has a workflow_dispatch trigger — introduces this regression.

Fix by guarding the step (or the whole job) with an event-name condition:

Suggested change
-      - run: git checkout HEAD^2
+      - run: git checkout HEAD^2
+        if: github.event_name == 'pull_request'

Alternatively, add if: github.event_name == 'pull_request' at the job level so the entire format job is skipped when triggered manually.
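The job-level variant would look roughly like this (the job ID and surrounding steps are illustrative, not the actual ci.yml contents):

```yaml
  format:
    name: Checks / Format
    runs-on: ubuntu-latest
    # Skips the entire job on workflow_dispatch, so HEAD^2 is never resolved
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 2
      - run: git checkout HEAD^2
```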

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (3)
.github/workflows/ci.yml (3)

512-518: Third-party apt repository introduces supply chain risk.

Installing oha from packages.azlux.fr adds an external dependency that could become unavailable or compromised. Consider using the official release from GitHub or caching the binary.

♻️ Alternative: Install from GitHub releases
       - name: Install Oha
         run: |
-          echo "deb [signed-by=/usr/share/keyrings/azlux-archive-keyring.gpg] http://packages.azlux.fr/debian/ stable main" | sudo tee /etc/apt/sources.list.d/azlux.list
-          sudo wget -O /usr/share/keyrings/azlux-archive-keyring.gpg https://azlux.fr/repo.gpg
-          sudo apt update
-          sudo apt install oha
+          OHA_VERSION="1.4.0"
+          curl -sL "https://github.com/hatoo/oha/releases/download/v${OHA_VERSION}/oha-linux-amd64" -o oha
+          chmod +x oha
+          sudo mv oha /usr/local/bin/
           oha --version

505-510: Consider using health checks instead of sleep 10.

Hardcoded sleeps are fragile—services may start faster or slower depending on the runner. Other jobs in this workflow use explicit wait loops or --wait flag. For consistency and reliability, consider a health check approach.

♻️ Proposed refactor
       - name: Load and Start Appwrite
         run: |
           sed -i 's/traefik/localhost/g' .env
           docker load --input /tmp/${{ env.IMAGE }}.tar
-          docker compose up -d
-          sleep 10
+          docker compose up -d --wait

Or if --wait isn't suitable, add a health check loop:

           docker compose up -d
-          sleep 10
+          until curl -sf http://localhost/v1/health/version > /dev/null; do
+            echo "Waiting for Appwrite..."
+            sleep 2
+          done

533-534: Same sleep 10 issue when starting the latest version.

This also uses a hardcoded sleep. Apply consistent health check logic here as well.

♻️ Proposed refactor
           docker compose up -d
-          sleep 10
+          until curl -sf http://localhost/v1/health/version > /dev/null; do
+            echo "Waiting for Appwrite..."
+            sleep 2
+          done
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/ci.yml:
- Around line 51-54: The step named "Run CodeQL" is misleading because it
executes "composer check" which runs PHPStan; rename the workflow step to
accurately reflect this (e.g., "Run PHPStan" or "Run PHPStan (composer check)")
so reviewers and logs are correct; update the step name string that currently
reads "Run CodeQL" in the job that runs the docker command ("docker run ...
composer check") and leave the command itself unchanged.
- Line 32: The step that runs the command git checkout HEAD^2 will fail when the
workflow is triggered by workflow_dispatch because HEAD^2 doesn't exist; update
the CI workflow to either guard that specific step with a conditional so it only
runs for PR events (add an if: github.event_name == 'pull_request' on the step
that runs git checkout HEAD^2) or restrict the job/workflow to run only on
pull_request events (adjust the on: triggers to include only pull_request),
ensuring the git checkout HEAD^2 command is never executed for workflow_dispatch
triggers.
- Around line 488-503: The benchmark job is missing the Docker Hub login step
which other jobs have; add a step in the "benchmark" job (before any image pulls
or cache usage) that runs the Docker Hub login using the docker/login-action
(e.g., docker/login-action@v2) and read credentials from the existing secrets
(DOCKERHUB_USERNAME and DOCKERHUB_TOKEN) so the job uses the same authenticated
pull behavior as the unit/e2e_general/e2e_service jobs and avoids rate limits.
- Around line 552-554: The jq calls are inconsistent: the PR side uses the safe
operator `tonumber?` but the latest side uses `tonumber` which can fail on
malformed/absent values; update the jq expressions that parse numeric fields
(e.g. the '.summary.requestsPerSec|tonumber' occurrence and any other
'.summary.requestsPerSec|tonumber' uses for benchmark-latest.json) to use the
safe variant 'tonumber?' so both benchmark.json and benchmark-latest.json use
identical, error-tolerant parsing; leave the existing '.latencyPercentiles.p99'
and status code tostring logic unless you intentionally want similar resilience
there.

In @.github/workflows/stale.yml:
- Line 12: The workflow currently uses "uses: actions/stale@v10" which requires
GitHub Actions runner v2.327.1+; either change the action reference back to a v9
tag (e.g., actions/stale@v9) to avoid the Node.js 24 runtime requirement or
ensure your self-hosted runners are upgraded to runner v2.327.1 or later; update
the workflow to use the chosen approach and add a brief comment explaining the
decision so future reviewers see why "uses: actions/stale@v10" was replaced or
why runner upgrades are required.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3cc0a8a0-eb76-4f0c-9b9f-2da4aedef6ff

📥 Commits

Reviewing files that changed from the base of the PR and between 7e4ec9e and aecca2f.

📒 Files selected for processing (5)
  • .github/workflows/benchmark.yml
  • .github/workflows/ci.yml
  • .github/workflows/linter.yml
  • .github/workflows/stale.yml
  • .github/workflows/static-analysis.yml
💤 Files with no reviewable changes (3)
  • .github/workflows/benchmark.yml
  • .github/workflows/linter.yml
  • .github/workflows/static-analysis.yml

Comment on lines +488 to +503
  benchmark:
    name: Benchmark
    runs-on: ubuntu-latest
    needs: setup
    permissions:
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Load Cache
        uses: actions/cache@v5
        with:
          key: ${{ env.CACHE_KEY }}
          path: /tmp/${{ env.IMAGE }}.tar
          fail-on-cache-miss: true

⚠️ Potential issue | 🟡 Minor

Missing Docker Hub login step in benchmark job.

Other jobs (unit, e2e_general, e2e_service, etc.) include a "Login to Docker Hub" step to avoid rate limits. The benchmark job omits this, which could cause pull failures if rate limited.

🛠️ Proposed fix: Add Docker Hub login
   benchmark:
     name: Benchmark
     runs-on: ubuntu-latest
     needs: setup
     permissions:
       pull-requests: write
     steps:
       - name: Checkout repository
         uses: actions/checkout@v6

       - name: Load Cache
         uses: actions/cache@v5
         with:
           key: ${{ env.CACHE_KEY }}
           path: /tmp/${{ env.IMAGE }}.tar
           fail-on-cache-miss: true

+      - name: Login to Docker Hub
+        uses: docker/login-action@v4
+        with:
+          username: ${{ vars.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+
       - name: Load and Start Appwrite
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 488 - 503, The benchmark job is
missing the Docker Hub login step which other jobs have; add a step in the
"benchmark" job (before any image pulls or cache usage) that runs the Docker Hub
login using the docker/login-action (e.g., docker/login-action@v2) and read
credentials from the existing secrets (DOCKERHUB_USERNAME and DOCKERHUB_TOKEN)
so the job uses the same authenticated pull behavior as the
unit/e2e_general/e2e_service jobs and avoids rate limits.

Comment on lines +552 to +554
echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | " >> benchmark.txt
echo "| 200 | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | " >> benchmark.txt
echo "| P99 | $(jq -r '.latencyPercentiles.p99' benchmark.json ) | $(jq -r '.latencyPercentiles.p99' benchmark-latest.json ) | " >> benchmark.txt

⚠️ Potential issue | 🟡 Minor

Inconsistent error handling in jq expressions between PR and latest benchmarks.

Line 543 uses tonumber? (with error suppression) for the PR benchmark, but line 552 uses tonumber (without ?) for the latest benchmark. If the JSON structure differs or contains unexpected data, the PR version will handle it gracefully while the latest version will fail the step.

🛠️ Proposed fix: Use consistent error handling
-          echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | "  >> benchmark.txt
+          echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | "  >> benchmark.txt
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | " >> benchmark.txt
echo "| 200 | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | " >> benchmark.txt
echo "| P99 | $(jq -r '.latencyPercentiles.p99' benchmark.json ) | $(jq -r '.latencyPercentiles.p99' benchmark-latest.json ) | " >> benchmark.txt
echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | " >> benchmark.txt
echo "| 200 | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | " >> benchmark.txt
echo "| P99 | $(jq -r '.latencyPercentiles.p99' benchmark.json ) | $(jq -r '.latencyPercentiles.p99' benchmark-latest.json ) | " >> benchmark.txt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 552 - 554, The jq calls are
inconsistent: the PR side uses the safe operator `tonumber?` but the latest side
uses `tonumber` which can fail on malformed/absent values; update the jq
expressions that parse numeric fields (e.g. the
'.summary.requestsPerSec|tonumber' occurrence and any other
'.summary.requestsPerSec|tonumber' uses for benchmark-latest.json) to use the
safe variant 'tonumber?' so both benchmark.json and benchmark-latest.json use
identical, error-tolerant parsing; leave the existing '.latencyPercentiles.p99'
and status code tostring logic unless you intentionally want similar resilience
there.
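The difference between the strict and safe jq conversions discussed above is easy to demonstrate (the sample JSON is made up for illustration; assumes `jq` is installed):

```shell
# A field that parses cleanly: both forms agree.
echo '{"summary":{"requestsPerSec":"1650.4"}}' > pr.json
jq -r '.summary.requestsPerSec|tonumber|floor'  pr.json   # 1650
jq -r '.summary.requestsPerSec|tonumber?|floor' pr.json   # 1650

# A missing field: strict `tonumber` errors out and fails the step,
# while `tonumber?` yields an empty stream that `// 0` can default.
echo '{"summary":{}}' > latest.json
jq -r '.summary.requestsPerSec|tonumber|floor' latest.json 2>/dev/null \
  || echo "strict conversion failed"
jq -r '.summary.requestsPerSec|tonumber? // 0' latest.json              # 0
```

Because the benchmark step runs both jq invocations inside one `$( ... )` per `echo`, a single strict failure aborts the whole reporting line, which is why the safe variant matters on both sides.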

@github-actions

github-actions bot commented Mar 12, 2026

🔄 PHP-Retry Summary

Flaky tests detected across commits:

Commit aecca2f - 3 flaky tests

| Test | Retries | Total Time | Details |
| --- | --- | --- | --- |
| UsageTest::testDatabaseStatsCollectionsAPI | 1 | 10.27s | Logs |
| LegacyTransactionsCustomServerTest::testUpsertDocument | 1 | 240.90s | Logs |
| LegacyTransactionsCustomServerTest::testBulkUpdateOperations | 1 | 240.16s | Logs |

Commit e67ed26 - 1 flaky test

| Test | Retries | Total Time | Details |
| --- | --- | --- | --- |
| UsageTest::testDatabaseStatsCollectionsAPI | 1 | 10.32s | Logs |

Commit 26326d0 - 3 flaky tests

| Test | Retries | Total Time | Details |
| --- | --- | --- | --- |
| UsageTest::testDatabaseStatsCollectionsAPI | 1 | 10.37s | Logs |
| TablesDBConsoleClientTest::testNotBetween | 1 | 240.74s | Logs |
| TablesDBTransactionsConsoleClientTest::testDoubleCommitRollback | 1 | 240.32s | Logs |

Commit e99f682 - 1 flaky test

| Test | Retries | Total Time | Details |
| --- | --- | --- | --- |
| UsageTest::testDatabaseStatsCollectionsAPI | 1 | 10.35s | Logs |

Commit 1abbca9 - 2 flaky tests

| Test | Retries | Total Time | Details |
| --- | --- | --- | --- |
| UsageTest::testDatabaseStatsCollectionsAPI | 1 | 10.10s | Logs |
| FunctionsCustomServerTest::testErrorPages | 1 | 243ms | Logs |

Note: Flaky test results are tracked for the last 5 commits

@github-actions

github-actions bot commented Mar 12, 2026

✨ Benchmark results

  • Requests per second: 1,650
  • Requests with 200 status code: 297,086
  • P99 latency: 0.101470396

⚡ Benchmark Comparison

| Metric | This PR | Latest version |
| --- | --- | --- |
| RPS | 1,650 | 1,126 |
| 200 | 297,086 | 202,642 |
| P99 | 0.101470396 | 0.198673955 |
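The `[while(...)]|reverse|join(",")` pipeline behind these grouped numbers looks opaque, but it is just thousands-grouping. Applied to a bare number (assumes `jq` is installed):

```shell
# while(cond; update) emits the string, then each result of repeatedly
# dropping the last three characters; `.[-3:]` keeps the three characters
# each step would drop, and reverse+join reassembles them with commas.
echo '297086' | jq -r 'tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")'
# prints 297,086
```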

loks0n and others added 2 commits March 12, 2026 12:56
- Move check-dependencies into ci.yml as Checks / Dependencies
  (upgrade to [email protected], drop merge_group)
- Move pr-scan into ci.yml as Checks / Image
  (upgrade Trivy to 0.33.1, use SARIF + upload-sarif instead of
  custom PR comment logic)
- Rename Setup job to Build
- Fix format job git checkout HEAD^2 to only run on pull_request
- Rename PHPStan step correctly (was mislabeled CodeQL)
- Add Docker Hub login to benchmark job
- Remove no-op pull_request trigger from ai-moderator

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@github-advanced-security

You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool.

What Enabling Code Scanning Means:

  • The 'Security' tab will display more code scanning analysis results (e.g., for the default branch).
  • Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results.
  • You will be able to see the analysis results for the pull request's branch on this overview once the scans have completed and the checks have passed.

For more information about GitHub Code Scanning, check out the documentation.

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.github/workflows/ci.yml (2)

12-33: ⚠️ Potential issue | 🟠 Major

Add merge_group trigger for dependency scanning and restore pull_request_target for dedicated security boundary.

The consolidation removed both the merge_group dependency scan (needed for merge queue protection) and the pull_request_target security boundary. These triggers are absent from all workflows—including nightly.yml, which only runs on schedule. The current ci.yml covers only pull_request events and manual dispatch, leaving merge queue operations without OSV scanning and removing the designated PR security context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 12 - 33, The workflow currently only
triggers on pull_request and workflow_dispatch which removed the
merge_queue/merge_group scan and the pull_request_target security context;
update the on: block to include both pull_request_target and merge_group
triggers (e.g., add merge_group alongside pull_request and add
pull_request_target) so the dependencies job (name: Checks / Dependencies) and
security job (name: Checks / Image) run under the dedicated PR security boundary
and during merge queue operations, and ensure nightly.yml still contains its
schedule trigger unchanged.

10-10: ⚠️ Potential issue | 🟠 Major

Use a non-PR fallback for CACHE_KEY.

On workflow_dispatch, github.event.pull_request.head.sha is undefined, so manual runs all share the same cache key. This causes test and benchmark jobs to reuse build artifacts from different revisions.

🛠️ Proposed fix
-  CACHE_KEY: appwrite-dev-${{ github.event.pull_request.head.sha }}
+  CACHE_KEY: appwrite-dev-${{ github.event.pull_request.head.sha || github.sha }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml at line 10, CACHE_KEY currently uses
github.event.pull_request.head.sha which is undefined for workflow_dispatch
runs; change the assignment so it falls back to a non-PR SHA (e.g., github.sha)
when pull_request data is absent. Update the CACHE_KEY expression to use
github.event.pull_request.head.sha if present, otherwise use github.sha (or
another stable identifier) so manual runs produce a unique cache key for that
commit/dispatch; ensure the symbols referenced are CACHE_KEY and
github.event.pull_request.head.sha (with fallback to github.sha).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: accc18a5-730c-4019-8f40-12878edf65cf

📥 Commits

Reviewing files that changed from the base of the PR and between aecca2f and 8d0a4d7.

📒 Files selected for processing (4)
  • .github/workflows/ai-moderator.yml
  • .github/workflows/check-dependencies.yml
  • .github/workflows/ci.yml
  • .github/workflows/pr-scan.yml
💤 Files with no reviewable changes (3)
  • .github/workflows/check-dependencies.yml
  • .github/workflows/pr-scan.yml
  • .github/workflows/ai-moderator.yml

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)

576-581: Benchmark startup lacks --wait flag unlike other jobs.

All test jobs use docker compose up -d --quiet-pull --wait to ensure services are healthy before proceeding, but the benchmark job relies on a hardcoded sleep 10. This could lead to flaky benchmarks if services aren't fully ready.

🔧 Proposed fix: Use consistent startup approach
       - name: Load and Start Appwrite
         run: |
           sed -i 's/traefik/localhost/g' .env
           docker load --input /tmp/${{ env.IMAGE }}.tar
-          docker compose up -d
-          sleep 10
+          docker compose up -d --wait
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 576 - 581, The "Load and Start
Appwrite" step currently uses a fixed sleep after running "docker compose up
-d", which can cause flakiness; update this step to start services with the same
flags used by other jobs by replacing the "docker compose up -d" invocation with
"docker compose up -d --quiet-pull --wait" and remove the hardcoded "sleep 10"
so the job waits for healthy services instead of sleeping; keep the preceding
"sed -i 's/traefik/localhost/g' .env" and "docker load --input /tmp/${{
env.IMAGE }}.tar" lines unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 61ba06f3-3b5c-4e3c-9d9a-4f5e25199fe0

📥 Commits

Reviewing files that changed from the base of the PR and between 8d0a4d7 and 26326d0.

📒 Files selected for processing (1)
  • .github/workflows/ci.yml

Comment on lines +597 to +605
      - name: Installing latest version
        run: |
          rm docker-compose.yml
          rm .env
          curl https://appwrite.io/install/compose -o docker-compose.yml
          curl https://appwrite.io/install/env -o .env
          sed -i 's/_APP_OPTIONS_ABUSE=enabled/_APP_OPTIONS_ABUSE=disabled/g' .env
          docker compose up -d
          sleep 10

⚠️ Potential issue | 🟡 Minor

Benchmark comparison may be misleading due to differing image configurations.

The PR benchmark runs against the development image built with TESTING=true and target: development, while the "latest" comparison downloads production compose files from appwrite.io/install. This creates an apples-to-oranges comparison where differences might reflect configuration variance rather than actual code performance changes.

Consider documenting this caveat in the benchmark comment output, or ensuring both benchmarks use comparable configurations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 597 - 605, The CI step "Installing
latest version" currently curls production compose/env from
https://appwrite.io/install which yields production configs, causing an
apples-to-oranges benchmark vs the PR build that uses TESTING=true and target:
development; update the workflow to make the comparison consistent by either (A)
building and running the "latest" image with the same testing/development flags
(e.g., create a comparable compose with TESTING=true and target: development or
build the image locally with the same build args) or (B) add a clear caveat to
the benchmark comment output that the "latest" run uses production compose/env
from the curl lines and therefore configurations differ; adjust the "Installing
latest version" step (the curl and docker compose up commands) or the benchmark
reporting logic accordingly so both runs use comparable configurations or the
report explicitly states the mismatch.

Comment on lines +615 to +616
echo "- Requests with 200 status code: $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json)" >> benchmark.txt
echo "- P99 latency: $(jq -r '.latencyPercentiles.p99' benchmark.json )" >> benchmark.txt

⚠️ Potential issue | 🟡 Minor

Status code distribution parsing lacks null handling.

The expressions .statusCodeDistribution."200" assume a "200" key exists. If no successful responses occurred (e.g., service failed to start), this field may be absent or null, causing the jq pipeline to output null or fail.

🛠️ Proposed fix: Add null coalescing
-          echo "- Requests with 200 status code: $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json)" >> benchmark.txt
+          echo "- Requests with 200 status code: $(jq -r '(.statusCodeDistribution."200" // 0)|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json)" >> benchmark.txt

Apply similar fix to line 624 for both benchmark files.

Also applies to: 624-625

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 615 - 616, The jq pipeline that
extracts .statusCodeDistribution."200" can return null when there are no 200
responses; update the jq expression to coalesce null to a safe default (e.g.,
use jq's // operator to replace null with 0) so the echo into benchmark.txt
always receives a numeric/string value; apply the same null-coalescing change to
the other benchmark echo line mentioned (the similar jq expression at the other
benchmark file entry) so both uses of .statusCodeDistribution."200" in the
benchmark.json -> benchmark.txt writes are protected.
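The `//` fallback suggested here behaves as follows on a payload with no 200 bucket (sample JSON is made up; assumes `jq` is installed):

```shell
# Without the fallback, the missing key surfaces as the string "null"
# in the rendered benchmark comment; with it, the column shows 0.
echo '{"statusCodeDistribution":{"500":12}}' > benchmark.json
jq -r '.statusCodeDistribution."200"'       benchmark.json   # null
jq -r '.statusCodeDistribution."200" // 0'  benchmark.json   # 0
```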

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)

576-582: Consider using --wait instead of sleep 10 for reliability.

The test jobs use docker compose up -d --quiet-pull --wait which waits for containers to be healthy. The benchmark job uses a fixed sleep 10 which may be insufficient or wasteful depending on startup time.

♻️ Suggested improvement
       - name: Load and Start Appwrite
         run: |
           sed -i 's/traefik/localhost/g' .env
           docker load --input /tmp/${{ env.IMAGE }}.tar
-          docker compose up -d
-          sleep 10
+          docker compose up -d --wait
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 576 - 582, Replace the fixed "sleep
10" wait with docker compose's built-in health wait by running "docker compose
up -d --quiet-pull --wait" in the "Load and Start Appwrite" step (the block that
currently runs sed, docker load, docker compose up -d, sleep 10) so the workflow
waits for containers to be healthy instead of sleeping a fixed duration; keep
the prior sed and docker load lines and ensure the docker compose invocation
uses --quiet-pull and --wait flags.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d08ccab3-4859-49ea-a16f-6db0cc4cc47a

📥 Commits

Reviewing files that changed from the base of the PR and between 26326d0 and e99f682.

📒 Files selected for processing (1)
  • .github/workflows/ci.yml

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/ci.yml (1)

7-11: ⚠️ Potential issue | 🟠 Major

CACHE_KEY will be empty on workflow_dispatch triggers.

The cache key uses github.event.pull_request.head.sha which is undefined for workflow_dispatch events. This will cause cache misses and potentially break jobs that depend on the cached image with fail-on-cache-miss: true.

🛠️ Proposed fix: Use a fallback for non-PR events
 env:
   COMPOSE_FILE: docker-compose.yml
   IMAGE: appwrite-dev
-  CACHE_KEY: appwrite-dev-${{ github.event.pull_request.head.sha }}
+  CACHE_KEY: appwrite-dev-${{ github.event.pull_request.head.sha || github.sha }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 7 - 11, The CACHE_KEY environment
variable uses github.event.pull_request.head.sha which is undefined for
workflow_dispatch events; update the CACHE_KEY expression to fall back to a
non-PR identifier (for example github.sha or github.run_id) when
github.event.pull_request is missing so workflow_dispatch runs produce a stable
value; modify the CACHE_KEY definition to reference CACHE_KEY and replace the
current github.event.pull_request.head.sha usage with a conditional fallback to
github.sha (or github.run_id) to avoid empty keys and cache-miss failures.
♻️ Duplicate comments (2)
.github/workflows/ci.yml (2)

623-625: ⚠️ Potential issue | 🟡 Minor

Inconsistent error handling in jq expressions between PR and latest benchmarks.

Line 623 uses tonumber? (with error suppression) for the PR benchmark RPS, but uses tonumber (without ?) for the latest benchmark. Line 624 also lacks null coalescing for .statusCodeDistribution."200" on both files. For consistency and robustness:

🛠️ Proposed fix: Use consistent error handling
-          echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | "  >> benchmark.txt
-          echo "| 200 | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | "  >> benchmark.txt
+          echo "| RPS | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '.summary.requestsPerSec|tonumber?|floor|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | "  >> benchmark.txt
+          echo "| 200 | $(jq -r '(.statusCodeDistribution."200" // 0)|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json) | $(jq -r '(.statusCodeDistribution."200" // 0)|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark-latest.json) | "  >> benchmark.txt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 623 - 625, Make the jq expressions
consistent and resilient by using the safe conversion and null-coalescing
operators: replace the plain tonumber with tonumber? for
'.summary.requestsPerSec' in both inputs, and add a fallback using the '//'
operator for '.statusCodeDistribution."200"' (and optionally
'.latencyPercentiles.p99') so missing or non-numeric values yield a safe default
(e.g., 0) instead of causing errors; update the jq invocations that reference
'.summary.requestsPerSec', '.statusCodeDistribution."200"', and
'.latencyPercentiles.p99' accordingly.

614-616: ⚠️ Potential issue | 🟡 Minor

Status code distribution parsing lacks null handling.

The expression .statusCodeDistribution."200" assumes a "200" key exists. If the service failed to start or no successful responses occurred, this field may be absent, causing unexpected output like null.

🛠️ Proposed fix: Add null coalescing
-          echo "- Requests with 200 status code: $(jq -r '.statusCodeDistribution."200"|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json)" >> benchmark.txt
+          echo "- Requests with 200 status code: $(jq -r '(.statusCodeDistribution."200" // 0)|tostring|[while(length>0;.[:-3])|.[-3:]]|reverse|join(",")' benchmark.json)" >> benchmark.txt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 614 - 616, The jq expression reading
the 200 bucket can yield null; update the echo that uses
'.statusCodeDistribution."200"' to coalesce null to a safe default (e.g. 0)
using jq's // operator so the command becomes something like
'.statusCodeDistribution["200"] // 0' (keep the existing
formatting/number-grouping pipeline). Locate the echo line that writes "-
Requests with 200 status code: $(jq -r
'.statusCodeDistribution."200"|tostring|...` and replace the inner key access
with the null-coalescing expression to ensure it outputs 0 instead of null when
the key is missing.
🧹 Nitpick comments (2)
.github/workflows/ci.yml (2)

553-559: Missing contents: read permission in benchmark job.

Other jobs that checkout the repository explicitly include contents: read permission. While this permission is often implicitly available, explicitly declaring it maintains consistency with other jobs in this workflow.

♻️ Proposed fix
   benchmark:
     name: Benchmark
     runs-on: ubuntu-latest
     needs: build
     permissions:
+      contents: read
       pull-requests: write
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 553 - 559, The benchmark job's
permissions block is missing the explicit contents: read entry; update the
benchmark job (named "benchmark") to include permissions: pull-requests: write
and contents: read so it matches other jobs and explicitly grants read access to
repository contents.

576-582: Consider using --wait instead of sleep 10 for container startup.

Other jobs in this workflow use docker compose up -d --quiet-pull --wait which properly waits for containers to be healthy. Using a fixed sleep 10 is less reliable and may cause intermittent failures if the containers take longer to start.

♻️ Proposed fix
       - name: Load and Start Appwrite
         run: |
           sed -i 's/traefik/localhost/g' .env
           docker load --input /tmp/${{ env.IMAGE }}.tar
-          docker compose up -d
-          sleep 10
+          docker compose up -d --quiet-pull --wait
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 576 - 582, Replace the fixed sleep
with Docker Compose's built-in wait: in the step where the job runs "docker
compose up -d" followed by "sleep 10", change the invocation to use "docker
compose up -d --quiet-pull --wait" (the same pattern used in other jobs) and
remove the "sleep 10" line so the job waits for container health rather than a
fixed delay.
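For context, `docker compose up --wait` blocks until every service that defines a healthcheck reports healthy (services without one only need to reach the running state), which is what makes it a reliable replacement for a fixed sleep. A minimal compose sketch, with hypothetical service, image, and endpoint names:

```yaml
services:
  appwrite:
    image: appwrite/appwrite:latest   # hypothetical image tag
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/v1/health"]
      interval: 5s
      timeout: 3s
      retries: 10
# `docker compose up -d --quiet-pull --wait` returns only once the
# healthcheck above passes, or exits non-zero after the retries run out.
```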
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In @.github/workflows/ci.yml:
- Around line 7-11: The CACHE_KEY environment variable uses
github.event.pull_request.head.sha which is undefined for workflow_dispatch
events; update the CACHE_KEY expression to fall back to a non-PR identifier (for
example github.sha or github.run_id) when github.event.pull_request is missing
so workflow_dispatch runs produce a stable value; modify the CACHE_KEY
definition to reference CACHE_KEY and replace the current
github.event.pull_request.head.sha usage with a conditional fallback to
github.sha (or github.run_id) to avoid empty keys and cache-miss failures.
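GitHub Actions expressions support `||` as a fallback operator, so one way to sketch the suggested change (assuming the `env` block sits at the workflow level, as the prompt implies):

```yaml
env:
  # head.sha is only populated on pull_request events; fall back to
  # github.sha so workflow_dispatch runs still get a stable cache key.
  CACHE_KEY: ${{ github.event.pull_request.head.sha || github.sha }}
```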

---

Duplicate comments:
In @.github/workflows/ci.yml:
- Around line 623-625: Make the jq expressions consistent and resilient by using
the safe conversion and null-coalescing operators: replace the plain tonumber
with tonumber? for '.summary.requestsPerSec' in both inputs, and add a fallback
using the '//' operator for '.statusCodeDistribution."200"' (and optionally
'.latencyPercentiles.p99') so missing or non-numeric values yield a safe default
(e.g., 0) instead of causing errors; update the jq invocations that reference
'.summary.requestsPerSec', '.statusCodeDistribution."200"', and
'.latencyPercentiles.p99' accordingly.
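To see why `tonumber?` pairs well with `//` (again with invented sample JSON): the `?` suppresses the error that `tonumber` raises on a non-numeric value, producing an empty stream that `//` then replaces with the default:

```shell
# Non-numeric value: tonumber errors, ? swallows it, // supplies 0.
echo '{"summary":{"requestsPerSec":"n/a"}}' \
  | jq -r '(.summary.requestsPerSec | tonumber?) // 0'
# → 0

# Numeric string: converted and passed through.
echo '{"summary":{"requestsPerSec":"1234.5"}}' \
  | jq -r '(.summary.requestsPerSec | tonumber?) // 0'
# → 1234.5
```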


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 46061c04-b8f5-4b15-bf64-972c4cbf4b27

📥 Commits

Reviewing files that changed from the base of the PR and between e99f682 and 1abbca9.

📒 Files selected for processing (1)
  • .github/workflows/ci.yml

@loks0n merged commit d8f7337 into 1.8.x on Mar 12, 2026
36 of 37 checks passed
