Conversation

Caution
Review failed: the pull request was closed or merged during review.

📝 Walkthrough

Added OAuth-specific validation and ingress-path generation when a gateway is OAuth-enabled: the controller now requires `auth.issuerURL` when OAuth is configured.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Reconciler
    participant Validator
    participant PathBuilder
    participant K8sAPI as "Kubernetes API\n(Ingress)"
    participant MCPServer
    Reconciler->>Validator: validateGatewayConfig(MCPServer)
    Validator-->>Reconciler: ok / error (requires auth.issuerURL if OAuth)
    Reconciler->>PathBuilder: buildIngressPaths(MCPServer.spec.ingressPath, usesOAuth?)
    PathBuilder-->>Reconciler: ["/<ingressPath>", "/.well-known/oauth-protected-resource/<ingressPath>"?]
    Reconciler->>K8sAPI: createOrUpdate(Ingress with paths)
    K8sAPI-->>Reconciler: success / failure
```
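The path-generation step in the diagram can be sketched as follows (illustrative only — the controller itself is written in Go, and `build_ingress_paths` here is a hypothetical stand-in for the real function):

```python
def build_ingress_paths(ingress_path: str, uses_oauth: bool) -> list[str]:
    """Sketch of the path list the reconciler publishes for an MCPServer.

    OAuth-enabled gateways get an extra OAuth protected-resource metadata
    path alongside the server's own ingress path.
    """
    base = "/" + ingress_path.strip("/")
    paths = [base]
    if uses_oauth:
        paths.append("/.well-known/oauth-protected-resource" + base)
    return paths

print(build_ingress_paths("my-server", uses_oauth=True))
# → ['/my-server', '/.well-known/oauth-protected-resource/my-server']
```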
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/e2e/Dockerfile (1)
1-7: ⚠️ Potential issue | 🔴 Critical

Invalid Go version: `golang:1.24.11` does not exist. Go 1.24 was never released. Available golang Docker image versions skip from 1.23 to 1.25 and above. Using this base image will cause Docker build failures. Use `golang:1.25` or later instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/Dockerfile` around lines 1 - 7, Update the Dockerfile base image reference: replace the invalid "FROM golang:1.24.11" image with a valid Go release (e.g., "golang:1.25" or newer) so the Docker build succeeds; modify the top-of-file FROM line in the Dockerfile to use a supported tag and ensure any build tooling or documentation that references golang:1.24.11 is updated accordingly.
🧹 Nitpick comments (5)
test/e2e/registry.Dockerfile (1)
1-1: Pin the base image by digest for reproducible and safer e2e builds.
`registry:2.8.3` is valid, but tag-only pinning is mutable. Prefer immutable digest pinning to avoid unexpected upstream changes in CI/e2e environments.

🔒 Suggested hardening change

```diff
-FROM registry:2.8.3
+FROM registry:2.8.3@sha256:<resolved_digest>
```

You can verify and resolve the exact digest for `registry:2.8.3` from your container registry metadata before updating.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/e2e/registry.Dockerfile` at line 1, Replace the mutable base image tag in the Dockerfile by pinning the registry image to its immutable digest: locate the FROM instruction in test/e2e/registry.Dockerfile (the line containing "FROM registry:2.8.3"), look up the corresponding sha256 digest for registry:2.8.3 from the registry metadata or Docker Hub, and update the FROM to use the image@sha256:<digest> form so CI/e2e builds use an immutable, reproducible base.

test/e2e/mcp_header_proxy.py (2)
87-91: Broad exception handling is acceptable here but could be narrowed.

The `except Exception` catch is flagged by Ruff (BLE001). While acceptable for a test utility's best-effort error path, you could narrow this to specific exceptions for cleaner diagnostics:

♻️ Optional: Narrow exception types

```diff
-        except Exception as exc:  # pragma: no cover - best effort diagnostic path
+        except (OSError, http.client.HTTPException) as exc:  # pragma: no cover - best effort diagnostic path
             self.send_error(502, f"upstream request failed: {exc}")
             return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/mcp_header_proxy.py` around lines 87 - 91, The broad except Exception in the upstream request handler should be narrowed to the relevant error types to satisfy Ruff BLE001 and provide clearer diagnostics: replace the generic except with specific exceptions (e.g., requests.RequestException / urllib3 exceptions, socket.error or OSError, and TimeoutError as appropriate for the client you're using) around the block that calls the upstream and calls self.send_error and conn.close; ensure the same error message is passed to self.send_error and keep the finally conn.close() behavior intact so resources are still released.
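For context, a narrowed handler could look like the sketch below (assuming the proxy is built on the standard library's `http.client` and a `BaseHTTPRequestHandler`-style `send_error`; the helper name `forward_upstream` is hypothetical):

```python
import http.client


def forward_upstream(handler, host: str, port: int, method: str, path: str,
                     body: bytes, headers: dict, timeout: float = 30.0) -> None:
    """Forward one request upstream; report only transport failures as 502."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request(method, path, body=body, headers=headers)
        resp = conn.getresponse()
        handler.send_response(resp.status)
        for name, value in resp.getheaders():
            handler.send_header(name, value)
        handler.end_headers()
        handler.wfile.write(resp.read())
    except (OSError, http.client.HTTPException) as exc:
        # Transport-level failures only; programming errors still surface.
        handler.send_error(502, f"upstream request failed: {exc}")
    finally:
        conn.close()
```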
76-77: Consider adding a timeout parameter for upstream connections.

The hardcoded 30-second timeout is reasonable for e2e tests, but making it configurable via CLI args would improve flexibility for debugging slow environments.
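A minimal sketch of the suggested flag, using `argparse` (the flag name `--upstream-timeout` follows the comment's example; the rest of the proxy's CLI is not shown):

```python
import argparse


def parse_args(argv=None) -> argparse.Namespace:
    """Parse proxy options; the default preserves today's hardcoded 30s timeout."""
    parser = argparse.ArgumentParser(description="MCP header proxy (e2e utility)")
    parser.add_argument(
        "--upstream-timeout",
        type=float,
        default=30.0,
        help="timeout in seconds for upstream connections (default: 30)",
    )
    return parser.parse_args(argv)


args = parse_args(["--upstream-timeout", "5"])
print(args.upstream_timeout)  # → 5.0
# The literal `timeout=30` in the conn_class(...) call would then become
# `timeout=args.upstream_timeout`.
```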
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/mcp_header_proxy.py` around lines 76 - 77, The code currently hardcodes a 30s timeout when creating the upstream connection (conn_class and conn); make this timeout configurable via a CLI/test runner argument: add a new parsed argument (e.g., --upstream-timeout or timeout_seconds) with a default of 30 and help text, read it into the same config/variable used in the module, and replace the literal 30 in the conn = conn_class(..., timeout=30) call with that variable so conn_class and conn use the CLI-provided timeout at runtime.

test/e2e/kind.sh (1)
896-948: Consider extracting the duplicated Python `post()` function.

The `post()` function is defined identically in three separate embedded Python scripts (lines 908-921, 985-998, 1071-1084). This duplication makes maintenance harder and increases the risk of drift. Consider extracting it to a shared Python module in `test/e2e/` that can be imported by the inline scripts, or consolidating the validation logic into a single Python script invoked with parameters.

Also applies to: 973-1018, 1059-1103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/kind.sh` around lines 896 - 948, The inline Python helper post() is duplicated across multiple embedded scripts (the post function used around the initialize/notifications/ tools/call sequences); extract post() into a shared helper module (e.g., test/e2e/mcp_client.py) and change the embedded Python blocks to import and call post() (or consolidate the three validation blocks into a single script that accepts parameters) so that the logic in post() is defined once and reused to avoid drift.

internal/cli/setup.go (1)
1720-1726: Regex-based imagePullPolicy replacement is fragile.

The regex `(?m)^(\s*imagePullPolicy:\s*)\S+` replaces all `imagePullPolicy` fields in the manifest. Per the context snippet from `config/manager/manager.yaml`, there's currently only one container, but if init containers or sidecars are added later, this could incorrectly modify unintended policies.

Consider targeting only the manager container's imagePullPolicy, or using YAML parsing for more precise modification.

♻️ Suggested improvement using targeted replacement

```diff
 func injectOperatorImagePullPolicy(yamlContent, pullPolicy string) string {
 	if strings.TrimSpace(pullPolicy) == "" {
 		return yamlContent
 	}
-	pullPolicyPattern := regexp.MustCompile(`(?m)^(\s*imagePullPolicy:\s*)\S+`)
-	return pullPolicyPattern.ReplaceAllString(yamlContent, "${1}"+pullPolicy)
+	// Replace only the first occurrence, which targets the manager container.
+	pullPolicyPattern := regexp.MustCompile(`(?m)^(\s*imagePullPolicy:\s*)\S+`)
+	replaced := false
+	return pullPolicyPattern.ReplaceAllStringFunc(yamlContent, func(match string) string {
+		if replaced {
+			return match
+		}
+		replaced = true
+		return pullPolicyPattern.ReplaceAllString(match, "${1}"+pullPolicy)
+	})
 }
```

Or use `strings.Replace` with `n=1` for the first occurrence only:

```go
func injectOperatorImagePullPolicy(yamlContent, pullPolicy string) string {
	if strings.TrimSpace(pullPolicy) == "" {
		return yamlContent
	}
	pattern := regexp.MustCompile(`(?m)^(\s*imagePullPolicy:\s*)\S+`)
	match := pattern.FindString(yamlContent)
	if match == "" {
		return yamlContent
	}
	fields := strings.Fields(match)
	// Replace only the first occurrence, preserving indentation and the key.
	return strings.Replace(yamlContent, match, strings.TrimSuffix(match, fields[len(fields)-1])+pullPolicy, 1)
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/cli/setup.go` around lines 1720 - 1726, The current injectOperatorImagePullPolicy function uses a global regex that will replace every imagePullPolicy in the manifest; instead parse the YAML and set only the manager container's imagePullPolicy (or init/sidecars named "manager" if applicable). Update injectOperatorImagePullPolicy to unmarshal the YAML (yaml.v3), locate .spec.template.spec.containers (and .initContainers), find the container with name "manager", set its imagePullPolicy to the given value, then marshal the document(s) back to YAML; if you prefer a minimal change, restrict the replacement to the first match only by replacing the first regex match rather than all matches, but the recommended fix is explicit YAML parsing and targeted modification of the manager container.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yaml:
- Around line 87-137: In the e2e-kind job update the pinned tool versions used
in the "Install kubectl" and "Install kind" steps: change the kubectl download
URL/version from v1.34.1 to v1.35.3 and change the kind download URL/version
from v0.30.0 to v0.31.0 (keep the same curl+chmod+mv flow and filenames); this
means editing the curl targets in the "Install kubectl" and "Install kind" steps
so the binary downloads point to v1.35.3 and v0.31.0 respectively.
In `@mcp-sentinel`:
- Line 1: The submodule mcp-sentinel is pointing to a non-existent commit
01b718262e175860d92cbb53058d11a9d8be543b; update the gitlink to a valid ref by
changing the submodule entry to the existing commit (e.g.
b2ffb2c1e9e3371156925acc908fc7327994d0e1) or to the main branch tip
(105f2c4c641c2bbd415cd926870f008f0f6091d8) as appropriate: open the submodule
configuration in .gitmodules and the gitlink in the index/commit being added
(mcp-sentinel), set the commit hash to the chosen valid ref (suggest
b2ffb2c1e9e3371156925acc908fc7327994d0e1 from oauth-gateway-auth), commit that
change and verify by running git submodule update --init --recursive to ensure
cloning works.
In `@test/e2e/kind.sh`:
- Around line 513-557: The for-loop in wait_for_grant_tool_rule declares an
unused loop variable i; change it to a discard variable (e.g., local _ and for _
in $(seq 1 "${tries}")) so Shellcheck’s warning is resolved and intent is clear;
update the function wait_for_grant_tool_rule to replace occurrences of i with _
in the loop declaration only.
---
Outside diff comments:
In `@test/e2e/Dockerfile`:
- Around line 1-7: Update the Dockerfile base image reference: replace the
invalid "FROM golang:1.24.11" image with a valid Go release (e.g., "golang:1.25"
or newer) so the Docker build succeeds; modify the top-of-file FROM line in the
Dockerfile to use a supported tag and ensure any build tooling or documentation
that references golang:1.24.11 is updated accordingly.
---
Nitpick comments:
In `@internal/cli/setup.go`:
- Around line 1720-1726: The current injectOperatorImagePullPolicy function uses
a global regex that will replace every imagePullPolicy in the manifest; instead
parse the YAML and set only the manager container's imagePullPolicy (or
init/sidecars named "manager" if applicable). Update
injectOperatorImagePullPolicy to unmarshal the YAML (yaml.v3), locate
.spec.template.spec.containers (and .initContainers), find the container with
name "manager", set its imagePullPolicy to the given value, then marshal the
document(s) back to YAML; if you prefer a minimal change, restrict the
replacement to the first match only by replacing the first regex match rather
than all matches, but the recommended fix is explicit YAML parsing and targeted
modification of the manager container.
In `@test/e2e/kind.sh`:
- Around line 896-948: The inline Python helper post() is duplicated across
multiple embedded scripts (the post function used around the
initialize/notifications/ tools/call sequences); extract post() into a shared
helper module (e.g., test/e2e/mcp_client.py) and change the embedded Python
blocks to import and call post() (or consolidate the three validation blocks
into a single script that accepts parameters) so that the logic in post() is
defined once and reused to avoid drift.
In `@test/e2e/mcp_header_proxy.py`:
- Around line 87-91: The broad except Exception in the upstream request handler
should be narrowed to the relevant error types to satisfy Ruff BLE001 and
provide clearer diagnostics: replace the generic except with specific exceptions
(e.g., requests.RequestException / urllib3 exceptions, socket.error or OSError,
and TimeoutError as appropriate for the client you're using) around the block
that calls the upstream and calls self.send_error and conn.close; ensure the
same error message is passed to self.send_error and keep the finally
conn.close() behavior intact so resources are still released.
- Around line 76-77: The code currently hardcodes a 30s timeout when creating
the upstream connection (conn_class and conn); make this timeout configurable
via a CLI/test runner argument: add a new parsed argument (e.g.,
--upstream-timeout or timeout_seconds) with a default of 30 and help text, read
it into the same config/variable used in the module, and replace the literal 30
in the conn = conn_class(..., timeout=30) call with that variable so conn_class
and conn use the CLI-provided timeout at runtime.
In `@test/e2e/registry.Dockerfile`:
- Line 1: Replace the mutable base image tag in the Dockerfile by pinning the
registry image to its immutable digest: locate the FROM instruction in
test/e2e/registry.Dockerfile (the line containing "FROM registry:2.8.3"), look
up the corresponding sha256 digest for registry:2.8.3 from the registry metadata
or Docker Hub, and update the FROM to use the image@sha256:<digest> form so
CI/e2e builds use an immutable, reproducible base.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 57c2d666-49b8-4d37-849e-ef7de593ff37
📒 Files selected for processing (16)

- .github/workflows/ci.yaml
- config/registry/base/deployment.yaml
- internal/cli/pipeline.go
- internal/cli/pipeline_test.go
- internal/cli/setup.go
- internal/cli/setup_helpers_test.go
- internal/operator/controller.go
- internal/operator/controller_test.go
- mcp-sentinel
- pkg/metadata/crd_generator.go
- pkg/metadata/crd_generator_test.go
- pkg/metadata/schema.go
- test/e2e/Dockerfile
- test/e2e/kind.sh
- test/e2e/mcp_header_proxy.py
- test/e2e/registry.Dockerfile
```yaml
e2e-kind:
  name: Kind E2E
  runs-on: ubuntu-24.04
  needs: [lint, test]
  timeout-minutes: 90
  permissions:
    contents: read
  steps:
    - name: Checkout
      uses: actions/checkout@v4
      with:
        submodules: recursive

    - name: Set up Go
      uses: actions/setup-go@v5
      with:
        go-version-file: go.mod
        cache: true

    - name: Install kubectl
      run: |
        curl -fsSL -o kubectl "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
        chmod +x kubectl
        sudo mv kubectl /usr/local/bin/kubectl

    - name: Install kind
      run: |
        curl -fsSL -o kind "https://kind.sigs.k8s.io/dl/v0.30.0/kind-linux-amd64"
        chmod +x kind
        sudo mv kind /usr/local/bin/kind

    - name: Show tool versions
      run: |
        docker version
        kubectl version --client
        kind version

    - name: Run kind e2e
      env:
        OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        E2E_ARTIFACT_DIR: ${{ github.workspace }}/.e2e-artifacts/kind
      run: bash test/e2e/kind.sh

    - name: Upload e2e artifacts
      if: always()
      uses: actions/upload-artifact@v4
      with:
        name: kind-e2e-artifacts
        path: .e2e-artifacts/kind
        if-no-files-found: ignore
```
🧩 Analysis chain
🌐 Web query:
What is the latest version of kind kubernetes tool 2026?
💡 Result:
The latest version of the kind Kubernetes tool as of 2026-03-27 is v0.31.0. This is confirmed across official sources including the kind documentation, GitHub releases page, and installation instructions, which all reference v0.31.0 as the current stable release (released 2025-12-18). There are mentions of v0.32.0-alpha pre-releases in Go module versions up to March 2026, but no stable v0.32.0 release is evident. To install: go install sigs.k8s.io/[email protected] or download binaries from https://kind.sigs.k8s.io/dl/v0.31.0/.
Citations:
- 1: https://github.com/kubernetes-sigs/kind/releases
- 2: https://kind.sigs.k8s.io/
- 3: https://kind.sigs.k8s.io/docs/user/quick-start/
- 4: https://go.libhunt.com/kind-latest-version
- 5: https://github.com/kubernetes-sigs/kind
- 6: https://newreleases.io/project/github/kubernetes-sigs/kind/release/v0.31.0
- 7: https://pkg.go.dev/sigs.k8s.io/kind
- 8: https://kind.sigs.k8s.io/docs/user/quick-start
🌐 Web query:
What is the latest stable version of kubectl 2026?
💡 Result:
The latest stable version of kubectl as of 2026-03-27 is v1.35.3. This matches the latest stable Kubernetes release, from which kubectl binaries are provided. Official Kubernetes documentation directs users to download kubectl from the latest stable release via https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/.../kubectl, and the Kubernetes releases page lists 1.35.3 (released 2026-03-19) as the latest for the current stable minor version 1.35. Binaries for v1.35.3 are available at dl.k8s.io/v1.35.3/bin/[platform]/kubectl. There is no separate release page for kubectl; it is bundled with Kubernetes releases.
Citations:
- 1: https://kubernetes.io/releases/
- 2: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
- 3: https://kubernetes.io/releases/download
- 4: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux
- 5: https://k8s.io/releases/
Update pinned tool versions to current releases.
The job is well-structured with proper artifact handling and dependency management. However, both pinned tool versions are outdated:
- kubectl `v1.34.1` → should be `v1.35.3` (current stable as of March 2026)
- kind `v0.30.0` → should be `v0.31.0` (current stable, released December 2025)
Update both versions in the curl commands to receive the latest security patches and features.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yaml around lines 87 - 137, In the e2e-kind job update
the pinned tool versions used in the "Install kubectl" and "Install kind" steps:
change the kubectl download URL/version from v1.34.1 to v1.35.3 and change the
kind download URL/version from v0.30.0 to v0.31.0 (keep the same curl+chmod+mv
flow and filenames); this means editing the curl targets in the "Install
kubectl" and "Install kind" steps so the binary downloads point to v1.35.3 and
v0.31.0 respectively.
```bash
wait_for_grant_tool_rule() {
  local grant_name="$1"
  local tool_name="$2"
  local expected_decision="$3"
  local tries="${4:-40}"
  local i
  for i in $(seq 1 "${tries}"); do
    local policy_json
    policy_json="$(kubectl get configmap "${SERVER_NAME}-gateway-policy" -n mcp-servers -o "jsonpath={.data.policy\.json}" 2>/dev/null || true)"
    if POLICY_JSON="${policy_json}" GRANT_NAME="${grant_name}" TOOL_NAME="${tool_name}" EXPECTED_DECISION="${expected_decision}" python3 <<'PY'
import json
import os
import sys

policy = os.environ.get("POLICY_JSON", "")
if not policy:
    raise SystemExit(1)

try:
    doc = json.loads(policy)
except json.JSONDecodeError:
    raise SystemExit(1)

grant_name = os.environ["GRANT_NAME"]
tool_name = os.environ["TOOL_NAME"]
expected = os.environ["EXPECTED_DECISION"]

for grant in doc.get("grants", []):
    if grant.get("name") != grant_name:
        continue
    for rule in grant.get("tool_rules", []):
        if rule.get("name") == tool_name and rule.get("decision") == expected:
            raise SystemExit(0)

raise SystemExit(1)
PY
    then
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for tool rule ${tool_name}=${expected_decision} in grant ${grant_name}" >&2
  kubectl get configmap "${SERVER_NAME}-gateway-policy" -n mcp-servers -o yaml || true
  return 1
}
```
Unused loop variable `i` in `wait_for_grant_tool_rule`.

Shellcheck correctly identifies that `i` is declared but never used in the loop body. Use `_` as the variable name to indicate intentional discard.
🔧 Fix unused variable
```diff
 wait_for_grant_tool_rule() {
   local grant_name="$1"
   local tool_name="$2"
   local expected_decision="$3"
   local tries="${4:-40}"
-  local i
-  for i in $(seq 1 "${tries}"); do
+  for _ in $(seq 1 "${tries}"); do
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
wait_for_grant_tool_rule() {
  local grant_name="$1"
  local tool_name="$2"
  local expected_decision="$3"
  local tries="${4:-40}"
  for _ in $(seq 1 "${tries}"); do
    local policy_json
    policy_json="$(kubectl get configmap "${SERVER_NAME}-gateway-policy" -n mcp-servers -o "jsonpath={.data.policy\.json}" 2>/dev/null || true)"
    if POLICY_JSON="${policy_json}" GRANT_NAME="${grant_name}" TOOL_NAME="${tool_name}" EXPECTED_DECISION="${expected_decision}" python3 <<'PY'
import json
import os
import sys

policy = os.environ.get("POLICY_JSON", "")
if not policy:
    raise SystemExit(1)

try:
    doc = json.loads(policy)
except json.JSONDecodeError:
    raise SystemExit(1)

grant_name = os.environ["GRANT_NAME"]
tool_name = os.environ["TOOL_NAME"]
expected = os.environ["EXPECTED_DECISION"]

for grant in doc.get("grants", []):
    if grant.get("name") != grant_name:
        continue
    for rule in grant.get("tool_rules", []):
        if rule.get("name") == tool_name and rule.get("decision") == expected:
            raise SystemExit(0)

raise SystemExit(1)
PY
    then
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for tool rule ${tool_name}=${expected_decision} in grant ${grant_name}" >&2
  kubectl get configmap "${SERVER_NAME}-gateway-policy" -n mcp-servers -o yaml || true
  return 1
}
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 519-519: i appears unused. Verify use (or export if used externally).
(SC2034)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/e2e/kind.sh` around lines 513 - 557, The for-loop in
wait_for_grant_tool_rule declares an unused loop variable i; change it to a
discard variable (e.g., local _ and for _ in $(seq 1 "${tries}")) so
Shellcheck’s warning is resolved and intent is clear; update the function
wait_for_grant_tool_rule to replace occurrences of i with _ in the loop
declaration only.
Force-pushed from a665c3a to cda0f9f.
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main      #41   +/-   ##
=======================================
+ Coverage   69.01%   69.05%   +0.03%
=======================================
  Files          32       32
  Lines        6029     6055      +26
=======================================
+ Hits         4161     4181      +20
- Misses       1553     1556       +3
- Partials      315      318       +3
```
Flags with carried forward coverage won't be shown.
Summary by CodeRabbit

- Improvements
- Tests
- Chores