[apps][system] Add -lock BucketClass, -readonly BucketAccessClass, and bucket user model#2119
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough: Adds multi-user bucket support.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant HelmChart as Helm Chart
    participant K8s as Kubernetes
    participant SeaweedFS as SeaweedFS/COSI
    participant S3Mgr as s3manager (login-mode)
    participant UserCreds as user-credentials Template
    User->>HelmChart: Deploy chart with values (users, locking, storagePool)
    HelmChart->>K8s: Render BucketClaim(s), BucketAccess(es), BucketClass/AccessClass
    K8s->>SeaweedFS: Provision buckets & access classes
    SeaweedFS-->>K8s: Create BucketInfo (Secret)
    HelmChart->>UserCreds: Lookup BucketInfo Secrets per user
    UserCreds->>K8s: Emit per-user -credentials Secrets
    User->>S3Mgr: Login (login-mode) or use pre-configured creds
    S3Mgr->>K8s: Use per-session or static credentials to access bucket objects
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Summary of Changes: Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly refactors the COSI integration for SeaweedFS and the bucket application, providing more granular control over bucket provisioning and user access. The changes introduce new configuration parameters for object locking and storage pool selection, alongside a flexible user management system that allows defining multiple BucketAccess resources with specific permissions. This enhances the overall configurability and security of object storage resources within the system.
Code Review
This pull request effectively unifies and enhances the COSI integration for SeaweedFS and the bucket application. The changes are well-aligned with the description, including renaming the BucketClass suffix from -worm to -lock, introducing a flexible users model for creating BucketAccess resources, and adding support for object locking and storage pools in the bucket app. The related updates to RBAC rules, validation checks, and ApplicationDefinition schemas are consistent and correctly implemented. The reduction of the default object lock retention period is also a sensible change. Overall, the code quality is high and the changes are a solid improvement.
Force-pushed from 33ae43a to ad99caf (Compare)
```diff
@@ -31,9 +31,7 @@ spec:
   exclude: []
   include:
```
Without an include for user credentials, these secrets won't be shown in the dashboard. Please check the apps/rabbitmq chart; it uses a label and labelSelector:
apps.cozystack.io/user-secret: "true"
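For illustration, the rabbitmq-style pattern referenced above might look like the following sketch (field names other than the label itself are assumptions, not taken from this PR):

```yaml
# 1. The chart labels each credential Secret it creates:
metadata:
  labels:
    apps.cozystack.io/user-secret: "true"
# 2. The dashboard resource map then includes Secrets by that label:
include:
  - kind: Secret
    labelSelector:
      apps.cozystack.io/user-secret: "true"
```

This only works when the secret producer sets the label; as noted later in the thread, COSI-created secrets carry no custom labels, which is why this PR explores other selection mechanisms.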
kvaps left a comment:
This also won't pass through values to the system/bucket to display secrets in a user-specific format.
Also, we need to redesign this system/bucket app somehow, because it also deploys a UI for the bucket that can be accessed with an access_key/secret_key pair.
…SI parameters

Rename storage pool COSI resources to use consistent naming:
- BucketClass suffix: -worm -> -lock
- Parameter name: disk -> diskType
- Retention days: 36500 -> 365
- Validation suffix check updated accordingly

Signed-off-by: IvanHunters <[email protected]>

Add COSI resources for object locking and read-only access to both the client topology and the system chart:
- BucketClass with -lock suffix (COMPLIANCE mode, 365 days retention)
- BucketAccessClass with -readonly suffix
- Explicit accessPolicy: readwrite on the default BucketAccessClass

Signed-off-by: IvanHunters <[email protected]>

Replace the hardcoded BucketAccess with a user-driven model:
- locking: provisions from the -lock BucketClass (object lock enabled)
- storagePool: selects a pool-specific BucketClass
- users: map of named users, each creating a BucketAccess with an optional readonly flag

An empty users map produces zero BucketAccess resources (breaking change). Update the ApplicationDefinition schema and dashboard RBAC accordingly.

Signed-off-by: IvanHunters <[email protected]>

The seaweedfs-cosi-driver v0.3.0 expects the parameter key 'disk', not 'diskType'. Restore the correct key to match the driver's paramDisk constant.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>

Add a catch-all include selector so that COSI-created user credential secrets (dynamically named per user) are visible in the dashboard. The lineage webhook already verifies ownership via the graph walk (Secret -> BucketAccess -> HelmRelease -> Bucket), so an empty selector safely matches only secrets belonging to this application. This is needed because the COSI sidecar creates secrets without custom labels, making the matchLabels pattern (used by rabbitmq) inapplicable.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>

This reverts commit 87b89d9.

Signed-off-by: IvanHunters <[email protected]>

Create labeled secrets in the -system chart using lookup to copy credentials from COSI-created secrets. The ApplicationDefinition matchLabels selector exposes them in the dashboard.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>

Remove the yq strip of properties from the Makefile that was clearing the schema, and run make generate to sync all generated files.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>
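The user-driven model described in these commits might be configured like this (a sketch only; key names come from the commit messages, the user names and defaults are illustrative):

```yaml
locking: true        # provision from the -lock BucketClass (object lock enabled)
storagePool: ""      # optionally select a pool-specific BucketClass
users:               # empty map ({}) produces zero BucketAccess resources
  alice: {}          # readwrite BucketAccess (default)
  bob:
    readonly: true   # BucketAccess bound to the -readonly BucketAccessClass
```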
Force-pushed from 97dfe4d to 1d41e27 (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/apps/bucket/README.md (1)
3-6: Minor: Redundant header nesting. The "## Parameters" on line 3 followed immediately by "### Parameters" on line 5 creates redundant nesting. However, since this file is auto-generated by cozyvalues-gen (based on learnings), this should be addressed in the generator configuration rather than by manually editing this file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/apps/bucket/README.md` around lines 3 - 6, The README shows redundant headers ("## Parameters" followed by "### Parameters") because the auto-generated template in cozyvalues-gen emits both; update the cozyvalues-gen generator configuration or template so it produces a single Parameters header (remove the extra "### Parameters" emission or collapse the headings logic in the generator template), then regenerate the README so the duplicate header is removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/system/bucket/templates/user-credentials.yaml`:
- Line 17: The endpoint value in the user-credentials.yaml template should be
quoted for consistency and safety: update the endpoint expression using the
existing trimPrefix call (trimPrefix "https://" (index $bucketInfo.spec.secretS3
"endpoint")) to include the YAML pipe quote (| quote) like the accessKey,
secretKey, and bucketName entries so special characters (e.g., colons) are
safely escaped during rendering.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- packages/apps/bucket/Makefile
- packages/apps/bucket/README.md
- packages/apps/bucket/templates/bucketclaim.yaml
- packages/apps/bucket/templates/dashboard-resourcemap.yaml
- packages/apps/bucket/templates/helmrelease.yaml
- packages/apps/bucket/values.schema.json
- packages/apps/bucket/values.yaml
- packages/extra/seaweedfs/templates/client/cosi-bucket-class.yaml
- packages/extra/seaweedfs/templates/seaweedfs.yaml
- packages/extra/seaweedfs/templates/storage-pool-bucket-classes.yaml
- packages/system/bucket-rd/cozyrds/bucket.yaml
- packages/system/bucket/templates/user-credentials.yaml
- packages/system/bucket/values.yaml
- packages/system/seaweedfs/charts/seaweedfs/templates/cosi/cosi-bucket-class.yaml
💤 Files with no reviewable changes (1)
- packages/apps/bucket/Makefile
```yaml
stringData:
  accessKey: {{ index $bucketInfo.spec.secretS3 "accessKeyID" | quote }}
  secretKey: {{ index $bucketInfo.spec.secretS3 "accessSecretKey" | quote }}
  endpoint: {{ trimPrefix "https://" (index $bucketInfo.spec.secretS3 "endpoint") }}
```
Add | quote for consistency and safety.
The endpoint field is missing | quote while accessKey, secretKey, and bucketName all use it. This inconsistency could cause YAML parsing issues if the endpoint contains special characters (e.g., colons in port numbers).
Proposed fix

```diff
- endpoint: {{ trimPrefix "https://" (index $bucketInfo.spec.secretS3 "endpoint") }}
+ endpoint: {{ trimPrefix "https://" (index $bucketInfo.spec.secretS3 "endpoint") | quote }}
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
endpoint: {{ trimPrefix "https://" (index $bucketInfo.spec.secretS3 "endpoint") | quote }}
```
The default BucketAccess was removed in favor of per-user access. Update secret.yaml to look up the first user's COSI secret instead of the non-existent default one, with a nil check for race conditions.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/system/bucket/templates/secret.yaml`:
- Around line 1-4: The template currently picks the lexicographically first user
($firstUser from $users) which can be readonly; change the selection to prefer
the first non-readonly user by filtering .Values.users for entries where
.Values.users.[user].readonly is not true (e.g., create a $writableUsers list
from keys .Values.users where readonly != true), then set $firstUser = index
$writableUsers 0; if $writableUsers is empty fall back to the original $users
and keep the existing fallback behavior for $secretName (printf "%s-%s"
.Values.bucketName $firstUser) so write-capable credentials are used when
available.
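The prompt's suggestion could be sketched in Helm template terms like this (a sketch only; the value names bucketName/users follow the surrounding discussion, and the helpers used are standard Sprig functions):

```yaml
{{- /* Prefer the first non-readonly user; fall back to any user. */ -}}
{{- $writable := list }}
{{- range $name, $user := .Values.users }}
  {{- if not $user.readonly }}{{- $writable = append $writable $name }}{{- end }}
{{- end }}
{{- $candidates := ternary $writable (keys .Values.users | sortAlpha) (gt (len $writable) 0) }}
{{- $firstUser := first $candidates }}
{{- $secretName := printf "%s-%s" .Values.bucketName $firstUser }}
```

Note that `range` over a map iterates in key order, so "first" here means lexicographically first among writable users, matching the existing fallback behavior.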
Show only per-user credential secrets in the dashboard instead of both the internal UI secret and per-user secrets. Signed-off-by: IvanHunters <[email protected]>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/apps/bucket/templates/dashboard-resourcemap.yaml`:
- Around line 11-13: The dashboard-resourcemap template is hardcoding the secret
prefix with $.Release.Name inside the range ($name, $user) which can diverge
from the credential producer; change the prefix to use the same bucket-name Helm
value the credential generator uses (the key referenced in
packages/system/bucket/templates/user-credentials.yaml, e.g. .Values.bucketName
or the exact .Values.bucket.<key> used there) so the generated entries become
"<bucket-name>-{{ $name }}-credentials" instead of "{{ $.Release.Name }}-{{
$name }}-credentials", ensuring the RBAC maps to the actual produced secrets.
- Around line 11-13: The resourceNames list is rendered unguarded by the {{-
range $name, $user := .Values.users }} loop so when .Values.users is empty the
rule becomes namespace-wide; wrap the range (or the entire resourceNames
section) in a conditional that only emits resourceNames entries when users are
present (e.g. check len .Values.users > 0 or if .Values.users), and keep the
existing item format ({{ $.Release.Name }}-{{ $name }}-credentials) inside that
guarded block so no resourceNames field is emitted when there are no users.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2786ceb0-448f-4858-9745-7aaa77c7de2f
📒 Files selected for processing (1)
packages/apps/bucket/templates/dashboard-resourcemap.yaml
```yaml
{{- range $name, $user := .Values.users }}
- {{ $.Release.Name }}-{{ $name }}-credentials
{{- end }}
```
Align credential secret naming with the secret producer template.
Line 12 hardcodes .Release.Name as the prefix, but the credential secret generator uses a bucket-name-based prefix (packages/system/bucket/templates/user-credentials.yaml:1-20). If those differ, the dashboard RBAC maps to non-existent secrets.
Suggested fix

```diff
+ {{- $secretPrefix := default $.Release.Name $.Values.bucketName }}
  {{- range $name, $user := .Values.users }}
- - {{ $.Release.Name }}-{{ $name }}-credentials
+ - {{ $secretPrefix }}-{{ $name }}-credentials
  {{- end }}
```
Prevent accidental namespace-wide Secret read when users is empty.
When Line 11 renders zero entries (default users: {}), resourceNames becomes effectively unrestricted, and this rule can grant get/list/watch on all Secrets in the namespace.
Suggested fix

```diff
-- apiGroups:
-  - ""
-  resources:
-  - secrets
-  resourceNames:
-  {{- range $name, $user := .Values.users }}
-  - {{ $.Release.Name }}-{{ $name }}-credentials
-  {{- end }}
-  verbs: ["get", "list", "watch"]
+{{- if .Values.users }}
+- apiGroups:
+  - ""
+  resources:
+  - secrets
+  resourceNames:
+  {{- range $name, $user := .Values.users }}
+  - {{ $.Release.Name }}-{{ $name }}-credentials
+  {{- end }}
+  verbs: ["get", "list", "watch"]
+{{- end }}
```
When a bucket has no users configured, s3manager previously crashed due to missing ACCESS_KEY_ID/SECRET_ACCESS_KEY env vars. This adds a login mode where users enter their S3 credentials via a web form.

New Go code (via cozystack.patch):
- auth.go: session-based auth middleware, login/logout handlers, per-request S3 client from the encrypted cookie session
- login.html.tmpl: Materialize CSS login form
- main.go: LoginMode toggle, conditional route setup
- Dependency: gorilla/sessions for AES-256 encrypted cookies

Dockerfile: add a go mod tidy step for new dependency resolution.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>

deployment.yaml: use s3._namespace.host for ENDPOINT instead of a secret ref; inject ACCESS_KEY_ID/SECRET_ACCESS_KEY only when users exist. Without users, s3manager starts in login mode.

ingress.yaml: nginx basic auth annotations only when users exist. Without users, s3manager handles authentication via its login form.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>

Remove nginx basic auth and credential secret injection from the bucket Helm chart. s3manager now always starts in login mode and handles authentication via its own login page with encrypted session cookies. This eliminates the dependency on the -credentials and -ui-auth secrets for the UI layer.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>

Rebuild s3manager with auth.go login page support and push to the 999669/s3manager registry for testing.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/system/bucket/images/s3manager.tag`:
- Line 1: The image reference in s3manager.tag points to an untrusted personal
registry and uses the mutable tag "latest" (999669/s3manager:latest@sha256:...),
which bypasses the Makefile's REGISTRY variable and breaks reproducibility;
replace that line with the official registry image (use REGISTRY from the
Makefile, e.g., ghcr.io/cozystack/cozystack/s3manager) and a pinned semantic
version (e.g., v0.5.0@sha256:<digest>), rebuild the image and push it to the
official registry, update the tag to the new digest, and remove any "for
testing" commit artifacts before merging.
In `@packages/system/bucket/images/s3manager/cozystack.patch`:
- Around line 75-78: The code currently treats a missing ACCESS_KEY_ID OR
SECRET_ACCESS_KEY as reason to set loginMode=true; change the conditional in the
check that references accessKeyID and secretAccessKey so it only enables
loginMode when BOTH are empty (use && instead of ||) so partial misconfiguration
isn’t silently treated as login mode; update the if statement that sets
loginMode in the block that logs "ACCESS_KEY_ID or SECRET_ACCESS_KEY not set,
starting in login mode" to use accessKeyID == "" && secretAccessKey == "" (or
len(...) == 0 && len(...) == 0) to ensure only absent-both triggers loginMode.
- Around line 299-314: The NewSessionStore function currently generates a
per-process random key which breaks sessions across restarts/replicas; change it
to load a shared key from a Kubernetes-backed source (env var or file) instead:
read a single base64 (or hex) encoded key from an env var (e.g., SESSION_KEY) or
from a mounted secret file, validate/ensure it decodes to 32 bytes, and use that
byte slice when constructing sessions.NewCookieStore; if the key is missing or
the decoded length is incorrect, log.Fatal with a clear message. Ensure you
remove the per-process rand.Read usage and keep the existing sessions.Options
assignment.
- Around line 300-306: The session currently uses sessions.NewCookieStore(key)
which only signs cookies (so session.Values like accessKey and secretKey remain
readable); change NewSessionStore to generate two distinct keys (an
authentication key and an encryption key) using crypto/rand (e.g., 32 bytes auth
+ 32 bytes encryption for AES-256), pass both keys into
sessions.NewCookieStore(authKey, encryptionKey) so cookies are encrypted as well
as signed, keep the existing store.Options assignment (HttpOnly, Secure,
SameSite, MaxAge) and ensure any error from rand.Read is handled the same way as
now; reference NewSessionStore, sessions.NewCookieStore, and
session.Values/accessKey/secretKey when making the change.
In `@packages/system/bucket/images/s3manager/Dockerfile`:
- Around line 12-13: The Dockerfile currently runs "go mod tidy" which can
rewrite go.mod/go.sum and break reproducible builds; replace that step with
deterministic validation by running "go mod download && go mod verify" instead,
and modify the build invocation (the RUN that executes GOOS=$TARGETOS
GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo
-o bin/s3manager) to include -mod=readonly so the toolchain cannot alter modules
during build; update those two RUN steps accordingly to ensure pinned versions
from the cozystack.patch remain unchanged.
In `@packages/system/bucket/templates/deployment.yaml`:
- Around line 19-20: The ENDPOINT env var currently interpolates
.Values._namespace.host directly (value: "s3.{{ .Values._namespace.host }}")
which yields an invalid "s3." when empty; update the Helm template to guard this
by checking .Values._namespace.host (or the cozy-lib.ns-host helper) and either
(A) only emit the ENDPOINT env var when the host is non-empty, or (B) provide a
sane default/explicit failure path (e.g., render a meaningful error or skip the
Deployment) so that the s3 ENDPOINT is never set to "s3." and s3manager cannot
fail at runtime.
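The Dockerfile prompt above (deterministic module handling plus a read-only module mode) could look roughly like this sketch, against an assumed multi-stage Go Dockerfile layout:

```dockerfile
# Deterministic module handling instead of `go mod tidy`:
RUN go mod download && go mod verify

# Build with -mod=readonly so the toolchain cannot rewrite go.mod/go.sum,
# keeping the versions pinned by cozystack.patch unchanged:
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 \
    go build -mod=readonly -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
```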
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 67c23b8a-8089-468d-89f6-27718da60e12
📒 Files selected for processing (6)
- packages/system/bucket/images/s3manager.tag
- packages/system/bucket/images/s3manager/Dockerfile
- packages/system/bucket/images/s3manager/cozystack.patch
- packages/system/bucket/templates/deployment.yaml
- packages/system/bucket/templates/ingress.yaml
- packages/system/bucket/templates/secret.yaml
💤 Files with no reviewable changes (1)
- packages/system/bucket/templates/ingress.yaml
| @@ -1 +1 @@ | |||
| ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:279008f87460d709e99ed25ee8a1e4568a290bb9afa0e3dd3a06d524163a132b | |||
| 999669/s3manager:latest@sha256:134bd78a5e9412df3102a58487f7200279a0ba3dac78f2dc5ac1dd4d299bcbd2 | |||
Critical: Image reference points to an untrusted personal registry and uses mutable latest tag.
This change introduces significant supply chain and reproducibility risks:
- Untrusted registry: the image source changed from the official project registry (ghcr.io/cozystack/cozystack) to what appears to be a personal Docker Hub account (999669).
- Mutable tag: using latest instead of a pinned version (v0.5.0) breaks reproducibility and makes rollbacks unreliable.
- Build system inconsistency: the Makefile defines REGISTRY ?= ghcr.io/cozystack/cozystack, but this file bypasses that entirely.
- Test artifact: the commit message explicitly states this was "for testing"; it should not be merged to production.
Before merging, rebuild and push the image to the official registry with a proper version tag:

```diff
-999669/s3manager:latest@sha256:134bd78a5e9412df3102a58487f7200279a0ba3dac78f2dc5ac1dd4d299bcbd2
+ghcr.io/cozystack/cozystack/s3manager:v0.6.0@sha256:<new-digest>
```
```diff
+	if len(accessKeyID) == 0 || len(secretAccessKey) == 0 {
+		log.Println("ACCESS_KEY_ID or SECRET_ACCESS_KEY not set, starting in login mode")
+		loginMode = true
 	}
```
Do not silently switch to login mode when only one static credential is missing.
The current condition (||) masks partial misconfiguration. Login mode fallback should happen only when both credentials are absent.
✅ Proposed fix

```diff
-	if len(accessKeyID) == 0 || len(secretAccessKey) == 0 {
+	if len(accessKeyID) == 0 && len(secretAccessKey) == 0 {
 		log.Println("ACCESS_KEY_ID or SECRET_ACCESS_KEY not set, starting in login mode")
 		loginMode = true
+	} else if len(accessKeyID) == 0 || len(secretAccessKey) == 0 {
+		log.Fatal("ACCESS_KEY_ID and SECRET_ACCESS_KEY must be set together")
 	}
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
if len(accessKeyID) == 0 && len(secretAccessKey) == 0 {
	log.Println("ACCESS_KEY_ID or SECRET_ACCESS_KEY not set, starting in login mode")
	loginMode = true
} else if len(accessKeyID) == 0 || len(secretAccessKey) == 0 {
	log.Fatal("ACCESS_KEY_ID and SECRET_ACCESS_KEY must be set together")
}
```
```go
// NewSessionStore creates a CookieStore with a random encryption key.
func NewSessionStore() *sessions.CookieStore {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		log.Fatal("failed to generate session key:", err)
	}
	store := sessions.NewCookieStore(key)
	store.Options = &sessions.Options{
		Path:     "/",
		MaxAge:   86400,
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	}
	return store
}
```
Per-process random session key will break sessions on restarts and multi-replica deployments.
Generating a fresh key on every boot forces logout on rollout/restart and causes cross-pod session decode failures behind load balancing.
🧩 Suggested direction

```diff
-// NewSessionStore creates a CookieStore with a random encryption key.
-func NewSessionStore() *sessions.CookieStore {
-	key := make([]byte, 32)
-	if _, err := rand.Read(key); err != nil {
-		log.Fatal("failed to generate session key:", err)
-	}
-	store := sessions.NewCookieStore(key)
+// NewSessionStore creates a CookieStore from stable keys injected via Secret/env.
+func NewSessionStore(authKey, encKey []byte) *sessions.CookieStore {
+	store := sessions.NewCookieStore(authKey, encKey)
```

Use a Kubernetes Secret-backed env var/key file so all replicas share the same keys.
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
// NewSessionStore creates a CookieStore from stable keys injected via Secret/env.
func NewSessionStore(authKey, encKey []byte) *sessions.CookieStore {
	store := sessions.NewCookieStore(authKey, encKey)
	store.Options = &sessions.Options{
		Path:     "/",
		MaxAge:   86400,
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	}
	return store
}
```
```diff
+func NewSessionStore() *sessions.CookieStore {
+	key := make([]byte, 32)
+	if _, err := rand.Read(key); err != nil {
+		log.Fatal("failed to generate session key:", err)
+	}
+	store := sessions.NewCookieStore(key)
+	store.Options = &sessions.Options{
```
🧩 Analysis chain
🌐 Web query:
gorilla/sessions NewCookieStore encryption authentication keys behavior
💡 Result:
`gorilla/sessions.NewCookieStore(keyPairs ...[]byte)` treats the variadic `[]byte` args as (authKey, encKey) pairs for key rotation:

- Per-pair semantics
  - 1st key = authentication (HMAC) key (required) in every pair.
  - 2nd key = encryption (AES) key (optional); it may be `nil` or omitted only for the last pair (meaning "signed-only", not encrypted). [1]
- Key size expectations
  - `authKey`: recommended 32 or 64 bytes. [1]
  - `encKey` (if provided): must be 16 / 24 / 32 bytes for AES-128/192/256 (otherwise you'll see errors like `crypto/aes: invalid key size`). [1][2]
- How multiple pairs behave (rotation)
  - `NewCookieStore(...)` builds `store.Codecs` via `securecookie.CodecsFromPairs(...)`. [1]
  - When saving, sessions uses `securecookie.EncodeMulti(..., store.Codecs...)`; when reading, it uses `securecookie.DecodeMulti(..., store.Codecs...)`. [1]
  - `EncodeMulti`/`DecodeMulti` try codecs in order, which is what enables rotation: put your newest keys first, followed by older keys so old cookies still decode. [3]

Typical setups:

- Signed-only cookie sessions: `NewCookieStore(authKey)` (no encryption). [1][2]
- Signed + encrypted: `NewCookieStore(authKey, encKey)` (`encKey` length must be 16/24/32). [1][2]
- Rotation example: `NewCookieStore(newAuth, newEnc, oldAuth, oldEnc)` (new first). [1][3]
References:
[1] https://raw.githubusercontent.com/gorilla/sessions/main/store.go
[2] https://pkg.go.dev/github.com/gorilla/securecookie
[3] https://pkg.go.dev/github.com/gorilla/securecookie#EncodeMulti
🏁 Script executed:
```shell
# Read the patch file to understand the context
cat packages/system/bucket/images/s3manager/cozystack.patch | head -400
```
Repository: cozystack/cozystack
Length of output: 16049
Store S3 credentials encrypted in session cookies, not just signed.
NewCookieStore(key) initializes the cookie store with only an authentication key, which signs but does not encrypt the cookie payload. This means accessKey and secretKey stored in session.Values are readable from the client-side cookie despite being signed.
Proposed fix
```diff
 func NewSessionStore() *sessions.CookieStore {
-	key := make([]byte, 32)
-	if _, err := rand.Read(key); err != nil {
+	key := make([]byte, 64) // 32-byte auth key + 32-byte encryption key
+	if _, err := rand.Read(key); err != nil {
 		log.Fatal("failed to generate session key:", err)
 	}
-	store := sessions.NewCookieStore(key)
+	store := sessions.NewCookieStore(key[:32], key[32:])
```
📝 Committable suggestion
Current (cozystack.patch):

```go
func NewSessionStore() *sessions.CookieStore {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		log.Fatal("failed to generate session key:", err)
	}
	store := sessions.NewCookieStore(key)
	store.Options = &sessions.Options{
```

Suggested:

```go
func NewSessionStore() *sessions.CookieStore {
	key := make([]byte, 64) // 32-byte auth key + 32-byte encryption key
	if _, err := rand.Read(key); err != nil {
		log.Fatal("failed to generate session key:", err)
	}
	store := sessions.NewCookieStore(key[:32], key[32:])
	store.Options = &sessions.Options{
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/system/bucket/images/s3manager/cozystack.patch` around lines 300 -
306, The session currently uses sessions.NewCookieStore(key) which only signs
cookies (so session.Values like accessKey and secretKey remain readable); change
NewSessionStore to generate two distinct keys (an authentication key and an
encryption key) using crypto/rand (e.g., 32 bytes auth + 32 bytes encryption for
AES-256), pass both keys into sessions.NewCookieStore(authKey, encryptionKey) so
cookies are encrypted as well as signed, keep the existing store.Options
assignment (HttpOnly, Secure, SameSite, MaxAge) and ensure any error from
rand.Read is handled the same way as now; reference NewSessionStore,
sessions.NewCookieStore, and session.Values/accessKey/secretKey when making the
change.
```dockerfile
RUN go mod tidy
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Find the Dockerfile and examine it
find . -name "Dockerfile" -path "*/s3manager/*" | head -20
```
Repository: cozystack/cozystack
Length of output: 115
🏁 Script executed:
```shell
# Check the directory structure
ls -la packages/system/bucket/images/s3manager/ 2>/dev/null || echo "Directory not found"
```
Repository: cozystack/cozystack
Length of output: 303
🏁 Script executed:
```shell
# Search for cozystack.patch
find . -name "cozystack.patch" -o -name "*patch*" | grep -E "(cozystack|s3manager)" | head -20
```
Repository: cozystack/cozystack
Length of output: 120
🏁 Script executed:
```shell
# Look for all patches in the bucket/images directory
find packages/system/bucket/images -name "*.patch" 2>/dev/null
```
Repository: cozystack/cozystack
Length of output: 118
🏁 Script executed:
```shell
# Read the Dockerfile with line numbers
cat -n packages/system/bucket/images/s3manager/Dockerfile
```
Repository: cozystack/cozystack
Length of output: 1077
🏁 Script executed:
```shell
# Check the patch for gorilla module pins mentioned in review
grep -A 2 -B 2 "gorilla/sessions\|gorilla/securecookie" packages/system/bucket/images/s3manager/cozystack.patch | head -40
```
Repository: cozystack/cozystack
Length of output: 1340
🏁 Script executed:
```shell
# Get more context on the patch file structure
head -50 packages/system/bucket/images/s3manager/cozystack.patch
```
Repository: cozystack/cozystack
Length of output: 2489
Replace `go mod tidy` with deterministic dependency resolution to maintain reproducibility.
At line 12, running `go mod tidy` after applying cozystack.patch (which pins gorilla/sessions v1.4.0 and gorilla/securecookie v1.1.2) risks re-normalizing go.mod/go.sum based on the current Go toolchain and module proxy state, potentially deviating from the patch's pinned versions and hashes. This breaks deterministic builds.
Use `go mod download && go mod verify` to validate the patched dependencies, then build with `-mod=readonly` to prevent any modification:
Proposed fix
```diff
-RUN go mod tidy
-RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
+RUN go mod download && go mod verify
+RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 \
+    go build -mod=readonly -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
```
📝 Committable suggestion
Current:

```dockerfile
RUN go mod tidy
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
```

Suggested:

```dockerfile
RUN go mod download && go mod verify
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 \
    go build -mod=readonly -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/system/bucket/images/s3manager/Dockerfile` around lines 12 - 13, The
Dockerfile currently runs "go mod tidy" which can rewrite go.mod/go.sum and
break reproducible builds; replace that step with deterministic validation by
running "go mod download && go mod verify" instead, and modify the build
invocation (the RUN that executes GOOS=$TARGETOS GOARCH=$TARGETARCH
CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager)
to include -mod=readonly so the toolchain cannot alter modules during build;
update those two RUN steps accordingly to ensure pinned versions from the
cozystack.patch remain unchanged.
```diff
         - name: ENDPOINT
-          valueFrom:
-            secretKeyRef:
-              name: {{ .Values.bucketName }}-credentials
-              key: endpoint
+          value: "s3.{{ .Values._namespace.host }}"
```
Guard against empty _namespace.host producing an invalid endpoint.
If `.Values._namespace.host` is empty or unset (which can happen per the cozy-lib.ns-host helper that defaults to `""`), the ENDPOINT becomes just `s3.`, an invalid hostname that will cause s3manager to fail at runtime.
Suggested fix
```diff
         - name: ENDPOINT
-          value: "s3.{{ .Values._namespace.host }}"
+          value: "s3.{{ required "._namespace.host must be set for bucket deployment" .Values._namespace.host }}"
```
Alternatively, if empty host should skip deployment or fail gracefully, wrap the entire Deployment in a conditional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/system/bucket/templates/deployment.yaml` around lines 19 - 20, The
ENDPOINT env var currently interpolates .Values._namespace.host directly (value:
"s3.{{ .Values._namespace.host }}") which yields an invalid "s3." when empty;
update the Helm template to guard this by checking .Values._namespace.host (or
the cozy-lib.ns-host helper) and either (A) only emit the ENDPOINT env var when
the host is non-empty, or (B) provide a sane default/explicit failure path
(e.g., render a meaningful error or skip the Deployment) so that the s3 ENDPOINT
is never set to "s3." and s3manager cannot fail at runtime.
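For option (A), a guarded sketch of the env entry might look like this (the surrounding template structure is assumed; only the `required`/`with` guards come from the suggestion above):

```yaml
# templates/deployment.yaml (excerpt, assumed structure)
env:
  {{- with .Values._namespace.host }}
  - name: ENDPOINT
    # Only emitted when the host is non-empty, so ENDPOINT can never
    # be rendered as the invalid value "s3."
    value: "s3.{{ . }}"
  {{- end }}
```

With option (B), replacing the `with` block by `required` instead fails the render loudly rather than silently omitting the variable.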
v0.1.2 ignores the accessPolicy parameter from BucketAccessClass, granting readwrite access to all users regardless of the readonly flag. v0.3.0 includes support for readonly bucket access, Object Lock, and improved error handling.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: IvanHunters <[email protected]>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/system/seaweedfs/charts/seaweedfs/values.yaml`:
- Line 1549: The change modifies a vendored upstream Helm chart value (the image
value "ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.3.0") inside the vendored
chart; do not edit this file directly — revert this edit and instead apply the
image update via the chart vendoring/update workflow or add an umbrella-level
override (e.g., update the chart dependency, run the vendor sync, or place the
override in the umbrella chart values) so the new image is applied without
modifying the vendored values.yaml directly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 49667cca-9db9-4287-8d1c-bb258e71a0b0
📒 Files selected for processing (1)
packages/system/seaweedfs/charts/seaweedfs/values.yaml
```diff
 cosi:
   enabled: false
-  image: "ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.1.2"
+  image: "ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.3.0"
```
Do not patch vendored upstream chart values directly.
This change updates a value inside a vendored chart file (packages/system/seaweedfs/charts/seaweedfs/values.yaml). It should be applied via the chart vendoring/update workflow (or umbrella-level override), otherwise it will drift and be overwritten on the next vendor sync.
As per coding guidelines, **/*.yaml: "Do NOT directly edit upstream Helm charts in vendored charts directory; use proper chart vendoring mechanisms".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/system/seaweedfs/charts/seaweedfs/values.yaml` at line 1549, The
change modifies a vendored upstream Helm chart value (the image value
"ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.3.0") inside the vendored chart; do
not edit this file directly — revert this edit and instead apply the image
update via the chart vendoring/update workflow or add an umbrella-level override
(e.g., update the chart dependency, run the vendor sync, or place the override
in the umbrella chart values) so the new image is applied without modifying the
vendored values.yaml directly.
Update bucket E2E test to match the new per-user access model:

- Create bucket with admin (readwrite) and viewer (readonly) users
- Test that readwrite user can upload, list, and download objects
- Test that readonly user can list and download but cannot upload
- Use per-user BucketAccess and credential secret names

Signed-off-by: IvanHunters <[email protected]>
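Based on that model, a values snippet for such a bucket might look like the following. The user names come from the E2E test description; the exact schema is assumed from the PR summary, not verified against the chart:

```yaml
# Hypothetical values for the bucket app
users:
  admin: {}          # default: readwrite BucketAccess
  viewer:
    readonly: true   # provisions via the -readonly BucketAccessClass
locking: true        # provisions from the -lock BucketClass
```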
The mc client requires --insecure on each command when connecting to SeaweedFS S3 with self-signed certificates via port-forward.

Signed-off-by: IvanHunters <[email protected]>
cozytest.sh executes .bats files as plain shell functions where bats builtins like `run` are not available. Use `!` negation to assert that readonly user upload fails.

Signed-off-by: IvanHunters <[email protected]>
What this PR does
Combines and unifies COSI enhancements across seaweedfs and bucket charts:
SeaweedFS (extra + system charts):
- Renamed `-worm` to `-lock` and `disk` to `diskType` for consistency with the COSI driver
- `-lock` BucketClass (COMPLIANCE mode, 365 days) for client and system topologies
- `-readonly` BucketAccessClass with explicit `accessPolicy` for all topologies
- `accessPolicy: readwrite` on the default BucketAccessClass
- WORM classes now use the `-lock` suffix (was `-worm`)

Bucket app:

- `locking` parameter: provisions from the `-lock` BucketClass
- `storagePool` parameter: selects a pool-specific BucketClass
- `users` map: each entry creates a BucketAccess with an optional `readonly` flag

Breaking change: an empty `users: {}` (the default) produces zero BucketAccess resources. Existing buckets that relied on the implicit default BucketAccess will need to define users explicitly.

Release note
Summary by CodeRabbit
New Features
Behavior Changes
Documentation
Updates