I'm currently testing the container virtual registry with two use cases and wanted to share my experience.
At first, pulling images via Kubernetes worked flawlessly (there are still pods running with CVR images). At some point, though, Kubernetes started complaining while pulling images:
Failed to pull image "my-private-gitlab.de/virtual_registries/container/1/berriai/litellm-database:main-latest": rpc error: code = FailedPrecondition desc
= failed to pull and unpack image "my-private-gitlab.de/virtual_registries/container/1/berriai/litellm-database:main-latest": failed commit on ref "index-sha256:bab68a6dbbbe056c3be13e9a2251efb69c6391040b1f6bd283877f1807c5fae2": unexpected commit digest sha256:6362986e52c62dc1bd9d8e7bfe2574f5ebf1f3ce1502d57eb5ff19b2c1055b4a, expected sha256:bab68a6dbbbe056c3be13e9a2251efb69c6391040b1f6bd283877f1807c5fae2: failed precondition
The upstream image is located at ghcr.io/berriai/litellm-database:main-latest. It might be an issue with the floating tag, but this is just a guess.
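For what it's worth, the "unexpected commit digest" part of the error is containerd's content verification: the bytes it pulls must hash to the digest it resolved for the tag. A stale cached manifest behind a re-pointed floating tag would trip exactly this check. A minimal sketch of the idea in Python (the helper and the sample payloads are hypothetical, not containerd code):

```python
import hashlib

def verify_digest(content: bytes, expected: str) -> None:
    """Mimic containerd's content verification: the fetched bytes
    must hash to the digest that was resolved for the tag."""
    actual = "sha256:" + hashlib.sha256(content).hexdigest()
    if actual != expected:
        raise ValueError(f"unexpected commit digest {actual}, expected {expected}")

# A floating tag gets re-pointed upstream...
fresh = b'{"manifests": ["new"]}'   # what the tag resolves to now
stale = b'{"manifests": ["old"]}'   # what a cache might still serve
expected = "sha256:" + hashlib.sha256(fresh).hexdigest()

verify_digest(fresh, expected)      # passes
try:
    verify_digest(stale, expected)  # stale body + new digest -> mismatch
except ValueError as e:
    print(e)
```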
docker pull fails with the following error message, independent of which image is pulled:
$ docker pull my-private-gitlab.de/virtual_registries/container/1/berriai/litellm-database:main-latest
main-latest: Pulling from virtual_registries/container/1/berriai/litellm-database
failed to decode referrers index: invalid character '<' looking for beginning of value
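That decode error is the signature of a JSON parser hitting an HTML body: docker queried the referrers endpoint and the virtual registry presumably answered with an HTML page (an error page, redirect target, or similar) instead of a JSON referrers index. Go's encoding/json phrases this as "invalid character '<' looking for beginning of value"; the same failure reproduced in Python, with a hypothetical response body:

```python
import json

# An HTML error page where docker expected a JSON referrers index:
body = "<html><body>404 page not found</body></html>"
try:
    json.loads(body)
except json.JSONDecodeError as e:
    # Go's encoding/json reports the equivalent failure as:
    #   invalid character '<' looking for beginning of value
    print(f"decode failed at the leading {body[0]!r}: {e}")
```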
Pulling the same image with crane works flawlessly:
$ time crane pull my-private-gitlab.de/virtual_registries/container/1/berriai/litellm-database:main-latest test.tar
crane pull test.tar 3,89s user 13,49s system 16% cpu 1:44,46 total
$ docker load --input test.tar
Loaded image: my-private-gitlab.de/virtual_registries/container/1/berriai/litellm-database:main-latest
Hi @clemensbeck, as it turned out in #5780 (comment 2963988463), seccompProfile.type: RuntimeDefault can only be enabled if cgroups are disabled.
cgroups are disabled by default. Therefore it might make sense to also set seccompProfile.type: RuntimeDefault by default.
Not sure what the "GitLab Helm Chart way" is if a user enables cgroups. My suggestion is either to document this or, preferably, to add sanity checks directly in the Helm Chart (using fail).
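Such a guard could look roughly like the following template snippet. This is only a sketch: the value paths (.Values.cgroups.enabled, .Values.securityContext.seccompProfile.type) are assumptions and would need adjusting to the chart's real layout.

```yaml
{{- /* Hypothetical sanity check for the Gitaly templates: refuse to render
       when cgroups and a RuntimeDefault seccomp profile are combined. */ -}}
{{- $seccomp := dig "seccompProfile" "type" "" (.Values.securityContext | default dict) }}
{{- if and .Values.cgroups.enabled (eq $seccomp "RuntimeDefault") }}
{{- fail "Gitaly: cgroups.enabled cannot be combined with seccompProfile.type=RuntimeDefault" }}
{{- end }}
```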
@jamesliu-gitlab Thanks for the hint. If you're not deep into Gitaly details, it's really tricky to tell if cgroups are properly enabled.
Spoiler: Setting cgroups.enabled to true in the Helm Chart looks promising but doesn't actually activate cgroups :-).
The init container seems to print the right output:
init-cgroups {"time":"2025-12-18T16:42:41.180820546Z","level":"INFO","msg":"cgroup setup configuration","GITALY_POD_UID":"2c9711af-4ec1-439e-9dfc-50a081a909ca","CGROUP_PATH":"/run/gitaly/cgroup/kubepods.slice/","OUTPUT_PATH":"/init-secrets/gitaly-pod-cgroup"}
init-cgroups {"time":"2025-12-18T16:42:41.182254729Z","level":"INFO","msg":"found cgroup path for Gitaly pod","cgroup_path":"/run/gitaly/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9711af_4ec1_439e_9dfc_50a081a909ca.slice"}
init-cgroups {"time":"2025-12-18T16:42:41.182326633Z","level":"INFO","msg":"changed cgroup permissions for Gitaly pod","cgroup_path":"/run/gitaly/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9711af_4ec1_439e_9dfc_50a081a909ca.slice"}
Startup of Gitaly prints "finished initializing cgroups". No error message.
gitaly {"component": "gitaly","subcomponent":"gitaly","level":"info","msg":"Starting Gitaly","pid":19,"time":"2025-12-18T16:42:45.434Z","version":"18.6.2"}
gitaly {"component": "gitaly","subcomponent":"gitaly","duration_ms":0,"level":"info","msg":"finished initializing cgroups","pid":19,"time":"2025-12-18T16:42:45.435Z"}
After some reading of the Gitaly source code, I found out that I was missing logs containing command.cgroup_path.
After adding the actual cgroup configuration for Gitaly, I can see the right output:
"command.cgroup_path":"/run/gitaly/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d7e27bf_8036_420f_bf56_305496d18952.slice/gitaly/gitaly-19/repos-2"
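For reference, a sketch of what that cgroup configuration might look like in the values file. The key names and numbers below are assumptions based on my reading of the chart's cgroups settings, not the exact values I used; verify them against your chart version's documentation.

```yaml
# Hypothetical values excerpt for Gitaly cgroups.
gitaly:
  cgroups:
    enabled: true
    memoryBytes: 15032385536    # per-pod ceiling
    cpuShares: 1024
    repositories:
      count: 10                 # produces the .../repos-<n> paths seen in the log
      memoryBytes: 7516192768
      cpuShares: 512
```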
My key takeaways: both of the following break the current implementation of Gitaly:

- pod-security.kubernetes.io/enforce: restricted (e.g. due to the hostPath mount of /sys/fs/cgroup)
- securityContext.seccompProfile.type: RuntimeDefault

Conclusion for !4605: seccompProfile.type: RuntimeDefault can be enabled only if cgroups are disabled in Gitaly.