Hi @lucus.li ,
Thanks for your insights, and sorry for the delayed reply. I was waiting for the 2.10.0 operator to drop so I could test this end-to-end without custom images, and I can now confirm that, as of operator 2.10.0 and the 9.10.0 chart, it is indeed possible to specify a custom issuer simply by using gateway.annotations and setting configureCertmanager to false to prevent the default annotation from being added. I hope this approach remains functional as Gateway API support evolves, as it is a very powerful customization capability that permits a variety of issuer strategies. Thanks
Hi @ratchade, Sure. I appreciate your further consideration. Thanks
Hi @ratchade, I haven't had a chance to try the PodSpec approach, but I'm guessing it would work. That said, as a system architect, it has a rather "hacky" feel to it and doesn't really pass my (and seemingly others') "smell test". Given that these two settings must be adjustable in order to run buildkit:rootless, it would be nice to see them given more formal support in the implementation and added to the documentation on how to replace Kaniko (see https://docs.gitlab.com/ci/docker/using_buildkit/). Note that this documentation teaches the now-deprecated approach of using annotations (see https://docs.gitlab.com/ci/docker/using_buildkit/ and https://github.com/kubernetes/kubernetes/issues/132952). It's your call, but I'd cast a vote for the cleaner, more K8s-mainstream approach. Thanks
Marc Ullman (23f98eef) at 19 Mar 18:33
Removed parameter type to align code with gopls.
Hi @avonbertoldi ,
I'm certainly interested in additional opinions, especially if there is a simpler solution here, but these are the settings I found necessary to run buildkit:rootless in a non-privileged runner with K8s 1.35.1. Thanks
Marc Ullman (e0650884) at 18 Mar 22:26
Removed unnecessary local variable.
Hi @avonbertoldi ,
Thanks for the careful review. Are you sure you want me to make this change, as the explicit typing is consistent with the rest of the file? Thanks
Hi @sroque-worcel ,
Here's the requested screenshot showing the security context for a transient runner operating with this change:
Regarding the test failures, indeed they do all appear to be "flakes". More specifically, they are either issues that are currently failing in main, unrelated Windows tests, or K8s tests known to be timing-dependent.
Addressed as part of item above.
Hi @sroque-worcel ,
Changes passed unit and in-situ testing and have been pushed. Pipeline is running now. Thanks
Marc Ullman (f6d0da1b) at 16 Mar 22:21
Refactor seccomp and AppArmor profile validation to use shared helpers
Marc Ullman (4aaf5eb7) at 16 Mar 22:08
Merge branch 'malvarez-consolidate-http-status-code-field' into 'main'
... and 17 more commits
Marc Ullman (56134df0) at 16 Mar 22:06
Merge branch 'lucas/gateway-https-redirect' into 'master'
... and 33 more commits
Hi @sroque-worcel ,
Thanks for the feedback. I've generated an alternative version that uses "if not" logic and factors out the common-mode error checking code. I'll push the change as soon as I have revalidated it. BTW, my test case is deploying a non-privileged k8s runner that can run buildkit:rootless successfully. As an aside, it was not trivial to figure out how to do this as, unless I'm missing something, the GitLab runner doc seems to leave some of the key details (like this fix) as an exercise for the reader. Thanks
Thanks @rsarangadharan , I've applied your suggested changes.
Marc Ullman (1810442d) at 15 Mar 20:31
Use "or later" rather than ">="
Regarding the AI code review suggestion to validate the minimum Kubernetes version at runtime: I believe this fix requires 1.30 or newer, and 1.30 is already an End-of-Life release that no one should be running.
Marc Ullman (5ad08a3f) at 11 Mar 22:33
Fixed minor formatting error
Add native seccomp_profile_type and app_armor_profile_type fields (with corresponding localhost_profile fields) to [runners.kubernetes.build_container_security_context] and the pod/helper/service/init security context sections in config.toml, so users can configure Seccomp and AppArmor profiles without relying on deprecated annotations or pod spec patches.
The Kubernetes executor currently lacks native support for configuring seccomp profiles and AppArmor profiles through the security context configuration. Users who need these settings face three problematic workarounds:
- Seccomp annotations: container.seccomp.security.alpha.kubernetes.io/build = "unconfined" was removed in Kubernetes 1.27.
- AppArmor annotations: still work but are deprecated in favor of the securityContext.appArmorProfile field (GA in K8s 1.30).
- Pod spec patches: require the FF_USE_ADVANCED_POD_SPEC_CONFIGURATION feature flag, embed JSON/YAML inside TOML, don't integrate with Helm charts, and lack validation. Prior to Runner 18.7.0, patches were silently dropped because the k8s client-go (v0.26) did not include the AppArmorProfile type.

Real-world impact: Ubuntu 23.10+ enables kernel.apparmor_restrict_unprivileged_userns=1 by default, which blocks rootless container image building tools (BuildKit, Podman) from mounting /proc inside RUN steps. The only fix is setting appArmorProfile: { type: Unconfined } on the build container, which is impossible through config.toml today.
This also blocks compliance-driven environments (e.g., Azure AKS policy enforcement) that require explicit seccomp profiles on all pods.
Add four new fields to both KubernetesContainerSecurityContext and KubernetesPodSecurityContext config structs:
[runners.kubernetes.build_container_security_context]
seccomp_profile_type = "Unconfined" # RuntimeDefault | Localhost | Unconfined
seccomp_profile_localhost_profile = "" # required when type = Localhost
app_armor_profile_type = "Unconfined" # RuntimeDefault | Localhost | Unconfined
app_armor_profile_localhost_profile = "" # required when type = Localhost
[runners.kubernetes.pod_security_context]
seccomp_profile_type = "RuntimeDefault"
seccomp_profile_localhost_profile = ""
app_armor_profile_type = "RuntimeDefault"
app_armor_profile_localhost_profile = ""
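The four new fields might be declared roughly as follows. This is a sketch, not the actual common/config.go code: the struct name and field names follow the proposal, but the real structs would carry TOML and env tags as well; only JSON tags are used here so the example stays self-contained and runnable.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical sketch of the four new container-level fields. The real
// structs in common/config.go would also carry toml and env tags; only
// json tags are shown so this runs with the standard library alone.
type KubernetesContainerSecurityContext struct {
	SeccompProfileType              string `json:"seccomp_profile_type,omitempty"`
	SeccompProfileLocalhostProfile  string `json:"seccomp_profile_localhost_profile,omitempty"`
	AppArmorProfileType             string `json:"app_armor_profile_type,omitempty"`
	AppArmorProfileLocalhostProfile string `json:"app_armor_profile_localhost_profile,omitempty"`
}

// parseContainerSecurityContext decodes a serialized security context
// section into the struct above.
func parseContainerSecurityContext(raw string) (KubernetesContainerSecurityContext, error) {
	var sc KubernetesContainerSecurityContext
	err := json.Unmarshal([]byte(raw), &sc)
	return sc, err
}

func main() {
	sc, err := parseContainerSecurityContext(
		`{"seccomp_profile_type": "Unconfined", "app_armor_profile_type": "Unconfined"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(sc.SeccompProfileType, sc.AppArmorProfileType) // Unconfined Unconfined
}
```

The pod-level struct would mirror this shape, so the same decoding path serves both sections.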
Behavior:
- When seccomp_profile_type is set, construct api.SeccompProfile{Type: ...} on the returned SecurityContext
- When app_armor_profile_type is set, construct api.AppArmorProfile{Type: ...} on the returned SecurityContext
- When either type is Localhost, require the corresponding localhost_profile field; log a warning and skip if missing

Config struct (common/config.go):
- KubernetesContainerSecurityContext: add 4 fields with TOML/JSON/env tags
- KubernetesPodSecurityContext: add 4 fields with TOML/JSON/env tags

Conversion logic (common/config.go):
- GetContainerSecurityContext(): set SeccompProfile and AppArmorProfile on the returned api.SecurityContext
- GetPodSecurityContext(): set SeccompProfile and AppArmorProfile on the returned api.PodSecurityContext
- Add toSeccompProfile() and toAppArmorProfile() helpers for validation and conversion

Documentation (docs/executors/kubernetes/index.md or equivalent):
- Add seccomp_profile_type, seccomp_profile_localhost_profile, app_armor_profile_type, and app_armor_profile_localhost_profile to the container security context options table

Rootless BuildKit image building on Ubuntu 23.10+ nodes:
[[runners]]
executor = "kubernetes"
[runners.kubernetes]
[runners.kubernetes.build_container_security_context]
app_armor_profile_type = "Unconfined"
seccomp_profile_type = "Unconfined"
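A minimal sketch of what the proposed toSeccompProfile() helper and its Localhost validation rule could look like. The SeccompProfile type and constants here are local stand-ins mirroring k8s.io/api/core/v1, so the sketch runs without client-go; the real helper would return the api types directly, and toAppArmorProfile() would mirror this shape.

```go
package main

import (
	"fmt"
	"log"
)

// Simplified stand-ins for k8s.io/api/core/v1 types, so this sketch
// runs without client-go.
type SeccompProfileType string

const (
	SeccompProfileTypeRuntimeDefault SeccompProfileType = "RuntimeDefault"
	SeccompProfileTypeLocalhost      SeccompProfileType = "Localhost"
	SeccompProfileTypeUnconfined     SeccompProfileType = "Unconfined"
)

type SeccompProfile struct {
	Type             SeccompProfileType
	LocalhostProfile *string
}

// toSeccompProfile converts the string-typed config values into a profile,
// enforcing the rule that Localhost requires a localhost_profile; invalid
// or incomplete settings log a warning and are skipped (nil result).
func toSeccompProfile(profileType, localhostProfile string) *SeccompProfile {
	switch SeccompProfileType(profileType) {
	case "":
		return nil // field unset: leave the security context untouched
	case SeccompProfileTypeLocalhost:
		if localhostProfile == "" {
			log.Println("warning: seccomp_profile_type=Localhost requires seccomp_profile_localhost_profile; skipping")
			return nil
		}
		return &SeccompProfile{Type: SeccompProfileTypeLocalhost, LocalhostProfile: &localhostProfile}
	case SeccompProfileTypeRuntimeDefault, SeccompProfileTypeUnconfined:
		return &SeccompProfile{Type: SeccompProfileType(profileType)}
	default:
		log.Printf("warning: unknown seccomp_profile_type %q; skipping", profileType)
		return nil
	}
}

func main() {
	fmt.Println(toSeccompProfile("Unconfined", "").Type)                   // Unconfined
	fmt.Println(toSeccompProfile("Localhost", "") == nil)                  // true: missing profile, skipped
	fmt.Println(*toSeccompProfile("Localhost", "p.json").LocalhostProfile) // p.json
}
```

Returning nil for the "skip" cases keeps the caller simple: it only attaches a profile to the security context when the helper produced one.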
Azure AKS compliance with RuntimeDefault seccomp:
[[runners]]
executor = "kubernetes"
[runners.kubernetes]
[runners.kubernetes.pod_security_context]
seccomp_profile_type = "RuntimeDefault"
Custom seccomp profile:
[[runners]]
executor = "kubernetes"
[runners.kubernetes]
[runners.kubernetes.build_container_security_context]
seccomp_profile_type = "Localhost"
seccomp_profile_localhost_profile = "profiles/my-profile.json"
Add test cases to existing security context test files following the project's Ginkgo/Gomega patterns:
- api.SecurityContext.SeccompProfile.Type is api.SeccompProfileTypeUnconfined when the type is set to Unconfined
- Both Type and LocalhostProfile are set when the type is Localhost
- api.SecurityContext.AppArmorProfile.Type is api.AppArmorProfileTypeUnconfined when the AppArmor type is Unconfined
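The expectations above could be exercised with plain assertions like the following. Everything here is a stand-in: the types replace k8s.io/api/core/v1 so the sketch is dependency-free, and buildSecurityContext is a hypothetical condensation of the conversion under test, not the runner's actual code.

```go
package main

import "fmt"

// Stand-ins for the k8s API enums referenced by the expectations above.
type SeccompProfileType string
type AppArmorProfileType string

const (
	SeccompProfileTypeUnconfined  SeccompProfileType  = "Unconfined"
	AppArmorProfileTypeUnconfined AppArmorProfileType = "Unconfined"
)

type SeccompProfile struct {
	Type             SeccompProfileType
	LocalhostProfile *string
}

type AppArmorProfile struct {
	Type AppArmorProfileType
}

type SecurityContext struct {
	SeccompProfile  *SeccompProfile
	AppArmorProfile *AppArmorProfile
}

// buildSecurityContext mimics the conversion under test: config strings in,
// api-style security context out.
func buildSecurityContext(seccompType, seccompLocalhost, appArmorType string) SecurityContext {
	var sc SecurityContext
	if seccompType != "" {
		p := &SeccompProfile{Type: SeccompProfileType(seccompType)}
		if seccompType == "Localhost" {
			p.LocalhostProfile = &seccompLocalhost
		}
		sc.SeccompProfile = p
	}
	if appArmorType != "" {
		sc.AppArmorProfile = &AppArmorProfile{Type: AppArmorProfileType(appArmorType)}
	}
	return sc
}

func main() {
	sc := buildSecurityContext("Unconfined", "", "Unconfined")
	fmt.Println(sc.SeccompProfile.Type == SeccompProfileTypeUnconfined)   // true
	fmt.Println(sc.AppArmorProfile.Type == AppArmorProfileTypeUnconfined) // true

	sc = buildSecurityContext("Localhost", "profiles/my-profile.json", "")
	fmt.Println(sc.SeccompProfile.LocalhostProfile != nil) // true
}
```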
Changelog: added
Kubernetes executor: Add seccomp and AppArmor profile configuration to container and pod security contexts
- The api.AppArmorProfile type is now available in the codebase.
- The AppArmorProfile field in SecurityContext is GA since Kubernetes 1.30; setting it against older API servers may cause pod creation to fail. This should be documented as a minimum version requirement.
- The ProcMount and SELinuxType fields follow the same pattern of string-typed config values converted to k8s API types, so this change is consistent with existing conventions.

Marc Ullman (95b56ee6) at 11 Mar 21:23
Add seccomp and AppArmor profile support to Kubernetes executor sec...