CD Foundation https://cd.foundation/ Open source community improving the world's ability to deliver software with security and speed.

CD Foundation Ambassador Chair 2026-2027 — Nominations Open https://cd.foundation/blog/2026/03/13/ambassador-chair-nominations-open/ Fri, 13 Mar 2026 Apply to be the new CDF Ambassador Chair by April 2.


The nominations for the next Continuous Delivery Foundation (CDF) Ambassador Chair are open.

Eligibility: Candidates must be current or former CDF Ambassadors. View current ambassadors and alumni.

Selection: The CDF Governing Board will review the nominations and elect the next chair.

Term: The Ambassador Chair is elected for a term of two (2) years, with a maximum of two (2) terms. Selection is made through nomination and election of active Ambassador program participants.

Application Deadline: April 2, 2026, 11:59 pm PT

What does it mean to be the Ambassador Chair?

The CD Foundation Ambassador Chair reviews all Ambassador applications and selects the new cohort every year. Once the cohort is announced, the chair is responsible for leading the monthly Ambassador meetings, moderating the Ambassador Slack channel, and guiding the cohort in contributing to and representing the CD Foundation in content and at events.

The Ambassador Chair is a voting member of the DevRel Committee and a contributor to the TOC. The Ambassador Chair may delegate representation on the TOC and in Outreach to other Ambassador Program members as needed.

Message from the current Ambassador Chair

As we introduce the new process of nomination for the Ambassador Chair, I want to express my heartfelt gratitude to the incredible CD Foundation community and leadership team. It has been an honor to serve alongside so many passionate individuals who tirelessly advance the mission of continuous delivery and open collaboration.

To all current and future ambassadors, thank you for your commitment to advancing the principles of continuous delivery and open collaboration. I’m immensely proud of what we’ve built together and can’t wait to see where the next generation of ambassadors takes it.

This transition opens the door for new voices and fresh perspectives to shape the Ambassador program’s next chapter. I encourage anyone who’s passionate about driving innovation, community, and knowledge-sharing to step forward or nominate someone who embodies these values. The CD Foundation thrives because of its people; thank you all for making this journey truly inspiring.

Garima Bajpai

Jenkins accepted as Mentor Organization | GSoC 2026 https://cd.foundation/blog/2026/03/06/jenkins-accepted-as-mentor-organization-gsoc-2026/ Fri, 06 Mar 2026


Contributed by Kris Stern | Originally posted on jenkins.io

Jenkins Accepted to Google Summer of Code 2026 🎉

We are thrilled to announce that Jenkins has been accepted as a mentoring organization for Google Summer of Code (GSoC) 2026! This marks our tenth year participating in this prestigious program, and we are excited to continue welcoming new contributors into our vibrant open-source community.

Why GSoC Matters

Google Summer of Code is an outstanding opportunity for aspiring developers to gain hands-on experience in open-source software development. Participants work on real-world projects, collaborate closely with experienced mentors, and make meaningful contributions to the Jenkins ecosystem. We are committed to providing a supportive and inclusive environment for all GSoC participants, and we look forward to another summer of fruitful contributions emerging from this year's program.

Preparing Your Application

If you’re interested in applying, now is the perfect time to get started. We encourage all interested candidates to start preparing their applications and to join our upcoming webinars where we will discuss potential project ideas and provide guidance on the application process. Stay tuned for more details on how to get involved and make the most of this exciting opportunity!

What’s on the near horizon for GSoC candidates?

  • Review the "information for contributors" page for detailed application guidelines.
  • Join our webinars for a walkthrough of project ideas; details will be made available over the coming days via our official communication channels. Recordings of these webinars will be made available after each session.
  • Use the official proposal template to structure and draft your project proposal.

We encourage you to start engaging with the community early, ask questions, and explore the available project ideas. The more familiar you are with Jenkins and its ecosystem, the stronger your proposal will be.

Refer to the GSoC timeline for a complete list of important dates.

CD Foundation Governing Board Election Results 2026 https://cd.foundation/blog/2026/03/04/gov-board-election-results/ Wed, 04 Mar 2026 Dadisi Sanyika re-elected as Governing Board Chair, Mark Waite as Committer Rep and Treasurer, and Steve Fenton as the new General Member Rep.


The results for the most recent CD Foundation Governing Board Elections are in!

    • Dadisi Sanyika, Sol Duara, re-elected as Governing Board Chair
    • Mark Waite, Independent, re-elected as Committer Representative
    • Steve Fenton, Octopus Deploy, elected as the new General Member Representative

We’re excited to have them on the board and look forward to their contributions.

Thank you to our outgoing member, Ole Lensmar, for dedicating his time and knowledge.

What is the Governing Board?

The CD Foundation Governing Board is responsible for strategic direction, business oversight, and business decisions. An overview of the Governing Board is set forth in the CDF Charter.

Governing Board Members

Dadisi Sanyika – Governing Board Chair

Dadisi is a co-author of the CDEvents Interoperability specification. He brings over a decade of experience improving software delivery at scale, including leading the Spinnaker engineering team at Apple Services Engineering. He continues to serve on the Spinnaker project Technical Oversight Committee and has contributed to the CDF through the Interoperability SIG, the TOC, Program Committees, and the Outreach Committee. Outside the Foundation, Dadisi is the Founder and CEO of Sol Duara, where he is building commercial infrastructure on top of the CDEvents standard. He is passionate about purpose-driven communities and holds the conviction that interoperability is shared infrastructure, not a competitive differentiator.

Connect with Dadisi on LinkedIn 📇

Mark Waite – Committer Representative

Mark is a Jenkins user and contributor, a core maintainer, and maintainer of the git plugin, the git client plugin, and several others. He is one of the authors of the “Improve a plugin” tutorial.

Connect with Mark on LinkedIn 📇

Steve Fenton – General Member Representative, Octopus Deploy

Steve Fenton is a Principal DevEx Researcher at Octopus Deploy, a DORA Community Guide, and an 8-time Microsoft MVP with more than two decades of experience in software delivery. He has written books on TypeScript (Apress, InfoQ), Octopus Deploy, and Web Operations. Steve has worked as a Software Engineer, SDET, Development Manager, and Director of Product and Data in a range of startups, SMEs, and enterprises.

Connect with Steve on LinkedIn 📇

Is Open Source Worth the Investment? https://cd.foundation/blog/2026/02/26/open-source-roi/ Thu, 26 Feb 2026 Why spend over 250K USD maintaining private forks when you can use and contribute to open source software and get at least a 4.8x return on your investment?


By now, most organizations know using Open Source Software (OSS) is great for business. The problem is that many of them hesitate to actively contribute their money and developers’ time. The C-levels and VCs want to know: “What’s the ROI?”

The most recent “ROI for Open Source Software Contribution” Report, published by The Linux Foundation, answers this question with concrete numbers for the different types of open source contributions.

The Cost of Keeping to Yourself

Based on the report’s data, maintaining private forks can cost up to 5,160 labour hours, or $258,000 USD, per release cycle.
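(For scale, $258,000 over 5,160 hours implies a loaded labour rate of roughly $50 USD per hour.)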

By simply using OSS, organizations get a 4.8x ROI, but why stop there? Being an active member of an open source foundation offers a comparable boost, and companies can unlock further gains by contributing to the code or the community.

As Chris Aniszczyk put it in the report’s foreword: 

“Active stewardship in an open source community allows a company to steer roadmaps toward their own strategic goals and even build products faster. Ultimately, moving from passive consumption to active contribution transforms open source from a cost-saving tool into a powerful engine for innovation, market leadership, and institutional resilience.”

— Chris Aniszczyk, Cloud Native Computing Foundation and The Linux Foundation

(Report highlight: ROI is 6x the cost of maintaining private forks.)

Tekton Pipelines v1.9.0 LTS: Continued Innovation and Stability https://cd.foundation/blog/2026/02/23/tekton-pipelines-v1-9-0/ Mon, 23 Feb 2026 Announcing Tekton Pipeline v1.9.0 LTS with a summary of all the improvements since v1.0.0.


Contributed by Vincent Demeester, Red Hat | Originally posted on tekton.dev

Tekton Pipelines v1.9.0 LTS

We’re excited to announce the release of Tekton Pipelines v1.9.0, our latest Long-Term Support (LTS) release! Since the milestone v1.0.0 release in May 2025, the project has continued to evolve with significant new features, performance improvements, and stability enhancements. This post summarizes the journey from v1.0.0 to v1.9.0, organized by LTS milestones.

Installation

kubectl apply -f https://infra.tekton.dev/releases/pipeline/previous/v1.9.0/release.yaml

v1.0.0 → v1.3.0 LTS (May – August 2025)

The first LTS after v1.0.0 focused on controller resilience and performance.

Features

  • Exponential backoff retry – Improved handling of transient webhook issues during Pod, TaskRun, and CustomRun creation. Configurable via the wait-exponential-backoff ConfigMap. Documentation
  • Controller HA improvements – Anti-affinity rules ensure controller replicas are scheduled on different nodes for better availability
  • PodTemplate param substitution – Enables multi-arch builds with Matrix by allowing param substitution in TaskRunSpecs’ PodTemplate. This lets you target nodes with specific architectures. Documentation

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: multi-arch-build
spec:
  pipelineSpec:
    tasks:
    - name: build
      matrix:
        params:
        - name: arch
          value: ["amd64", "arm64"]
      taskSpec:
        steps:
        - name: build
          image: golang:1.21
          script: |
            echo "Building for $(params.arch)"
            GOARCH=$(params.arch) go build -o app-$(params.arch) .            
      # PodTemplate with param substitution to schedule on correct architecture
      podTemplate:
        nodeSelector:
          kubernetes.io/arch: $(params.arch)
  • Configurable threading – THREADS_PER_CONTROLLER environment variable for tuning controller performance based on cluster size
  • OOM detection – TaskRuns that fail due to Out-Of-Memory (OOM) conditions now clearly show the termination reason in status

Fixes

  • Retryable validation errors no longer fail PipelineRuns
  • PVC cleanup improvements – already-deleted PVCs no longer cause errors
  • Fixed managed-by annotation propagation to Pods

Breaking Changes

  • Deprecated metrics removed – Use pipelinerun_total instead of pipelinerun_count, taskrun_total instead of taskrun_count, etc.
  • linux/arm images dropped – armv5, armv6, armv7 are no longer supported

v1.3.0 LTS → v1.6.0 LTS (August – October 2025)

The second LTS brought major new features for remote resolution and pipeline composition.

Features

  • Resolvers caching – Automatic caching for bundle, git, and cluster resolvers. Three modes available: always, never, and auto (default; caches only immutable references). Configurable cache size and TTL via ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: resolvers-feature-flags
  namespace: tekton-pipelines-resolvers
data:
  enable-bundles-resolver-caching: "true"
  bundles-resolver-cache-ttl: "1h"
  bundles-resolver-cache-size: "100"
  • Pipelines-in-Pipelines (TEP-0056) – Reference existing Pipelines as tasks within another Pipeline, enabling powerful composition and reuse patterns. Documentation

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: integration-pipeline
spec:
  tasks:
  - name: run-unit-tests
    taskRef:
      name: unit-test-pipeline
      kind: Pipeline
  - name: run-e2e-tests
    taskRef:
      name: e2e-test-pipeline
      kind: Pipeline
    runAfter:
    - run-unit-tests
  • managedBy field – Delegate PipelineRun/TaskRun lifecycle control to external controllers for custom orchestration scenarios. Documentation

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: externally-managed
spec:
  managedBy: custom-orchestrator
  pipelineRef:
    name: my-pipeline
  • Concurrent StepActions resolution – Significantly faster TaskRun startup when using multiple remote StepActions
  • Task timeout overrides – Override individual task timeouts via spec.taskRunSpecs[].timeout. Documentation

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: custom-timeouts
spec:
  pipelineRef:
    name: my-pipeline
  taskRunSpecs:
  - pipelineTaskName: slow-task
    timeout: 2h
  • Quota-aware PVC handling – PipelineRuns wait for quota availability instead of failing immediately
  • Array values in When expressions – More flexible conditional execution. Documentation
  • Step displayName – Human-readable names for steps for better observability
  • ARM64 tested releases – E2E tests now run on ARM64 architecture

Fixes

  • Fixed signal handling in SidecarLog for Kubernetes-native sidecar functionality
  • Pods for timed-out TaskRuns are now retained when keep-pod-on-cancel is enabled
  • Correct step status ordering when using StepActions

v1.6.0 LTS → v1.9.0 LTS (October 2025 – January 2026)

The latest LTS focuses on stability, observability, and pod configuration.

Features

  • hostUsers field in PodTemplate – Control user namespace isolation for Task pods. Documentation

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: secure-task
spec:
  podTemplate:
    hostUsers: false
  taskSpec:
    steps:
    - name: run
      image: alpine
      script: echo "Running with user namespace isolation"
  • Digest validation for HTTP resolver – Ensure integrity of remotely fetched resources by validating SHA256 digests
  • ServiceAccount inheritance for Affinity Assistants – Better workspace management with proper credentials
  • Improved error messages – Actual result size now included when exceeding maxResultSize for easier troubleshooting

Fixes

  • Major performance fix – Resolved issues causing massive invalid status updates that impacted API server load and stability
  • Parameter resolution – Fixed defaults with object references
  • Timeout handling – Prevented excessive reconciliation when timeout is disabled
  • Pod configuration errors – Early detection instead of waiting for timeout
  • Race conditions – Fixed TaskRun status issues during timeout handling
  • Sidecar stopping – Fixed 409 conflict errors by using Patch instead of Update
  • Matrix validation – Prevented panics from invalid result references (v1beta1)

LTS Support Policy

With v1.9.0 being an LTS release, it will receive security and critical bug fixes for an extended period. Users upgrading from previous LTS versions can expect a smooth transition:

From | To | Key Considerations
v1.0.0 | v1.3.0 LTS | Update metric dashboards for renamed metrics
v1.3.0 LTS | v1.6.0 LTS | Smooth upgrade, new features opt-in
v1.6.0 LTS | v1.9.0 LTS | Smooth upgrade, stability improvements

Read more about LTS releases and our support policy.

Looking Ahead

The Tekton Pipelines project continues to focus on:

  • Performance – Reducing reconciliation overhead and improving startup times
  • User experience – Better error messages, observability, and debugging tools
  • Resolver improvements – Working towards v2 resolvers with enhanced caching and usability
  • Kueue integration (TEP-0164) – Native support for Kueue job queueing to enable better resource management and fair-sharing in multi-tenant environments

We’re also making progress on our transition to the Cloud Native Computing Foundation (CNCF), which will provide Tekton with a neutral home and access to a broader ecosystem.

Get Involved

We invite you to try v1.9.0 LTS, provide feedback, and contribute to the project.

Thank you to all the contributors who made these releases possible!

Continuous Spotlight | Meet Luke Philips https://cd.foundation/blog/2026/02/20/continuous-spotlight-meet-luke-philips/ Fri, 20 Feb 2026 Meet Luke Philips, a member of our awesome Continuous Delivery Community and the CDEvents project.


✨ Getting to know the wonderful Continuous Delivery Community

Name: Luke Philips
Pronouns: He/Him
Location: Steamboat Springs, Colorado

Who are you?

I work as a Principal Software Engineer at a financial services company, having previously been at several major telcos and media companies. I enjoy the intersection of technical challenges and sharing knowledge and experiences through open source communities, and I contribute to the occasional Wardley Map research group for strategic thinking and analysis of industries I have been in. I enjoy being active, as well as finding all the best coffee shops and enjoying a good book.

Your hobbies?

Having grown up in Colorado, I greatly enjoy all the “Colorado Things”: the great outdoors, stunning mountain ranges, and the various activities amongst it all – hiking, trail running, mountain biking, skiing/backcountry skiing, camping. I also spend time advocating for local causes, most recently getting into transit support for my community. In lieu of TV, I watch way too much cooking YouTube.

What did you want to be when you were a kid?

Astronaut

What led you to a career in tech?

As a kid I assembled every Lego set imaginable and disassembled any household appliance I could get my hands on, sometimes to the chagrin of my parents. Building and exploring came naturally, and the schools I grew up at gave us very early Internet access (Unix terminals), which set off a long path of self-learning.

Do you remember your first open source contribution?

As a teenager learning what Linux was (kernel 2.0 days), finding the local LUG and asking questions and commenting on docs might count as an early “contribution.” My first GitHub commit was to Homebrew, for a new release of Vert.x.

How did you get involved in the Continuous Delivery Foundation?

Through some of the early collaborations between the CDF and OpenGitOps, presenting at an early combined GitOpsCon/cdCon event and sharing the success we had with CD and GitOps principles. Most recently, CDEvents has become the “missing” piece I’ve been looking for, gluing together the complex “Application Delivery” space.

What’s your favourite thing/project/tech to work on?

A few years ago I purchased a new-to-me home and discovered the Home Assistant project. It has been a wild journey ever since: every aspect of my home now has a Grafana dashboard associated with it, I’ve overhauled my home’s electrical systems, and there are various protocols running throughout the house.

Tell us about the thing you’re most proud of and why?

Partnering, helping, and collaborating with many of my colleagues to achieve their first conference talks. Every one was an incredible opportunity to connect more and share their brilliance with the broader community.

What is the best connection you’ve made through open source?

When you meet the person who created or maintains one of those obscure CLI, app, or database tools that you’ve become so dependent on in your day-to-day, it’s a true rockstar moment.

What’s your favourite open source conference?

Am I allowed to say anything other than cdCon? ArgoCon has been special for me and its community has been great. 

What is your #1 tip for getting involved in the community?

Keep showing up and have patience. My first GitHub contribution was rejected, as were many early conference talk proposals, but I kept learning, improving, and getting involved in other ways, and found success.

Where can we find you?

Connect with Luke ➡  Linktree

More from Luke

Watch the CDEvents, GitOps, and Argo CD talks and podcast Luke has recently been a part of.

Why Jenkins Users Need Post-Deployment Vulnerability Detection and Remediation https://cd.foundation/blog/2026/02/17/jenkins-ortelius/ Tue, 17 Feb 2026 Jenkins is great, but with Ortelius, it's that much better. Find out why.


Contributed by Tracy Ragan, DeployHub

How Ortelius Brings SBOM Intelligence and Deployment Awareness to the Final Mile of DevSecOps

For over a decade, Jenkins has been the backbone of Continuous Integration and Continuous Delivery for millions of developers worldwide. It has shaped how teams automate builds, orchestrate pipelines, and accelerate delivery. But even with strong CI/CD practices, the industry faces a reality that Jenkins alone cannot solve: vulnerabilities don’t stop appearing once code is shipped.

In fact, the most dangerous risks emerge after deployment, precisely when systems are live, supporting real users, and running on distributed infrastructure. This is where traditional DevSecOps practices stop short, and where a new defensive discipline must take over: post-deployment vulnerability detection and rapid remediation.

The Security Gap in CI/CD Pipelines

Jenkins excels at automation before deployment: building, testing, scanning, packaging, and releasing software. Pre-deployment security tools such as SCA and SAST integrate directly into the pipeline, helping developers catch issues early in keeping with the industry’s offensive, ‘shift-left’ commitment.

But the threat landscape has shifted right and needs defensive measures:

  • New CVEs are published every day
  • Open source packages become vulnerable long after release
  • Attackers exploit known weaknesses in production faster than organizations can patch
  • Distributed, cloud-native environments make it harder to know exactly what is running where

By the time a vulnerability is announced, the code is already deployed. Jenkins pipelines have long finished their job, and the software continues running, often silently exposed.

This is the “last mile” of DevSecOps where most teams struggle. They know what they built but not what is actually running. They know a new vulnerability exists, but not which environments it affects. And they know it must be remediated quickly, yet lack a clear path to resolution.

Introducing Ortelius: Post-Deployment Intelligence for Jenkins Users

Ortelius, a sandbox project under the Continuous Delivery Foundation, was created to solve this last-mile challenge. It provides a standardized, open-source method for tracking what has been deployed, where, and with which dependent open-source packages.

Ortelius creates a deployment digital twin, a living model of every application version across clusters and environments. By mapping SBOM metadata to real deployments, Ortelius gives teams the one capability CI/CD pipelines cannot deliver on their own:

  • Real-time insight into how newly discovered vulnerabilities impact running systems.

For Jenkins users, this shifts security thinking from a pre-deployment exercise to a continuous defensive posture.

Why the Jenkins Community Should Care

1. Vulnerabilities evolve after your pipelines finish

A package that was safe at build time may become high-risk days later. Without post-deployment detection, teams are flying blind.

2. SBOMs only become powerful when tied to deployments

Jenkins can generate SBOMs. Ortelius consumes the SBOM to show where those components ended up, and alerts you when they become dangerous.

3. Faster MTTR (Mean Time to Remediation) protects production

Jenkins excels at automation. Ortelius adds the intelligence needed to trigger targeted remediation pipelines instead of broad, time-consuming patch cycles.

4. Cleaner signal, less noise

Ortelius eliminates false alarms by connecting CVEs to the specific versions actually running in production, not every possible version that ever passed through the pipeline.

5. Open source, community-driven, and built to complement Jenkins

Because Ortelius is open source and part of the CDF ecosystem, it integrates naturally into Jenkins-based DevSecOps workflows.

A New Architecture for the CDF Community: The Deployment Digital Twin

Ortelius introduces a concept that is rapidly becoming essential to large-scale DevSecOps: the deployment digital twin.

This model continuously maps:

  • Component versions
  • Open source libraries
  • SBOM details
  • Deployment locations (clusters, namespaces, environments)

With this real-time knowledge, newly published vulnerabilities can be traced instantly to affected applications, something CI pipelines alone cannot do.

For Jenkins users, a digital twin becomes the authoritative record of “what’s really running,” allowing security teams and developers to respond with surgical precision.

Post-Deployment Remediation: The Missing Step in Secure Delivery

Currently, the Ortelius community is working on auto-remediation once a new vulnerability is identified. In this step, Jenkins re-enters the picture:

Ortelius gives Jenkins pipelines actionable intelligence, enabling them to:

  • Automatically generate targeted remediation branches
  • Rebuild affected components with secure package versions
  • Trigger re-tests and approvals
  • Roll out patched versions across environments
  • Create auditable compliance records

By connecting vulnerability impact → code change → rebuild → redeploy, Jenkins becomes an integral component of a trusted, closed-loop remediation workflow.

Why This Matters Now

The software supply chain is increasingly weaponized. Attackers move faster than traditional patch cycles. Developers face alert fatigue. Platform teams must protect distributed, cloud-native workloads running across hybrid environments.

The CDF community is uniquely positioned to champion a new model of security, one that extends beyond the build pipeline into real-time production awareness.

Post-deployment detection and remediation is no longer optional. It is the final mile of continuous delivery. And Ortelius is giving Jenkins users a powerful, open source way to close the gap, including:

Capability Area | Jenkins | Jenkins with Ortelius
Security Focus | Pre-deployment, offensive security (SAST, SCA, scans during build) | Continuous, defensive security extending into post-deployment
Vulnerability Visibility After Release | No native visibility once pipelines complete | Real-time visibility into newly disclosed CVEs affecting running systems
Handling New CVEs | CVEs discovered after deployment require manual investigation | CVEs are automatically correlated to deployed versions via SBOM mapping
SBOM Usage | SBOMs may be generated but remain static artifacts | SBOMs are actively consumed and tied to live deployments
Knowledge of What’s Running | Knows what was built, not what is currently running | An authoritative view of what is actually running across environments
Deployment Awareness | Limited to pipeline execution history | Deployment digital twin tracks versions, locations, and dependencies
Signal vs. Noise | Broad alerts across all historical builds and versions | Precise alerts limited to vulnerable components actually in production
Mean Time to Remediation (MTTR) | Slow, manual triage and broad patch cycles | Faster MTTR through targeted, intelligence-driven remediation
Threat Landscape Analysis | Largely manual and error-prone | Automatic threat landscape identification across clusters and environments
Compliance & Audit Readiness | Fragmented evidence across tools | Auditable chain: vulnerability → fix → rebuild → redeploy
Architecture Model | Pipeline-centric | Pipeline + deployment digital twin
Role of Jenkins | Ends at delivery | Becomes part of a closed-loop remediation system

Join the Movement: Help Build the Future of Defensive DevSecOps

Ortelius is a community-driven project, and its roadmap is shaped by practitioners who understand the challenges of modern software delivery. If you are a Jenkins user, DevSecOps engineer, or platform leader interested in strengthening post-deployment security, we invite you to get involved.

Join the Ortelius Open Source Project at Ortelius and bring your expertise to the community, shaping the future of post-deployment vulnerability defense.

Together, we can extend the principles of continuous delivery into continuous protection—and build a safer, more resilient open source ecosystem for everyone.

Apply to be a 2026 CDF Award Officer https://cd.foundation/blog/2026/02/10/2026-award-officers/ Tue, 10 Feb 2026

The CDF Awards process will start in March and end in May 2026. We are looking for two (2) award officers to observe the nomination and voting to ensure the transparency and integrity of our awards. View Award Guidelines and 2026 Award Winners.

Commitment

  • 1-2 hours a month for 3 months
  • Review documentation, forms, and results
  • Communication will be done asynchronously

Applications

Applications are now open and will close on Sunday, February 22, 2026 at 11:59 PM PST.

Thank you to last year’s award officers: Giorgi Keratishvili and Kris Stern. If you message them on the CDF Slack, they’d be happy to tell you about their experience.

CDF 2025 Award Officers

Blueprinting Security in CI/CD: Building Trust Through Open Source https://cd.foundation/blog/2026/02/06/blueprinting-security/ Fri, 06 Feb 2026 Visualize how open source tools, modern frameworks, and continuous validation converge to build resilient, transparent software pipelines.


Contributed by Kate Scarcella

In modern software delivery, speed without security is a false economy. Every commit, container, and deployment represents a potential point of compromise, yet each is also an opportunity to embed trust.

The CI/CD Security Blueprint for Open Source Tooling helps teams visualize how open source tools, modern frameworks, and continuous validation converge to build resilient, transparent software pipelines.

The Problem We’re Solving

Most pipelines are built for velocity, not veracity. Developers commit code quickly, builds trigger automatically, and deployments run within minutes — but too often, verification trails are incomplete or nonexistent. The blueprint addresses that gap by making security visible, measurable, and repeatable across three critical phases of the Continuous Integration/Continuous Delivery (CI/CD) lifecycle.

(This set of tools is only an example; you can use other tools. Check out our Cybersecurity Guide for more.)

Phase 1: Code and Pre-Build

Security begins long before the first build. This phase focuses on the source of truth: code, dependencies, and configuration. Teams can scan repositories for vulnerabilities and secrets before merging, analyze code for unsafe patterns or outdated dependencies, and generate an early Software Bill of Materials to establish dependency visibility.

Open source tools such as Semgrep and OSV-Scanner make these practices accessible and automatable. When integrated directly into pull requests or pre-commit hooks, they ensure unsafe code never reaches the build stage. This is where security truly shifts left: catching issues before they can propagate downstream.

Phase 2: Build and Deploy

Once code is merged, trust must be constructed. The build and deploy phase verifies both what is being built and how it is being built. Artifacts should be scanned, signed, and accompanied by provenance data and attestations.

Tools like Trivy, Cosign, and Open Policy Agent (OPA) support this process. Trivy identifies vulnerabilities in containers and artifacts, Cosign ensures each build is cryptographically signed, and OPA enforces the rules that govern what can be promoted or deployed. Together, they produce verifiable outputs: every artifact can prove its origin, composition, and integrity.

Phase 3: Post-Deploy

Security doesn’t end at deployment; it evolves. Once code runs in production, new data becomes the input for securing the next release. Teams can monitor workloads for anomalies, detect configuration drift in real time, and feed those findings back into earlier stages.

Open source tools such as Falco and OPA extend visibility into runtime behavior. Post-deploy monitoring closes the loop between development and operations, turning detection in one release into prevention in the next.

The Role of Platform Engineering

Platform engineering has become the connective tissue between development, operations, and security. Instead of forcing each team to assemble its own CI/CD pipeline, platform engineers create secure, reusable blueprints: pre-built paths that integrate open source tooling and policy by default.

In practice, platform engineering operationalizes the CI/CD Security Blueprint. Code and Pre-Build controls such as Semgrep and OSV-Scanner are built into the developer experience. Build and Deploy gates for scanning, signing, and policy enforcement are standardized within shared pipelines. Post-Deploy monitoring through Falco and OPA is unified under centralized observability systems.

This model transforms security from a checklist into a platform capability. Developers move faster not because guardrails are removed, but because trust is built into the environment itself. Platform engineering turns the blueprint into infrastructure: something consistent, observable, and scalable across teams.

Continuous Validation: The Living Loop

Beneath the three stages runs the engine of continuous trust: evidence, attestations, SBOMs, and monitoring. Each build, deployment, and runtime event generates artifacts that verify the software’s integrity. Together, these form a cycle of continuous validation, not a one-time audit but a living feedback loop between tools, frameworks, and teams.

Security evidence becomes a language that development and compliance can both understand.

Framework Alignment

The blueprint naturally aligns with frameworks such as the NIST Secure Software Development Framework (SSDF), NIST CSF 2.0, and the EU Cyber Resilience Act (CRA). Instead of referencing them as checkboxes, the blueprint translates their principles into daily automation. For example, identifying dependencies corresponds to running OSV-Scanner; protecting artifacts aligns with signing them through Cosign; and monitoring workloads is realized through Falco.

Policy statements become actions, and compliance becomes measurable and verifiable by design.

A Culture Shift in Security

This approach is not about adding more tools but about designing security as an architectural layer that developers can see, trust, and extend. In open source, transparency is a strength. When every phase of the CI/CD process is observable and verifiable, trust becomes a measurable quality rather than a promise.

Discover other tools that could work for you and your organization in our full CI/CD Cybersecurity Guide.

How Temporal Powers Reliable Cloud Operations at Netflix https://cd.foundation/blog/community/2026/02/03/netflix-spinnaker/ Tue, 03 Feb 2026 Real-world use case: how Netflix uses Temporal and Spinnaker


Contributed by Jacob Meyers and Rob Zienert | Originally posted on netflixtechblog.com

Temporal is a Durable Execution platform that allows you to write code “as if failures don’t exist”. It has become increasingly critical to Netflix since its initial adoption in 2021, with users ranging from the operators of our Open Connect global CDN to our Live reliability teams now depending on Temporal to operate their business-critical services. In this post, I’ll give a high-level overview of what Temporal offers users, the problems we were experiencing operating Spinnaker that motivated its initial adoption at Netflix, and how Temporal helped us reduce the rate of transient deployment failures at Netflix from 4% to 0.0001%.

A Crash Course on (some of) Spinnaker

Spinnaker is a multi-cloud continuous delivery platform that powers the vast majority of Netflix’s software deployments. It’s composed of several (mostly nautical themed) microservices. Let’s double-click on two in particular to understand the problems we were facing that led us to adopting Temporal.

In case you’re completely new to Spinnaker, Spinnaker’s fundamental tool for deployments is the Pipeline. A Pipeline is composed of a sequence of steps called Stages, which themselves can be decomposed into one or more Tasks, or other Stages. An example deployment pipeline for a production service may consist of these stages: Find Image -> Run Smoke Tests -> Run Canary -> Deploy to us-east-2 -> Wait -> Deploy to us-east-1.

An example Spinnaker Pipeline for a Netflix service

Pipeline configuration is extremely flexible. You can have Stages run completely serially, one after another, or you can have a mix of concurrent and serial Stages. Stages can also be executed conditionally based on the result of previous stages. This brings us to our first Spinnaker service: Orca. Orca is the orca-stration engine of Spinnaker. It’s responsible for managing the execution of the Stages and Tasks that a Pipeline unrolls into and coordinating with other Spinnaker services to actually execute them.

One of those collaborating services is called Clouddriver. In the example Pipeline above, some of the Stages will require interfacing with cloud infrastructure. For example, the canary deployment involves creating ephemeral hosts to run an experiment, and a full deployment of a new version of the service may involve spinning up new servers and then tearing down the old ones. We call these sorts of operations that mutate cloud infrastructure Cloud Operations. Clouddriver’s job is to decompose and execute Cloud Operations sent to it by Orca as part of a deployment. Cloud Operations sent from Orca to Clouddriver are relatively high level (for example: createServerGroup), so Clouddriver understands how to translate these into lower-level cloud provider API calls.

Pain points in the interaction between Orca and Clouddriver, and the implementation details of Cloud Operation execution in Clouddriver, are what led us to look for new solutions and ultimately migrate to Temporal, so we’ll next look at the anatomy of a Cloud Operation. Cloud Operations in the OSS version of Spinnaker still work as described below, so motivated readers can follow along in source code; however, our migration to Temporal is entirely closed-source, following a fork from OSS in 2020 to allow Netflix to make larger pivots to the product such as this one.

The Original Cloud Operation Flow

A Cloud Operation’s execution goes something like this:

  1. Orca, in orchestrating a Pipeline execution, decides a particular Cloud Operation needs to be performed. It sends a POST request to Clouddriver’s /ops endpoint with an untyped bag-of-fields.
  2. Clouddriver attempts to resolve the operation Orca sent into a set of AtomicOperations, internal operations that only Clouddriver understands.
  3. If the payload was valid and Clouddriver successfully resolved the operation, it will immediately return a Task ID to Orca.
  4. Orca will immediately begin polling Clouddriver’s GET /task/<id> endpoint to keep track of the status of the Cloud Operation.
  5. Asynchronously, Clouddriver begins executing AtomicOperations using its own internal orchestration engine. Ultimately, the AtomicOperations resolve into cloud provider API calls. As the Cloud Operation progresses, Clouddriver updates an internal state store to surface progress to Orca.
  6. Eventually, if all went well, Clouddriver will mark the Cloud Operation complete, which eventually surfaces to Orca in its polling. Orca considers the Cloud Operation finished, and the deployment can progress.
A sequence diagram of a Cloud Operation execution

This works well enough on the happy path, but veer off the happy path and dragons begin to emerge:

  1. Clouddriver has its own internal orchestration system independent of Orca to allow Orca to query the progress of Cloud Operation. This is largely undifferentiated lifting relative to Clouddriver’s goal of actuating cloud infrastructure changes, and ultimately adds complexity and surface area for bugs to the application. Additionally, Orca is tightly coupled to Clouddriver’s orchestration system — it must understand how to poll Clouddriver, interpret the status, and handle errors returned by Clouddriver.
  2. Distributed systems are messy — networks and external services are unreliable. While executing a Cloud Operation, Clouddriver could experience transient network issues, or the cloud provider it’s attempting to call into may be having an outage, or any number of issues in between. Despite all of this, Clouddriver must be as reliable as reasonably possible as a core platform service. To deal with this shape of issue, Clouddriver internally evolved complex retry logic, further adding cognitive complexity to the system.
  3. Remember how a Cloud Operation gets decomposed by Clouddriver into AtomicOperations? Sometimes, if there’s a failure in the middle of a Cloud Operation, we need to be able to roll back what was done in AtomicOperations prior to the failure. This led to a homegrown Saga framework being implemented inside Clouddriver. While this did result in a big step forward in reliability of Cloud Operations facing transient failures because the Saga framework also allowed replaying partially-failed Cloud Operations, it added yet more undifferentiated lifting inside the service.
  4. The task state kept by Clouddriver was instance-local. In other words, if the Clouddriver instance carrying out a Cloud Operation crashed, that Cloud Operation state was lost, and Orca would eventually time out polling for the task status. The Saga implementation mentioned above mitigated this for certain operations, but was not widely adopted across all cloud providers supported by Spinnaker.

We introduced a lot of incidental complexity into Clouddriver in an effort to keep Cloud Operation execution reliable, and despite all this deployments still failed around 4% of the time due to transient Cloud Operation failures.

Now, I can already hear you saying: “So what? Can’t people re-try their deployments if they fail?” While true, some pipelines take days to complete for complex deployments, and a failed Cloud Operation mid-way through requires re-running the whole thing. This was detrimental to engineering productivity at Netflix in a non-trivial way. Rather than continue trying to build a faster horse, we began to look elsewhere for our reliable orchestration requirements, which is where Temporal comes in.

Temporal: Basic Concepts

Temporal is an open source product that offers a durable execution platform for your applications. Durable execution means that the platform will ensure your programs run to completion despite adverse conditions. With Temporal, you organize your business logic into Workflows, which are a deterministic series of steps. The steps inside of Workflows are called Activities, which is where you encapsulate all your non-deterministic logic that needs to happen in the course of executing your Workflows. As your Workflows execute in processes called Workers, the Temporal server durably stores their execution state so that in the event of failures your Workflows can be retried or even migrated to a different Worker. This makes Workflows incredibly resilient to the sorts of transient failures Clouddriver was susceptible to. Here’s a simple example Workflow in Java that runs an Activity to send an email once every 30 days:


import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityOptions;
import io.temporal.workflow.Workflow;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;
import java.time.Duration;

@WorkflowInterface
public interface SleepForDaysWorkflow {
    @WorkflowMethod
    void run();
}

public class SleepForDaysWorkflowImpl implements SleepForDaysWorkflow {

    private final SendEmailActivities emailActivities =
            Workflow.newActivityStub(
                    SendEmailActivities.class,
                    ActivityOptions.newBuilder()
                            .setStartToCloseTimeout(Duration.ofSeconds(10))
                            .build());

    @Override
    public void run() {
        while (true) {
            // Activities already carry retries/timeouts via options.
            emailActivities.sendEmail();

            // Pause the workflow for 30 days before sending the next email.
            Workflow.sleep(Duration.ofDays(30));
        }
    }
}

@ActivityInterface
public interface SendEmailActivities {
    void sendEmail();
}
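
For completeness, here is one way such a Workflow and its Activities could be hosted in a Worker process. This is a minimal sketch using the open-source Temporal Java SDK; the task queue name and the SendEmailActivitiesImpl class are illustrative stand-ins, not part of the original example.

import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.worker.Worker;
import io.temporal.worker.WorkerFactory;

public class SleepForDaysWorker {
    public static void main(String[] args) {
        // Connect to a locally running Temporal server.
        WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
        WorkflowClient client = WorkflowClient.newInstance(service);
        WorkerFactory factory = WorkerFactory.newInstance(client);

        // Workers poll a task queue for workflow and activity tasks.
        Worker worker = factory.newWorker("sleep-for-days-task-queue");
        worker.registerWorkflowImplementationTypes(SleepForDaysWorkflowImpl.class);
        // Hypothetical implementation of the SendEmailActivities interface above.
        worker.registerActivitiesImplementations(new SendEmailActivitiesImpl());

        factory.start(); // begin polling; workflow state is persisted by the server
    }
}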

There are some interesting things to note about this Workflow:

  1. Workflows and Activities are just code, so you can test them using the same techniques and processes as the rest of your codebase.
  2. Activities are automatically retried by Temporal with configurable exponential backoff (see the sketch after this list).
  3. Temporal manages all the execution state of the Workflow, including timers (like the one used by Workflow.sleep). If the Worker executing this workflow were to have its power cable unplugged, Temporal would ensure another Worker continues to execute it (even during the 30 day sleep).
  4. Workflow sleeps are not compute-intensive, and they don’t tie up the process.
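
To make point 2 concrete, here is a hedged sketch of the same workflow with an explicit retry policy on the activity stub, using the Temporal Java SDK’s RetryOptions. The specific intervals and attempt counts are illustrative, not recommendations from the original post.

import io.temporal.activity.ActivityOptions;
import io.temporal.common.RetryOptions;
import io.temporal.workflow.Workflow;
import java.time.Duration;

public class SleepForDaysWithRetriesImpl implements SleepForDaysWorkflow {

    private final SendEmailActivities emailActivities =
            Workflow.newActivityStub(
                    SendEmailActivities.class,
                    ActivityOptions.newBuilder()
                            .setStartToCloseTimeout(Duration.ofSeconds(10))
                            .setRetryOptions(
                                    RetryOptions.newBuilder()
                                            .setInitialInterval(Duration.ofSeconds(1)) // first retry after 1s
                                            .setBackoffCoefficient(2.0)                // then 2s, 4s, 8s, ...
                                            .setMaximumAttempts(5)                     // stop after 5 attempts
                                            .build())
                            .build());

    @Override
    public void run() {
        while (true) {
            // If sendEmail() throws, Temporal retries it per the policy above.
            emailActivities.sendEmail();
            Workflow.sleep(Duration.ofDays(30));
        }
    }
}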

You might already begin to see how Temporal solves a lot of the problems we had with Clouddriver. Ultimately, we decided to pull the trigger on migrating Cloud Operation execution to Temporal.

Cloud Operations with Temporal

Today, we execute Cloud Operations as Temporal workflows. Here’s what that looks like.

  1. Orca, using a Temporal client, sends a request to Temporal to execute an UntypedCloudOperationRunner Workflow. The contract of the Workflow looks something like this:

@WorkflowInterface
interface UntypedCloudOperationRunner {
  /**
   * Runs a cloud operation given an untyped payload.
   *
   * WorkflowResult is a thin wrapper around OutputType providing a standard contract for
   * clients to determine if the CloudOperation was successful and fetching any errors.
   */
  @WorkflowMethod
  fun <OutputType> run(stageContext: Map<String, Any?>, operationType: String): WorkflowResult<OutputType>
}

2. The Clouddriver Temporal worker is constantly polling Temporal for work. A worker will eventually see a task for an UntypedCloudOperationRunner Workflow and start executing it.

3. Similar to before with resolution into AtomicOperations, Clouddriver does some pre-processing of the bag-of-fields in stageContext and resolves it to a strongly typed implementation of the CloudOperation Workflow interface based on the operationType input and the stageContext:


interface CloudOperation<I, O> {
  @WorkflowMethod
  fun operate(input: I, credentials: AccountCredentials): O
}

4. Clouddriver starts a Child Workflow execution of the CloudOperation implementation it resolved. The child workflow will execute Activities which handle the actual cloud provider API calls to mutate infrastructure.

5. Orca uses its Temporal Client to await completion of the UntypedCloudOperationRunner Workflow. Once it’s complete, Temporal notifies the client and sends the result and Orca can continue progressing the deployment.

Sequence diagram of a Cloud Operation execution with Temporal
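
To illustrate steps 1 and 5 from the client side, here is a rough sketch of how a caller like Orca might start and await the runner Workflow using the open-source Temporal Java SDK. The task queue name and the payload are hypothetical, and the actual Orca integration is closed-source.

import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowOptions;
import io.temporal.serviceclient.WorkflowServiceStubs;
import java.util.Map;

public class OrcaClientSketch {
    public static void main(String[] args) {
        WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
        WorkflowClient client = WorkflowClient.newInstance(service);

        // Step 1: create a stub for the runner Workflow on Clouddriver's task queue.
        UntypedCloudOperationRunner runner = client.newWorkflowStub(
                UntypedCloudOperationRunner.class,
                WorkflowOptions.newBuilder()
                        .setTaskQueue("clouddriver-cloud-operations") // hypothetical queue name
                        .build());

        // Step 5: this call starts the Workflow and blocks until it completes,
        // returning the result (or surfacing a failure) to the caller.
        WorkflowResult<Map<String, Object>> result =
                runner.run(Map.of("serverGroupName", "app-v001"), "resizeServerGroup");
    }
}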

Results and Lessons Learned from the Migration

A shiny new architecture is great, but equally important is the non-glamorous work of refactoring legacy systems to fit the new architecture. How did we transparently integrate Temporal into a system that virtually every Netflix engineer depends on?

The answer, of course, is a combination of abstraction and dynamic configuration. We built a CloudOperationRunner interface in Orca to encapsulate whether the Cloud Operation was being executed via the legacy path or Temporal. At runtime, Fast Properties (Netflix’s dynamic configuration system) determined which path a stage that needed to execute a Cloud Operation would take. We could set these properties quite granularly — by Stage type, cloud provider account, Spinnaker application, Cloud Operation type (createServerGroup), and cloud provider (either AWS or Titus in our case). The Spinnaker services themselves were the first to be deployed using Temporal, and within two quarters, all applications at Netflix were onboarded.
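
As a rough illustration of that seam, here is what the runner abstraction and its dynamic-configuration switch might look like. The CloudOperationRunner name comes from the post; the method shape and the property-backed selector are hypothetical.

import java.util.Map;
import java.util.function.BooleanSupplier;

interface CloudOperationRunner {
    void run(Map<String, Object> stageContext, String operationType);
}

class SwitchingCloudOperationRunner implements CloudOperationRunner {
    private final CloudOperationRunner legacyRunner;
    private final CloudOperationRunner temporalRunner;
    private final BooleanSupplier useTemporal; // backed by dynamic config, e.g. Fast Properties

    SwitchingCloudOperationRunner(CloudOperationRunner legacyRunner,
                                  CloudOperationRunner temporalRunner,
                                  BooleanSupplier useTemporal) {
        this.legacyRunner = legacyRunner;
        this.temporalRunner = temporalRunner;
        this.useTemporal = useTemporal;
    }

    @Override
    public void run(Map<String, Object> stageContext, String operationType) {
        // Evaluated per execution, so the rollout can be widened or rolled back
        // at runtime without redeploying Orca.
        if (useTemporal.getAsBoolean()) {
            temporalRunner.run(stageContext, operationType);
        } else {
            legacyRunner.run(stageContext, operationType);
        }
    }
}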

Impact

What did we have to show for it all? With Temporal as the orchestration engine for Cloud Operations, the percentage of deployments that failed due to transient Cloud Operation failures dropped from 4% to 0.0001%. For those keeping track at home, that’s a four-and-a-half-order-of-magnitude reduction. Virtually eliminating this failure mode for deployments was a huge win for developer productivity, especially for teams with long and complex deployment pipelines.

Beyond the improvement in deployment success metrics, we saw a number of other benefits:

  1. With Temporal as the intermediary, Orca no longer needs to communicate directly with Clouddriver to start Cloud Operations or poll their status. The services are less coupled, which is a win for maintainability.
  2. Speaking of maintainability, with Temporal doing the heavy lifting of orchestration and retries inside of Clouddriver, we got to remove a lot of the homegrown logic we’d built up over the years for the same purpose.
  3. Since Temporal manages execution state, Clouddriver instances became stateless and Cloud Operation execution can bounce between instances with impunity. We can treat Clouddriver instances more like cattle and enable things like Chaos Monkey for the service, which we were previously prevented from doing.
  4. Migrating Cloud Operation steps into Activities was a forcing function to re-write the logic to be idempotent. Since Temporal retries activities by default, it’s generally recommended they be idempotent. This alone fixed a number of issues that existed previously when operations were retried in Clouddriver.
  5. We set the retry timeout for Activities in Clouddriver to be two hours by default. This gives us a long leash to fix forward or roll back Clouddriver if we introduce a regression before customer deployments fail — to them, it might just look like a deployment is taking longer than usual.
  6. Cloud Operations are much easier to introspect than before. Temporal ships with a great UI to help visualize Workflow and Activity executions, which is a huge boon for debugging live Workflows executing in production. The Temporal SDKs and server also emit a lot of useful metrics.
Execution of a resizeServerGroup Cloud Operation as seen from the Temporal UI. This operation executes 3 Activities: DescribeAutoScalingGroup, GetHookConfigurations, and ResizeServerGroup

Lessons Learned

With the benefit of hindsight, there are also some lessons we can share from this migration:

1. Avoid unnecessary Child Workflows: Structuring Cloud Operations as an UntypedCloudOperationRunner Workflow that starts Child Workflows to actually execute the Cloud Operation’s logic was unnecessary and the indirection made troubleshooting more difficult. There are situations where Child Workflows are appropriate, but in this case we were using them as a tool for code organization, which is generally unnecessary. We could’ve achieved the same effect with class composition in the top-level parent Workflow.

2. Use single argument objects: At first, we structured Workflow and Activity functions with variable arguments, much as you’d write normal functions. This can be problematic for Temporal because of Temporal’s determinism constraints. Adding or removing an argument from a function signature is not a backward-compatible change, and doing so can break long-running workflows — and it’s not immediately obvious in code review your change is problematic. The preferred pattern is to use a single serializable class to host all your arguments for Workflows and Activities — these can be more freely changed without breaking determinism.
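
A small before-and-after sketch of this pattern, with hypothetical workflow and request types (only the annotations come from the Temporal Java SDK):

import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;

// Fragile: adding or removing a parameter changes the method signature,
// which is not backward-compatible for workflows already in flight.
@WorkflowInterface
interface ResizeWorkflowBefore {
    @WorkflowMethod
    void resize(String serverGroup, String region, int desiredCapacity);
}

// Preferred: a single serializable argument object. New optional fields can
// be appended without breaking determinism for long-running executions.
class ResizeRequest {
    String serverGroup;
    String region;
    int desiredCapacity;
}

@WorkflowInterface
interface ResizeWorkflowAfter {
    @WorkflowMethod
    void resize(ResizeRequest request);
}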

3. Separate business failures from workflow failures: We like the pattern of the WorkflowResult type that UntypedCloudOperationRunner returns in the interface above. It allows us to communicate business process failures without failing the Workflow itself and have more overall nuance in error handling. This is a pattern we’ve carried over to other Workflows we’ve implemented since.
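
One plausible shape for such a result wrapper is sketched below in Java; the actual Netflix WorkflowResult type is closed-source, so this is illustrative only.

// A business failure is data returned by the workflow; a workflow failure is
// an exception. Separating the two lets callers branch on business outcomes
// without the workflow execution itself being marked as failed.
public class WorkflowResult<T> {
    private final T output;     // non-null on success
    private final String error; // non-null on business failure

    private WorkflowResult(T output, String error) {
        this.output = output;
        this.error = error;
    }

    public static <T> WorkflowResult<T> success(T output) {
        return new WorkflowResult<>(output, null);
    }

    public static <T> WorkflowResult<T> failure(String error) {
        return new WorkflowResult<>(null, error);
    }

    public boolean isSuccessful() { return error == null; }
    public T getOutput() { return output; }
    public String getError() { return error; }
}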

Temporal at Netflix Today

Temporal adoption has skyrocketed at Netflix since its initial introduction for Spinnaker. Today, we have hundreds of use cases, and we’ve seen adoption double in the last year with no signs of slowing down.

One major difference between initial adoption and today is that Netflix migrated from an on-prem Temporal deployment to using Temporal Cloud, which is Temporal’s SaaS offering of the Temporal server. This has let us scale Temporal adoption while running a lean team. We’ve also built up a robust internal platform around Temporal Cloud to integrate with Netflix’s internal ecosystem and make onboarding for our developers as easy as possible. Stay tuned for a future post digging into more specifics of our Netflix Temporal platform.

Acknowledgement

We all stand on the shoulders of giants in software. I want to call out that in this post I’m retelling the work of my two stunning colleagues, Chris Smalley and Rob Zienert, the engineers who introduced Temporal and carried out the migration.
