The latest on DevSecOps - The GitHub Blog
https://github.blog/enterprise-software/devsecops/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

How to use the GitHub and JFrog integration for secure, traceable builds from commit to production
https://github.blog/enterprise-software/devsecops/how-to-use-the-github-and-jfrog-integration-for-secure-traceable-builds-from-commit-to-production/
Tue, 09 Sep 2025 22:00:00 +0000
Connect commits to artifacts without switching tools.


Today, we’re introducing a new integration between GitHub and JFrog that connects your source code and your attested binaries in one secure, traceable workflow.

For developers who often find themselves jumping between multiple tools to figure out which commit produced which artifact — or piecing together results from separate security scans for code and binaries — this integration saves time and effort by centralizing everything you need in one place.

Below, we’ll dig into why the GitHub and JFrog integration is important, how it works, and how you can start using it today.

Why we built the GitHub and JFrog integration 

Modern software delivery is a supply chain. Your source code, build pipelines, and production artifacts are all links in that chain — and every link needs to be secure, traceable, and automated. Conversely, any weak link is a point of ingress for bad actors seeking access to data that should remain private and secure.

But keeping this complete supply chain secure is challenging for developers who have numerous (and continually growing) responsibilities. When we talked to teams shipping at scale, we kept hearing the same pain points:

  • “We lose traceability once the build leaves GitHub.”
  • “Security scanning is split between multiple systems, and we have to reconcile results manually.”
  • “Our CI/CD pipelines feel stitched together instead of seamless.”

To address these issues, we worked closely with JFrog’s engineers to design a workflow where the commit that triggers a build is cryptographically linked to the artifact it produces; security scanning happens automatically and in context, producing the vulnerability scan attestations stored in JFrog Evidence; and publishing and promoting artifacts in compliance with your organization’s policies is just another step in your GitHub Actions workflow, not a separate process.

Our goal: to remove friction, reduce risk, and give developers more time to focus on building features instead of managing handoffs. 

The integration we’re announcing today unlocks a seamless experience that lets you:

  • Run unified security scans, prioritizing Dependabot alerts based on production context from JFrog.
  • Publish and promote artifacts with policy-based gating of promotion.
  • Automatically have all attestations created on GitHub (provenance, SBOM, custom attestations) ingested into JFrog evidence and associated with the build artifact. 

Here’s how it works

The integration connects GitHub’s developer platform with JFrog’s software supply chain platform using secure authentication and build metadata.

Here’s the flow:

  1. Push code to GitHub.
  2. Build and test with GitHub Actions.
  3. Link commits, builds, and artifacts for full lifecycle visibility.
  4. Publish artifacts to Artifactory automatically.
  5. Scan code with GitHub Advanced Security and artifacts with JFrog Xray.
Diagram showing the GitHub and JFrog integration.

Setting it up

  1. Enable the GitHub integration in JFrog Artifactory by navigating to Administration → General Management → Manage Integrations → GitHub. Toggle “Enable GitHub Actions” and authenticate your GitHub organization. Select your token type. Then create a pull request.
JFrog Artifactory integration screen.
  2. Trigger a build of your GitHub Actions workflow to build the artifact and generate the attestation. Make sure that your GitHub Actions workflow is using the ‘jfrog/setup-jfrog-cli’ and ‘actions/attest-build-provenance’ actions.
    - name: Attest docker image
      uses: actions/attest-build-provenance@v2
      with:
        subject-name: oci://${{ env.JF_REGISTRY }}/${{ env.IMAGE_NAME }}
        subject-digest: ${{ steps.build-and-push.outputs.digest }}

Here’s an example of a workflow that you can use to generate the attestation and push it to Artifactory:

name: Build, Test & Attest

on:
  push:
    branches: 
      - main
 

env:
  OIDC_PROVIDER_NAME: [...]
  JF_URL: ${{ vars.JF_URL }}
  JF_REGISTRY: ${{ vars.JF_REGISTRY }}
  JF_DOCKER_REPO: [...]
  IMAGE_NAME: [...]
  BUILD_NAME: [...]

jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    permissions:
        contents: read
        packages: write
        attestations: write  # Required for attestation
        id-token: write      # Added for OIDC token access
    
    steps:
    - name: Checkout code
      uses: actions/checkout@v5

    - name: Install JFrog CLI
      id: setup-jfrog-cli
      uses: jfrog/setup-jfrog-cli@<version>
      env:
        JF_URL: ${{ env.JF_URL }}
      with:
        version: 2.78.8
        oidc-provider-name: ${{ env.OIDC_PROVIDER_NAME }}
      
    - name: Docker login
      uses: docker/login-action@v3
      with:
        registry: ${{ env.JF_REGISTRY }}
        username: ${{ steps.setup-jfrog-cli.outputs.oidc-user }}
        password: ${{ steps.setup-jfrog-cli.outputs.oidc-token }}

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Build and push Docker image
      id: build-and-push
      uses: docker/build-push-action@v6
      with:
        context: .
        push: true
        tags: ${{ env.JF_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.run_number }}
        build-args: ${{ env.BUILD_ARGS }}

    - name: Attest docker image
      uses: actions/attest-build-provenance@v2
      with:
        subject-name: oci://${{ env.JF_REGISTRY }}/${{ env.IMAGE_NAME }}
        subject-digest: ${{ steps.build-and-push.outputs.digest }}
  3. Once the build has run and the attestation has been generated, the workflow pushes the artifact to the JFrog Artifactory staging repo. The artifact is now ready to be validated.
Artifactory view of the attestation in the dev environment.
  4. Once the artifact has been verified, confirming that a valid GitHub-signed provenance matches the trusted conditions (for example, the issuer, repository, workflow, and branch) and the policy passes, JFrog can automatically promote the artifact from the dev environment to the production environment.
  5. Now that artifacts have been promoted to production, Dependabot continues scanning its source repository for dependencies and vulnerabilities. When a critical CVE is discovered, administrators receive an alert about the security threat.
View of critical Dependabot alerts.
  6. To find the alerts and vulnerabilities for artifacts that made it to production, we can filter with the following tag: artifact-registry:jfrog-artifactory.

    With this integration enabled, artifact lifecycle data is automatically pushed from JFrog to GitHub using GitHub’s new artifact metadata API. When an artifact is promoted to production in JFrog Artifactory, JFrog will automatically notify GitHub about the promotion, so that the artifact is picked up with the new Dependabot filter.
Dependabot filter for JFrog.
  7. Once an alert has been identified, it can be remediated using the suggested dependency update, which then allows you to rebuild and redeploy with fresh provenance.

To get the most out of using GitHub and JFrog Artifactory, here are a few best practices:

  • Use OIDC to avoid long-lived credentials in your workflows.
  • Automate promotions in Artifactory to move artifacts from dev → staging → production.
  • Set security gates early so unattested or vulnerable builds never make it to production.
  • Leverage provenance attestations in JFrog Evidence for instant traceability.
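
To illustrate the promotion practice above, here is a sketch of an automated promotion step using the JFrog CLI. It assumes the CLI has already been configured via OIDC (as in the workflow earlier in this post); the target repository name ‘prod-docker-local’ is a placeholder:

```yaml
    # Illustrative sketch: promote the published build to a production repository.
    # 'prod-docker-local' is a placeholder target repository name.
    - name: Promote build to production
      run: |
        jf rt build-promote "${{ env.BUILD_NAME }}" "${{ github.run_number }}" prod-docker-local \
          --status "Released" --comment "Promoted by GitHub Actions"
```

In practice, a step like this would run only after your security gates (for example, Xray scan results) have passed.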

What’s next

You can enable the GitHub and JFrog integration today to start building a more secure, automated, and traceable software supply chain. 

For more details, check out the JFrog integration guide and the GitHub documentation.

Enhance build security and reach SLSA Level 3 with GitHub Artifact Attestations https://github.blog/enterprise-software/devsecops/enhance-build-security-and-reach-slsa-level-3-with-github-artifact-attestations/ Thu, 19 Dec 2024 18:00:13 +0000 https://github.blog/?p=81734 Learn how GitHub Artifact Attestations can enhance your build security and help your organization achieve SLSA Level 3. This post breaks down the basics of SLSA, explains the importance of artifact attestations, and provides a step-by-step guide to securing your build process.


The need for software build security is more pressing than ever. High-profile software supply chain attacks like SolarWinds, MOVEit, 3CX, and Applied Materials have revealed just how vulnerable the software build process can be. As attackers exploit weaknesses in the build pipeline to inject their malicious components, traditional security measures—like scanning source code, securing production environments and source control allow lists—are no longer enough. To defend against these sophisticated threats, organizations must treat their build systems with the same level of care and security as their production environments.

These supply chain attacks are particularly dangerous because they can undermine trust in your business itself: if an attacker can infiltrate your build process, they can distribute compromised software to your customers, partners, and end-users. So, how can organizations secure their build processes, and ensure that what they ship is exactly what they intended to build?

The Supply-chain Levels for Software Artifacts (SLSA) framework was developed to address these needs. SLSA provides a comprehensive, step-by-step methodology for building integrity and provenance guarantees into your software supply chain. This might sound complicated, but the good news is that GitHub Artifact Attestations simplify the journey to SLSA Level 3!

In this post, we’ll break down what you need to know about SLSA, how Artifact Attestations work, and how they can boost your GitHub Actions build security to the next level.

Securing your build process: An introduction to SLSA

What is build security?

When we build software, we convert source code into deployable artifacts—whether those are binaries, container images or packaged libraries. This transformation occurs through multiple stages, such as compilation, packaging and testing, each of which could potentially introduce vulnerabilities or malicious modifications.

A properly secured build process can:

  • Help ensure the integrity of your deployed artifacts by providing a higher level of assurance that the code has not been tampered with during the build process.
  • Provide transparency into the build process, allowing you to audit the provenance of your deployed artifacts.
  • Maintain confidentiality by safeguarding sensitive data and secrets used in the build process.

By securing the build process, organizations can ensure that the software reaching end-users is the intended and unaltered version. This makes securing the build process just as important as securing the source code and deployment environments.

Introducing SLSA: A framework for build security

SLSA is a community-driven framework governed by the Open Source Security Foundation (OpenSSF), designed to help organizations systematically secure their software supply chains through a series of progressively stronger controls and best practices.

The framework is organized into four levels, each representing a higher degree of security maturity:

  • Level 0: No security guarantees
  • Level 1: Provenance exists for traceability, but minimal tamper resistance
  • Level 2: Provenance signed by a managed build platform, deterring simple tampering
  • Level 3: Provenance from a hardened, tamper-resistant build platform, ensuring high security against compromise

Provenance refers to the cryptographic record generated for each artifact, providing an unforgeable paper trail of its build history. This record allows you to trace artifacts back to their origins, enabling verification of how, when, and by whom the artifact was created.

Why SLSA Level 3 matters for build security

Achieving SLSA Level 3 is a critical step in building a secure and trustworthy software supply chain. This level requires organizations to implement rigorous standards for provenance and isolation, ensuring that artifacts are produced in a controlled and verifiable manner. An organization that has achieved SLSA Level 3 is capable of significantly mitigating the most common attack vectors targeting software build pipelines. Here’s a breakdown of the specific requirements for reaching SLSA Level 3:

  • Provenance generation and availability: A detailed provenance record must be generated for each build, documenting how, when and by whom each artifact was produced. This provenance must be accessible to users for verification.
  • Managed build system: Builds must take place on ephemeral build systems—short-lived, on-demand environments that are provisioned for each build in order to isolate builds from one another, reducing the risk of cross-contamination and unauthorized access.
  • Restricted access to signing material: User-defined build steps should not have access to sensitive signing material to authenticate provenance, keeping signing operations separate and secure.

GitHub Artifact Attestations help simplify your journey to SLSA Level 3 by enabling secure, automated build verification within your GitHub Actions workflows. While generating build provenance records is foundational to SLSA Level 1, the key distinction at SLSA Level 3 is the separation of the signature process from the rest of your build job. At Level 3, the signing happens on dedicated infrastructure, separated from the build workflow itself.

The importance of verifying signatures

While signing artifacts is a critical step, it becomes meaningless without verification. Simply having attestations does not provide any security advantages if they are not verified. Verification ensures that the signed artifacts are authentic and have not been tampered with.

The GitHub CLI makes this process easy, allowing you to verify signatures at any stage of your CI/CD pipeline. For example, you can verify Terraform plans before applying them, ensure that Ansible or Salt configurations are authentic before deployment, validate containers before they are deployed to Kubernetes, or use it as part of a GitOps workflow driven by tools like Flux.

GitHub offers several native ways to verify Artifact Attestations:

  • GitHub CLI: This is the easiest way to verify signatures.
  • Kubernetes Admission Controller: Use GitHub’s distribution of the admission controller for automated verification in Kubernetes environments.
  • Offline verification: Download the attestations and verify them offline using the GitHub CLI for added security and flexibility in isolated environments.

By verifying signatures during deployment, you can ensure that what you deploy to production is indeed what you built.
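
As a sketch of such a deploy-time check, a verification step can gate a deployment job in GitHub Actions. The artifact path and organization name below are placeholders:

```yaml
    # Illustrative deploy gate: the job fails if the artifact's attestation
    # cannot be verified. The artifact path and organization are placeholders.
    - name: Verify artifact attestation before deploying
      env:
        GH_TOKEN: ${{ github.token }}
      run: gh attestation verify ./dist/my-artifact.tar.gz --owner my-org
```

Because the step exits non-zero on verification failure, anything downstream of it (the actual deploy) simply never runs for a tampered or unattested artifact.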

Achieving SLSA Level 3 compliance with GitHub Artifact Attestations

Reaching SLSA Level 3 may seem complex, but GitHub’s Artifact Attestations feature makes it remarkably straightforward. Generating build provenance puts you at SLSA Level 1, and by using GitHub Artifact Attestations on GitHub-hosted runners, you reach SLSA Level 2 by default. From this point, advancing to SLSA Level 3 is a straightforward journey!

The critical difference between SLSA Level 2 and Level 3 lies in using a reusable workflow for provenance generation. This allows you to centrally enforce build security across all projects and enables stronger verification, as you can confirm that a specific reusable workflow was used for signing. With just a few lines of YAML added to your workflow, you can gain build provenance without the burden of managing cryptographic key material or setting up additional infrastructure.

Build provenance made simple

GitHub Artifact Attestations streamline the process of establishing provenance for your builds. By enabling provenance generation directly within GitHub Actions workflows, you ensure that each artifact includes a verifiable record of its build history. This level of transparency is crucial for SLSA Level 3 compliance.

Best of all, you don’t need to worry about the onerous process of handling cryptographic key material. GitHub manages all of the required infrastructure, from running a Sigstore instance to serving as a root signing certificate authority for you.

Check out our earlier blog to learn more about how to set up Artifact Attestations in your workflow.

Secure signing with ephemeral machines

GitHub Actions-hosted runners, executing workflows on ephemeral machines, ensure that each build process occurs in a clean and isolated environment. This model is fundamental for SLSA Level 3, which mandates secure and separate handling of key material used in signing.

When you create a reusable workflow for provenance generation, your organization can use it centrally across all projects. This establishes a consistent, trusted source for provenance records. Additionally, signing occurs on dedicated hardware that is separate from the build machine, ensuring that neither the source code nor the developer triggering the build system can influence or alter the build process. With this level of separation, your workflows inherently meet SLSA Level 3 requirements.

Below is an example of a reusable workflow that can be utilized across the organization to sign artifacts:

name: Sign Artifact

on:
  workflow_call:
    inputs:
      subject-name:
        required: true
        type: string
      subject-digest:
        required: true
        type: string

jobs:
  sign-artifact:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      attestations: write
      contents: read

    steps:
    - name: Attest Build Provenance
      uses: actions/attest-build-provenance@<version>
      with:
        subject-name: ${{ inputs.subject-name }}
        subject-digest: ${{ inputs.subject-digest }}

When you want to use this reusable workflow for signing in any other workflow, you can call it as follows:

name: Sign Artifact Workflow

on:
  push:
    branches:
      - main

jobs:
  sign:
    permissions:
      id-token: write
      attestations: write
      contents: read
    uses: <repository>/.github/workflows/sign-artifact.yml@<version>
    with:
      subject-name: "your-artifact.tar.gz" # Replace with actual artifact name
      subject-digest: "sha256:your-artifact-digest" # Replace with the artifact's SHA-256 digest

This architecture of ephemeral environments and centralized provenance generation guarantees that signing operations are isolated from the build process itself, preventing unauthorized access to the signing process. By ensuring that signing occurs in a dedicated, controlled environment, the risk of compromising the signing workflow is greatly reduced, and malicious actors cannot tamper with the signing action’s code to deviate from the intended process. Additionally, provenance is generated consistently across all builds, providing a unified record of build history for the entire organization.

To verify that an artifact was signed using this reusable workflow, you can use the GitHub CLI with the following command:

gh attestation verify <file-path> --owner <owner> --signer-workflow <owner>/<repository>/.github/workflows/sign-artifact.yml

This verification process ensures that the artifact was built and signed using the anticipated pipeline, reinforcing the integrity of your software supply chain.

A more secure future

GitHub Artifact Attestations bring the assurance and structure of SLSA Level 3 to your builds without having to manage additional security infrastructure. By simply adding a few lines of YAML and moving the provenance generation into a reusable workflow, you’re well on your way to achieving SLSA Level 3 compliance with ease!

Ready to strengthen your build security and achieve SLSA Level 3?

Start using GitHub Artifact Attestations today or explore our documentation to learn more.

Frenemies to friends: Developers and security tools https://github.blog/enterprise-software/devsecops/frenemies-to-friends-developers-and-security-tools/ Mon, 08 Jan 2024 15:15:02 +0000 https://github.blog/?p=75933 When socializing a new security tool, it IS possible to build a bottom-up security culture where engineering has a seat at the table. Let's explore some effective strategies witnessed by the GitHub technical sales team to make this shift successful.


You heard the vendor pitches. You evaluated the options. You got the budget approved. Now, you need your company’s developers to actually use the tool.

Socializing a new security tool can feel intimidating or overwhelming. It may feel like you are battling competing priorities and culture conflicts. However, security has become a foundational responsibility of developers, and it is possible to build a bottom-up security culture where engineering has a seat at the table. Whether you are rolling out a developer-first solution like GitHub Advanced Security, or a traditional tool that targets security specialists, let’s explore some effective strategies witnessed by the GitHub team to make this shift successful.

Document

Internal documentation is paramount for developers to feel empowered and supported when taking on new tasks. Create a wiki with answers to frequently asked questions and flow charts to guide developers past common blockers. Source information from both the vendor’s resources and internal trial and error. It can be helpful to designate a single champion to take responsibility for updating and maintaining the wiki. Clearly lay out the process for developers to get support for issues that cannot be resolved through documentation.

Sample table of contents for a security tool wiki. The sections include getting started, remediation timelines, vendor resources, success stories, how to find, and how to fix. There is also a section title "Need more help?" that links to additional resources.

Set expectations

Before the tool can be socialized, management needs to be clear on what their goals are. Understand what your definition of success is, how you will measure it, and the why behind it. Determine what timelines are achievable, and how expectations will be communicated. Make this a cross-team initiative for the best chance of success, including management from security, ops, and engineering. When a diverse set of teams with varying interests are able to design and communicate shared goals, the tool can sustainably become part of normal processes (and not become shelfware).

Recognize success

Publicly highlight development teams who are successfully utilizing the tool or doing something interesting with it, as opposed to calling attention to those that may be falling behind goals. At least at first, you want to introduce the tool as a place where developers can excel, and be high-performers, instead of another duty to add to their backlog. Lead with data, tracking metrics like number of closed vulnerabilities, or Mean Time To Remediation, and tie these numbers to your announcements for credibility. These efforts can help you organically grow a team of security champions from developers who exhibit passion or career growth motivations, while demonstrating correct usage of the tool.

Go with the flow

Cultural shifts happen when security is built into the developer’s existing flow, as opposed to being injected as its own new stage in the pipeline. Look for points in their process where they are already in “pause” or “edit” mode, like at the Pull Request, where you can surface vulnerabilities and ask for remediation efforts. Doing so can avoid context switching and feelings of being interrupted. Capitalizing on an existing developer pause point can help train your developers to look at security vulnerabilities like functionality bugs, a skill they already have, while also shortening feedback loops.

Diagram representing the developer workflow. It includes the steps development, build, test Q/A, production, and maintenance.

Involve executive leadership

Getting the highest level of management involved can help set the tone that security is a company-wide policy, and that it is business-critical. This can take the form of including security as a topic on calls led by executive leadership, or inviting your C-suite to briefly speak on the upcoming monthly engineering call. Not only does this express the importance and longevity of a new tool to your individual contributors, but it will help keep leadership abreast of the reductions in risk this new tool is achieving.

Hold a hackathon

Quickstart efforts can be helpful as a complement to more sustainable longer-term goals. Consider a gamified event like an “in-house” bug bounty bash: a whole day devoted to tool education and getting rid of high-criticality vulnerabilities. If engineering management is able to carve out time during a “learning day” or other type of free space in the sprint cycle, this effort can create immediate familiarity with the new tool (and build enthusiasm).


Listen to developers

Developer-to-developer enablement is key. There is often a feeling of mistrust between engineering and security, but developers share the same interests and have the same priorities. Let individual contributors have an opportunity to educate and enable other individual contributors. If you have had a successful pilot or PoC team, or notice self-motivated folks using the tool proactively, give them space to share their experience with the tool. Not only will your high-performers build confidence in their security expertise, but the rest of the audience can see how the tool is used in their real, every-day environment. This enablement can standalone, or be included as part of larger management-led training.


All of these suggestions can help you implement a new security tool while keeping the focus on developer goals (getting features completed on schedule, solving interesting problems). Socializing a new security tool, the right way, will encourage the idea that security belongs to everyone.

5 ways to make your DevSecOps strategy developer-friendly https://github.blog/enterprise-software/devsecops/5-ways-to-make-your-devsecops-strategy-developer-friendly/ Fri, 05 Jan 2024 15:02:36 +0000 https://github.blog/?p=75941 Developers care about security, but poorly integrated tools and other factors can cause frustration. Here are five best practices to reduce friction.

There are many benefits to implementing DevSecOps: minimized risk, reduced remediation costs, and faster and more secure product releases. But from a developer’s perspective, there’s a lot to be desired from the day-to-day practice. Developers often experience fragmented tool integration and are forced to take on additional responsibilities that can make the software development lifecycle (SDLC) seem more complex and overwhelming. They can also face development delays while working to understand, prioritize, and resolve different kinds of security alerts.

Evaluating and improving DevSecOps to make security a painless part of the current developer workflow is imperative to secure, fast delivery. Below, we’ll look at five tips for improving the experience and making security tools more usable for developers.

But first, what is DevSecOps?

The “Sec” in DevSecOps stands for security, and its addition to DevOps promotes security as a core component of the SDLC. The DevSecOps approach to software development puts the responsibility of security on everyone at an organization (as opposed to just the security team) by integrating security at the start of code production—or better yet, during the planning phase before the first line of code is written. This way, organizations can catch and fix vulnerabilities in the development process rather than in production or after release.

The result: security teams can use their expertise to set security policies, prioritize remediation focus areas, and foster the right behaviors and security teachings across the organization. Meanwhile, developers can interact with security tools, and are the first line of defense in reviewing, understanding, and remediating vulnerabilities.

DevSecOps advantages include shipping secure software more quickly and reaping cost-saving benefits. In fact, IBM’s 2023 Cost of a Data Breach report cites a $1.68M cost savings for organizations with high DevSecOps adoption compared to those with low or no adoption.

5 tips for improving the DevSecOps experience

Improving the DevSecOps experience was top-of-mind for many speakers at GitHub Universe 2023. To catch you up, we pulled together the top five tips shared across various talks and interviews at the event.

1. Involve developers in security decisions

The more developers are involved in creating a security process and making policy decisions, the smoother the collaboration will be between engineering and security teams. So, before you purchase a new tool or change a policy, invite a developer champion into the conversation and ask for their feedback.

Here are some questions to get the conversation started:

  • What security practices and tools are currently in place? Understanding what’s in use will help identify areas that need improvement.
  • Do you find current security practices or tools help or hinder your workflow? How? Reducing friction in the DevSecOps pipeline can improve productivity.
  • What security tools or practices would you recommend? Why? Developers may have fresh perspectives to offer on technologies or approaches.
  • How comfortable are you integrating security into your work? This could help to identify gaps in training and support.
  • Are there any specific security measures you feel are redundant or unnecessary in your workflow? This could reveal practices that consume resources without providing substantial benefits.
  • Do you have sufficient communication and collaboration with the security team? Evaluating cross-team interactions can help to create a more collaborative culture.

2. Adapt security features to the developer environment

It’s important to acknowledge that many security tools are built for security professionals, and can create friction when bolted onto a developer’s workflow. When trying to integrate a security tool into the SDLC, it can be more effective to extract the desired data from the security tool and natively integrate it into the developer’s workflow—or, even better, use a security tool where the data is already directly embedded into the developer’s flow. Doing so reduces context switching and ultimately helps developers to detect and remediate vulnerabilities earlier.

In 2019, we acquired Dependabot and Semmle, which developed CodeQL. While Dependabot was designed for developers, CodeQL was designed for security experts, which we knew would be a barrier to entry for developers. So, we went to work optimizing CodeQL for developers, incorporating its functionalities directly into their workflow.

Today, developers don’t have to install or set up these tools separately. They can enable Dependabot alerts from repository settings. Once enabled, alerts go out if an outdated or vulnerable dependency needs to be updated, along with critical details about the vulnerabilities—all in a pull request. Developers can also enable code scanning through CodeQL from repository settings. Doing so will notify them about new and current static analysis alerts in their code.

Niroshan Rajadurai, senior director of GTM strategy for AI and DevSecOps, and I discussed the importance of designing security tools for developers in the age of shifting left.

Another way to reduce context switching and cognitive load is implementing AI tools, like GitHub Copilot. We’ll talk more about AI security capabilities below, but let’s first focus on how they can create a smoother DevSecOps experience within the IDE.

When developers receive a security alert, they can use a tool like GitHub Copilot Chat directly in their IDE instead of having to navigate to another website to research what the alert is, and how to fix it. Beyond understanding the theory behind the alert, developers can prompt Copilot Chat to create examples of how to fix that vulnerability tailored to the code in their IDE. As a result, they get a practical, hands-on learning experience that shows how the vulnerability manifests in real code.

Joseph Katsioloudes, a developer advocate for GitHub Security Lab, shared how AI can reduce cognitive load for a developer who’s been notified about a secret injection.

3. Maintain a developer’s trust in a security tool with an effective alert system

Bringing security into the development process ensures that remediating alerts becomes native to the developer’s workflow. However, developers still need to know what alerts to remediate and by when. Simply asking developers to remediate all alerts is untenable and unrealistic.

When developers are shown a long PDF of 500+ alerts that they’re assigned to review and fix (a pain point I’ve written about before), it’s probable that many of the alerts are false positives and only a portion are worth addressing. Why does this matter? For one, the developer has lost valuable time reviewing all of these alerts. Second, as the tool continues to produce these laundry lists, the developer will lose trust in the tool. That could result in the developer skimming past critical alerts because of low confidence in the tool’s data.

A security tool that’s effectively integrated into the SDLC has an alert system that surfaces high-priority alerts directly to the developer. For instance, alert settings based on custom and automated triage rules ensure that engineering teams address the most urgent security alerts first. Being able to filter and search code scanning alerts helps developers sift through a large set of alerts to focus on a particular type. And providing the ability to dismiss an alert—either by fixing or closing it—will reduce noise by stopping the tool from repeatedly generating the same alert on the same code.

Combined with processes to address a percentage of critical and high-risk vulnerabilities over a set period of time, an effective security alert system helps developers prioritize high-risk alerts and pay down an organization’s security debt: the vulnerabilities that accumulate over time and therefore become harder and more costly to fix.

John Swanson, director of security strategy at GitHub, shared how new technology is creating developer-first security processes that enable developers to fix vulnerabilities earlier in the SDLC.

4. Use AI and automation to help developers find and fix vulnerabilities

Limited resources, rapid threat evolution, noisy false positive alerts, and the increasing complexity of systems—along with the continued use of legacy systems—can make it challenging to stay on top of the latest and most urgent vulnerabilities.

But here’s some good news: AI and automation can help reduce false positives, enable developers to conduct consistent security checks, and scale security practices all at once.

For instance, a feature like code scanning autofix streamlines remediation into the developer workflow by providing, alongside a vulnerability alert, an AI-generated code fix for CodeQL JavaScript and TypeScript in a pull request. Additionally, secret scanning alerts developers if any secrets have been detected in code. This capability can be coupled with AI to detect generic or unstructured secrets and auto-generate custom patterns, which will detect token types unique to an organization.

Additionally, AI has the potential to enhance the modeling of an extensive range of open source frameworks and libraries. Security teams traditionally model thousands of packages and APIs by hand. Considering the sheer number and diversity of packages, along with frequent library updates, deprecations, or replacements, it’s a daunting task to keep abreast of these changes and scale this modeling capability efficiently.

That’s where AI comes in. As the proportion of these frameworks that is accurately modeled increases, false negatives become less likely, thanks to a better understanding of data flow within these systems. By turbocharging modeling efforts with AI, security experts can detect more vulnerabilities. In fact, GitHub’s CodeQL team used AI modeling to discover a new security vulnerability. Although this technology is still in the experimental phase at GitHub, we offered a glimpse into its potential during GitHub Universe 2023.

Rajadurai and I showed how AI can address pressing security challenges, like modeling unknown packages, which could ultimately reduce the number of false positives.

Other automation capabilities include:

John Ruiz, security operations engineer at GitHub, emphasized the importance of improving, then automating, basic security processes so developers can focus on what they do best, which is building great software.

5. Create clear expectations around secure coding practices, and communicate them through champions

A big part of improving the DevSecOps experience is not introducing more tooling, but getting clear on the process and expectations of how developers should use the tools they already have. Clear communication about policies ensures an organized and consistent approach to implementing security throughout the SDLC.

Organizations should work with vendors to create guides for how to use a new tool or product, then select security champions to echo these expectations across engineering teams.

Some principles that guide GitHub’s Product Security Engineering team when evaluating tools and designing a rollout plan include:

  • Weighing the security benefits of a new process against the impact on engineering teams.
  • Rolling out a new process or tool incrementally and gathering feedback.
  • Setting clear expectations for engineers and prioritizing communication of those expectations.

Clear expectations for secure coding practices help to eliminate ambiguity and increase security consciousness among developers. Selecting champions who can clearly communicate those expectations can help to model desired behavior and drive a DevSecOps culture across the organization. As a result, secure coding standards are more likely to be understood and consistently implemented by developers, which enables organizations to quickly deliver more secure software.

Continuously improving DevSecOps

As developers embrace more security responsibility under the DevSecOps and shift-left models, evaluating and improving their user experience needs to be a priority. Organizations that invest in understanding a developer’s DevSecOps pain points and iterating on solutions to address them will see improved collaboration between engineering and security teams and faster delivery of more secure code.

More DevSecOps resources

  • Learn from security leaders about creating a safe but flexible developer experience, innovating faster by automating governance, securing the software supply chain with proven practices, and more.
  • Check out our comprehensive guide to DevSecOps.
  • Security training can be gamified to increase retention. A free interactive training resource, like Secure Code Game, teaches developers how to spot and fix vulnerable patterns in real-world code, build security into workflows, and understand security alerts generated against code.
  • Read more about why making security tools usable for IT professionals is critical to securing the software supply chain.

The post 5 ways to make your DevSecOps strategy developer-friendly appeared first on The GitHub Blog.

How GitHub accelerates development for embedded systems https://github.blog/enterprise-software/devsecops/how-github-accelerates-development-for-embedded-systems/ Thu, 09 Mar 2023 18:00:44 +0000 https://github.blog/?p=70642 In a world where software and hardware is ubiquitous, GitHub can help enable secure development for mission-critical embedded systems.

The post How GitHub accelerates development for embedded systems appeared first on The GitHub Blog.

We’re living in a world where software and hardware are ubiquitous—even more than you might initially think! When you think of hardware, what’s the first thing that comes to mind? Your phone? Your laptop? What about your washing machine or car? Or, one of the many smart home devices very likely scattered throughout your home?

When you consider these embedded systems from an industrial perspective, the opportunities are endless. From lifts (or elevators) to aircraft, manufacturing lines to traffic signals, or medical equipment to the innovative world of robotics.

Those systems fundamentally rely upon software. The type of software that you find in these devices has historically been specialized for a given task. In your car, you may have systems that deal with functional aspects, such as engine and transmission management, or safety-critical systems, including airbag deployment or control of anti-lock braking systems.

We rely on these types of control systems every day. And the likelihood is that you never think about the critical role they play in our day-to-day lives. After all, as Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.”

We know that DevOps is all about the continuous delivery of value to end users. For these types of systems, quality and security are fundamental, non-negotiable aspects. Whether it’s ISO 26262 for automotive, IEC 62304 for medical devices, or any of the myriad other functional safety standards, compliance with these standards adds significant complexity to the development process. When you add in ISO 27001 and the new UN regulatory requirements for cybersecurity on top of industry coding standards like AUTOSAR and MISRA, you can see why finding ways to automate becomes even more critical in the world of embedded development.

In this post, we explore how GitHub can add value throughout the development lifecycle when designing, building, and deploying systems of this kind.

Collaborative development

We know that software development starts with a well-understood plan. The fun part comes once your team has a clear plan, writing the code and shipping incredible experiences for your users! GitHub assists with all areas of the software development cycle, from plan to build, to deployment and continuous feedback.

Fundamentally, this all starts with version control software. GitHub has been around since 2008, and you probably know the platform because of our Git version control capabilities (though, you’ll understand there’s a lot more to GitHub these days!). Prior to 2008, source code management took many different forms.

From a software development perspective, storing your code in version control is critical to unlocking the many benefits that we can gain from modern software engineering practices. Storing our code in a Git repository in GitHub provides:

  • A fully traceable history of the development on our codebase.
  • Engineering teams with the ability to work in separate branches, so they don’t interfere with each other’s active work in development.
  • The ability to rollback to a prior version of the codebase if needed.
  • The option to add branch protection rules to the production codebase, so that all changes must be peer-reviewed or pass a set of automated quality gates before being merged.
  • And much more!

These few bullet points highlight how version control is vital to developing these critical systems. With a detailed set of historical changes, the ability to add quality gates with branch protection rules and mandatory peer-reviews via pull requests, compliance suddenly becomes part of the process, as opposed to an afterthought or some additional set of boxes to check.

Security in everything that you do

Think about these types of systems from a security perspective. What happens if someone were able to take over a set of embedded computers in a vehicle? The impact would depend on which specific systems were compromised, but it could potentially be catastrophic. For safety systems, it could even lead to loss of life.

With that context, security, understandably, should be a high priority for development teams making these critical systems. At GitHub, we believe that security is everyone’s responsibility, and should not be an afterthought, or a one-time review before go-live. Instead, it’s something which can, and should, be incorporated into the development workflow. Just like compliance in the above section, it’s something that becomes a part of what you do every day.

GitHub has several tools that can help you to bring security into your workflow. Let’s explore some of them a little further.

Securing the code that you write

One of your first concerns may be the code that you write. We are human, and we are not perfect, so we don’t always write perfect code. Earlier, we talked about branch protection rules as an option to add quality checks into our development lifecycle. What if we could add security as one of those quality checks, and review the code that we have written?

This is where GitHub code scanning capabilities come in. You can think of this from two perspectives; CodeQL (GitHub’s built-in code analysis engine that powers code scanning) and third-party integrations.

CodeQL supports several languages, including C, C++, Java, and Python: all common languages when building embedded systems. CodeQL is able to translate your code into a database-like structure. You’re then able to use a number of queries provided by GitHub out of the box, and can expand these with queries from the open source community to identify patterns that occur in your codebase, such as injection or buffer overflow vulnerabilities.

A CodeQL scan can be executed as part of the pull request flow that we mentioned earlier. So, security now becomes another check as part of your quality assurance process, before merging code into your production codebase.
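To make that concrete, here is a minimal sketch of what such a workflow could look like for a C/C++ codebase. This is an illustration, not an official template: the trigger branches and language value are assumptions you would adapt to your repository.

```yaml
# .github/workflows/codeql.yml — a minimal CodeQL analysis sketch.
name: "CodeQL"

on:
  push:
    branches: [ main ]       # assumed default branch
  pull_request:
    branches: [ main ]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write  # needed to upload code scanning results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: cpp      # CodeQL's identifier covering C and C++
      - uses: github/codeql-action/autobuild@v3
      - uses: github/codeql-action/analyze@v3
```

Because the workflow runs on pull requests, its results surface as a check alongside your other quality gates before the code is merged.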

Screenshot showing a GitHub code scanning failure as it potentially allowed an injection attack

In fact, Brittany O’Shea wrote a blog post last year about CodeQL queries that implement the standards, CERT C++ and AUTOSAR C++.

But what if you’re already using some security scanning tools? Not to worry—code scanning integrates with third-party tools, as long as they support the SARIF output format.

Depending on secure packages

The software that you ship isn’t made of just the code you and your team have written. Open source software is increasingly being used, to build on the work of the wider community, rather than reinventing the wheel.

But now for the important question. How can you be sure that you’re not relying upon a vulnerable dependency in the software that you’re shipping? The answer isn’t to stop relying on open source. Instead, it’s about adopting open source software in a way that keeps you in control. This includes creating an Open Source Program Office (OSPO) in your organization to help manage your open source usage.

GitHub has tools available to help you keep your dependencies up to date. Let’s talk about Dependabot and dependency review.

Dependabot is our handy tool to help you keep your dependencies up to date. It can identify whether you are already using a package that contains a vulnerability. If you are, it will privately alert you in your repository so you can take the appropriate action and ensure your code is patched with the latest version. You can even open a pull request directly from the alert to expedite your fixes.

But Dependabot can help even further, by keeping your dependency versions proactively up to date. By tracking the contents of supported package manifests (such as Maven, pip, and others), it can open pull requests when more recent package versions are available. The best bit? As it’s a pull request, all of your quality checks must still be met! That includes any automated checks (more on those in a bit), and manual approvals must be met, before the changes can be merged.
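For illustration, a minimal Dependabot configuration that checks a pip manifest weekly might look like the sketch below; the ecosystem, directory, and interval are assumptions you would adapt to your project.

```yaml
# .github/dependabot.yml — a minimal version-updates sketch.
version: 2
updates:
  - package-ecosystem: "pip"  # Maven, npm, and other ecosystems are also supported
    directory: "/"            # where the package manifest lives
    schedule:
      interval: "weekly"      # open update pull requests on a weekly cadence
```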

Screenshot showing a GitHub Pull Request that fixes vulnerabilities by upgrading from one version of a package to a more recent version

While Dependabot primarily works on manifests that are already being used in production, what can you do to add guardrails to your quality assurance process? GitHub Advanced Security’s dependency review capability can help there.

Screenshot showing dependency review as part of a GitHub Action workflow. Dependency review detected a newly introduced dependency, and blocked the workflow from progressing.

The Dependency Review GitHub Action is used in a pull request to scan for dependency changes. If a new vulnerability is detected (such as introducing a new package, or adjusting an existing package to a vulnerable version), then an error is raised by the GitHub Action, preventing the pull request from being merged.
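As a sketch, a pull request workflow using the Dependency Review action could look like the following; the severity threshold shown is an assumption, not a recommendation.

```yaml
# .github/workflows/dependency-review.yml — a minimal sketch.
name: Dependency review

on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: moderate  # fail the check for moderate or higher
```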

Keeping secrets out of the equation

According to the IBM “Cost of a Data Breach 2022” report, exposed secrets and credentials are the most common cause of data breaches and often go untracked. Storing secrets in source control is an anti-pattern and can be problematic in scenarios where you’re shipping software that is accessible to anyone. Think of software being shipped to untrusted hardware devices, such as a lift (or elevator), a car, manufacturing production line equipment or an Internet of Things (IoT) device. Malicious actors may be able to get access to the software running on the device, gain access to any credentials stored locally, and use these secret materials for a breach.

Screenshot showing that GitHub security scanning detected an AWS Secret Access Key in code

Fortunately, GitHub’s secret scanning capability can help. It can scan for hundreds of partner patterns, and allows you to define your own custom patterns, too. This works well for retrospective scans of your codebase. But how do you prevent a secret from ever entering production? This is where secret scanning push protection comes in, which was recently updated to also identify custom patterns.

Stay in the know

One of the main challenges from a security perspective is knowing where the weak points may be. As software engineers and engineering leads, we’re typically working in the scope of a project. However, our colleagues in security typically focus on the security posture of the overall organization. For them, it’s important to understand which repositories contain vulnerable code or packages, or whether secrets have been committed into those repositories.

We know that monitoring and observing these metrics is important, so we provide a security overview in GitHub Enterprise Cloud. This visualizes the results from GitHub Advanced Security, so that you have an all-up view of your current coverage and risk areas.

Screenshot showing GitHub's security risk overview dashboard. It demonstrates the number of alerts generated by Dependabot, Code Scanning and Secret Scanning across enabled repositories.

Screenshot showing GitHub's security coverage overview dashboard. It shows the number of repositories that have had Dependabot, Code Scanning and Secret Scanning enabled, and helps identify potential gaps across the estate.

If you have invested in security information and event management (SIEM) tooling, then you’ll be pleased to know that there are integrations available with several providers to further aid your management and monitoring needs.

Empowering developers with automation

We’ve mentioned that these types of software can be critical, so safeguards and controls should be in place. But to empower your developers, you need to give them access to tools to get the job done. That includes the ability to easily build, test, and deploy your software.

Building on the earlier themes, you can use GitHub branch protection rules to ensure certain standards are met before code is ever allowed to be merged into your production codebase. Earlier, we talked about peer reviews and collaboration. But what about using automation to help accelerate the process?

This is where GitHub Actions comes in. GitHub Actions is our CI/CD solution that allows you to automate your software workflows. You can use GitHub Actions in the development lifecycle of your embedded systems to:

  • Easily share common build and deployment patterns across your engineering teams using reusable workflows.
  • Cross-compile across different hardware platforms with GitHub Action’s matrix capabilities, or with Arm development tools inside GitHub Actions cloud-hosted runners.
  • Automatically build and test software as part of your continuous integration process. In your pull requests, ensure that a successful build takes place, and tests are successful before merging to production.
  • Automatically package and deploy updated firmware images so that they reach end users quickly and efficiently.
  • Automatically deploy firmware updates to embedded systems, by orchestrating updates and releases as part of your CD process.
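To make the matrix idea above concrete, here is a hedged sketch of a firmware build workflow. The target names and the Makefile interface are hypothetical and would depend entirely on your project.

```yaml
# A sketch of cross-compiling firmware for multiple targets with a build matrix.
name: Firmware CI

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [cortex-m4, cortex-m7]  # hypothetical target list
    steps:
      - uses: actions/checkout@v4
      - name: Install Arm GCC toolchain
        run: sudo apt-get update && sudo apt-get install -y gcc-arm-none-eabi
      - name: Build firmware
        run: make TARGET=${{ matrix.target }}  # assumes a Makefile exposing TARGET
```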

Continue your journey with GitHub

This has been a high-level overview of GitHub, and how it can help from an embedded software perspective. With our collaborative platform to empower developer productivity and enable secure development, we’re sure there are many scenarios where GitHub can help you even further.

Want to find out more? We will be at Embedded World 2023, so come and chat with us there! We’ll be at stand no. 4-501a in Hall 4.

Not able to join us? Then start a free trial on GitHub Enterprise to explore how GitHub can help with your day-to-day development!

How to mitigate OWASP vulnerabilities while staying in the flow https://github.blog/enterprise-software/devsecops/how-to-mitigate-owasp-vulnerabilities-while-staying-in-the-flow/ Mon, 06 Feb 2023 15:02:16 +0000 https://github.blog/?p=68491 Explore how GitHub Advanced Security can help address several of the OWASP Top 10 vulnerabilities

The post How to mitigate OWASP vulnerabilities while staying in the flow appeared first on The GitHub Blog.

The pace and scale of security vulnerabilities is increasing. This is in spite of the fact that teams have been trying to keep their code secure for years. So, why are vulnerabilities still such a problem? When teams use security tools and strategies that don’t optimize the developer experience, development is slowed down. This creates frustration, undermines customer usability, and hampers business success. Businesses that use such tools and strategies end up de-prioritizing security, and instead focus on shipping software quickly.

Here at GitHub, we want to help you mitigate vulnerabilities while boosting developer productivity. Fortunately, the Open Web Application Security Project (OWASP) can help. OWASP provides a Top 10 list of vulnerabilities that gives developers and organizations the context they need to address security and compliance risks within their applications. Today, we’ll examine several of OWASP’s vulnerabilities and developer-optimized strategies for keeping your software safe while maintaining and even increasing developer productivity.

Security at the expense of usability comes at the expense of Security.

- Avi Douglen, OWASP Board of Directors

1. The ideal application security environment

First of all, your development team needs an environment that fosters success. And the most important part of this is embedding security into the developer workflow. Typically, disparate, third-party tools are used to identify security risks. But these tools can be slow, noisy, and decrease productivity. However, when security is integrated into the developer flow, you can secure your code quickly and easily. As discussed in the GitHub Advanced Security ebook, other ideal environment components include:

  • Visibility into your security posture across code, secrets, and supply chain.
  • Communication channels that are lightweight and context-sensitive, allowing for easy collaboration.
  • Scalability with your business needs.

It may not be possible to implement each of those capabilities at once, depending on the maturity of your business. But as you mature and continuously improve, fostering this ideal environment will help you protect against vulnerabilities and ship secure software faster.

2. OWASP vulnerabilities risk mitigation and prevention

Now that the ideal state has been described, let’s look at a few OWASP vulnerabilities and some techniques to mitigate them.

A02-Cryptographic Failures

No developer intentionally exposes their software to security threats. However, sometimes API keys, clear text passwords, security tokens, and other sensitive data may remain in code, leading to cryptographic vulnerabilities.

Ideally, secrets would never reach your enterprise, organization, or repository. But they often do and need to be mitigated. Fortunately, GitHub Advanced Security can help.

By implementing push protection, GitHub’s secret scanning feature will not only scan your code for exposed secrets, but also check pushes for high-confidence secrets (those identified with a low false positive rate). It will list any secrets it detects so you can review and remove them. If a decision is made not to remove them, an audit trail will be created.

A03-Injection

Cross-site scripting, path injection, SQL injection, and NoSQL injection are several of the vulnerabilities that have plagued applications for years and continue to stay in the OWASP Top 10 list.

One strategy to address these vulnerabilities is running consistent and effective security code reviews. Not only will your code become cleaner and free of technical debt and code smells, it will also become more secure. Reviewing for these vulnerabilities can become part of your existing GitHub pull request workflow.

Code scanning can also leverage the power of machine learning to find injection vulnerabilities and help communicate risks to developers and security professionals. CodeQL powers code scanning, analyzing your code for security vulnerabilities.

A04-Insecure Design

Threat modeling strategies can be implemented to encourage collaboration between developers, security professionals, and even risk management teams. This can help ensure that the architecture and design patterns are as secure as possible long before a single line of code has been written.

A06-Vulnerable and Outdated Components

As you continue to adopt open source components at a rapid pace, it’s more important than ever to understand the composition of your software and be able to update vulnerable components.

One of the best strategies for managing the risk of vulnerable and outdated components is to alert developers as soon as a security threat is found and give them the ability to take action in their normal workflows and tooling.

By leveraging Dependabot, you will receive an alert when a repository uses a software dependency with a known vulnerability. A pull request can even be triggered by the security team, which is a simple and effective way for security and developers to communicate.

The best part of these tools is that they work within GitHub. So, you don’t need to worry about decreased productivity or excessive context switching.

Developer-empowering application security

By implementing these strategies, you can create a developer-embedded, collaborative, and scalable application security environment that provides risk mitigation across the supply chain. At the same time, you will ensure that developer productivity is not adversely affected.

We will be at the OWASP 2023 Global AppSec Dublin event from February 15-16, Booth #DA2. Book some time with our experts here to dive deeper into this topic and answer any questions about these strategies.

Want to gain hands-on experience? Then check out GitHub Skills for details on this, and other hands-on learning exercises!

Passwordless deployments to the cloud https://github.blog/enterprise-software/devsecops/passwordless-deployments-to-the-cloud/ Wed, 11 Jan 2023 16:00:11 +0000 https://github.blog/?p=69409 Discovering passwords in our codebase is probably one of our worst fears. But what if you didn’t need passwords at all, and could deploy to your cloud provider another way? In this post, we explore how you can use OpenID Connect to trust your cloud provider, enabling you to deploy easily, securely and safely, while minimizing the operational overhead associated with secrets (for example, key rotations).

The post Passwordless deployments to the cloud appeared first on The GitHub Blog.

Security is top of mind for us all in software development. My colleague, Mark Paulsen, recently shared a number of examples to mitigate OWASP vulnerabilities while maintaining your developer experience and productivity.

The security of the applications we’re building is important. But we also need to consider the security of the hosting environments that we’re deploying to. When deploying somewhere, you typically need to provide several pieces of information to enable the deployment to take place. Think of your cloud provider. You may need to be authenticated using a service principal, authorized using role-based access control, and to provide the name of a project, the ID of a subscription, or some additional resource metadata.

And here lies the challenge. You will typically have tens, possibly hundreds of service principals depending on the size of your application environment (also assuming you’ve adopted the principle of least privilege). Each of those service principals would have its own password or certificate, which would then be used to authenticate to the cloud provider.

Cryptographic Failures sits at position #2 on the OWASP 2021 Top 10 list (encompassing scenarios like secret or certificate leaks, weak passwords, and hardcoded passwords). IBM’s Cost of a Data Breach 2022 report explains that stolen or lost credentials were the most common cause of a data breach and also took the longest to identify.

So, what happens if one of your passwords or certificates leaks? GitHub Advanced Security’s secret scanning can identify exposed secrets in your codebase. With push protection, you can be proactive and prevent secrets from being committed in the first place. In case you missed it, push protection now also covers custom patterns which you have defined!

But what about the best practice of regularly rotating the passwords/certificates associated with each of those service principals? How do you keep references of those secret materials up to date in your CI/CD tool? There is a fair amount of operational complexity involved in just maintaining secrets and access between your tools of choice.

From a Site Reliability Engineering (SRE) perspective, this overhead could be considered toil, based on the characteristics defined in Google’s Site Reliability Engineering book. Toil is unavoidable. But in the context of this blog post, we have an opportunity for optimization. While tools like HashiCorp Vault can help you organize, automate, and maintain your secrets, wouldn’t it be better if you could avoid the problem altogether? What if you didn’t have to use secrets to deploy to your preferred cloud provider?

Fortunately, that’s something we’ve been working on at GitHub. Back in 2021, we announced that GitHub enables you to deploy to your cloud provider using OpenID Connect. In this post, I’ll provide you with a deeper overview of this functionality and how it can reduce operational complexities by removing the need for passwords.

What is OpenID Connect (OIDC)?

Let’s start by making sure we’re all on the same page. OpenID Connect is an authentication protocol built on top of the OAuth 2.0 framework (an authorization protocol). An ID token is typically returned from an authorization endpoint as part of a sign-on flow.

This ID token is served in the JSON Web Token (JWT) standard, and typically digitally signed. As a result, this token can be used to verify the identity of the caller, and retrieve additional claims (think of these as additional properties, or statements about the entity) as well.

You can find an example of an ID token returned from GitHub Actions below:

{
  "typ": "JWT",
  "alg": "RS256",
  "x5t": "example-thumbprint",
  "kid": "example-key-id"
}
{
  "jti": "example-id",
  "sub": "repo:octo-org/octo-repo:environment:prod",
  "environment": "prod",
  "aud": "https://github.com/octo-org",
  "ref": "refs/heads/main",
  "sha": "example-sha",
  "repository": "octo-org/octo-repo",
  "repository_owner": "octo-org",
  "actor_id": "12",
  "repository_visibility": "private",
  "repository_id": "74",
  "repository_owner_id": "65",
  "run_id": "example-run-id",
  "run_number": "10",
  "run_attempt": "2",
  "actor": "octocat",
  "workflow": "example-workflow",
  "head_ref": "",
  "base_ref": "",
  "event_name": "workflow_dispatch",
  "ref_type": "branch",
  "job_workflow_ref": "octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main",
  "iss": "https://token.actions.githubusercontent.com",
  "nbf": 1632492967,
  "exp": 1632493867,
  "iat": 1632493567
}

In the context of this blog post, we can use OpenID Connect in GitHub Actions to generate an ID token for us. This token is signed by GitHub and provides claims on the context of the workflow being executed (for example, the repository details, run number, actor that called the workflow, etc.).

As a result, a cloud provider can then use this ID token to verify the authenticity of a request, allowing a ‘trade’ of the GitHub ID token for a short-lived access token.


Diagram showing how a cloud provider can work with the GitHub OIDC provider to generate a short-lived token for use in place of a password.

Without OpenID Connect, you would typically have to pass in some credentials to your CI/CD tool, so that it can authenticate to your cloud provider.

GitHub Actions uses OpenID Connect to enable a workflow to authenticate against the cloud provider directly, without needing to use a password or a certificate. Instead, the access token from the cloud provider can be used.

Throughout this process, you are effectively establishing a ‘trust’ between GitHub and a service principal in your cloud provider. In AWS, you would add an OIDC provider to IAM, in Azure a ‘Federated Identity Credential’ and then ‘Workload Identity Federation’ in GCP.

Tip: this means that your cloud provider needs to support OpenID Connect as an authentication mechanism. There are several examples available in the GitHub docs.
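To make the AWS side of this concrete, once an OIDC identity provider has been added to IAM and a role has been configured to trust your repository, a workflow can assume that role without any stored credentials. The sketch below uses the official aws-actions/configure-aws-credentials action; the role ARN and region are placeholders for illustration:

```yaml
name: Deploy to AWS using OIDC
on: [push]

permissions:
  id-token: write # required to request the GitHub ID token
  contents: read  # required to check out the repository

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: 'Configure AWS credentials via OIDC'
        uses: aws-actions/configure-aws-credentials@v4
        with:
          # Placeholder ARN -- replace with the IAM role that trusts your repository
          role-to-assume: arn:aws:iam::123456789012:role/example-github-oidc-role
          aws-region: us-east-1

      - name: 'List S3 buckets'
        run: aws s3 ls
```

Note that the id-token permission is required here; this is covered in the next section.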

Setting up your GitHub Action Workflow for OIDC

Whenever you execute a GitHub Action workflow run, a GitHub Token is created. You may have already referenced this token in your existing workflows using the ${{ secrets.GITHUB_TOKEN }} expression. The GITHUB_TOKEN is typically used to gain access to the needed parts of GitHub for your automation’s needs.

For example, if your workflow is publishing a new package, then you may need write permissions to GitHub Packages. If you’re adding a comment to a GitHub Issue, then you would need write permissions to issues. Check out the GitHub docs for full details on permissions for the GITHUB_TOKEN.

To generate a GitHub OIDC ID token within your workflow, you’ll need to explicitly give the GITHUB_TOKEN permission to do this. This is done by setting the permissions for the id-token to write, as demonstrated in the snippet below.

permissions:
  id-token: write # This is required for requesting the JWT

This permission can be set either at the overall workflow level or at an individual job level. This will depend on where you need to use the token in your workflow (that is, across multiple jobs, or just one job; remember the principle of least privilege and only give the GITHUB_TOKEN the access it needs!).

Once this step is complete, your GitHub Action workflow will be capable of requesting the OIDC ID token, as outlined in the next section.
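As a sketch of the job-level approach, the snippet below grants the id-token permission only to the deploy job, so other jobs in the same workflow cannot request an ID token:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # only this job can request the JWT
    steps:
      - run: echo "This job can request an OIDC ID token"

  build:
    runs-on: ubuntu-latest
    # No id-token permission, so this job cannot request an ID token
    steps:
      - run: echo "This job cannot"
```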

Authenticating to the cloud provider using the GitHub OIDC token

If you already use GitHub Actions to deploy to the cloud, then you may be aware that there are several GitHub Actions that you can use to authenticate to your cloud provider, such as aws-actions/configure-aws-credentials for AWS, azure/login for Azure, and google-github-actions/auth for Google Cloud.

Note: while several cloud providers have GitHub Actions that support OIDC authentication, it’s possible to create a custom action for those providers which do not have an official GitHub action that supports this approach. You can find out more about the process in the GitHub docs.
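If no official action exists for your provider, you can request the ID token yourself. GitHub Actions exposes the token request URL and a bearer token as environment variables to jobs that have the id-token: write permission. The step below is a minimal sketch; the audience value my-cloud-provider is a hypothetical example, so use whatever audience your provider expects:

```yaml
jobs:
  get-token:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - name: 'Request the GitHub OIDC ID token'
        run: |
          # 'my-cloud-provider' is an example audience -- replace with your provider's value
          ID_TOKEN=$(curl -sSf -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=my-cloud-provider" | jq -r '.value')
          echo "Retrieved an ID token of length ${#ID_TOKEN}"
```

The retrieved JWT can then be sent to your provider’s token exchange endpoint in a subsequent step.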

GitHub Actions typically have multiple properties that can be set, so you need to consider the appropriate configuration for your cloud provider’s action. When configured to use OpenID Connect authentication, the GitHub Action will generate the GitHub ID token and send it to the cloud provider, where it is exchanged for a cloud provider access token. This access token can then be used in the later steps of your workflow (for example, in additional GitHub Actions or your own scripts) to perform authenticated steps against the cloud provider.

See an example below of logging in to Azure using OIDC:

name: Login to Azure and execute the Azure CLI
on: [push]

permissions:
  id-token: write

jobs: 
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: 'Login to Azure using OIDC'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}


      - name: 'List the Azure Resource Groups'
        run: |
          az group list

There are a few points to note about the above example:

  • The id-token permission is set to write at the workflow level. If additional jobs were added, then they would also be able to retrieve a GitHub ID token. This could have been configured at the job level, beneath deploy instead. If the id-token permission was not explicitly set to write, then the login step would fail, as the workflow would be unable to retrieve the GitHub ID token.
  • The Azure/login step is used to authenticate to Azure. In this configuration, a Client ID, Tenant ID and Subscription ID are properties set on the action. Notice that a password/certificate is not provided.
  • Some may consider the ID of the service principal, Azure Active Directory tenant, and Azure Subscription as sensitive information. They are passed in using GitHub Secrets, which would then mask those values if they are outputted in the workflow logs.
  • The Azure/login step retrieves the GitHub ID token. It then sends the GitHub ID token to Azure, along with the Service Principal Client ID, Tenant ID and Subscription ID. Azure then validates whether this specific workflow is ‘allowed’ access. If allowed, then the access token will be sent back to the workflow. Otherwise, the login step will fail, and the workflow will fail.
  • The command line is then used to list the Azure Resource Groups in the subscription that the above service principal has access to.

Note: the configurable properties for each GitHub Action are set by the owner of the action. While client-id, tenant-id and subscription-id are used for the Azure/login step, these are not the same for actions from the other cloud providers. Make sure to familiarize yourself with the appropriate action for your cloud provider, and the recommended configuration.
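For instance, Google Cloud’s google-github-actions/auth action takes workload_identity_provider and service_account properties rather than Azure’s client-id, tenant-id, and subscription-id. The resource names below are placeholders for illustration:

```yaml
permissions:
  id-token: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: 'Authenticate to Google Cloud using OIDC'
        uses: google-github-actions/auth@v2
        with:
          # Placeholder values -- replace with your Workload Identity Federation resources
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/example-pool/providers/example-provider
          service_account: example-sa@example-project.iam.gserviceaccount.com

      - name: 'List Google Cloud projects'
        run: gcloud projects list
```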

Now, let’s take stock. At this point, you have a GitHub Actions workflow which is capable of generating a GitHub ID token. You can then use a GitHub Action from one of the cloud providers to take the ID token, and exchange it for a short-lived access token. This access token can then be used (based on the role-based access control permissions you have configured on the cloud provider) to execute your workflow steps.

We aren’t using passwords! No longer do we need to worry about rotating certificates or passwords. Instead, we rely on the OpenID Connect protocol and the trust between GitHub and our cloud provider to obtain a short-lived access token for use in the workflow. This takes us one step closer to a passwordless world: deploying to our cloud provider without passing a password or certificate!

Securing and delivering high-quality code with innersource metrics https://github.blog/enterprise-software/devsecops/securing-and-delivering-high-quality-code-with-innersource-metrics/ Wed, 18 May 2022 21:50:03 +0000 https://github.blog/?p=65060 With innersource, it’s important to measure both the amount of innersource activity and the quality of the code being created. Here’s how.

The post Securing and delivering high-quality code with innersource metrics appeared first on The GitHub Blog.

Innersource creates high quality user experiences and productive developers

The open source software community has organically developed techniques that ensure the code all of us rely on is high quality, reusable, and secure even though it is worked on by people all across the world.

When an organization, such as a company or an agency, employs similar methods within its engineering department, it is known as innersource. Common innersource techniques include creating software templates and reusable components through collaboration across different development teams. These templates are then used across all the projects and services within a company to provide a consistent user experience and increase developer productivity by up to 87%.

As you develop an innersource practice within your organization it is important to measure both the amount of innersource activity and the quality of the code that is being created. Below we will focus on how to ensure the code you are using across your products and services is high quality and secure.

Secure your most used code

With the help of the GitHub Professional Services Team, a major government agency created a portal their developers could use to discover existing reusable software based on an open source SAP project. Once developers were able to easily discover relevant repositories they quickly began incorporating them into all of their current work. This meant that any problems in the original repositories would affect many different products and services, so ensuring that the original code was bug- and vulnerability-free had an outsized effect on the overall quality of the code base.

As secure code was the agency’s top priority, we built metrics into the discovery portal to provide visibility into the security status of their most innersourced repositories. These metrics are automatically updated daily, and allow the agency to prioritize their security efforts by keeping the most used repositories secure.

These metrics, along with the insights gathered from enabling GitHub Advanced Security secret scanning and code scanning on all 400+ of their innersource repositories, drove a 50% reduction in vulnerabilities. This means all the products and services dependent on these innersource repositories are more secure.

How to collect and secure your innersource

The government agency was able to develop, secure, and share reusable code internally to significantly accelerate and secure software development. Here are four simple steps your organization can take to accelerate development through innersource adoption:

  1. Identify reusable software across the teams in your enterprise.
  2. Collect those repositories and make them discoverable.
  3. Track metrics related to the security and quality of these critical repositories.
  4. Take targeted actions to improve those metrics, and celebrate the results!

Learn more about how organizations are accelerating development and creating top company cultures.

If you need support or further guidance, let us know at https://services.github.com/#contact. We’d be happy to use our experience to help accelerate and secure your software development!

GitHub Actions for security and compliance https://github.blog/enterprise-software/github-actions-for-security-compliance/ Fri, 22 Oct 2021 20:23:39 +0000 https://github.blog/?p=60823 GitHub Actions can automate several common security and compliance tasks, even if your CI/CD pipeline is managed by another tool.

The post GitHub Actions for security and compliance appeared first on The GitHub Blog.

When thinking about automating developer workflows, the first things that come to mind for most are traditional CI/CD tasks: build, test, and deploy. However, many other common tasks can benefit from automation outside of traditional deployment pipelines.

GitHub Actions can automate several common security and compliance tasks, which can be adopted in any GitHub repository, even if your CI/CD pipeline is managed by another tool.

Auditing repository access

Many organizations must regularly provide their auditors and control partners with evidence demonstrating which people have access to what resources. As the number of repositories grows regularly, gathering this information can be a challenge. Thankfully, there’s a GitHub Action that can automate this process for you.

The org-audit-action can be configured to output a .csv and a .json file detailing, for every repository in every organization in your enterprise:

  • which users have access
  • the permission those users have
  • the user login, full name, and (optional) SAML identity of the user

Screenshot of org-audit-action in use

You can view a brief demonstration of this action from a GitHub Demo Days, here.

Enforcing security policy

GitHub provides a number of useful security features out of the box: Dependabot alerts notify repository owners of vulnerabilities in their open source dependencies and automatically open pull requests to update them. The dependency graph contains license information for open source packages. Additionally, GitHub code scanning will alert users when they have written insecure code.

With all that information available, it can be useful to set a security policy for Dependabot alerts, license compliance, and code scanning, then check each repository against that policy. The ghascompliance action lets you do just that.

Policies can be codified for Dependabot, secret scanning, and code scanning alerts, as well as for open source software (OSS) license usage. This lets organizations define their risk threshold for each alert and define times to remediate for each alert severity.

A detailed overview on how to implement the tool is available in the action’s marketplace listing. You can also find a quick implementation focused on OSS license policy, which was highlighted as part of a GitHub Demo Days, here.

Demonstrating traceability: Creating issue branches

Many regulated organizations must enforce end-to-end traceability for all changes deployed to production. While the implementation specifics vary from place to place and tool to tool, they each demonstrate a traceable relationship between a requirement, the code changes implementing the requirement, the required human and automated approval steps, and the eventual deployment to production.

This kind of end-to-end traceability can be accomplished with GitHub Actions. The short-lived, narrowly-scoped feature branching workflows advocated by the GitHub flow are ideal for this type of workflow. Many organizations follow a pattern of generating a branch for each requirement, using a standard syntax that includes a reference to the requirement’s unique identifier. The create-issue-branch action can be used to create these branches directly from a GitHub issue. Options are available to:

  • trigger the workflow on issue assignment and/or on use of a slash command in a comment
  • customize the branch name
  • customize the response content
  • open a draft pull request linked to the issue

Screenshot of create-issue-branch action in use

This video demonstrates an implementation of create-issue-branch that triggers when an issue is assigned, opens a draft pull request, and passes a custom comment with a link to open the branch in the GitHub web editor.
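A minimal workflow for this pattern might look like the sketch below, assuming the robvanderleek/create-issue-branch action described above, triggered whenever an issue is assigned:

```yaml
name: Create issue branch
on:
  issues:
    types: [assigned]

jobs:
  create_issue_branch:
    runs-on: ubuntu-latest
    steps:
      # Creates a branch named after the issue and (optionally) a draft pull request
      - uses: robvanderleek/create-issue-branch@main
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Branch naming, draft pull request creation, and the comment text can be customized through the action’s configuration options.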

Demonstrating traceability: Requiring linked issues

Pull requests are the organizing constructs that link to all other parts of a development workflow: issues, code review, CI/CD, etc. This makes them ideal for demonstrating complete traceability.

If every change must be linked to a requirement, every pull request must have a linked issue. The create-issue-branch workflow, above, will automatically link a pull request to the issue, but it doesn’t enforce such a linkage. Thankfully, there’s the verify-linked-issue action. When configured to run on the pull_request event, it will create a check that fails unless the pull request is linked to an issue. Setting it as a required check in a branch protection rule will enforce the check and prohibit merging pull requests that aren’t linked to an issue.
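As a sketch, the workflow below runs the check on every pull request; the uses: reference is illustrative, so substitute the marketplace path of the verify-linked-issue action you adopt:

```yaml
name: Verify linked issue
on:
  pull_request:
    types: [opened, edited, reopened, synchronize]

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      # Illustrative reference -- replace with the actual marketplace action path
      - uses: example-org/verify-linked-issue-action@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Marking the resulting check as required in a branch protection rule is what actually blocks merging pull requests without a linked issue.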

Screenshot of verify-linked-issue action in use

For more information

The Actions workflows highlighted here can be used to create secure, compliant workflows in GitHub. They represent a small fraction of the many useful automations available from the GitHub Marketplace.

If you have questions about how to get started, be sure to check out the Quickstart and Learn GitHub Actions sections of our documentation, the Actions section of GitHub.community, and our guide on security hardening for GitHub Actions.

Applying DevSecOps to your software supply chain https://github.blog/enterprise-software/devsecops/applying-devsecops-to-your-software-supply-chain/ Thu, 03 Dec 2020 22:59:50 +0000 https://github.blog/?p=55221 To best apply DevSecOps principles to improve the security of your supply chain, you should ask your developers to declare your dependencies in code; and in turn provide your developers with maintained ‘golden’ artifacts and automated downstream actions so they can focus on code.

The post Applying DevSecOps to your software supply chain appeared first on The GitHub Blog.

This article was originally published on InfoWorld, and is republished here with permission. This is part of our blog series on DevSecOps and software security.

Developers often want to do the ‘right’ thing when it comes to security, but they don’t always know what that is. In order to help developers continue to move quickly, while achieving better security outcomes, organizations are turning to DevSecOps. DevSecOps is the mindset shift of making all parties who are part of the application development lifecycle accountable for security of the application, by continuously integrating security across your development process. In practice, this means shifting security reviews and testing left – from auditing or enforcing at deployment time, to also checking security controls earlier at build or development time.

For code your developers write, that means providing feedback on issues during the development process, so the developer doesn’t lose their flow. For dependencies your code pulls in as part of your software supply chain, what should you do?

Let’s first define a dependency. A dependency is another binary that your software needs in order to run, specified as part of your application. Using a dependency allows you to leverage the power of open source, and to pull in code for functions that aren’t a core part of your application, or where you might not be an expert. They often define your software supply chain — GitHub’s 2019 State of the Octoverse Report showed that on average, each repository has more than 200 dependencies (disclosure: I work for GitHub). An upstream vulnerability in any one of these dependencies means you’re likely affected too. The reality of the software supply chain is that you are dependent on code you didn’t write, yet the dependencies still require work from you for ongoing upkeep. So where should you get started in implementing security controls?

Create a unified CI/CD pipeline to shift security controls left

Part of the goal of DevSecOps, and shifting left, is to provide not only feedback but also consistency and repeatability as part of the development environment. This isn’t unique to your supply chain, but applies to any security control.

The earlier you can unify your CI/CD pipeline, the earlier you can implement controls, allowing your security controls to shift left. You don’t want to apply the same controls multiple times in different systems – it doesn’t scale, spreads your (already thin) security resources even thinner, allows inconsistencies to be introduced via drift or incompatibility in systems, and potentially worst of all, means you might miss something.

The precursor to shifting left and applying DevSecOps isn’t a security control at all – it’s about improving developer tooling to provide a consistent way to write, build, test, and deploy code. Introducing a centralized system for any one of these can help you improve your security. Organizations will frequently tackle developer tools from the last step, and work backwards to the first – that is, adopting a consistent deployment strategy before adopting a consistent build strategy – with one exception, code. Even if you build locally, chances are, you’re checking your code in for posterity.

You can start applying security controls to your code even without getting all the other steps unified. A developer-centric approach means your developers can stay in context and respond to issues as they code, not days later at deployment, or months later from a penetration test report. Building on a unified CI/CD pipeline, here are some tips on how your development team can apply DevSecOps to secure your software supply chain.

Declare dependencies in code, so you discover them at development time

First things first, in order to maintain your dependencies – for example, applying security patches – you need to know what your dependencies are. Seems straightforward, right?

There are many ways to detect your dependencies, at different parts of your development process: by analyzing the dependencies declared in code, for example, specified by a developer in a manifest file or lockfile; by tracking the dependencies pulled in as part of a build process; or by examining completed build artifacts, for example once they’re in your registry. Unfortunately, there is no perfect solution as all methods have their challenges, but you should pick the solution that best integrates with your existing development pipeline, or use multiple solutions to give you insights into dependencies at each step in your development process.

However, there are benefits to detecting dependencies in code, rather than later: you’re shifting that dependency management step left. This allows developers to immediately perform maintenance for dependencies – performing updates, applying security patches, or removing unnecessary dependencies – without waiting for feedback from a build or deployment step. And, if you don’t have a centralized or consistent build pipeline, and can’t apply a check later, detecting your dependencies in code means you can still infer this information. The main downside to detecting dependencies in code is that you might miss any artifacts pulled in later, for example, Gradle allows for dependencies to be resolved as part of a build, meaning build-time detection will contain more complete information.

To accurately detect dependencies in code – and to more easily control what dependencies you use – you’ll want to explicitly specify them as part of your application’s manifest file or lockfile, rather than vendoring them into a repository (forking a copy of a dependency as part of your project, aka copy-pasting it). Vendoring makes sense if you have a good reason to fork the code, for example, to modify or limit functionality for your organization, or use this as a step to review dependencies (you know, actually tracking inputs from vendors). (Some ecosystems also favor using vendoring.) However, if you’re planning on using the upstream version, vendoring makes updating your dependencies harder. By specifying your dependencies explicitly, it’s easier for your development team to update: with a single line of code change in a manifest, rather than re-forking and copying a whole repository. In certain ecosystems, you can use a lockfile to ensure consistency, so you’re using the same version in your development environment as you are for your production build, and review changes like any other code changes.
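Once dependencies live in a manifest, platform tooling can keep them current. As one concrete example, a .github/dependabot.yml file like the sketch below tells Dependabot to check an npm manifest weekly and open update pull requests (the ecosystem and schedule here are illustrative choices):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm" # ecosystem of the manifest to watch (illustrative)
    directory: "/"           # location of package.json within the repository
    schedule:
      interval: "weekly"     # how often to check for dependency updates
```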

Provide a path paved with ‘golden’ packages for your development team

You might already be familiar with the concept of ‘golden’ images, which are maintained and sanctioned by your organization, including the latest security patches. This is a common concept for containers, to provide developers with a base image on which they can build their containers, without having to worry about the underlying OS. The idea here is to only have to maintain one set of OSes, managed by a central team, that you know have been reviewed for security issues and validated in your environment. Well, why not do that for any other artifacts too?

To supplement a unified CI/CD pipeline, you can provide a reference set of maintained artifacts and libraries. This is just a pre-emptive security control – rather than verifying that a package is up to date once it’s been built, give your developers what they need as an input to their build. For example, if multiple teams are using OpenSSL, you shouldn’t need every team to update it; if one team updates it (and there are sufficient tests in place!), then you should be able to change the default for all teams. This could be implemented by having a central internal package registry of your known good artifacts, that have already passed any security requirements, and have a clear owner responsible for updating if new versions are released.

By providing a single set of packages, you’re ensuring all teams reference these. Keep in mind, the latest you can do this is in the build system, but this could also be done earlier in code, especially if you’re using a monorepo. An added benefit of sharing common artifacts and libraries is making it easier to tell if you’re ‘affected’ by a newly discovered vulnerability. If the corresponding artifact hasn’t been updated, you are! And then it’s just one change to address the issue, and for the update to flow downstream to all teams. Phew.
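One way to implement a ‘golden’ artifact flow is a scheduled workflow, owned by the central team, that rebuilds and publishes the blessed base image to an internal registry. The sketch below uses GitHub’s container registry via the docker/login-action and docker/build-push-action actions; the image name is a placeholder:

```yaml
name: Publish golden base image
on:
  schedule:
    - cron: '0 6 * * 1' # rebuild weekly to pick up upstream security patches
  workflow_dispatch:

permissions:
  contents: read
  packages: write

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: 'Log in to GitHub Container Registry'
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: 'Build and push the golden image'
        uses: docker/build-push-action@v5
        with:
          push: true
          # Placeholder image name -- replace with your organization's registry path
          tags: ghcr.io/example-org/golden-base:latest
```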

Automate downstream build and deployment

To make sure that developers’ hard work pays off, their changes actually need to make it to production! In creating a unified CI/CD pipeline, you cleared a path for changes that are made to code in a development environment to propagate downstream to testing and production environments. The next step is to simplify this with automation. In an ideal world, your development team only makes changes to a development environment, with any changes to that environment automatically pushed to testing, validated, and automatically rolled out (and back, if needed).

Rather than applying DevOps and DevSecOps by requiring your development team to learn operations tools, you simplify those tools and feedback to what these teams need to know in order to make changes where they’re most familiar, in code. This should sound familiar – it’s what’s happening with trends like infrastructure as code, or GitOps – define things in code, and let your workflow tools handle making the actual change.

If you can automate downstream build, testing, and deployment of your code – then your developers only need to focus on fixing code. Following DevSecOps principles, they don’t need to learn tooling to do validating testing, phased deployments, or whatever you might need in your environment. Crucially, for security, your development team doesn’t need to learn how to roll out a fix in order to apply a fix. Fixing a security issue in code and committing it is sufficient to ensure it (eventually) gets fixed in production. Instead, you can focus on quickly finding and fixing bugs in code.
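The end state described above can be sketched as a workflow that deploys automatically when code lands on the default branch, with a protected environment gating production; the run-tests.sh and deploy.sh scripts and the environment name are illustrative:

```yaml
name: Continuous deployment
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run-tests.sh # illustrative test entry point

  deploy:
    needs: test # deploy only runs if validation succeeds
    runs-on: ubuntu-latest
    environment: production # approvals and protection rules configured on this environment
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh # illustrative deployment script
```

With environment protection rules configured, required reviewers can gate the deploy job without developers ever touching operations tooling.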

Creating a unified CI/CD pipeline allows you to shift security controls left, including for supply chain security. Then, to best apply DevSecOps principles to improve the security of your dependencies, you should ask your developers to declare your dependencies in code and in turn provide them with maintained ‘golden’ artifacts and automated downstream actions so they can focus on code. Since this requires changes to not just security controls, but your developers’ experience, just using security tooling isn’t sufficient to implement DevSecOps. In addition to enabling platform-native dependency management features, you’ll also want to take a closer look at your CI/CD pipeline and artifact management.

Altogether, applying DevSecOps means you can have a better understanding of what’s in your supply chain. By using DevSecOps, it should be simpler to manage your dependencies, with a change to a manifest or lockfile easily updating a single artifact in use across multiple teams, and automation of your CI/CD pipeline ensuring that changes developers make quickly end up in production.
