The latest on automation - The GitHub Blog
https://github.blog/enterprise-software/automation/

Automate repository tasks with GitHub Agentic Workflows
https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/
Fri, 13 Feb 2026 14:00:00 +0000

Discover GitHub Agentic Workflows, now in technical preview. Build automations using coding agents in GitHub Actions to handle triage, documentation, code quality, and more.

The post Automate repository tasks with GitHub Agentic Workflows   appeared first on The GitHub Blog.


Imagine visiting your repository in the morning and feeling calm because you see:

  • Issues triaged and labeled
  • CI failures investigated, with proposed fixes
  • Documentation updated to reflect recent code changes
  • Two new pull requests that improve testing, awaiting your review

All of it visible, inspectable, and operating within the boundaries you’ve defined.

That’s the future powered by GitHub Agentic Workflows: automated, intent-driven repository workflows that run in GitHub Actions, authored in plain Markdown and executed with coding agents. They’re designed for people working in GitHub, from individuals automating a single repo to teams operating at enterprise or open-source scale.

At GitHub Next, we began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. By bringing automated coding agents into Actions, we can enable their use across millions of repositories, while keeping decisions about when and where to use them in your hands.

GitHub Agentic Workflows are now available in technical preview. In this post, we’ll explain what they are and how they work. We invite you to put them to the test, to explore where repository-level AI automation delivers the most value.

Graphic showing quotes from customers:

“Home Assistant has thousands of open issues. No human can track what’s trending or which problems affect the most users. I’ve built GitHub Agentic Workflows that analyze issues and surface what matters: that’s the kind of judgment amplification that actually helps maintainers.”
- Franck Nijhof, lead of the Home Assistant project, one of the top projects on GitHub by contributor count

Agentic workflows also allow maintainers and the community to experiment with repository automation together:

“Adopting GitHub’s Agentic Workflows has lowered the barrier for experimentation with AI tooling, making it significantly easier for staff, maintainers and newcomers alike. Inside of CNCF, we are benefiting from improved documentation automation along with improving team reporting across the organization. This isn’t just a technical upgrade for our community, it’s part of a cultural shift that empowers our ecosystem to innovate faster with AI and agentic tooling.”
- Chris Aniszczyk, CTO of the Cloud Native Computing Foundation (CNCF), whose mission is to make cloud native computing ubiquitous across the world

Enterprises are seeing similar benefits at scale:

“With GitHub Agentic Workflows, we’re able to expand how we apply agents to real engineering work at scale, including changes that span multiple repositories. The flexibility and built-in controls give us confidence to leverage Agentic Workflows across complex systems at Carvana.”
- Alex Devkar, Senior Vice President, Engineering and Analytics, at Carvana

AI repository automation: A revolution through simplicity 

The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.

This brings the power of coding agents into the heart of repository automation. Agentic workflows run as standard GitHub Actions workflows, with added guardrails for sandboxing, permissions, control, and review. When they execute, they can use different coding agent engines—such as Copilot CLI, Claude Code, or OpenAI Codex—depending on your configuration.
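Engine selection happens in the workflow’s frontmatter. As a minimal sketch (the `engine` field name is taken from the gh-aw documentation; verify it against the current schema for your version):

```yaml
engine: copilot   # or another configured engine, such as claude or codex
```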

GitHub Agentic Workflows make entirely new categories of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub. All of them would be difficult or impossible to accomplish with traditional YAML workflows alone:

  1. Continuous triage: automatically summarize, label, and route new issues.
  2. Continuous documentation: keep READMEs and documentation aligned with code changes.
  3. Continuous code simplification: repeatedly identify code improvements and open pull requests for them.
  4. Continuous test improvement: assess test coverage and add high-value tests.
  5. Continuous quality hygiene: proactively investigate CI failures and propose targeted fixes.
  6. Continuous reporting: create regular reports on repository health, activity, and trends.

These are just a few examples of repository automations that showcase the power of GitHub Agentic Workflows. We call this Continuous AI: the integration of AI into the SDLC, enhancing automation and collaboration similar to continuous integration and continuous deployment (CI/CD) practices.

GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows. Agentic workflows run on GitHub Actions because that is where GitHub provides the necessary infrastructure for permissions, logging, auditing, sandboxed execution, and rich repository context.

In our own usage at GitHub Next, we’re finding new uses for agentic workflows nearly every day. Throughout GitHub, teams have been using agentic workflows to create custom tools for themselves in minutes, replacing chores with intelligence or paving the way for humans to get work done by assembling the right information, in the right place, at the right time. A new world of possibilities is opening for teams and enterprises to keep their repositories healthy, navigable, and high-quality.

Let’s talk guardrails and control 

Designing for safety and control is non-negotiable. GitHub Agentic Workflows implement a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.

Workflows run with read-only permissions by default. Write operations require explicit approval through safe outputs, which map to pre-approved, reviewable GitHub operations such as creating a pull request or adding a comment to an issue. Sandboxed execution, tool allowlisting, and network isolation help ensure that coding agents operate within controlled boundaries.

Guardrails like these make it practical to run agents continuously, not just as one-off experiments. See our security architecture for more details.

One alternative approach to agentic repository automation is to run coding agent CLIs, such as Copilot or Claude, directly inside a standard GitHub Actions YAML workflow. This approach often grants these agents more permission than is required for a specific task. In contrast, GitHub Agentic Workflows run coding agents with read-only access by default and rely on safe outputs for GitHub operations, providing tighter constraints, clearer review points, and stronger overall control.

A simple example: A daily repo report  

Let’s look at an agentic workflow which creates a daily status report for repository maintainers.

In practice, you will usually use AI assistance to create your workflows. The easiest way to do this is with an interactive coding agent. For example, with your favorite coding agent, you can enter this prompt:

Generate a workflow that creates a daily repo status report for a maintainer. Use the instructions at https://github.com/github/gh-aw/blob/main/create.md

The coding agent will interact with you to confirm your specific needs and intent, write the Markdown file, and check its validity. You can then review, refine, and validate the workflow before adding it to your repository.

This will create two files in .github/workflows:

  • daily-repo-status.md (the agentic workflow)  
  • daily-repo-status.lock.yml (the corresponding agentic workflow lock file, which is executed by GitHub Actions) 

The file daily-repo-status.md will look like this: 

--- 
on: 
  schedule: daily 
 
permissions: 
  contents: read 
  issues: read 
  pull-requests: read 
 
safe-outputs: 
  create-issue: 
    title-prefix: "[repo status] " 
    labels: [report] 
 
tools: 
  github: 
---  
 
# Daily Repo Status Report 
 
Create a daily status report for maintainers. 
 
Include 
- Recent repository activity (issues, PRs, discussions, releases, code changes) 
- Progress tracking, goal reminders and highlights 
- Project status and recommendations 
- Actionable next steps for maintainers 
 
Keep it concise and link to the relevant issues/PRs.

This file has two parts: 

  1. Frontmatter (YAML between --- markers) for configuration 
  2. Markdown instructions that describe the job in natural language

The Markdown is the intent, but the trigger, permissions, tools, and allowed outputs are spelled out up front.

If you prefer, you can add the workflow to your repository manually: 

  1. Create the workflow: Add  daily-repo-status.md with the frontmatter and instructions.
  2. Create the lock file:  
    • gh extension install github/gh-aw  
    • gh aw compile
  3. Commit and push: Commit and push files to your repository.
  4. Add any required secrets: For example, add a token or API key for your coding agent.

Once you add this workflow to your repository, it will run automatically, or you can trigger it manually using GitHub Actions. When the workflow runs, it creates a status report issue like this:

Screenshot of a GitHub issue titled "Daily Repo Report - February 9, 2026" showing key highlights, including 2 new releases, 1,737 commits from 16 contributors, 100 issues closed with 190 new issues opened, 50 pull requests merged from 93 opened pull requests, and 5 code quality issues opened.

What you can build with GitHub Agentic Workflows 

If you’re looking for further inspiration, Peli’s Agent Factory is a guided tour through a wide range of workflows, with practical patterns you can adapt, remix, and standardize across repos.

A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.

If you’re looking for design patterns, check out ChatOps, DailyOps, DataOps, IssueOps, ProjectOps, MultiRepoOps, and Orchestration.

Uses for agent-assisted repository automation often depend on particular repos and development priorities. Your team’s approach to software development will differ from those of other teams. It pays to be imaginative about how you can use agentic automation to augment your team, your repositories, and your goals.

Practical guidance for teams 

Agentic workflows bring a shift in thinking. They work best when you focus on goals and desired outputs rather than perfect prompts. You provide clarity on what success looks like, and allow the workflow to explore how to achieve it. Some boundaries are built into agentic workflows by default, and others are ones you explicitly define. This means the agent can explore and reason, but its conclusions always stay within safe, intentional limits.

You will find that your workflows can range from very general (“Improve the software”) to very specific (“Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above”). You can choose the level of specificity that’s appropriate for your team.

GitHub Agentic Workflows use coding agents at runtime, which incur billing costs. When using Copilot with default settings, each workflow run typically incurs two premium requests: one for the agentic work and one for a guardrail check through safe outputs. The models used can be configured to help manage these costs. Today, automated uses of Copilot are associated with a user account. For other coding agents, refer to our documentation for details. Here are a few more tips to help teams get value quickly:

  • Start with low-risk outputs such as comments, drafts, or reports before enabling pull request creation.
  • For coding, start with goal-oriented improvements such as routine refactoring, test coverage, or code simplification rather than feature work.
  • For reports, use instructions that are specific about what “good” looks like, including format, tone, links, and when to stop.
  • Agentic workflows create an agent-only sub-loop that can operate autonomously because agents act under defined terms. But it’s important that humans stay in the broader loop of forward progress in the repository, through reports, issues, and pull requests. With GitHub Agentic Workflows, pull requests are never merged automatically, and humans must always review and approve.
  • Treat the workflow Markdown as code. Review changes, keep it small, and evolve it intentionally.

Continuous AI works best if you use it in conjunction with CI/CD. Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD. This approach extends continuous automation to more subjective, repetitive tasks that traditional CI/CD struggles to express.

Build the future of automation with us   

GitHub Agentic Workflows are available now in technical preview and are a collaboration between GitHub, Microsoft Research, and Azure Core Upstream. We invite you to try them out and help us shape the future of repository automation.

We’d love for you to be involved! Share your thoughts in the Community discussion, or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating!

Try GitHub Agentic Workflows in a repo today! Install gh-aw, add a starter workflow or create one using AI, and run it. Then, share what you build (and what you want next)!

How to streamline GitHub API calls in Azure Pipelines
https://github.blog/enterprise-software/ci-cd/how-to-streamline-github-api-calls-in-azure-pipelines/
Thu, 24 Jul 2025 16:00:00 +0000

Build a custom Azure DevOps extension that eliminates the complexity of JWT generation and token management, enabling powerful automation and enhanced security controls.

The post How to streamline GitHub API calls in Azure Pipelines appeared first on The GitHub Blog.


Azure Pipelines is a cloud-based continuous integration and continuous delivery (CI/CD) service that automatically builds, tests, and deploys code similarly to GitHub Actions. While it is part of Azure DevOps, Azure Pipelines has built-in support to build and deploy code stored in GitHub repositories.

Because Azure Pipelines is fully integrated into GitHub development flows, pipelines can be triggered by pushes or pull requests, and it reports the results of the job execution back to GitHub via GitHub status checks. This way, developers can easily see if a given commit is healthy or block pull request merges if the pipeline is not compliant with GitHub rulesets.

When you need additional functionality, you can use either extensions available in the marketplace or GitHub APIs to deepen the integration with GitHub. Below, we’ll show how you can streamline the process of calling the GitHub API from Azure Pipelines by abstracting authentication with GitHub Apps and introducing a custom Azure DevOps extension. This allows pipeline authors to easily authenticate against GitHub and call GitHub APIs without implementing authentication logic themselves. This approach provides enhanced security through centralized credential management, improved maintainability by standardizing GitHub integrations, time savings through cross-project reusability, and simplified operations with centrally managed updates for bug fixes.

Common use cases and scenarios

The GitHub API is very rich, so the possibilities for customization are almost endless. Some of the most common scenarios for GitHub calls in Azure Pipelines include:

  • Setting status checks on commits or pull requests: Report the success or failure of pipeline steps (like tests, builds, or security scans) back to GitHub, enabling rulesets utilization to enforce policies, and providing clear feedback to developers about the health of their code changes.
  • Adding comments to pull requests: Automatically post pipeline results, test coverage reports, performance metrics, or deployment information directly to pull request discussions, keeping all relevant information in one place for code reviewers.
  • Updating files in repositories: Automatically update documentation, configuration files, or version numbers as part of your CI/CD process, such as updating a CHANGELOG.md file or bumping version numbers in package files.
  • Managing GitHub Issues: Automatically create, update, or close issues based on pipeline results, such as creating bug reports when tests fail or closing issues when related features are successfully deployed.
  • Integrating with GitHub Advanced Security: Send code scanning results to GitHub’s code scanning, enabling centralized vulnerability management, security insights, and supporting DevSecOps practices across your development workflow.
  • Managing releases and assets: Automatically create GitHub releases and upload build artifacts, binaries, or documentation as release assets when deployments are successful, streamlining your release management process.
  • Tracking deployments with GitHub deployments: Integrate with GitHub’s deployment API to provide visibility into deployment history and status directly in the GitHub interface.
  • Triggering GitHub Actions workflows: Orchestrate hybrid CI/CD scenarios where Azure Pipelines handles certain build or deployment tasks and then triggers GitHub Actions workflows for additional processing or notifications.
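As a concrete sketch of the first scenario, reporting a status check boils down to a single REST call, POST /repos/{owner}/{repo}/statuses/{sha}. The owner, repo, SHA, and token below are placeholder values, and the API call itself is shown commented out since it requires real credentials:

```shell
#!/bin/sh
# Sketch: report a pipeline result to GitHub as a commit status.
set -eu

OWNER="my-org"          # placeholder: repository owner
REPO="my-repo"          # placeholder: repository name
SHA="abc123"            # placeholder: commit being reported on
STATE="success"         # one of: error, failure, pending, success
CONTEXT="azure-pipelines/build"
DESCRIPTION="Build and tests passed"

# Body for POST /repos/{owner}/{repo}/statuses/{sha}
PAYLOAD=$(printf '{"state":"%s","context":"%s","description":"%s"}' \
  "$STATE" "$CONTEXT" "$DESCRIPTION")
echo "$PAYLOAD"

# curl -s -X POST \
#   -H "Authorization: Bearer $GITHUB_TOKEN" \
#   -H "Accept: application/vnd.github+json" \
#   "https://api.github.com/repos/$OWNER/$REPO/statuses/$SHA" \
#   -d "$PAYLOAD"
```

The `context` value is what GitHub displays next to the check, so rulesets can require this exact context before a pull request may merge.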

Understanding GitHub API: REST vs. GraphQL

The GitHub API provides programmatic access to most of GitHub’s features and data, offering two distinct interfaces: REST and GraphQL. The REST API follows RESTful principles and provides straightforward HTTP endpoints for common operations like managing repositories, issues, pull requests, and workflows. It’s well documented, easy to get started with, and supports authentication via personal access tokens, GitHub Apps, or OAuth tokens.

GitHub’s GraphQL API offers a more flexible and efficient approach to data retrieval. Unlike REST, where you might need multiple requests to gather related data, GraphQL allows you to specify exactly what data you need in a single request, reducing over-fetching and under-fetching of data. This is particularly valuable when you need to retrieve complex, nested data structures or when you want to optimize network requests in your applications. You can see some examples in Exploring GitHub CLI: How to interact with GitHub’s GraphQL API endpoint.
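For example, a single GraphQL query can fetch a repository’s open issues and open pull requests together, something that would take multiple REST calls (field names follow the public GitHub GraphQL schema; `octocat/hello-world` is a placeholder repository):

```graphql
query {
  repository(owner: "octocat", name: "hello-world") {
    issues(first: 10, states: OPEN) {
      nodes { number title }
    }
    pullRequests(first: 10, states: OPEN) {
      nodes { number title }
    }
  }
}
```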

Both APIs serve as the foundation for integrating GitHub’s functionality into external tools, automating workflows, and building custom solutions that extend GitHub’s capabilities.

How to choose the right authentication method

GitHub offers three primary authentication methods for accessing its APIs. Personal Access Tokens (PATs) are the simplest method, providing a token tied to a user account with specific permissions. OAuth tokens are designed for third-party applications that need to act on behalf of different users, implementing a standard authorization flow where users grant specific permissions to the application. 

GitHub Apps provide the most robust and scalable solution, operating as their own entities with fine-grained permissions, installation-based access, and higher rate limits — making them ideal for organizations and production applications that need to interact with multiple repositories or organizations while maintaining tight security controls.

Personal Access Tokens (PATs)

Pros:
  • Simple to create and use
  • Quick to get started
  • Good for personal automation
  • Can be scoped to multiple organizations
  • Configurable permissions per token
  • Admins can revoke organization access
  • Configurable expiration dates
  • Work with most GitHub API libraries
  • No additional infrastructure needed

Cons:
  • Tied to user account lifecycle
  • Limited to user’s permissions
  • Classic PATs have coarse-grained permissions
  • Require manual rotation
  • Browser-based management only
  • If compromised, expose all accessible organization(s)/repositories

OAuth Tokens

Pros:
  • Standard OAuth 2.0 flow
  • Organization admins control app access
  • Can act on behalf of multiple users
  • Excellent for web applications
  • User-approved permissions
  • Refresh token mechanism
  • Widely supported by frameworks
  • Good for user-facing applications

Cons:
  • Require storing refresh tokens securely
  • Need server infrastructure
  • More complex than PATs for simple automation
  • Still tied to user accounts
  • Require initial browser authorization
  • Token management complexity
  • Potential for scope creep
  • User revocation affects functionality

GitHub Apps

Pros:
  • Act as independent identity
  • Fine-grained, repository-level permissions
  • Installation-based access control
  • Tokens can be scoped down at runtime
  • Short-lived tokens (1 hour max)
  • Higher rate limits
  • Best security model available
  • No user account dependency
  • Audit trail for all actions
  • Can be installed across multiple orgs

Cons:
  • More complex initial setup
  • Require JWT implementation
  • May be overkill for simple scenarios
  • Require understanding of installation concept
  • Private key management responsibility
  • More moving parts to maintain
  • Not all APIs support Apps

PATs have two flavors: classic and fine-grained. Classic PATs provide repository-wide access with coarse permissions. Fine-grained PATs offer more granular control, since they are scoped to a single organization, allow specified permissions at the repository level, and limit access to specific repositories. Administrators can also require approval of fine-grained tokens before they can be used, making them a more secure choice for repository access management. However, they currently do not support all API calls and still have some limitations compared to classic PATs.

Because of their fine-grained permissions, security features, and higher rate limits, GitHub Apps are the ideal choice for machine-to-machine integration with Azure Pipelines. What’s more, the short-lived tokens and installation-based access model provide better security controls compared to PATs and OAuth tokens, making them particularly well-suited for automation in CI/CD scenarios.

Registering and installing a GitHub App

In order to use an application for authentication, register it as a GitHub App, and then install it on the accounts, organizations, or enterprises the application will interact with.

These are the steps to follow:

  1. Register the GitHub App in GitHub enterprise, organization, or account.
    • Make sure to select the appropriate permissions for the application. The permissions will determine what the application can do in the enterprise, organization, and repositories to which it has access.
    • Permissions may be modified at any time. Note that if the application is already installed, changes will require a new authorization from the owner administrators before they take effect.
    • Take care to understand the consequences of making the app public or private. It is very likely that you will want to make the app private, as it is only intended to be used by you or your organization. The semantics of public and private also vary depending on the GitHub Enterprise Cloud type (Enterprise with personal accounts, with managed users, or with data residency).
    • If a private key was generated, save it in a safe place. Private keys are used to authenticate against GitHub to generate an installation token. Note that a key can be revoked, and up to 20 additional keys may be generated if desired.
  2. Install the GitHub App on the accounts or organizations the application will interact with.
    • When an app is installed, select which repositories the app will have access to. Options include all repositories (current and future) or you can select individual repositories.

Note: An unlimited number of GitHub Apps may be installed on each account, but only 100 GitHub Apps may be registered per enterprise, organization, or account.

GitHub App authentication flow

GitHub Apps use a two-step authentication process to access the GitHub API. First, the app authenticates itself using a JSON Web Token (JWT) signed with its private key. This JWT proves the app’s identity but doesn’t provide access to any GitHub resource. To call GitHub APIs, the app needs to obtain an installation token. Installation tokens are scoped (enterprise, organization, or account) access tokens that are generated using the app’s JWT authentication. These tokens are short-lived (valid for one hour), can only access the resources in the scope where the app is installed (enterprise, organization, or repository), and carry at most the permissions granted during the app’s installation.

To obtain an installation token, there are two approaches: either use a known installation ID, or retrieve the ID by calling the installations API. Once the app has the installation ID, it requests a new token using that ID. The resulting installation token inherits the app’s permissions and repository access for that installation. It can optionally request the token with reduced permissions or limited to specific repositories — a useful security feature when you don’t need the app’s full access scope.

The resulting installation token can then be used to make GitHub API calls with the returned permissions.

Note: The application can also authenticate on a user’s behalf, but it’s not an ideal scenario for CI/CD pipelines where we want to use a service account and not a user account.

Sequence diagram showing GitHub App authentication flow between Client and GitHub, including JWT generation, installation ID retrieval, and installation token creation steps.

From a pipeline perspective, generating an installation token is all that’s needed to call GitHub APIs.

Pipeline authors have three main options to generate installation tokens in Azure Pipelines:

  1. Use a command-line tool: Several tools are available that can generate installation tokens directly from a pipeline step. For example, gh-token is a popular open source tool that handles the entire token generation process.
  2. Write custom scripts: Implement the token generation process using bash/curl or PowerShell scripts following the authentication steps described above. This grants full control over the process but requires more implementation effort.
  3. Use Azure Pipeline tasks: While Azure Pipelines doesn’t provide built-in GitHub App authentication, you can either:
    • Find a suitable task in the Azure DevOps marketplace.
    • Create a custom task that implements the GitHub App authentication flow.
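As a sketch of the second option, a script following the authentication flow described above might look like this. CLIENT_ID, INSTALLATION_ID, and the key file path are placeholders; for illustration only, the script generates a throwaway key when none is supplied, and the actual token exchange is shown commented out because it requires a registered app:

```shell
#!/bin/sh
# Sketch: mint a GitHub App JWT with openssl, then exchange it for an
# installation token. Replace the placeholders with your app's real values.
set -eu

CLIENT_ID="${CLIENT_ID:-Iv1.example}"   # the app's client ID (JWT issuer)
KEY_FILE="${KEY_FILE:-app-key.pem}"     # the app's private key (PEM)

# For illustration only: create a throwaway key if none was supplied.
[ -f "$KEY_FILE" ] || openssl genrsa -out "$KEY_FILE" 2048 2>/dev/null

# base64url encoding: standard base64, then '+/' -> '-_' with padding stripped
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

now=$(date +%s)
header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
# iat is backdated 60s for clock drift; exp must be at most 10 minutes out
payload=$(printf '{"iat":%d,"exp":%d,"iss":"%s"}' \
  "$((now - 60))" "$((now + 540))" "$CLIENT_ID" | b64url)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -sign "$KEY_FILE" | b64url)
JWT="$header.$payload.$signature"
echo "$JWT"

# Exchange the JWT for a short-lived installation token (requires a real app):
# curl -s -X POST \
#   -H "Authorization: Bearer $JWT" \
#   -H "Accept: application/vnd.github+json" \
#   "https://api.github.com/app/installations/$INSTALLATION_ID/access_tokens"
```

This is essentially what tools like gh-token and the custom task below do for you, minus error handling and installation ID discovery.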

Next, we’ll explore creating a custom task using an Azure DevOps extension to provide an integration with GitHub App authentication and dynamically generated installation tokens.

Azure DevOps extension for GitHub App authentication

When creating an integration between Azure Pipelines and GitHub, security of the app private key should be top of mind. Possession of this key grants permissions to generate installation tokens and make API calls on behalf of the app, so it must be stored securely. Within Azure Pipelines, we have several options for storing sensitive data, and service connections are the best fit for this scenario.

Service connections in Azure Pipelines provide several key benefits for managing external service authentication, including:

  • Centralized access control where administrators can specify which pipelines can use the connection
  • Support for multiple authentication schemes
  • Ability to share connections across multiple pipelines within a project
  • Built-in security controls for managing who can view or modify connection details
  • Sensitive credentials kept hidden from pipeline authors while still allowing usage
  • Shared connections across multiple projects, reducing duplication and management overhead

For GitHub App authentication, service connections are particularly valuable because they:

  • Securely store the app’s private key
  • Allow administrators to configure and enforce connection behaviors
  • Provide better security compared to storing secrets directly in pipelines or variable groups

For those eager to explore the sample code, check out the repository. The key components and configuration are detailed below.

Creating a custom Azure DevOps extension

Azure DevOps extensions are packages that add new capabilities to Azure DevOps services. In our case, we need to create an extension that provides two key components:

  • Custom service connection type for securely storing GitHub App credentials (and other settings)
  • Custom task that uses those credentials to generate installation tokens

An extension consists of a manifest file that describes what the extension provides, along with the actual implementation code.

The development process involves creating the extension structure, defining the service connection schema, implementing the custom task logic in PowerShell (Windows only) or JavaScript/TypeScript for cross-platform compatibility, and packaging everything into a distributable format. Once created, the extension can be published privately for your organization or shared publicly through the Azure DevOps Marketplace, making it available for others who have similar GitHub integration needs.

We are not going to do a full walkthrough of the extension creation process, but we will demonstrate the most important steps.

Adding a custom service connection

To enable GitHub App authentication in Azure Pipelines, we need to create a custom service connection type since there isn’t a built-in one. This can be done by adding a custom endpoint contribution to our extension, which will define how the service connection stores and validates the GitHub App credentials, and provides a user-friendly UI for configuring the connection settings like App ID, private key, and other properties.

We need to add a contribution of type ms.vss-endpoint.service-endpoint-type to the extension contributions manifest. This contribution will define the service connection type and its properties, like the authentication scheme, the endpoint schema, and the input fields that will be displayed in the service connection configuration dialog.

Something like this (see the snippet below, or explore the full contribution definition in the reference implementation):

"contributions": [
  {
    "id": "github-app-service-endpoint-type",
    "description": "GitHub App Service Connection",
    "type": "ms.vss-endpoint.service-endpoint-type",
    "targets": [ "ms.vss-endpoint.endpoint-types" ],
    "properties": {
        "name": "githubappauthentication",
        "isVerifiable": false,
        "displayName": "GitHub App",
        "url": {
            "value": "https://api.github.com/",
            "displayName": "GitHub API URL",
            "isVisible": "true"
        },
        ...
    }
  },
  ...
]
Once you install the extension, you can add/manage the service connection of type “GitHub App” and configure the app’s ID, private key, and other settings. The service connection will securely store the private key and can be used by custom tasks to generate installation tokens in a pipeline.

Azure DevOps new service connection dialog showing different connection types including Generic, GitHub, GitHub App (highlighted with red arrow), GitHub Enterprise Server, and Incoming WebHook options.

In addition to storing the private key, the custom service connection can also store other settings, such as the GitHub API URL and the app client ID. It can also be used to limit token permissions or scope the token to specific repositories. By optionally enforcing these settings at the service connection level, administrators can ensure consistency and security, rather than leaving configuration decisions to pipeline authors.

Azure DevOps service connection configuration form for custom GitHub App authentication, showing fields for GitHub API URL, Client ID, Private Key, Token Permissions, and Service Connection Name.

Adding a custom task

Now that we have a secure way to store the GitHub App credentials, we can create a custom task that will use the service connection to generate an installation token. The task will be a cross-platform TypeScript application using the Azure DevOps Extension SDK.

While I already shared the full walkthrough of creating a custom task, here is an abbreviated list to follow:

  • Create the custom task skeleton
  • Declare the inputs and outputs on the task manifest (task.json)
  • Implement the code
  • Declare the task and its assets on the extension manifest (vss-extension.json)
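As a sketch of the second step, an abridged task.json for such a task might look like the following. This is illustrative, not the reference implementation’s actual manifest: the GUID, version, and descriptions are placeholders, while the input and output variable names match the ones used later in this post.

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "name": "create-github-app-token",
  "friendlyName": "Create GitHub App installation token",
  "category": "Utility",
  "version": { "Major": 1, "Minor": 0, "Patch": 0 },
  "inputs": [
    {
      "name": "githubAppConnection",
      "type": "connectedService:githubappauthentication",
      "label": "GitHub App service connection",
      "required": false
    }
  ],
  "outputVariables": [
    { "name": "installationToken", "description": "Short-lived GitHub installation token" },
    { "name": "tokenExpiration", "description": "Token expiration date, in ISO 8601 format" },
    { "name": "installationId", "description": "Installation the token was generated for" }
  ],
  "execution": {
    "Node16": { "target": "index.js" }
  }
}
```

Note how the input references the custom endpoint type (githubappauthentication) contributed by the service connection, which is what lets the task read the stored credentials at runtime.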

I have created an extension sample that contains both the service connection as well as a custom task that generates a GitHub installation token for API calls. Since the extension is not published to the marketplace, you have to (privately) publish under your account, share it with your Azure DevOps enterprise or organization, and then install it on all organizations where you want to use the custom task.

If you choose this path, jump to the next section, as you are now ready to use the custom task in your pipeline.

Note: The sample includes both a GitHub Actions workflow and an Azure Pipelines YAML pipeline that builds and packages the extension as an Azure DevOps extension that can be published in the Azure DevOps marketplace.

Using the custom task in Azure Pipelines

The task supports receiving the private key as a string, as a file (to be combined with secure files), or preferably via a service connection (see input parameters).

Assuming you have a service connection named my-github-app-service-connection, let’s see how we can use the task to create a comment on a pull request in the GitHub repository that triggered the pipeline, using the GitHub CLI to call the GitHub API:

steps:
- task: create-github-app-token@1
  displayName: create installation token
  name: getToken
  inputs:
    githubAppConnection: my-github-app-service-connection

- bash: |
    pr_number=$(System.PullRequest.PullRequestNumber)
    repo=$(Build.Repository.Name)
    echo "Creating comment in pull request #${pr_number} in repository ${repo}"
    gh api -X POST "/repos/${repo}/issues/${pr_number}/comments" -f body="Posting a comment from Azure Pipelines"
  displayName: Create comment in pull request
  condition: eq(variables['Build.Reason'], 'PullRequest')
  env:
    GH_TOKEN: $(getToken.installationToken)

Running this pipeline will result in a comment being posted in the pull request:

Screenshot of a GitHub pull request snippet showing an Azure Pipelines Status check, and comment that reads 'Posting a comment from Azure Pipelines' written by our pipeline.

Pretty simple, right? The task will create an installation token using the service connection and export it as a variable, which can be accessed as getToken.installationToken (with getToken being the identifier of the step). It can then be used to authenticate against GitHub, in this case using the GitHub CLI command, which will take care of the API call and authentication for us (we could have also used curl or any other HTTP client).
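If you prefer not to depend on the GitHub CLI, the same comment can be posted with curl against the same REST endpoint. This sketch reuses the getToken step and pipeline variables from the example above:

```yaml
# Alternative to the gh step: call the GitHub REST API directly with curl.
- bash: |
    curl -sS -X POST \
      -H "Authorization: Bearer ${GH_TOKEN}" \
      -H "Accept: application/vnd.github+json" \
      "https://api.github.com/repos/$(Build.Repository.Name)/issues/$(System.PullRequest.PullRequestNumber)/comments" \
      -d '{"body":"Posting a comment from Azure Pipelines"}'
  displayName: Create comment in pull request (curl)
  condition: eq(variables['Build.Reason'], 'PullRequest')
  env:
    GH_TOKEN: $(getToken.installationToken)
```

The trade-off is that curl leaves authentication headers and error handling to you, whereas the GitHub CLI handles both.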

The task also exports other variables:

  • tokenExpiration: the expiration date of the generated token, in ISO 8601 format
  • installationId: the ID of the installation for which the token was generated

Unlocking powerful automation capabilities beyond basic CI/CD

By leveraging GitHub Apps for authentication, organizations can establish secure, scalable Azure Pipelines integrations that provide fine-grained permissions, short-lived tokens, and better security controls compared to traditional PATs.

The custom Azure DevOps extension approach provides a seamless integration experience that abstracts away the complexities of GitHub App authentication. Through service connections and custom tasks, pipeline authors can easily generate installation tokens without worrying about JWT generation, installation ID management, or token lifecycle concerns.

The streamlined approach also enables development teams to implement rich GitHub integrations, including automated status checks, pull request comments, issue management, security scanning integration, and deployment tracking. The result? A more cohesive development workflow where Azure Pipelines and GitHub work together seamlessly to provide comprehensive visibility and automation throughout the software development lifecycle.

Whether you’re looking to enhance your existing CI/CD processes or build entirely new automated workflows, the combination of Azure Pipelines and GitHub API through GitHub Apps provides a robust foundation for modern DevOps practices. This will allow you to enrich your existing pipelines with GitHub capabilities as you move your code from Azure Repos to GitHub.

Explore more blog posts covering a range of topics essential for enterprise software development >

The post How to streamline GitHub API calls in Azure Pipelines appeared first on The GitHub Blog.

Cloud migration made easy: introducing GitHub Enterprise Importer https://github.blog/enterprise-software/automation/cloud-migration-made-easy-introducing-github-enterprise-importer/ Mon, 12 Jun 2023 16:04:24 +0000 https://github.blog/?p=72386 With GitHub Enterprise Importer, you can seamlessly move to GitHub Enterprise Cloud, bringing your code and collaboration history with you so your team doesn’t miss a beat.

The post Cloud migration made easy: introducing GitHub Enterprise Importer appeared first on The GitHub Blog.

If you want to move to GitHub.com and benefit from all of the great features developers love—from GitHub Actions to GitHub Codespaces—you’ll have existing data that you want to bring with you.

GitHub already offers a range of tools and services—from GitHub Actions Importer to Expert Services for companies planning complex migrations—to help teams to migrate from other platforms to GitHub so that they can hit the ground running quickly.

Today, we’re launching GitHub Enterprise Importer, a self-serve tool which empowers teams to migrate their code, history, and collaboration context to GitHub Enterprise Cloud.

Introducing GitHub Enterprise Importer

GitHub Enterprise Importer (GEI) enables high-fidelity, self-serve migrations to GitHub Enterprise Cloud and GitHub.com.

GitHub Enterprise Importer migrates your code, but the code is the easy bit—it also brings all your conversations and collaboration history with you. That means things like pull requests, reviews and comments. This is a game changer when you need to understand not just the history of your code, but the “why” behind that history.

We’re publicly launching GitHub Enterprise Importer today—but already, it has been used by over 2,000 customers to migrate more than 400,000 repositories to GitHub Enterprise Cloud.

Migrating our code with GitHub Enterprise Importer was frictionless. We quickly moved 5,300+ repos from GitHub Enterprise Server to GitHub Enterprise Cloud.

- Srini Raghavan / Director of Software Engineering, GSK

Migrating with GitHub Enterprise Importer

With GitHub Enterprise Importer, you can migrate from GitHub Enterprise Server, Azure DevOps, Bitbucket Data Center, and Bitbucket Server to GitHub Enterprise Cloud and GitHub.com—plus GEI can be used by existing GitHub.com customers to adopt Enterprise Managed Users (EMUs).

You can run migrations from our simple command line interface (CLI). The average repository takes just 70 seconds to migrate, and the CLI offers tools to help you to migrate large numbers of repositories in bulk. Once your migration finishes, the CLI reports back its status, including any warnings pointing to data which couldn’t be migrated.

As well as the CLI, we also offer a fully-featured API for advanced automations, giving you even more control.

Learn more about what data GitHub Enterprise Importer can migrate and how to use it in our documentation.

Planning your migration

We know that a successful migration isn’t just about tools—planning and preparation is what really makes the difference.

In recognition of that, we’ve published a new guide on how to plan your migration to GitHub. Even if you’re not on a migration path supported by GitHub Enterprise Importer, these docs will show you what you need to do, step by step.

For large and complex migrations, we know that many organizations want tailored support. The GitHub Expert Services Team offers hands-on support from migrations experts, taking the stress out of planning and executing migrations.

Choosing one source code management tool

No one loves migrating between tools—but consolidating to use a single source code management tool can reduce complexity and bring about a step change in developer happiness, productivity, and security.

Travelport recently migrated its complex DevOps toolchain to GitHub Enterprise Cloud, migrating over 6,000 repositories to GitHub Enterprise Cloud and adopting GitHub Actions at scale.

GitHub Enterprise Importer has been a godsend. It has given us a smooth path to migrating all our repositories. Without GEI, we wouldn't have been able to get our engineering teams to migrate, period. They would have put it off indefinitely so it wouldn’t disrupt their workflows. With GEI, we could move a large group of repositories very quickly so the teams only needed to plan for a few hours of downtime at most.

- Michael Oubre / Director of Engineering Excellence at Travelport

Instead of disrupting work for months on end, GitHub Enterprise Importer allowed the team to automate the process in just a few days. Travelport moved more than 4,000 repositories, 200 teams, and 1,500 developers from its on-premises GitHub Enterprise Server to GitHub Enterprise in the cloud. You can read more about their story in our case study.

Migrating from Bamboo Server and Data Center with GitHub Actions Importer

Our commitment to a seamless migration experience goes beyond migrating repos. In March, we launched GitHub Actions Importer, a tool to plan, forecast, and automate the migration of CI/CD pipelines to GitHub Actions. To date, thousands of CircleCI, GitLab, Jenkins, Azure DevOps, and Travis CI users have used GitHub Actions Importer to migrate their workflows to GitHub Actions.

Today, we’re also announcing a public beta enabling migrations from Atlassian’s Bamboo Server and Data Center products with GitHub Actions Importer. This makes it easy and free to migrate your Bamboo pipelines to GitHub Actions.

Head over to our documentation to get started. As always, we would love to hear from you. You can share your feedback on how we can improve GitHub Actions Importer by posting here.

Get started

To get started with migrations from GitHub.com, GitHub Enterprise Server or Azure DevOps, simply follow the instructions in our documentation.

If you’re looking to migrate from Bitbucket Server or Data Center, you can register for our beta program, and we’ll be in touch soon. With Atlassian having announced that they are ending support for Bitbucket Server in February 2024, it’s the perfect time to migrate.

Want to learn more about GitHub Enterprise? Get in touch with our sales team—we’ll be happy to help.

Dependabot Updates hit GA in GHES https://github.blog/enterprise-software/automation/dependabot-updates-hit-ga-in-ghes/ Thu, 09 Jun 2022 20:47:19 +0000 https://github.blog/?p=65600 Dependabot is generally available in GitHub Enterprise Server 3.5. Here is how to set up Dependabot on your instance.

The post Dependabot Updates hit GA in GHES appeared first on The GitHub Blog.

Dependabot updates are now generally available in GitHub Enterprise Server 3.5 🎉! Dependabot alerts have been available on GitHub Enterprise Server (GHES) for years, but support for Dependabot updates—the ability to update dependencies automatically by opening pull requests—has been a long-standing feature request from GHES customers.

How to enable Dependabot on your GitHub Enterprise Server instance

As a quick refresher, Dependabot consists of three services:

  • Dependabot alerts: alerts you the moment vulnerabilities in your dependencies are detected
  • Dependabot security updates: upgrades a dependency to the next non-vulnerable version when an exposure is detected by opening a pull request to your repository
  • Dependabot version updates: opens pull requests to keep all your dependencies up to date, decreasing your exposure to vulnerabilities and your likelihood of getting stuck on an outdated version

In this post, we’ll walk through the steps for enabling Dependabot on your enterprise server; more detailed reference material can be found in GitHub Docs.

Prerequisites

To enable Dependabot to update your dependencies:

  • Have an instance of GitHub Enterprise Server running 3.5 or higher.
    • Note: Though Dependabot updates are available on 3.3 and 3.4, 3.5 is the lowest recommended version for general availability support.
  • Enable GitHub Actions on your GitHub Enterprise Server.
    • Note: GitHub Actions is not supported on cluster configurations at this time, so Dependabot is not supported on clustered environments.
  • Set up one or more Linux virtual machines for the self-hosted Actions runners. These runners will be responsible for running the logic behind Dependabot.
  • Set up and configure self-hosted Actions runners with the `dependabot` label.
    • Note: Customers on air-gapped setups are restricted, since the runners for Dependabot updates require internet access.

Setting up GitHub Actions runners

To configure a GitHub Actions runner, follow the steps outlined in our documentation. GitHub Enterprise Server will provide you with a set of commands to run on your virtual machine that add self-hosted runners to the runner pool.

When configuring a runner for Dependabot, you must configure it with the dependabot label to indicate that the runner is available for Dependabot update jobs. You’ll be prompted to add a label and a group after running ./config.sh in your terminal.

Optionally, you can use a runner group to manage your Dependabot runners. If you use repository- or organization-level runner groups, make sure each runner is part of the Dependabot runner group and that the relevant repositories have access to that group.

Enable Dependabot

In the site admin Management Console, under Security, enable the following features. Please note that enabling Dependabot on your server requires brief downtime, during which your server will be restarted.

  • Enable the dependency graph.
  • Enable Dependabot updates.

For more information, and for step-by-step instructions, see: Enabling Dependabot for your enterprise and Enabling the dependency graph for your enterprise in the docs.

Navigate to GHES settings, then GitHub Connect:

  • Enable GitHub Connect for your GHES instance.
  • Make sure both Dependabot settings are enabled.

For more information, see “Managing GitHub Connect.”

Test Dependabot updates on a repository

Navigate to a test repository, and go to Settings, then Security and Analysis. Enable Dependabot security updates. You may also configure Dependabot security updates at the organization level by following the same pattern.

If you would like to enable Dependabot version updates, you will need to add a configuration file to each repository you want kept up-to-date.
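That configuration file is .github/dependabot.yml. A minimal example (the ecosystem and schedule here are just one possible choice) that keeps pip dependencies in the repository root up to date weekly:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```

Additional `updates` entries can be added for other ecosystems (npm, bundler, docker, and so on) in the same file.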

You can test that Dependabot security updates are working properly by introducing a known bad dependency, for example, by adding a requirements.txt file with the following content.

requirements.txt:

pillow>=2.4.0,<5.3.1

Committing this file will trigger your repository to receive both Dependabot alerts and Dependabot security updates.

See Configuring Dependabot version updates – GitHub Docs and About Dependabot security updates – GitHub Docs.

How Dependabot on GHES differs from GitHub.com

Dependabot creates pull requests by analyzing the available versions of your dependencies and calculating the lowest secure version that you should run. On GitHub.com, Dependabot runs this analysis using internal infrastructure developed before GitHub Actions was a product. With GHES appliances, we needed to find a way to process these containers so they would not impact the availability of your GitHub Enterprise Server instance, would play nicely with a variety of GHES instances, and would also plug into a repo’s CI. This is where Actions comes in. The images that create update pull requests are packaged using GitHub Actions. Find out how this rearchitecture came to life here.
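As a toy illustration of that “lowest secure version” idea (this is not Dependabot’s actual implementation, just the concept), using RubyGems’ built-in version comparison:

```ruby
require "rubygems"

# Toy illustration only -- NOT Dependabot's real algorithm. Given the list of
# available versions and the first patched (non-vulnerable) release, return
# the lowest available version that is at least the patched one.
def lowest_secure_version(available, first_patched)
  patched = Gem::Version.new(first_patched)
  candidates = available.map { |v| Gem::Version.new(v) }.select { |v| v >= patched }
  candidates.min&.to_s
end

puts lowest_secure_version(["5.2.0", "5.3.0", "5.3.1", "6.0.0"], "5.3.1")
# prints 5.3.1
```

Picking the lowest qualifying version, rather than the newest, keeps the resulting pull request as small and low-risk as possible.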

Having Dependabot updates run in GitHub Actions means you get the niceties that come with Actions. For example, you can watch Dependabot create pull requests and monitor the logs just like you would with other actions!

Please note that you cannot currently rerun jobs as you would with other Actions. Also, Dependabot on GHES only works with self-hosted runners.

Learn more about Dependabot and GHES

Here are some of the links used in this post to walk you through more detailed steps of enabling Dependabot on your Enterprise instance:

We hope you are as excited about Dependabot on GHES as we are! We are eager to hear what you think about the experience, and welcome comments or questions in the feedback discussion.

One developer’s journey bringing Dependabot to GitHub Enterprise Server https://github.blog/enterprise-software/automation/one-developers-journey-bringing-dependabot-to-github-enterprise-server/ Tue, 07 Jun 2022 19:55:45 +0000 https://github.blog/?p=65520 A personal story about building the feature you want and sharing it with the world.

The post One developer’s journey bringing Dependabot to GitHub Enterprise Server appeared first on The GitHub Blog.

If you’re like me, you’re still excited by last week’s news that Dependabot is generally available on GitHub Enterprise Server (GHES). Developers using GHES can now let Dependabot secure their dependencies and keep them up-to-date. You know who would have loved that? Me at my last job.

Before joining GitHub, I spent five years working on teams that relied on GHES to host our code. As a GHES user, I really, really wanted Dependabot. Here’s why.

🤕 Dependencies

One constant pain point for my previous teams was staying on top of dependencies. Creating a Rails project with rails new results in an app with 74 dependencies, Django apps start with 88 dependencies, and a project initialized with Create React App will have 1,432 dependencies!

Unfortunately, security vulnerabilities happen, and they can expose your customers to existential risk, so it’s important they are handled as soon as they’re published.

As I’m most familiar with the Ruby ecosystem, I’ll use Nokogiri, a gem for parsing XML and HTML, to illustrate the process of manually resolving a vulnerability. Nokogiri has been a dependency of every Rails app I’ve maintained. It’s also seen seven vulnerabilities since 2019. To fix these manually, we’ve had to:

  • Clone `my_rails_app`
  • Track down and parse the Nokogiri release notes
  • Patch Nokogiri in `my_rails_app` to a non-vulnerable version
  • Push the changes and open a pull request
  • Wait for CI to pass
  • Get the necessary reviews
  • Deploy, observe, and merge
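The “patch” step in that list usually comes down to raising the version constraint in the Gemfile and running bundle update nokogiri. The 1.10.8 floor below is purely illustrative, not a specific advisory’s number:

```ruby
# Gemfile -- require a non-vulnerable Nokogiri release
gem "nokogiri", ">= 1.10.8"
```

Every other step in the list (review, CI, deploy) still has to happen for each repository that carries the dependency.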

This is just one of (at least) 74 dependencies in one Rails app. My team maintained 14 Rails apps in our microservices-based architecture, so we needed to repeat the process for each app. A single vulnerability would eat up days of engineering time. That’s just one dependency in one ecosystem. We also worked on apps written in Elixir, Python, JavaScript, and PHP.

If an engineer was patching vulnerabilities, they couldn’t pursue feature work, the thing our customers could actually see. This would, understandably, lead to conversations about which vulnerabilities were most likely to be exploited and which we could tolerate for now.

If we had Dependabot security updates, that process would have started with a pull request. What took an engineer days to complete on their own could have been done before lunch.

We could have invested in keeping all of our dependencies up-to-date. Incremental upgrades are typically easier to perform and pose less risk. They also give bad actors less time to find and exploit vulnerabilities. One of my previous teams was still running Rails 3.2, which was no longer maintained when Rails 6 was released six years later. As support phased out, we had to apply our own security patches to our codebase instead of getting them from the framework. This made upgrading even harder. We spent years trying to get to a supported version, but other product priorities always won out.

If my team had Dependabot version updates, Dependabot would have opened pull requests each time a new version of Rails was released. We’d still need to make changes to ensure our apps were compliant with the new versions, but the changes would be made incrementally, making the lift much lighter. But we didn’t have Dependabot. We had to upgrade manually, and that meant upgrading didn’t happen until it became a P0.

A new home

I joined GitHub in 2021 to work on Dependabot. Being intimately familiar with the challenges Dependabot could help address, I wanted to be part of the solution. Little did I know, the team was just starting the process of bringing Dependabot to GHES. Call it serendipity, a dream come true, or tea leaves arranged just so.

I quickly realized why Dependabot wasn’t already on GHES. GitHub acquired Dependabot in 2019, and it took some time to scale Dependabot to be able to secure GitHub’s millions of repositories. To achieve this, we ported the service’s backend to run on Moda, GitHub’s internal Kubernetes-based platform. The dependency update jobs that result in pull requests were updated to run on lightweight Firecracker VMs, allowing Dependabot to create millions of pull requests in just hours. It was an impressive effort by a small team.

That effort, however, didn’t lend itself to the architecture of GHES, where everything runs on a single server with limited resources. An auto-scaling backend and network of VMs wasn’t an option. Instead, we needed to port Dependabot’s backend to run on Nomad, the container orchestration option on GHES. The jobs running on Firecracker VMs needed to run on our customers’ hardware. Fortunately, organizations can self-host GitHub Actions runners in GHES, so we adapted them to run on GitHub Actions. We also had to adjust our development processes to support continuous delivery in the cloud and less frequent GHES releases.

The result is that developers relying on GHES now have the option to have their dependencies updated for them. Now, my former teammates can update their dependencies by:

  • Viewing the already opened pull request
  • Reviewing the pull request and the included release notes
  • Deploying, observing, and merging

We’re really proud of that. As for me, I get the immense satisfaction of knowing that I built something that will directly benefit my former teammates. It doesn’t get much better than that!

Guess what? GitHub is hiring. What would you like to make better?

If you’re inspired to work at GitHub, we’d love for you to join us. Check out our Careers page to see all of our current job openings.

  • Dedicated remote-first company with flexible hours
  • Building great products used by tens of millions of people and companies around the world
  • Committed to nurturing a diverse and inclusive workplace
  • And so much more!

Promote consistency across your organization with workflow templates https://github.blog/enterprise-software/automation/promote-consistency-across-your-organization-with-workflow-templates/ Mon, 22 Jun 2020 15:00:05 +0000 https://github.blog/?p=53148 Now you can create custom workflow templates to promote best practices and consistency across your organization.

The post Promote consistency across your organization with workflow templates appeared first on The GitHub Blog.

Workflow templates make it easy for people to get started with GitHub Actions. They’re presented whenever you create a new GitHub Actions workflow, and provide examples of CI/CD across many different languages as well as general automation. Now enterprises and OSS teams can define custom workflow templates for their organization.

Screenshot showing a series of workflows created by Waldocat

Creating workflow templates

Workflow templates extend GitHub’s config-as-code capabilities. Like issue templates, workflow templates are defined in a .github repository, enabling you to leverage all the power of GitHub’s collaborative capabilities and providing full auditability. Each template is defined through a YAML file that looks just like the workflow YAML files you’re used to. In addition, a properties file specifies template-specific metadata, such as a description and icon.
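As a sketch (file name and contents illustrative), a template in the organization’s .github repository might look like:

```yaml
# .github/workflow-templates/org-ci.yml
name: Organization CI
on:
  push:
    branches: [ $default-branch ]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make test
```

Here $default-branch is a placeholder that GitHub substitutes with the repository’s default branch when a user adopts the template.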

Matching workflow templates to repositories

You can optionally define rules for which repositories are a best fit for the template. While a user can see all the templates defined at the organization, workflows that are matched to their repository will be promoted as “Suggested”.

For example, you can specify a template as being appropriate for Go projects. Then when a user goes to add a workflow to a Go repository this template will appear in the “Suggested” section of the GitHub interface.

You can also define regular expressions to match files in the repository’s root directory. For example, if you set “filePatterns”: “action.yml” then your template will be matched to repositories that contain either JavaScript or Docker container actions.
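Both matching hints live in the template’s properties file, alongside the workflow YAML. A sketch, with illustrative name, description, and icon values:

```json
{
    "name": "Go CI",
    "description": "Suggested CI workflow for this organization's Go projects",
    "iconName": "go-icon",
    "categories": ["Go"],
    "filePatterns": ["action.yml$"]
}
```

A repository that matches either the language category or a file pattern will see the template promoted in its “Suggested” section.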

Using a workflow template

A workflow template can be used by any repository within the organization for enterprise accounts, or any public repository across all GitHub plans.

Workflow templates that match the repository appear in “Suggested”, and all others appear in a section for the organization.

Screenshot showing matching workflows

After clicking ‘Set up this workflow’, you will see the workflow template and can customize it to meet your requirements.

Screenshot showing editing a workflow file

Workflow templates can be shared with any public repository within the organization. They can also be shared with private repositories if the organization is part of an enterprise or GitHub One plan.

Learn more about organization workflow templates

Automating MySQL schema migrations with GitHub Actions and more https://github.blog/enterprise-software/automation/automating-mysql-schema-migrations-with-github-actions-and-more/ Fri, 14 Feb 2020 22:56:00 +0000 https://github.blog/?p=52037 In this deep dive, we cover how our daily schema migrations amounted to a significant toil on the database infrastructure team, and how we searched for a solution to automate the manual parts of the process.

The post Automating MySQL schema migrations with GitHub Actions and more appeared first on The GitHub Blog.

In the past year, GitHub engineers shipped GitHub Packages, Actions, Sponsors, Mobile, security advisories and updates, notifications, code navigation, and more. Needless to say, the development pace at GitHub is accelerated.

With MySQL serving our backends, updating code requires changes to the underlying database schema. New features may require new tables, columns, changes to existing columns or indexes, dropping unused tables, and so on. On average, we have two schema migrations running daily on our production servers. Some days we have a half dozen migrations to run. We’ll cover how this amounted to a significant toil on the database infrastructure team, and how we searched for a solution to automate the manual parts of the process.

What’s in a migration?

At first glance, migrating appears to be no more difficult than adding a CREATE, ALTER or DROP TABLE statement. At a closer look, the process is far more complex, and involves multiple owners, platforms, environments, and transitions between those pieces. Here’s the flow as we experience it at GitHub:

1. Starting the process

It begins with a developer who identifies the need for a schema change. Maybe they need a new table, or a new column in an existing table. The developer has a local testing environment where they can experiment however they like, until they’re satisfied and wish to apply changes to production.

2. Feedback and review

The developer doesn’t just apply their changes online. First, they seek review and discussion with their peers. Depending on the change, they may ask for a review from a group of schema reviewers (at GitHub, this is a volunteer group experienced with database design). Then, they seek the agreement of the database infrastructure team, who owns the production databases. The database infrastructure team reviews the changes, looking for performance concerns, among other potential issues. Assuming all reviews are favorable, it’s on the database infrastructure engineer to deploy the change to production.

3. Taking the change to production

At this point, we need to determine where the change is taking place since we have multiple clusters. Some of them are sharded, so we have to ask: Where do the affected tables exist in our clusters or schemas? Next, we need to know what to run. The developer presented the schema they want to see in production, but how do we transition the existing production schema into the one requested? What’s the formal CREATE, ALTER or DROP statement? Following what to run, we need to know how we should run the migration. Do we run the query directly? Or is it a blocking operation and we need an online schema change tool? And finally, we need to know when to execute the migration. Perhaps now is not a good time if there’s already a migration running on the cluster.
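For example (the table and column names here are invented), if production already has a repos table and the developer’s desired schema adds a description column, the transition statement the team must derive and review is:

```sql
-- The developer declares the desired end state; the team derives the delta:
ALTER TABLE repos
  ADD COLUMN description VARCHAR(255) DEFAULT NULL;
```

On a large table, whether a statement like this can run directly or needs an online schema change tool is exactly the “how” question above.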

4. Migration

At long last, we’re ready to run the migration. Some of our larger tables may take hours and even days to migrate, especially since the site needs to be up and running. We want to track status. And we want to see what impact the migration may have on production, or, preferably, to ensure it does not have an impact.

5. Completing the process

Even once the migration completes, there are further steps to take. There’s a cleanup process, and we want to unblock the next migration, if one is waiting. The database infrastructure team wants to notify the developer that the changes have taken place, and the developer will have their own follow-up to address.

Throughout that flow, there’s a lot of potential for friction:

  • Does the database infrastructure team review the developer’s request in a timely fashion?
  • Is the review process productive?
  • Do we need to wait for something before running the migration?
  • Is the database infrastructure engineer actually available to run the migration, or perhaps they’re busy with other tasks?

The database infrastructure engineer needs to either create or review the migration statement, double-check their logic, ensure they can begin the migration, follow up, unblock other migrations as needed, advertise progress to the developer, and so on.

With our volume of daily migrations, this flow sometimes consumed hours of a database infrastructure engineer’s time per day, and, in the best-case scenario, at least several hours of work per week. They would frequently multitask between two or three migrations and keep mental notes for next steps. Developers would ping us to ask what the status was, and their work was sometimes blocked until the migration was complete.

A brief history of schema migration automation at GitHub

GitHub was originally created as a Ruby on Rails (RoR) app. Like other frameworks, and in particular, those using Active Record, RoR has a built-in mechanism to generate database schema from code, as well as programmatically express migrations. RoR tooling can analyze code changes and create and run the SQL statements to change the database schema.

We use the GitHub flow to manage our own development: when suggesting a change, we create a branch, commit, push, and open a pull request. We use the declarative approach to schema definition: our RoR GitHub repository contains the full schema definition, such as the CREATE TABLE statements that generate the complete schema. This way, we know exactly what schema is associated with each commit or branch. Contrast that with the programmatic approach, where your commits contain migration statements, and where, to deduce a schema, you need to start at some baseline and run through all statements sequentially.

The database infrastructure and the application teams collaborated to create a set of chatops tooling. We ran a chatops command to list pull requests with schema changes, and then another command to generate the CREATE/ALTER/DROP statement for a given pull request. For this, we used RoR’s rake command. Our wrapper scripts then added meta information, like which cluster is involved, and generated a script used to run the migration.

The generated statements and script were mostly fine, with occasional SQL syntax errors. We’d review the output and fix it manually as needed.

A few years ago we developed gh-ost, an online table migration solution, which added even more visibility and control through our chatops. We’d check progress, change runtime configuration, and cut-over the migration through chat. While simple, these were still manual steps.

The heart of GitHub’s app remains the same RoR codebase, but we’ve expanded far beyond it. We created more repositories: some also use RoR, while others are written in other programming languages, such as Go. However, we didn’t use the Object Relational Mapping practice with the new repositories.

As GitHub expanded, so did the toil for the database infrastructure team. We’d review pull requests, compare schemas, generate migration statements manually, and verify on a local machine. Other than the git log, no formal tracking for schema migrations existed. We’d check in chat, issues, and pull requests to see what was done and what wasn’t. We’d keep track of ongoing migrations in our heads, context switch between migrations throughout the day, and get interrupted by notifications more often than we’d like. And we did this while taking each migration through the next step, keeping mental notes, and communicating progress to our peers.

With these steps in mind, we wanted a solution to automate the process. We came up with various ideas, and in 2019 GitHub Actions was released. This was our solution: multiple loosely coupled components, each owning a specific aspect of the flow, all orchestrated by a controller service. The next section covers the breakdown of our solution.

Code

Our basic premise is that schema design should be treated as code. We want the schema to be versioned, and we want to know which schema is associated with which version of our code.

To illustrate, GitHub provides not only github.com, but also GitHub Enterprise, an on-premises solution. On github.com, we run continuous deployments. With GitHub Enterprise, we make periodic releases, and our customers can upgrade in-house. This means we need to be able to reproduce any schema change we make to github.com on a customer’s Enterprise server.

Therefore we must keep our schema design coupled with the code in the same git repository. For a developer to design a schema change, they need to follow our normal development flow: create a branch, commit, push, and open a pull request. The pull request is where code is reviewed and discussion takes place for any changes. It’s where continuous integration and testing run. Our solution revolves around the pull request, and this is standardized across all our repositories.

The change

Once a pull request is opened, we need to identify the changes we’d like to make. Typically, when we review code changes, we look at the diff, and it might be tempting to expect that git diff can help us formalize the schema change. Unfortunately, git diff is poor at identifying these changes. For example, consider this simplified table definition:

CREATE TABLE some_table (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  hostname varchar(128) NOT NULL,
  PRIMARY KEY (id),
  KEY (hostname)
);

Suppose we decide to add a new column and drop the index on hostname. The new schema becomes:

CREATE TABLE some_table (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  hostname varchar(128) NOT NULL,
  time_created TIMESTAMP NOT NULL,
  PRIMARY KEY (id)
);

Running git diff on the two schemas yields the following:

@@ -1,6 +1,6 @@
 CREATE TABLE some_table (
   id int(10) unsigned NOT NULL AUTO_INCREMENT,
   hostname varchar(128) NOT NULL,
-  PRIMARY KEY (id),
-  KEY (hostname)
+  time_created TIMESTAMP NOT NULL,
+  PRIMARY KEY (id)
 );

The pull request’s “Files changed” tab shows the same:

This is a sample Pull Request where we change a table's schema. git diff does a poor job of analyzing the schema change.

See how the PRIMARY KEY line goes into the diff because of the trailing comma. This diff does not capture the schema change well, and while RoR provides tooling for that, we still had to review its output carefully. Fortunately, there’s a good MySQL-oriented tool for the task.

skeema

skeema is an open source schema management utility developed by Evan Elias. It expects the declarative approach, and looks for a schema definition on your file system (hopefully as part of your repository). The file system layout should include a directory per schema/database, a file per table, and then some special configuration files telling skeema the identities of, and the credentials for, MySQL servers in various environments. Skeema is able to run useful tasks, such as:

  • skeema diff: generate SQL statements that convert the existing database schema into the schema defined in the file system. This includes as many CREATE, ALTER and DROP TABLE statements as needed.
  • skeema push: actually apply the changes to the database server so its schema matches the one on the file system.
  • skeema pull: rewrite the file system schema based on the existing schema in the MySQL server.
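
To make this concrete, here’s a hypothetical layout skeema might work against. The directory and file names below are illustrative, not GitHub’s actual repository structure:

```
schemas/
└── test/                 # one directory per schema/database
    ├── .skeema           # environment config: hosts, ports, credentials
    └── some_table.sql    # one CREATE TABLE statement per table
```

With a layout like this in place, running skeema diff from the schema directory compares the definitions on disk against the live environment named in the .skeema configuration file.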

skeema can do much more, including the ability to invoke online schema change tools—but that’s outside this post’s scope.

Git users will feel comfortable with skeema. Indeed, skeema works very well with git-versioned schemas. For us, the most valuable asset is its diff output: a well-formed, reliable set of statements that show the SQL transition from one schema to another. For example, the skeema diff output for the above schema change is:

USE `test`;
ALTER TABLE `some_table` ADD COLUMN `time_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, DROP KEY `hostname`;

Note that the above is not only correct, but also formal. It reproduces correctly whether our code uses lower/upper case, includes/omits default value, etc.

We wanted to use skeema to tell us what statements we needed to run to get from our existing state into the state defined in the pull request. Assuming the master branch reflects our current production schema, this now becomes a matter of diffing the schemas between master and the pull request’s branch.

Skeema wasn’t without its challenges, and we had to figure out where to place skeema from a design perspective. Do the developers own it? Does every repository own it? Is there a central service to own it? Each presented its own problems, from false ownership to excessive responsibilities and access.

GitHub Actions

Enter GitHub Actions. With Actions, you’re able to run code as a response to events taking place in your repository. A new pull request, review, comment, issue, and quite a few others, are such events. The code (the action) is arbitrary, and GitHub spawns a container on its own infrastructure, where your code will run. What makes this extra interesting is that the container can get access to your repository. GitHub Actions implicitly receives an API token to interact with the repository.

The container comes with popular software packages pre-installed, such as a MySQL server.

Perhaps the most classic use of Actions is CI/CD. When a pull_request event occurs (a new pull request or any subsequent commit), the workflow runs code to build, test, lint, or validate the change. We took this approach to run skeema as part of a pull_request action flow, called skeema-diff.

Here’s a simplified breakdown of the action:

  1. Fetch skeema binary
  2. Checkout master branch
  3. Run skeema push to populate the container’s MySQL server with the schema as defined by the master branch
  4. Checkout pull request’s branch
  5. Run skeema diff to generate the statements that take the schema from the one in MySQL (remember, this is the master schema) to the one in the pull request’s branch
  6. Add the diff as a comment in the pull request
  7. Add a special label to indicate this pull request has a schema change
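
A simplified workflow implementing those steps might look roughly like this. This is a hypothetical sketch, not GitHub’s actual skeema-diff action; the release URL, the environment name, and the final commenting step are placeholders:

```yaml
# Hypothetical sketch of the skeema-diff flow; not GitHub's actual action.
name: skeema-diff
on: pull_request
jobs:
  skeema-diff:
    runs-on: ubuntu-latest
    steps:
      - name: Check out base branch
        uses: actions/checkout@v2
        with:
          ref: ${{ github.base_ref }}
      - name: Fetch skeema binary
        run: curl -fsSL "$SKEEMA_RELEASE_URL" | tar xz && sudo mv skeema /usr/local/bin/
      - name: Populate MySQL with the base schema
        run: |
          sudo service mysql start
          skeema push local        # "local" environment defined in .skeema
      - name: Check out pull request branch
        run: git fetch origin "$GITHUB_HEAD_REF" && git checkout FETCH_HEAD
      - name: Generate the diff
        run: skeema diff local > skeema-diff.sql || true   # nonzero exit when diffs exist
      # Final steps (posting skeema-diff.sql as a PR comment and adding the
      # migration:skeema:diff label via the GitHub API) are omitted here.
```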

The GitHub Action, running skeema, generates schema diff output, which is added as a comment to the Pull Request. The comment presents the correct ALTER statement implied by the code change. This comment is both human and machine readable.

The code is more complex than what we’ve shown. We actually use base and head instead of master and branch, and there’s some logic to formalize, edit and validate the diff, to handle commits that further change the schema, among other processes.

By now, we have a partial flow, which works entirely on GitHub’s platform:

  • Schema change as code
  • Review process, based on GitHub’s pull request flow
  • Automated schema change analysis, based on skeema running in a GitHub Action
  • A visible output, presented as a pull request comment

Up to this point, everything is constrained to the repository. The repository itself doesn’t have information about where the schema gets deployed in production. That information is outside the repository’s scope, and it’s owned by the database infrastructure team rather than the repository’s developers. Neither the repository nor any action running on it has access to production, nor should they, as that would breach the separation of domains.

Before we describe how the schema gets to production, let’s jump ahead and discuss the schema migration itself.

Schema migrations and gh-ost

Even the simplest schema migration isn’t simple. We are concerned with three types of table migrations:

  • CREATE TABLE is the simplest and the safest. We created something that didn’t exist before, and its creation time is instantaneous. Note that if the target cluster is sharded, this must be applied on all shards. If the cluster is sharded with vitess, then the vitess vtgate service automatically handles this for us.
  • DROP TABLE is a simple statement that comes with a great risk. What if it’s still in use and some code breaks as a result of the table going away? Note that we don’t actually drop tables as part of schema migrations. Any DROP TABLE statement is converted into a RENAME TABLE. Instead of DROP TABLE repositories (whoops!), our automation runs RENAME TABLE repositories TO _repositories_DROP_20200101123456. If our application fails because of this, we have an instant revert command: RENAME back to the original. Renamed tables are kept around for a few days prior to being garbage collected and dropped by our automation.
  • ALTER TABLE is the most complex case, mainly because it takes time to alter a table. We don’t actually ALTER tables in-place. We use gh-ost to emulate an ALTER TABLE, and the end result is the same even though the process is completely different. It doesn’t lock our apps, throttles as much as needed, and it’s controllable as well as auditable. We’ve run gh-ost in production for over three and a half years. It has little to no impact on production, and we generally don’t care that it’s running. But some of our larger tables may still take hours or even days to migrate. We also only run one ALTER (or, gh-ost) at a time on a cluster. Concurrent migrations are possible but compete over resources, leading to overall longer runtimes than sequential execution. This means that an ALTER migration requires scheduling. We need to be able to tell if a migration is already running on a cluster, as well as prioritize and queue migrations that apply to the same cluster. We also need to be able to tell the status over the duration of hours or days, and this needs to be communicated to the developer, the owner of the change. And, if the cluster is sharded, we need to run the migration per shard.
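
Spelled out in SQL, the DROP-as-RENAME convention from the list above looks like this, using the example table and timestamp from the text:

```sql
-- Our automation never drops the table directly:
RENAME TABLE `repositories` TO `_repositories_DROP_20200101123456`;

-- Instant revert if the application breaks:
RENAME TABLE `_repositories_DROP_20200101123456` TO `repositories`;

-- Days later, garbage collection finally drops the renamed table:
DROP TABLE `_repositories_DROP_20200101123456`;
```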

In order to run a migration, we must first determine the strategy for that migration (is it a direct query, gh-ost, or manual?). We need to be able to tell where it can run, how to go about the process if the cluster is sharded, as well as when to schedule it. While migrations can wait in queue while others are running, we want to be able to prioritize migrations, in case the queue is large.

skeefree

We created skeefree as the glue: an orchestrating service that’s aware of our repositories, communicates with our pull requests, knows about production (or can get information about it), and invokes the migrations. We run skeefree as a stateless Kubernetes service, backed by a MySQL database that holds the state. Note that skeefree’s own schema is managed by skeefree.

skeefree uses GitHub’s API to interact with pull requests, GitHub’s internal inventory and discovery services to locate clusters in production, and gh-ost to run migrations. Skeefree is best described by following a schema migration flow:

  1. A developer wishes to change the schema, so they open a pull request.
  2. The skeema-diff Action springs to life and seeks a schema change. If a schema change isn’t found in the pull request, nothing happens. If there is one, the Action computes the change via skeema, adds a well-formed comment to the pull request indicating the change, and adds a migration:skeema:diff label to the pull request. This is done via the GitHub API.
  3. A developer looks into the change, and seeks review from a team member. At this time they may communicate to team members without actually going to production. Finally, they add the label migration:for:review.
  4. skeefree is aware of the developer’s repository and uses the GitHub API to periodically look for open pull requests that are labeled with both migration:skeema:diff and migration:for:review, and that have been approved by at least one developer.
  5. Once detected, skeefree investigates the pull request, and reads the schema change comment, generated by the Action. It maps the schema/repository to the schema/production cluster, and uses our inventory and discovery services to know if the cluster is sharded. Then, it finds the location and name of the cluster.
  6. skeefree then adds this to its backend database, and advertises its analysis on the pull request with another comment. This comment generally means “here’s what I will do if you approve”. It then proceeds to seek review from an authority.
  7. For most repositories, the authority is the database-infrastructure team. On our original RoR repository, we also seek review from a cross-functional team, known as the db-schema-reviewers, who are familiar with the general application and database design throughout the years and who have more context to offer. skeefree automatically knows which teams should be notified on which repositories.
  8. The relevant teams review and hopefully approve. skeefree detects the approval and chooses the proper strategy: direct query for CREATE and for DROP (which, as noted above, is executed as a RENAME), and gh-ost for ALTER. It then queues the migration(s).
  9. skeefree’s scheduler periodically checks what can be executed next. Remember, we only run a single ALTER migration on a given cluster at a time, and we also have a limited number of runner hosts. If there’s a free runner host and the cluster isn’t running any migration, skeefree kicks off a migration and advertises this fact in a pull request comment, notifying the developer that the migration has started.
  10. Once the migration is complete, skeefree announces it in a pull request comment. The same applies should the migration fail.
  11. The pull request may also have more than one migration. Perhaps the cluster is sharded, or there may be multiple tables changed in the pull request. Once all migrations are successfully completed, skeefree advertises this in a pull request comment. The developer is notified that all migrations are done, and they’re encouraged to proceed with their standard deploy/merge flow.
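
To illustrate the scheduling constraint in step 9, here’s a minimal sketch in Python. The data shapes and the function name are invented for illustration; this is not skeefree’s actual code:

```python
# Hypothetical sketch: pick which queued migrations may start now,
# honoring skeefree's two constraints: at most one ALTER migration
# per cluster, and a limited pool of runner hosts.

def next_runnable(queue, running, free_runners):
    """Return queued migrations that can start immediately.

    queue:        migrations (dicts with a 'cluster' key), in priority order
    running:      set of cluster names that already have an active migration
    free_runners: number of idle runner hosts
    """
    busy = set(running)
    picked = []
    for migration in queue:
        if len(picked) >= free_runners:
            break  # no idle runner hosts left
        if migration["cluster"] in busy:
            continue  # this cluster is already migrating
        busy.add(migration["cluster"])
        picked.append(migration)
    return picked
```

A scheduler would run logic like this periodically, kicking off each picked migration and announcing it on the corresponding pull request.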

As skeefree runs the migrations, it adds comments on the pull request page to indicate its progress. When all migrations are complete, skeefree comments as much, again on the pull request page.

Analysis of the flow

There are a few nuances here that make for a good experience for everyone involved:

  • The database infrastructure team doesn’t know about the pull request until the developer explicitly adds the migration:for:review label. It’s like a draft pull request or a pull request that’s a work in progress, only this flag applies specifically to the schema migration flow. This allows the developer to use their preferred flow, and communicate with their team without interrupting the database infrastructure team or getting premature reviews.
  • The skeema analysis is contained within the repository, which means that no external service is required. The developer can check the diff result themselves.
  • The Action is the only part of the flow that looks at the code. Neither skeefree nor gh-ost look at the actual code, and they don’t need git access.
  • The database infrastructure team only needs to take a single step, which is review the pull request.
  • The developers own the creation of pull requests, getting peer reviews, and finally, deploying and merging. These are the exact operations that should be under their ownership. Moreover, they get visibility into the state of their migration. By looking at the pull request page or their GitHub notifications, they can tell whether the pull request has been reviewed, queued, started, completed, or failed. They don’t need to ask. Even better, we have chatops that give visibility into the overall state of migration queue, a running migration’s progress, and more. These chatops are available for all to invoke.
  • The database infrastructure team owns the process of mapping the repository schema to production. This is done via chatops, but can also be completed via configuration. The team is able to cancel a pull request, retry a failed migration, and more.
  • gh-ost is generally trusted, and we have control over a running migration. This means that we can force it to throttle, set a different throttle threshold, make it use fewer resources, or terminate it, if needed. We also have a throttling mechanism throughout our stack, so that long-running processes like migrations yield to higher-priority operations, extending their own runtime so they don’t generate too much load on our database servers.
  • We use our own preferred pull request flow, Actions (skeefree was an early adopter of Actions), the GitHub API, and our existing datacenter and database infrastructure, all of which are well understood internally.

Public availability

skeefree and the skeema-diff Action were authored internally at GitHub to solve a specific problem. skeefree uses our internal inventory and discovery services, it works with our chatops and uses some internal libraries.

Our experience in releasing open source software is that no one’s use case is exactly the same as ours. Our perception of an automated migrations flow may be very different from another organization’s perception. We still want to share more than just our words, so we’ve open sourced the code.

It’s a bit of a peculiar OSS release:

  • It’s missing some libraries; it will not build.
  • It expects some of our internal services to exist, which more than likely won’t exist on your platform.
  • It expects chatops, and you may not be using chatops.
  • The code also needs to be rewritten to adapt it to your environment.

Note that the code is available, but not open for issues and pull requests. We hope the community finds it useful.

Get the code

The post Automating MySQL schema migrations with GitHub Actions and more appeared first on The GitHub Blog.

Powering community-led innovation with GitHub Actions https://github.blog/enterprise-software/automation/powering-community-led-innovation-with-github-actions/ Thu, 14 Nov 2019 17:59:18 +0000 https://github.blog/?p=51064 As we celebrate Actions becoming generally available, check out some of the ways teams are contributing to Actions—and how you can start automating more of your workflow.

The post Powering community-led innovation with GitHub Actions appeared first on The GitHub Blog.

Last year at Universe, we released GitHub Actions, a new way for developers to automate workflows directly from their repositories. Actions are shareable, reusable, forkable, and infinitely customizable—just like any other code—and we’ve been amazed and humbled to watch the community build on each other’s work. As Actions becomes generally available this week, we’re reflecting on a big year for workflow automation.

An ever-expanding Actions Marketplace 

Since releasing Actions in November of last year, the community has contributed over 1,200 Actions to GitHub Marketplace. Developers can automate everything from Tweet collaboration to WordPress publishing, and even GitHub itself. Solutions from the community have also popped up for widely adopted products, including Vault, Datadog, and Jenkins. Automated environment setups for GitHub Actions in Node.js, Python, Java, Go, Ruby, PHP, and .NET have been contributed by community members as well.

GitHub partners have contributed enormously to Marketplace, so teams can extend and automate workflows with their existing tools. Some of the most popular Actions from GitHub partners include: 

  • Atlassian for automating JIRA 
  • Twilio for sending SMS messages in your workflow
  • Cloudflare for deploying a Cloudflare Worker
  • SonarCloud for scanning code quality 
  • JFrog for setting up and configuring the JFrog CLI
  • Mabl for automated functional testing

We’re looking forward to growing the Actions ecosystem with our partners even more and helping developers do what they do best: bring new ideas to life.

A faster path to container, Kubernetes, and serverless cloud deployments 

In August of this year, when we introduced Actions for CI/CD, one of the immediate asks from our users and customers was to further streamline the path from code to cloud. We’re excited to partner with Amazon Web Services, Google Cloud, and Microsoft Azure to ensure that teams are able to develop, deliver, and deploy to their cloud provider of choice directly from GitHub.

Amazon Web Services

AWS today announced a set of Actions that support Amazon ECS deployment and a Starter Workflow to build and deploy a container to an Amazon ECS service, powered by either AWS Fargate or Amazon EC2. This will allow Actions users to continuously deploy development or production workloads to Amazon ECS, directly from a GitHub repo, without additional tools or manual point-and-click steps. These Actions support both the serverless Fargate launch type and the EC2 launch type for users requiring more granular, server-level control.

Google Cloud

Google Cloud has released a repository for a library of Actions providing functionality for working with Google Cloud Platform. It includes an updated Action to configure the Google Cloud SDK for use in Actions workflows. The repository also includes a complete Google Kubernetes Engine example that employs a GKE Starter Workflow released today.

Microsoft Azure

Yesterday, Microsoft announced the general availability of GitHub Actions for Azure for creating workflows to package, release, and deploy apps to the cloud from GitHub. They include starter workflows for popular languages and frameworks, Actions for working with a variety of Azure services, from Web Apps to serverless Functions, as well as Azure SQL and MySQL databases. Microsoft has also released Actions for building and deploying container-based applications that work with Docker and Kubernetes on any environment and cloud, in addition to their managed Azure Kubernetes Service.

Actions in the enterprise

Businesses we work with are realizing the benefits of using GitHub Actions, including reduced build and ship times for applications, and increased efficiency for development teams. 

  • Pinterest has migrated their use of the Texture framework to Actions, allowing their teams to build, test, and deploy right from GitHub. They’ve reduced their build time by nearly 90%, from 80 minutes to only ten minutes, freeing developers to focus more on building applications. 
  • Decathlon reuses existing Actions and develops new ones to automatically create a release note and push to the Wiki page for its respective repository. “Everything is done automatically. We see Actions as an extension of our continuous integration,” says Alexandre Faria, Decathlon’s Lead Engineer for developer tools.
  • Dow Jones already uses Actions to automate a number of developer workflows. “GitHub Actions enables us to automate cybersecurity and governance at the earliest stages of product development. Where previously we would have required three servers to host and manage our pipeline, we are now able to replace them with a single Action,” says Sydney Sweeney, Lead Cyber Security Engineer.

Join the community

If you’re still new to Actions, there are a few ways to get started.

It’s been an incredible year working with developers, teams, and businesses to see what GitHub Actions can do in the real world. We’re thankful to everyone—from individual contributors to our largest enterprise customers—in developing the Actions ecosystem in the past year, and we can’t wait to see how the next year unfolds.

 

A thousand community-powered workflows using GitHub Actions https://github.blog/enterprise-software/automation/a-thousand-community-powered-workflows-using-github-actions/ Wed, 06 Nov 2019 21:00:53 +0000 https://github.blog/?p=50905 Celebrate a GitHub Action's milestone with highlights of a few key actions and a technology partner's work.

The post A thousand community-powered workflows using GitHub Actions appeared first on The GitHub Blog.

We recently celebrated an exciting milestone on the GitHub Actions team: 1,000 actions published to GitHub Marketplace.

If you haven’t browsed it yet, GitHub Marketplace is the home for shared actions that you can use to enhance your GitHub Actions workflows. Authors publish these actions to the marketplace to help you create more powerful workflows, whether you’re building an application, deploying it to a public cloud, or automating common tasks in your repository.

A few highlights

GitHub Actions is more than just a continuous integration build system. It also allows you to run workflows when changes occur in your GitHub repository. Here are some of our favorite actions that take advantage of this diversity—one for working with issues in your repository, one for working with deployments, and one for working with the code in pull requests.

Setup JFrog CLI

Continuous integration builds often need more than just your source code. You might need a particular piece of software installed on the build environment in order to run the build. Or you might build inside a container that’s pre-configured with all of your dependencies. The easiest way to manage these dependencies is with a package registry.

The Setup JFrog CLI action makes it easy to set up and use Artifactory as your package registry. It handles the setup of the CLI in your build environment and helps you configure authentication. All you have to do is set up the action, then you can jfrog rt download your artifacts and use them in your build.
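
A workflow fragment along these lines might look like the following sketch. The action version, the repository path, and the target directory are illustrative; the action’s Marketplace listing documents the real interface:

```yaml
# Hypothetical workflow fragment; the action's exact inputs and the
# credentials it reads are documented on its Marketplace page.
steps:
  - name: Set up the JFrog CLI
    uses: jfrog/setup-jfrog-cli@v1
  - name: Fetch build dependencies from Artifactory
    run: jfrog rt download "my-repo/build-deps/" deps/
    # "my-repo/build-deps/" and "deps/" are illustrative paths
```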

Close Stale Issues

In a busy repository, it’s easy to get overwhelmed by the number of issues and pull requests, especially when issues go stale. Even with our best efforts, it’s hard to keep track after a certain point, but closing out issues that don’t matter can help us focus on higher-priority or urgent requests and problems.

The Close Stale Issues action filters issues and pull requests that haven’t had any activity or comments for a few weeks. This action informs you when an issue is stale and—unless any new activity occurs—will close the issue or pull request a week later. This helps keep you informed about any issue that needs your attention, but you may have forgotten about. And it helps reduce the cognitive load of stale issues without any extra effort on your part.
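
A workflow using it might look roughly like this. The version, inputs, and thresholds below are illustrative; the action’s Marketplace listing documents the real interface:

```yaml
# Hypothetical configuration; check the action's documentation for
# current input names and defaults.
name: close-stale-issues
on:
  schedule:
    - cron: '0 0 * * *'   # run once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'This issue has been inactive for a while and will be closed soon.'
          days-before-stale: 21   # "a few weeks" of inactivity
          days-before-close: 7    # closed "a week later"
```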

image-actions

You may have heard that you only have a few seconds to make a good first impression. So don’t waste any of that time with unoptimized images—make sure your website loads quickly.

Thankfully, the image-actions from Calibre take care of optimization for you. Every time a pull request is opened, this action will search your repository for images that are large and unoptimized. Once it finds any images, the action uses lossless compression libraries to shrink the images down to a more manageable size. It’s an easy way to get more performance out of your website delivery—all with a simple action.

Want more actions to help you manage your repository, build your application, or deploy it to production? 

Get more actions from GitHub Marketplace

The post A thousand community-powered workflows using GitHub Actions appeared first on The GitHub Blog.

How partners like GitKraken use GitHub Actions

https://github.blog/enterprise-software/automation/how-partners-like-gitkraken-use-github-actions/
Mon, 14 Oct 2019

Check out a few of our favorite GitHub Actions created by our partners at Mabl, Codefresh, GorillaStack, and GitKraken.

At GitHub Universe 2018 we announced GitHub Actions, the best way to automate your software workflows on GitHub. With Actions you can orchestrate any workflow, based on any event, while GitHub manages the execution and provides rich feedback and security every step along the way. We also recently announced that GitHub Actions now supports CI/CD.

We’re seeing more automation and helpful Actions being built every day. As that number grows, we want to take a moment to showcase some helpful Actions that were recently built by partners in the GitHub community to inspire what’s next for your workflow.

mabl

Mabl’s new GitHub Action makes it easy for you to integrate intelligent, scalable, cross-browser testing into your CI/CD pipeline. In minutes, you can configure the Action to run mabl tests whenever you deploy new code changes. Once the tests run, access the results from your workflow log, including deep links to diagnostic information such as errors, traces, visual changes, performance information, and more. You can even configure your workflow to automatically promote or roll back changes based on mabl test results. Try mabl with a free 14-day trial.

Try mabl’s GitHub Action

Codefresh

Codefresh’s pipeline runner GitHub Action shows how easy it is to extend GitHub Actions with other platforms that already have an API available. This Action enables easy integration with Codefresh pipelines. For example, you can monitor push events in GitHub and launch Codefresh pipelines that take care of Helm deployments. 

You can also automatically monitor pull requests in GitHub and create preview environments on Kubernetes namespaces with Codefresh. Combining CI/CD and using the best capabilities of both is now possible with GitHub Actions.
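The general pattern behind an integration like this is a workflow that calls an external platform's HTTP API whenever an event fires. The endpoint, payload, and secret name below are purely illustrative, not Codefresh's actual interface; use the Codefresh Action itself for the real integration.

```yaml
name: Trigger external pipeline
on: push

jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      # Hypothetical API call: forwards the pushed commit SHA to an
      # external CI platform so it can start its own pipeline.
      - name: Call pipeline API
        run: |
          curl -X POST "https://ci.example.com/api/pipelines/run" \
            -H "Authorization: Bearer ${{ secrets.CI_API_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"ref": "${{ github.sha }}"}'
```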

Try Codefresh’s GitHub Action

GorillaStack

GorillaStack customers use the Terraform Apply GitHub Action to turn their GitHub repositories into the source of truth for all automation and remediation logic for their cloud cost optimization, backup, and security.

With this Action, customers can automatically validate configuration templates and automate all application updates. Even better, all configuration changes can be reviewed as pull requests with a full audit history of your GorillaStack configuration.

Try GorillaStack’s GitHub Action

GitKraken

Glo Boards

GitKraken Glo Boards is a task and issue tracking tool that optionally syncs with GitHub issues in real time. Now, you can automate updates to cards on your Glo Boards using GitHub Actions. When you include a link to Glo cards from a pull request description or commit message, you can trigger the following actions:

  • Move a Glo card to any column on your board
  • Create a new Glo card
  • Add a label to a Glo card
  • Assign a user to a Glo card
  • Add a comment to a Glo card

For example, you can create a workflow to trigger moving a card to the “Deployed” column on a Glo Board when a pull request is merged.
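The trigger side of that example is standard GitHub Actions: fire on a closed pull request and check that it was actually merged. The action reference and inputs below are placeholders, since the Glo Boards action's real name and parameters live in its Marketplace listing.

```yaml
name: Move Glo card on merge
on:
  pull_request:
    types: [closed]

jobs:
  move-card:
    # "closed" fires for both merged and abandoned PRs, so filter to merges
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: example-org/glo-move-card@v1   # placeholder action name
        with:
          column: Deployed                    # placeholder input
```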

Try GitKraken’s Glo Boards Action

GitKraken Git Client

GitKraken is a cross-platform Git GUI that connects to GitHub—and with the new Actions integration, you can create and manage workflow files right from the client.

For repositories with an upstream remote on GitHub, or when a repository contains the .github/workflows directory, you’ll see the GitHub Actions section in the left panel in GitKraken. This section displays any existing workflow files on the currently checked-out branch of your repository. It also provides quick access to view and edit those files with GitKraken’s built-in code editor.

Try GitKraken’s Git Client Action


Explore more Actions

Check out the library of existing Actions or build your own.

Learn more about GitHub Actions

The post How partners like GitKraken use GitHub Actions appeared first on The GitHub Blog.
