GitHub availability report: February 2026
https://github.blog/news-insights/company-news/github-availability-report-february-2026/
March 12, 2026

In February, we experienced six incidents that resulted in degraded performance across GitHub services.

We recognize the impact these outages have had on teams, workflows, and overall confidence in our platform. Earlier today, we released a blog post outlining the root causes of recent incidents and the steps GitHub is taking to make our systems more resilient moving forward. Thank you for your patience as we work through the near-term and long-term investments we’re making.

Below, we go over the six major incidents specific to February.

February 02 17:41 UTC (lasting 1 hour and 5 minutes)

From January 31, 2026, 00:30 UTC, to February 2, 2026, 18:00 UTC, the Dependabot service was degraded and failed to create 10% of automated pull requests. This was due to a cluster failover that left Dependabot connected to a read-only database.

We mitigated the incident by pausing Dependabot queues until traffic was properly routed to healthy clusters. All failed jobs were identified and restarted.

We added new monitors and alerts to reduce our time to detect and prevent this in the future.

February 02 19:03 UTC (lasting 5 hours and 53 minutes)

On February 2, 2026, between 18:35 UTC and 22:20 UTC, GitHub Actions hosted runners and GitHub Codespaces were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners, February 3, 2026 at 00:30 UTC for larger runners, and February 3 at 00:15 UTC for Codespaces. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot coding agent, Copilot code review, CodeQL, Dependabot, GitHub Enterprise Importer, and GitHub Pages. All regions and runner types were impacted. Codespaces creation and resume operations also failed in all regions. Self-hosted runners for Actions on other providers were not impacted.

This outage was caused by a loss in telemetry that cascaded to mistakenly applying security policies to backend storage accounts in our underlying compute provider. Those policies blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available here. This was mitigated by rolling back the policy changes, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out.

We are working with our compute provider to improve our incident response and engagement time, improve early detection, and ensure safe rollout should similar changes occur in the future.

February 09 16:19 UTC (lasting 1 hour and 21 minutes) and February 09 19:01 UTC (lasting 1 hour and 8 minutes)

On February 9, 2026, GitHub experienced two related periods of degraded availability affecting github.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents.

During both incidents, users encountered errors loading pages on github.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including GitHub Issues, pull requests, webhooks, Dependabot, GitHub Pages, and GitHub Codespaces experienced intermittent errors. SSH-based Git operations were not affected during either incident.

Our investigation determined that both incidents shared the same underlying cause: a configuration change to a user settings caching mechanism caused a large volume of cache rewrites to occur simultaneously. In the first incident, asynchronous rewrites overwhelmed a shared infrastructure component responsible for coordinating background work, which led to cascading failures and connection exhaustion in the service proxying Git operations over HTTPS. We mitigated this incident by disabling async cache rewrites and restarting the affected Git proxy service across multiple datacenters.

The second incident arose when an additional source of cache updates, not addressed by the initial mitigation, introduced a high volume of synchronous writes. This caused replication delays, resulting in a similar cascade of failures and again leading to connection exhaustion in the Git HTTPS proxy. We mitigated by disabling the source of the cache rewrites and again restarting Git proxy.

We are taking the following immediate steps:

  • We optimized the caching mechanism to avoid write amplification and added self-throttling during bulk updates.
  • We are adding safeguards to ensure the caching mechanism responds more quickly to rollbacks and strengthening how changes to these caching systems are planned, validated, and rolled out with additional checks.
  • We are fixing the underlying cause of connection exhaustion in our Git HTTPS proxy layer so the proxy can recover from this failure mode automatically without requiring manual restarts.

February 12 07:53 UTC (lasting 2 hours and 3 minutes)

On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia, and Australia, peaking at a 90% failure rate. Impact began in UK South and spread progressively to other regions. US regions were not impacted.

The failures were caused by an authorization claim change in a core networking dependency, which led to codespace pool provisioning failures. Alerts detected the issue but were not assigned the appropriate severity, which delayed our response. Learning from this, we have improved our validation of changes to this backend service and our monitoring during rollout. We have also updated our alerting thresholds to catch issues before they impact customers and improved our automated failover mechanisms to cover this area.

February 12 10:38 UTC (lasting 34 minutes)

On February 12, 2026, from 09:16 to 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% and peaked at 0.0339% of requests to the service. This was caused by the deployment of an incorrect network configuration in the LFS Service that caused service health checks to fail and an internal service to be incorrectly marked as unreachable.

We mitigated the incident by manually applying the corrected network setting. Additional checks for corruption and auto-rollback detection were added to prevent this type of configuration issue.


Follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the engineering section on the GitHub Blog.

GitHub Game Off 2025 theme announcement
https://github.blog/company/github-game-off-2025-theme-announcement/
November 1, 2025

GitHub’s annual month-long game jam is back for its 13th year! This November, make a game inspired by the theme WAVES—literal, digital, or emotional. 👋🏻 🌊 📡


Get ready for the annual Game Off, our month-long game jam that has inspired thousands of developers to make, share, and play games since 2012. Whether you’re a first-time jammer or a returning champion, this November is your chance to make something unforgettable.

The theme for this year? WAVES!

You have until December 1, 2025, at 13:37 PST to build a game loosely based on the theme. How you interpret it is entirely up to you. Don’t overthink it. Just ride the creative wave and see where it takes you. 🏄🏻

Need inspiration? Here are a few concept ideas:

  • A space shooter where you fly through gravitational waves and wormholes.
  • A survival game where you build a coastal base and brace for tsunami waves.
  • A tower defense game where you battle waves of increasingly powerful baddies.
  • A skateboard game where you ride a sine wave, shredding through peaks and troughs. 
  • A rhythm game where you catch the beat and ride the wave.
  • A racing game where you drift through vaporwave skylines and a totally tubular synthwave soundtrack.
  • A physics puzzler where you bounce, reflect, and refract energy waves.
  • A remake of a classic you enjoyed when you were younger, resulting in endless waves of nostalgia.

Whatever form your game takes, whether it crashes, ripples, or totally wipes out… we can’t wait to see it.

Pro tip: Stuck for ideas? GitHub Copilot might be able to help. Try asking, “What are some fun games I could create with the game jam theme, WAVES?”

How to participate

Work alone or on a team. Use whatever programming languages, game engines, or libraries you like.

  1. Sign up. Create a free GitHub account if you don’t have one.
  2. Join the jam. Hop onto the itch.io Game Off 2025 page. If you don’t already have an itch.io account, you can sign in with your GitHub account.
  3. Create a public repository. Store your source code on GitHub. Push your game before December 1 at 13:37 PST.
  4. Submit your game on itch.io. Once submitted, you’ll be able to play other entries and cast your votes.

Voting

After the submission period ends, participants will vote on each other’s games. Entries will be evaluated in the following categories:

  • Overall
  • Gameplay
  • Graphics
  • Audio
  • Innovation
  • Theme interpretation

Voting will end on January 8, 2026, at 13:37 PST. Winners will be announced on the GitHub Blog and social channels on January 10, 2026, at 13:37 PST.

Light rules

Game Off is intentionally relaxed, but here are a few simple guidelines to keep things fair and fun:

  • Your game must live in a GitHub repository. You should start from scratch, but you can use templates. The vast majority of the work should be done in the game jam period.
  • License it however you like. Open source is encouraged, but not required.
  • Fly solo or join a team. Work however you’re most comfortable.
  • Use any tools or assets you prefer. Open source, commercial, or your own creations are all welcome.
  • AI-assisted development is allowed. 

That’s it. Keep it creative, respectful, and fun, and remember to push your code before the deadline.

New to game development?

You don’t need to be an expert. Many participants build their first game during Game Off. Some use popular engines, others build their own, and a few even create games for classic hardware like the NES, Game Boy, or ZX Spectrum. However you make it, there’s no wrong way to play.

Here are a few engines you might want to explore:

  • Godot (GDScript, C#, C++): Great for 2D and 3D games. Open source, lightweight, and beginner-friendly.
  • Unity (C#): Ideal for 3D or mobile games with plenty of tutorials and asset packs available.
  • Unreal Engine (C++, Blueprints): Best for cinematic visuals, complex 3D games, and high-end experiences.
  • Phaser (JavaScript): Good choice for browser-based 2D arcade or platformer games.
  • Pygame (Python): A solid option for learning game development basics or prototyping ideas quickly.
  • Bevy (Rust): Modern, data-driven engine for developers who like performance and clean ECS design.
  • LÖVE (Lua): Lightweight and fast, good for 2D games and creative coding projects.
  • Flame (Dart / Flutter): Works well for mobile-first 2D games if you already use Flutter.
  • Ebitengine (Go): Simple and powerful engine for 2D games written in Go.
  • Defold (Lua): Cross-platform 2D engine with built-in tools and an active indie community.
  • libGDX (Java): A familiar choice for developers coming from Java or Android backgrounds.
  • HaxeFlixel (Haxe): Great for retro-style 2D games, platformers, and jam projects.

The Game Off 2025 Community is a great place to ask questions or look for teammates. There’s also a friendly community-run Discord server.

New to Git or GitHub?

Game Off is the perfect opportunity to check it out (version control pun intended).

Whether your build floats or sinks, you’re part of something swell. Join thousands of developers around the world for a month of creativity, learning, and code-powered fun. Let’s hang ten on your keyboard  🌊 🤙 and make some WAVES together.

Kung Fury hanging ten on their keyboard as they surf through time waves on Hackerman’s computer

Good luck, and have fun!

Join the jam! Head to Game Off 2025 on itch.io to sign up and start building your game >

GitHub is enabling broader access for developers in Syria following new government trade rules
https://github.blog/company/github-is-enabling-broader-access-for-developers-in-syria-following-new-government-trade-rules/
September 5, 2025

With the relaxation of sanctions and export controls on Syria, GitHub will once again be broadly available to Syrian developers.

More than four years ago, we took a stance that has remained at the center of our work advancing developer freedom: “All developers should be free to use GitHub, no matter where they live.” Today marks one important milestone in that endeavor. With the relaxation of sanctions and export controls on Syria, private and paid features of GitHub.com will once again be broadly available to developers in Aleppo, Homs, Damascus, and the entire country. Collaboration on open source projects and other public repositories has always been available, as can be seen in the GitHub Innovation Graph, an open dataset which provides aggregate numbers on public repository contributions from Syria.

We extend our sincere gratitude to the developers who advocated for this change and consistently sought updates. GitHub welcomes Syrian developers to contribute their projects to the global developer community,  whether they be big or small endeavors. We are proud to be the home for developers whose passion drives them to innovate, build, learn, and teach—and we remain committed to making GitHub available to as many developers as legally possible.

We are moving promptly to lift restrictions on developers in Syria, enabling normal account functionality, as well as access to GitHub Copilot. Changes are underway and expected to reach accounts within the next week. 



How to streamline GitHub API calls in Azure Pipelines
https://github.blog/enterprise-software/ci-cd/how-to-streamline-github-api-calls-in-azure-pipelines/
July 24, 2025

Build a custom Azure DevOps extension that eliminates the complexity of JWT generation and token management, enabling powerful automation and enhanced security controls.


Azure Pipelines is a cloud-based continuous integration and continuous delivery (CI/CD) service that automatically builds, tests, and deploys code similarly to GitHub Actions. While it is part of Azure DevOps, Azure Pipelines has built-in support to build and deploy code stored in GitHub repositories.

Because Azure Pipelines is fully integrated into GitHub development flows, pipelines can be triggered by pushes or pull requests, and it reports the results of the job execution back to GitHub via GitHub status checks. This way, developers can easily see if a given commit is healthy or block pull request merges if the pipeline is not compliant with GitHub rulesets.

When you need additional functionality, you can use either extensions available in the marketplace or GitHub APIs to deepen the integration with GitHub. Below, we’ll show how you can streamline the process of calling the GitHub API from Azure Pipelines by abstracting authentication with GitHub Apps and introducing a custom Azure DevOps extension. This allows pipeline authors to easily authenticate against GitHub and call GitHub APIs without implementing authentication logic themselves. The approach provides enhanced security through centralized credential management, improved maintainability by standardizing GitHub integrations, time savings through cross-project reusability, and simplified operations with centrally managed updates for bug fixes.

Common use cases and scenarios

The GitHub API is very rich, so the possibilities for customization are almost endless. Some of the most common scenarios for GitHub calls in Azure Pipelines include:

  • Setting status checks on commits or pull requests: Report the success or failure of pipeline steps (like tests, builds, or security scans) back to GitHub, enabling rulesets utilization to enforce policies, and providing clear feedback to developers about the health of their code changes.
  • Adding comments to pull requests: Automatically post pipeline results, test coverage reports, performance metrics, or deployment information directly to pull request discussions, keeping all relevant information in one place for code reviewers.
  • Updating files in repositories: Automatically update documentation, configuration files, or version numbers as part of your CI/CD process, such as updating a CHANGELOG.md file or bumping version numbers in package files.
  • Managing GitHub Issues: Automatically create, update, or close issues based on pipeline results, such as creating bug reports when tests fail or closing issues when related features are successfully deployed.
  • Integrating with GitHub Advanced Security: Send code scanning results to GitHub’s code scanning, enabling centralized vulnerability management, security insights, and supporting DevSecOps practices across your development workflow.
  • Managing releases and assets: Automatically create GitHub releases and upload build artifacts, binaries, or documentation as release assets when deployments are successful, streamlining your release management process.
  • Tracking deployments with GitHub deployments: Integrate with GitHub’s deployment API to provide visibility into deployment history and status directly in the GitHub interface.
  • Triggering GitHub Actions workflows: Orchestrate hybrid CI/CD scenarios where Azure Pipelines handles certain build or deployment tasks and then triggers GitHub Actions workflows for additional processing or notifications.

Understanding GitHub API: REST vs. GraphQL

The GitHub API provides programmatic access to most of GitHub’s features and data, offering two distinct interfaces: REST and GraphQL. The REST API follows RESTful principles and provides straightforward HTTP endpoints for common operations like managing repositories, issues, pull requests, and workflows. It’s well documented, easy to get started with, and supports authentication via personal access tokens, GitHub Apps, or OAuth tokens.

GitHub’s GraphQL API offers a more flexible and efficient approach to data retrieval. Unlike REST, where you might need multiple requests to gather related data, GraphQL allows you to specify exactly what data you need in a single request, reducing over-fetching and under-fetching of data. This is particularly valuable when you need to retrieve complex, nested data structures or when you want to optimize network requests in your applications. You can see some examples in Exploring GitHub CLI: How to interact with GitHub’s GraphQL API endpoint.

Both APIs serve as the foundation for integrating GitHub’s functionality into external tools, automating workflows, and building custom solutions that extend GitHub’s capabilities.

How to choose the right authentication method

GitHub offers three primary authentication methods for accessing its APIs. Personal Access Tokens (PATs) are the simplest method, providing a token tied to a user account with specific permissions. OAuth tokens are designed for third-party applications that need to act on behalf of different users, implementing a standard authorization flow where users grant specific permissions to the application. 

GitHub Apps provide the most robust and scalable solution, operating as their own entities with fine-grained permissions, installation-based access, and higher rate limits — making them ideal for organizations and production applications that need to interact with multiple repositories or organizations while maintaining tight security controls.

Personal Access Tokens (PATs)

Pros:
  • Simple to create and use
  • Quick to get started
  • Good for personal automation
  • Can be scoped to multiple organizations
  • Configurable permissions per token
  • Admins can revoke organization access
  • Configurable expiration dates
  • Work with most GitHub API libraries
  • No additional infrastructure needed

Cons:
  • Tied to user account lifecycle
  • Limited to user’s permissions
  • Classic PATs have coarse-grained permissions
  • Require manual rotation
  • Browser-based management only
  • If compromised, expose all accessible organization(s)/repositories

OAuth Tokens

Pros:
  • Standard OAuth 2.0 flow
  • Organization admins control app access
  • Can act on behalf of multiple users
  • Excellent for web applications
  • User-approved permissions
  • Refresh token mechanism
  • Widely supported by frameworks
  • Good for user-facing applications

Cons:
  • Require storing refresh tokens securely
  • Need server infrastructure
  • More complex than PATs for simple automation
  • Still tied to user accounts
  • Require initial browser authorization
  • Token management complexity
  • Potential for scope creep
  • User revocation affects functionality

GitHub Apps

Pros:
  • Act as independent identity
  • Fine-grained, repository-level permissions
  • Installation-based access control
  • Tokens can be scoped down at runtime
  • Short-lived tokens (1 hour max)
  • Higher rate limits
  • Best security model available
  • No user account dependency
  • Audit trail for all actions
  • Can be installed across multiple orgs

Cons:
  • More complex initial setup
  • Require JWT implementation
  • May be overkill for simple scenarios
  • Require understanding of installation concept
  • Private key management responsibility
  • More moving parts to maintain
  • Not all APIs support Apps

PATs have two flavors: classic and fine-grained. Classic PATs provide repository-wide access with coarse permissions. Fine-grained PATs offer more granular control, since they are scoped to a single organization, allow specified permissions at the repository level, and limit access to specific repositories. Administrators can also require approval of fine-grained tokens before they can be used, making them a more secure choice for repository access management. However, they currently do not support all API calls and still have some limitations compared to classic PATs.

Because of their fine-grained permissions, security features, and higher rate limits, GitHub Apps are the ideal choice for machine-to-machine integration with Azure Pipelines. What’s more, the short-lived tokens and installation-based access model provide better security controls compared to PATs and OAuth tokens, making them particularly well-suited for automation in CI/CD scenarios.

Registering and installing a GitHub App

In order to use an application for authentication, register it as a GitHub App, and then install it on the accounts, organizations, or enterprises the application will interact with.

These are the steps to follow:

  1. Register the GitHub App in GitHub enterprise, organization, or account.
    • Make sure to select the appropriate permissions for the application. The permissions will determine what the application can do in the enterprise, organization, and repositories to which it has access.
    • Permissions may be modified at any time. Note that if the application is already installed, changes will require a new authorization from the owner administrators before they take effect.
    • Take care to understand the consequences of making the app public or private. It is very likely that you will want to make the app private, as it is only intended to be used by you or your organization. The semantics of public and private also vary depending on the GitHub Enterprise Cloud type (Enterprise with personal accounts, with managed users, or with data residency).
    • If a private key was generated, save it in a safe place. Private keys are used to authenticate against GitHub to generate an installation token. Note that a key can be revoked, and up to 20 more may be generated if desired.
  2. Install the GitHub App on the accounts or organizations the application will interact with.
    • When an app is installed, select which repositories the app will have access to. Options include all repositories (current and future) or you can select individual repositories.

Note: An unlimited number of GitHub Apps may be installed on each account, but only 100 GitHub Apps may be registered per enterprise, organization, or account.

GitHub App authentication flow

GitHub Apps use a two-step authentication process to access the GitHub API. First, the app authenticates itself using a JSON Web Token (JWT) signed with its private key. This JWT proves the app’s identity but doesn’t provide access to any GitHub resource. To call GitHub APIs, the app needs to obtain an installation token. Installation tokens are scoped (enterprise, organization, or account) access tokens that are generated using the app’s JWT authentication. These tokens are short-lived (valid for one hour), can only access resources in the scope where the app is installed (enterprise, organization, or repository), and carry at most the permissions granted during the app’s installation.

To obtain an installation token, there are two approaches: either use a known installation ID, or retrieve the ID by calling the installations API. Once the app has the installation ID, it requests a new token using that ID. The resulting installation token inherits the app’s permissions and repository access for that installation. It can optionally request the token with reduced permissions or limited to specific repositories — a useful security feature when you don’t need the app’s full access scope.

The resulting installation token can then be used to make GitHub API calls with the returned permissions.
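
To make this flow concrete, here is a minimal TypeScript sketch of the two steps (a sketch only, not the extension’s actual code). It assumes the jsonwebtoken npm package and a Node.js 18+ runtime with built-in fetch; the client ID, private key, and installation ID are placeholder inputs from your own app registration.

import * as jwt from "jsonwebtoken";

// Step 1: sign a short-lived JWT with the app's private key (proves app identity).
// Step 2: exchange it for an installation token scoped to a single installation.
async function createInstallationToken(
  clientId: string,       // the app's client ID, used as the `iss` claim
  privateKey: string,     // PEM-encoded private key generated at registration
  installationId: number
): Promise<string> {
  const now = Math.floor(Date.now() / 1000);
  const appJwt = jwt.sign(
    { iat: now - 60, exp: now + 540, iss: clientId }, // JWT lifetime is capped at 10 minutes
    privateKey,
    { algorithm: "RS256" }
  );

  const res = await fetch(
    `https://api.github.com/app/installations/${installationId}/access_tokens`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${appJwt}`,
        Accept: "application/vnd.github+json",
      },
    }
  );
  if (!res.ok) {
    throw new Error(`Installation token request failed: ${res.status}`);
  }
  // The returned token is valid for one hour and carries the installation's permissions.
  return (await res.json()).token;
}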

Note: The application can also authenticate on a user’s behalf, but it’s not an ideal scenario for CI/CD pipelines where we want to use a service account and not a user account.

Sequence diagram showing GitHub App authentication flow between Client and GitHub, including JWT generation, installation ID retrieval, and installation token creation steps.

From a pipeline perspective, generating an installation token is all that’s needed to call GitHub APIs.

Pipeline authors have three main options to generate installation tokens in Azure Pipelines:

  1. Use a command-line tool: Several tools are available that can generate installation tokens directly from a pipeline step. For example, gh-token is a popular open source tool that handles the entire token generation process.
  2. Write custom scripts: Implement the token generation process using bash/curl or PowerShell scripts following the authentication steps described above. This grants full control over the process but requires more implementation effort.
  3. Use Azure Pipeline tasks: While Azure Pipelines doesn’t provide built-in GitHub App authentication, you can either:
    • Find a suitable task in the Azure DevOps marketplace.
    • Create a custom task that implements the GitHub App authentication flow.

Next, we’ll explore creating a custom task using an Azure DevOps extension to provide an integration with GitHub App authentication and dynamically generated installation tokens.

Azure DevOps extension for GitHub App authentication

When creating an integration between Azure Pipelines and GitHub, security of the app private key should be top of mind. Possession of this key grants permissions to generate installation tokens and make API calls on behalf of the app, so it must be stored securely. Within Azure Pipelines, we have several options for storing sensitive data, including secret pipeline variables, variable groups, secure files, and service connections.

Service connections in Azure Pipelines provide several key benefits for managing external service authentication, including:

  • Centralized access control where administrators can specify which pipelines can use the connection
  • Support for multiple authentication schemes
  • Ability to share connections across multiple pipelines within a project
  • Built-in security controls for managing who can view or modify connection details
  • Keep sensitive credentials hidden from pipeline authors while still allowing usage
  • Shared connections across multiple projects, reducing duplication and management overhead

For GitHub App authentication, service connections are particularly valuable because they:

  • Securely store the app’s private key
  • Allow administrators to configure and enforce connection behaviors
  • Provide better security compared to storing secrets directly in pipelines or variable groups

For those eager to explore the sample code, check out the repository. The key components and configuration are detailed below.

Creating a custom Azure DevOps extension

Azure DevOps extensions are packages that add new capabilities to Azure DevOps services. In our case, we need to create an extension that provides two key components:

  • Custom service connection type for securely storing GitHub App credentials (and other settings)
  • Custom task that uses those credentials to generate installation tokens

An extension consists of a manifest file that describes what the extension provides, along with the actual implementation code.

The development process involves creating the extension structure, defining the service connection schema, implementing the custom task logic in PowerShell (Windows only) or JavaScript/TypeScript for cross-platform compatibility, and packaging everything into a distributable format. Once created, the extension can be published privately for your organization or shared publicly through the Azure DevOps Marketplace, making it available for others who have similar GitHub integration needs.

We are not going to do a full walkthrough of the extension creation process, but we will demonstrate the most important steps below; the Azure DevOps extension documentation covers the rest.

Adding a custom service connection

To enable GitHub App authentication in Azure Pipelines, we need to create a custom service connection type since there isn’t a built-in one. This can be done by adding a custom endpoint contribution to our extension, which will define how the service connection stores and validates the GitHub App credentials, and provides a user-friendly UI for configuring the connection settings like App ID, private key, and other properties.

We need to add a contribution of type ms.vss-endpoint.service-endpoint-type to the extension contributions manifest. This contribution will define the service connection type and its properties, like the authentication scheme, the endpoint schema, and the input fields that will be displayed in the service connection configuration dialogue.

Something like this (see a snippet below, or explore the full contribution definition in the reference implementation):

"contributions": [
  {
    "id": "github-app-service-endpoint-type",
    "description": "GitHub App Service Connection",
    "type": "ms.vss-endpoint.service-endpoint-type",
    "targets": [ "ms.vss-endpoint.endpoint-types" ],
    "properties": {
        "name": "githubappauthentication",
        "isVerifiable": false,
        "displayName": "GitHub App",
        "url": {
            "value": "https://api.github.com/",
            "displayName": "GitHub API URL",
            "isVisible": "true"
        },
        ...
  },

Once you install the extension, you can add/manage the service connection of type “GitHub App” and configure the app’s ID, private key, and other settings. The service connection will securely store the private key and can be used by custom tasks to generate installation tokens in a pipeline.

Azure DevOps new service connection dialog showing different connection types including Generic, GitHub, GitHub App (highlighted with red arrow), GitHub Enterprise Server, and Incoming WebHook options.

In addition to storing the private key, the custom service connection can also store other settings, such as the GitHub API URL and the app client ID. It can also be used to limit token permissions or scope the token to specific repositories. By optionally enforcing these settings at the service connection level, administrators can ensure consistency and security, rather than leaving configuration decisions to pipeline authors.

Azure DevOps service connection configuration form for custom GitHub App authentication, showing fields for GitHub API URL, Client ID, Private Key, Token Permissions, and Service Connection Name.

Adding a custom task

Now that we have a secure way to store the GitHub App credentials, we can create a custom task that will use the service connection to generate an installation token. The task will be a TypeScript application (cross platform) and use the Azure DevOps Extension SDK.

While I already shared the full walkthrough of creating a custom task, here is an abbreviated list to follow:

  • Create the custom task skeleton
  • Declare the inputs and outputs on the task manifest (task.json)
  • Implement the code
  • Declare the task and its assets on the extension manifest (vss-extension.json)

I have created an extension sample that contains both the service connection as well as a custom task that generates a GitHub installation token for API calls. Since the extension is not published to the marketplace, you have to (privately) publish it under your account, share it with your Azure DevOps enterprise or organization, and then install it on all organizations where you want to use the custom task.

If you choose this path, jump to the next section, as you are now ready to use the custom task in your pipeline.

Note: The sample includes both a GitHub Actions workflow and an Azure Pipelines YAML pipeline that builds and packages the extension as an Azure DevOps extension that can be published in the Azure DevOps marketplace.

Using the custom task in Azure Pipelines

The task supports receiving the private key as a string, as a file (to be combined with secure files), or preferably via a service connection (see input parameters).

Assuming you have a service connection named my-github-app-service-connection, let’s see how we can use the task to create a comment on a pull request in the GitHub repository that triggered the pipeline, using the GitHub CLI to call the GitHub API:

steps:
- task: create-github-app-token@1
  displayName: create installation token
  name: getToken
  inputs:
    githubAppConnection: my-github-app-service-connection

- bash: |
    pr_number=$(System.PullRequest.PullRequestNumber)
    repo=$(Build.Repository.Name)
    echo "Creating comment in pull request #${pr_number} in repository ${repo}"
    gh api -X POST "/repos/${repo}/issues/${pr_number}/comments" -f body="Posting a comment from Azure Pipelines"
  displayName: Create comment in pull request
  condition: eq(variables['Build.Reason'], 'PullRequest')
  env:
    GH_TOKEN: $(getToken.installationToken)

Running this pipeline will result in a comment being posted in the pull request:

Screenshot of a GitHub pull request snippet showing an Azure Pipelines Status check, and comment that reads 'Posting a comment from Azure Pipelines' written by our pipeline.

Pretty simple, right? The task will create an installation token using the service connection and export it as a variable, which can be accessed as getToken.installationToken (with getToken being the identifier of the step). It can then be used to authenticate against GitHub, in this case using the GitHub CLI command, which will take care of the API call and authentication for us (we could have also used curl or any other HTTP client).

The task also exports other variables:

  • tokenExpiration: the expiration date of the generated token, in ISO 8601 format
  • installationId: the ID of the installation for which the token was generated
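
For example, a follow-up step could read these outputs directly; here is a hypothetical snippet continuing the pipeline example above:

- bash: |
    echo "Installation token expires at: $(getToken.tokenExpiration)"
    echo "Token generated for installation: $(getToken.installationId)"
  displayName: Show token metadata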

Unlocking powerful automation capabilities beyond basic CI/CD

By leveraging GitHub Apps for authentication, organizations can establish secure, scalable Azure Pipelines integrations that provide fine-grained permissions, short-lived tokens, and better security controls compared to traditional PATs.

The custom Azure DevOps extension approach provides a seamless integration experience that abstracts away the complexities of GitHub App authentication. Through service connections and custom tasks, pipeline authors can easily generate installation tokens without worrying about JWT generation, installation ID management, or token lifecycle concerns.

The streamlined approach also enables development teams to implement rich GitHub integrations, including automated status checks, pull request comments, issue management, security scanning integration, and deployment tracking. The result? A more cohesive development workflow where Azure Pipelines and GitHub work together seamlessly to provide comprehensive visibility and automation throughout the software development lifecycle.

Whether you’re looking to enhance your existing CI/CD processes or build entirely new automated workflows, the combination of Azure Pipelines and GitHub API through GitHub Apps provides a robust foundation for modern DevOps practices. This will allow you to enrich your existing pipelines with GitHub capabilities as you move your code from Azure Repos to GitHub.

Explore more blog posts covering a range of topics essential for enterprise software development >

First Look: Exploring OpenAI o1 in GitHub Copilot
https://github.blog/news-insights/product-news/openai-o1-in-github-copilot/
September 12, 2024

We’ve tested integrating OpenAI o1-preview with GitHub Copilot. Here’s a first look at where we think it can add value to your day to day.


Today, OpenAI released OpenAI o1, a new series of AI models equipped with advanced reasoning capabilities to solve hard problems. Like you, we are excited to put the new o1 model through its paces and have tested integrating o1-preview with GitHub Copilot. While we are exploring many use cases with this new model, such as debugging large-scale systems, refactoring legacy code, and writing test suites, our initial testing showed promising results in code analysis and optimization. This is because of o1-preview’s ability to think through challenges before responding, which enables Copilot to break down complex tasks into structured steps.

In this blog, we’ll describe two scenarios showcasing the new model’s capabilities within Copilot and how it could work for your day to day. Keep reading for an inside look at what happens when a new model launches, what we test, and how we approach AI-powered software development at GitHub.

Optimize complex algorithms with advanced reasoning

In our first test, we wanted to understand how o1-preview could help write or refine complex algorithms, a task that requires deep logical reasoning to find more efficient or innovative solutions. Developers need to understand the constraints, optimize edge cases, and iteratively improve the algorithm without losing track of the overall objective. This is exactly where o1-preview excels. With this in mind, we developed a new code optimization workflow that benefits from the model’s reasoning capabilities.

In this demo, a new built-in Optimize chat command provides rich editor context out of the box, like imports, tests, and performance profiles. We tested how well o1-preview could analyze and iterate code to come up with a more thorough and efficient optimization in one shot.

The video shows optimizing the performance of a byte pair encoder used in Copilot Chat’s tokenizer library (yes, this means we use AI to optimize a key AI development building block).

This was a real problem the VS Code team faced, as Copilot needs to repeatedly tokenize large amounts of data while it assembles prompts.

The results highlight how o1-preview’s reasoning capability allows a deeper understanding of the code’s constraints and edge cases, which helps produce a more efficient and higher quality result. Meanwhile, GPT-4o sticks to obvious optimizations and would need a developer’s help to steer Copilot towards more complex approaches.

Beyond handling complex code tasks, o1-preview’s math abilities shine as it effortlessly calculates the benchmark results from the raw terminal output, then summarizes them succinctly.

Optimize application code to fix a performance bug

In this next demo on GitHub, o1-preview was able to identify and develop a solution for a performance bug within minutes. The same bug took one of our software engineers a few hours before they came up with the same solution. At the time, we wanted to add a folder tree to the file view in GitHub.com, but the number of elements was causing our focus management code to stall and crash the browser. The video shows side-by-side the difference of using GPT-4o and o1-preview to try and resolve the issue:

With 1,000 elements managed by this code, it was hard to isolate the problem. Eventually we implemented a change that improved the runtime of this function from over 1,000ms to about 16ms. If we had Copilot with o1-preview, we could have quickly identified the problem and fixed it faster.

Through this experimentation, we found a subtle but powerful difference, which is how deliberate and purposeful o1-preview’s responses are, making it easy for the developer to pinpoint problems and quickly implement solutions. With GPT-4o, a similar prompt might result in a blob of code instead of a solution with recommendations broken down line by line.

Bringing the power of o1-preview to developers building on GitHub

Not only are we excited to experiment with integrating o1-preview into GitHub Copilot, we can’t wait to see what you’ll be able to build with it too. That’s why we’re bringing the o1 series to GitHub Models. You’ll find o1-preview and o1-mini, a smaller, faster, and 80% cheaper model, in our marketplace later today, but because it is still in preview you’ll need to sign up for Azure AI for early access.

Stay tuned

As part of Microsoft’s collaboration with OpenAI, GitHub is able to constantly explore how we can leverage the latest AI breakthroughs to drive developer productivity, and, most importantly, increase developer happiness. Although these demos showcase o1-preview’s enhanced capabilities for two specific optimization problems, we’re still early in our experimentation and are excited to see what else it can do.

We’re currently exploring more use cases across Copilot—in IDEs, Copilot Workspace, and on GitHub—to leverage o1-preview’s strong reasoning capabilities to accelerate developer workflows even further. The advancements we’re showcasing today barely scratch the surface of what developers will be able to build with o1-preview in GitHub Copilot. And with the expected evolution of both the o1 and GPT series, this is just the beginning.

Interested in trying out the latest Copilot and AI innovations?

Fuzzing sockets: Apache HTTP, Part 3: Results
https://github.blog/company/fuzzing-sockets-apache-http-part-3-results/
December 21, 2021


In the first part of this series, I explained my fuzzing workflow and covered some of the custom mutators I’ve built for fuzzing Apache HTTP. In the second part, I explained how to build custom ASAN interceptors in order to catch memory bugs when custom memory pools are used.

In this third and last part, I’ll share the results of my research on Apache HTTP server, and I’ll show some of the vulnerabilities that I’ve found.

So, let’s get to it!

NULL dereference in session_identity_decode

This bug can be triggered by setting a cookie with a NULL key and value:

setting a cookie with a NULL key and value

In the example above, you can see that in the first position of the cookies there is a session key and a choco value. In the second position, we can find the admin-user key and the number 2 as a value. However, in the third position there is an empty key and value pair.

What’s the problem here? Well, if you look at the following code snippet, you can see two calls to apr_strtok, to extract the first and the second string (key and value):

const char *psep = "=";
char *key = apr_strtok(pair, psep, &plast);
char *val = apr_strtok(NULL, psep, &plast);

Let’s see now what happens in the apr_strtok function when the first argument is NULL:

APR_DECLARE(char *) apr_strtok(char *str, const char *sep, char **last)
{
    char *token;

    if (!str)
        str = *last;

    while (*str && strchr(sep, *str))
        ++str;

You can see in this code snippet how the while loop tries to dereference the first function argument (the str pointer). So, if this first argument is NULL and the saved *last position is also NULL, it will trigger a NULL dereference bug. This is exactly what happens with the statement char *val = apr_strtok(NULL, psep, &plast); when the previous key extraction also returned NULL.

In order to exploit this bug, mod_session needs to be enabled. This vulnerability can lead to a denial of service at the child level, affecting the other threads in the same process.
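
A straightforward mitigation is to guard the second apr_strtok call. Below is a minimal sketch assuming the APR headers are available; the helper name and structure are illustrative, not the actual mod_session patch:

#include <apr_strings.h>

/* Parse a "key=value" pair defensively: skip the second apr_strtok call
 * when the first one finds no token, so apr_strtok is never re-entered
 * with a NULL saved position. */
static void parse_cookie_pair(char *pair, char **out_key, char **out_val)
{
    const char *psep = "=";
    char *plast = NULL;
    char *key = apr_strtok(pair, psep, &plast);
    char *val = key ? apr_strtok(NULL, psep, &plast) : NULL;

    *out_key = key;
    *out_val = val;
}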

Off-by-one (stack-based) in check_nonce

In order to exploit this bug, the mod_auth_digest module should be enabled, and the application has to be using the DIGEST authentication.

For triggering this bug, we need to assign a specific set of values to the nonce field as follows:

GET http://127.0.0.1/i?proxy=yes HTTP/1.1
Host: foo.example
Accept: */*
Authorization: Digest username="2",
                     realm="private area",
                     nonce="d2hhdGFzdXJwcmlzZXhkeGR4ZHhkeGR4ZHhkeGR4ZHhkeGR4ZA==",
                     uri="http://127.0.0.1:80/i?proxy=yes",
                     qop=auth,
                     nc=00000001,
                     cnonce="0a4f113b",
                     response="53849ce65ba787cd0a07a272ece3bba6",
                     opaque="5ccc069c403ebaf9f0171e9517f40e41"

As you can see, the nonce field is a BASE64 value. In order to decode this value, the check_nonce function calls:

apr_base64_decode_binary(nonce_time.arr, resp->nonce)

where nonce_time.arr is a local array of size 8. Let’s see the code of the apr_base64_decode_binary function:

APR_DECLARE(int) apr_base64_decode_binary(unsigned char *bufplain, const char *bufcoded)
{
    int nbytesdecoded;
    register const unsigned char *bufin;
    register unsigned char *bufout;
    register apr_size_t nprbytes;

    bufin = (const unsigned char *) bufcoded;
    while (pr2six[*(bufin++)] <= 63);
    nprbytes = (bufin - (const unsigned char *) bufcoded) - 1;
    nbytesdecoded = (((int)nprbytes + 3) / 4) * 3;

    bufout = (unsigned char *) bufplain;
    bufin = (const unsigned char *) bufcoded;

    while (nprbytes > 4) {
        *(bufout++) =
            (unsigned char) (pr2six[*bufin] << 2 | pr2six[bufin[1]] >> 4);
        *(bufout++) =
            (unsigned char) (pr2six[bufin[1]] << 4 | pr2six[bufin[2]] >> 2);
        *(bufout++) =
            (unsigned char) (pr2six[bufin[2]] << 6 | pr2six[bufin[3]]);
        bufin += 4;
        nprbytes -= 4;
    }

    if (nprbytes > 1) {
        *(bufout++) =
            (unsigned char) (pr2six[*bufin] << 2 | pr2six[bufin[1]] >> 4);
    }
    if (nprbytes > 2) {
        *(bufout++) =
            (unsigned char) (pr2six[bufin[1]] << 4 | pr2six[bufin[2]] >> 2);
    }
    if (nprbytes > 3) {
        *(bufout++) =
            (unsigned char) (pr2six[bufin[2]] << 6 | pr2six[bufin[3]]);
    }

Under normal circumstances, the nprbytes variable will give 11 as a result, and the while loop will be executed two times, writing a total of 8 bytes into the bufplain array (6 + 2).

However, if the date format is wrong, the calculation of the nprbytes variable can give 12 as a result. In that case, the while loop still runs twice (writing 6 bytes), but all three trailing conditionals now execute, so a total of 9 bytes are written into the bufplain array. Consequently, the program writes 1 byte outside of the boundaries of the local array nonce_time.arr, overwriting 1 byte in the program stack (aka off-by-one).
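
You can verify the arithmetic with a standalone sketch that replays just the byte counting of apr_base64_decode_binary (no APR required, and no real buffer is written):

#include <stdio.h>

/* Replay the write counting of apr_base64_decode_binary for a given
 * nprbytes value: 3 bytes per main-loop iteration, plus up to 3 more
 * bytes from the trailing conditionals. */
static int bytes_written(int nprbytes)
{
    int written = 0;
    while (nprbytes > 4) {
        written += 3;
        nprbytes -= 4;
    }
    if (nprbytes > 1) written++;
    if (nprbytes > 2) written++;
    if (nprbytes > 3) written++;
    return written;
}

int main(void)
{
    printf("nprbytes = 11 -> %d bytes (fits the 8-byte array)\n", bytes_written(11));
    printf("nprbytes = 12 -> %d bytes (one byte past the 8-byte array)\n", bytes_written(12));
    return 0;
}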

Use-after-free in cleanup_tables

Here we have a use-after-free (UAF) in the cleanup_tables function. Let’s take a look at the code:

static apr_status_t cleanup_tables(void *not_used)
{
    ap_log_error(APLOG_MARK, APLOG_INFO, 0, NULL, APLOGNO(01756)
                  "cleaning up shared memory");

    if (client_rmm) {
        apr_rmm_destroy(client_rmm);
        client_rmm = NULL;
    }

    if (client_shm) {
        apr_shm_destroy(client_shm);
        client_shm = NULL;
    }

You can see this function calls the apr_rmm_destroy function in order to free the client_rmm memory block. Yet the problem here is that, under certain circumstances, this memory block could have been previously freed by the apr_allocator_destroy function (not shown in this code snippet).

So, the program is trying to access an address that is no longer valid, leading to a use-after-free vulnerability. It’s important to mention that this vulnerability can only be triggered in the ONE_PROCESS mode.

OOB-write (heap-based) in ap_escape_quotes

In this case, we have a heap out-of-bounds write affecting the ap_escape_quotes function. This function escapes any quotes in the given input string. The origin of this bug is a mismatch between the calculated size of the escaped string and the number of bytes actually written into the “malloced” outstring buffer.

In the following code snippet, you can see the code that calculates the required length, newlen, of the escaped output:

while (*inchr != '\0'){
    newlen++;
    if (*inchr == '"') {
        newlen++;
    }
    if ((*inchr == '\\') && (inchr[1] != '\0')) {
        inchr++;
        newlen++;
    }
    inchr++;
}
outstring = apr_palloc(p, newlen + 1);

In this second code snippet, you can see the copy loop that writes the escaped string into outstring:

while (*inchr != '\0') {
        if ((*inchr == '\\') && (inchr[1] != '\0')) {
            *outchr++ = *inchr++;
            *outchr++ = *inchr++;
        }
        if (*inchr == '"') {
            *outchr++ = '\\';
        }
        if (*inchr != '\0') {
            *outchr++ = *inchr++;
        }
    }
    *outchr = '\0';
    return outstring;

As you can see, the copy loop uses different logic than the size calculation. As a result, if we provide a malicious input to the ap_escape_quotes function, it is possible to write outside the bounds of the outstring buffer.
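
To see the mismatch in action, the standalone sketch below replays both passes over one input that exposes it, the four characters \\\" (three backslashes followed by a quote), counting bytes instead of writing them. The size pass reserves 4 bytes plus a NUL terminator, while the copy pass emits 5 content bytes plus a NUL:

#include <stdio.h>

int main(void)
{
    const char *input = "\\\\\\\"";   /* four characters: \ \ \ " */
    const char *inchr = input;
    int newlen = 0, written = 0;

    /* Pass 1: the size calculation, as in ap_escape_quotes. */
    while (*inchr != '\0') {
        newlen++;
        if (*inchr == '"')
            newlen++;
        if (*inchr == '\\' && inchr[1] != '\0') {
            inchr++;
            newlen++;
        }
        inchr++;
    }

    /* Pass 2: the copy logic, counting bytes instead of writing them. */
    inchr = input;
    while (*inchr != '\0') {
        if (*inchr == '\\' && inchr[1] != '\0') {
            written += 2;          /* copy the escaped pair */
            inchr += 2;
        }
        if (*inchr == '"')
            written++;             /* extra escaping backslash for a lone quote */
        if (*inchr != '\0') {
            written++;             /* copy the current character */
            inchr++;
        }
    }

    printf("allocated %d bytes, but %d content bytes plus a NUL are written\n",
           newlen + 1, written);
    return 0;
}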

This bug was previously reported by Google OSS-Fuzz, just a few days before I found it.

Race condition leading to UAF

Now, I’m going to explain something totally different. In this case, the bug is a race condition leading to use-after-free and affecting the Apache Core.

During my fuzzing work, I found multiple non-reproducible UAF crashes. After looking into it more deeply, I discovered a kind of race condition between calls to apr_allocator_destroy and allocator_alloc. All the signs suggested that these functions might not be thread safe in concurrent scenarios. This could lead to a corruption of some nodes of the memory pool and, occasionally, the program tries to release a block that is already present in the free pool. This bug shares some similarities with the bug I reported in ProFTPD (CVE-2020-9273), a year ago.

Here you can see an example ASAN trace:

==106820==ERROR: AddressSanitizer: heap-use-after-free on address 0x625000091100 at pc 0x7ffff7d2ff4d bp 0x7fffffffd800 sp 0x7fffffffd7f8
READ of size 8 at 0x625000091100 thread T0
    #0 0x7ffff7d2ff4c in apr_allocator_destroy /home/antonio/Downloads/httpd-trunk/srclib/apr/memory/unix/apr_pools.c:197:26
    #1 0x7ffff7d3306c in apr_pool_terminate /home/antonio/Downloads/httpd-trunk/srclib/apr/memory/unix/apr_pools.c:756:5
    #2 0x7ffff77aeba6 in __run_exit_handlers /build/glibc-5mDdLG/glibc-2.30/stdlib/exit.c:108:8
    #3 0x7ffff77aed5f in exit /build/glibc-5mDdLG/glibc-2.30/stdlib/exit.c:139:3
    #4 0x5b1ae8 in clean_child_exit /home/antonio/Downloads/httpd-trunk/server/mpm/event/event.c:777:5
    #5 0x5b19a5 in child_main /home/antonio/Downloads/httpd-trunk/server/mpm/event/event.c:2957:5
    #6 0x5afa7b in make_child /home/antonio/Downloads/httpd-trunk/server/mpm/event/event.c:2981:9
    #7 0x5af005 in startup_children /home/antonio/Downloads/httpd-trunk/server/mpm/event/event.c:3046:13
    #8 0x5a74c1 in event_run /home/antonio/Downloads/httpd-trunk/server/mpm/event/event.c:3407:9
    #9 0x6212b1 in ap_run_mpm /home/antonio/Downloads/httpd-trunk/server/mpm_common.c:100:1
    #10 0x5e67e6 in main /home/antonio/Downloads/httpd-trunk/server/main.c:891:14
    #11 0x7ffff778c1e2 in __libc_start_main /build/glibc-5mDdLG/glibc-2.30/csu/../csu/libc-start.c:308:16
    #12 0x44da7d in _start ??:0:0

This is not a new problem. Similar issues were reported by Hanno Böck (@hanno) in 2018. You can check Hanno’s previous reports here.

Minor bugs

During my fuzzing session, I found some other minor bugs, and I would like to show you one of them: an integer overflow in MOD_DAV’s timeout handling. It’s not a dangerous bug, but I think it is an interesting example of how trivial these bugs can be to trigger.

So, take a look at the following example in which we send a LOCK WebDAV request to MOD_DAV with a large Timeout header value (Second-41000000004100000000):

LOCK /dav/c HTTP/1.1
Host: 127.0.0.1
Timeout: Second-41000000004100000000
Content-Type: text/xml; charset="utf-8"
Content-Length: XXX
Authorization: Basic Mjoz
<?xml version="1.0" encoding="utf-8" ?>
<d:lockinfo xmlns:d="DAV:">

In the following code snippet you can see the statement:

return now + expires; 

where there is an addition of two (32-bit) integer values that will be stored in another integer variable. So, if these values are big enough, we can overflow the returned value.

while ((val = ap_getword_white(r->pool, &timeout)) != NULL) {
    if (!strncmp(val, "Infinite", 8)) {
        return DAV_TIMEOUT_INFINITE;
    }

    if (!strncmp(val, "Second-", 7)) {
        val += 7;
        expires = atol(val);
        now = time(NULL);
        return now + expires;
    }
}

Since this bug is triggered with a LOCK request, the MOD_DAV module should be enabled in order to be triggered.
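
As a quick illustration of the arithmetic (a standalone sketch, not the actual mod_dav code), the program below simulates the 32-bit addition using well-defined unsigned wrap-around, with a client-supplied timeout large enough to overflow:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    int32_t now = (int32_t) time(NULL);    /* current time, truncated to 32 bits */
    int64_t requested = 4100000000LL;      /* e.g. a "Second-4100000000" header value */

    /* Simulate the buggy `now + expires` on 32-bit integers with unsigned
     * arithmetic (defined behavior), then view the result as signed. */
    uint32_t wrapped = (uint32_t) now + (uint32_t) requested;

    /* The resulting lock deadline lands far in the past instead of the future. */
    printf("now = %d, requested = %lld, wrapped deadline = %d\n",
           (int) now, (long long) requested, (int) (int32_t) wrapped);
    return 0;
}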

Conclusions

While Apache HTTP security has been extensively studied by researchers, recently disclosed vulnerabilities involving path traversals and file disclosures (CVE-2021-41773 and CVE-2021-42013) make it clear that there is still room for discovering new critical vulnerabilities.

With this research, I wanted to make my own contribution to improving the security of the Apache HTTP server, and to show that it is possible to use fuzzing to find vulnerabilities in one of the most widely used pieces of open source software out there. At the same time, I hope that I was able to share all the knowledge I learned with you.

What next?

With this third part of my “fuzzing Apache” research, I have concluded the “Fuzzing sockets” series. You can find all of my previous posts in the series on the GitHub Blog.

In my next blog entry, I’ll start a new series focused on fuzzing JavaScript engines. Stay tuned!

