GitHub Enterprise Cloud - EU Status - Incident History https://eu.githubstatus.com Statuspage Mon, 16 Mar 2026 05:28:52 +0000 EU - Disruption with some GitHub services <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Resolved</strong> - On March 5, 2026, between 12:53 UTC and 13:35 UTC, the Copilot mission control service was degraded. This resulted in empty responses being returned for users' agent session lists across GitHub web surfaces, leaving impacted users unable to see their lists of current and previous agent sessions. This was caused by an incorrect database query that wrongly excluded records with an absent field.<br /><br />We mitigated the incident by rolling back the database query change. No data was altered or deleted during the incident.<br /><br />To prevent similar issues in the future, we're improving our monitoring depth to more easily detect degradation before changes are fully rolled out.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Update</strong> - Copilot coding agent mission control is fully restored. Tasks are now listed as expected.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:21</var> UTC</small><br><strong>Update</strong> - Users were temporarily unable to see tasks listed in mission control surfaces. The ability to submit new tasks, view existing tasks via direct link, or manage tasks was unaffected throughout. 
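The root cause above — a query that excluded records with an absent field — is a common SQL pitfall: a comparison against NULL is neither true nor false, so a filter like `WHERE col != value` silently drops rows where `col` is NULL. A minimal sketch of that failure mode (hypothetical schema and query, not GitHub's actual code):

```python
import sqlite3

# Hypothetical table of agent sessions; `archived_at` is NULL for
# sessions that were never archived.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, archived_at TEXT)")
conn.executemany("INSERT INTO sessions VALUES (?, ?)",
                 [(1, None), (2, "2026-03-01"), (3, None)])

# Buggy query: for NULL values, `archived_at != ?` evaluates to NULL
# (not TRUE), so never-archived sessions vanish from the result.
buggy = conn.execute(
    "SELECT id FROM sessions WHERE archived_at != '2026-03-01' ORDER BY id"
).fetchall()

# Fixed query: explicitly include rows where the field is absent.
fixed = conn.execute(
    "SELECT id FROM sessions WHERE archived_at IS NULL "
    "OR archived_at != '2026-03-01' ORDER BY id"
).fetchall()

print(buggy)  # []
print(fixed)  # [(1,), (3,)]
```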
A revert is currently being deployed and we are seeing recovery.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 05 Mar 2026 01:30:38 +0000 https://eu.githubstatus.com/incidents/xc3gm34trprw https://eu.githubstatus.com/incidents/xc3gm34trprw EU - Some OpenAI models degraded in Copilot <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Resolved</strong> - On March 5th, 2026, between approximately 00:26 and 00:44 UTC, the Copilot service experienced a degradation of the GPT 5.3 Codex model due to an issue with our upstream provider. Users encountered elevated error rates when using GPT 5.3 Codex, impacting approximately 30% of requests. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:53</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /></p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:47</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Thu, 05 Mar 2026 01:13:29 +0000 https://eu.githubstatus.com/incidents/3bgzsgpddqvw https://eu.githubstatus.com/incidents/3bgzsgpddqvw EU - Claude Opus 4.6 Fast not appearing for some Copilot users <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:11</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access.<br /><br />We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional and no platform regression had occurred.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:05</var> UTC</small><br><strong>Update</strong> - We believe that all expected users still have access to Claude Opus 4.6. 
We have confirmed that no users outside the affected organizations lost access.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:31</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 03 Mar 2026 21:11:31 +0000 https://eu.githubstatus.com/incidents/xwh8w5lmg8bv https://eu.githubstatus.com/incidents/xwh8w5lmg8bv EU - Incident with Copilot and Actions <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:09</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 18:46 UTC and 20:09 UTC, GitHub experienced a period of degraded availability impacting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other dependent services. At the peak of the incident, GitHub.com request failures reached approximately 40%. During the same period, approximately 43% of GitHub API requests failed. Git operations over HTTP had an error rate of approximately 6%, while SSH was not impacted. GitHub Copilot requests had an error rate of approximately 21%. GitHub Actions experienced less than 1% impact. <br /><br />This incident shared the same underlying cause as an incident in early February where we saw a large volume of writes to the user settings caching mechanism. While deploying a change to reduce the burden of these writes, a bug caused every user’s cache to expire, get recalculated, and get rewritten. The increased load caused replication delays that cascaded down to all affected services. We mitigated this issue by immediately rolling back the faulty deployment. <br /><br />We understand these incidents disrupted the workflows of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, we acknowledge we have more work to do. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. 
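The failure mode described above — a deploy that invalidated every user's cache entry at once, triggering mass recalculation and rewrites — is a classic cache stampede. A minimal, hypothetical sketch of two safeguards of the kind mentioned in the remediation steps, a write killswitch and TTL jitter (names and structure are illustrative, not GitHub's implementation):

```python
import random
import time

# Killswitch: flipping this to False during an incident stops all cache
# writes immediately, so a buggy write path cannot add load.
CACHE_WRITES_ENABLED = True

_cache = {}  # user_id -> (value, expires_at)

def get_settings(user_id, compute, ttl=300):
    """Return cached settings, recomputing from the source of truth on miss."""
    now = time.time()
    entry = _cache.get(user_id)
    if entry is not None and entry[1] > now:
        return entry[0]
    value = compute(user_id)  # fall back to the source of truth
    if CACHE_WRITES_ENABLED:
        # TTL jitter spreads expirations over a window, so entries never
        # all expire (and get recalculated and rewritten) simultaneously.
        _cache[user_id] = (value, now + ttl * random.uniform(0.8, 1.2))
    return value
```

The key property is that even if the cache is wiped, repopulation happens with staggered expirations rather than in lockstep.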
We are taking the following immediate steps: <br /><br />- We have added a killswitch and improved monitoring to the caching mechanism to ensure we are notified before there is user impact and can respond swiftly. <br />- We are moving the cache mechanism to a dedicated host, ensuring that any future issues affect only the services that rely on it.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:32</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:17</var> UTC</small><br><strong>Update</strong> - We've identified the issue and have applied a mitigation. We're seeing recovery of services. We continue to monitor for full recovery.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>18:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 03 Mar 2026 20:09:17 +0000 https://eu.githubstatus.com/incidents/38gs7szkgxvj https://eu.githubstatus.com/incidents/38gs7szkgxvj EU - Incident with Copilot <p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>11:06</var> UTC</small><br><strong>Resolved</strong> - On February 26, 2026, between 09:27 UTC and 10:36 UTC, the GitHub Copilot service was degraded and users experienced errors when using Copilot features including Copilot Chat, Copilot Coding Agent and Copilot Code Review. 
During this time, 5-15% of requests to the service returned errors.<br /><br />The incident was resolved by infrastructure rebalancing.<br /><br />We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.</p><p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>11:06</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>10:22</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Thu, 26 Feb 2026 11:06:31 +0000 https://eu.githubstatus.com/incidents/6cb1qrfsydh3 https://eu.githubstatus.com/incidents/6cb1qrfsydh3 EU - Incident with Copilot Agent Sessions impacting CCA/CCR <p><small>Feb <var data-var='date'>25</var>, <var data-var='time'>16:44</var> UTC</small><br><strong>Resolved</strong> - On February 25, 2026, between 15:05 UTC and 16:34 UTC, the Copilot coding agent service was degraded, resulting in errors for 5% of all requests and impacting users starting or interacting with agent sessions. <br /><br />This was due to an internal service dependency running out of allocated resources (memory and CPU). 
We mitigated the incident by adjusting the resource allocation for the affected service, which restored normal operations for the coding agent service.<br /><br />We are working to implement proactive monitoring for resource exhaustion across our services, review and update resource allocations, and improve our alerting capabilities to reduce our time to detection and mitigation of similar issues in the future.</p><p><small>Feb <var data-var='date'>25</var>, <var data-var='time'>16:38</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 25 Feb 2026 16:44:50 +0000 https://eu.githubstatus.com/incidents/rkh4wvvhrqf6 https://eu.githubstatus.com/incidents/rkh4wvvhrqf6 EU - Incident with Copilot <p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>16:19</var> UTC</small><br><strong>Resolved</strong> - On February 23, 2026, between 14:45 UTC and 16:19 UTC, the Copilot service was degraded for the Claude Haiku 4.5 model. On average, 6% of the requests to this model failed due to an issue with an upstream provider. During this period, automated model degradation notifications directed affected users to alternative models. No other models were impacted. The upstream provider identified and resolved the issue on their end. 
<br />We are working to improve automatic model failover mechanisms to reduce our time to mitigation of issues like this one in the future.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>16:00</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>14:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Mon, 23 Feb 2026 16:19:34 +0000 https://eu.githubstatus.com/incidents/2vyccmpfpxv8 https://eu.githubstatus.com/incidents/2vyccmpfpxv8 EU - Incident with Copilot GPT-5.1-Codex <p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>11:41</var> UTC</small><br><strong>Resolved</strong> - On February 20, 2026, between 07:30 UTC and 11:21 UTC, the Copilot service experienced a degradation of the GPT 5.1 Codex model. During this time period, users encountered a 4.5% error rate when using this model. No other models were impacted.<br />The issue was resolved by a mitigation put in place by the external model provider. GitHub is working with the external model provider to further improve the resiliency of the service to prevent similar incidents in the future.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>11:19</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations [VSCode, Visual Studio, JetBrains].<br />We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>10:36</var> UTC</small><br><strong>Update</strong> - We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /></p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br />Other models are available and working as expected.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Fri, 20 Feb 2026 11:41:40 +0000 https://eu.githubstatus.com/incidents/mw9g1x4f6kv2 https://eu.githubstatus.com/incidents/mw9g1x4f6kv2 EU - Disruption with some GitHub services regarding file upload <p><small>Feb <var data-var='date'>13</var>, <var data-var='time'>22:58</var> UTC</small><br><strong>Resolved</strong> - On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. 
This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service.<br /><br />We mitigated the incident by reverting the code change that introduced the issue.<br /><br />We are working to improve automated testing for browser-side request changes and to add monitoring/automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.</p><p><small>Feb <var data-var='date'>13</var>, <var data-var='time'>22:30</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Fri, 13 Feb 2026 22:58:43 +0000 https://eu.githubstatus.com/incidents/663qlvbkm8bd https://eu.githubstatus.com/incidents/663qlvbkm8bd EU - Intermittent disruption with Copilot completions and inline suggestions <p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>16:50</var> UTC</small><br><strong>Resolved</strong> - Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.<br /><br />The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. 
We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>15:33</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.<br /></p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>14:08</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>14:06</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 12 Feb 2026 16:50:02 +0000 https://eu.githubstatus.com/incidents/rwvpcr264nd7 https://eu.githubstatus.com/incidents/rwvpcr264nd7 EU - Disruption with some GitHub services <p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>11:12</var> UTC</small><br><strong>Resolved</strong> - From Feb 12, 2026 09:16:00 UTC to Feb 12, 2026 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% and peaked at 0.0339% of requests to the service. This was caused by deploying a corrupt configuration bundle, resulting in missing data used for network interface connections by the service.<br /><br />We mitigated the incident by applying the correct configuration to each site. 
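A corrupt configuration bundle reaching production, as in the incident above, can be caught by verifying a content digest before the bundle is applied. A minimal, hypothetical sketch of such a pre-deploy corruption check (bundle format and field names are illustrative only, not GitHub's tooling):

```python
import hashlib
import json

# Hypothetical integrity check: a bundle carries a digest of its
# payload, and deployment refuses to proceed on any mismatch.
def make_bundle(config: dict) -> dict:
    payload = json.dumps(config, sort_keys=True)
    return {"payload": payload,
            "sha256": hashlib.sha256(payload.encode()).hexdigest()}

def apply_bundle(bundle: dict) -> dict:
    digest = hashlib.sha256(bundle["payload"].encode()).hexdigest()
    if digest != bundle["sha256"]:
        # A corrupt bundle is rejected here, before it reaches any site.
        raise ValueError("corrupt configuration bundle; refusing to deploy")
    return json.loads(bundle["payload"])  # parse only after integrity passes
```

The same check running post-deploy could also serve as a trigger for automatic rollback.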
We have added checks for corruption in this deployment, and will add auto-rollback detection for this service to prevent issues like this in the future.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>11:01</var> UTC</small><br><strong>Update</strong> - We have resolved the issue and are seeing full recovery.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>10:39</var> UTC</small><br><strong>Update</strong> - We are investigating an issue with downloading repository archives that include Git LFS objects.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>10:38</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 12 Feb 2026 11:12:15 +0000 https://eu.githubstatus.com/incidents/rwqr7934g1rt https://eu.githubstatus.com/incidents/rwqr7934g1rt EU - Copilot Policy Propagation Delays <p><small>Feb <var data-var='date'>10</var>, <var data-var='time'>10:01</var> UTC</small><br><strong>Resolved</strong> - GitHub experienced degraded Copilot policy propagation from enterprise to organizations between February 3 at 21:00 UTC through February 10 at 16:00 UTC. During this period, policy changes could take up to 24 hours to apply. We mitigated the issue on February 10 at 16:00 UTC after rolling back a regression that caused the delays. The propagation queue fully caught up on the delayed items by February 11 at 10:35 UTC, and policy changes now propagate normally.<br /><br />During this incident, whenever an enterprise updated a Copilot policy (including model policies), there were significant delays before those policy changes reached their child organizations and assigned users. 
The delay was caused by a large backlog in the background job queue responsible for propagating Copilot policy updates.<br /><br />Our investigation determined the incident was caused by a code change shipped on February 3 that increased the number of background jobs enqueued per policy update, in order to accommodate upcoming feature work. When new Copilot models launched on February 5th and 7th, triggering policy updates across many enterprises, the higher job volume overwhelmed the shared background worker queue, resulting in prolonged propagation delays. No policy updates were lost; they were queued and processed once the backlog cleared.<br /><br />We understand these delays disrupted policy management for customers using Copilot at scale and have taken the following immediate steps:<br /><br />1. Restored the optimized propagation path and put tests in place to avoid a regression.<br />2. Ensured upcoming features are compatible with this design. <br />3. Added alerting on queue depth to detect propagation backlogs immediately.<br /><br />GitHub is critical infrastructure for your work, your teams, and your businesses. We are focused on these mitigations and continued improvements so Copilot policy changes propagate reliably and quickly.<br /></p><p><small>Feb <var data-var='date'>10</var>, <var data-var='time'>00:52</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>10</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Update</strong> - We're continuing to address an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them.<br /> <br />This issue is understood, and we are working to get the mitigation applied. 
Next update in one hour.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>22:09</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.<br /><br />Next update in two hours.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>20:39</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.<br /><br />Next update in two hours.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>18:49</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.<br /><br />Next update in two hours.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>18:06</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>17:23</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for all customers.<br /><br />This may prevent newly enabled models from appearing when users try to access them.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>Update</strong> - 
We’ve identified an issue where Copilot policy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them.<br /><br />The team is actively investigating the cause and working on a resolution. We will provide updates as they become available.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>16:29</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 10 Feb 2026 10:01:16 +0000 https://eu.githubstatus.com/incidents/frl62n451cky https://eu.githubstatus.com/incidents/frl62n451cky EU - Incident with Pull Requests <p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Resolved</strong> - On February 6, 2026, between 17:49 UTC and 18:36 UTC, the GitHub Mobile service was degraded, and some users were unable to create pull request review comments on deleted lines (and in some cases, comments on deleted files). This impacted users on the newer comment-positioning flow available in version 1.244.0 of the mobile apps. Telemetry indicated that the failures increased as the Android rollout progressed. This was due to a defect in the new comment-positioning workflow that could result in the server rejecting comment creation for certain deleted-line positions.<br /><br />We mitigated the incident by halting the Android rollout and implementing interim client-side fallback behavior while a platform fix is in progress. The client-side fallback is scheduled to be published early this week. 
We are working to (1) add clearer client-side error handling (avoid infinite spinners), (2) improve monitoring/alerting for these failures, and (3) adopt stable diff identifiers for diff-based operations to reduce the likelihood of recurrence.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Update</strong> - Some GitHub Mobile app users may be unable to add review comments on deleted lines in pull requests. We're working on a fix and expect to release it early next week.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:04</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:00</var> UTC</small><br><strong>Update</strong> - We're currently investigating an issue affecting the Mobile app that can prevent review comments from being posted on certain pull requests when commenting on deleted lines.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>17:49</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> Fri, 06 Feb 2026 18:36:53 +0000 https://eu.githubstatus.com/incidents/0fx8lrr9pvhb https://eu.githubstatus.com/incidents/0fx8lrr9pvhb EU - Incident with Copilot <p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:56</var> UTC</small><br><strong>Resolved</strong> - On February 3, 2026, between 09:35 UTC and 10:15 UTC, GitHub Copilot experienced elevated error rates, with an average of 4% of requests failing.<br /><br />This was caused by a capacity imbalance that led to resource exhaustion on backend services. 
The incident was resolved by infrastructure rebalancing, and we subsequently deployed additional capacity.<br /><br />We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:55</var> UTC</small><br><strong>Update</strong> - We are now seeing recovery.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:21</var> UTC</small><br><strong>Update</strong> - We are investigating elevated 500s across Copilot services.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:16</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 03 Feb 2026 10:56:29 +0000 https://eu.githubstatus.com/incidents/k5tg0khmvyg3 https://eu.githubstatus.com/incidents/k5tg0khmvyg3 EU - Incident with Actions <p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>00:56</var> UTC</small><br><strong>Resolved</strong> - On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted. Self-hosted runners on other providers were not impacted. <br /><br />This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. 
More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out. <br /><br />We are working with our compute provider to improve our incident response and engagement time, improve early detection before they impact our customers, and ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage to our users that rely on GitHub’s workloads and apologize for the impact this had.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>00:55</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:50</var> UTC</small><br><strong>Update</strong> - Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners.<br />We are monitoring closely to confirm complete recovery.<br />Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:43</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:42</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:31</var> UTC</small><br><strong>Update</strong> - Pages is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>22:53</var> UTC</small><br><strong>Update</strong> - Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners.<br />Telemetry shows improvement, and we are monitoring closely for full recovery.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>22:10</var> UTC</small><br><strong>Update</strong> - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.<br />We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>21:30</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>21:13</var> UTC</small><br><strong>Update</strong> - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.<br />We have identified the root cause and are working with our upstream provider to mitigate.<br />This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>20:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Actions and Pages</p> Tue, 03 Feb 2026 00:56:05 +0000 https://eu.githubstatus.com/incidents/3fyjyy8fx4ys https://eu.githubstatus.com/incidents/3fyjyy8fx4ys EU - Incident with Actions <p><small>Feb <var data-var='date'> 1</var>, <var data-var='time'>06:21</var> UTC</small><br><strong>Resolved</strong> - On February 1, 2026 between 05:05 UTC and 05:40 UTC, customers using the Sweden stamp of GitHub Enterprise Cloud experienced workflow failures and slow job starts on GitHub Actions. During the incident, approximately 2.7% of runs failed, and around 27.5% saw start times averaging 22 minutes. The incident was caused by connection churn in our stream processing system. 
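One standard way to throttle connection churn like that described above is a token bucket that caps the rate of (re)connection attempts, forcing clients to back off instead of hammering the stream processing system. A hypothetical sketch (purely illustrative, not GitHub's implementation):

```python
import time

class ReconnectThrottle:
    """Token bucket limiting how fast a client may (re)connect."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst of connect attempts
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should back off rather than reconnect now
```

A server-side variant of the same bucket, keyed per client, prevents a single misbehaving client from churning connections for everyone.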
We've implemented connection churn throttling, improved metrics for faster detection, and are enhancing client connection tooling to prevent recurrence.</p><p><small>Feb <var data-var='date'> 1</var>, <var data-var='time'>06:20</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> Sun, 01 Feb 2026 06:21:31 +0000 https://eu.githubstatus.com/incidents/6p5q2tghfzfs https://eu.githubstatus.com/incidents/6p5q2tghfzfs EU - Copilot Chat - Grok Code Fast 1 Outage <p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>12:39</var> UTC</small><br><strong>Resolved</strong> - On Jan 21st, 2026, between 11:15 UTC and 13:00 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, more than 90% of the requests to this model failed due to an issue with an upstream provider. No other models were impacted.<br /><br />The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>12:09</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>11:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 21 Jan 2026 12:39:00 +0000 https://eu.githubstatus.com/incidents/sb4r63z7syc5 https://eu.githubstatus.com/incidents/sb4r63z7syc5 EU - Copilot's GPT-5.1 model has degraded performance <p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>10:52</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>10:32</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the GPT-5.1 model. We are also seeing an increase in failures for Copilot Code Reviews.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:53</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the GPT-5.1 model together with our model provider. Use of other models is not impacted.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:26</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance when using the GPT-5.1 model. 
We are investigating the issue.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:24</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 14 Jan 2026 10:52:12 +0000 https://eu.githubstatus.com/incidents/xfd5tdv0ggvb https://eu.githubstatus.com/incidents/xfd5tdv0ggvb EU - Disruption with some GitHub services <p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:17</var> UTC</small><br><strong>Resolved</strong> - From January 9 13:11 UTC to January 12 10:17 UTC, new Linux Custom Images generated for Larger Hosted Runners were broken and not able to run jobs. Customers who did not generate new Custom Images during this period were not impacted. This issue was caused by a change to improve the reliability of the image creation process. Due to a bug, the change triggered an unrelated protection mechanism that determines whether setup has already been attempted on the VM, causing the VM to be marked unhealthy. Only Linux images generated while the change was enabled were impacted. The issue was mitigated by rolling back the change.<br /><br />We are improving our testing around Custom Image generation as part of our GA readiness process for the public preview feature. This includes expanding our canary suite to detect this and similar interactions as part of a controlled rollout in staging prior to any customer impact.</p><p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:09</var> UTC</small><br><strong>Update</strong> - Actions jobs that use custom Linux images are failing to start. We've identified the underlying issue and are working on mitigation.</p><p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:05</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Mon, 12 Jan 2026 10:17:27 +0000 https://eu.githubstatus.com/incidents/m02fwn4b89zt https://eu.githubstatus.com/incidents/m02fwn4b89zt EU - Disruption with some GitHub services <p><small>Jan <var data-var='date'>10</var>, <var data-var='time'>02:33</var> UTC</small><br><strong>Resolved</strong> - From January 5, 2026, 00:00 UTC to January 10, 2026, 02:30 UTC, customers using the AI Controls public preview feature experienced delays in viewing Copilot agent session data. Newly created sessions took progressively longer to appear, initially hours, then eventually exceeding 24 hours. Since the page displays only the most recent 24 hours of activity, once processing delays exceeded this threshold, no recent data was visible. Session data remained available in audit logs throughout the incident.<br /><br />Inefficient database queries in the data processing pipeline caused significant processing latency, creating a multi-day backlog. As the backlog grew, the delay between when sessions occurred and when they appeared on the page increased, eventually exceeding the 24-hour display window.<br /><br />The issue was resolved on January 10, 2026, 02:30 UTC, after query optimizations and a database index were deployed. We are implementing enhanced monitoring and automated testing to detect inefficient queries before deployment to prevent recurrence.</p><p><small>Jan <var data-var='date'>10</var>, <var data-var='time'>02:33</var> UTC</small><br><strong>Update</strong> - Our queue has cleared. The last 24 hours of agent session history should now be visible on the AI Controls UI. 
No data was lost due to this incident.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>23:56</var> UTC</small><br><strong>Update</strong> - We estimate the backlogged queue will take 3 hours to process. We will post another update once it is completed, or if anything changes with the recovery process.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>23:44</var> UTC</small><br><strong>Update</strong> - We have deployed an additional fix and are beginning to see recovery to the queue preventing AI Sessions from showing in the AI Controls UI. We are working on an estimate for when the queue will be fully processed, and will post another update once we have that information.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>22:41</var> UTC</small><br><strong>Update</strong> - We are seeing delays processing the AI Session event queue, which is causing sessions to not be displayed on the AI Controls UI. We have deployed a fix to improve the queue processing and are monitoring for effectiveness. 
We continue to investigate other mitigation paths.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>21:36</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>21:08</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>20:07</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>19:35</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>19:02</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>18:39</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>18:08</var> UTC</small><br><strong>Update</strong> - Agent Session activity is still observable in audit logs, and this only impacts the AI Controls UI.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>17:57</var> UTC</small><br><strong>Update</strong> - We are investigating an incident involving missing Agent Session data on the AI Settings page of the Agent Control Plane.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>17:54</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted 
performance for some GitHub services.</p> Sat, 10 Jan 2026 02:33:19 +0000 https://eu.githubstatus.com/incidents/jsj78q1bd9t1 https://eu.githubstatus.com/incidents/jsj78q1bd9t1 EU - Incident with multiple services <p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>13:08</var> UTC</small><br><strong>Resolved</strong> - On January 8th, 2026, between 04:00 UTC and 12:44 UTC, multiple GitHub services were partially degraded and had delayed response times or returned errors. The error rate was low throughout the incident, peaking at 2.65% of requests for customers of GitHub Enterprise Cloud with Data Residency in the EU. This was due to a degradation in one of our underlying infrastructure providers.<br /><br />The incident was mitigated by partially draining the degraded infrastructure and escalating with the provider. In order to address these types of issues more promptly in the future, we have tuned the sensitivity of our monitoring in this area and updated our escalation policies accordingly. 
We are also implementing improvements to better isolate and limit the impact of partial infrastructure degradations.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>12:52</var> UTC</small><br><strong>Update</strong> - We're beginning to see recovery across multiple services and will continue to monitor.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>12:11</var> UTC</small><br><strong>Update</strong> - We are seeing intermittent failures broadly impacting services due to an ongoing networking issue.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>11:58</var> UTC</small><br><strong>Update</strong> - We're investigating issues impacting multiple services resulting in intermittent errors.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>11:57</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> Thu, 08 Jan 2026 13:08:46 +0000 https://eu.githubstatus.com/incidents/5zll45hr4ggs https://eu.githubstatus.com/incidents/5zll45hr4ggs EU - Some models missing in Copilot <p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>21:07</var> UTC</small><br><strong>Resolved</strong> - On January 7th, 2026, between 17:16 and 19:33 UTC, Copilot Pro and Copilot Business users were unable to use certain premium models, including Claude Opus 4.5 and GPT-5.2. This was due to a misconfiguration of Copilot models that inadvertently marked these premium models as inaccessible for users with Copilot Pro and Copilot Business licenses.<br /><br />We mitigated the incident by reverting the erroneous config change. 
We are improving our testing processes to reduce the risk of similar incidents in the future, and refining our model availability alerting to improve detection time.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:43</var> UTC</small><br><strong>Update</strong> - We have implemented a mitigation and confirmed that Copilot Pro and Business accounts now have access to the previously missing models. We will continue monitoring to ensure complete resolution.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Update</strong> - We continue to investigate. We'll post another update by 19:50 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:10</var> UTC</small><br><strong>Update</strong> - Correction - Copilot Pro and Business users are impacted. Copilot Pro+ and Enterprise users are not impacted.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:06</var> UTC</small><br><strong>Update</strong> - We continue to investigate this problem and have confirmed only Copilot Business users are impacted. We'll post another update by 19:30 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:44</var> UTC</small><br><strong>Update</strong> - We are currently investigating reports of some Copilot Pro premium models including Opus and GPT 5.2 being unavailable in Copilot products. We'll post another update by 19:08 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:33</var> UTC</small><br><strong>Update</strong> - We have received reports that some expected models are missing from VSCode and other products using Copilot. 
We are investigating the cause of this issue in order to restore access.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:32</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 07 Jan 2026 21:07:10 +0000 https://eu.githubstatus.com/incidents/g0r7h47f5c34 https://eu.githubstatus.com/incidents/g0r7h47f5c34 EU - Incident with Copilot <p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>10:08</var> UTC</small><br><strong>Resolved</strong> - On January 6th, 2026, between approximately 08:41 and 10:07 UTC, the Copilot service experienced a degradation of the GPT-5.1-Codex-Max model due to an issue with our upstream provider. During this time, up to 14.17% of requests to GPT-5.1-Codex-Max failed. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>10:07</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and GPT-5.1-Codex-Max is once again available.<br />We will continue monitoring to ensure stability.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>09:03</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the GPT-5.1-Codex-Max model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>08:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 06 Jan 2026 10:08:05 +0000 https://eu.githubstatus.com/incidents/j6fg3fdxbl0t https://eu.githubstatus.com/incidents/j6fg3fdxbl0t EU - Incident with Copilot Grok Code Fast 1 <p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>15:45</var> UTC</small><br><strong>Resolved</strong> - On Dec 15th, 2025, between 14:00 UTC and 15:45 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, 4% of the requests to this model failed due to an issue with our upstream provider. No other models were impacted.<br /><br />The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are continuing to work with our provider on resolving the incident with Grok Code Fast 1. Users can expect some requests to intermittently fail until all issues are resolved.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>14:13</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>14:12</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Mon, 15 Dec 2025 15:45:53 +0000 https://eu.githubstatus.com/incidents/381vryvt2k62 https://eu.githubstatus.com/incidents/381vryvt2k62 EU - Incident with Git Operations <p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:55</var> UTC</small><br><strong>Resolved</strong> - On December 12, 2025, between 18:10 UTC and 20:10 UTC, Git Operations for GitHub Data Residency environments experienced periods of failed or delayed git requests to repository, raw, and archive data. On average, the error rate was 4% and peaked at 23% of total requests. This was due to an infrastructure configuration change.<br /><br />We mitigated the incident by updating our configuration and adding additional capacity to serve the traffic spikes.<br /><br />We are working to improve our change management in order to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:55</var> UTC</small><br><strong>Update</strong> - The GHEC-DR Sweden region has also seen full recovery. At this time, all services are expected to be operating normally.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:49</var> UTC</small><br><strong>Update</strong> - Git Operations is operating normally.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:43</var> UTC</small><br><strong>Update</strong> - We have applied the mitigation to all GHEC-DR environments, and are seeing recovery for all regions except Sweden. 
We're investigating the remaining impact for this region.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:28</var> UTC</small><br><strong>Update</strong> - We have identified the issue and are working to mitigate it.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:24</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:17</var> UTC</small><br><strong>Update</strong> - We are currently investigating elevated error rates with Git operations in GHEC-DR environments.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:17</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Git Operations</p> Fri, 12 Dec 2025 20:55:15 +0000 https://eu.githubstatus.com/incidents/5rwlsm2d25tp https://eu.githubstatus.com/incidents/5rwlsm2d25tp