tag:www.githubstatus.com,2005:/history GitHub Status - Incident History 2026-03-16T05:28:43Z GitHub tag:www.githubstatus.com,2005:Incident/28968105 2026-03-13T16:15:33Z 2026-03-13T16:15:33Z Degraded performance for various services <p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>16:15</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>16:02</var> UTC</small><br><strong>Update</strong> - We have deployed mitigations and are actively monitoring for recovery. We'll post another update by 17:00 UTC.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:47</var> UTC</small><br><strong>Update</strong> - We are investigating intermittent performance degradation affecting Actions, Feeds, Issues, Package Registry, Profiles, Registry Metadata, Star, and User Dashboard. Users may experience elevated error rates and slower response times when accessing these services. We have identified a potential cause and are implementing mitigations to restore normal service. We'll post another update by 16:15 UTC.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:20</var> UTC</small><br><strong>Update</strong> - Packages is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:14</var> UTC</small><br><strong>Update</strong> - We are investigating reports of issues with service(s): Actions, Feeds, Issues, Profiles, Registry Metadata, Star, User Dashboard. We will continue to keep users updated on progress towards mitigation.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:12</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions and Issues</p> tag:www.githubstatus.com,2005:Incident/28942649 2026-03-12T18:53:33Z 2026-03-12T18:53:33Z Degraded Codespaces experience <p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>18:53</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>17:59</var> UTC</small><br><strong>Update</strong> - Codespaces IPs are no longer being blocked from Visual Studio Marketplace operations and we are monitoring for full recovery</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>17:20</var> UTC</small><br><strong>Update</strong> - We're seeing intermittent failures downloading from the extension marketplace from codespaces, caused by IP blocks for some codespaces. We're working to remove those blocks.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>16:09</var> UTC</small><br><strong>Update</strong> - We're seeing intermittent failures downloading from the extension marketplace from codespaces and are investigating.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>15:08</var> UTC</small><br><strong>Update</strong> - We're seeing partial recovery for the issue affecting extension installation in newly created Codespaces. Some users may still experience degraded functionality where extensions hit errors. 
The team continues to investigate the root cause while monitoring the recovery.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>14:29</var> UTC</small><br><strong>Update</strong> - We have deployed a fix for the issue affecting extension installation in newly created Codespaces. New Codespaces are now being created with working extensions. We'll post another update by 15:30 UTC.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>13:50</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate an issue where extensions fail to install in newly created Codespaces. Users can create and access Codespaces, but extensions will not be operational, resulting in a degraded experience. The team is working on a fix. All newly created Codespaces are affected. We'll post another update by 15:00 UTC.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>13:07</var> UTC</small><br><strong>Update</strong> - We're investigating an issue where extensions fail to install in newly created Codespaces. Users can still create and access Codespaces, but extensions will not be operational, resulting in a degraded development experience. Our team is actively working to identify and resolve the root cause. We'll post another update by 14:00 UTC.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>13:06</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Codespaces</p> tag:www.githubstatus.com,2005:Incident/28935006 2026-03-12T06:02:07Z 2026-03-12T06:02:07Z Actions failures to download (401 Unauthorized) <p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>06:02</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>06:02</var> UTC</small><br><strong>Monitoring</strong> - Actions is operating normally.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>05:40</var> UTC</small><br><strong>Update</strong> - We are continuing investigation of reports of degraded performance for Actions and GitHub Apps</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>04:46</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> tag:www.githubstatus.com,2005:Incident/28933013 2026-03-12T02:45:55Z 2026-03-12T02:45:55Z Disruption with some GitHub services <p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>02:45</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>02:44</var> UTC</small><br><strong>Update</strong> - We've identified the root cause and are working on resolving the underlying issue. Some users may have encountered intermittent failures and errors. We're continuing to see reduced error rates.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>02:13</var> UTC</small><br><strong>Update</strong> - We are investigating elevated error rates. 
Error rates are now decreasing and we're continuing to monitor the situation.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>01:54</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28922400 2026-03-11T15:53:15Z 2026-03-13T20:03:40Z Degraded experience with Copilot Code Review <p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:53</var> UTC</small><br><strong>Resolved</strong> - On March 11, 2026, between 13:00 UTC and 15:23 UTC the Copilot Code Review service was degraded and experienced longer than average review times. On average, Copilot Code Review requests took 4 minutes and peaked at just under 8 minutes. This was due to hitting worker capacity limits and CPU throttling. We mitigated the incident by increasing partitions, and we are improving our resource monitoring to identify potential issues sooner.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:53</var> UTC</small><br><strong>Update</strong> - Copilot Code Review queue processing has returned to normal levels.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:31</var> UTC</small><br><strong>Update</strong> - We experienced degraded performance with Copilot Code Review starting at 14:01 UTC. Customers experienced extended review times and occasional failures. Some extended processing times may continue briefly. We are monitoring for full recovery. We'll post another update by 16:30 UTC.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>14:28</var> UTC</small><br><strong>Monitoring</strong> - We are investigating degraded performance with Copilot Code Review. Customers may experience extended review times or occasional failures. We are seeing signs of improvement as our team works to restore normal service. We'll post another update by 15:30 UTC.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>14:25</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28922599 2026-03-11T15:02:23Z 2026-03-11T15:02:23Z Incident with API Requests <p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:02</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:02</var> UTC</small><br><strong>Update</strong> - We are investigating elevated timeouts that affected GitHub API requests. The incident began at 14:37 UTC. Some users experienced slower response times and request failures. System metrics have returned to normal levels, and we are now investigating the root cause to prevent recurrence.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>14:37</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for API Requests</p> tag:www.githubstatus.com,2005:Incident/28882382 2026-03-09T17:03:40Z 2026-03-09T17:03:40Z Incident with Webhooks <p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>17:03</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. 
A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>17:03</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>15:56</var> UTC</small><br><strong>Update</strong> - We are experiencing latency on the API and UI endpoints. We are working to resolve the issue.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>15:50</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Webhooks</p> tag:www.githubstatus.com,2005:Incident/28870787 2026-03-09T03:51:42Z 2026-03-10T18:32:23Z Incident with Codespaces <p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:51</var> UTC</small><br><strong>Resolved</strong> - On March 9, 2026, between 01:23 UTC and 03:25 UTC, users attempting to create or resume codespaces in the Australia East region experienced elevated failures, peaking at a 100% failure rate for this region. Codespaces in other regions were not affected.<br /><br />The create and resume failures were caused by degraded network connectivity between our control plane services and the VMs hosting the codespaces. This was resolved by redirecting traffic to an alternate site within the region. While we are addressing the core network infrastructure issue, we have also improved our observability of components in this area to improve detection. This will also enable our existing automated failovers to cover this failure mode. These changes will prevent or significantly reduce the time any similar incident causes user impact.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:51</var> UTC</small><br><strong>Update</strong> - This incident has been resolved. New Codespace creation requests are now completing successfully.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:32</var> UTC</small><br><strong>Update</strong> - We are seeing recovery, with the failure rate for new Codespace creation requests dropping from 5% to about 3%.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:04</var> UTC</small><br><strong>Update</strong> - We are seeing about 5% of new Codespace creation requests failing. We are investigating the root cause and identifying the impacted regions.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:04</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Codespaces</p> tag:www.githubstatus.com,2005:Incident/28829528 2026-03-06T23:28:13Z 2026-03-12T16:53:04Z Incident with Webhooks <p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>23:28</var> UTC</small><br><strong>Resolved</strong> - On March 6, 2026, between 16:16 UTC and 23:28 UTC the Webhooks service was degraded and some users experienced intermittent errors when accessing webhook delivery histories, retrying webhook deliveries, and listing webhooks via the UI and API. On average, the error rate was 0.57% and peaked at approximately 2.73% of requests to the service. 
This was due to unhealthy infrastructure affecting a portion of webhook API traffic.<br /><br />We mitigated the incident by redeploying affected services, after which service health returned to normal.<br /><br />We are working to improve detection of unhealthy infrastructure and strengthen service safeguards to reduce time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>23:28</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>23:26</var> UTC</small><br><strong>Update</strong> - We have deployed a fix and are observing a full recovery. The affected endpoint was the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. We will continue monitoring to confirm stability.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>22:35</var> UTC</small><br><strong>Update</strong> - We are preparing a new mitigation for the issue affecting the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>21:34</var> UTC</small><br><strong>Update</strong> - The previous mitigation did not resolve the issue. We are investigating further. The affected endpoint is the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>20:18</var> UTC</small><br><strong>Update</strong> - We have deployed a fix for the issue causing some users to experience intermittent failures when accessing the Webhooks API and configuration pages. 
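<p>For reference, the endpoint linked above is the documented REST API for listing webhook deliveries (GET /repos/{owner}/{repo}/hooks/{hook_id}/deliveries). The sketch below shows one way a customer could poll it to check delivery health during an incident like this; the owner, repository, hook ID, and token are placeholder values, not anything specific to this incident.</p>
<pre><code># Minimal sketch: list recent deliveries for a repository webhook via the REST API.
# OWNER, REPO, HOOK_ID and the GITHUB_TOKEN environment variable are placeholders.
import os
import requests

OWNER, REPO, HOOK_ID = "octocat", "hello-world", 12345678
url = f"https://api.github.com/repos/{OWNER}/{REPO}/hooks/{HOOK_ID}/deliveries"
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "X-GitHub-Api-Version": "2022-11-28",
}

resp = requests.get(url, headers=headers, params={"per_page": 30}, timeout=10)
resp.raise_for_status()
for delivery in resp.json():
    # Each delivery record includes, among other fields, its id, the HTTP status
    # code returned by the receiver, and the time it was delivered.
    print(delivery["id"], delivery["status_code"], delivery["delivered_at"])
</code></pre>
<p>The organization and integration variants mentioned in the updates follow the same pattern under their respective routes in the linked documentation.</p>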
We are monitoring to confirm full recovery.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>19:39</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>19:07</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>18:39</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>18:07</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore full service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Update</strong> - Our engineers have identified the root cause and are actively implementing mitigations to restore full service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>17:19</var> UTC</small><br><strong>Update</strong> - This problem is impacting less than 1% of UI and webhook API calls.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>17:12</var> UTC</small><br><strong>Update</strong> - We are investigating an issue affecting a subset of customers experiencing errors when viewing webhook delivery histories and retrying webhook deliveries. The UI and webhook API are impacted. Engineers have identified the cause and are actively working on mitigation.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>16:58</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Webhooks</p> tag:www.githubstatus.com,2005:Incident/28813935 2026-03-05T23:55:20Z 2026-03-05T23:55:20Z Actions is experiencing degraded availability <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:55</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:40</var> UTC</small><br><strong>Update</strong> - We are close to full recovery. Actions and dependent services should be functioning normally now.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:37</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:15</var> UTC</small><br><strong>Update</strong> - Actions and dependent services, including Pages, are recovering.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:00</var> UTC</small><br><strong>Update</strong> - We applied a mitigation and we should see a recovery soon.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>22:54</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded availability.
We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>22:53</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> tag:www.githubstatus.com,2005:Incident/28808429 2026-03-05T19:30:54Z 2026-03-06T17:21:02Z Multiple services are affected, service degradation <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>19:30</var> UTC</small><br><strong>Resolved</strong> - On Mar 5, 2026, between 16:24 UTC and 19:30 UTC, Actions was degraded. During this time, 95% of workflow runs failed to start within 5 minutes with an average delay of 30 minutes, and 10% of workflow runs failed with an infrastructure error. This was due to Redis infrastructure updates that were being rolled out to production to improve our resiliency. These changes introduced a set of incorrect configuration changes into our Redis load balancer, causing internal traffic to be routed to an incorrect host and leading to two incidents. <br /><br />We mitigated this incident by correcting the misconfigured load balancer. Actions jobs were running successfully starting at 17:24 UTC. The remaining time until we closed the incident was spent burning through the queue of jobs. <br /><br />We immediately rolled back the updates that were a contributing factor and have frozen all changes in this area until we have completed follow-up work from this. We are working to improve our automation to ensure incorrect configuration changes are not able to propagate through our infrastructure. We are also working on improved alerting to catch misconfigured load balancers before they become an incident. Additionally, we are updating the Redis client configuration in Actions to improve resiliency to brief cache interruptions.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>19:17</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>19:05</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>18:59</var> UTC</small><br><strong>Update</strong> - Actions is now fully recovered.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>18:15</var> UTC</small><br><strong>Update</strong> - The queue of requested Actions jobs continues to make progress. Job delays are now approximately 6 minutes and continuing to decrease.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>17:48</var> UTC</small><br><strong>Update</strong> - We are back to queueing Actions workflow runs at nominal rates and we are monitoring the clearing of runs that were queued during the incident.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>17:25</var> UTC</small><br><strong>Update</strong> - We have applied mitigations for connection failures across backend resources and we are observing a recovery in queueing Actions workflow runs.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:52</var> UTC</small><br><strong>Update</strong> - We are observing delays in queuing Actions workflow runs. We’re still investigating the causes of these delays.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:47</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded availability.
We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:41</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:35</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> tag:www.githubstatus.com,2005:Incident/28795113 2026-03-05T01:30:37Z 2026-03-06T20:15:53Z Disruption with some GitHub services <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Resolved</strong> - On March 5, 2026, between 12:53 UTC and 13:35 UTC, the Copilot mission control service was degraded. This resulted in empty responses returned for users' agent session lists across GitHub web surfaces. Impacted users were unable to see their lists of current and previous agent sessions in GitHub web surfaces. This was caused by an incorrect database query that falsely excluded records with an absent field.<br /><br />We mitigated the incident by rolling back the database query change. There were no data alterations or deletions during the incident.<br /><br />To prevent similar issues in the future, we're improving our monitoring depth to more easily detect degradation before changes are fully rolled out.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Update</strong> - Copilot coding agent mission control is fully restored. Tasks are now listed as expected.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:21</var> UTC</small><br><strong>Update</strong> - Users were temporarily unable to see tasks listed in mission control surfaces. The ability to submit new tasks, view existing tasks via direct link, or manage tasks was unaffected throughout. A revert is currently being deployed and we are seeing recovery.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28794783 2026-03-05T01:13:31Z 2026-03-11T19:35:39Z Some OpenAI models degraded in Copilot <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Resolved</strong> - On March 5, 2026, between approximately 00:26 and 00:44 UTC, the Copilot service experienced a degradation of the GPT-5.3 Codex model due to an issue with our upstream provider. Users encountered elevated error rates when using GPT-5.3 Codex, impacting approximately 30% of requests. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:53</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider.
We are working with them to resolve the issue.<br /></p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:47</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/28771344 2026-03-03T21:11:30Z 2026-03-03T23:03:29Z Claude Opus 4.6 Fast not appearing for some Copilot users <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:11</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access.<br /><br />We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional and no platform regression had occurred.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:05</var> UTC</small><br><strong>Update</strong> - We believe that all expected users still have access to Claude Opus 4.6. We confirm that no users have lost access.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:31</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/28769993 2026-03-03T20:09:16Z 2026-03-06T00:31:53Z Incident with all GitHub services <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:09</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 18:46 UTC and 20:09 UTC, GitHub experienced a period of degraded availability impacting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other dependent services. At the peak of the incident, GitHub.com request failures reached approximately 40%. During the same period, approximately 43% of GitHub API requests failed. Git operations over HTTP had an error rate of approximately 6%, while SSH was not impacted. GitHub Copilot requests had an error rate of approximately 21%. GitHub Actions experienced less than 1% impact. <br /><br />This incident shared the same underlying cause as an incident in early February where we saw a large volume of writes to the user settings caching mechanism. While deploying a change to reduce the burden of these writes, a bug caused every user’s cache to expire, get recalculated, and get rewritten. The increased load caused replication delays that cascaded down to all affected services. We mitigated this issue by immediately rolling back the faulty deployment. <br /><br />We understand these incidents disrupted the workflows of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, we acknowledge we have more work to do. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps: <br /><br />- We have added a killswitch and improved monitoring to the caching mechanism to ensure we are notified before there is user impact and can respond swiftly. 
<br />- We are moving the cache mechanism to a dedicated host, ensuring that any future issues will solely affect services that rely on it.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:06</var> UTC</small><br><strong>Update</strong> - We're seeing recovery across all services. We're continuing to monitor for full recovery.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:55</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:54</var> UTC</small><br><strong>Update</strong> - Git Operations is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:36</var> UTC</small><br><strong>Update</strong> - Git Operations is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:33</var> UTC</small><br><strong>Update</strong> - We are seeing recovery across multiple services. Impact is mostly isolated to git operations at this point; we continue to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:31</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:31</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:28</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:27</var> UTC</small><br><strong>Update</strong> - Issues is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:25</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:25</var> UTC</small><br><strong>Update</strong> - Codespaces is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:24</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:23</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:17</var> UTC</small><br><strong>Update</strong> - We've identified the issue and have applied a mitigation. We're seeing recovery of services. We continue to monitor for full recovery.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:15</var> UTC</small><br><strong>Update</strong> - API Requests is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:14</var> UTC</small><br><strong>Update</strong> - API Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:11</var> UTC</small><br><strong>Update</strong> - Codespaces is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:05</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded availability.
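<p>As a hedged illustration of the killswitch described in the summary above: the general pattern is to gate the caching mechanism's writes behind an operator-controlled flag, so a runaway rewrite storm can be stopped without a deploy. All names below are hypothetical; GitHub's internal implementation is not public.</p>
<pre><code># Hypothetical sketch of a cache-write killswitch (not GitHub's internal code).
import time

FLAGS = {"settings_cache_writes_enabled": True}  # stand-in for a feature-flag service

def refresh_user_settings_cache(cache, user_id, compute_settings):
    settings = compute_settings(user_id)
    if not FLAGS["settings_cache_writes_enabled"]:
        # Killswitch engaged: serve the recalculated settings without writing them
        # back, avoiding the replication pressure of mass cache rewrites.
        return settings
    cache[user_id] = (settings, time.time())  # normal path: refresh the cache entry
    return settings
</code></pre>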
We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:04</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:03</var> UTC</small><br><strong>Update</strong> - We're seeing some service degradation across GitHub services. We're currently investigating impact.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:02</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:00</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:00</var> UTC</small><br><strong>Update</strong> - API Requests is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>18:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Actions, Copilot and Issues</p> tag:www.githubstatus.com,2005:Incident/28753588 2026-03-03T05:54:17Z 2026-03-04T20:46:37Z Delayed visibility of newly added issues on project boards <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>05:54</var> UTC</small><br><strong>Resolved</strong> - Between March 2, 21:42 UTC and March 3, 05:54 UTC project board updates, including adding new issues, PRs, and draft items to boards, were delayed from 30 minutes to over 2 hours, as a large backlog of messages accumulated in the Projects data denormalization pipeline.<br /><br />The incident was caused by an anomalously large event that required longer processing time than expected. Processing this message exceeded the Kafka consumer heartbeat timeout, triggering repeated consumer group rebalances. As a result, the consumer group was unable to make forward progress, creating head-of-line blocking that delayed processing of subsequent project board updates.<br /><br />We mitigated the issue by deploying a targeted fix that safely bypassed the offending message and allowed normal message consumption to resume. Consumer group stability recovered at 04:10 UTC, after which the backlog began draining. All queued messages were fully processed by 05:53 UTC, returning project board updates to normal processing latency.<br /><br />We have identified several follow-up improvements to reduce the likelihood and impact of similar incidents in the future, including improved monitoring and alerting, as well as introducing limits for unusually large project events.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>05:53</var> UTC</small><br><strong>Update</strong> - This incident has been resolved. Project board updates are now processing in near-real-time.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>04:36</var> UTC</small><br><strong>Update</strong> - The backlog of delayed updates is expected to fully clear within approximately 1 hour, after which project board updates will return to near-real-time.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>04:17</var> UTC</small><br><strong>Update</strong> - The fix has been deployed and processing speeds have returned to normal. 
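<p>To illustrate the failure mode described in the summary above (one oversized event exceeding the consumer's poll deadline and triggering repeated rebalances), here is a rough kafka-python sketch of the two levers involved: a longer poll interval and a targeted skip of the offending offset. The topic name, offsets, and handler are illustrative only and are not GitHub's actual pipeline.</p>
<pre><code># Rough sketch (kafka-python): give slow messages more headroom and step over a
# known-bad offset so one oversized event cannot stall the whole consumer group.
from kafka import KafkaConsumer

def process(payload):
    ...  # application-specific denormalization work (placeholder)

consumer = KafkaConsumer(
    "projects-denormalization",          # illustrative topic name
    bootstrap_servers=["localhost:9092"],
    group_id="projects-denormalizer",
    enable_auto_commit=False,
    max_poll_interval_ms=600_000,        # allow longer processing before a rebalance
)

SKIP = {("projects-denormalization", 0): 123456}  # (topic, partition) -> bad offset

for msg in consumer:
    if SKIP.get((msg.topic, msg.partition)) == msg.offset:
        consumer.commit()                # bypass the poison message and move on
        continue
    process(msg.value)
    consumer.commit()                    # commit only after successful processing
</code></pre>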
There is a backlog of delayed updates that will continue to be worked through — we're estimating how long that will take and will provide an update in the next 60 minutes.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>03:22</var> UTC</small><br><strong>Update</strong> - The fix is still building and is expected to deploy within 60 minutes. The current delay for GitHub Projects updates has increased to up to 5 hours.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>02:27</var> UTC</small><br><strong>Update</strong> - We're deploying a fix targeting the increased delay in GitHub Projects updates. The rollout should complete within 60 minutes. If successful, the current delay of up to 4 hours should begin to decrease.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>01:40</var> UTC</small><br><strong>Update</strong> - The delay for project board updates has increased to up to 3 hours. We've identified a potential cause and are working on remediation.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>00:52</var> UTC</small><br><strong>Update</strong> - Project board updates — including adding issues, pull requests, and changing fields such as "Status" — are currently delayed by 1–2 hours. Normal behavior is near-real-time. We're actively investigating the root cause.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>00:05</var> UTC</small><br><strong>Update</strong> - The impact extends beyond adding issues to project boards. Adding pull requests and updating fields such as "Status" may also be affected. We're continuing to investigate the root cause.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>23:46</var> UTC</small><br><strong>Update</strong> - Newly added issues are taking 30–60 minutes to appear on project boards, compared to the normal near-real-time behavior. We're investigating the root cause and possible mitigations.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>23:12</var> UTC</small><br><strong>Update</strong> - Newly added issues can take up to 30 minutes to appear on project boards. We're investigating the cause of this delay.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>23:11</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>23:10</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28750704 2026-03-02T22:04:27Z 2026-03-05T00:04:22Z Incident with Pull Requests /pulls <p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>22:04</var> UTC</small><br><strong>Resolved</strong> - On March 2nd, 2026, between 7:10 UTC and 22:04 UTC the pull requests service was degraded. Users navigating between tabs on the pull requests dashboard were met with 404 errors or blank pages.<br /><br />This was due to a configuration change deployed on February 27th at 11:03 PM UTC. We mitigated the incident by reverting the change.<br /><br />We’re working to improve monitoring for the page to automatically detect and alert us to routing failures.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>22:04</var> UTC</small><br><strong>Update</strong> - The issue on https://github.com/pulls is now fully resolved. 
All tabs are working again.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>21:04</var> UTC</small><br><strong>Update</strong> - We're deploying a fix for pull request filtering. Full rollout across all regions is expected within 60 minutes.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>20:02</var> UTC</small><br><strong>Update</strong> - We are experiencing issues with the Pull Requests dashboard that prevent users from filtering their pull requests. We have identified a mitigation and are deploying a fix. We'll post another update by 21:00 UTC.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>19:23</var> UTC</small><br><strong>Update</strong> - We are seeing a degraded experience when attempting to filter the /pulls dashboard. We are working on a mitigation.</p><p><small>Mar <var data-var='date'> 2</var>, <var data-var='time'>19:11</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> tag:www.githubstatus.com,2005:Incident/28703281 2026-02-27T23:49:05Z 2026-03-04T18:12:15Z Incident with Copilot agent sessions <p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>23:49</var> UTC</small><br><strong>Resolved</strong> - On February 27, 2026, between 22:53 UTC and 23:46 UTC, the Copilot coding agent service experienced elevated errors and degraded functionality for agent sessions. Approximately 87% of attempts to start or interact with agent sessions encountered errors during this period.<br /><br />This was due to an expired authentication credential for an internal service component, which prevented Copilot agent session operations from completing successfully.<br /><br />We mitigated the incident by rotating the expired credential and deploying the updated configuration to production. Services began recovering within minutes of the fix being deployed.<br /><br />We are working to improve automated credential rotation coverage across all Copilot service components, add proactive alerting for credentials approaching expiration, and validate configuration consistency to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>23:45</var> UTC</small><br><strong>Update</strong> - We have identified the cause of the elevated errors and are rolling out a fix to production. 
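<p>The proactive expiry alerting mentioned in the summary above generally amounts to comparing each credential's expiry timestamp against a warning window on a schedule. A minimal sketch follows; the credential name, dates, and threshold are hypothetical.</p>
<pre><code># Hypothetical sketch: flag service credentials well before they expire,
# instead of discovering the expiry as an outage.
from datetime import datetime, timedelta, timezone

WARN_WINDOW = timedelta(days=14)

def credentials_needing_rotation(credentials, now=None):
    """credentials: iterable of (name, expires_at) pairs with timezone-aware datetimes."""
    now = now or datetime.now(timezone.utc)
    return [(name, exp) for name, exp in credentials if now + WARN_WINDOW >= exp]

# Example with an illustrative credential expiring within the warning window.
creds = [("copilot-agent-service-token", datetime(2026, 3, 10, tzinfo=timezone.utc))]
for name, exp in credentials_needing_rotation(creds, now=datetime(2026, 3, 1, tzinfo=timezone.utc)):
    print(f"ALERT: rotate {name} before {exp:%Y-%m-%d}")
</code></pre>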
We are observing initial recovery in Copilot agent sessions.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>23:35</var> UTC</small><br><strong>Update</strong> - We are investigating networking issues with some requests to our models.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>23:18</var> UTC</small><br><strong>Update</strong> - We are investigating a spike in errors in Copilot agent sessions</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>23:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/28688857 2026-02-27T06:04:02Z 2026-02-27T21:04:29Z Code view fails to load when content contains some non-ASCII characters <p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>06:04</var> UTC</small><br><strong>Resolved</strong> - Starting February 26, 2026 at 22:10 UTC through February 27, 05:50 UTC, the repository browsing UI was degraded and users were unable to load pages for files and directories with non-ASCII characters (including Japanese, Chinese, and other non-Latin scripts). On average, the error rate was 0.014% and peaked at 0.06% of requests to the service. Affected users saw 404 errors when navigating to repository directories and files with non-ASCII names. This was due to a code change that altered how file and directory names were processed, which caused incorrectly formatted data to be stored in an application cache.<br /><br />We mitigated the incident by deploying a fix that invalidated the affected cache entries and progressively rolling it out across all production environments.<br /><br />We are working to improve our pre-production testing to cover non-ASCII character handling, establish better cache invalidation mechanisms, and enhance our monitoring to detect this type of failure mode earlier, to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>06:03</var> UTC</small><br><strong>Update</strong> - We have cleared all caches and everything is operating normally.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>05:21</var> UTC</small><br><strong>Update</strong> - We have mitigated the issue but are working on invalidating caches in order to fix the issue for all impacted repos.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>04:17</var> UTC</small><br><strong>Update</strong> - We have performed a mitigation but some repositories may still see issues. We are working on a full mitigation.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>03:28</var> UTC</small><br><strong>Update</strong> - We are looking into recent code changes to mitigate the error loading some code view pages.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>03:08</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28686655 2026-02-27T00:04:01Z 2026-03-05T21:17:06Z High latency on webhook API requests <p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>00:04</var> UTC</small><br><strong>Resolved</strong> - Between February 26, 2026 UTC and February 27, 2026 UTC, customers hitting the webhooks delivery API may have experienced higher latency or failed requests. 
During the impact window, 0.82% of requests took longer than 3s and 0.004% resulted in a 500 error response.<br /><br />Our monitors caught the impact on the individual backing data source, and we were able to attribute the degradation to a noisy neighbor effect due to requests to a specific webhook generating excessive load on the API. The incident was mitigated once traffic from the specific hook decreased.<br /><br />We have since added a rate limiter for this webhooks API to prevent similar spikes in usage from impacting others and will further refine the rate limits for other webhook API routes to help prevent similar occurrences in the future.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>00:02</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded performance. We are continuing to investigate.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>00:01</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28676547 2026-02-26T11:06:32Z 2026-03-03T08:32:53Z Incident with Copilot <p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>11:06</var> UTC</small><br><strong>Resolved</strong> - On February 26, 2026, between 09:27 UTC and 10:36 UTC, the GitHub Copilot service was degraded and users experienced errors when using Copilot features including Copilot Chat, Copilot Coding Agent and Copilot Code Review. During this time, 5-15% of affected requests to the service returned errors.<br /><br />The incident was resolved by infrastructure rebalancing.<br /><br />We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.</p><p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>11:06</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>10:22</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/28663359 2026-02-25T16:44:50Z 2026-03-04T16:06:12Z Incident with Copilot Agent Sessions impacting CCA/CCR <p><small>Feb <var data-var='date'>25</var>, <var data-var='time'>16:44</var> UTC</small><br><strong>Resolved</strong> - On February 25, 2026, between 15:05 UTC and 16:34 UTC, the Copilot coding agent service was degraded, resulting in errors for 5% of all requests and impacting users starting or interacting with agent sessions. <br /><br />This was due to an internal service dependency running out of allocated resources (memory and CPU).
We mitigated the incident by adjusting the resource allocation for the affected service, which restored normal operations for the coding agent service.<br /><br />We are working to implement proactive monitoring for resource exhaustion across our services, review and update resource allocations, and improve our alerting capabilities to reduce our time to detection and mitigation of similar issues in the future.</p><p><small>Feb <var data-var='date'>25</var>, <var data-var='time'>16:38</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/28630011 2026-02-24T00:46:29Z 2026-02-26T22:55:02Z Code search experiencing degraded performance <p><small>Feb <var data-var='date'>24</var>, <var data-var='time'>00:46</var> UTC</small><br><strong>Resolved</strong> - Between 2026-02-23 19:10 and 2026-02-24 00:46 UTC, all lexical code search queries in GitHub.com and the code search API were significantly slowed, and during this incident, between 5 and 10% of search queries timed out. This was caused by a single customer who had created a network of hundreds of orchestrated accounts which searched with a uniquely expensive search query. This search query concentrated load on a single hot shard within the search index, slowing down all queries. After we identified the source of the load and stopped the traffic, latency returned to normal.<br /><br />To avoid this situation occurring again in the future, we are making a number of improvements to our systems, including: improved rate limiting that accounts for highly skewed load on hot shards, improved system resilience for when a small number of shards time out, improved tooling to recognize abusive actors, and capabilities that will allow us to shed load on a single shard in emergencies.</p><p><small>Feb <var data-var='date'>24</var>, <var data-var='time'>00:38</var> UTC</small><br><strong>Update</strong> - We have identified a cause for the latency and timeouts and have implemented a fix. We are observing initial recovery now.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>23:10</var> UTC</small><br><strong>Update</strong> - Customers using code search continue to see increased latency and timeout errors. We are working to mitigate issues on the affected shard.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>22:22</var> UTC</small><br><strong>Update</strong> - Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are taking steps to isolate and mitigate the affected shard.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>21:18</var> UTC</small><br><strong>Update</strong> - Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU.
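<p>One of the follow-up items above, shedding load on a single hot shard, can be pictured as a per-shard cap on in-flight queries. The sketch below is illustrative only; the shard keys and limits are made up and this is not GitHub's search infrastructure.</p>
<pre><code># Illustrative sketch: cap in-flight queries per index shard so a hot shard
# rejects excess work instead of slowing every search query.
import threading

class ShardLimiter:
    def __init__(self, max_in_flight_per_shard=50):
        self.max_in_flight = max_in_flight_per_shard
        self.in_flight = {}
        self.lock = threading.Lock()

    def try_acquire(self, shard_id):
        with self.lock:
            if self.in_flight.get(shard_id, 0) >= self.max_in_flight:
                return False  # shed load: caller fails fast for this shard only
            self.in_flight[shard_id] = self.in_flight.get(shard_id, 0) + 1
            return True

    def release(self, shard_id):
        with self.lock:
            self.in_flight[shard_id] -= 1

# Usage: wrap each shard query in try_acquire/release; a rejected acquire returns a
# retryable error to the caller rather than queueing more work behind the hot shard.
</code></pre>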
We are continuing to investigate the cause and steps to mitigate.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>20:33</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate elevated latency and timeouts for code search.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>19:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> tag:www.githubstatus.com,2005:Incident/28630931 2026-02-23T21:30:42Z 2026-03-02T16:34:09Z Incident with Issues and Pull Requests Search <p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>21:30</var> UTC</small><br><strong>Resolved</strong> - On February 23, 2026, between 21:01 UTC and 21:30 UTC the Search service experienced degraded performance, resulting in an average of 3.5% of search requests for Issues and Pull Requests being rejected. During this period, updates to Issues and Pull Requests may not have been immediately reflected in search results. <br /><br />During a routine migration, we observed a spike in internal traffic due to a configuration change in our search index. We were alerted to the increase in traffic as well as the increase in error rates and rolled back to the previous stable index. <br /><br />We are working to enable more controlled traffic shifting when promoting a new index to allow us to detect potential limitations earlier and ensure these operations succeed in a more controlled manner.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>21:24</var> UTC</small><br><strong>Update</strong> - Some customers are seeing timeout errors when searching for issues or pull requests. The team is currently investigating a fix.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>21:16</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Issues and Pull Requests</p> tag:www.githubstatus.com,2005:Incident/28627142 2026-02-23T17:03:56Z 2026-03-03T17:40:47Z Incident with Actions <p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>17:03</var> UTC</small><br><strong>Resolved</strong> - On February 23, 2026, between 15:00 UTC and 17:00 UTC, GitHub Actions experienced degraded performance. During this time, 1.8% of Actions workflow runs experienced delayed starts with an average delay of 15 minutes. The issue was caused by a connection rebalancing event in our internal load balancing layer, which temporarily created uneven traffic distribution across sites and led to request throttling. <br /><br />To prevent recurrence, we are tuning connection rebalancing behavior to spread client reconnections more gradually during load balancer reloads. We are also evaluating improvements to site-level traffic affinity to eliminate the uneven distribution at its source. We have overprovisioned critical paths to prevent any impact if a similar event occurs before those workstreams finish. Finally, we are enhancing our monitoring to detect capacity imbalances proactively.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>16:17</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>
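<p>The reconnection tuning described in the last summary above, spreading client reconnections more gradually after a load balancer reload, is commonly done with jittered backoff on the client side. The sketch below is a generic illustration with made-up delay values, not GitHub's load balancer configuration.</p>
<pre><code># Generic sketch: jittered exponential backoff so clients disconnected by a load
# balancer reload do not all reconnect to the same site at the same instant.
import random
import time

def reconnect_with_jitter(connect, base_delay=1.0, max_delay=30.0, attempts=5):
    delay = base_delay
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError:
            # Full jitter: sleep a random fraction of the current backoff window.
            time.sleep(random.uniform(0, delay))
            delay = min(delay * 2, max_delay)
    raise ConnectionError("gave up reconnecting")
</code></pre>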