All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations - Operational
Webhooks - Operational
API Requests - Operational
Issues - Operational
Pull Requests - Operational
Actions - Operational
Packages - Operational
Pages - Operational
Codespaces - Operational
Copilot - Operational
Status key: Operational | Degraded Performance | Partial Outage | Major Outage | Maintenance
Jun 19, 2025

No incidents reported today.

Jun 18, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 18, 23:13 UTC
Update - We are experiencing degraded availability for the Claude 4 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected. We recommend using Claude 3.7 as an alternative.

Jun 18, 22:42 UTC
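For readers who want to act on the workaround above, here is a minimal sketch of a model fallback. The helper function and model identifiers are hypothetical stand-ins, not a real Copilot API.

```python
# Hypothetical sketch of the fallback recommended above: prefer the degraded model,
# fall back to the suggested alternative when the upstream provider is erroring.
# send_chat_request is a stand-in helper, not a real Copilot API call.

PREFERRED_MODEL = "claude-4"    # degraded during this incident
FALLBACK_MODEL = "claude-3.7"   # the recommended alternative

class UpstreamModelError(Exception):
    """Raised when the model provider returns an error response."""

def send_chat_request(model: str, prompt: str) -> str:
    # Placeholder client; the degraded model is simulated as failing.
    if model == PREFERRED_MODEL:
        raise UpstreamModelError(f"{model} is unavailable")
    return f"[{model}] response to: {prompt}"

def chat_with_fallback(prompt: str) -> str:
    try:
        return send_chat_request(PREFERRED_MODEL, prompt)
    except UpstreamModelError:
        return send_chat_request(FALLBACK_MODEL, prompt)

print(chat_with_fallback("Summarize this pull request"))
```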
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 18, 22:40 UTC
Investigating - We are currently investigating this issue.
Jun 18, 22:39 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 18, 18:47 UTC
Update - We are continuing to roll out a mitigation and are making progress towards having it in place for all customers.
Jun 18, 18:11 UTC
Update - We are currently deploying a mitigation for this issue and will be rolling it out shortly. We will update our progress as we monitor the deployment.
Jun 18, 17:22 UTC
Update - We are actively investigating and working on a mitigation for database instability leading to replication lag in the Actions Cache service. We will continue to post updates on progress towards mitigation.
Jun 18, 17:03 UTC
Update - The Actions Cache service is experiencing degradation in a number of regions, causing cache misses when attempting to download cache entries. This is not causing workflow failures, but workflow runtime might be elevated for certain runs.
Jun 18, 16:46 UTC
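As a generic illustration of why cache misses raise runtime without failing runs (this is not the Actions Cache client, just a sketch of the usual restore-or-rebuild pattern):

```python
# Generic sketch: a cache miss falls back to a full rebuild, so the run still
# succeeds but takes longer. The in-memory dict stands in for the remote cache.
import time

cache: dict[str, str] = {}

def build_dependencies() -> str:
    time.sleep(0.1)          # stands in for the slow path (e.g. reinstalling deps)
    return "dependencies"

def restore_or_build(key: str) -> str:
    entry = cache.get(key)   # a degraded cache service returns more misses here
    if entry is not None:
        return entry         # fast path: cache hit
    artifact = build_dependencies()   # slow path: rebuild, but no failure
    cache[key] = artifact
    return artifact

print(restore_or_build("linux-deps-v1"))  # miss: rebuilds
print(restore_or_build("linux-deps-v1"))  # hit: restored
```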
Investigating - We are currently investigating this issue.
Jun 18, 16:46 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 18, 17:42 UTC
Update - We have confirmed that we are currently within SLA for Issues experience. Remaining clean up will complete over the next few hours to fully restore the ability to search Issues by reaction as well as related GraphQL API queries.
Jun 18, 17:41 UTC
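For reference, the affected search-by-reaction path can be exercised through the public REST search API using the reactions qualifier. A minimal sketch, with an illustrative repository name and unauthenticated requests (so subject to low rate limits):

```python
# Minimal sketch of searching issues by reaction count via the REST search API.
# The repository name is only an example.
import requests

def issues_by_reactions(repo: str, min_reactions: int = 10) -> list[str]:
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f"repo:{repo} is:issue reactions:>{min_reactions}"},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["title"] for item in resp.json()["items"]]

print(issues_by_reactions("octocat/Hello-World"))
```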
Update - We have confirmed that impact is restricted to failing to display reactions on some issues and searching issues by reaction. Mitigation is in progress to restore these features and should be fully rolled out to all customers in the next few hours.
Jun 18, 17:07 UTC
Update - Some users are seeing errors when accessing issues on GitHub. We have identified the problem and are working on a revert to restore full functionality.
Jun 18, 16:25 UTC
Investigating - We are investigating reports of degraded performance for Issues
Jun 18, 16:21 UTC
Jun 17, 2025
Resolved - On June 17, 2025, between 19:32 UTC and 20:03 UTC, an internal routing policy deployment to a subset of network devices caused reachability issues for certain network address blocks within our datacenters.
Authenticated users of the github.com UI experienced 3-4% error rates for the duration, and authenticated callers of the API experienced 40% error rates. Unauthenticated requests to the UI and API experienced nearly 100% error rates. In Actions, 2.5% of runs were delayed by an average of 8 minutes and 3% of runs failed. Large File Storage (LFS) requests experienced a 0.978% error rate.
At 19:54 UTC, the deployment was rolled back, and network availability for the affected systems was restored. At 20:03 UTC, we fully restored normal operations.
To prevent similar issues, we are expanding our validation process for routing policy changes.

Jun 17, 20:22 UTC
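As a general client-side precaution during windows like this (not guidance from the incident itself), API callers often retry transient 5xx responses with exponential backoff. A minimal sketch:

```python
# Generic client-side sketch: retry 5xx responses with exponential backoff,
# as API callers commonly do during availability incidents.
import time
import requests

def get_with_retries(url: str, attempts: int = 4, base_delay: float = 1.0) -> requests.Response:
    for attempt in range(attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code < 500:
            return resp                               # success, or a client error not worth retrying
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, ...
    return resp

resp = get_with_retries("https://api.github.com/zen")
print(resp.status_code, resp.text)
```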
Update - Actions is operating normally.
Jun 17, 20:15 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Jun 17, 20:14 UTC
Update - Webhooks is operating normally.
Jun 17, 20:13 UTC
Update - Pull Requests is operating normally.
Jun 17, 20:12 UTC
Update - API Requests is operating normally.
Jun 17, 20:10 UTC
Update - Issues is operating normally.
Jun 17, 20:06 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jun 17, 20:05 UTC
Update - We experienced problems with multiple services, causing disruptions for some users. We have identified the cause and are rolling out changes to restore normal service. Many services are recovering, but full recovery is ongoing.
Jun 17, 20:04 UTC
Update - Copilot is operating normally.
Jun 17, 20:04 UTC
Update - Pages is operating normally.
Jun 17, 20:03 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Jun 17, 20:01 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
Jun 17, 19:55 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:55 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:54 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:54 UTC
Update - API Requests is experiencing degraded availability. We are continuing to investigate.
Jun 17, 19:53 UTC
Update - We are investigating reports of issues with many services impacting segments of customers. We will continue to keep users updated on progress towards mitigation.
Jun 17, 19:53 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:51 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:49 UTC
Update - Copilot is experiencing degraded availability. We are continuing to investigate.
Jun 17, 19:49 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:47 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Jun 17, 19:42 UTC
Jun 16, 2025

No incidents reported.

Jun 15, 2025

No incidents reported.

Jun 14, 2025

No incidents reported.

Jun 13, 2025

No incidents reported.

Jun 12, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 12, 21:07 UTC
Update - All impacted chat models have recovered, and users should no longer experience reduced availability.
Jun 12, 21:07 UTC
Update - We are seeing recovery in success rates for impacted Claude models (Sonnet 4 and Opus 4), and limited recovery in Gemini models (2.5 Pro and 2.0 Flash). We will continue to monitor and provide updates until full recovery.
Jun 12, 20:39 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 12, 20:21 UTC
Update - Claude Sonnet 4 and Opus 4 models continue to have degraded availability in Copilot Chat, VS Code, and other Copilot products. Gemini 2.5 Pro and 2.0 Flash are currently unavailable. Our upstream model provider has indicated that they have identified the problem and are applying mitigations.

Jun 12, 20:05 UTC
Update - Gemini (2.5 Pro and 2.0 Flash) and Claude (Sonnet 4 and Opus 4) chat models in Copilot are still experiencing reduced availability. We are actively communicating with our upstream model provider to resolve the issue and restore full service. We will provide another update by 20:15 UTC.
Jun 12, 19:14 UTC
Update - We redirected requests for Claude 3.7 Sonnet to additional partners, and users should see recovery when using that model. We are still experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products.
Jun 12, 18:37 UTC
Update - We are experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 3.7, Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Jun 12, 18:23 UTC
Investigating - We are currently investigating this issue.
Jun 12, 18:19 UTC
Resolved - Multiple services critical to GitHub's attestation infrastructure experienced an outage which prevented Fulcio from issuing signing certificates. During the outage, GitHub customers who use the "actions/attest-build-provenance" action from public repositories were not able to generate attestations.
Jun 12, 20:26 UTC
Update - Customers are currently unable to generate attestations from public repositories due to a broader outage with our partners.
Jun 12, 18:56 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jun 12, 18:50 UTC
Jun 11, 2025
Resolved - Between 2025-06-10 12:25 UTC and 2025-06-11 01:51 UTC, GitHub Enterprise Cloud (GHEC) customers with approximately 10,000 or more users saw performance degradation and 5xx errors when loading the Enterprise Settings’ People management page. Less than 2% of page requests resulted in an error. The issue was caused by a database change that replaced an index required for the page load, and it was resolved by reverting the database change.

To prevent similar incidents, we are improving the testing and validation process for replacing database indexes.

Jun 11, 01:51 UTC
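The prevention item above implies the usual add-before-drop pattern for index changes. A generic sketch follows, using SQLite only to keep it self-contained; the table and index names are made up, not GitHub's schema.

```python
# Hypothetical sketch of an "add before drop" index swap, so a hot query is never
# left without a covering index mid-migration. SQLite keeps the example runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, enterprise_id INTEGER, role TEXT)")
conn.execute("CREATE INDEX idx_members_enterprise ON members (enterprise_id)")

# 1. Create the replacement index first, while the old one still serves reads.
conn.execute("CREATE INDEX idx_members_enterprise_role ON members (enterprise_id, role)")

# 2. Verify the planner uses the new index before removing the old one.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM members WHERE enterprise_id = 1 AND role = 'admin'"
).fetchall()
print(plan)  # should reference idx_members_enterprise_role

# 3. Only then drop the index being replaced.
conn.execute("DROP INDEX idx_members_enterprise")
```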
Update - The fix is currently rolling out to production. We will update here once we have verified recovery.
Jun 11, 01:08 UTC
Update - We are working to deploy the fix for this issue. We will update again once it is deployed and as we monitor recovery.
Jun 10, 23:32 UTC
Update - We have the fix ready. Once it is deployed, we will provide another update confirming that it has resolved the issue.
Jun 10, 22:42 UTC
Update - We have identified the solution to the performance issue and are working on the mitigation. Impact continues to be limited to very large enterprise customers when viewing the People page.
Jun 10, 21:04 UTC
Update - The mitigation to add a supporting index to improve the performance of the People page did not resolve the issue, and we are continuing to investigate a solution.
Jun 10, 20:09 UTC
Update - We are working on the mitigation and anticipate recovery within an hour.
Jun 10, 18:57 UTC
Update - Large enterprise customers may encounter issues loading the People page.
Jun 10, 18:35 UTC
Investigating - We are currently investigating this issue.
Jun 10, 18:17 UTC
Jun 10, 2025
Resolved - On June 10, 2025, between 12:15 UTC and 19:04 UTC, Codespaces billing data processing experienced delays due to capacity issues in our worker pool. Approximately 57% of codespaces were affected. During this period, some customers may have observed incomplete or delayed billing usage information in their dashboards and usage reports, and may not have received timely notifications about approaching usage or spending limits.

The incident was caused by an increase in the number of jobs in our worker pool without a corresponding increase in capacity, resulting in a backlog of unprocessed Codespaces billing jobs.

We mitigated the issue by scaling up worker capacity, allowing the backlog to clear and billing data to catch up. We started seeing recovery immediately at 17:40 UTC and were fully caught up by 19:04 UTC.

To prevent recurrence, we are moving critical billing jobs into a dedicated worker pool monitored by the Codespaces team, and are reviewing alerting thresholds to ensure more rapid detection and mitigation of delays in the future.

Jun 10, 19:08 UTC
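A back-of-the-envelope sketch of the capacity/backlog relationship described in this incident; the numbers and thresholds are invented for illustration, not GitHub's internal tooling.

```python
# Invented numbers: if the enqueue rate outpaces worker capacity, the backlog grows
# linearly and alerting should fire long before billing data lags by hours.
JOBS_ENQUEUED_PER_MIN = 1200
JOBS_PROCESSED_PER_WORKER_PER_MIN = 50
WORKERS = 20                      # capacity not scaled with job volume
BACKLOG_ALERT_THRESHOLD = 5000    # jobs

def backlog_after(minutes: int, workers: int) -> int:
    net_growth = JOBS_ENQUEUED_PER_MIN - workers * JOBS_PROCESSED_PER_WORKER_PER_MIN
    return max(0, net_growth) * minutes

for minutes in (30, 60, 120):
    backlog = backlog_after(minutes, WORKERS)
    status = "ALERT" if backlog > BACKLOG_ALERT_THRESHOLD else "ok"
    print(f"{minutes:>4} min: backlog={backlog} ({status})")

# The mitigation was the same idea: add workers until net growth goes negative
# and the backlog drains.
print(backlog_after(60, workers=40))  # 0: with more workers the backlog stops growing
```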
Update - We've increased capacity to process the Codespaces billing jobs and are seeing recovery. We expect full mitigation within the hour.
Jun 10, 18:21 UTC
Investigating - We are currently investigating this issue.
Jun 10, 17:47 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 10, 14:46 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests
Jun 10, 14:28 UTC
Jun 9, 2025

No incidents reported.

Jun 8, 2025

No incidents reported.

Jun 7, 2025

No incidents reported.

Jun 6, 2025
Resolved - On June 6, 2025, an update to mitigate a previous incident led to automated scaling of the database infrastructure used by Copilot Coding Agent. Clients of the service were not built to handle an additional partition automatically, so they were unable to retrieve data across partitions, resulting in unexpected 404 errors.

As a result, approximately 17% of coding sessions displayed an incorrect final state - such as sessions appearing in-progress when they were actually completed. Additionally, some Copilot-authored pull requests were missing timeline events indicating task completion. Importantly, this did not affect Copilot Coding Agent’s ability to finish code tasks and submit pull requests.

To prevent similar issues in the future we are taking steps to improve our systems and monitoring.

Jun 6, 23:00 UTC
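A hedged sketch of the client gap described above: a reader pinned to the original partition reports data as missing (the 404s) once a new partition exists, while a partition-aware reader fans out across all partitions. All names and data here are hypothetical.

```python
# Hypothetical sketch of the failure mode above: a client that only reads the
# original partition misses sessions stored on a newly added one and reports
# them as not found, even though the data exists.
partitions = [
    {"session-1": "completed"},          # original partition
    {"session-2": "completed"},          # partition added by automated scaling
]

def get_state_single_partition(session_id: str) -> str | None:
    return partitions[0].get(session_id)            # unaware of the new partition

def get_state_all_partitions(session_id: str) -> str | None:
    for partition in partitions:                    # fan out across every partition
        state = partition.get(session_id)
        if state is not None:
            return state
    return None

print(get_state_single_partition("session-2"))  # None -> surfaced as 404 / stale state
print(get_state_all_partitions("session-2"))    # "completed"
```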
Resolved - On June 6, 2025, between 00:21 UTC and 12:40 UTC, the Copilot service was degraded and a subset of Copilot Free users were unable to sign up for or use the Copilot Free service on github.com. This was due to a change in licensing code that resulted in some users losing access despite being eligible for Copilot Free.
We mitigated this by rolling back the offending change at 11:39 UTC, after which users were once again able to use their Copilot Free access.
As a result of this incident, we have improved monitoring of Copilot changes during rollout. We are also working to reduce our time to detect and mitigate issues like this one in the future.

Jun 6, 12:40 UTC
Update - Copilot is operating normally.
Jun 6, 12:40 UTC
Update - We are continuing to monitor recovery and expect a complete resolution very shortly.
Jun 6, 12:18 UTC
Update - The changes have been reverted and we are seeing signs of recovery. We expect impact to be largely mitigated, but are continuing to monitor and will update further as progress continues.
Jun 6, 11:31 UTC
Update - We have identified changes that may be causing the issue and are working to revert the offending changes. We will continue to keep users updated as we work toward mitigation.
Jun 6, 10:39 UTC
Update - We are investigating reports of users being unable to use Copilot Free after their Copilot Pro trial subscription has ended. We will continue to keep users updated on progress towards mitigation.
Jun 6, 10:04 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Jun 6, 09:58 UTC
Jun 5, 2025
Resolved - On June 5, 2025, between 17:47 UTC and 19:20 UTC, the Actions service was degraded, leading to run start delays and intermittent job failures. During this period, 47.2% of runs had delayed starts and 21.0% of runs failed.

The impact extended beyond Actions itself - 60% of Copilot Coding Agent sessions were cancelled, and all Pages sites using branch-based builds failed to deploy (though Pages serving remained unaffected).

The issue was caused by a spike in load between internal Actions services, which exposed a misconfiguration that throttled requests in the critical path of run starts. We mitigated the incident by correcting the service configuration to prevent throttling and have updated our deployment process to ensure the correct configuration is preserved going forward.
Jun 5, 19:29 UTC
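A generic sketch of how an undersized throttle in the run-start path turns a load spike into delayed or failed run starts; the numbers are invented, not the actual service configuration.

```python
# Invented numbers: a throttle limit sized for normal traffic rejects a share of
# run-start requests during a load spike, which surfaces as delayed or failed runs.
def fraction_throttled(requests_per_sec: float, allowed_per_sec: float) -> float:
    """Fraction of run-start requests rejected by the throttle."""
    if requests_per_sec <= allowed_per_sec:
        return 0.0
    return 1.0 - allowed_per_sec / requests_per_sec

# A spike against a limit tuned for normal load:
print(fraction_throttled(requests_per_sec=150, allowed_per_sec=100))   # ~0.33 rejected
# The mitigation: correct the configuration so the limit covers peak internal load.
print(fraction_throttled(requests_per_sec=150, allowed_per_sec=300))   # 0.0
```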
Update - We have applied a mitigation and we are beginning to see recovery. We are continuing to monitor for recovery.
Jun 5, 19:02 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Jun 5, 18:35 UTC
Update - Users of Actions will see delays in jobs starting or job failures. Users of Pages will see slow or failed deployments.
Jun 5, 18:30 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jun 5, 18:01 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jun 5, 18:00 UTC