I moved my own projects to GitLab a while back, so I have been watching the GitHub situation from a slight distance. But "slight distance" does not mean unaffected — half the open source libraries I depend on live there, CI integrations point to it, and every developer I work with uses it as a default. When GitHub has a bad week, everyone has a bad week.
April 2026 was a very bad week.
The Uptime Numbers Are Bad
Third-party monitoring of GitHub infrastructure shows uptime dipping below 90% in 2025, and April 2026 tracking at around 86%. For comparison, AWS S3's famous eleven nines figure is for durability; its availability target is 99.99%, four nines. GitHub is operating at less than one.
GitHub's own status page reports something much higher — well above 99% for all services. The gap between those two numbers is itself a story. Either the third-party monitoring is measuring something different, or the official status page definitions of "degraded" and "down" are doing a lot of work.
In practice, it does not matter which number you believe. If your CI jobs are hanging, your pull requests are missing, or your search returns nothing, the status page percentage is not the experience you are having.
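Whichever figure you believe, the percentages translate into concrete downtime, and the arithmetic is unforgiving. A quick conversion, assuming a 30-day month (720 hours):

```shell
# Downtime implied by an availability percentage, over a 30-day month (720 h).
for a in 99.99 99.9 99.0 90.0 86.0; do
  awk -v a="$a" 'BEGIN {
    hours = (100 - a) / 100 * 720
    printf "%5.2f%% uptime  ->  %5.1f hours down per month\n", a, hours
  }'
done
```

Four nines is about four minutes of downtime a month. One nine (90%) is three full days. 86% is more than four days a month of not working.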
What Actually Happened in One Week
Three distinct incidents in the space of a week tell you something.
April 23rd: The Merge Queue silently unmerged 292 pull requests across 658 repositories. The platform whose entire purpose is to not lose your code quietly lost code. No immediate notification, no bulk recovery tool surfaced to affected maintainers. You had to notice it yourself.
April 27th: A botnet hit GitHub's Elasticsearch subsystem and took search down for hours. Search being down is annoying. Search being down because a botnet found an attack surface in core infrastructure is a different category of problem.
April 28th: GitHub's CTO published an apology post about reliability. On the same morning, a separate post disclosed a critical remote code execution vulnerability — a crafted git push could execute arbitrary code on GitHub's servers. Two blog posts, same morning, same week that search fell over and PRs disappeared.
The RCE disclosure is the one that deserves more attention than it got. The Merge Queue incident and the search outage are operational failures. A git push executing code on the host is a security failure. Those are different threat models and they require different post-mortems.
Why This Is Happening
The GitHub CTO admitted in the reliability post that agentic development workflows have "accelerated sharply" since 2025. That is a careful way of saying: AI agents are hammering the infrastructure at a rate the system was not designed for.
GitHub was built as a tool for developers — humans who commit several times a day, open a handful of pull requests per week, and run CI on reasonable trigger conditions. The load model assumed human-paced activity.
Agents do not work at human pace. An AI coding agent can open dozens of pull requests, trigger hundreds of CI runs, and push commits in tight loops over the course of a single session. Scale that across millions of developers all running agents simultaneously, and the load profile looks nothing like what the infrastructure was originally sized for.
The compounding irony is that GitHub is also hosting Copilot — the tool driving a lot of that agentic activity. It is absorbing load from its own product, and the infrastructure has not kept up.
Mitchell Hashimoto Leaving Is a Signal, Not Just a Story
Mitchell Hashimoto built Vagrant and Terraform, took HashiCorp public, joined GitHub in 2008 as user 1299, and has logged in nearly every day since. He kept a journal for a month, marking off every day a GitHub outage blocked his work on Ghostty, his terminal emulator project with 50,000 stars. Almost every day got a mark.
He published a post on April 28th saying, simply, that he wants to ship software and GitHub does not want him to ship software. The Ghostty project is migrating off the platform entirely.
This matters not because of the drama but because of what it represents. Someone with 18 years of loyalty, a 50k-star project, and deep roots in the GitHub ecosystem decided the cost of staying outweighed the cost of moving. That calculation takes a long time to tip. When it does, it means the problems are not occasional — they are structural.
Smaller projects have been migrating for a while. When a project of Ghostty's size and profile goes, other maintainers start doing the maths.
The Alternatives Are Actually Fine
GitLab is reliable and has a generous free tier. The UX is slightly more involved, but the CI/CD pipelines are excellent and the uptime record is considerably better. I use it; it works.
Codeberg is a German nonprofit running Forgejo (a Gitea fork) with a focus on privacy and no commercial AI training on your code. It is smaller and simpler, which is a feature if you want that.
SourceHut is minimal by design: no JavaScript, mailing-list-based patches, zero AI features. It is not for everyone, but it is genuinely fast and has never had a week like GitHub just had.
None of these have GitHub's network effect. Pull requests, stars, and contributor graphs are not portable. Moving is a real cost, not a trivial one.
What to Do Right Now
If you rely on GitHub Actions for deployment, have a fallback in mind. Knowing in advance what you would do if CI hangs for four hours is better than figuring it out when it happens.
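What that fallback looks like depends entirely on your stack, but even a rough sketch kept in the repo beats improvising while production waits. Something like the following, where the service name, host, and paths are all hypothetical placeholders for your own setup, and which defaults to a dry run so you can rehearse it safely:

```shell
# Manual-deploy fallback for when CI is down. Everything here is a sketch:
# APP, DEPLOY_HOST, and the paths are placeholders for your own setup.
set -eu

APP=myapp                        # hypothetical service name
DEPLOY_HOST=deploy@example.com   # hypothetical target host
DRY_RUN=${DRY_RUN:-1}            # default to printing commands, not running them

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY RUN: $*"
  else
    "$@"
  fi
}

# Reproduce the essential steps your CI pipeline would have run.
run make build
run scp "dist/$APP" "$DEPLOY_HOST:/srv/$APP/$APP.new"
run ssh "$DEPLOY_HOST" "mv /srv/$APP/$APP.new /srv/$APP/$APP && systemctl restart $APP"
```

Run it once in dry-run mode while GitHub is healthy. The point is to know the steps work before the afternoon you actually need them.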
If you maintain an open source project, mirroring to a second host costs almost nothing and means your contributors can still find the repo if the primary goes dark.
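Git makes the mirroring itself nearly free. In the sketch below a local bare repository stands in for the second host so the commands run anywhere; in real use the mirror remote would be a URL on GitLab, Codeberg, or wherever you choose:

```shell
# Push-mirror sketch. A local bare repo stands in for the second host so this
# runs anywhere; in practice the remote URL would point at e.g. GitLab.
set -eu
cd "$(mktemp -d)"
git init -q --bare mirror.git          # stand-in for the second host
git init -q project
cd project
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "initial commit"

# The two commands that matter:
git remote add mirror ../mirror.git    # one-time setup
git push -q --mirror mirror            # copies every branch and tag

git -C ../mirror.git log --oneline -1  # the mirror now has the history
```

A cron job or a CI step re-running `git push --mirror` keeps the copy current; note that `--mirror` also prunes refs deleted upstream, which is what you want for a faithful replica. Forgejo-based hosts and GitLab also offer their own repository-mirroring features.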
If you have not checked your CI notification settings recently, this is a good week to do it. Merge Queue silently unmerging your PRs is worse if you find out three days later.
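You can also verify your merges yourself rather than trusting the UI: `git merge-base --is-ancestor` answers "is this commit still reachable from main?" directly. A self-contained sketch, using a scratch repository as a stand-in; against a real repo you would fetch first and test ancestry against `origin/main` rather than `HEAD`:

```shell
# Check whether supposedly-merged commits are still reachable from the main
# branch. A scratch repo stands in for a real one; in practice you would
# `git fetch origin` and test against origin/main instead of HEAD.
set -eu
cd "$(mktemp -d)"
git init -q repo
cd repo
commit() { git -c user.name=me -c user.email=me@example.com \
           commit -q --allow-empty -m "$1"; }
commit "merged PR"
merged=$(git rev-parse HEAD)                       # a commit that really merged
commit "later work"
stray=$(git commit-tree -m "stray" 'HEAD^{tree}')  # a commit on no branch at all

check() {
  if git merge-base --is-ancestor "$1" HEAD; then
    echo "ok:      $1"
  else
    echo "MISSING: $1"
  fi
}
check "$merged"   # still an ancestor of HEAD
check "$stray"    # reachable from nothing: this is what a lost merge looks like
```

A loop like `check` over the merge-commit SHAs from your recently merged PRs is a five-minute audit that would have caught the April 23rd unmerges the same day.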
GitHub will almost certainly stabilise. It has too much infrastructure investment and too many enterprise contracts behind it to stay in this state permanently. But "it will get better eventually" is not a deployment strategy.