    AWS
    Cloud
    Outage
    Internet
    US-EAST-1
    Infrastructure

    When the Cloud Breaks: How One AWS Outage Took Down Half the Internet

    October 20, 2025
    8 min read
    At around midnight Pacific Time on October 20, the internet started acting weird. Amazon wouldn't load. Mercado Livre—the Brazilian e-commerce giant—was glitching out. Duolingo crashed mid-lesson. Fortnite players were booted mid-match. Slack messages stopped sending. PagerDuty didn't page.

What began as a "Hmm, maybe it's my Wi-Fi" moment quickly escalated into something much bigger. A full-blown AWS outage—again. And like a digital domino effect, apps, games, tools, and websites across the globe started to blink out. All signs pointed to a familiar culprit: us-east-1.

## One Region to Rule Them All

Amazon Web Services (AWS) is the invisible backbone of much of the internet. Its us-east-1 region—located in Northern Virginia—isn't just one of its oldest and largest data hubs; it's also the nerve center for many global services. When it stumbles, a shocking chunk of the internet faceplants right along with it.

That's exactly what happened. According to AWS's own status dashboard, the problem began with "increased error rates and latencies" affecting a broad range of services—CloudFront, EC2, DynamoDB, Kinesis, SageMaker, and more. A cascade of issues followed, impacting app access, transactions, and even Amazon's own properties like Amazon.com and Audible.

In other words: the cloud broke. And when AWS falters, it takes a sizable chunk of the digital world down with it.

## "Who Else is On-Call? Let's Goooo"

The outage hit the tech world like a freight train. DevOps engineers scrambled. On-call rotations kicked in. Monitoring systems like PagerDuty, ironically, also suffered from the outage, leaving some teams in the dark. One user joked, "Thankfully AWS doesn't use PagerDuty," while another noted grimly, "500 sev-2 tickets now. Still growing!"

Slack lit up—but then froze for some users too. Jira? Down. Autodesk services? Out. Banking portals in Brazil and Australia? Glitching. OnlyFans, McDonald's Monopoly app, Clash Royale, and even Tidal—everything that leans on AWS showed some form of hiccup. Someone commented, "I just wanted to play Fortnite and I had 30 kills. RIP."

It was a surreal, cross-industry disruption that pulled back the curtain on just how centralized the modern internet really is.

## The Single Point of Failure Nobody Wants to Talk About

If there's one thing the outage laid bare, it's this: we've created a digital world that leans hard—really hard—on a few major cloud providers. And sometimes, just one cloud region.

us-east-1 has become infamous. It's where many default deployments land. It hosts control planes for services that span multiple regions. So when it chokes, redundancy plans that look solid on paper can fail in real life. "IAM issues will affect all regions," one user noted, highlighting how central identity and access services—often hosted there—can ripple across supposedly isolated systems.

A comment summed it up perfectly: "This is what happens when we centralize the Internet."

## Was It a Cyberattack?

With Fortnite, Apex Legends, and Epic Games all affected, the gaming crowd quickly raised eyebrows. Was this a cyberattack?

AWS didn't say that—at least not in their initial communications. They stuck to vague phrases like "investigating increased error rates." Still, when a global web backbone goes wobbly, the rumor mill revs up. People compared it to the infamous CrowdStrike update debacle; others speculated about a botched deployment or an internal slip-up. One joke made the rounds: "Hi, I just started at AWS and I saw a bug with the DynamoDB cluster. I fixed it and pushed to prod before taking my kids to school. Be back soon!"

## The Human Side of the Outage

Despite the tech-heavy context, the most telling moments came from real people dealing with real fallout:

- One federal worker posted dryly that their AWS-based enterprise resources were kaput—but hey, they're furloughed, so, shrug.
- Another user in Poland tried to log into Autodesk Fusion to do some 3D modeling, only to get smacked with errors: "Bing bong, can't login."
- A university student from Phoenix missed two assignment deadlines because their go-to tools were offline.

Others used the moment for good old-fashioned mischief. "No one's winning prizes on McDonald's app right now," someone realized. Another pounced and scored a $50 DoorDash gift card once the servers partially came back.

It was equal parts chaos, comedy, and crisis management.

## So… Now What?

AWS eventually mitigated the problem within a few hours. For many users, that meant everything slowly started coming back to life in the early morning hours. But the questions linger. Why are we still architecting systems around a single cloud region? Why haven't more companies embraced multi-cloud or at least multi-region failover strategies?

The hard truth is: it's complicated. Running hot standbys across regions isn't cheap. Routing traffic around outages requires infrastructure few companies can justify when the cloud "only" breaks once every couple of years. As one comment pointed out, "Sounds great for the 34 minutes a year you're impacted… not so much the other 364 days when you're over budget."
## The Illusion of Invincibility

AWS sells itself on uptime. 99.99999% availability. Multiple layers of redundancy. Yet time and again, we see just how fragile the stack is when one cog slips out of place.

A sarcastic comment captured the mood perfectly: "Yup they sell it to us like it's invincible, while every other cloud is bubble gum and tape. But I can't even mute my Ring camera."

The deeper issue? It's not just AWS. It's our collective overconfidence in cloud resilience—and our tendency to put too many eggs in one virtual basket.

## Takeaways, If Anyone's Listening

1. **Diversify your infrastructure** if you can. Seriously.
2. **Don't assume default settings** (like deploying to us-east-1) are safe just because they're popular.
3. **Build in human-aware DR strategies.** Know what fails, and who needs to know.
4. **If you're on-call, stock up on snacks.**

The outage was a wake-up call—not because it was catastrophic, but because it was so ordinary. It happened at night, during a routine deployment window, and was resolved in hours. But in that small window, it left millions of users in the dark, many of whom didn't even know AWS was behind their apps until things broke.

So yeah. When the cloud breaks, we all feel the storm.