On December 5, 2025 at 08:47 UTC, Cloudflare experienced a significant network disruption that lasted about 25 minutes, ending at approximately 09:12 UTC. Around 28% of all HTTP traffic moving through Cloudflare was affected, though only customers meeting specific conditions saw errors.
Importantly, this outage was not caused by a cyberattack. Instead, it resulted from an internal configuration change made during an attempt to roll out protections for a newly disclosed vulnerability in React Server Components.
Root cause explained simply
Here’s the incident in plain language:
- Cloudflare’s Web Application Firewall (WAF) buffers incoming HTTP request content. To better support modern frameworks like Next.js and address a newly discovered security issue, Cloudflare increased the buffer size from 128 KB to 1 MB.
- During this rollout, their internal WAF testing tool failed because it didn’t support the larger buffer size. To move forward, engineers disabled the testing tool using a global configuration change.
- That global change triggered a bug in a legacy version of Cloudflare’s proxy system (known as FL1). The rules engine threw an exception that caused HTTP 500 errors for affected traffic (illustrated in the sketch after this list).
- Customers were only impacted if they were using this older proxy version and the Cloudflare Managed Ruleset.
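To make the failure mode concrete, here is a minimal Python sketch of how a rules engine that treats an unrecognised global setting as a hard error can turn a configuration change into HTTP 500s. Every name here (`GLOBAL_CONFIG`, `waf_test_tool`, `evaluate_managed_ruleset`, and so on) is hypothetical and not taken from Cloudflare’s code; it only illustrates the pattern described above.

```python
# Hypothetical illustration only: none of these names come from Cloudflare's code.
# It shows how a rules engine that treats an unexpected global setting as a hard
# error can turn a configuration change into HTTP 500s for live traffic.

GLOBAL_CONFIG = {
    "waf_buffer_kb": 1024,       # raised from 128 KB to 1 MB during the rollout
    "waf_test_tool": "disabled", # the global change that triggered the bug
}

class RulesEngineError(Exception):
    """Raised when the engine meets a configuration value it does not recognise."""

def evaluate_managed_ruleset(request_body: bytes, config: dict) -> int:
    """Return an HTTP status code for the request under the given config."""
    # A legacy engine that only knows the value "enabled" chokes here.
    if config["waf_test_tool"] not in ("enabled",):
        raise RulesEngineError(f"unknown waf_test_tool value: {config['waf_test_tool']}")
    if len(request_body) > config["waf_buffer_kb"] * 1024:
        return 413  # payload too large for the buffer
    return 200

def handle_request(request_body: bytes) -> int:
    try:
        return evaluate_managed_ruleset(request_body, GLOBAL_CONFIG)
    except RulesEngineError:
        # The unhandled configuration error surfaces to the client as a 500.
        return 500

if __name__ == "__main__":
    print(handle_request(b"GET / example payload"))  # -> 500 under this config
```

In this sketch the request never reaches the ruleset at all: the unexpected configuration value is raised as an exception and surfaces to the client as a 500, mirroring the behaviour described above.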
Timeline of the outage
| Time (UTC) | Event |
|---|---|
| 08:47 | Change deployed; incident begins |
| 08:50 | Full customer impact; internal alerts fire |
| 09:11 | Change is rolled back |
| 09:12 | Services fully restored |
How Cloudflare plans to prevent future incidents
Cloudflare acknowledged that this type of failure is unacceptable and outlined several preventative measures:
- Better rollout and versioning controls: Configuration changes, especially those affecting core traffic systems, will undergo more gradual and validated deployments.
- Improved emergency controls (“break glass” features): Engineers will have faster, safer ways to intervene when unexpected behaviour appears.
- Fail-open protections: Instead of breaking traffic when a configuration is invalid, systems will default to a safer state that keeps services running while logging the issue (see the sketch after this list).
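The fail-open idea can be illustrated with a short sketch. This is an assumption-laden illustration, not Cloudflare’s implementation: `load_config`, `LAST_KNOWN_GOOD`, and the field names are invented for the example. The point is that an invalid configuration is logged and rejected while traffic keeps flowing on the previous known-good settings.

```python
# Hypothetical sketch of a "fail-open" configuration loader; the function and
# field names are illustrative, not Cloudflare's actual implementation.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("config")

LAST_KNOWN_GOOD = {"waf_buffer_kb": 128, "waf_test_tool": "enabled"}
ALLOWED_TEST_TOOL_VALUES = {"enabled", "disabled"}

def load_config(candidate: dict) -> dict:
    """Validate a new configuration; on any problem, keep serving with the
    last known good settings instead of failing the request path."""
    try:
        if candidate["waf_test_tool"] not in ALLOWED_TEST_TOOL_VALUES:
            raise ValueError(f"unexpected waf_test_tool: {candidate['waf_test_tool']}")
        if not 0 < candidate["waf_buffer_kb"] <= 1024:
            raise ValueError(f"buffer size out of range: {candidate['waf_buffer_kb']}")
        return candidate
    except (KeyError, ValueError) as exc:
        # Fail open: log loudly and keep the previous working configuration.
        log.warning("rejected config %r (%s); keeping last known good", candidate, exc)
        return LAST_KNOWN_GOOD

if __name__ == "__main__":
    active = load_config({"waf_buffer_kb": 1024, "waf_test_tool": "unknown"})
    print(active)  # -> falls back to the last known good configuration
```

The trade-off is deliberate: a rejected change is surfaced through logging and alerting rather than through broken traffic, which is exactly the safer default the post-mortem describes.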
Additionally, Cloudflare temporarily paused similar network-wide changes until all safeguards are in place.
What this means for customers and the wider internet
This outage highlights an important truth about modern infrastructure: even a brief configuration error at a major platform can have global ripple effects. With Cloudflare handling such a large share of global traffic, a 25-minute disruption can affect businesses, developers, and end-users worldwide.
For customers, it’s a reminder that:
- Third-party infrastructure changes can affect your applications even when nothing changes on your end.
- Resilience planning and monitoring remain critical (a minimal monitoring probe is sketched after this list).
- Transparency in post-incident reports is essential, and Cloudflare’s detailed explanation helps rebuild trust.
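As a concrete, deliberately simple example of monitoring an upstream dependency, the sketch below probes an endpoint served through a third-party edge and flags when too many recent checks return 5xx or no response. The URL, thresholds, and function names are placeholders chosen for the illustration, not recommendations.

```python
# Illustrative synthetic check for an endpoint fronted by a third-party
# edge provider; the URL and thresholds are placeholders.
import urllib.request
import urllib.error

ENDPOINT = "https://www.example.com/healthz"  # placeholder URL

def probe(url: str, timeout: float = 5.0) -> int:
    """Return the HTTP status code for a single synthetic request."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code          # e.g. 500 during an edge incident
    except (urllib.error.URLError, TimeoutError):
        return 0                 # treat network failures as "no response"

def should_alert(statuses: list[int], max_error_ratio: float = 0.2) -> bool:
    """Alert when too many recent probes returned 5xx or no response."""
    bad = sum(1 for s in statuses if s == 0 or s >= 500)
    return bad / len(statuses) > max_error_ratio

if __name__ == "__main__":
    recent = [probe(ENDPOINT) for _ in range(5)]
    print(recent, "alert" if should_alert(recent) else "ok")
```

A check like this does not prevent an upstream outage, but it shortens the time between "customers see errors" and "your team knows the errors are coming from a dependency rather than your own deployment."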
Key takeaways
- The outage lasted 25 minutes but affected a meaningful portion of the internet.
- It originated from an internal configuration change—not from malicious activity.
- A legacy proxy combined with a specific ruleset created the perfect conditions for failure.
- Cloudflare is implementing stronger change-control processes and safer system defaults.
- Businesses should keep visibility into upstream dependencies and maintain their own contingency plans.