Cloudflare Outage Hits Major Websites: Here’s What Happened

A major Cloudflare outage on November 18, 2025, caused widespread errors across the web. The company cited a bot management bug, not a cyberattack, as the cause.

Key Takeaways

  • Widespread Disruption: Cloudflare experienced a major, multi-hour outage on November 18, 2025, causing error messages for users across a significant portion of the internet.
  • Internal Bug, Not an Attack: The company confirmed the outage was caused by an internal bug related to its Bot Management system and was not the result of a cyberattack.
  • The Technical Cause: A database permission change caused a critical configuration file to double in size, exceeding system memory limits and crashing core network services.
  • Full Recovery: Services were systematically restored over several hours, with all systems returning to normal operation by 17:06 UTC.

A significant Cloudflare outage on Tuesday, November 18, 2025, led to widespread service disruptions, leaving many internet users unable to access their favorite websites and applications. The company, a critical backbone for a large part of the web, issued a public apology, detailing the internal bug that caused the global incident.

What Caused the Cloudflare Outage?

Beginning at approximately 11:20 UTC, users attempting to visit sites that rely on Cloudflare’s network were met with 5xx error pages, indicating a failure within the company’s infrastructure. In a detailed post-mortem, Cloudflare explained that the incident was not the result of a malicious attack, a possibility the team initially considered.

The root cause was a bug triggered by a change in database permissions. This change led to a query generating duplicate entries for a “feature file” used by Cloudflare’s Bot Management system. Consequently, the file’s size doubled unexpectedly.
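The mechanics can be illustrated with a minimal sketch (all names hypothetical): a metadata query that filters by table name but not by database. When a permission change makes a second database's copy of the metadata visible, every row comes back twice, doubling the generated file.

```python
# Hypothetical rows of a metadata catalog: (database, table, column_name).
# After the permission change, a second database ("r0") exposes the same table.
system_columns = [
    ("default", "http_features", "score"),
    ("default", "http_features", "user_agent"),
    ("r0",      "http_features", "score"),       # newly visible duplicate
    ("r0",      "http_features", "user_agent"),  # newly visible duplicate
]

def feature_columns(rows, table):
    # BUG: filters only on table name, so duplicates from the
    # newly visible database are included and the output doubles.
    return [col for (_db, tbl, col) in rows if tbl == table]

def feature_columns_fixed(rows, table, database="default"):
    # FIX: also constrain the database, ignoring extra visible schemas.
    return [col for (db, tbl, col) in rows if tbl == table and db == database]

print(len(feature_columns(system_columns, "http_features")))        # doubled: 4
print(len(feature_columns_fixed(system_columns, "http_features")))  # expected: 2
```

The point of the sketch is that nothing about the query itself changed; a change in what the query could *see* was enough to change its output.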

When this oversized file was distributed across Cloudflare’s global network, the software responsible for routing traffic failed because it had a pre-allocated memory limit that the new file exceeded. This caused the core proxy system to panic and fail, triggering the widespread errors.
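A rough sketch of that failure mode, with an assumed capacity limit (the specific number and names here are illustrative, not Cloudflare's actual values):

```python
FEATURE_LIMIT = 200  # assumed pre-allocated capacity, for illustration only

def load_feature_file(features):
    """Load a feature file into a consumer with fixed pre-allocated capacity."""
    if len(features) > FEATURE_LIMIT:
        # In the incident, exceeding the limit surfaced as a crash in the
        # proxy rather than a handled error, taking request processing down.
        raise MemoryError(
            f"{len(features)} features exceed pre-allocated limit {FEATURE_LIMIT}"
        )
    return list(features)

normal = [f"feature_{i}" for i in range(150)]
doubled = normal * 2  # duplicate entries double the file

load_feature_file(normal)  # fits within the limit
try:
    load_feature_file(doubled)
except MemoryError as exc:
    print("refused oversized file:", exc)
```

A hard capacity limit like this is a reasonable performance choice; the lesson from the outage is that exceeding it must degrade gracefully instead of crashing the process.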

A Spectrum of Services Affected

The failure of the core proxy system had a cascading effect, impacting a wide spectrum of Cloudflare’s products. Key services that experienced significant issues included:

  • Core CDN and Security: Displayed HTTP 5xx errors to end-users.
  • Turnstile: The CAPTCHA alternative failed to load, preventing users from logging into many services, including the Cloudflare Dashboard itself.
  • Workers KV: The data storage service returned a high level of errors.
  • Access: The Zero Trust authentication service saw widespread authentication failures.

Resolution and Next Steps

Cloudflare’s engineering teams initially investigated the possibility of a large-scale DDoS attack, and a coincidental outage of their externally hosted status page complicated early diagnostic efforts.

After identifying the true source of the problem, engineers stopped the oversized configuration file from being distributed. They then rolled back to a last-known-good version of the file. Core traffic began flowing normally by 14:30 UTC, and the team spent the next few hours stabilizing all related systems.
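The two recovery steps described above, halting propagation and reverting to a known-good version, can be sketched as a simple distributor (hypothetical design, not Cloudflare's actual tooling):

```python
class FeatureDistributor:
    """Toy distributor that retains the last-known-good file for rollback."""

    def __init__(self):
        self.current = None
        self.last_known_good = None
        self.paused = False

    def publish(self, new_file):
        if self.paused:
            return self.current  # propagation stopped during the incident
        self.last_known_good = self.current
        self.current = new_file
        return self.current

    def rollback(self):
        # Revert the fleet to the last-known-good version.
        self.current = self.last_known_good
        return self.current

d = FeatureDistributor()
d.publish(["good_v1"])
d.publish(["oversized_v2", "oversized_v2"])
d.paused = True   # step 1: stop distributing the bad file
d.rollback()      # step 2: restore the last-known-good version
print(d.current)  # ['good_v1']
```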

In its public apology, Cloudflare called the outage “unacceptable” and outlined several steps to prevent a recurrence, including:

  • Hardening the system to treat internal configuration files with the same scrutiny as user-generated input.
  • Implementing better global kill switches for features.
  • Reviewing failure modes across all core proxy modules to improve resilience.
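The first of those steps, treating internal configuration with the same scrutiny as untrusted input, amounts to validating a file before it is distributed. A minimal sketch, with an assumed schema and limits chosen purely for illustration:

```python
MAX_FEATURES = 200   # assumed capacity of downstream consumers
MAX_NAME_LEN = 64    # assumed sane bound on a feature name

def validate_feature_file(features):
    """Return a list of validation errors; an empty list means safe to ship."""
    errors = []
    if len(features) > MAX_FEATURES:
        errors.append(f"too many features: {len(features)} > {MAX_FEATURES}")
    if len(set(features)) != len(features):
        errors.append("duplicate feature names")
    for name in features:
        if not name or len(name) > MAX_NAME_LEN:
            errors.append(f"bad feature name: {name!r}")
    return errors

good = ["score", "user_agent"]
bad = good * 150  # duplicated entries that also blow past the size limit

assert validate_feature_file(good) == []
assert validate_feature_file(bad)  # rejected before it reaches the fleet
```

Gating distribution on a check like this would have stopped the oversized file at the source rather than letting every node discover the problem at load time.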

The company assured its customers that work has already begun to strengthen its network against similar failures in the future.

Image Reference: https://blog.cloudflare.com/18-november-2025-outage/