blog.cloudflare.com: Incident report on memory leak caused by Cloudflare parser bug

blog.cloudflare.com: Incident report on memory leak caused by Cloudflare parser bug

Last Friday, Tavis Ormandy from Google’s Project Zero contacted Cloudflare to report a security problem with our edge servers. He was seeing corrupted web pages being returned by some HTTP requests run through Cloudflare.

It turned out that in some unusual circumstances, which I’ll detail below, our edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. And some of that data had been cached by search engines.

For the avoidance of doubt, Cloudflare customer SSL private keys were not leaked. Cloudflare has always terminated SSL connections through an isolated instance of NGINX that was not affected by this bug.

We quickly identified the problem and turned off three minor Cloudflare features (Email Obfuscation, Server-Side Excludes and Automatic HTTPS Rewrites) that were all using the same HTML parser chain that was causing the leakage. At that point it was no longer possible for memory to be returned in an HTTP response.

Because of the seriousness of such a bug, a cross-functional team from software engineering, infosec and operations formed in San Francisco and London to fully understand the underlying cause, to understand the effect of the memory leakage, and to work with Google and other search engines to remove any cached HTTP responses.

Having a global team meant that, at 12 hour intervals, work was handed over between offices enabling staff to work on the problem 24 hours a day. The team has worked continuously to ensure that this bug and its consequences are fully dealt with. One of the advantages of being a service is that bugs can go from reported to fixed in minutes to hours instead of months. The industry standard time allowed to deploy a fix for a bug like this is usually three months; we were completely finished globally in under 7 hours with an initial mitigation in 47 minutes.

The bug was serious because the leaked memory could contain private information and because it had been cached by search engines. We have not discovered any evidence of malicious exploits of the bug or other reports of its existence.

The greatest period of impact was from February 13 to February 18, with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests).

We are grateful that it was found by one of the world’s top security research teams and reported to us.

This blog post is rather long but, as is our tradition, we prefer to be open and technically detailed about problems that occur with our service.

Parsing and modifying HTML on the fly

Many of Cloudflare’s services rely on parsing and modifying HTML pages as they pass through our edge servers. For example, we can insert the Google Analytics tag, safely rewrite http:// links to https://, exclude parts of a page from bad bots, obfuscate email addresses, enable AMP, and more by modifying the HTML of a page.
To modify the page, we need to read and parse the HTML to find elements that need changing. Since the very early days of Cloudflare, we’ve used a parser written using Ragel. A single .rl file contains an HTML parser used for all the on-the-fly HTML modifications that Cloudflare performs.

About a year ago we decided that the Ragel-based parser had become too complex to maintain and we started to write a new parser, named cf-html, to replace it. This streaming parser works correctly with HTML5 and is much, much faster and easier to maintain.
We first used this new parser for the Automatic HTTPS Rewrites feature and have been slowly migrating functionality that uses the old Ragel parser to cf-html.

Both cf-html and the old Ragel parser are implemented as NGINX modules compiled into our NGINX builds. These NGINX filter modules parse buffers (blocks of memory) containing HTML responses, make modifications as necessary, and pass the buffers onto the next filter.
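
To make that buffer-chain handling concrete, here is a minimal, self-contained C sketch. It is a simplified model of the behaviour described above, not Cloudflare’s actual module code: the chain_buf struct and the parse_buffer and body_filter names are hypothetical stand-ins for NGINX’s real buffer chain and filter types.

/* Toy model of an NGINX-style buffer chain: each buffer holds a
 * [pos, last) byte range and only the final buffer in the chain has
 * last_buf set. A filter walks the chain and hands each buffer to a
 * parser before passing it on. */
#include <stdio.h>
#include <string.h>

typedef struct chain_buf {
    const char       *pos;       /* first byte of data               */
    const char       *last;      /* one past the final byte of data  */
    int               last_buf;  /* 1 only on the last buffer        */
    struct chain_buf *next;
} chain_buf;

static void parse_buffer(const chain_buf *b) {
    /* The real parser keeps state across buffers; here we just report
     * what it would see, including whether an end-of-data marker exists. */
    printf("parsing %zu bytes, eof %s\n",
           (size_t)(b->last - b->pos),
           b->last_buf ? "known" : "unknown (more data may follow)");
}

static void body_filter(chain_buf *in) {
    const chain_buf *b;
    for (b = in; b != NULL; b = b->next) {
        parse_buffer(b);
        /* ...the buffer would then be passed to the next filter... */
    }
}

int main(void) {
    const char part1[] = "<html><body><script type=\"";
    const char part2[] = "text/javascript\"></script></body></html>";

    chain_buf b2 = { part2, part2 + strlen(part2), 1, NULL };
    chain_buf b1 = { part1, part1 + strlen(part1), 0, &b2 };

    body_filter(&b1);
    return 0;
}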

For the avoidance of doubt: the bug is not in Ragel itself. It is in Cloudflare’s use of Ragel. This is our bug and not the fault of Ragel.

It turned out that the underlying bug that caused the memory leak had been present in our Ragel-based parser for many years but no memory was leaked because of the way the internal NGINX buffers were used. Introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself.

Once we knew that the bug was being caused by the activation of cf-html (but before we knew why) we disabled the three features that caused it to be used. Every feature Cloudflare ships has a corresponding feature flag, which we call a ‘global kill’. We activated the Email Obfuscation global kill 47 minutes after receiving details of the problem and the Automatic HTTPS Rewrites global kill 3h05m later. The Email Obfuscation feature had been changed on February 13 and was the primary cause of the leaked memory, thus disabling it quickly stopped almost all memory leaks.

Within a few seconds, those features were disabled worldwide. We confirmed we were not seeing memory leakage via test URIs and had Google double check that they saw the same thing.

We then discovered that a third feature, Server-Side Excludes, was also vulnerable and did not have a global kill switch (it was so old it preceded the implementation of global kills). We implemented a global kill for Server-Side Excludes and deployed a patch to our fleet worldwide. From realizing Server-Side Excludes were a problem to deploying a patch took roughly three hours. However, Server-Side Excludes are rarely used and only activated for malicious IP addresses.

Root cause of the bug

The Ragel code is converted into generated C code which is then compiled. The C code uses, in the classic C manner, pointers to the HTML document being parsed, and Ragel itself gives the user a lot of control of the movement of those pointers. The underlying bug occurs because of a pointer error.

/* generated code */
if ( ++p == pe )
    goto _test_eof;

The root cause of the bug was that reaching the end of a buffer was checked using the equality operator, and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of ==, jumping over the buffer end would have been caught. The equality check is generated automatically by Ragel and was not part of the code that we wrote. This indicated that we were not using Ragel correctly.

The Ragel code we wrote contained a bug that caused the pointer to jump over the end of the buffer and past the ability of an equality check to spot the buffer overrun.
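
As a standalone illustration of that failure mode (a simplified model, not the generated Ragel code), the sketch below compares what happens when a pointer that has already reached pe is advanced and then tested with == versus >=:

/* Model of the generated advance-and-test step: p is incremented and then
 * compared against pe, the pointer one past the end of the data. If an
 * earlier action has already left p at pe, the ++p overshoots; an == test
 * can then never fire again, while a >= test still catches the overrun. */
#include <stdio.h>
#include <stddef.h>

static void scan(const char *p, const char *pe, int use_gte) {
    for (;;) {
        ++p;
        if (use_gte ? (p >= pe) : (p == pe)) {
            printf("end of buffer detected at offset %td\n", (ptrdiff_t)(p - pe));
            return;
        }
        if (p - pe > 16) {                 /* safety net for this demo only */
            printf("ran %td bytes past the end of the buffer!\n", (ptrdiff_t)(p - pe));
            return;
        }
    }
}

int main(void) {
    char buf[64] = "<script type=\"";     /* only the first 14 bytes are "real" data */
    const char *pe = buf + 14;            /* one past the last byte of data */

    scan(pe, pe, 0);  /* p already at pe, == check: overrun goes unnoticed     */
    scan(pe, pe, 1);  /* p already at pe, >= check: overrun caught immediately */
    return 0;
}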

Here’s a piece of Ragel code used to consume an attribute in an HTML script tag. The first line says that it should attempt to find zero or more unquoted_attr_char followed by (that’s the :>> concatenation operator) whitespace, a forward slash, or > signifying the end of the tag.

script_consume_attr := ((unquoted_attr_char)* :>> (space|'/'|'>'))
>{ ddctx("script consume_attr"); }
@{ fhold; fgoto script_tag_parse; }
$lerr{ dd("script consume_attr failed");
       fgoto script_consume_attr; };

If an attribute is well-formed, then the Ragel parser moves to the code inside the @{ } block. If the attribute fails to parse (which is the start of the bug we are discussing today) then the $lerr{ } block is used.

For example, in certain circumstances (detailed below) if the web page ended with a broken HTML tag like this:

<script type=

What happens when the parser runs out of characters to parse while consuming an attribute differs depending on whether the buffer currently being parsed is the last buffer or not. If it’s not the last buffer, then there’s no need to use $lerr, as the parser doesn’t know whether an error has occurred: the rest of the attribute may be in the next buffer.

But if this is the last buffer, then the $lerr is executed. Here’s how the code ends up skipping over the end-of-file and running through memory.

The entry point to the parsing function is ngx_http_email_parse_email (the name is historical, it does much more than email parsing).

ngx_int_t ngx_http_email_parse_email(ngx_http_request_t *r, ngx_http_email_ctx_t *ctx) {
    u_char  *p = ctx->pos;
    u_char  *pe = ctx->buf->last;
    u_char  *eof = ctx->buf->last_buf ? pe : NULL;
You can see that p points to the first character in the buffer, pe to the character after the end of the buffer and eof is set to pe if this is the last buffer in the chain (indicated by the last_buf boolean), otherwise it is NULL.
When the old and new parsers are both present during request handling a buffer such as this will be passed to the function above:

(gdb) p *in->buf
$8 = {
  pos = 0x558a2f58be30 "<script type=\"",

  [...]

  last_buf = 1,

  [...]
}

When only the old parser is present, the final buffer containing data looks like this:

(gdb) p *in->buf
$6 = {
  pos = 0x558a238e94f7 "<script type=\"",
  last = 0x558a238e9504 "",

  [...]

  last_buf = 0,

  [...]
}

A final empty buffer (pos and last both NULL and last_buf = 1) will follow that buffer but ngx_http_email_parse_email is not invoked if the buffer is empty.

So, in the case where only the old parser is present, the final buffer that contains data has last_buf set to 0. That means that eof will be NULL. Now when trying to handle script_consume_attr with an unfinished tag at the end of the buffer, the $lerr will not be executed because the parser believes (since last_buf is 0) that there may be more data coming.

The situation is different when both parsers are present: last_buf is 1, eof is set to pe and the $lerr code runs. Here’s the generated code for it:

/* #line 877 "ngx_http_email_filter_parser.rl" */
{ dd("script consume_attr failed");
              {goto st1266;} }
     goto st0;

[...]

st1266:
    if ( ++p == pe )
        goto _test_eof1266;

The parser runs out of characters while trying to perform script_consume_attr, and p will be equal to pe when that happens. Because there’s no fhold (that would have done p--) when the code jumps to st1266, p is incremented and is now past pe.

It then won’t jump to _test_eof1266 (where EOF checking would have been performed) and will carry on past the end of the buffer trying to parse the HTML document.
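
The missing fhold is the crucial step. The following standalone sketch (again a simplified model, not the real generated parser; reenter_state is a hypothetical name) shows how an error action that steps the pointer back before re-entering keeps the ++p == pe test working, while the buggy path pushes p past pe:

/* Model of the $lerr error action re-entering the machine. Ragel's fhold
 * (p--) undoes the last advance so that the following "++p == pe" test in
 * the target state lands exactly on pe. The buggy action omitted the fhold,
 * so the increment pushes p one past pe and the equality test never fires. */
#include <stdio.h>
#include <stddef.h>

static void reenter_state(const char *p, const char *pe, int with_fhold) {
    if (with_fhold) {
        p--;                               /* fhold: step back before re-entering */
    }
    ++p;                                   /* the generated "if ( ++p == pe )" step */
    if (p == pe) {
        printf("end of buffer detected, parsing stops\n");
    } else {
        printf("p is %td byte(s) past pe, parser keeps reading adjacent memory\n",
               (ptrdiff_t)(p - pe));
    }
}

int main(void) {
    char buf[16] = "<script type=\"";      /* buffer ends in a broken tag */
    const char *pe = buf + 14;              /* one past the last data byte */
    const char *p  = pe;                    /* parser ran out of characters here */

    reenter_state(p, pe, 1);  /* with fhold: the end of the buffer is caught */
    reenter_state(p, pe, 0);  /* without fhold (the bug): p steps past pe    */
    return 0;
}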

So, the bug had been dormant for years until the internal feng shui of the buffers passed between NGINX filter modules changed with the introduction of cf-html.
Going bug hunting

Research by IBM in the 1960s and 1970s showed that bugs tend to cluster in what became known as “error-prone modules”. Since we’d identified a nasty pointer overrun in the code generated by Ragel it was prudent to go hunting for other bugs.

Part of the infosec team started fuzzing the generated code to look for other possible pointer overruns. Another team built test cases from malformed web pages found in the wild. A software engineering team began a manual inspection of the generated code looking for problems.

At that point it was decided to add explicit pointer checks to every pointer access in the generated code to prevent any future problem and to log any errors seen in the wild. The errors generated were fed to our global error logging infrastructure for analysis and trending.

#define SAFE_CHAR ({\
    if (!__builtin_expect(p < pe, 1)) {\
        ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "email filter tried to access char past EOF");\
        RESET();\
        output_flat_saved(r, ctx);\
        BUF_STATE(output);\
        return NGX_ERROR;\
    }\
    *p;\
})

And we began seeing log lines like this:

2017/02/19 13:47:34 [crit] 27558#0: *2 email filter tried to access char past EOF while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /malformed-test.html HTTP/1.1"

Every log line indicates an HTTP request that could have leaked private memory. By logging how often the problem was occurring we hoped to get an estimate of the number of times an HTTP request had leaked memory while the bug was present.

In order for the memory to leak the following had to be true:

The final buffer containing data had to finish with a malformed script or img tag
The buffer had to be less than 4k in length (otherwise NGINX would crash)
The customer had to either have Email Obfuscation enabled (because it uses both the old and new parsers as we transition),
… or Automatic HTTPS Rewrites/Server-Side Excludes (which use the new parser) in combination with another Cloudflare feature that uses the old parser,
… and Server-Side Excludes only execute if the client IP has a poor reputation (i.e. it does not work for most visitors).

That explains why the buffer overrun resulting in a leak of memory occurred so infrequently.

Additionally, the Email Obfuscation feature (which uses both parsers and would have enabled the bug to happen on most Cloudflare sites) was only enabled on February 13 (four days before Tavis’ report).

The three features implicated were rolled out as follows. The earliest date memory could have leaked is 2016-09-22.

2016-09-22 Automatic HTTPS Rewrites enabled
2017-01-30 Server-Side Excludes migrated to new parser
2017-02-13 Email Obfuscation partially migrated to new parser
2017-02-18 Google reports problem to Cloudflare and leak is stopped

The greatest potential impact occurred for four days starting on February 13 because Automatic HTTPS Rewrites wasn’t widely used and Server-Side Excludes only activate for malicious IP addresses.

Internal impact of the bug

Cloudflare runs multiple separate processes on the edge machines and these provide process and memory isolation. The memory being leaked was from a process based on NGINX that does HTTP handling. It has a separate heap from processes doing SSL, image re-compression, and caching, which meant that we were quickly able to determine that SSL private keys belonging to our customers could not have been leaked.

However, the memory space being leaked did still contain sensitive information. One obvious piece of information that had leaked was a private key used to secure connections between Cloudflare machines.

When processing HTTP requests for customers’ web sites our edge machines talk to each other within a rack, within a data center, and between data centers for logging, caching, and to retrieve web pages from origin web servers.

In response to heightened concerns about surveillance activities against Internet companies, we decided in 2013 to encrypt all connections between Cloudflare machines to prevent such an attack even if the machines were sitting in the same rack.

The private key leaked was the one used for this machine-to-machine encryption. A small number of secrets used internally at Cloudflare for authentication were also present.

External impact and cache clearing

More concerning was the fact that chunks of in-flight HTTP requests for Cloudflare customers were present in the dumped memory. That meant that information that should have been private could be disclosed.

This included HTTP headers, chunks of POST data (perhaps containing passwords), JSON for API calls, URI parameters, cookies and other sensitive information used for authentication (such as API keys and OAuth tokens).

Because Cloudflare operates a large, shared infrastructure, an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated Cloudflare site.

An additional problem was that Google (and other search engines) had cached some of the leaked memory through their normal crawling and caching processes. We wanted to ensure that this memory was scrubbed from search engine caches before the public disclosure of the problem so that third-parties would not be able to go hunting for sensitive information.

Our natural inclination was to get news of the bug out as quickly as possible, but we felt we had a duty of care to ensure that search engine caches were scrubbed before a public announcement.

The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines.

We also undertook other search expeditions looking for potentially leaked information on sites like Pastebin and did not find anything.

Some lessons

The engineers working on the new HTML parser had been so worried about bugs affecting our service that they had spent hours verifying that it did not contain security problems.

Unfortunately, it was the ancient piece of software that contained a latent security problem, and that problem only showed up as we were in the process of migrating away from it. Our internal infosec team is now undertaking a project to fuzz older software looking for other potential security problems.

Detailed Timeline

We are very grateful to our colleagues at Google for contacting us about the problem and working closely with us through its resolution, all of which occurred without any reports that outside parties had identified the issue or exploited it.

All times are UTC.
2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information
2017-02-18 0032 Cloudflare receives details of bug from Google
2017-02-18 0040 Cross functional team assembles in San Francisco
2017-02-18 0119 Email Obfuscation disabled worldwide
2017-02-18 0122 London team joins
2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide
2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide
2017-02-20 2159 SAFE_CHAR fix deployed globally
2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide
NOTE: This post was updated as new information became available.

withgoogle.com: Test how mobile-friendly your site is

withgoogle.com: Test how mobile-friendly your site is

7s
Loading time: Fair
26%
Estimated visitor loss
(due to the loading time)

Loading time:

Indicates how quickly the visible content of your page is displayed with Chrome on a Moto G4 over a 3G network. The loading time can vary depending on server location, device, browser and third-party apps. If your website contains carousel, overlay or interstitial formats, this can also affect the results. You can test your site’s speed with different settings at webpagetest.org.

Estimated visitor loss:

Indicates how likely it is that a user leaves the web page because it does not load within three seconds. [Source]

Your website does not meet the user experience guidelines. Learn more

http://www.eklausmeier.tk is slower than the top-performing websites in the Business and Industrial category.

OpenResty, Scalable Web Platform by Extending NGINX with Lua

OpenResty, Scalable Web Platform by Extending NGINX with Lua

OpenResty™ is a full-fledged web platform that integrates the standard Nginx core, LuaJIT, many carefully written Lua libraries, lots of high-quality 3rd-party Nginx modules, and most of their external dependencies. It is designed to help developers easily build scalable web applications, web services, and dynamic web gateways.

By taking advantage of various well-designed Nginx modules (most of which are developed by the OpenResty team themselves), OpenResty effectively turns the Nginx server into a powerful web app server, in which web developers can use the Lua programming language to script various existing Nginx C modules and Lua modules and construct extremely high-performance web applications capable of handling 10K to 1000K+ connections on a single box.

OpenResty aims to run your server-side web app completely in the Nginx server, leveraging Nginx’s event model to do non-blocking I/O not only with the HTTP clients, but also with remote backends like MySQL, PostgreSQL, Memcached, and Redis.

Real-world applications of OpenResty range from dynamic web portals and web gateways, web application firewalls, web service platforms for mobile apps/advertising/distributed storage/data analytics, to full-fledged dynamic web applications and web sites. The hardware used to run OpenResty also ranges from very big metals to embedded devices with very limited resources. It is not uncommon for our production users to serve billions of requests daily for millions of active users with just a handful of machines.

OpenResty is not an Nginx fork. It is just a software bundle. Most of the patches applied to the Nginx core in OpenResty have already been submitted to the official Nginx team and most of the patches submitted have also been accepted. We are trying hard not to fork Nginx and always to use the latest best Nginx core from the official Nginx team.

See Components for the complete list of software bundled in OpenResty.

See GettingStarted on how to quickly set up an OpenResty server that can say hello world over HTTP. Or you can go to the Download section to grab OpenResty’s source code tarball directly.

We provide free technical support in the openresty and openresty-en mailing lists. See Community.

Derek Bruening

Derek Bruening

Palais des Papes

I’m the primary author of DynamoRIO.

I was one of the founders of Determina.

I completed my PhD in the Compilers group in the Computer Science and Artificial Intelligence Laboratory at MIT.

Projects I’ve worked on

Publications

Clustered/stacked bar graph generation script

Making Windows usable

Information on installing Linux on a Dell Inspiron 8000

Solar panels

My board game collection

BurningCutlery

stackoverflow.com: How to find primary key on a table in DB2?

stackoverflow.com: How to find primary key on a table in DB2?

DB2 mainframe:

SELECT
     RTRIM(name)    AS index_name
    ,RTRIM(creator) AS index_schema
    ,uniquerule
    ,clustering
FROM sysibm.sysindexes
WHERE tbname     = @table
  AND tbcreator  = @schema
  AND clustering = 'Y'

Then, to see the actual columns in that index, you perform this query:

SELECT colname AS name
FROM sysibm.sysindexes a
JOIN sysibm.syskeys b
    ON a.name      = b.ixname
    AND a.creator  = b.ixcreator
WHERE a.name       = @index_name
    AND a.creator  = @index_schema
ORDER BY colseq

Go Resources

Go Resources

Books

An Introduction to Programming in Go
A short, concise introduction to computer programming using the language Go. Designed by Google, Go is a general purpose programming language with modern features, clean syntax and a robust well-documented common library, making it an ideal language to learn as your first programming language.

Introducing Go
Perfect for beginners familiar with programming basics, this hands-on guide provides an easy introduction to Go, the general-purpose programming language from Google. Author Caleb Doxsey covers the language’s core features with step-by-step instructions and exercises in each chapter to help you practice what you learn.
Guides
Machine Setup
Windows
OSX
Bootcamp
A 4-week instructional series that covers the material in An Introduction to Programming in Go as well as the basics of server-side web development and Google App Engine.
© 2017 Caleb Doxsey. Cover Art: © 2012 Abigail Doxsey Anderson. All Rights Reserved.

Caleb Doxsey

Caleb Doxsey

Hi I’m Caleb Doxsey and welcome to my website. I’m a software developer interested in a lot of different languages and technologies.

If you’d like to contact me, you can email me here: caleb@doxsey.net.

Blog

doxsey.net/blog
Code

github.com/calebdoxsey
github.com/badgerodon
Experience

2015 – DataDog: Software Engineer
2013 – Gnip: Software Developer
2012 – SendHub: Web Developer, Android Developer
2011 – Trada: Web Developer
2006 – Markit on Demand: Web Developer
Books

2016 – Introducing Go: Build Reliable, Scalable Programs, O’Reilly Media
2012 – An Introduction to Programming in Go, self-published
Education

2006 – Auburn University: B.S. in Computer Science
Some personal stuff: I’m an evangelical, born-again Christian with a more than passing interest in theology and philosophy (and honestly just about any -y). I lean conservative, play guitar (classical and acoustic), enjoy video and board games, and blog haphazardly here.

© 2016 Caleb Doxsey (Email)