Clear Linux Project for Intel Architecture

The Clear Linux* Project for Intel® Architecture is building a Linux OS distribution for various cloud use cases. The goal of Clear Linux OS is to showcase the best of Intel Architecture technology, from low-level kernel features to more complex items that span the entire operating system stack.


Clear Linux* OS for Intel® Architecture is the first Linux distribution that supports autoproxy. This allows the OS to discover a Proxy Auto-Config (PAC) script and use it to automatically resolve which proxy is needed for a given connection. Autoproxy enables end users, both internal and external to Intel, to use Clear Linux OS for Intel Architecture inside any proxy…
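By way of illustration, a PAC script is just a JavaScript file exposing a `FindProxyForURL` function that the OS calls for each connection. A minimal sketch (the hostnames here are hypothetical, not from Clear Linux):

```javascript
// The OS or browser calls FindProxyForURL(url, host) for each request.
function FindProxyForURL(url, host) {
    // Hosts on the internal network are reached directly.
    if (shExpMatch(host, "*.internal.example.com"))
        return "DIRECT";
    // Everything else goes through the (hypothetical) corporate proxy.
    return "PROXY proxy.example.com:8080";
}
```

Autoproxy's job is to locate such a script automatically (e.g. via WPAD) so the user never has to configure proxy settings by hand.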

Function Multiversioning (FMV)

Imagine that you are developing software that could run on multiple platforms. At the end of the day, it could be running anywhere: on a server, or on a home computer. While Intel® architecture provides many powerful instruction set extensions, it is challenging for developers to generate code that takes advantage of these capabilities. Currently, developers have these choices: write multiple…


In support of the goal to provide an agile Linux* distribution that rapidly detects and responds to quality issues in the field, Clear Linux for Intel® Architecture includes a telemetry solution, which notes events of interest and reports them back to the development team. The solution adheres to Intel’s privacy policies regarding the collection and use of Personally Identifiable Information (PII…


For the longest time, compilers have been producing optimized binaries. However, in today’s world it can often be daunting to know exactly which optimizations, among the more than 80 basic optimization options, to choose, and which of those will really benefit you. In the Clear Linux* Project for Intel® Architecture we use a lot of these optimizations, and one in particular…

Clear Containers

Containers are immensely popular in the cloud world. With Clear Containers we’re working on a way to improve security of containers by using Intel® Virtualization Technology (Intel® VT). We set out to build Clear Containers by leveraging the isolation of virtual-machine technology along with the deployment benefits of containers. As part of this, we let go of the “generic PC hardware”…


You just made a mistake configuring OpenStack* on your system, and out of frustration ran the following commands on your Linux distro as root:

# rm -rf /etc /var
# reboot

What do you think would happen? How long would it take you to recover? Without backups? With Clear Linux* OS for Intel® Architecture, the system will boot correctly! In fact, this will effectively perform a “…

Software update

Linux*-based operating systems contain the code of several hundred, if not thousands, of open source projects. To make this manageable, distributions use a concept called “packages” to configure and compile the source code of these projects into binaries, which can then be logically installed. Many distributions then combine these compiled binaries into so-called packages, resolving dependencies…

All debug information, all the time

Debug information is generated when a program is compiled from source code into a binary. Programs like the GDB debugger use the information to map machine instructions back to the original source code. Developers can then debug and analyze their programs by stepping through original source code, rather than going through the much lower level (and harder to understand) CPU instructions one by…

Julia Evans Blog

Hi! I’m Julia.

I live in Montreal and work on Stripe’s machine learning team.
This blog is mostly about having fun with systems programming, with lots of forays into other areas. There’s a list of my favorite posts, as well as some projects I’ve worked on.

I spent the fall of 2013 at Hacker School, which houses the best programming community I’ve seen anywhere. I wrote down what I did every day while there, if you want to know what it’s like.

In the last year or two I’ve discovered that I like organizing community events and giving talks about programming. A few things I’ve worked on:

Montreal All-Girl Hack Night with my awesome friend Monica
PyLadies Montreal.
!!Con, a 2-day conference about what excites us about programming, where all the talks are lightning talks (with several amazing people)

Linux kernel bypass

Unfortunately the speed of vanilla Linux kernel networking is not sufficient for more specialized workloads. For example, here at CloudFlare, we are constantly dealing with large packet floods. Vanilla Linux can do only about 1M pps. This is not enough in our environment, especially since the network cards are capable of handling a much higher throughput. Modern 10Gbps NICs can usually process at least 10M pps.

Let’s prepare a small experiment to convince you that working around Linux is indeed necessary. Let’s see how many packets can be handled by the kernel under perfect conditions. Passing packets to userspace is costly, so instead let’s try to drop them as soon as they leave the network driver code. To my knowledge the fastest way to drop packets in Linux, without hacking the kernel sources, is by placing a DROP rule in the PREROUTING iptables chain:
$ sudo iptables -t raw -I PREROUTING -p udp --dport 4321 --dst -j DROP
$ sudo ethtool -X eth2 weight 1
$ watch 'ethtool -S eth2|grep rx'
rx_packets: 12.2m/s
rx-0.rx_packets: 1.4m/s
rx-1.rx_packets: 0/s

The ethtool statistics above show that the network card receives packets at a line rate of 12M packets per second. By manipulating an indirection table on the NIC with ethtool -X, we direct all the packets to RX queue #0. As we can see, the kernel is able to process 1.4M pps on that queue with a single CPU.
Processing 1.4M pps on a single core is certainly a very good result, but unfortunately the stack doesn’t scale. When the packets hit many cores the numbers drop sharply. Let’s see the numbers when we direct packets to four RX queues:
$ sudo ethtool -X eth2 weight 1 1 1 1
$ watch 'ethtool -S eth2|grep rx'
rx_packets: 12.1m/s
rx-0.rx_packets: 477.8k/s
rx-1.rx_packets: 447.5k/s
rx-2.rx_packets: 482.6k/s
rx-3.rx_packets: 455.9k/s
Now we process only 480k pps per core. This is bad news. Even optimistically assuming the performance won’t drop further when adding more cores, we would still need more than 20 CPUs (12.1M ÷ 480k ≈ 25) to handle packets at line rate. So the kernel is not going to work.

Solarflare network cards support OpenOnload, a magical network accelerator. It achieves kernel bypass by implementing the network stack in userspace and using LD_PRELOAD to override the network syscalls of the target program. For low-level access to the network card, OpenOnload relies on the “EF_VI” library. This library can be used directly and is well documented.
EF_VI, being a proprietary library, can only be used on Solarflare NICs, but you may wonder how it actually works behind the scenes. It turns out EF_VI reuses the usual NIC features in a very smart way.
Under the hood each EF_VI program is granted access to a dedicated RX queue, hidden from the kernel. By default the queue receives no packets, until you create an EF_VI “filter”. This filter is nothing more than a hidden flow steering rule. You won’t see it in ethtool -n, but the rule does in fact exist on the network card. Having allocated an RX queue and managed flow steering rules, the only remaining task for EF_VI is to provide a userspace API for accessing the queue.

Ekiga SoftPhone, Video Conferencing and Instant Messenger

Ekiga (formerly known as GnomeMeeting) is an open source softphone, video conferencing, and instant messaging application for the Internet.

It supports HD sound quality and video up to DVD size and quality.

It is interoperable with many other standards-compliant software applications, hardware devices, and service providers, as it uses both major telephony standards (SIP and H.323).

Ekiga was first released back in 2001 under the GnomeMeeting name, as a graduation thesis. In 2001, voice over IP, IP telephony, and videoconferencing were not the widespread technologies they are now. The GNU/Linux desktop was in its infancy, to say nothing of its multimedia capabilities. Most webcam drivers were buggy, ALSA had not been released yet, and full-duplex audio was difficult to achieve. General performance could also be an issue, especially when the most efficient codecs were closed source. Generally speaking, the technology was not ready yet, but Ekiga was already kicking!

Nowadays, everyone knows about voice over IP and videoconferencing. However, proprietary programs using closed communication protocols dominate the market. Few people know that alternatives exist, and even fewer know that standard tools allow not only voice over IP and videoconferencing but also IP telephony. The purpose of Ekiga has always been to be a mix between a simple chat application and a professional IP telephony tool for the GNU/Linux desktop. As a SIP softphone, it can completely replace hardware SIP IP phones, and many people use it as such.

With the upcoming 5.0 release, we were very ambitious. Most of the code has been reorganized, some parts have been completely rewritten. Things are getting simplified to attract new contributors (I am now 15 years older than when I started coding on Ekiga). The user interface is completely new and is now using cutting-edge technologies like GTK+3 or even Clutter to display video. New codecs have been added, and new features are still being added regularly (I would like to complete TLS and SRTP support before the release as well as MSRP chat).

CloudFlare Railgun Web Cache

Railgun accelerates the connection between each CloudFlare data center and an origin server so that requests that cannot be served from the CloudFlare cache are nevertheless served very fast.

For example, it’s hard to cache the New York Times home page for any length of time because the news changes and being up to date is essential to their business. And for a personalized web site like Facebook each user sees a different page even though the URL may be the same for different users.

Experiments at CloudFlare have revealed similar change rates across the web. For example, one popular site’s home page changes by about 2.15% over five minutes and 3.16% over an hour. The New York Times home page changes by about 0.6% over five minutes and 3% over an hour. BBC News changes by about 0.4% over five minutes and 2% over an hour.

Although the dynamic web is not cacheable, it’s also not changing quickly. That means that from moment to moment there’s only a small change between versions of a page. Railgun uses this fact to achieve very high rates of compression. This is very similar to how video compression looks for changes from frame to frame; Railgun looks for changes on a page from download to download.

Railgun Listener is a single executable whose only dependency is a running Memcache instance. It runs on 64-bit Linux and BSD systems as a daemon.

The Listener requires a single port open onto the Internet for the Railgun protocol so that CloudFlare data centers can contact it. And it requires access to the website via HTTP and HTTPS. Ideally, the Listener would be placed on a server with fast access to the Internet and low latency.

Installation is simply a matter of installing an RPM or .deb file.

Greg Kroah-Hartman, Linux kernel developer

My laptop is a MacBook Pro Retina. My workstation is an old pieced-together Intel machine, with the parts selected for size and lack of noise more than anything else, and two large monitors connected. Both the laptop and the workstation have only SSD drives in them. I have an old Dell workstation as a build machine for kernel testing, with an extremely fast Micron flash PCI drive in it for building kernels. Thanks to Amazon’s generosity, I’ve been doing a lot more kernel build testing on their AWS systems, utilizing a 32-processor, 64 GB virtual machine that allows me to build multiple kernels at the same time, all on a RAM disk, in minutes.

Linux everywhere, of course. On the desktop I am running the openSUSE Tumbleweed distribution (a rolling version of openSUSE that provides the latest stable packages), as I’m the one responsible for that distribution. I run Gentoo Linux on my servers and my build machine. I am trying out Arch Linux on my MacBook, as I had heard good things about it and wanted to see for myself. So far I’m impressed, and given that openSUSE doesn’t work on the MacBook Pro yet, I’m sticking with Arch for now.

For daily use, I live in mutt and vim for email and editing. I use offlineimap for syncing email across all of the different systems I use. To send email I rely on msmtp, which can queue emails when I do not have any connectivity (like on airplanes). I use git for source code control, both for dealing with Linux kernel patches and for more mundane stuff (email archiving, configuration files, etc.). I use quilt on top of git for handling the Linux kernel stable patch set, as the workflow there does not lend itself to a pure git environment.

For a desktop environment, on my workstation I use GNOME 3, despite my complaining about it all the time. On my laptop I’ve converted to using the i3 window manager on top of the GNOME 3 session handling logic. Give me a few more weeks and I might just move my desktop over to that environment as well; I’m finding that i3 is really good for my use cases (lots of terminal windows open, virtual screens, and fast keyboard navigation of everything). For a web browser I switch between Firefox and Chrome every few weeks for no valid reason.

I want to be able to do a full kernel build in less than a minute, on a machine that doesn’t require wiring it to my home dryer’s power outlet and doesn’t sound like a small airplane taking off in the basement. The Linux kernel code base keeps growing at a constant rate, yet kernel build times should still decrease with new CPU releases; in the past few years, though, that hasn’t seemed to happen.

Junio C Hamano, Developer (git)

I maintain Git, a distributed version control system. Linus Torvalds (of Linux fame) started the project in April 2005, and it quickly grew and gained many contributors, of which I was one. Linus passed the project to me later that year, and I’ve been running it ever since. We’ll have the 10-year anniversary this coming April.

I use the Secure Shell SSH client on my Chromebooks to log in to my primary development environment, which runs some version of Ubuntu Linux that the IT folks at Google manage for me.

I have a set of long-running sessions in screen. In one of its windows I run Emacs, in which I use the Gnus newsreader, and there I spend most of my day, exchanging emails with the project participants. Since the project I work on is a command-line tool primarily implemented in C, I use the usual CLI development tools: make, gcc, gdb, etc. The documentation is in AsciiDoc. And of course, the history of the source code is kept in Git.

I use GnuCash to keep track of my checking account and credit card usage. This program unfortunately does not natively run on Chromebooks, and that’s why I have a Vizio Ultrabook that runs either Ubuntu or Windows. However, I recently started to experiment with crouton, which allows me to install a chrooted Ubuntu (or other variants of Linux) on a Chromebook, and I can use GnuCash there. So far, this set-up seems to be working well enough for me, so I may be able to lose the Vizio someday.

I use Calibre to manage my e-books, installing them to and removing them from my Nook e-Readers. I haven’t experimented with this in the crouton environment yet, though. When I don’t have my Nook with me, I use either Google Play Books (on Android) or Google Play Books (on Chromebook or other laptops) for my reading.

bcache and/vs. LVM cache

While SSDs are much faster than HDDs, especially for random I/O operations, they are much more expensive and thus not so great for storing big amounts of data. As usual in today’s world, the key word for a win-win solution is “hybrid”: in this case, a combination of HDD and SSD (or just their technologies in a single piece of hardware), using a lot of HDD-based space together with a small SSD-based space as a cache that provides fast access to the (typically) most frequently used data.

bcache, or block (level) cache, is a software cache technology developed and maintained as part of the Linux kernel codebase which, as its name suggests, provides cache functionality on top of an arbitrary pair of block devices.