Website Optimization and Results

My Attempts at Website Optimisation and the Results

Having decided to move to a static site (using Hugo), I thought I would try to make a few improvements to boost page loading times, and share both my methodology and the results.

Measuring Progress

I decided to use three different services to track my progress:

Google’s PageSpeed Insights
Web Page Test
GT Metrix

The site would remain on the original host, part of my package supplied by Vidahost, so DNS and server times should be consistent.

Round 1

Removing Unused CSS Styles

I started with a simple one: removing unused CSS styles from the stylesheet. You always end up with unused ones over time, more so in my case as I think I adapted it from a WordPress theme of some sort.

Method: I used the Audit tab in Chrome’s Developer Tools to do this.

Change Icon Font Social Icons to SVG

Fairly straightforward. I had a bunch of links to various social networks that were using the Socialicious icon font. Better than images, yes, but it still involved an HTTP request and a lot of CSS to get them to appear.

Method: I tracked down some SVG code to display them instead and stuck it inline, removing a bunch of CSS and a resource request.
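For illustration, an inline SVG social icon can look something like this (the link target is hypothetical and the path data is a placeholder, not a real icon shape):

```html
<!-- Inline SVG replaces the icon-font glyph: no extra request, no font CSS.
     The path data below is a placeholder ("…"), not a real icon. -->
<a href="https://twitter.com/example" aria-label="Twitter">
  <svg width="24" height="24" viewBox="0 0 24 24" fill="currentColor">
    <path d="M23.9 4.6 …"/>
  </svg>
</a>
```

Using `fill="currentColor"` lets the icon inherit the link’s text colour, which removes most of the per-icon styling the font approach needed.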

Optimise the Images

I had a massive background image and several book covers where the browser was scaling the full-size originals down to the set height and width. By producing them at the correct sizes I could reduce the amount downloaded and therefore speed up the load.

Method: I used Imagemagick to create images at the correct size and resized the background image based on the most popular browser resolution. Then I used File Optimizer to improve them further.
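As a sketch of the ImageMagick step (the filenames and target dimensions here are examples, not the actual ones I used):

```shell
# Resize a book cover to its exact display size (dimensions are examples).
convert cover-original.jpg -resize 150x230 cover.jpg

# Scale the background down to a common screen width (1366px assumed here);
# with only a width given, the height follows the original aspect ratio.
convert background-original.jpg -resize 1366x background.jpg
```

Running the results through a lossless optimizer afterwards squeezes out metadata and redundant encoding without changing the pixels.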

Minify CSS

Having cleared out the unused styles I wanted to remove all the whitespace to shrink the file size even further.

Method: I used CSS Minifier to shrink it.
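Minification itself is conceptually simple. Here is a toy sketch of what such a tool does (a naive regex approach for illustration, not the actual CSS Minifier implementation, and not safe for every CSS edge case):

```python
import re

def minify_css(css: str) -> str:
    """Naive CSS minifier: drop comments and collapse whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # strip /* comments */
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace runs
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # tighten around punctuation
    return css.strip()

print(minify_css("body {\n  color: red;  /* theme */\n}"))
```

A real minifier goes further (shortening colour values, dropping trailing semicolons, merging rules), so for production use an established tool.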

Minify JavaScript

I did the same for the JavaScript (although most of it was an analytics script that I couldn’t touch).

Method: I used JS Compress to shrink both the code in files and inline.

Round 2

Enable Gzip Compression

After the first round, which focused on the site content, I turned my attention to the server. The site was reportedly running on Apache.

Method: I added most of the rules in this GT Metrix article to my htaccess file.
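The core of those rules looks something like this (a subset only, for illustration; assumes mod_deflate is enabled on the host):

```apacheconf
# Compress text-based responses with mod_deflate
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/plain text/css
  AddOutputFilterByType DEFLATE application/javascript application/json
  AddOutputFilterByType DEFLATE image/svg+xml
</IfModule>
```

Images and other already-compressed formats are deliberately left out, as re-compressing them wastes CPU for little gain.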

Enable Caching

None of the content had any caching rules applied, yet it was unlikely to change often (if at all).

Method: I added most of the rules in this Varvy article to my htaccess file (making them all one month).
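In .htaccess terms that boils down to something like the following (again a subset, for illustration; assumes mod_expires is available):

```apacheconf
# Cache static assets for one month via mod_expires
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType image/jpeg "access plus 1 month"
</IfModule>
```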

The Stats

                    Original WP Site   Static Site   After Round 1   After Round 2
PS Rank (Mobile)    55/100             51/100        70/100          88/100
PS Rank (Desktop)   64/100             51/100        84/100          93/100
WPT First Byte      2.179s             0.202s        0.161s          0.156s
WPT Fully Loaded    9.789s             5.117s        2.723s          2.892s
WPT Bytes (KB)      1,987              2,453         1,110           1,093
GTMet YSlow         73%                79%           80%             88%
GTMet Load Time     3.9s               5.2s          2.8s            2.5s
GTMet Size (MB)     1.93               2.38          1.07            1.05
GTMet Requests      26                 16            15              15

The Results

Some things are fairly obvious. The number of requests dropped immediately on moving to the static site (I blame plugins for this), the time to first byte improved massively, and, despite less optimised images, the page load time was nearly halved. Even so, my PageSpeed score dropped.

There were dramatic improvements after round one too, with most of the load stats dropping by half again. It is well worth optimising your images and minifying everything.

Round two, which largely consisted of server-side changes, had some benefit, but far less.

So, moving to a static site alone brings big improvements. With no database queries to run or variables to calculate, the server can respond almost instantly: the time to first byte dropped by over 90%!

It’s well worth optimising your content too, both images and external files (and probably the HTML files themselves, although I didn’t try that). Aside from load time, it’ll reduce your bandwidth bill too.

On the other hand, the server-side changes appear to have had minimal impact. They’re great for tweaking that extra couple of percent, but far less essential than the various ranking algorithms would have you believe. At least that’s how it appears.

One thing I mean to try is a CDN, to see what impact that would have.

There you have it: a simple series of changes that can massively improve your site’s performance. Don’t despair if you’re stuck on WordPress or some other platform; there are plenty of plugins that will do much of this for you. If you can go static, though, it appears worth it.

Of course, if this site was being updated daily I’d need to find a way to automate these tasks (Grunt or Gulp quite probably) so they get done automatically when I want to publish.

3rd Feb 2016

Google Fonts

Making the web more beautiful, fast, and open through great typography

We believe the best way to bring personality and performance to websites and products is through great design and technology. Our goal is to make that process simple, by offering an intuitive and robust directory of open source designer web fonts. By using our extensive catalog, you can share and integrate typography into any design project seamlessly—no matter where you are in the world.

Discover Great Typography

Our font directory places typography front and center, inviting users to explore, sort, and test fonts for use in more than 135 languages. We showcase individual type designers and foundries, giving you valuable information about the people and their processes, as well as analytics on usage and demographics. Our series of thematic collections helps you discover new fonts that have been vetted and organized by our team of designers, engineers, and collaborators, and our default sort organizes fonts based on popularity, trends, and your geographic location. You can also create your own highly customized collections by filtering families, weights, and scripts, plus test color themes, and review sample copy. Collections can be shared, making it easy to collaborate on projects and ensure typography is optimized and streamlined throughout the design and engineering process.

Collaborate with Open Source

All the fonts in our catalog are free and open source, making beautiful type accessible to anyone for any project. This means you can share favorites and collaborate easily with friends and colleagues. Google Fonts takes care of all the licensing and hosting, ensuring that the latest and greatest version of any font is available to everyone.

Make the Web Faster

Google Fonts makes product and web pages run faster by safely caching fonts without compromising users’ privacy or security. Our cross-site caching is designed so that you only need to load a font once, with any website, and we’ll use that same cached font on any other website that uses Google Fonts.

Using the code generated by Google Fonts, our servers will automatically send the smallest possible file to every user based on the technologies that their browser supports. For example, we use WOFF 2.0 compression when available. This makes the web faster for all users—particularly in areas where bandwidth and connectivity are an issue. Now everyone can enjoy the same quality and design integrity in their products and web pages, no matter where they are in the world.
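The generated code amounts to a single stylesheet link; a typical embed looks like this (the family is chosen here as an example):

```html
<!-- One request to the Fonts API; the CSS it returns points at the
     smallest font format the requesting browser supports. -->
<link href="https://fonts.googleapis.com/css?family=Merriweather" rel="stylesheet">
<style>
  body { font-family: 'Merriweather', serif; }
</style>
```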

Join our community

We are working with designers around the world to produce best-in-class typeface designs that are made for the web, and because we are open source, we can release early access trials to our community for testing and feedback.



Merriweather

Merriweather was designed to be a text face that is pleasant to read on screens. It features a very large x height, slightly condensed letterforms, a mild diagonal stress, sturdy serifs and open forms.
There is also Merriweather Sans, a sans-serif version which closely harmonizes with the weights and styles of this serif family.

NATS at Netlify – New Possibilities for Ultra-fast Web Content Publishing

Netlify is the leading platform for deploying high performance websites and applications. The traditional way of making websites is being disrupted by technologies like static site generators, build automation, and CDN hosting. Netlify is building the modern day platform that developers and companies use to manage and publish their content online. Launched in March 2015, Netlify already serves close to a billion page views per month for thousands of developers and clients such as WeWork, Wikia, Sequoia Capital, Uber, and Vice Media.

At Netlify I am the Head of Infrastructure. That sounds like a devops job (and some days it is), but it means that I build out the backend platform that powers our 70,000+ sites. Our platform spans most of the cloud providers – Rackspace, AWS, Google Cloud Platform, Digital Ocean, and our own anycast network. We do this to leverage each provider’s strength and nimbly move between them when we face problems such as provider outages or DDoS. We use Go, Ruby, C/C++, RabbitMQ, ATS, Ansible (playbook here), MongoDB, and – of course – NATS.

Our data plane is designed to be where our services dump data such as metrics and log messages; other interested parties can hook up and consume the stream. We had been debating between RabbitMQ and NATS for the message bus. We already use RabbitMQ for our command and control plane, but have experienced some administration headaches and cumbersome client code. In addition, we were concerned about throughput and didn’t need the enterprise messaging features (e.g. topic durability, guaranteed delivery) that RabbitMQ provides (and NATS now has via NATS Streaming). The decision to use NATS came down to its performance, easy setup, and clean client code.

Netlify NATS Architecture

For a concrete example, let’s look at how we built our logging framework. We have two types of services, ones that we wrote and ones we use (e.g. ATS). For the ones we wrote we have standardized to log with logrus with a nats hook. For services where we can’t edit the code we use a log tailer to dump those logs to NATS. Now that the data is flowing through NATS we have a handful of services that act on that data. One such service is elastinats; it listens to different channels and then pushes them to Elasticsearch. Now we have a searchable, unified view of our platform. This has been immensely helpful in detecting and diagnosing problems that come up when running a large distributed system.
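The publishing side of that pipeline is pleasantly small. Here is a minimal sketch using the nats.go client (the subject name and payload are illustrative, this is not Netlify’s actual code, and it assumes a broker running at the default URL):

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a NATS server (assumed to be running at the default URL).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Fire-and-forget publish: a service drops a log line onto a subject;
	// consumers (e.g. an Elasticsearch forwarder) subscribe independently.
	err = nc.Publish("logs.app", []byte(`{"level":"info","msg":"deploy finished"}`))
	if err != nil {
		log.Fatal(err)
	}
	nc.Flush()
}
```

The decoupling is the point: the publisher neither knows nor cares how many consumers are attached to the subject.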

When building out this system I uncovered a little bug. After confirming that it was indeed not my code – which had a non-zero chance of being the culprit – I reached out to the community over the mailing list, was quickly added to Slack (you can request an invite here), and now I am part of the community. Community engineers and the NATS team jumped on the issue and hammered out a solution in a few days. This quick turnaround from bug discovery to fix confirmed that I had chosen the right messaging system for the next iteration of our architecture.

Getting access to our own data via NATS was the first step; now it is a question of what we can dream up. At Netlify we are very excited about all the ideas we’ve had and the new features this is going to enable. We are also very passionate about open source, and we look forward to giving back to the NATS community.

If you have any questions or would like to know more, feel free to find me in NATS Slack, or on Twitter!

Verified CDN Usage Statistics — Statistics for websites using Verified CDN technologies

About Verified CDN

Verified CDNs are content delivery networks that websites use as hosting services for their own content.

A content delivery network provides several benefits. First, it lets a website offload ancillary files onto another server, reducing the load on its own web server and, in theory, speeding up the site. Second, a CDN is normally geographically dispersed, serving each visitor a regionally local copy of the data: a user in Japan receives an image from a server in Japan rather than requesting it from the USA, if that is where the media originally lives. CDNs are also well suited to large distributions of data, such as web video and large files, as they are better equipped to handle the volume being requested.

The CDN most detected by BuiltWith is currently Akamai, one of the early pioneers of the edge network: geographically dispersed content delivery.

Brian Beej Hall

**** BEEJ.US 64 WEBPAGE V2 ****


LOAD “$”,8

My tech blog
Beej’s Guides—including the world-infamous Network Programming Guide
My Tech Links (Google+ page)
Join the Bend Hackers Guild
The Pirate Image Archive
Using a Brother HL-2040 printer with Ghostscript
Lava Beds National Monument
Constitution vs. Guerrière
Constitution vs. Java—Commodore Bainbridge was a cousin of mine
My modest github account—ever expanding
goatee—file includer and macroish processor
goatbrot—command-line Mandelbrot Set generation
bgitsh—access control wrapper for muxing users into the same git ssh account
FlickrAPI—Python Flickr API; I wrote a version of this before handing it off
Slackware Stuff—TIGER Census Data Manipulation in Python
genmaze—quick and dirty text and SVG maze generator written in Python
SLAG—interactive fiction invisiclues
The Chico Hackers Guild—long defunct, just a piece of Chico non-history
Graffiti Central Archive—vector SVG graffiti graphics
The Moria Page!
Internet Pizza Server
Motorcycle stuff
GPS and Geocaching command line tools
Commodore 64 BDF font for the X Window System
Scuba dive flag image generator
Double Decker Pizza Ex-Employee Register
My raytraced desk
A raytraced sphere thing
Maps created from USGS data
The US Bill of Rights
Pirate Bartholomew Roberts’ Articles
Lincoln Highway stuff
My Vanity Page
My Resume
My videos
My photos
My LinkedIn profile
My Old Homepage—for posterity
10 PRINT “”
20 GOTO 10

Beej’s Guide to Unix IPC

Beej’s Guide to Unix IPC
Brian “Beej Jorgensen” Hall

Version 1.1.3
December 1, 2015
Copyright © 2015 Brian “Beej Jorgensen” Hall

1. Intro
1.1. Audience
1.2. Platform and Compiler
1.3. Official Homepage
1.4. Email Policy
1.5. Mirroring
1.6. Note for Translators
1.7. Copyright and Distribution
2. A fork() Primer
2.1. “Seek ye the Gorge of Eternal Peril”
2.2. “I’m mentally prepared! Give me The Button!”
2.3. Summary
3. Signals
3.1. Catching Signals for Fun and Profit!
3.2. The Handler is not Omnipotent
3.3. What about signal()
3.4. Some signals to make you popular
3.5. What I have Glossed Over
4. Pipes
4.1. “These pipes are clean!”
4.2. fork() and pipe()—you have the power!
4.3. The search for Pipe as we know it
4.4. Summary
5. FIFOs
5.1. A New FIFO is Born
5.2. Producers and Consumers
5.4. Concluding Notes
6. File Locking
6.1. Setting a lock
6.2. Clearing a lock
6.3. A demo program
6.4. Summary
7. Message Queues
7.1. Where’s my queue?
7.2. “Are you the Key Master?”
7.3. Sending to the queue
7.4. Receiving from the queue
7.5. Destroying a message queue
7.6. Sample programs, anyone?
7.7. Summary
8. Semaphores
8.1. Grabbing some semaphores
8.2. Controlling your semaphores with semctl()
8.3. semop(): Atomic power!
8.4. Destroying a semaphore
8.5. Sample programs
8.6. Summary
9. Shared Memory Segments
9.1. Creating the segment and connecting
9.2. Attach me—getting a pointer to the segment
9.3. Reading and Writing
9.4. Detaching from and deleting segments
9.5. Concurrency
9.6. Sample code
10. Memory Mapped Files
10.1. Mapmaker
10.2. Unmapping the file
10.3. Concurrency, again?!
10.4. A simple sample
10.5. Summary
11. Unix Sockets
11.1. Overview
11.2. What to do to be a Server
11.3. What to do to be a client
11.4. socketpair()—quick full-duplex pipes
12. More IPC Resources
12.1. Books
12.2. Other online documentation
12.3. Linux man pages