High Performance Browser Networking

Performance is a feature. This book provides a hands-on overview of what every web developer needs to know about the various types of networks (WiFi, 3G/4G), transport protocols (UDP, TCP, and TLS), application protocols (HTTP/1.1, HTTP/2), and APIs available in the browser (XHR, WebSocket, WebRTC, and more) to deliver the best—fast, reliable, and resilient—user experience.

Table of Contents
Networking 101
Primer on Latency and Bandwidth
Speed Is a Feature
The Many Components of Latency
Speed of Light and Propagation Latency
Last-Mile Latency
Bandwidth in Core Networks
Bandwidth at the Network Edge
Delivering Higher Bandwidth and Lower Latencies
Building Blocks of TCP
Three-Way Handshake
Congestion Avoidance and Control
Bandwidth-Delay Product
Head-of-Line Blocking
Optimizing for TCP
Building Blocks of UDP
Null Protocol Services
UDP and Network Address Translators
Optimizing for UDP
Transport Layer Security (TLS)
Encryption, Authentication, and Integrity
HTTPS Everywhere
TLS Handshake
TLS Session Resumption
Chain of Trust and Certificate Authorities
Certificate Revocation
TLS Record Protocol
Optimizing for TLS
Testing and Verification
Performance of Wireless Networks
Introduction to Wireless Networks
Ubiquitous Connectivity
Types of Wireless Networks
Performance Fundamentals of Wireless Networks
Measuring Real-World Wireless Performance
WiFi
From Ethernet to a Wireless LAN
WiFi Standards and Features
Measuring and Optimizing WiFi Performance
Optimizing for WiFi Networks
Mobile Networks
Brief History of the G’s
Device Features and Capabilities
Radio Resource Controller (RRC)
End-to-End Carrier Architecture
Packet Flow in a Mobile Network
Heterogeneous Networks (HetNets)
Real-World 3G, 4G, and WiFi Performance
Optimizing for Mobile Networks
Preserve Battery Power
Eliminate Periodic and Inefficient Data Transfers
Anticipate Network Latency Overhead
Design for Variable Network Interface Availability
Burst Your Data and Return to Idle
Offload to WiFi Networks
Apply Protocol and Application Best Practices
HTTP
Brief History of HTTP
HTTP 0.9: The One-Line Protocol
HTTP/1.0: Rapid Growth and Informational RFC
HTTP/1.1: Internet Standard
HTTP/2: Improving Transport Performance
Primer on Web Performance
Hypertext, Web Pages, and Web Applications
Anatomy of a Modern Web Application
Performance Pillars: Computing, Rendering, Networking
Synthetic and Real-User Performance Measurement
Browser Optimization
HTTP/1.X
Benefits of Keepalive Connections
HTTP Pipelining
Using Multiple TCP Connections
Domain Sharding
Measuring and Controlling Protocol Overhead
Concatenation and Spriting
Resource Inlining
HTTP/2
Brief History of SPDY and HTTP/2
Design and Technical Goals
Binary Framing Layer
Streams, Messages, and Frames
Request and Response Multiplexing
Stream Prioritization
One Connection Per Origin
Flow Control
Server Push
Header Compression
Upgrading to HTTP/2
Brief Introduction to Binary Framing
Optimizing Application Delivery
Optimizing Physical and Transport Layers
Evergreen Performance Best Practices
Optimizing for HTTP/1.x
Optimizing for HTTP/2
Browser APIs and Protocols
Primer on Browser Networking
Connection Management and Optimization
Network Security and Sandboxing
Resource and Client State Caching
Application APIs and Protocols
XMLHttpRequest
Brief History of XHR
Cross-Origin Resource Sharing (CORS)
Downloading Data with XHR
Uploading Data with XHR
Monitoring Download and Upload Progress
Streaming Data with XHR
Real-Time Notifications and Delivery
XHR Use Cases and Performance
Server-Sent Events (SSE)
EventSource API
Event Stream Protocol
SSE Use Cases and Performance
WebSocket
WebSocket API
WebSocket Protocol
WebSocket Use Cases and Performance
Performance Checklist
WebRTC
Standards and Development of WebRTC
Audio and Video Engines
Real-Time Network Transports
Establishing a Peer-to-Peer Connection
Delivering Media and Application Data
DataChannel
WebRTC Use Cases and Performance
Performance Checklist
About the author
Ilya Grigorik is a web performance engineer at Google and co-chair of the W3C Web Performance Working Group. Follow him on his blog and Twitter for the latest web performance news, tips, and talks.

opensource.com: Perl and the birth of the dynamic web

The fascinating story of Perl’s role in the dynamic web spans newsgroups and mailing lists, computer science labs, and continents.

The web’s early history is generally remembered as a few seminal events: the day Tim Berners-Lee announced the WWW-project on Usenet, the document with which CERN released the project’s code into the public domain, and of course the first version of the NCSA Mosaic browser in January 1993. Although these individual moments were certainly crucial, the period is far richer and reveals that technological development is not a set of discrete events, but rather a range of interconnected stories.

One such story is how exactly the web became dynamic, which is to say, how we got web servers to do more than serve static HTML documents. This is a story that spans newsgroups and mailing lists, computer science labs, and continents—its focus is not so much one person as one programming language: Perl.

CGI scripts and infoware

In the mid- to late-1990s, Perl and the dynamic web were nearly synonymous. As a relatively easy-to-learn interpreted language with powerful text-processing features, Perl made it easy to write scripts to connect a website to a database, handle form data sent by users, and of course create those unmistakeable icons of the ’90s web, hit counters and guestbooks.

Such website features came in the form of CGI scripts, named for the Common Gateway Interface, first implemented by Rob McCool in the NCSA HTTPD server in November 1993. CGI was designed to allow for drop-in functionality, and within a few years one could easily find archives of pre-cooked scripts written in Perl. An infamous case was Matt’s Script Archive, a popular source that unintentionally carried security flaws and inspired members of the Perl community to create a professional alternative called Not Matt’s Scripts.

At the same time that amateur and professional programmers took up Perl to create dynamic websites and applications, Tim O’Reilly coined the term “infoware” to describe how the web and Perl were part of a sea change in the computing industry. With innovations by Yahoo! and Amazon in mind, O’Reilly wrote: “Traditional software embeds small amounts of information in a lot of software; infoware embeds small amounts of software in a lot of information.” Perl was the perfect small-but-powerful tool—the Swiss Army Chainsaw—that powered informational media from large web directories to early platforms for user-generated content.

Forks in the road

Although Perl’s relationship to CGI is well-documented, the links between the programming language and the rise of the dynamic web go deeper. In the brief period between the appearance of the first website (just before Christmas 1990) and McCool’s work on CGI in 1993, much of what defined the web in the 1990s and beyond—from forms to bitmaps and tables—was up in the air. Although Berners-Lee was often deferred to in these early years, different people saw different potential uses for the web, and pushed it in various directions. On the one hand, this resulted in famous disputes, such as questions of how closely HTML should follow SGML, or whether to implement an image tag. On the other hand, change was a slower process without any straightforward cause. The latter best describes how the dynamic web developed.

In one sense, the first gateways can be traced to 1991 and 1992, when Berners-Lee and a handful of other computer scientists and hypertext enthusiasts wrote servers that connected to specific resources, such as particular CERN applications, general applications such as Oracle databases, and wide area information servers (WAIS). (WAIS was the late 1980s precursor to the web developed by, among others, Brewster Kahle, a digital librarian and founder of the Internet Archive.) In this way, a gateway was a custom web server designed to do one thing: connect with another network, database, or application. Any dynamic feature meant running another daemon on a different port (read, for example, Berners-Lee’s description of how to add a search function to a website). Berners-Lee intended the web to be a universal interface to diverse information systems, and encouraged a proliferation of single-purpose servers. He also noted that Perl was “a powerful (if otherwise incomprehensible) language with which to hack together” one.

However, another sense of “gateway” suggested not a custom machine but a script, a low-threshold add-on that wouldn’t require a different server. The first of this kind was arguably Jim Davis’s Gateway to the U Mich Geography server, released to the WWW-talk mailing list in November 1992. Davis’s script, written in Perl, was a kind of proto-Web API, pulling in data from another server based on formatted user queries. Highlighting how these two notions of gateway differed, Berners-Lee responded to Davis requesting that he and the author of the Michigan server “come to some arrangement,” as it would make more sense “from the network point of view” to only have one server providing this information. Berners-Lee, as might be expected of the person who invented the web, preferred an orderly information resource. Such drop-in gateways and scripts that pulled data in from other servers meant a potential qualitative shift in what the web could be, extending but also subtly transforming Berners-Lee’s original vision.

Going Wayback to the Perl HTTPD

An important step between Davis’s geography gateway and the standardization of such low-threshold web scripting through CGI was the Perl HTTPD, a web server written entirely in Perl by grad student Marc Van Heyningen at Indiana University in Bloomington in early 1993. Among the design principles Van Heyningen laid out was easy extensibility—beyond the fact that using Perl meant no compiling was necessary, the server included “a feature to restart the server when new features are added to the code with zero downtime,” making it “trivial” to add new functionality.

The Perl HTTPD stood in contrast to the idea that servers should have a single, dedicated purpose. Instead, it hinted at an incremental, permanently beta approach to software products that would eventually be considered common sense in web work. Van Heyningen later wrote that his reason for building a server from scratch was that there was no easy way to create “virtual documents” (i.e., dynamically generated pages) with the CERN server, and joked that the easiest way to do this was to use “the language of the gods.” Among the scripts he added early on were a web interface to Sun’s man pages and a gateway to Finger (an early protocol for sharing information about a computer system or user).

Although Van Heyningen’s Indiana University server primarily connected to existing information resources, he and his fellow students also saw the potential for personal publishing. One of its more popular pages from 1993-1994 published documents, photographs, and news stories around a famous Canadian court case for which national media had been gagged.

The Perl HTTPD wasn’t necessarily built to last. Today, Van Heyningen remembers it as a “hacked up prototype.” Its original purpose was to demonstrate the web’s usefulness to senior staff who had chosen Gopher to be the university’s network interface. Van Heyningen’s argument-in-code included an appeal to his professors’ vanity in the form of a web-based, searchable index of their publications. In other words, a key innovation in server technology was created to win an argument, and in that sense the code did all that was asked of it.

Despite the servers’s temporary nature, the ideas that accompanied the Perl HTTPD would stick around. Van Heyningen began to receive requests for the code and shared it online, with a note that one would need to know some Perl (or someone who did) to port the server to other systems. Soon after, Austin-based programmer Tony Sanders created a portable version called Plexus. Sanders’s web server was a fully fledged product that cemented the kind of easy extensibility that the Perl HTTPD suggested, while adding a number of new features such as image decoding. Plexus in turn directly inspired Rob McCool to create an “htbin” for scripts on the NCSA HTTPD server, and soon after that the implementation of the Common Gateway Interface.

Alongside this historical legacy, the Perl HTTPD is also preserved in a more tangible form—thanks to the wonderful Internet Archive (the Wayback Machine), you can still download the tarball today.

Future histories

For all the tech world’s talk of disruption, technological change is in fact a contradictory process. Existing technologies are the basis for thinking about new ones. Archaic forms of programming inspire new ways of doing things today. Something as innovative as the web was very much an extension of older technologies—not least, Perl.

To go beyond simple timelines of seminal events, perhaps web historians could take a cue from Perl. Part of the challenge is material. Much of what must be done involves wrangling structure from the messy data that’s available, gluing together such diverse sources as mailing lists, archived websites, and piles of books and magazines. And part of the challenge is conceptual—to see that web history is much more than the release dates of new technologies, that it encompasses personal memory, human emotion, and social processes as much as it does protocols and Initial Public Offerings, and that it is not one history but many. Or as the Perl credo goes, “There’s More Than One Way To Do It.”

This is the first article in Opensource.com’s Open Community Archive, a new community-curated collection of stories about the history of open source technologies, projects, and people. Send your story ideas to open@opensource.com.

Prof. Dr. Müller BWL-Portal

Prof. Dr. Werner Müller trained as an industrial clerk (Industriekaufmann) and is a certified accountant (geprüfter Bilanzbuchhalter). He studied business administration and economics, earned his doctorate at an institute for business informatics, and has been Professor of “Accounting and Controlling with Particular Consideration of International Aspects” at Hochschule Mainz, Department of Business, since 1997.

The first American astronauts land on the moon. As they step out of the lunar module, a group of elderly men approaches them. Astonished, the astronauts ask who they are. One of them replies: “We are German professors. We have been living behind the moon for a long time!” (In German, “living behind the moon” means being hopelessly behind the times.)

In business administration (BWL), as in other disciplines, there are authors whom the profession calls the “popes” of their field. Some of them are as removed from the real world as the popes in the Vatican! In their books (which are then regarded as the Bible) they spread wisdom that has always stood in the textbooks (a German pun: Lehrbücher, “teaching books”, versus Leerbücher, “empty books”) but that has often outlived its usefulness in practice.

The author compares business administration to the engineering sciences. Just as the engineer deals with the function and optimization of, say, machines, business administration deals with the function of enterprises. The following treatise is therefore organized by business functions, which are divided into value-chain functions, supporting functions, and cross-cutting functions (see Fig. 2).

ethode.com: Intro to React JS for Beginners

So Will It Replace Angular?

Uh, no. Comparing React to Angular is like comparing Bootstrap to Ember. Again, React is just a UI library for building components.

Since React only represents the “V”** in MVC, you could potentially use React as the view layer in Angular.

**Well, actually, some could argue that complex UI views in React also act as a “C”, but this is more like the concept of a “Presenter” — a presenter is basically a root view that controls sub views — think of it as a “view controller”.

What Are The Pros?

Data travels down. In short, this has the same benefit as dependency injection in object-oriented development.

Virtual DOM means the UI only updates when it is actually changed. This leads to super fast rendering times.

Awesome garbage collection, memory usage and performance (https://auth0.com/blog/2016/01/07/more-benchmarks-virtual-dom-vs-angular-12-vs-mithril-js-vs-the-rest/#results-link)

Can render a lot of frames per second

You can render your views on both the server and the client. No more rendering with PhantomJS and serving that up to search engines like you would have to do with a client-only rendering system like Angular’s view layer. Yuck.

Makes modifying and reusing UI components super easy

Backed by Facebook (and powering their site), so not likely to disappear anytime soon

Even though it’s still not at 1.0 release, it is already in production use by Facebook, Netflix, Airbnb, Yahoo

Big community online

As it is not a framework, it can be used with other frameworks and libraries such as Angular, jQuery, Backbone, Vue, etc

What Are The Cons?

Again, React is only concerned about the view. There is nothing related to data modelling or routing, etc. If you’re looking for a golden hammer to solve all of your JavaScript woes, React is not the answer.

Because it is only concerned with the view layer, gluing the other parts together was a bit chaotic during early adoption. It doesn’t come bundled with an event framework, so a lot of people have reinvented the wheel there.

React is a large library for not being a framework

Documentation is not the best, so the learning curve can be steep. Not something you want to roll out for a large new project — best to start practicing with it on side projects.

The problem with trying to learn React is that there are so many other web technologies that are usually combined with it that, if you are not familiar with those technologies, you can easily get lost in all of the new and shiny.

No native support for IE8 or below

React Speak

Components. Think of these as custom HTML elements.


Follow Single Responsibility here: each component does one thing and one thing well — several smaller components are better than one big monolithic component

E.g. Don’t build a list and the ajax call to fetch that list data in the same component — first build the list component and then use that component inside the data fetcher. A good example of this is shown here.

Just like when refactoring functions — if a function does more than one thing, you break it down until it does only one thing — do this same thing with components
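As a rough sketch of that split (the component names, the '/api/names' endpoint, and the data shape here are all invented for illustration), the list component only renders what it is given, while a separate container owns the ajax call and passes the data down:

// Presentational component: renders the list it receives via props.
const NamesList = props => (
  <ul>
    {props.names.map(name => <li key={name}>{name}</li>)}
  </ul>
);

// Container component: fetches the data, then renders NamesList.
class NamesListContainer extends React.Component {
  constructor(props) {
    super(props);
    this.state = { names: [] };
  }
  componentDidMount() {
    fetch('/api/names')                        // placeholder endpoint
      .then(response => response.json())
      .then(names => this.setState({ names }));
  }
  render() {
    return <NamesList names={this.state.names} />;
  }
}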

Props: A component’s initial configuration

These are declarative attributes of a component. Think of these in the same way you would think of things like ‘src’, ‘width’ and ‘height’ on an <img> element. Another way to think of this is the component’s config options. This is injected into the component: “data travels down”. These are immutable — set once during initialization, and not changed. If you need to change these values, that is handled with State (see below).

Accessible via this.props in the component

E.g., if we have a “startAtFrame” variable, it would be in this.props.startAtFrame

Think of props as the “public” data that you can set
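A minimal sketch of reading props, reusing the hypothetical startAtFrame value from above:

class Animation extends React.Component {
  render() {
    // Props are read inside the component but never mutated here.
    return <div>Starting at frame {this.props.startAtFrame}</div>;
  }
}

// The value is injected from the outside: data travels down.
ReactDOM.render(<Animation startAtFrame={10} />, document.getElementById('root'));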

PropTypes: Allows your properties to be strictly typed.

const ListOfNumbers = props => (
  <ol className={props.className}>
    {props.numbers.map(number => (
      <li key={number}>{number}</li>
    ))}
  </ol>
);

ListOfNumbers.propTypes = {
  className: React.PropTypes.string.isRequired,
  numbers: React.PropTypes.arrayOf(React.PropTypes.number)
};

Events: used to communicate updates and changes to a component

React wraps native browser events to give them a standard interface

More info on the available event types is here: https://facebook.github.io/react/docs/events.html
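A small sketch of handling one of these wrapped events (the component and handler names are invented):

class LikeButton extends React.Component {
  handleClick(event) {
    // 'event' is React's synthetic wrapper with a standardized interface.
    console.log('clicked:', event.type);
  }
  render() {
    return <button onClick={this.handleClick}>Like</button>;
  }
}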

State: Mutable data of a component.

Accessible via this.state in the component

If you “like” a photo, that photo’s state now changes to “liked”

Managing state can get complicated. Rather than reinvent the wheel, it’s best to start off with a community supported library, such as Redux, which is based on Facebook’s Flux pattern for state management. Here is a cartoon intro to Redux that helps explain it.

Think of state as “private” data that only the component can set

You will typically have a root component that acts as your view presenter (or view controller). This container component manages state by pushing data down, in reaction to events that have flowed up.
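A sketch of the “liked photo” example above (the component name and state shape are invented for illustration):

class Photo extends React.Component {
  constructor(props) {
    super(props);
    // "Private" data: only this component sets its own state.
    this.state = { liked: false };
  }
  toggleLike() {
    // setState (never direct assignment) queues a re-render.
    this.setState({ liked: !this.state.liked });
  }
  render() {
    return (
      <button onClick={() => this.toggleLike()}>
        {this.state.liked ? 'Liked' : 'Like'}
      </button>
    );
  }
}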

JSX

JSX is the part inside the render() function that looks like HTML. It exists so you can create components in the same way you would think of them in HTML.

Some tips:

Do not use class. Instead, use className. JSX gets translated to JS, and class is a reserved keyword in JS (it is used for ES2015 classes).

If you use <br> instead of <br />, it won’t work. Make sure to use self-closing tags.

JSX is not required, but is widely used.
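A tiny sketch combining these tips (the component is invented):

// className instead of class; the void <br /> must be self-closed in JSX.
const Divider = () => (
  <div className="divider">
    Section break
    <br />
  </div>
);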

React Native

https://facebook.github.io/react-native/

Allows you to use React to build mobile apps

Redux

Redux helps to manage how components interact with one another

Components are given callback functions which are called whenever a UI Event happens

The callback functions create and dispatch Actions based on the Event

Reducers process the Actions, computing the new State

oldState + Action = newState

Components receive the new State as props and re-render if needed

Redux makes it much easier to test transitions between states

Helps to reduce framework lock-in, since all the components have to do is receive new props (data flows in) and emit events (events flow out)
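A minimal sketch of that loop (the 'LIKE' action and state shape are invented; createStore is Redux’s real entry point):

import { createStore } from 'redux';

// Reducer: oldState + Action = newState, without mutating oldState.
function likes(state = { count: 0 }, action) {
  switch (action.type) {
    case 'LIKE':
      return { count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(likes);

// A component's callback would dispatch this when the UI event fires;
// subscribed components then receive the new state as props.
store.dispatch({ type: 'LIKE' });
console.log(store.getState()); // { count: 1 }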

Immutable.js

This library can be used to manage state for components. It ensures that, once set, the data cannot be changed.
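A quick sketch of that guarantee, using Immutable.Map from the library’s API:

import Immutable from 'immutable';

const state1 = Immutable.Map({ liked: false });
const state2 = state1.set('liked', true); // returns a NEW map

console.log(state1.get('liked')); // false -- the original is untouched
console.log(state2.get('liked')); // true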

Some Tips

Keep components small — follow the Single Responsibility Principle here — just like functions and classes, you want your components to do one thing and do it really well

Easier to understand

Easier to share

Easier to reuse

const LatestPostsComponent = props => (
  <section>
    <h1>Latest posts</h1>
    {/* PostPreview stands in for the small child component rendered per post */}
    { props.posts.map(post => <PostPreview key={post.slug} post={post} />) }
  </section>
);

Keep components free of state unless absolutely necessary

Do not place any business logic in components. That belongs on the domain model on the server. Only rendering logic should be handled in the components.

Warning signs include:

Validation

Calculations

All The Links

Ye Olde Cheat Sheet: http://reactcheatsheet.com/

Best Practices:

https://blog.risingstack.com/react-js-best-practices-for-2016/

https://github.com/planningcenter/react-patterns

http://brewhouse.io/blog/2015/03/24/best-practices-for-component-state-in-reactjs.html

React for jQuery Users: http://reactfordesigners.com/labs/reactjs-introduction-for-people-who-know-just-enough-jquery-to-get-by/

Intro to React: https://blog.risingstack.com/the-react-way-getting-started-tutorial/

Quick start guide: http://www.jackcallister.com/2015/01/05/the-react-quick-start-guide.html

http://developer.telerik.com/featured/5-steps-for-learning-react-application-development/

Great general overview of React: http://blog.andrewray.me/reactjs-for-stupid-people/

Structuring React Components: http://tusharm.com/articles/structuring-react-components/

Dumb vs Smart Components: https://medium.com/@dan_abramov/smart-and-dumb-components-7ca2f9a7c7d0#.50yne2ohp

In-depth overview of the problems React solves (warning: long read): http://jlongster.com/Removing-User-Interface-Complexity,-or-Why-React-is-Awesome

A list of components being built in the React community: https://react.rocks/

http://wix.github.io/react-templates/

https://camjackson.net/post/9-things-every-reactjs-beginner-should-know

http://tylermcginnis.com/reactjs-tutorial-a-comprehensive-guide-to-building-apps-with-react/

Aaaaaand.. a little humor 😉 http://swizec.com/blog/reactflux-can-do-in-just-137-lines-what-jquery-can-do-in-10/swizec/6740

http://blog.zigomir.com/react.js/jquery/2015/01/11/jquery-versus-react-thinking.html

Redux Links

https://code-cartoons.com/a-cartoon-intro-to-redux-3afb775501a6#.njz9abheu

Redux best practices: https://medium.com/lexical-labs-engineering/redux-best-practices-64d59775802e#.7oowaja79



mozilla.org: A re-introduction to JavaScript (JS tutorial)

Why a re-introduction? Because JavaScript is notorious for being the world’s most misunderstood programming language. It is often derided as being a toy, but beneath its layer of deceptive simplicity, powerful language features await. JavaScript is now used by an incredible number of high-profile applications, showing that deeper knowledge of this technology is an important skill for any web or mobile developer.

It’s useful to start with an overview of the language’s history. JavaScript was created in 1995 by Brendan Eich while he was an engineer at Netscape. JavaScript was first released with Netscape 2 early in 1996. It was originally going to be called LiveScript, but it was renamed in an ill-fated marketing decision that attempted to capitalize on the popularity of Sun Microsystem’s Java language — despite the two having very little in common. This has been a source of confusion ever since.

Several months later, Microsoft released JScript with Internet Explorer 3. It was a mostly-compatible JavaScript work-alike. Several months after that, Netscape submitted JavaScript to Ecma International, a European standards organization, which resulted in the first edition of the ECMAScript standard in 1997. The standard received a significant update as ECMAScript edition 3 in 1999, and it has stayed pretty much stable ever since. The fourth edition was abandoned, due to political differences concerning language complexity. Many parts of the fourth edition formed the basis for ECMAScript edition 5, published in December of 2009, and for the 6th major edition of the standard, published in June of 2015.

Because it is more familiar, we will refer to ECMAScript as “JavaScript” from this point on.
Unlike most programming languages, the JavaScript language has no concept of input or output. It is designed to run as a scripting language in a host environment, and it is up to the host environment to provide mechanisms for communicating with the outside world. The most common host environment is the browser, but JavaScript interpreters can also be found in a huge list of other places, including Adobe Acrobat, Adobe Photoshop, SVG images, Yahoo’s Widget engine, server-side environments such as Node.js, NoSQL databases like the open source Apache CouchDB, embedded computers, complete desktop environments like GNOME (one of the most popular GUIs for GNU/Linux operating systems), and others.

Overview
JavaScript is a multi-paradigm, dynamic language with types and operators, standard built-in objects, and methods. Its syntax is based on the Java and C languages — many structures from those languages apply to JavaScript as well. JavaScript supports object-oriented programming with object prototypes, instead of classes (see more about prototypical inheritance and ES2015 Classes). JavaScript also supports functional programming — functions are objects, giving functions the capacity to hold executable code and be passed around like any other object.

Let’s start off by looking at the building blocks of any language: the types. JavaScript programs manipulate values, and those values all belong to a type. JavaScript’s types are:

Number
String
Boolean
Function
Object
Symbol (new in ES2015)
… oh, and undefined and null, which are … slightly odd. And Array, which is a special kind of object. And Date and RegExp, which are objects that you get for free. And to be technically accurate, functions are just a special type of object. So the type diagram looks more like this:

Number
String
Boolean
Symbol (new in ES2015)
Object
Function
Array
Date
RegExp
null
undefined
And there are some built-in Error types as well. Things are a lot easier if we stick with the first diagram, however, so we’ll discuss the types listed there for now.
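A quick sketch of how these types report themselves at runtime; note the long-standing quirk that null claims to be an object:

console.log(typeof 42);             // "number"
console.log(typeof "hello");        // "string"
console.log(typeof true);           // "boolean"
console.log(typeof Symbol());       // "symbol" (new in ES2015)
console.log(typeof {});             // "object"
console.log(typeof function () {}); // "function" -- a special kind of object
console.log(typeof [1, 2, 3]);      // "object" -- Array is a kind of object
console.log(typeof null);           // "object" -- the famous historical quirk
console.log(typeof undefined);      // "undefined"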