Lin Clark: A cartoon intro to WebAssembly

By Lin Clark
Posted on February 28, 2017 in A cartoon intro to WebAssembly, Featured Article, Performance, and WebAssembly
WebAssembly is fast. You’ve probably heard this. But what is it that makes WebAssembly fast?

In this series, I want to explain to you why WebAssembly is fast.

Wait, so what is WebAssembly?
WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser. So when people say that WebAssembly is fast, what they are comparing it to is JavaScript.

Now, I don’t want to imply that it’s an either/or situation — that you’re either using WebAssembly or using JavaScript. In fact, we expect that developers will use both WebAssembly and JavaScript in the same application.

But it is useful to compare the two, so you can understand the potential impact that WebAssembly will have.
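To make the relationship concrete, here is a minimal sketch of JavaScript driving WebAssembly. The eight bytes below are the smallest valid wasm binary (the magic number "\0asm" plus version 1); a real module would be compiled from another language and would export functions for JavaScript to call.

```javascript
// The smallest valid WebAssembly module: magic number "\0asm" plus version 1.
// Real modules are compiled from C, C++, Rust, etc. and usually fetched as .wasm files.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00])

console.log(WebAssembly.validate(bytes)) // true

// JavaScript stays in charge: it instantiates the module and calls
// whatever functions the module exports (none, in this toy case).
WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(Object.keys(instance.exports).length) // 0
})
```

In a real application you would pass an import object with JavaScript functions the module can call, and invoke the module's exports from your JavaScript code.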

A little performance history
JavaScript was created in 1995. It wasn’t designed to be fast, and for the first decade, it wasn’t fast.

Then the browsers started getting more competitive.

In 2008, a period that people call the performance wars began. Multiple browsers added just-in-time compilers, also called JITs. As JavaScript was running, the JIT could see patterns and make the code run faster based on those patterns.

The introduction of these JITs led to an inflection point in the performance of JavaScript. Execution of JS was 10x faster.

A graph showing JS execution performance increasing sharply in 2008

With this improved performance, JavaScript started being used for things no one ever expected it to be used for, like server-side programming with Node.js. The performance improvement made it feasible to use JavaScript on a whole new class of problems.

We may be at another one of those inflection points now, with WebAssembly.

A graph showing another performance spike in 2017 with a question mark next to it

So, let’s dive into the details to understand what makes WebAssembly fast.

A crash course in just-in-time (JIT) compilers
A crash course in assembly
WebAssembly, the present:
Creating and working with WebAssembly modules
What makes WebAssembly fast?
WebAssembly, the future:
Where is WebAssembly now and what’s next?
About Lin Clark
Lin is an engineer on the Mozilla Developer Relations team. She tinkers with JavaScript, WebAssembly, Rust, and Servo, and also draws code cartoons.
More articles by Lin Clark…



Clang is now used to build Chrome for Windows

As of Chrome 64, Chrome for Windows is compiled with Clang. We now use Clang to build Chrome for all platforms it runs on: macOS, iOS, Linux, Chrome OS, Android, and Windows. Windows is the platform with the second most Chrome users after Android according to statcounter, which made this switch particularly exciting.

Clang is the first-ever open-source C++ compiler that’s ABI-compatible with Microsoft Visual C++ (MSVC) – meaning you can build some parts of your program (for example, system libraries) with the MSVC compiler (“cl.exe”), other parts with Clang, and when linked together (either by MSVC’s linker, “link.exe”, or LLD, the LLVM project’s linker – see below) the parts will form a working program.

Note that Clang is not a replacement for Visual Studio, but an addition to it. We still use Microsoft’s headers and libraries to build Chrome, we still use some SDK binaries like midl.exe and mc.exe, and many Chrome/Win developers still use the Visual Studio IDE (for both development and for debugging).

This post discusses numbers, motivation, benefits and drawbacks of using Clang instead of MSVC, how to try out Clang for Windows yourself, project history, and next steps. For more information on the technical side you can look at the slides of our 2015 LLVM conference talk, and the slides linked from there.
Performance
This is what most people ask about first, so let’s talk about it first. We think the other sections are more interesting though.
Build time
Building Chrome locally with Clang is about 15% slower than with MSVC. (We’ve heard that Windows Defender can make Clang builds a lot slower on some machines, so if you’re seeing larger slowdowns, make sure to whitelist Clang in Windows Defender.) However, the way Clang emits debug info is more parallelizable and builds with a distributed build service (e.g. Goma) are hence faster.
Binary size
Chrome installer size gets smaller for 64-bit builds and slightly larger for 32-bit builds using Clang. The same difference shows in uncompressed code size for regular builds as well (see the tracking bug for Clang binary size for many numbers). However, compared to MSVC builds using link-time code generation (LTCG) and profile-guided optimization (PGO) Clang generates larger code in 64-bit for targets that use /O2 but smaller code for targets that use /Os. The installer size comparison suggests Clang’s output compresses better.

Some raw numbers for versions 64.0.3278.2 (MSVC PGO) and 64.0.3278.0 (Clang). mini_installer.exe is Chrome’s installer that users download, containing the LZMA-compressed code. chrome_child.dll is one of the two main dlls; it contains Blink and V8, and generally has many targets that are built with /O2. chrome.dll is the other main dll, containing the browser process code, mostly built with /Os.

                  mini_installer.exe   chrome.dll          chrome_child.dll   setup.exe
32-bit win-pgo    45.46 MB             36.47 MB            53.76 MB           1.38 MB
32-bit win-clang  45.65 MB             42.56 MB (+16.7%)   62.38 MB           1.45 MB
64-bit win-pgo    49.4 MB              53.3 MB             65.6 MB            1.6 MB
64-bit win-clang  46.27 MB             50.6 MB             72.71 MB           1.57 MB
We conducted extensive A/B testing of performance. Performance telemetry numbers are about the same for MSVC-built and Clang-built Chrome – some metrics get better, some get worse, but all of them are within 5% of each other. The official MSVC builds used LTCG and PGO, while the Clang builds currently use neither; this leaves potential for improvement that we look forward to exploring. The PGO configuration took a very long time to build, due to the need for collecting profiles and then building again, and as a result it was not enabled on our performance-measurement buildbots. Now that we use Clang, the perf bots again track the configuration that we ship.

Startup performance was worse in Clang-built Chrome until we started using a link-order file – a form of “PGO light”.
We A/B-tested stability as well and found no difference between the two build configurations.
Motivation
There were many reasons for this project, the overarching theme being the benefits of using the same compiler across all of Chrome’s platforms, as well as the ability to change the compiler and deploy those changes to all our developers and buildbots quickly. Here’s a non-exhaustive list of examples.
Chrome is heavily using technology that’s based on compiler instrumentation (ASan, CFI, ClusterFuzz—uses ASan). Clang supports this instrumentation already, but we can’t add it to MSVC. We previously used after-the-fact binary instrumentation to mitigate this a bit, but having the toolchain write the right bits in the first place is cleaner and faster.
Clang enables us to write compiler plugins that add Chromium-specific warnings and to write tooling for large-scale refactoring. Chromium’s code search can now learn to index Windows code.
Chromium is open-source, so it’s nice if it’s built with an open-source toolchain.
Chrome runs on 6+ platforms, and most developers are only familiar with 1-3 of them. If your patch doesn’t compile on a platform you’re unfamiliar with, due to a compiler error that you can’t reproduce on your local development machine, it’ll take you a while to fix. On the other hand, if all platforms use the same compiler, then if your change builds on your machine it’s probably going to build on all platforms.
Using the same compiler also means that compiler-specific micro-optimizations help on all platforms (assuming the same -O flags are used everywhere – not yet the case in Chrome – and only within the same ISA; x86 and ARM will stay different).
Using the same compiler enables cross-compiling – developers who feel most at home on a Linux box can now work on Windows-specific code, from their Linux box (without needing to run Wine).
We can continuously build Chrome trunk with Clang trunk to find compiler regressions quickly. This allows us to update Clang every week or two. Landing a major MSVC update in Chrome usually took a year or more, with several rounds of reporting internal compiler bugs and miscompiles. The issue here isn’t that MSVC is more buggy than Clang – it isn’t, all software is buggy – but that we can continuously improve Clang due to Clang being open-source.
C++ receives major new revisions every few years. When C++11 was released, we were still using six different compilers, and enabling C++11 was difficult. With fewer compilers, this gets much easier.
We can prioritize compiler features that are important to us. For example:
Deterministic builds were important to us before they were important for the MSVC team. For example, link.exe /incremental depends on an incrementing mtime timestamp in each object file.
We could enable warnings that fired in system headers long before MSVC added support for the system header concept.
cl.exe always prints the name of the input file, so the build system has to filter it out for quiet builds.

Of course, not all – or even most – of these reasons will apply to other projects.
Benefits and drawbacks of using Clang instead of Visual C++
Benefits of using Clang, if you want to try for your project:
Clang supports 64-bit inline assembly. For example, in Chrome we built libyuv (a video format conversion library) with Clang long before we built all of Chrome with it. libyuv had highly-tuned 64-bit inline assembly with performance not reachable with intrinsics, and we could just use that code on Windows.
If your project runs on multiple platforms, you can use one compiler everywhere. Building your project with several compilers is generally considered good for code health, but in Chrome we found that Clang’s diagnostics found most problems and we were mostly battling compiler bugs (and if another compiler has a great new diagnostic, we can add that to Clang).
Likewise, if your project is Windows-only, you can get a second compiler’s opinion on your code, and Clang’s warnings might find bugs.
You can use Address Sanitizer to find memory bugs.
If you don’t use LTCG and PGO, Clang might create faster code.
Clang’s diagnostics and fix-it hints.
There are also drawbacks:
Clang doesn’t support C++/CX or #import "foo.dll".
MSVC offers paid support, Clang only gives you the code and the ability to write patches yourself (although the community is very active and helpful!).
MSVC has better documentation.
Advanced debugging features such as Edit & Continue don’t work when using Clang.
How to use
If you want to give Clang for Windows a try, there are two approaches:
You could use clang-cl, a compiler driver that tries to be command-line flag compatible with cl.exe (just like Clang tries to be command-line flag compatible with gcc). The Clang user manual describes how you can tell popular Windows build systems how to call clang-cl instead of cl.exe. We used this approach in Chrome to keep the Clang/Win build working alongside the MSVC build for years, with minimal maintenance cost. You can keep using link.exe, all your current compile flags, the MSVC debugger or windbg, ETW, etc. clang-cl even writes warning messages in a format that’s compatible with cl.exe so that you can click on build error messages in Visual Studio to jump to the right file and line. Everything should just work.
Alternatively, if you have a cross-platform project and want to use gcc-style flags for your Windows build, you can pass a Windows triple (e.g. --target=x86_64-windows-msvc) to regular Clang, and it will produce MSVC-ABI-compatible output. Starting in Clang 7.0.0 (due in fall 2018), Clang will also default to CodeView debug info with this triple.
Since Clang’s output is ABI-compatible with MSVC, you can build parts of your project with clang and other parts with MSVC. You can also pass /fallback to clang-cl to make it call cl.exe on files it can’t yet compile (this should be rare; it never happens in the Chrome build).

clang-cl accepts Microsoft language extensions needed to parse system headers but tries to emit -Wmicrosoft-foo warnings when it does so (warnings are ignored for system headers). You can choose to fix your code, or pass -Wno-microsoft-foo to Clang.

link.exe can produce regular PDB files from the CodeView information that Clang writes.
Project History
We switched chrome/mac and chrome/linux to Clang a while ago. But on Windows, Clang was still missing support for parsing many Microsoft language extensions, and it didn’t have any Microsoft C++ ABI-compatible codegen at all. In 2013, we spun up a team to improve Clang’s Windows support, consisting half of Chrome engineers with a compiler background and half of other toolchain people. In mid-2014, Clang could self-host on Windows. In February 2015, we had the first fallback-free build of 64-bit Chrome, and in July 2015 the first fallback-free build of 32-bit Chrome (32-bit SEH was difficult). In October 2015, we shipped the first Clang-built Chrome to the Canary channel. Since then, we’ve worked on improving the size of Clang’s output, improved Clang’s debug information (some of it behind -instcombine-lower-dbg-declare=0 for now), and A/B-tested stability and telemetry performance metrics.

We use versions of Clang that are pinned to a recent upstream revision that we update every one to three weeks, without any local patches. All our work is done in upstream LLVM.

Mid-2015, Microsoft announced that they were building on top of our work of making Clang able to parse all the Microsoft SDK headers with clang/c2, which used the Clang frontend for parsing code, but cl.exe’s codegen to generate code. Development on clang/c2 was halted again in mid-2017; it is conceivable that this was related to our improvements to MSVC-ABI-compatible Clang codegen quality. We’re thankful to Microsoft for publishing documentation on the PDB file format, answering many of our questions, fixing Clang compatibility issues in their SDKs, and for giving us publicity on their blog! Again, Clang is not a replacement for MSVC, but a complement to it.

Opera for Windows is also compiled with Clang starting in version 51.

Firefox is also looking at using clang-cl for building Firefox for Windows.
Next Steps
Just as clang-cl is a cl.exe-compatible interface for Clang, lld-link is a link.exe-compatible interface for lld, the LLVM linker. Our next step is to use lld-link as an alternative to link.exe for linking Chrome for Windows. This has many of the same advantages as clang-cl (open-source, easy to update, …). Also, using clang-cl together with lld-link allows using LLVM-bitcode-based LTO (which in turn enables using CFI) and using PE/COFF extensions to speed up linking. A prerequisite for using lld-link was its ability to write PDB files.
We’re also considering using libc++ instead of the MSVC STL – this allows us to instrument the standard library, which is again useful for CFI and Address Sanitizer.
In Closing
Thanks to the whole LLVM community for helping to create the first new production C++ compiler for Windows in over a decade, and the first-ever open-source C++ compiler that’s ABI-compatible with MSVC!

Posted by Nico Weber at 12:46 PM
Labels: C++, Clang, Products

The Web Storage API

How to access the storage
setItem(key, value)
Storage size limits
Going over quota
Developer Tools
The Web Storage API defines two important storage mechanisms: Session Storage and Local Storage.

They are part of the set of storage options available on the Web Platform, which also includes Cookies, IndexedDB and the Cache API. (Application Cache is deprecated, and Web SQL is not implemented in Firefox, Edge and IE.)

Both Session Storage and Local Storage provide a private area for your data. Any data you store cannot be accessed by other websites.

Session Storage maintains the data stored in it for the duration of the page session. If multiple windows or tabs visit the same site, each gets its own Session Storage instance.

When a tab/window is closed, the Session Storage for that particular tab/window is cleared.

Session Storage is meant to let different processes happening on the same site be handled independently, something not possible with cookies, for example, which are shared across all sessions.

Local Storage, instead, persists the data until it’s explicitly removed, either by your code or by the user. It’s never cleaned up automatically, and it’s shared across all sessions that access the site.

Both Local Storage and Session Storage are protocol specific: data stored when the page is accessed using http is not available when the page is served with https, and vice versa.

Web Storage is only accessible in the browser. Unlike cookies, it is never sent to the server.

How to access the storage
Both Local and Session Storage are available on the window object, so you can access them using sessionStorage and localStorage.

Their set of properties and methods is exactly the same, because they return the same object, a Storage object.

The Storage object has a single property, length, which is the number of data items stored in it.

setItem(key, value)
setItem() adds an item to the storage. It accepts a string as the key, and a string as the value:

localStorage.setItem('username', 'flaviocopes')
localStorage.setItem('id', '123')
If you pass any value that’s not a string, it will be converted to a string:

localStorage.setItem('test', 123) // stored as the string '123'
localStorage.setItem('test', { test: 1 }) // stored as '[object Object]'
getItem() is the way to retrieve a string value from the storage, using the key string that was used to store it:

localStorage.getItem('username') // 'flaviocopes'
localStorage.getItem('id') // '123'
removeItem() removes the item identified by key from the storage, returning nothing (an undefined value).
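As a quick sketch of removeItem() (using a minimal in-memory stand-in for localStorage so the snippet also runs outside a browser; in a real page you would use window.localStorage directly):

```javascript
// Minimal in-memory stand-in mimicking the parts of the Storage API used below.
const store = new Map()
const localStorage = {
  setItem: (key, value) => { store.set(String(key), String(value)) },
  getItem: (key) => (store.has(String(key)) ? store.get(String(key)) : null),
  removeItem: (key) => { store.delete(String(key)) },
}

localStorage.setItem('username', 'flaviocopes')
console.log(localStorage.removeItem('username')) // undefined – it returns nothing
console.log(localStorage.getItem('username')) // null – the item is gone
```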

Every item you store has an index number. So the first time you use setItem(), that item can be referenced using key(0), the next with key(1), and so on.

If you reference a number that does not point to a storage item, it returns null.

Every time you remove an item with removeItem(), the index consolidates:

localStorage.setItem('a', 'a')
localStorage.setItem('b', 'b')
localStorage.key(0) // 'a'
localStorage.key(1) // 'b'
localStorage.removeItem('b')
localStorage.key(1) // null

localStorage.setItem('b', 'b')
localStorage.setItem('c', 'c')
localStorage.key(1) // 'b'
localStorage.removeItem('b')
localStorage.key(1) // 'c'
clear() removes everything from the storage object you are manipulating:

localStorage.setItem('a', 'a')
localStorage.setItem('b', 'b')
localStorage.length // 2
localStorage.clear()
localStorage.length // 0
Storage size limits
Through the Storage API you can store a lot more data than you would be able to with cookies.

The amount of storage available on the Web might differ by storage type (local or session), browser, and device type. Research points out these limits:

Desktop
Chrome, IE, Firefox: 10MB
Safari: 5MB for local storage, unlimited session storage
Mobile
Chrome, Firefox: 10MB
iOS Safari and WebView: 5MB for local storage; session storage is unlimited, except on iOS 6 and iOS 7 where it’s 5MB
Android Browser: 2MB local storage, unlimited session storage
Going over quota
You need to handle quota errors, especially if you store lots of data. You can do so with a try/catch:

try {
  localStorage.setItem('key', 'value')
} catch (domException) {
  if (
    ['QuotaExceededError', 'NS_ERROR_DOM_QUOTA_REACHED'].includes(
      domException.name
    )
  ) {
    // handle quota limit exceeded error
  }
}
Developer Tools
The DevTools of the major browsers all offer a nice interface to inspect and manipulate the data stored in Local and Session Storage.

Chrome DevTools local storage

Firefox DevTools local storage

Safari DevTools local storage


© 2018 Flavio Copes