Get rid of annoying cookie warnings from almost all “infected” websites

I don’t care about cookies

Due to EU regulations and increased awareness of online privacy problems, every website must obtain the user’s permission before installing cookies. If you surf anonymously, or if you delete cookies automatically every time you close the browser, websites will ask for that permission again and again, and clicking the same “I agree” buttons every day soon becomes very irritating.

This browser extension removes annoying cookie warnings from almost all websites and saves you thousands of unnecessary clicks!

By using it, you explicitly allow websites to do whatever they want with the cookies they set on your computer (which they mostly do anyway, whether you allow them or not). Please educate yourself about cookie-related privacy issues and about ways to protect yourself and your data. For example, you can block third-party cookies, install ad-blocking extensions and block tracking tools, delete browsing data regularly, enable Tracking Protection in your browser, etc.
Firefox (free)
Install the extension into your Firefox browser for free from the official Firefox website.

Chrome (free)
Install the extension into your Chrome browser for free from the Chrome Web Store.

Opera (free)
Install the extension into your Opera browser for free from the official Opera website.

Pale Moon (free)
Install the extension into your Pale Moon browser for free from the Pale Moon addon repository.

Adblock Plus and uBlock (free)
Safari, Microsoft Edge and other browser users can install the filter list into their favorite ad blocker extension. Add to Adblock Plus or download the list.

Be aware that the filter list is not as effective as the browser extension, but it will still hide most cookie warnings.
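To illustrate what such a filter list does: ad blockers hide cookie banners with cosmetic filter rules that match page elements by CSS selector. The rules below are hypothetical examples of the syntax, not entries from the actual list:

```
! Hypothetical cosmetic rules; the real list targets site-specific elements
! A global rule hiding any element with this class, on every site:
##.cookie-consent-banner
! A rule restricted to one domain, hiding an element by its id:
example.com###cookie-notice
```

Rules of this kind only hide the banner visually; unlike the extension, they cannot click “agree” buttons or clear consent pop-ups that block scrolling.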

How can you help?
This extension takes a lot of my effort and time. Your donation directly supports development and keeps the project alive.

Do you own a website?
If your website needs to have a cookie warning, you can easily spread the word about this project: simply add an “I don’t care” link next to your “OK, I agree” button.
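For example, the markup could look like this (a sketch only; the link target is a placeholder, since the project URL is not given here):

```html
<!-- Hypothetical markup: href is a placeholder for the project's page -->
<button id="cookie-agree">OK, I agree</button>
<a href="https://example.com/i-dont-care-about-cookies">I don’t care</a>
```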


My other extensions:
Autoplay No More
(Chrome, Opera, Firefox)

Auto Allow Fullscreen
(Chrome, Opera)

Youtube’s Annotations No More
(Chrome, Firefox)

I am Daniel from Croatia. I like foreign languages, travelling, nature, permaculture, gardening, science, walking, music, books, European films, documentaries, comedy, talking to strangers, playing antichess… You name it. Everything except cookie warnings, I guess. LOL

I am also a web developer (PHP and everything related). If you’re looking for one, you can check my LinkedIn profile to see if I match your criteria.

If you want to say hello or well done, have a bug to report or a suggestion, feel free to contact me.

Nikita Popov, PHP

Blog by nikic. Find me on GitHub, StackOverflow and Twitter. Learn more about me.
PHP 7 Virtual Machine 14. April 2017
An overview of the PHP virtual machine.
Internal value representation in PHP 7 – Part 2 19. June 2015
Covers the implementation of complex types like strings and objects in PHP 7, as well as a number of special types like indirect zvals.
Internal value representation in PHP 7 – Part 1 05. May 2015
Describes and compares the zval implementations used by PHP 5 and PHP 7, including a discussion of references. Other types are covered in the next part.
PHP’s new hashtable implementation 22. December 2014
In this article we’ll explore how the new hashtable implementation used by PHP 7 improved memory usage and performance.
Methods on primitive types in PHP 14. March 2014
This article discusses the merits of allowing method calls on primitive PHP types, like strings and arrays.
Fast request routing using regular expressions 18. February 2014
This article describes a number of techniques to improve performance of regular expression based dispatch processes, as used in request routing or lexing.
The case against the ifsetor function 10. January 2014
The ifsetor function can be used to suppress notices when accessing array indices. This post discusses some of the issues this function has and how to resolve them.
Cooperative multitasking using coroutines (in PHP!) 22. December 2012
This post explains how coroutines can be used for task scheduling and handling asynchronous operations in a synchronous-seeming way.
Are PHP developers functophobic? 10. August 2012
PHP developers don’t seem to like normal functions much. I think that this is related to the one-to-one class to file mapping that PHP has inherited from Java.
How to add new (syntactic) features to PHP 27. July 2012
In this post I’m describing how one can add new syntax to PHP. At the same time, this post can be seen as a general introduction to the workings of the Zend Engine.
What PHP 5.5 might look like 10. July 2012
PHP 5.5 is still in an early development stage, but there are already many proposals that are being worked on. This post gives some insights into the recent developments.
A plea for less (XML) configuration files 09. July 2012
Configuration files typically use XML or some other domain specific language. But why? Why not just use the usual programming language instead?
PHP solves problems. Oh, and you can program with it too! 29. June 2012
PHP is a great language to start programming. And once you started, PHP is also good for “real” programming. So what’s the problem?
The true power of regular expressions 15. June 2012
There is a major misunderstanding about what modern regular expression implementation can or cannot do. This article analyses the situation by walking through the different grammar classes.
Understanding PHP’s internal array implementation (PHP’s Source Code for PHP Developers – Part 4) 28. March 2012
The fourth part of the “PHP’s Source Code for PHP Developers” series, covering how arrays are internally implemented in PHP and how they are used in the source code.
PHP’s Source Code for PHP Developers – Part 3 – Variables 21. March 2012
Cross-link to the third part of the “PHP’s Source Code for PHP Developers” series, covering how PHP values are represented internally and used throughout the source code.
Understanding PHP’s internal function definitions (PHP’s Source Code for PHP Developers – Part 2) 16. March 2012
The second part of the “PHP’s Source Code for PHP Developers” series, covering how to find functions in the PHP source code and how they are structured.
Scalar type hinting is harder than you think 06. March 2012
A quick overview of the different scalar type hinting proposals and why PHP is having such a hard time deciding.
Pointer magic for efficient dynamic value representations 02. February 2012
JS implementations use some neat tricks to achieve good performance. One of those tricks is a good bit of pointer magic to make dynamically typed values more efficient.
htmlspecialchars() improvements in PHP 5.4 28. January 2012
There is some nice new stuff for htmlspecialchars() in PHP 5.4, which hasn’t yet got the attention it deserves.
Careful: XDebug can skew your performance numbers 19. January 2012
In some cases XDebug can significantly skew your benchmarking and profiling numbers. So make sure that you do measurements without it.
Disproving the Single Quotes Performance Myth 09. January 2012
One of the oldest myths around PHP is that single quotes are faster than double quotes. And. It. Is. Not. True.
Supercolliding a PHP array 28. December 2011
Inserting 65536 specially crafted values into a PHP array can take 30 seconds, whereas normally it would only take 0.01 seconds.
Don’t be STUPID: GRASP SOLID! 27. December 2011
Introducing STUPID: Singleton, Tight Coupling, Untestability, Premature Optimization, Indescriptive Naming, Duplication.
How big are PHP arrays (and values) really? (Hint: BIG!) 12. December 2011
PHP’s memory usage might seem atrocious to some – twenty times more than the optimum you would have in C. This post tries to explain those numbers and why they are necessary.
PCRE and newlines 10. December 2011
There is a huge number of newline related features in PCRE (regular expressions) that nearly nobody knows about. I want to shed light on some of those.
Manually installing PEAR on Windows 03. December 2011
If you ever tried to install PEAR on Windows you probably know what a woeful task it is. This is a short instruction on how to manually install PEAR (without using go-pear.phar).
PHP internals: When does foreach copy? 11. November 2011
PHP’s foreach language construct sometimes copies the array it iterates and sometimes does not. This post analyzes when and why this happens.
Improving lexing performance in PHP 23. October 2011
Some thoughts on improving lexing performance in PHP by compiling the individual token regexes into one big super-regex.

Can banks individually create money out of nothing? — The theories and the empirical evidence

International Review of Financial Analysis
Volume 36, December 2014, Pages 1-19
Richard A. Werner
Open access, under a Creative Commons license
This paper presents the first empirical evidence in the history of banking on the question of whether banks can create money out of nothing. The banking crisis has revived interest in this issue, but it had remained unsettled. Three hypotheses are recognised in the literature. According to the financial intermediation theory of banking, banks are merely intermediaries like other non-bank financial institutions, collecting deposits that are then lent out. According to the fractional reserve theory of banking, individual banks are mere financial intermediaries that cannot create money, but collectively they end up creating money through systemic interaction. A third theory maintains that each individual bank has the power to create money ‘out of nothing’ and does so when it extends credit (the credit creation theory of banking). The question which of the theories is correct has far-reaching implications for research and policy. Surprisingly, despite the longstanding controversy, until now no empirical study has tested the theories. This is the contribution of the present paper. An empirical test is conducted, whereby money is borrowed from a cooperating bank, while its internal records are being monitored, to establish whether in the process of making the loan available to the borrower, the bank transfers these funds from other accounts within or outside the bank, or whether they are newly created. This study establishes for the first time empirically that banks individually create money out of nothing. The money supply is created as ‘fairy dust’ produced by the banks individually, “out of thin air”.

Keywords: Bank credit; Credit creation; Financial intermediation; Fractional reserve banking; Money creation
JEL classification
“The choice of a measure of value, of a monetary system, of currency and credit legislation — all are in the hands of society, and natural conditions … are relatively unimportant. Here, then, the decision-makers in society have the opportunity to directly demonstrate and test their economic wisdom — or folly. History shows that the latter has often prevailed.”1

Wicksell (1922, p. 3)

1. Introduction
Since the American and European banking crisis of 2007–8, the role of banks in the economy has increasingly attracted interest within and outside the disciplines of banking, finance and economics. This interest is well justified: Thanks to the crisis, awareness has risen that the most widely used macroeconomic models and finance theories did not provide an adequate description of crucial features of our economies and financial systems, and, most notably, failed to include banks.2 These bank-less dominant theories are likely to have influenced bank regulators and may thus have contributed to sub-optimal bank regulation: Systemic issues emanating from the banking sector are impossible to detect in economic models that do not include banks, or in finance models that are based on individual, representative financial institutions without embedding these appropriately into macroeconomic models.3

Consequently, many researchers have since been directing their efforts at incorporating banks or banking sectors in economic models.4 This is a positive development, and the European Conferences on Banking and the Economy (ECOBATE) are contributing to this task, showcased in this second special issue, on ECOBATE 2013, held on 6 March 2013 in Winchester Guildhall and organised by the University of Southampton Centre for Banking, Finance and Sustainable Development. As the work in this area remains highly diverse, this article aims to contribute to a better understanding of crucial features of banks, which would facilitate their suitable incorporation in economic models. Researchers need to know which aspects of bank activity are essential — including important characteristics that may distinguish banks from non-bank financial institutions. In other words, researchers need to know whether banks are unique in crucial aspects, and if so, why.

In this paper the question of their potential ability to create money is examined, which is a candidate for a central distinguishing feature. A review of the literature identifies three different, mutually exclusive views on the matter, each holding sway for about a third of the twentieth century. The present conventional view is that banks are mere financial intermediaries that gather resources and re-allocate them, just like other non-bank financial institutions, and without any special powers. Any differences between banks and non-bank financial institutions are seen as being due to regulation and effectively so minimal that they are immaterial for modelling or for policy-makers. Thus it is thought to be permissible to model the economy without featuring banks directly. This view shall be called the financial intermediation theory of banking. It has been the dominant view since about the late 1960s.

Between approximately the 1930s and the late 1960s, the dominant view was that the banking system is ‘unique’, since banks, unlike other financial intermediaries, can collectively create money, based on the fractional reserve or ‘money multiplier’ model of banking. Despite their collective power, however, each individual bank is in this view considered to be a mere financial intermediary, gathering deposits and lending these out, without the ability to create money. This view shall be called the fractional reserve theory of banking.

There is a third theory about the functioning of the banking sector, with an ascendancy in the first two decades of the 20th century. Unlike the financial intermediation theory and in line with the fractional reserve theory it maintains that the banking system creates new money. However, it goes further than the latter and differs from it in a number of respects. It argues that each individual bank is not a financial intermediary that passes on deposits, or reserves from the central bank in its lending, but instead creates the entire loan amount out of nothing. This view shall be called the credit creation theory of banking.
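The bookkeeping claim of the credit creation theory can be sketched in a few lines of code (an illustrative sketch only, not from the paper; the data layout, function name and loan figure are invented for the example). The point is that booking a loan expands both sides of the bank's balance sheet at once, with no funds transferred out of any other account:

```python
# Illustrative double-entry sketch of the credit creation theory:
# extending a loan books a new loan asset and, simultaneously, a new
# deposit liability for the borrower. Nothing is moved from elsewhere.

def extend_loan(balance_sheet, borrower, amount):
    """Book a loan: both sides of the balance sheet grow by `amount`."""
    assets = balance_sheet["assets"]
    assets["loans"] = assets.get("loans", 0) + amount
    deposits = balance_sheet["liabilities"].setdefault("deposits", {})
    deposits[borrower] = deposits.get(borrower, 0) + amount
    return balance_sheet

bank = {
    "assets": {"reserves": 100},       # pre-existing reserves
    "liabilities": {"deposits": {}},
}
extend_loan(bank, "borrower", 200_000)

assert bank["assets"]["reserves"] == 100                        # untouched
assert bank["assets"]["loans"] == 200_000                       # new asset
assert bank["liabilities"]["deposits"]["borrower"] == 200_000   # new deposit
```

Under the financial intermediation theory, by contrast, the same loan would have to be funded by debiting some other account, leaving the balance sheet total unchanged; this difference is what makes the theories empirically distinguishable.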

The three theories are based on a different description of how money and banking work and they differ in their policy implications. Intriguingly, the controversy about which theory is correct has never been settled. As a result, confusion reigns: Today we find central banks – sometimes the very same central bank – supporting different theories; in the case of the Bank of England, central bank staff are on record supporting each one of the three mutually exclusive theories at the same time, as will be seen below.

It matters which of the three theories is right — not only for understanding and modelling the role of banks correctly within the economy, but also for the design of appropriate bank regulation that aims at sustainable economic growth without crises. The modern approach to bank regulation, as implemented at least since Basel I (1988), is predicated on the understanding that the financial intermediation theory is correct.5 Capital adequacy-based bank regulation, even of the counter-cyclical type, is less likely to deliver financial stability, if one of the other two banking hypotheses is correct.6 The capital-adequacy based approach to bank regulation adopted by the BCBS, as seen in Basel I and II, has so far not been successful in preventing major banking crises. If the financial intermediation theory is not an accurate description of reality, it would throw doubt on the suitability of Basel III and similar national approaches to bank regulation, such as in the UK.7

It is thus of importance for research and policy to determine which of the three theories is an accurate description of reality. Empirical evidence can be used to test the relative merits of the theories. Surprisingly, no such test has so far been performed. This is the contribution of the present paper.

The remainder of the paper is structured as follows. Section 2 provides an overview of relevant literature, differentiating authors by their adherence to one of the three banking theories. It will be seen that leading economists have gone on the record in support of each one of the theories. In Section 3, I then present an empirical test that is able to settle the question of whether banks are unique and whether they can individually create money ‘out of nothing’. It involves the actual processing of a ‘live’ bank loan, taken out by the researcher from a representative bank that cooperates in the monitoring of its internal records and operations, allowing access to its documentation and accounting systems. The results and some implications are discussed in Section 4.

2. The literature on whether banks can create money
Much has been written on the role of banks in the economy in the past century and beyond. Often authors have not been concerned with the question of whether banks can create money, as they often simply assume their preferred theory to be true, without discussing it directly, let alone in a comparative fashion. This literature review is restricted to authors that have contributed directly and explicitly to the question of whether banks can create credit and money. During time periods when in the authors’ countries banks issued promissory notes (bank notes) that circulated as paper money, writers would often, as a matter of course, mention, even if only in passing, that banks create or issue money. In England and Wales, the Bank Charter Act of 1844 forbade banks to “make any engagement for the payment of money payable to bearer on demand.” This ended bank note issuance for most banks in England and Wales, leaving the (until 1946 officially privately owned) Bank of England with a monopoly on bank note issuance. Meanwhile, the practice continued in the United States until the 20th century (and was in fact expanded with the similarly timed New York Free Banking Act of 1838), so that US authors would refer to bank note issuance as evidence of the money creation function of banks until much later.8 For sake of clarity, our main interest in this paper is the question whether banks that do not issue bank notes are able to create money and credit out of nothing. As a result, earlier authors, writing mainly about paper money issuance, are only mentioned in passing here, even if it could be said that their arguments might also apply to banks that do not issue bank notes. These include John Law (1705), James Steuart (1767), Adam Smith (1776), Henry Thornton (1802), Thomas Tooke (1838), and Adam Müller (1816), among others, who either directly or indirectly state that banks can individually create credit (in line with the credit creation theory).9

2.1. The credit creation theory of banking
Influential early writers that argue that non-issuing banks have the power to individually create money and credit out of nothing wrote mainly in English or German, namely Wicksell, 1898, Wicksell, 1907, Withers (1909), Schumpeter (1912), Moeller (1925) and Hahn (1920).10 The review of proponents of the credit creation theory must start with Henry Dunning Macleod, of Trinity College, Cambridge, and Barrister at Law at the Inner Temple.11 Macleod produced an influential opus on banking, entitled The Theory and Practice of Banking, in two volumes. It was published in numerous editions well into the 20th century (Macleod, 1855–6; the quotes here are from the 6th edition of 1905). Concerning credit creation by individual banks, Macleod unequivocally argued that individual banks create credit and money out of nothing, whenever they do what is called ‘lending’:
“In modern times private bankers discontinued issuing notes, and merely created Credits in their customers’ favour to be drawn against by Cheques. These Credits are in banking language termed Deposits. Now many persons seeing a material Bank Note, which is only a Right recorded on paper, are willing to admit that a Bank Note is cash. But, from the want of a little reflection, they feel a difficulty with regard to what they see as Deposits. They admit that a Bank Note is an “Issue”, and “Currency,” but they fail to see that a Bank Credit is exactly in the same sense equally an “Issue,” “Currency,” and “Circulation”.”

Macleod (1905, vol. 2, p. 310)
“… Sir Robert Peel was quite mistaken in supposing that bankers only make advances out of bona fide capital. This is so fully set forth in the chapter on the Theory of Banking, that we need only to remind our readers that all banking advances are made, in the first instance, by creating credit” (p. 370, emphasis in original).

In his Theory of Credit, Macleod (1891) put it this way:
“A bank is therefore not an office for “borrowing” and “lending” money, but it is a Manufactory of Credit.”

Macleod (1891: II/2, 594)

According to the credit creation theory then, banks create credit in the form of what bankers call ‘deposits’, and this credit is money. But how much credit can they create? Wicksell (1907) described a credit-based economy in the Economic Journal, arguing that
“The banks in their lending business are not only not limited by their own capital; they are not, at least not immediately, limited by any capital whatever; by concentrating in their hands almost all payments, they themselves create the money required….”

“In a pure system of credit, where all payments were made by transference in the bank-books, the banks would be able to grant at any moment any amount of loans at any, however diminutive, rate of interest.”12

Wicksell (1907, 214)

Withers (1909), from 1916 to 1921 the editor of the Economist, also saw few restraints on the amount of money banks could create out of nothing:
“… it is a common popular mistake, when one is told that the banks of the United Kingdom hold over 900 millions of deposits, to open one’s eyes in astonishment at the thought of this huge amount of cash that has been saved by the community as a whole, and stored by them in the hands of their bankers, and to regard it as a tremendous evidence of wealth. But this is not quite the true view of the case. Most of the money that is stored by the community in the banks consists of book-keeping credits lent to it by its bankers.”

Withers (1909, pp. 57 ff.)
“… The greater part of the banks’ deposits is thus seen to consist, not of cash paid in, but of credits borrowed. For every loan makes a deposit ….”

Withers (1909, p. 63)
“When notes were the currency of commerce a bank which made an advance or discounted a bill gave its customer its own notes as the proceeds of the operation, and created a liability for itself. Now, a bank makes an advance or discounts a bill, and makes a liability for itself in the corresponding credit in its books.”

Withers (1909, p. 66)
“… It comes to this that, whenever a bank makes an advance or buys a security, it gives some one the right to draw a cheque upon it, which cheque will be paid in either to it or to some other banks, and so the volume of banking deposits as a whole will be increased and the cash resources of the banks as a whole will be unaltered.”

Withers (1916, p. 45)
“When once this fact is recognised, that the banks are still, among other things, manufacturers of currency, just as much as they were in the days when they issued notes, we see how important a function the banks exercise in the economic world, because it is now generally admitted that the volume of currency created has a direct and important effect upon prices. This arises from what is called the “quantity theory” of money ….”

Withers (1916, p. 47)
“If, then, the quantity theory is, as I believe, broadly true, we see how great is the responsibility of the bankers as manufacturers of currency, seeing that by their action they affect, not only the convenience of their customers and the profits of their shareholders, but the general level of prices. If banks create currency faster than the rate at which goods are being produced, their action will cause a rise in prices which will have a perhaps disastrous effect ….”13

Withers (1916, pp. 54 ff.)
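The “quantity theory” Withers invokes is conventionally stated, in Fisher's formulation, as the equation of exchange (a standard textbook form, not quoted from Withers):

```latex
% Equation of exchange: M = quantity of money, V = its velocity of
% circulation, P = price level, T = volume of transactions.
MV = PT
```

If banks, as “manufacturers of currency”, expand M faster than T grows, then with V stable P must rise, which is exactly the danger Withers describes.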
“And so it becomes evident, as before stated, that the deposits of the banks which give the commercial community the right to draw cheques are chiefly created by the action of the banks themselves in lending, discounting, and investing” (pp. 71 ff.).

“… then, it thus appears that credit is the machinery by which a very important part of modern currency is created …” (p. 72).

Withers argues that the sovereign prerogative to manufacture the currency of the nation has effectively been privatised and granted to the commercial banks:
“By this interesting development the manufacture of currency, which for centuries has been in the hands of Government, has now passed, in regard to a very important part of it, into the hands of companies, working for the convenience of their customers and the profits of their shareholders.”

Withers (1916, p. 40)

While Withers was a financial journalist, his writings had a high circulation and likely contributed to the dissemination of the credit creation theory in the form proposed by Macleod (1855–6). This view also caught on in Germany with the publication of Schumpeter’s (1912, English 1934) influential book The Theory of Economic Development, in which he was unequivocal in his view that each individual bank has the power to create money out of nothing.
“Something like a certificate of future output or the award of purchasing power on the basis of promises of the entrepreneur actually exists. That is the service that the banker performs for the entrepreneur and to obtain which the entrepreneur approaches the banker. … (The banker) would not be an intermediary, but manufacturer of credit, i.e. he would create himself the purchasing power that he lends to the entrepreneur …. One could say, without committing a major sin, that the banker creates money.”14

Schumpeter (1912, p. 197, emphasis in original)
“[C]redit is essentially the creation of purchasing power for the purpose of transferring it to the entrepreneur, but not simply the transfer of existing purchasing power. … By credit, entrepreneurs are given access to the social stream of goods before they have acquired the normal claim to it. And this function constitutes the keystone of the modern credit structure.”

Schumpeter (1954, p. 107)
“The fictitious certification of products, which, as it were, the credit means of payment originally represented, has become truth.”15

Schumpeter (1912, p. 223)

This view was also well represented across the Atlantic, as the writings of Davenport (1913) or Robert H. Howe (1915) indicate. Hawtrey (1919), another leading British economist who, like Keynes, had a Treasury background and moved into academia, took a clear stance in favour of the credit creation theory:
“… for the manufacturers and others who have to pay money out, credits are still created by the exchange of obligations, the banker’s immediate obligation being given to his customer in exchange for the customer’s obligation to repay at a future date. We shall still describe this dual operation as the creation of credit. By its means the banker creates the means of payment out of nothing, whereas when he receives a bag of money from his customer, one means of payment, a bank credit, is merely substituted for another, an equal amount of cash” (p. 20).

Apart from Schumpeter, a number of other German-language authors also argued that banks create money and credit individually through the process of lending.16 Highly influential in both academic discourse and public debate was Dr. Albert L. Hahn (1920), scion of a Frankfurt banking dynasty (similarly to Thornton, who had been a banker) and from 1919 director of the major family-owned Effecten- und Wechsel-Bank, Frankfurt. Like Macleod, Hahn was a trained lawyer; he became an honorary professor at Goethe-University Frankfurt in 1928. Clearly not only aware of the works of Macleod, whom he cites, but also likely familiar with actual banking practice from his family business, Hahn argued that banks do indeed ‘create money out of nothing’:
“Every credit that is extended in the economy creates a deposit and thus the means to fund it. … The conclusion from the process described can be expressed in reverse by saying … that every deposit that exists somewhere and somehow in the economy has come about by a prior extension of credit.”17

Hahn (1920, p. 28)
“We thus maintain – contrary to the entire literature on banking and credit – that the primary business of banks is not the liability business, especially the deposit business, but that in general and in each and every case an asset transaction of a bank must have previously taken place, in order to allow the possibility of a liability business and to cause it: The liability business of banks is nothing but a reflex of prior credit extension. The opposite view is based on a kind of optical illusion ….”18

Hahn (1920, p. 29)

Overall, Hahn probably did more than anyone to popularise the credit creation theory in Germany, his book becoming a bestseller and spawning much controversy and new research among economists in Germany. It also greatly heightened awareness of the topic among journalists and the general public in the following decades. The broad impact of his book was likely one of the reasons why this theory remained entrenched in Germany well into the post-war period, when it had long been discarded in the UK or the US. Hahn’s book was, however, not just a popular account without academic credibility. Schumpeter cited it positively in the second (German) edition of his Theory of Economic Development (Schumpeter, 1926), praising it as a further development in line with, but beyond, his own book. The English translation of Schumpeter’s influential book (Schumpeter, 1912 [1934]) also favourably cites Hahn.

Support for the credit creation theory thus appears to have been fairly widespread in late 19th- and early 20th-century English- and German-language academic publications. By 1920, the credit creation theory had become so widespread that it was dubbed the ‘current view’, the ‘traditional theory’ or the ‘time-worn theory of bank credit’ by later critics.19

The early Keynes also seems to have been a supporter of this dominant view. In his Tract on Monetary Reform (Keynes, 1924), he asserts, apparently without feeling the need to establish this further, that banks create credit and money, at least in aggregate:
“The internal price level is mainly determined by the amount of credit created by the banks, chiefly the Big Five …” (p. 178).

“The amount of credit, so created, is in its turn roughly measured by the volume of the banks’ deposits — since variations in this total must correspond to the variations in the total of their investments, bill-holdings, and advances” (p. 178).

We know from Keynes’ contribution to the Macmillan Committee (1931) that Keynes meant by this that each individual bank was able to create credit:
“It is not unnatural to think of the deposits of a bank as being created by the public through the deposit of cash representing either savings or amounts which are not for the time being required to meet expenditure. But the bulk of the deposits arise out of the action of the banks themselves, for by granting loans, allowing money to be drawn on an overdraft or purchasing securities a bank creates a credit in its books, which is the equivalent of a deposit” (p. 34).

Concerning the banking system as a whole, this bank credit and deposit creation was thought to influence aggregate demand and the formation of prices, as Schumpeter (1912) had argued:
“The volume of bankers’ loans is elastic, and so therefore is the mass of purchasing power …. The banking system thus forms the vital link between the two aspects of the complex structure with which we have to deal. For it relates the problems of the price level with the problems of finance, since the price level is undoubtedly influenced by the mass of purchasing power which the banking system creates and controls, and by the structure of credit which it builds …. Thus, questions relating to the volume of purchasing power and questions relating to the distribution of purchasing power find a common focus in the banking system” (Macmillan Committee, 1931, pp. 12 ff.).

“… if, finally, the banks pursue an easier credit policy and lend more freely to the business community, forces are set in motion increasing profits and wages, and therefore the possibility of additional spending arises” (p. 13).

Concerning the question whether credit demand or credit supply is more important, the report argued that the root cause is the movement of the supply of credit:
“The expansion or contraction of the amount of credit made available by the banking system in other directions will, through a variety of channels, affect the ease of embarking on new investment propositions. This, in turn, will affect the volume and profitableness of business, and hence react in due course on the amount of accommodation required by industry from the banking system. … Thus what started as an alteration in the supply of credit ends up in the guise of an alteration in the demand for credit” (p. 99).20

While money is thus seen as endogenous to credit, when what is called a ‘bank loan’ is extended, the Committee argued that bank credit was exogenous as far as loan applicants are concerned:
“There can be no doubt as to the power of the banking system … to increase or decrease the volume of bank money” (p. 102).

“In normal conditions we see no reason to doubt the capacity of the banking system to influence the volume of active investment by increasing the volume and reducing the cost of bank credit. … Thus we consider that in any ordinary times the power of the banking system … to increase or diminish the active employment of money in enterprise and investment is indisputable” (p. 102).

The Macmillan Committee also argued that bank credit could be manipulated by the Bank of England, and thus was also considered exogenous in this sense.

The credit creation theory remained influential until the early post-war years. The links of credit creation to macroeconomic and financial variables were later formalised in the Quantity Theory of Credit (Werner, 1992, Werner, 1997, Werner, 2005, Werner, 2012), which argues that credit for (a) productive use in the form of investments for the production of goods and services is sustainable and non-inflationary, as well as less likely to become a non-performing loan; (b) unproductive use in the form of consumption results in consumer price inflation; and (c) unproductive use in the form of asset transactions results in asset inflation and, if large enough, banking crises. However, since the 1920s serious doubts had spread about the validity of the credit creation theory of banking. These doubts were initially voiced by economists who in principle supported the theory but downplayed its significance. This group of writers served as a stepping stone to the formulation of the modern fractional reserve theory, which in its most widespread (and later) version argues, however, that individual banks cannot create credit; only the banking system in aggregate can. It is to this theory of banking that we now turn.
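The disaggregation at the core of the Quantity Theory of Credit mentioned above can be sketched in simplified notation (the symbols used here are an assumed shorthand for exposition, not a quotation from Werner's papers): total credit C is split into credit used for GDP transactions, C_R, and credit used for asset transactions, C_A, each with its own velocity:

```latex
% Disaggregated quantity equations (simplified rendering; notation assumed):
C = C_R + C_A, \qquad
\Delta(P_R \, Y) = V_R \, \Delta C_R, \qquad
\Delta(P_A \, Q_A) = V_A \, \Delta C_A
```

On this reading, credit creation for transactions that contribute to GDP (the first equation) moves nominal GDP, while credit creation for asset transactions (the second) moves the value of asset transactions and hence asset prices.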

2.2. The fractional reserve theory
An early proponent of the fractional reserve theory was Alfred Marshall (1888). He testified to a government committee about the role of banks as follows:
“I should consider what part of its deposits a bank could lend and then I should consider what part of its loans would be redeposited with it and with other banks and, vice versa, what part of the loans made by other banks would be received by it as deposits. Thus I should get a geometrical progression; the effect being that if each bank could lend two thirds of its deposits, the total amount of loaning power got by the banks would amount to three times what it otherwise would be.”

Marshall (1888), as quoted by Yohe (1995, p. 530)
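Marshall's arithmetic can be verified with a short sketch (illustrative only; the function and parameter names are mine, not Marshall's): each bank lends a fraction r of every deposit it receives, and each loan returns to the banking system as a fresh deposit, yielding a geometric progression.

```python
# Sketch of Marshall's geometric progression of lending (illustrative).
# Each round, a bank lends fraction r of the deposit it received; the loan
# is redeposited elsewhere in the system and re-lent in turn.

def total_lending(initial_deposit, r, rounds=1000):
    """Sum the geometric series of successive rounds of re-lending."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        loan = r * deposit   # the bank lends fraction r of each deposit
        total += loan
        deposit = loan       # the loan returns as a fresh deposit
    return total

# With r = 2/3, system-wide lending converges to r/(1-r) = 2.0 per unit
# deposited, i.e. three times the 2/3 a single bank could lend on its own,
# matching Marshall's "three times what it otherwise would be".
```

The closed-form limit r/(1 − r) is simply the sum of the geometric series r + r² + r³ + ….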

With this, he contradicted Macleod’s arguments. However, Marshall’s view was still a minority view at the time. After the end of the First World War, a number of influential economists argued that the ‘Old Theory’ (Phillips, 1920, p. 72) of bank credit creation by individual banks was mistaken. Their view gradually became more influential. “The theory of deposit expansion reached its zenith with the publication of C.A. Phillips’ Bank Credit …” (Goodfriend, 1991, as quoted by Yohe, 1995, p. 532).

Phillips (1920) argued that it was important to distinguish between the theoretical possibility of an individual bank ‘manufacturing money’ by lending in excess of its cash and reserves on the one hand, and, on the other, the banking system as a whole being able to do this. He argued that the ‘Old Theory’ (the credit creation theory) was
“predicated upon the contention that a bank would be able to make loans to the extent of several times the amount of additional cash newly acquired and held at the time the loans were made, whereas a representative bank in a system is actually able ordinarily to lend an amount only roughly equal to such cash” (p. 72).21

According to Phillips (1920), individual banks cannot create credit or money, but collectively the banking system does so, as a new reserve is “split into small fragments, becomes dispersed among the banks of the system. Through the process of dispersion, it comes to constitute the basis of a manifold loan expansion” (p. 40). Each bank is considered mainly a financial intermediary: “… the banker … handles chiefly the funds of others” (pp. 4–5). Phillips argued that since banks target particular cash-to-deposit and reserve-to-deposit ratios (as embodied in the money multiplier), which they wish to maintain, each bank effectively works as an intermediary, lending out as much as it is able to gather in new cash. Through the process of dispersion and re-iteration, the financial intermediation function of individual banks, without the power to create credit, adds up to an expansion in the money supply in aggregate.22

Crick (1927) shared this conclusion (with some minor caveats). Thus he argued:
“The important point, which is responsible for much of the controversy and most of the misunderstanding, is that while one bank receiving an addition to its cash cannot forthwith undertake a full multiple addition to its own deposits, yet the cumulative effect of the additional cash is to produce a full multiple addition to the deposits of all the banks as a whole” (p. 196).

“Summing up, then, it is clear … that the banks, so long as they maintain steady ratios of cash to deposits, are merely passive agents of the Bank of England policy, as far as the volume of money in the form of credit is concerned. … The banks … have very little scope for policy in the matter of expansion or contraction of deposits, though they have in the matter of disposition of resources between loans, investments and other assets. But this is not to say that the banks cannot and do not effect multiple additions to or subtractions from deposits as a whole on the basis of an expansion of or contraction in bank cash” (p. 201).

The role of banks remained disputed during the 1920s and 1930s, as several writers criticised the credit creation theory. Views not only diverged, but were also in flux, as several experts apparently shifted their position gradually — overall an increasing number moving away from the credit creation theory and towards the fractional reserve theory.

Sir Josiah C. Stamp, a former director of the Bank of England, summarised the state of debate in his review of an article by Pigou (1927):
“The general public economic mind is in a fair state of muddlement at the present moment on the apparently simple question: “Can the banks create credit, and if so, how, and how much?” and between the teachings of Dr. Leaf and Mr. McKenna, Messrs. Keynes, Hawtrey, Cassel and Cannan and Gregory, people have not yet found their way.”

Stamp (1927, p. 424)

Contributions to this debate were also made by Dennis Robertson (1926), who was influenced by Keynes.23 Keynes (1930) explains the role of reserve holdings and the mechanics determining a bank’s behaviour based on its preference to hold cash and reserves, together with the amount of reserves provided by the central bank — the fairly predetermined mechanics postulated by the money multiplier in a fractional reserve model:
“Thus in countries where the percentage of reserves to deposits is by law or custom somewhat rigid, we are thrown back for the final determination of M, the Volume of Bank-money on the factors which determine the amount of these reserves” (p. 77).

Keynes (1930) also backed a key component of the fractional reserve theory, namely that banks gather deposits and place parts of them with the central bank, or, alternatively, may withdraw funds from their reserves at the central bank in order to lend these out to the non-banking sector of the economy:
“When a bank has a balance at the Bank of England in excess of its usual requirements, it can make an additional loan to the trading and manufacturing world, and this additional loan creates an additional deposit (to the credit of the borrower or to the credit of those to whom he may choose to transfer it) on the other side of the balance sheet of this or some other bank.”

Keynes (1930, vol. 2, p. 218)

Keynes here argues that new deposits, based on new loans, are dependent upon and connected to banks’ reserve balances held at the central bank. This view is sometimes also supported by present-day central bankers, such as in Paul Tucker’s or the ECB’s proposal to introduce negative interest rates on banks’ reserve holdings at the central bank, as an incentive for them to ‘move’ their money from the central bank and increase lending.24 Nevertheless, part of Keynes (1930), and much of his most influential work, his General Theory (1936), appears more in line with the financial intermediation theory, as will be discussed in the following section.

A representative example of the fractional reserve theory that at the same time was beginning to point in the direction of the financial intermediation theory is the work by Lutz (1939), who published in Economica, a forum for some of these debates at the time:
“The expansion of the economic system leads to an increase in the volume of deposits to a figure far in excess of the amount of the additional cash in use, simply because the same cash is deposited with the banking system over and over again. … The fact that banking statistics show an aggregate of deposits far above the amount of cash in the banking system, is therefore not of itself a sign that the banks must have created the whole of the difference. This conclusion is also, of course, somehow implicit in the “multiple expansion” theory of the creation of bank deposits (of the Phillips or Crick variety). That theory explains the creation of deposits by the fact that the same cash (in decreasing amounts) is successively paid into different banks. It does, however, look upon this cash movement rather in the nature of a technical affair between banks … which would disappear if the separate banks were merged into one. In that case the deposits would be regarded as coming into existence by outright creation. In our example we assume throughout only one bank, and still the deposits grow out of the return, again and again, of the same cash by the public. … The force which really creates expansion is the trade credit given by producers to one another. … The bank plays the role of a mere intermediary.”

“… This seems to lead not to a new, but to a very old theory of the function of banks: the function of a mere intermediary …” (pp. 166 ff.).

“The modern idea of banks being able to create deposits seemed to be a startling departure from the view held by most economists in the nineteenth century. If, however, we approach this modern idea along the lines followed above, we find that it resolves itself into much the same elements as those which many of the older writers regarded as the essence of banking operations: the provision of confidence which induces the economic subjects to extend credit to each other by using the bank as an intermediary” (p. 169).

Phillips’ influence has indeed been significant. Even in 1995 Goodfriend still argued that
“… Phillips showed that the summation of the loan- and deposit-creation series across all individual banks yields the multiple expansion formulas for the system as a whole. Phillips’ definitive exposition essentially established the theory once and for all in the form found in economics textbooks today.”

as reprinted in Yohe (1995, p. 535)

Statements like this became the mainstream view in the 1950s and 1960s.25 In time, the fractional reserve theory also came to dominate textbook descriptions of the functioning of the monetary and banking system. There is no post-war textbook more representative and influential than that of Samuelson (1948). The original first edition is clear in its description of the fractional reserve theory: under the heading “Can banks really create money?”, Samuelson first dismisses “false explanations still in wide circulation” (p. 324):
“According to these false explanations, the managers of an ordinary bank are able, by some use of their fountain pens, to lend several dollars for each dollar left on deposit with them. No wonder practical bankers see red when such behavior is attributed to them. They only wish they could do so. As every banker well knows, he cannot invest money that he does not have; and any money that he does invest in buying a security or making a loan will soon leave his bank” (p. 324).

Samuelson thus argues that a bank needs to gather the funds first, before it can extend bank loans. This is not consistent with the credit creation theory. However, Samuelson argues that, in aggregate, the banking system creates money. He illustrates his argument with the example of a ‘small bank’ that faces a 20% reserve requirement, considering the bank’s balance sheet (B/S). If this bank receives a new cash deposit of $1000, “What can the bank now do?”, Samuelson asks (p. 325).
“Can it expand its loans and investments by $4000 …?”

“The answer is definitely ‘no’. Why not? Total assets equal total liabilities. Cash reserves meet the legal requirement of being 20 per cent of total deposits. True enough. But how does the bank pay for the investments or earning assets that it buys? Like everyone else it writes out a check — to the man who sells the bond or signs the promissory note. … The borrower spends the money on labor, on materials, or perhaps on an automobile. The money will very soon, therefore, have to be paid out of the bank. … A bank cannot eat its cake and have it too. Table 4b gives, therefore a completely false picture of what an individual bank can do” (pp. 325 ff.).

Instead, Samuelson explains, since all the money lent out will leave the bank, an individual bank cannot create credit out of nothing:
“As far as this first bank is concerned, we are through. Its legal reserves are just enough to match its deposits. There is nothing more it can do until the public decides to bring in some more money on deposit” (p. 326).

On the other hand, Samuelson emphasises that
“The banking system as a whole can do what each small bank cannot do!” (p. 324),

namely create money. This, Samuelson explains via the iterative process of one bank’s loans (based on prior deposits) becoming another bank’s deposits, and so forth. He shows “this chain of deposit creation” in a table: total deposits in the banking system amount to $5000 (from the original $1000), since the 20% reserve requirement implies a ‘money multiplier’ of 5 (assuming no cash ‘leakage’).
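Samuelson's chain can be reproduced numerically (an illustrative script; the function and variable names are mine): each bank retains the required 20% reserve and lends out the rest, which is redeposited with the next bank in the chain.

```python
# Sketch of Samuelson's "chain of deposit creation" (illustrative).
# Each bank keeps the required reserve fraction and lends out the remainder,
# which is redeposited at the next bank, and so on down the chain.

def chain_of_deposits(initial_deposit, reserve_ratio, rounds=1000):
    """Return total deposits created across the whole banking system."""
    total_deposits = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_deposits += deposit
        deposit *= (1 - reserve_ratio)  # excess reserves lent out, then redeposited
    return total_deposits

# $1000 of new cash with a 20% reserve requirement yields $5000 of deposits
# in aggregate: the 'money multiplier' 1/0.2 = 5 (assuming no cash leakage).
```

The series converges to initial_deposit / reserve_ratio, which is exactly the textbook money-multiplier formula.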

What Samuelson calls the “multiple deposit expansion” is described in the same way and with remarkable similarity in the fifteenth edition of his book (Samuelson & Nordhaus, 1995) half a century later, except that the reserve requirement cited as an example has been lowered to 10%: “All banks can do what one can’t do alone” (p. 493). There are subtle though important differences. The overall space devoted to this topic is much smaller in 1995 than in 1948. The modern textbook says that the central bank-created reserves are used by the banks “as an input” and then “transformed” “into a much larger amount of bank money” (p. 490). There is far less of an attempt to deal with the credit creation theory. Instead, each bank is unambiguously represented as a pure financial intermediary, collecting deposits and lending out this money (minus the reserve requirement).26 The fractional reserve theory had become mainstream:
“Each small bank is limited in its ability to expand its loans and investments. It cannot lend or invest more than it has received from depositors” (p. 496).

Meanwhile, bank deposit money is “supplied” by “the financial system” in an abstract process over which each individual bank has little control (p. 494). The unambiguous fractional reserve theory thus appears to have come about in the years after the 1950s. It is depicted in Fig. 1.

Fig. 1. The fractional reserve theory as represented in many textbooks.

In this scheme, funds move between the public, the banks and the central bank without any barriers. Each bank is a financial intermediary, but in aggregate, due to fractional reserve banking, money is created (multiplied) in the banking system. Specifically, each bank can only grant a loan if it has previously received new reserves, of which a fraction will always be deposited with the central bank. It will then only be able to lend out as much as these excess reserves, as is made clear in major textbooks. In the words of Stiglitz (1997):
“It should be clear that when there are many banks, no individual bank can create multiple deposits. Individual banks may not even be aware of the role they play in the process of multiple-deposit creation. All they see is that their deposits have increased and therefore they are able to make more loans” (p. 737).

In another textbook on money and banking:
“In this example, a person went into bank 1 and deposited a $100,000 check drawn on another bank. That $100,000 became part of the reserves of bank 1. Because that deposit immediately created excess reserves, further loans were possible for bank 1. Bank 1 lent the excess reserves to earn interest. A bank will not lend more than its excess reserves because, by law, it must hold a certain amount of required reserves.”

Miller and VanHoose (1993, p. 331)

The deposit of a cheque from another bank does not, however, increase the “total amounts of deposits and money”:
“Remember, though, that the deposit was a check written on another bank. Therefore, the other bank suffered a decline in its transactions deposits and its reserves. While total assets and liabilities in bank 1 have increased by $100,000, they have decreased in the other bank by $100,000. Thus the total amount of money and credit in the economy is unaffected by the transfer of funds from one depository institution to another. Each depository institution can create loans (and deposits) only to the extent that it has excess reserves. The thing to remember is that new reserves are not created when checks written on one bank are deposited in another bank. The Federal Reserve System, however, can create new reserves” (p. 331).
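The invariance described here can be sketched in a toy model (the balance-sheet figures and names below are hypothetical, chosen only for illustration): a cheque drawn on one bank and deposited at another shifts reserves and deposits between the two banks while leaving the system-wide totals unchanged.

```python
# Toy two-bank system (hypothetical figures): reserves and deposits per bank.
banks = {
    "bank1": {"reserves": 200_000.0, "deposits": 1_000_000.0},
    "bank2": {"reserves": 300_000.0, "deposits": 1_500_000.0},
}

def deposit_cheque(amount, drawn_on, deposited_at):
    """A cheque drawn on one bank and deposited at another moves reserves
    and deposits between them; system-wide totals are unaffected."""
    banks[drawn_on]["reserves"] -= amount
    banks[drawn_on]["deposits"] -= amount
    banks[deposited_at]["reserves"] += amount
    banks[deposited_at]["deposits"] += amount

total_before = sum(b["deposits"] for b in banks.values())
deposit_cheque(100_000.0, drawn_on="bank2", deposited_at="bank1")
total_after = sum(b["deposits"] for b in banks.values())
# bank1 now holds $100,000 more in reserves (new excess reserves it can lend);
# aggregate deposits in the system are unchanged.
```

This is precisely the textbook point: only the central bank can create new reserves for the system; transfers between depository institutions merely redistribute them.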

The textbook by Heffernan (1996) says:
“To summarise, all modern banks act as intermediaries between borrowers and lenders, but they may do so in a variety of different ways, from the traditional function of taking deposits and lending a percentage of these deposits, to fee-based financial services” (p. 18).

“For the bank, which pools these surplus funds, there is an opportunity for profit through fractional reserve lending, that is, lending out money at an interest rate which is higher than what the bank pays on the deposit, after allowing for the riskiness of the loan and the cost of intermediation” (p. 20).

While the fractional reserve theory succeeded in attracting many followers, rendering it an important and influential theory until this day, it is not famous for its clarity:
“The problem of the manner in which the banking system increases the total volume of the circulating medium, while at the same time the lending power of the individual banks is severely limited, has proved to be one of the most baffling for writers on banking theory.”

Mints (1945, p. 39)

Several attempts were made to resolve this within the fractional reserve theory of banking, such as that by Saving (1977), who rendered the supply of bank deposits a function of the behaviour of the savers — arguing that the money supply is endogenous. This effectively pushed the intermediary function out from the individual bank level to the economy level, and helped usher in the formulation of the financial intermediation theory, to which we now turn.

2.3. The financial intermediation theory
While the fractional reserve theory of banking was influential from the 1930s to the 1960s, Keynes may have sown important seeds of doubt. Already in his ‘Treatise’, Keynes (1930) makes use of inverted commas in order to refer, suggestively, to ‘The “Creation” of Bank-Money’ (a section title). This rhetorical device, employed by the expert already hailed as the leading economist in the world, implied disapproval, as well as mockery of the concept that banks could create money out of nothing. The device was copied by many other writers after Keynes who also emphasised the role of banks as ‘financial intermediaries’. In Keynes’ words:
“A banker is in possession of resources which he can lend or invest equal to a large proportion (nearly 90%) of the deposits standing to the credit of his depositors. In so far as his deposits are Savings-deposits, he is acting merely as an intermediary for the transfer of loan-capital. In so far as they are Cash-deposits, he is acting both as a provider of money for his depositors, and also as a provider of resources for his borrowing-customers. Thus the modern banker performs two distinct sets of services. He supplies a substitute for State Money by acting as a clearing-house and transferring current payments backwards and forwards between his different customers by means of book-entries on the credit and debit sides. But he is also acting as a middleman in respect of a particular type of lending, receiving deposits from the public which he employs in purchasing securities, or in making loans to industry and trade mainly to meet demands for working capital. This duality of function is the clue to many difficulties in the modern Theory of Money and Credit and the source of some serious confusions of thought.”

Keynes (1930, vol. 2, p. 213)

The Keynes of the Treatise seems to say that the two functions of banks are either to act as a financial intermediary fulfilling the utility banking function of settling trades, or to act as a financial intermediary gathering deposits and lending the majority of these out. There seems to be no money creation involved at all, certainly not at the individual bank level. Keynes’ most influential opus, the General Theory (Keynes, 1936), quickly eclipsed his earlier Treatise on Money in terms of its influence on public debate. In the General Theory, Keynes did not place any emphasis on banks, which he now argued were financial intermediaries that needed to acquire deposits before they could lend:
“The notion that the creation of credit by the banking system allows investment to take place to which ‘no genuine saving’ corresponds can only be the result of isolating one of the consequences of the increased bank-credit to the exclusion of the others. … It is impossible that the intention of the entrepreneur who has borrowed in order to increase investment can become effective (except in substitution for investment by other entrepreneurs which would have occurred otherwise) at a faster rate than the public decide to increase their savings. … No one can be compelled to own the additional money corresponding to the new bank-credit, unless he deliberately prefers to hold more money rather than some other form of wealth. … Thus the old-fashioned view that saving always involves investment, though incomplete and misleading, is formally sounder than the newfangled view that there can be saving without investment or investment without ‘genuine’ saving.”

Keynes (1936, pp. 82 ff.)

Schumpeter (1954) commented on this shift in Keynes’ view:
The “deposit-creating bank loan and its role in the financing of investment without any previous saving up of the sums thus lent have practically disappeared in the analytic schema of the General Theory, where it is again the saving public that holds the scene. Orthodox Keynesianism has in fact reverted to the old view …. Whether this spells progress or retrogression, every economist must decide for himself” (p. 1115, italics in original).

The early post-war period saw unprecedented influence of Keynes’ General Theory, and a Keynesian school of thought, which managed to ignore Keynes’ earlier writings on bank credit creation, became dominant in academia. Given that a former major proponent of both the credit creation and the fractional reserve theories of banking had shifted his stance to the new financial intermediation theory, it is not surprising that others would follow.

A highly influential challenge to the fractional reserve theory of banking was staged by Gurley and Shaw (1955, 1960). They rejected the view that “banks stand apart in their ability to create loanable funds out of hand while other intermediaries in contrast are busy with the modest brokerage function of transmitting loanable funds that are somehow generated elsewhere” (1955, p. 521). Beyond the usual rhetorical devices to denigrate the alternative theories, Gurley and Shaw’s actual argument was that banks should not be singled out as being ‘special’, since the banks’ financial intermediation function is identical to that of other financial intermediaries:
“There are many similarities between the monetary system and nonmonetary intermediaries, and the similarities are more important than the differences. Both types of financial institutions create financial claims; and both may engage in multiple creation of their particular liabilities in relation to any one class of asset that they hold.”

Gurley and Shaw (1960, p. 202)

Banks and the banking system, we are told, like other financial intermediaries, need to first gather deposits, and then are able to lend these out. In this view, any remaining special role of banks is due to outmoded regulations, which treat banks differently. Therefore, they argue, the Federal Reserve should extend its banking supervision to the growing set of non-bank financial intermediaries, thus treating them equally to banks.

Initial challenges by proponents of the fractional reserve theory of banking (see Guttentag & Lindsay, 1968) were swept away during the 1960s, when James Tobin, a rising star in economics, took a clear stand to proclaim another ‘new view’ of banking, formulating the modern version of the financial intermediation theory of banking.
“Tobin (1963), standing atop the wreckage in 1963 to set forth the ‘new view’ of commercial banking, stands squarely with Gurley and Shaw against the traditional view.”

Guttentag and Lindsay (1968, p. 993)

Like Keynes, Alhadeff and others before him, Tobin only referred to bank credit creation in inverted commas, and used rhetorical devices to ridicule the idea that banks, individually or collectively, could create money and credit. Tobin (1963) argued:
“Neither individually nor collectively do commercial banks possess a widow’s cruse” (p. 412).

“The distinction between commercial banks and other financial intermediaries has been too sharply drawn. The differences are of degree, not of kind …. In particular, the differences which do exist have little intrinsically to do with the monetary nature of bank liabilities … The differences are more importantly related to the special reserve requirements and interest rate ceilings to which banks are subject. Any other financial industry subject to the same kind of regulations would behave in much the same way” (p. 418).

Banks only seem to be different from others because regulators erroneously chose to single them out for special regulation. In Tobin’s view, “commercial banks are different, because they are controlled, and not the other way around” (Guttentag & Lindsay, 1968, p. 993). Tobin and Brainard’s (1963) portfolio model made no distinction between banks and non-bank financial intermediaries; indeed, it ignored the role of banks altogether and contributed much towards the modern mainstream of economic models without banks. Branson (1968) further developed Tobin’s new approach, which was popular in the leading journals.

Guttentag and Lindsay (1968) wrote in the Journal of Political Economy that despite the challenge by Gurley and Shaw (1955) “The uniqueness issue, on the other hand, remains unsettled” (p. 992). Banks, they argued, are different in their role and impact from non-bank financial intermediaries, since “commercial banks have a greater capacity for varying the aggregate volume of credit than other financial intermediaries” (p. 991). “These points provide a rationale for special controls on commercial banks that goes beyond the need to prevent financial panic. It is the rationale that has been sought by defenders of the traditional view that commercial banks are ‘unique’ ever since the Gurley–Shaw challenge to this view” (p. 991).

Undaunted, Tobin (1969) restated his view in an article establishing his portfolio balance approach to financial markets, which treats financial markets as complex webs of assets and prices and leaves banks as just one of many types of intermediaries, without any special role.27 This was the first article in the first issue of a new journal, the Journal of Money, Credit and Banking. While its name may suggest openness towards the various theories of banking, in practice it has only published articles that did not support the credit creation theory and were mainly in line with the financial intermediation theory. The same is true of most other journals classified as ‘leading journals’ in economics (for instance, the 4-rated economics journals on the UK Association of Business Schools list). Henceforth, the portfolio balance approach, which treated all financial institutions as mere portfolio managers, was to hold sway. It helped the financial intermediation theory become the dominant creed among economists world-wide.

Modern proponents of the ubiquitous financial intermediation theory include, among others, Klein (1971), Monti (1972), Sealey and Lindley (1977), Diamond and Dybvig (1983), Diamond (1984, 1991, 2007), Eatwell, Milgate, and Newman (1989), Gorton and Pennacchi (1990), Bencivenga and Smith (1991), Bernanke and Gertler (1995), Rajan (1998), Myers and Rajan (1998), Allen and Gale (2000, 2004a, 2004b), Allen and Santomero (2001), Diamond and Rajan (2001), Kashyap et al. (2002), Hoshi and Kashyap (2004), Matthews and Thompson (2005), Casu and Girardone (2006), Dewatripont et al. (2010), Gertler and Kiyotaki (2011) and Stein (2014). There are many more: it is impossible to draw up a conclusive list, since the vast majority of articles published in leading economics and finance journals over the last thirty to forty years take the financial intermediation theory as their premise.28

Quoting only a few examples: Klein (1971), Monti (1972) (later to become EU commissioner and prime minister of Italy) and others model banks as financial intermediaries that gather deposits and lend these funds out:
“The bank has two primary sources of funds; the equity originally invested in the firm … and borrowed funds secured through the issuance of various types of deposits ….”

Klein (1971, p. 208)
“… It will be shown how the bank determines the prices it will pay for various types of deposits and how these prices, in conjunction with the deposit supply functions the bank confronts, determine the scale and composition of the bank’s deposit liabilities the bank will assume.”

Klein (1971, p. 210)

Diamond and Dybvig (1983) is cited as the seminal work on banking; its authors argue that “Illiquidity of assets provides the rationale both for the existence of banks and for their vulnerability to runs” (p. 403). In actual fact, however, their theory makes no distinction between banks and non-banks. They are therefore unable to explain why we have heard of bank runs, but not of ‘insurance runs’ or ‘finance company runs’, although the latter institutions also hold illiquid assets and extend loans. Diamond and Dybvig fail to identify what could render banks special, since they assume that they are not.

Other accounts of banks as financial intermediaries are presented by Mayer (1988) and Hellwig (1977, 1991, 2000), who likewise hold that banks are merely financial intermediaries:
“The analysis uses the original model of Diamond (1984) of financial contracting with intermediation as delegated monitoring. … Monitoring is assumed to be too expensive to be used by the many households required to finance a firm or an intermediary. However direct finance of firms based on nonpecuniary penalties may be dominated by intermediated finance with monitoring of firms by an intermediary who in turn obtains funds from households through contracts involving nonpecuniary penalties.”

Hellwig (2000, pp. 721 ff.)

Banking expert Heffernan (1996) states:
“The existence of the ‘traditional’ bank, which intermediates between borrower and lender, and which offers a payments service to its customers, fits in well with the Coase theory” (p. 21).

… or a leading textbook on international economics and finance, by Krugman and Obstfeld (2000):
“Banks use depositors’ funds to make loans and to purchase other assets …” (p. 659).

A widely used reference work on banking and money – the New Palgrave Money (Eatwell et al., 1989) – contains a number of contributions by leading monetary economists and banking experts. In it, Baltensperger (1989) clearly supports the financial intermediation theory:
“The role of credit as such must be clearly separated from the economic role of credit institutions, such as banks, playing the role of specialised intermediaries in the credit market by buying and simultaneously selling credit instruments (of a different type and quality). Since the ultimate borrowers and lenders can, in principle, do business with each other directly, without the help of such an intermediary, the function of these middlemen must be viewed as separate from that of credit as such. Two main functions of institutions of this kind can be distinguished. The first is the function of risk consolidation and transformation. … The second major function of these institutions is that of a broker in the credit markets. As such, they specialise in producing intertemporal exchange transactions and owe their existence to their ability to bring together creditors and debtors at lower costs than the latter can achieve in direct transactions themselves” (pp. 100 ff.).

Indeed, almost all authors in this reference book refer to banks as mere financial intermediaries, even Goodhart (1989):
“‘Intermediation’ generally refers to the interposition of a financial institution in the process of transferring funds between ultimate savers and ultimate borrowers. … Disintermediation is then said to occur when some intervention, usually by government agencies for the purpose of controlling, or regulating, the growth of financial intermediaries, lessens their advantages in the provision of financial services, and drives financial transfers and business into other channels. … An example of this is to be found when onerous reserve requirements on banks lead them to raise the margin (the spread) between deposit and lending rates, in order to maintain their profitability, so much that the more credit-worthy borrowers are induced to raise short-term funds directly from savers, for example, in the commercial paper market” (p. 144).

Myers and Rajan (1998) state:
“We model the intermediary as a bank that borrows from a number of individual investors for its own core business and to lend on to a project. … Even though the bank can extract more from the ultimate borrower, the bank has to finance these loans by borrowing from individual investors” (p. 755).

Allen and Santomero (2001), in their paper entitled ‘What do financial intermediaries do?’ state:
“In this paper we use these observations as a starting point for considering what it is that financial intermediaries do. At center, of course, financial systems perform the function of reallocating the resources of economic units with surplus funds (savers) to economic units with funding needs (borrowers)” (p. 272).

Kashyap et al. (2002) also believe that banks are pure financial intermediaries, not materially distinguishable from other non-bank financial institutions.29

Stein (2014) states, albeit with some hesitation:
“… at least in some cases, it seems that a bank’s size is determined by its deposit franchise, and that, taking these deposits as given, its problem then becomes one of how best to invest them” (p. 5).

“Overall, our synthesis of these stylised facts is that banks are in the business of taking deposits and investing these deposits in fixed-income assets that have certain well-defined risk and liquidity attributes but which can be either loans or securities” (p. 7).

The financial intermediation theory includes the ‘credit view’ in macroeconomics, which proposes a ‘bank lending channel’ of monetary transmission (Bernanke and Blinder, 1989; Bernanke and Gertler, 1995), as well as the neo-classical and new classical macroeconomic models (if they consider banks at all). To these and most contemporary authors in economics and finance, banks are financial intermediaries like other firms in the financial sector: they ‘transform’ liabilities with particular features into assets with other features (e.g. with respect to maturity, liquidity and quantity/size), or ‘monitor’ others (Sheard, 1989, another adherent of the financial intermediation theory of banking), but they do not create credit, individually or collectively. This is true of many ‘Post-Keynesians’, who argue that the money supply is determined by the demand for money. It is also true of popular descriptions, such as that by Koo and Fujita (1997), who argue that banks are merely financial intermediaries:
“But those financial institutions that are counterparties of the Bank of Japan obtain their funding primarily from the money that depositors have deposited with them. This money they cannot pass on for consumption and capital investment, because they have to lend it at interest to earn money. In other words, for this money to support the economy, these financial institutions must lend it to firms and individuals. Those borrowers must then use it to buy assets such as machinery or housing or services” (p. 31).

A recent paper by Allen, Carletti, and Gale (2014) introduces money, albeit only cash created by the central bank, while banks remain mere financial intermediaries that cannot create money or credit.

As a result, the leading forecasting models used by policy makers also do not include banks (Bank of England, 2014a). Even the original meaning of credit creation seems to have been forgotten in the modern literature: Bernanke (1993) uses the expression ‘credit creation’ throughout his article, but explains that he defines the concept as “the process by which saving is channeled to alternative uses”, i.e. the financial intermediation of savers’ deposits into loans:
“This fortuitous conjunction of events and ideas has contributed to an enhanced appreciation of the role of credit in the macroeconomy by most economists and policymakers. The purpose of this paper is to review and interpret some recent developments in our understanding of the macroeconomic role of credit or, more accurately, of the credit creation process. By credit creation process I mean the process by which, in exchange for paper claims, the savings of specific individuals or firms are made available for the use of other individuals or firms (for example to make capital investments or simply to consume). Note that I am drawing a strong distinction between credit creation, which is the process by which saving is channeled to alternative uses, and the act of saving itself …. In my broad conception of the credit creation process I include most of the value-added of the financial industry, including the information-gathering, screening, and monitoring activities required to make sound loans or investments, as well as much of the risk-sharing, maturity transformation, and liquidity provision services that attract savers and thus support the basic lending and investment functions. I also want to include in my definition of the credit creation process activities undertaken by potential borrowers to transmit information about themselves to lenders: for example, for firms, these activities include provision of data to the public, internal or external auditing, capital structure decisions, and some aspects of corporate governance. The efficiency of the credit creation process is reflected both in its ability to minimise the direct costs of extending credit (for example, the aggregate wage bill of the financial industry) and in the degree to which it is able to channel an economy’s savings into the most productive potential uses. 
The presumption of traditional macroeconomic analysis is that this credit creation process, through which funds are transferred from ultimate savers to borrowers, works reasonably smoothly and therefore can usually be ignored.”

Bernanke (1993, pp. 50 ff.)

As Bernanke points out, works that assume such a financial intermediation role for banks will therefore often ignore banks entirely: on this view, banks cannot be particularly important or relevant in the economy. Many authors went as far as to leave out any kind of money (there are no monetary aggregates in Kiyotaki and Moore, 1997, or Woodford, 2003). The most widely used textbook in advanced Master’s-level economics at leading British universities in 2010 was Romer (2006), which tells us on its third page:
“Incorporating money in models of [economic] growth would only obscure the analysis” (p. 3).

2.4. Conclusion of the literature review
Since the 1960s it has become the conventional view not to consider banks as unique and able to create money, but instead as mere financial intermediaries like other financial firms, in line with the financial intermediation theory of banking. Banks have thus been dropped from economics models, and finance models have not suggested that bank action has significant macroeconomic effects. The questions of where money comes from and how the money supply is created and allocated have remained unaddressed.

The literature review has identified a gradual progression of views from the credit creation theory to the fractional reserve theory to the present-day ubiquitous financial intermediation theory. The development has not been entirely smooth; several influential writers have either changed their views (on occasion several times) or have shifted between the theories. Keynes, as an influential economist, did little to enhance clarity in this debate, as it is possible to cite him in support of each of the three hypotheses, through which he seems to have moved sequentially.30 Some institutions, such as the Bank of England, manage to issue statements in support of all three theories.

We conclude from the literature survey that all three theories of banking have been well represented in the course of the 20th century, by leading figures of the day. However, the conclusion by Sir Josiah Stamp (1927), a director at the Bank of England, still seems to hold today, namely that there is “a fair state of muddlement … on the apparently simple question: ‘Can the banks create credit, and if so, how, and how much?’” Despite a century or so of theorising on the matter, there has been little progress in establishing the facts unambiguously. Thus the conclusion of 1968 still applies, namely that the issue cannot be considered settled. It is possible that the pendulum is about to swing away from the financial intermediation theory towards one of the other two. But how can we avoid history merely repeating itself, with the profession spending another century locked in a debate without firm conclusion?

How can the issue be settled and the ‘muddlement’ cleared up? One reason for this “state of muddlement” is likely the methodology dominant in 20th-century economics, the hypothetico-deductive method: unproven ‘axioms’ are posited and unrealistic assumptions added in order to build a theoretical model. This can be done for all three theories, and we would be none the wiser about which of them actually applies. The only way to establish the facts is to leave the world of deductive theoretical models and let empirical reality be the arbiter of truth, in line with the inductive methodology. In other words, it is to empirical evidence that we must turn to settle the issue.

3. The empirical test
The simplest possible test design is to examine a bank’s internal accounting during the process of granting a bank loan. When all the necessary bank credit procedures have been completed (from ‘know-your-customer’ and anti-money laundering checks to credit analysis and risk rating to the negotiation of the details of the loan contract) and signatures are exchanged on the bank loan, the borrower’s current account is credited with the amount of the loan. The key question is whether, as a prerequisite of this accounting operation of booking the loan principal into the borrower’s account, the bank actually withdraws the amount from another account, resulting in a reduction of equal value in the balance of another entity: either a drawdown of reserves (as the fractional reserve theory maintains) or of other funds (as the financial intermediation theory maintains). Should it be found that the bank is able to credit the borrower’s account with the loan principal without having withdrawn money from any other internal or external account, and without transferring the money from any other source internally or externally, this would constitute prima facie evidence that the bank was able to create the loan principal out of nothing. In that case, the credit creation theory would be supported, and the theory that the individual bank acts as an intermediary that needs to obtain savings or funds first, before being able to extend credit (whether in conformity with the fractional reserve theory or the financial intermediation theory), would be rejected.

3.1. Expected results
With a bank loan of €200,000, drawn by the researcher from a bank, the following changes in the lending bank’s accounting entries are expected a priori according to each theory:
Bank credit accounting according to the credit creation theory.

According to this theory, banks behave very differently from financial intermediaries such as stock brokers, since they do not separate customer funds from their own funds. Money ‘deposited’ with a bank becomes the legal property of the bank, and the ‘depositor’ is actually a lender to the bank, ranking among its general creditors. When extending bank credit, banks create an imaginary deposit by recording the loan amount in the borrower’s account, although no new deposit has taken place (credit creation out of nothing). The balance sheet lengthens. Cash, central bank reserves or balances with other banks are not immediately needed, as reserve and capital requirements only have to be met at particular measurement intervals. The account changes are shown in Table 1.

Table 1. Account changes due to bank loan (credit creation theory).
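The bookkeeping Table 1 describes can be sketched as follows. This is a minimal illustration, not the paper's method: the Bank class, the account names and the starting balances are all assumptions made for the example.

```python
# Hypothetical sketch of the credit creation theory's predicted bookkeeping.
# Account names and starting balances are illustrative assumptions.

class Bank:
    def __init__(self, reserves, equity):
        self.assets = {"reserves": reserves, "loans": 0}
        self.liabilities = {"customer_deposits": 0, "equity": equity}

    def grant_loan(self, amount):
        # Under the credit creation theory, the loan is booked as a new
        # asset and the borrower's account is simply credited as a new
        # liability: no reserves or prior deposits are drawn down.
        self.assets["loans"] += amount
        self.liabilities["customer_deposits"] += amount

    def total_assets(self):
        return sum(self.assets.values())

    def total_liabilities(self):
        return sum(self.liabilities.values())


bank = Bank(reserves=350_000, equity=350_000)
before = bank.total_assets()
bank.grant_loan(200_000)

# The balance sheet lengthens by the loan amount; reserves are untouched.
assert bank.total_assets() - before == 200_000
assert bank.assets["reserves"] == 350_000
assert bank.total_assets() == bank.total_liabilities()
```

The defining feature is that both sides of the balance sheet grow simultaneously at the moment of lending.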

Bank credit accounting according to the fractional reserve theory.

The distinguishing feature of this theory is that each individual bank cannot create credit out of nothing. The bank is a financial intermediary indistinguishable from other financial intermediaries, such as stock brokers and securities firms. However, banks are said to differ in one respect, namely their regulatory treatment: regulators have imposed onerous rules concerning reserves to be held with the central bank on banks alone, not on other financial intermediaries. A bank can only lend money when it has previously received the same amount in excess reserves, either from another bank, whose own reserve balance will then have declined, or from the central bank (Table 2).
“A bank will not lend more than its excess reserves because, by law, it must hold a certain amount of required reserves. … Each depository institution can create loans (and deposits) only to the extent that it has excess reserves.”

Miller and VanHoose (1993, p. 331)

Table 2. Account changes due to bank loan (fractional reserve theory).

Following the exposition in Miller and VanHoose (1993, pp. 330–331), the balance sheet evolution in case of a €200,000 loan is as shown in Table 2.

In other words, for the bank to be able to make a loan, it first has to check its excess reserves, as this is, according to this theory, a strictly binding requirement and limitation, as well as its distinguishing feature. The bank cannot at any moment lend more money than its excess reserves, and it will have to draw down the reserve balance to lend. (Thus, as noted, another distinguishing feature is that the balance sheet expansion is driven by the prior increase in a deposit that boosted excess reserves, not by the granting of a loan).

When the empirical test of bank lending is implemented, it needs to be verified whether the bank first confirmed the precise amount of its available excess reserves before entering into the loan contract or paying out the loan funds to the customer, so as not to exceed that figure. If the bank is found not to have checked or not to have drawn down its reserve balances, this constitutes a rejection of the fractional reserve theory.
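The excess-reserves constraint can be sketched as follows. The 10% reserve ratio, the account names and the balances are illustrative assumptions, loosely following the Miller and VanHoose exposition quoted above; this is not the paper's data.

```python
# Hypothetical sketch of the fractional reserve theory's predicted
# bookkeeping: the loan is limited by, and paid out of, excess reserves.
# The 10% reserve ratio and all balances are illustrative assumptions.

RESERVE_RATIO = 0.10

assets = {"reserves": 350_000.0, "loans": 0.0}
liabilities = {"customer_deposits": 1_000_000.0}

def grant_loan(amount):
    """Lend only up to excess reserves; fund the loan from reserves."""
    required = RESERVE_RATIO * liabilities["customer_deposits"]
    excess = assets["reserves"] - required
    if amount > excess:
        raise ValueError("loan exceeds excess reserves")
    assets["reserves"] -= amount  # reserves are drawn down ...
    assets["loans"] += amount     # ... and re-booked as a loan

grant_loan(200_000)

# Reserves fall by the loan amount. With required reserves of 100,000,
# excess is now only 50,000, so a second loan of this size would be refused.
assert assets["reserves"] == 150_000
assert assets["loans"] == 200_000
```

Note the contrast with the credit creation theory: here lending merely swaps one asset (reserves) for another (loans), and the reserve check must precede the loan.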

Bank credit accounting according to the financial intermediation theory.

According to this theory, banks are, as far as payments and accounts are concerned, no different from non-bank financial institutions. The reserve requirement is not an issue, a claim supported by the empirical observation that reserve requirements were abolished many years ago in a number of major economies, such as the UK and Sweden. However, UK financial intermediaries are required by the FSA/FCA-administered Client Money rules to hold client deposits in custody (a form of warehousing, the deposits legally being bailments). Client funds of financial intermediaries such as securities firms and stock brokers therefore remain the property of the depositors and are kept separate from the institutions’ own funds, so that customer deposits do not appear on the balance sheet as liabilities. If banks are merely financial intermediaries, indistinguishable from other intermediaries, then all bank funds are central bank money that can be held in reserve at the central bank or deposited with other banks. The balance sheet implications are shown below in Table 3.

Table 3. Account changes due to bank loan (fin. intermediation theory).

According to this theory, the bank balance sheet does not lengthen as a result of the bank loan: the funds for the loan are drawn from the bank’s reserve account at the central bank.
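The prediction of Table 3 can be sketched in the same style. Again, the balances and account names are illustrative assumptions: client money sits off the balance sheet in custody, and the loan is funded out of the bank's own reserve account, so total footings do not change.

```python
# Hypothetical sketch of the financial intermediation theory's
# prediction: client deposits are held in custody off balance sheet,
# and a loan is funded from the bank's own central bank reserves.
# All balances are illustrative assumptions.

on_balance_sheet = {"reserves": 500_000.0, "loans": 0.0}
client_money_in_custody = 1_000_000.0  # off balance sheet, owned by clients

def grant_loan(amount):
    # Funds for the loan are transferred out of the reserve account;
    # one asset is exchanged for another of equal value.
    on_balance_sheet["reserves"] -= amount
    on_balance_sheet["loans"] += amount

total_before = sum(on_balance_sheet.values())
grant_loan(200_000)

# The balance sheet does not lengthen: total footings are unchanged,
# and the custody funds are never touched.
assert sum(on_balance_sheet.values()) == total_before
assert on_balance_sheet["reserves"] == 300_000
assert client_money_in_custody == 1_000_000
```

The three sketches make the testable differences explicit: only under the credit creation theory does the balance sheet lengthen at the moment of lending without any prior drawdown of reserves or other funds.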

3.2. A live empirical test
The design of the empirical test takes the form of a researcher entering into a live loan contract with the bank, and the bank extending a loan, while its relevant internal accounting is disclosed. Several banks in the UK and Germany were approached and asked to cooperate in an academic study of bank loan operations.

The large banks declined to cooperate. The reason given was usually twofold: first, the required disclosure of internal accounting data and procedures would breach their confidentiality or IT security rules; secondly, their transaction volumes were so large that the planned test would be very difficult to conduct with a sensibly sized loan that would not clash with the banks’ internal risk management rules, so that any single transaction would not be easy to isolate within the banks’ IT systems. Despite various discussions with a number of banks, in the end they declined for the above reasons, adding that the costs of operating their systems and controlling for any other potential transactions would be prohibitive.

It was therefore decided to approach smaller banks, of which there are many in Germany (there are approximately 1,700 local, mostly small banks in Germany). Each holds a full banking license and engages in universal banking, offering all major banking services, including stock trading and currencies, to the general public. A local bank with a balance sheet of approximately €3 billion was approached, as was a bank with a balance sheet of about €700 million. Both declined on the same grounds as the larger banks, but one suggested that a much smaller bank might be able to oblige, pointing out that fewer transactions booked during the day would allow a clearer identification of the empirical test transaction. At the same time, the empirical information value would not diminish with bank size, since all banks in the EU conform to identical European bank regulations.

Thus an introduction was made to Raiffeisenbank Wildenberg e.G., located in a small town in the district of Lower Bavaria. The bank is a cooperative bank within the Raiffeisen and cooperative banking association, with eight full-time staff. The two joint directors, Mr. Michael Betzenbichler and Mr. Marco Rebl, agreed to the empirical examination and to sharing all available internal accounting records and documentation of their procedures. A written agreement was signed confirming that the planned transactions would be part of a scientific empirical test, that the researcher would not abscond with the funds once they were transferred to his personal account, and that he would repay the loan immediately upon completion of the test (Supplementary material 1 in online Appendix 3). One limitation on the accounting records, common to most banks, is that the IT is outsourced to a specialised bank IT company, which maintains its own rules concerning data protection and confidentiality.

The IT firm serves the majority of the 1,100 cooperative banks in Germany, using the same software and internal systems and accounting rules, ensuring that the test is representative of more than 15% of bank deposits in Germany.

It was agreed that the researcher would personally borrow €200,000 from the bank. The transaction was undertaken on 7 August 2013 in the offices of the bank in Wildenberg in Bavaria. Apart from the two (sole) directors, the head (and sole staff member) of the credit department, Mr. Ludwig Keil, was also present. The directors were bystanders who did not engage in any action. Mr. Keil was the only bank representative involved in processing the loan, from the start of the customer documentation to the signing of the loan contract and finally the payment of the loan into the borrower’s account. The entire transaction, including the manual entries made by Mr. Keil, was filmed. The screens of the bank’s internal IT terminal were also photographed. Moreover, a team from the BBC (reporter Alistair Fee and a cameraman) was present and filmed the central part of the empirical bank credit experiment.

The bank disclosed their standard internal credit procedure. The sequence of the key steps is shown in Appendix 1. As can be seen, the last two steps are the signing of the ‘credit documents’ by the borrower (the researcher) and, finally, the payment of the loan at the value date.31

The loan conditions were agreed: the researcher would borrow EUR 200,000 from the bank at the prime rate (the interest rate for the best customers). In the event, the bank waived the actual interest charges, in support of the scientific research project.

When the bank loan contract was signed by both the bank and the borrower on 7 August 2013, the loan amount was immediately credited to the borrower’s account with the bank, as agreed in the loan contract. Supplementary material 2 in online Appendix 2 shows the original borrower’s accounts and balances with Raiffeisenbank Wildenberg. The key information from the account summary table is repeated here, in English, in Table 4.

Table 4. The empirical researcher’s new bank account.

Bank: Raiffeisenbank Wildenberg e.G.

Customer: Richard Werner.

Date: 7 August 2013.

Account no.    Type of product              Currency    A/C balance
44636          Current account w/o fees     EUR          200,000.00
               Total in EUR:                             200,000.00
20044636       Other private financing      EUR         −200,000.00
               Total in EUR:                            −200,000.00
The bank also issued the following accounts overview, which is a standard T-account of the transaction from the borrower’s perspective (Table 5).

Table 5. The empirical researcher’s new bank account balances.

Accounts’ overview

EUR                Credit        Liabilities    Balance         No. contracts
Current account    200,000.00                    200,000.00     1
Loan                             200,000.00     −200,000.00     1
Bank sum total     200,000.00    200,000.00           0.00      2
The borrower confirmed that his new current account with the bank now showed a balance of EUR 200,000 that was available for spending. (An extension of the experiment, to be reported on separately, used the balance the following day for a transaction outside the bank, transferring the funds to another account of the researcher held with another bank; this transfer was duly completed, demonstrating that the funds could be used for actual transactions.)

We now move to the empirical test of the three banking theories. The critical question is: where did Raiffeisenbank Wildenberg e.G. obtain the funds that the borrower (researcher) was credited with (and duly used and transferred out of the bank the following day)? When the researcher inquired about the bank’s reserve holdings, in line with the fractional reserve theory of banking, director Marco Rebl explained that the bank maintained its reserves with the central organisation of cooperative banks, which in turn maintained an account with the central bank. These reserves amounted to a fixed €350,000 that did not change during the observation period. Concerning the bank credit procedure, the researcher attempted to verify the source of the funds that were about to be lent.

Firstly, the researcher confirmed that the only three bank officers involved in this test and bank transaction were present throughout, of whom two (the directors) only watched, neither accessing any computer terminal nor transmitting any instructions. The accounts manager (head of the credit department, Mr. Keil) was the only operator involved in implementing, booking and paying out the loan. His actions were filmed. It was noted and confirmed that none of the bank staff present engaged in any additional activity, such as ascertaining the available deposits or funds within the bank, or giving instructions to transfer funds from various sources to the borrower’s account (for instance by contacting the bank’s internal treasury desk or external interbank funding sources). Nor were instructions given to increase, draw down or borrow reserves from the central bank, the central cooperative bank or any other bank or entity. In other words, upon the signing of the loan contract by both parties, the funds were credited to the borrower’s account immediately, without any other activity of checking or giving instructions to transfer funds. There were no delays, deliberations or other bookings. The moment the loan was implemented, the borrower saw his current account balance increase by the loan amount. The overall credit transaction, from start to finish until the funds were available in the borrower’s account, took about 35 minutes (and was clearly slowed down by the filming and frequent questions from the researcher).

Secondly, the researcher asked the three bank staff present whether, either before or after signing the loan contract and before crediting the borrower’s account with the full loan amount, they had made inquiries of any other parties internally or externally, checked the bank’s available deposit balances, or made any bookings or transfers of any kind in connection with this loan contract. They all confirmed that they had engaged in no such activity: upon signing of the loan contract, the borrower’s account was credited immediately, without any such steps.

Thirdly, the researcher obtained the internal daily account statements from the bank. These are produced only once a day, after close of business. Since the bank is small, it was hoped that it would be possible to identify the impact of the €200,000 loan transaction, and distinguish the accounting pattern corresponding to one of the three banking hypotheses.

4. Results
Supplementary material 3 in online Appendix 3 shows the scan of the bank’s balance sheet at the end of 6 August 2013, the day before the transaction of the empirical test was undertaken. Supplementary material 4 in online Appendix 3 shows the daily balance of the following day. In Table 6 the key asset positions are summarised and account names translated, for the end of the day prior to the loan experiment, and for the end of the day on which the researcher had borrowed the money. Table 7 shows the key liability positions for the same periods:

Table 6. Raiffeisenbank Wildenberg e.G.: daily accounts’ assets.

6 August 2013, 22.46 h vs. 7 August 2013, 22.56 h (figures in euro).

Assets | Balance 6 Aug. 2013 | Balance 7 Aug. 2013 | Difference
1. Cash | 181,703.03 | 340,032.89 | 158,329.86
2. Bills of exchange | | |
3. Claims on financial inst. | 5,298,713.76 | 5,079,709.21 | −219,004.55
4. Claims on customers | 23,712,558.13 | 23,947,729.92 | 235,171.79
– maturing daily | 932,695.44 | 967,767.32 | 35,071.88
– maturity under 4 years | 1,689,619.97 | 1,889,619.97 | 200,000.00
– maturity 4 years or longer | 21,090,242.72 | 21,090,342.72 | 100.00
5. Bonds, bills, debt instruments | 19,178,065.00 | 19,178,065.00 |
6. Stocks and shares | | |
7. Stake holdings | 397,768.68 | 397,768.68 |
8. Stakes in related firms | | |
9. Trust assets | 5,262.69 | 5,262.69 |
10. Compensation claims on the public sector | | |
11. Immaterial assets | 102.00 | 102.00 |
12. Fixed assets | 221,549.46 | 221,549.46 |
13. Called but not deployed capital | | |
14. Other assets | 707,569.26 | 711,288.64 | 3,719.38
15. Balancing item | 2,844.32 | 2,844.32 |
16. Sum of assets | 49,706,136.33 | 49,884,352.81 | 178,216.48

Table 7. Raiffeisenbank Wildenberg e.G.: daily accounts’ liabilities.

6 August 2013, 22.46 h vs. 7 August 2013, 22.56 h (figures in euro).

Liabilities | Balance 6 Aug. 2013 | Balance 7 Aug. 2013 | Difference
1. Claims by financial inst. | 5,621,456.60 | 5,621,879.66 | 423.06
2. Claims by customers | 39,589,177.09 | 39,759,156.42 | 169,979.33
2A. Savings accounts | 10,234,806.01 | 10,237,118.24 | 2,312.23
2B. Other liabilities | 29,354,371.08 | 29,522,038.18 | 167,667.10
– BA daily | 13,773,925.93 | 13,963,899.89 | 189,973.96
– BB maturity less than 4 years | 13,296,042.92 | 13,273,736.06 | −22,306.86
– BC maturity 4 years or longer | 2,284,402.23 | 2,284,402.23 |
4. Trust liabilities | 5,262.70 | 5,262.70 |
5. Other liabilities | 12,378.81 | 12,599.44 | 220.63
6. Balancing item | 16,996.04 | 16,996.04 |
7. Reserves | 1,138,497.64 | 1,138,497.64 |
11. Fund for bank risk | 250,000.00 | 250,000.00 |
12. Own capital | 3,057,248.57 | 3,057,248.57 |
13. Sum liabilities | 49,706,136.33 | 49,884,352.81 | 178,216.48
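The internal consistency of the daily accounts can be checked mechanically. A minimal Python sketch, with the key figures transcribed from Tables 6 and 7 (the dictionary keys are shorthand labels, not the bank’s account names):

```python
# Key asset positions transcribed from Table 6 (euro).
assets_before = {"cash": 181_703.03,
                 "claims_fin_inst": 5_298_713.76,
                 "claims_customers": 23_712_558.13}
assets_after = {"cash": 340_032.89,
                "claims_fin_inst": 5_079_709.21,
                "claims_customers": 23_947_729.92}

# Recompute the "Difference" column of Table 6.
diff = {k: round(assets_after[k] - assets_before[k], 2) for k in assets_before}
assert diff == {"cash": 158_329.86,
                "claims_fin_inst": -219_004.55,
                "claims_customers": 235_171.79}

# Balance-sheet identity: total assets grew overnight by the same
# amount as total liabilities (bottom rows of Tables 6 and 7).
growth = round(49_884_352.81 - 49_706_136.33, 2)
assert growth == 178_216.48
```

The asserts pass, confirming that the reported difference columns and the equality of asset and liability growth follow from the stated balances.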
Starting by analysing the liability side information (Table 7), we find that customer deposits are recorded on the bank’s balance sheet. This contradicts the financial intermediation theory, which assumes that banks are not special and are virtually indistinguishable from non-bank financial institutions, which have to keep customer deposits off their balance sheets. In actual fact, the bank treats customer deposits quite differently from non-bank financial institutions: it records them as a loan to the bank, under the rubric ‘claims by customers’, and the customers in turn receive, as the record of their loans to the bank (called ‘deposits’), what is known as their ‘account statement’. This can only be reconciled with the credit creation or fractional reserve theories of banking.

We observe that an amount not far below the loan balance (about €190,000) was deposited with the bank. This is itself not far from the increase in total liabilities (and assets). Since the fractional reserve hypothesis requires such an increase in deposits as a precondition for being able to grant the bank loan, i.e. it must precede the bank loan, it is difficult to reconcile this observation with the fractional reserve theory. Moreover, the researcher confirmed that in his own bank account the loan balance of €200,000 was shown on the same day. This means that the increase in liabilities was driven by the €200,000 credited to the borrower’s current account, which appears in daily liabilities (item 2B BA in Table 7). Thus the total increase in liabilities could not have been due to a coincidental increase in customer deposits on the day of the loan. The liability side account information seems only fully in line with the credit creation theory.
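The booking pattern this implies, in which loan and deposit arise simultaneously on the two sides of the bank’s balance sheet, can be sketched in a few lines of Python (a simplified illustration with hypothetical account names, not the bank’s actual IT system):

```python
# Simplified double-entry sketch: under the credit creation theory,
# signing the loan contract adds a claim on the customer to the asset
# side and, at the same time, credits the borrower's current account
# on the liability side. No reserves or other accounts are touched.
LOAN = 200_000.00

balance_sheet = {
    "assets": {"claims_on_customers": 0.0, "reserves": 0.0},
    "liabilities": {"claims_by_customers": 0.0},  # the borrower's 'deposit'
}

def extend_loan(bs, amount):
    bs["assets"]["claims_on_customers"] += amount       # booked loan contract
    bs["liabilities"]["claims_by_customers"] += amount  # credited 'deposit'
    # Note: bs["assets"]["reserves"] is deliberately left untouched.

extend_loan(balance_sheet, LOAN)

# The balance sheet stays balanced and has expanded by the loan amount.
assert (sum(balance_sheet["assets"].values())
        == sum(balance_sheet["liabilities"].values()) == LOAN)
```

The point of the sketch is that the two entries are created together out of nothing; no third entry transferring funds from elsewhere is needed for the books to balance.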

Turning to an analysis of the asset side, we note that the category where we find our loan is item 4, claims on customers — fortunately the only one that day with a maturity below 4 years and hence clearly identifiable on the bank balance sheet. Apparently, customers also took out short-term loans (most likely overdrafts) amounting to €35,071.88, producing a total new loan balance of €235,071.88. In order to keep the analysis as simple as possible, let us proceed from here assuming that our test loan amounted to this total loan figure (€235,071.88). So the balance sheet item of interest on the asset side is ΔA4, the increase in loans (claims on customers) amounting to €235,071.88.

We now would like to analyse the balance sheet in order to see whether this new loan of €235,071.88 was withdrawn from other accounts at the bank, or how else it was funded. We first proceed with considering activity on the asset side. Denoting the change in asset item i during the observation period by ΔAi (balances stated in thousands below), and noting that all other asset items remained unchanged, we can summarise the balance sheet changes, within the balance sheet constraints, as follows:

ΔA1 + ΔA3 + ΔA4 + ΔA14 = ΔA16

Numerically, these are, rounded in thousand euro:

158 − 219 + 235 + 4 = 178

The fractional reserve theory says that the loan balance must be paid from reserves. These can be either cash balances or reserves with other banks (including the central bank). The deposits (claims) with other financial institutions (which effectively include the bank’s central bank reserve balances) declined significantly, by €219,000. At the same time, cash reserves increased significantly. What may have happened is that the bank withdrew legal tender from its account with the cooperative central bank, explaining both the rise in cash and the decline in balances with other financial institutions. Since the theories do not distinguish between these categories, we can aggregate A1 and A3, the cash balances and reserves. Also, to simplify, we aggregate A14 (other assets) with A4 (claims on customers), to obtain:

Δ(A1 + A3) + Δ(A4 + A14) = ΔA16

−61 + 239 = 178

We observe that reserves fell, while claims on customers rose significantly. Moreover, total assets also rose, by an amount not dissimilar to our loan balance. Can this information be reconciled with the three theories of banking?

Considering the financial intermediation hypothesis, we would expect a decline in reserves (accounts with other financial institutions and cash) of the same amount as customer loans increased. Reserves however declined by far less. At the same time, the balance sheet expanded, driven by a significant increase in claims on customers. If the bank borrowed money from other banks in order to fund the loan (thus reducing its balance of net claims on other banks), in line with the financial intermediation theory of banking, vault cash should not increase and neither should the balance sheet. We observe both a significant rise in cash holdings and an expansion in the total balance sheet (total assets and total liabilities), which rose by €178,000. This cannot be reconciled with the financial intermediation theory, which we therefore must consider as rejected.

Considering the fractional reserve theory, we confirmed by asking the credit department’s Mr. Keil, as well as the directors, that none of them checked their reserve balance or balance of deposits with other banks before signing the loan contract and making the funds available to the borrower (see the translated letter in Appendix 2, and the original letter in the online Appendix 3). Furthermore, there is no evidence that reserves (cash and claims on other financial institutions) declined in an amount commensurate with the loan taken out. Finally, the observed increase in the balance sheet can also not be reconciled with the standard description of the fractional reserve theory. We must therefore consider it as rejected, too.
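The rejection logic of the two preceding paragraphs can be summarised by comparing each theory’s predicted asset-side pattern with the observed changes. The predictions below are stylised paraphrases of the text, not formal models:

```python
# Observed changes in thousands of euro (rounded), aggregated as in the
# text: reserves = cash + claims on financial institutions (A1 + A3);
# loans = claims on customers plus other assets (A4 + A14).
observed = {"d_reserves": 158 - 219, "d_loans": 235 + 4, "d_total": 178}

# Stylised predictions for a 235k loan, paraphrased from the text:
predictions = {
    # Financial intermediation: the loan is funded by transferring
    # existing funds (reserves fall one-for-one, no balance sheet growth).
    "financial_intermediation": {"d_reserves": -235, "d_total": 0},
    # Fractional reserve: the individual bank pays the loan out of
    # excess reserves (again, no balance sheet growth).
    "fractional_reserve": {"d_reserves": -235, "d_total": 0},
    # Credit creation: nothing is transferred from anywhere; the
    # balance sheet expands by the loan amount.
    "credit_creation": {"d_reserves": 0, "d_total": 235},
}

# Only the credit creation theory predicts the observed balance sheet
# expansion (total assets rose by 178k while reserves fell by only 61k).
consistent = [t for t, p in predictions.items()
              if (p["d_total"] > 0) == (observed["d_total"] > 0)]
print(consistent)  # ['credit_creation']
```

This is only a compact restatement of the comparison carried out in prose; the substantive argument remains the accounting evidence itself.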

This leaves us with the credit creation theory. Can we reconcile the observed accounting asset side information with it? And what do we learn from the liability side information?

The transactions are linked to each other via the accounting identities of the balance sheet (Eqs. (1), (2), (3)). We can therefore ask what would have happened to total assets if we assumed for the moment that no other transaction had taken place apart from the loan (235). We can set the change in each asset item (except for ΔA4, our loan) to zero, if we subtract the same amount from the change in total assets. The new total asset balance in this hypothetical scenario would be:

ΔA16′ = ΔA16 − ΔA1 − ΔA3 − ΔA14 = 178 − 158 + 219 − 4 = 235

or, in general,

ΔA16′ = ΔA4

In other words, if the other transactions had not happened, the bank’s balance sheet would have expanded by the same amount as the loans taken out. This finding is consistent only with the credit creation theory of bank lending.
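The netting-out computation described above can be verified mechanically; a minimal sketch using the rounded figures in thousands of euro:

```python
# Observed asset-side changes in thousands of euro (rounded), Table 6:
dA1 = 158    # cash
dA3 = -219   # claims on financial institutions
dA4 = 235    # claims on customers (the test loan plus same-day overdrafts)
dA14 = 4     # other assets
dA16 = dA1 + dA3 + dA4 + dA14   # change in total assets
print(dA16)  # 178

# Hypothetical scenario: set every asset change except the loan to
# zero, subtracting the same amounts from the change in total assets.
dA16_hypothetical = dA16 - (dA1 + dA3 + dA14)
print(dA16_hypothetical)         # 235
assert dA16_hypothetical == dA4  # the balance sheet expands by the loan
```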

The evidence is not as easily interpreted as may have been desired, since in practice it is not possible to stop all other bank transactions that may be initiated by bank customers (who are nowadays able to implement transactions via internet banking even on holidays). But the available accounting data cannot be reconciled with the fractional reserve and the financial intermediation hypotheses of banking.

5. Conclusion
This paper was intended to serve two functions. First, the history of economic thought was examined concerning the question of how banks function. It was found that a long-standing controversy exists that has not been settled empirically. Second, empirical tests were conducted to settle the existing and continuing controversies and to find out which of the three theories of banking is consistent with the empirical observations.

5.1. Three theories but no empirical test
Concerning the first issue, in this paper we identified three distinct hypotheses concerning the role of banks, namely the credit creation theory, the fractional reserve theory and the financial intermediation theory. It was found that the first theory was dominant until about the mid- to late 1920s, featuring leading proponents such as Macleod and Schumpeter. Then the second theory became dominant, under the influence of such economists as Keynes, Crick, Phillips and Samuelson, until about the early 1960s. From the early 1960s, first under the influence of Keynes and Tobin and the Journal of Money, Credit and Banking, the financial intermediation theory became dominant.

Yet, despite these identifiable eras of predominance, doubts have remained concerning each theory. Most recently, the credit creation theory has experienced a revival, having been championed again in the aftermath of the Japanese banking crisis in the early 1990s (Werner, 1992, Werner, 1997) and in the run-up to and aftermath of the European and US financial crises since 2007 (see Bank of England, 2014a, Bank of England, 2014b, Benes and Kumhof, 2012, Ryan-Collins et al., 2011, Ryan-Collins et al., 2012, Werner, 2003b, Werner, 2005, Werner, 2012). However, such works have not yet become influential in the majority of models and theories of the macro-economy or banking. Thus it had to be concluded that the controversy continues, without any scientific attempt having been made at settling it through empirical evidence.

5.2. The empirical evidence: credit creation theory supported
The second contribution of this paper has been to report on the first empirical study testing the three main hypotheses. They were tested in a real-world setting: borrowing from a bank and examining the bank’s actual internal accounting in an uncontrolled, real-world environment.

It was examined whether in the process of making money available to the borrower the bank transfers these funds from other accounts (within or outside the bank). In the process of making loaned money available in the borrower’s bank account, it was found that the bank did not transfer the money away from other internal or external accounts, resulting in a rejection of both the fractional reserve theory and the financial intermediation theory. Instead, it was found that the bank newly ‘invented’ the funds by crediting the borrower’s account with a deposit, although no such deposit had taken place. This is in line with the claims of the credit creation theory.

Thus it can now be said with confidence for the first time – possibly in the 5000 years’ history of banking – that it has been empirically demonstrated that each individual bank creates credit and money out of nothing, when it extends what is called a ‘bank loan’. The bank does not loan any existing money, but instead creates new money. The money supply is created as ‘fairy dust’ produced by the banks out of thin air.32 The implications are far-reaching.

5.3. What is special about banks
Henceforth, economists need not rely on assertions concerning banks. We now know, based on empirical evidence, why banks are different, indeed unique — solving the longstanding puzzle posed by Fama (1985) and others — and different from both non-bank financial institutions and corporations: it is because they can individually create money out of nothing.

5.4. Implications
5.4.1. Implications for economic theory
The empirical evidence shows that, of the three theories of banking, it is the one that today has the least influence and is belittled in the literature that is supported by the empirical evidence. Furthermore, it is the theory that was widely held at the end of the 19th century and in the first three decades of the 20th.

It is sobering to realise that since the 1930s economists have moved further and further away from the truth, instead of coming closer to it: first via the half-truth of the fractional reserve theory, and then to the completely false and misleading financial intermediation theory that is so dominant today. Thus this paper has found evidence that there has been no progress in scientific knowledge in economics, finance and banking in the 20th century concerning one of the most important and fundamental facts for these disciplines. Instead, there has been a regressive development. The known facts were unlearned and have become unknown. This phenomenon deserves further research.

For now it can be mentioned that this process of unlearning the facts of banking could not possibly have taken place without the leading economists of the day having played a significant role in it. The most influential and famous of all 20th century economists, as we saw, was a sequential adherent of all three theories, which is a surprising phenomenon. Moreover, Keynes used his considerable clout to slow scientific analysis of the question whether banks can create money, as he instead engaged in ad hominem attacks on followers of the credit creation theory. Despite his enthusiastic early support for the credit creation theory (Keynes, 1924), only six years later he was condescending, if not dismissive, of this theory, referring to credit creation only in inverted commas.
He was perhaps even more dismissive of supporters of the credit creation theory, who he referred to as being part of the “Army of Heretics and Cranks, whose numbers and enthusiasm are extraordinary”, and who seem to believe in “magic” and some kind of “Utopia” (Keynes, 1930, vol. 2, p. 215).33

Needless to mention, such rhetoric is not conducive to scientific argument. But this technique was followed by other economists engaged in advancing the fractional reserve and later financial intermediation theories. US Federal Reserve staffer Alhadeff (1954) argued similarly during the era when economists worked on getting the fractional reserve theory established:
“One complication worth discussing concerns the alleged “creation” of money by bankers. It used to be claimed that bankers could create money by the simple device of opening deposit accounts for their business borrowers. It has since been amply demonstrated that under a fractional reserve system, only the totality of banks can expand deposits to the full reciprocal of the reserve ratio. [Original footnote: ‘Chester A. Phillips, Bank Credit (New York: Macmillan, 1921), chapter 3, for the classical refutation of this claim.’] The individual bank can normally expand to an amount about equal to its primary deposits” (p. 7).

The creation of credit by banks had become, in the style of Keynes (1930), an “alleged ‘creation’”, whereby rhetorically it was suggested that such thinking was simplistic and hence could not possibly be true. Tobin used the rhetorical device of reductio ad absurdum to denigrate the credit creation theory by incorrectly suggesting it postulated a ‘widow’s cruse’, a miraculous vessel producing unlimited amounts of valuable physical goods, and that its followers were thus believers in miracles or utopias.

This same type of rhetorical denigration of, and disengagement with, the credit creation theory is also visible in the most recent era. For instance, the New Palgrave Money (Eatwell et al., 1989) is an influential 340-page reference work that claims to present a ‘balanced perspective on each topic’ (Eatwell et al., 1989, p. viii). Yet the financial intermediation theory is dominant, with a minor representation of the fractional reserve theory. The credit creation theory is not presented at all, even as a possibility. But the book does include a chapter entitled “Monetary cranks”. In this brief chapter, Keynes’ (1930) derogatory treatment of supporters of the credit creation theory is updated for use in the 1990s, with sharpened claws: ridicule and insult are heaped on several hapless authors who have produced thoughtful analyses of the economy, the monetary system and the role of banks, such as Nobel laureate Sir Frederick Soddy (1934) and C.H. Douglas (1924). Even the seminal and influential work by Georg Friedrich Knapp (1905), still favourably cited by Keynes (1936), is identified as being created by a ‘crank’. What these apparently wretched authors have in common, and what seems to be their main fault, punishable by being listed in this inauspicious chapter, is that they are adherents of the credit creation theory. But, revealingly, their contributions are belittled without it anywhere being stated what their key tenets are, or that their analyses centre on the credit creation theory, which itself remains unnamed and is never spelled out. This is no small feat, and leaves one pondering the possibility that the Eatwell et al. (1989) tome was purposely designed to ignore and distract from the rich literature supporting the credit creation theory. Nothing is lost, according to the authors, who applaud the development that, due to
“the increased emphasis given to monetary theory by academic economists in recent decades, the monetary cranks have largely disappeared from public debate …” (p. 214).

And so has the credit creation theory. Since the tenets of this theory are never stated in Eatwell et al. (1989), the chapter on ‘Cranks’ ends up being a litany of ad hominem denigration, defamation and character assassination, liberally distributing labels such as ‘cranks’, ‘phrase-mongers’, ‘agitators’, ‘populists’, and even ‘conspiracy theorists’ that believe in ‘miracles’ and engage in wishful thinking, ultimately deceiving their readers by trying to “impress their peers with their apparent understanding of economics, even though they had no formal training in the discipline” (p. 214). All that we learn about their actual theories is that, somehow, these ill-fated authors are “opposed to private banks and the ‘Money Power’ without their opposition leading to more sophisticated political analysis” (p. 215). Any reading of the highly sophisticated Soddy (1934) quickly reveals such labels as unfounded defamation.

To the contrary, the empirical evidence presented in this paper has revealed that the many supporters of the financial intermediation theory, and also the adherents of the fractional reserve theory, are flat-earthers who believe in what is empirically proven to be wrong, and what should have been recognisable as impossible upon deeper consideration of the accounting requirements. Whether the authors in Eatwell et al. (1989) did in fact know better is an open question that deserves attention in future research. Certainly the unscientific treatment of the credit creation theory and its supporters by such authors as Keynes, who strongly endorsed the theory only a few years before authoring tirades against its supporters, or by the authors in Eatwell et al. (1989), raises this possibility.

5.4.2. Implications for government policy
There are other, far-reaching ramifications of the finding that banks individually create credit and money when they do what is called ‘lending money’. It is readily seen that this fact is important not only for monetary policy, but also for fiscal policy, and needs to be reflected in economic theories. Policies concerning the avoidance of banking crises, or dealing with the aftermath of crises, require a different shape once the reality of the credit creation theory is recognised. They call for a whole new paradigm in monetary economics, macroeconomics, finance and banking (for details, see for instance Werner, 1997, Werner, 2005, Werner, 2012, Werner, 2013a, Werner, 2013b) that is based on the reality of banks as creators of the money supply. It has potentially important implications for other disciplines, such as accounting, economic and business history, economic geography, politics, sociology and law.

5.4.3. Implications for bank regulation
The implications are far-reaching for bank regulation and the design of official policies. As mentioned in the Introduction, modern national and international banking regulation is predicated on the assumption that the financial intermediation theory is correct. Since in fact banks are able to create money out of nothing, imposing higher capital requirements on banks will not necessarily prevent boom–bust cycles and banking crises: even with higher capital requirements, banks can continue to expand the money supply, thereby fuelling asset prices, and some of this newly created money can be used to increase bank capital. Based on this recognition, some economists have argued for more direct intervention by the central bank in the credit market, for instance via quantitative credit guidance (Werner, 2002, Werner, 2003b, Werner, 2005).
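The capital-requirement point can be illustrated with a stylised arithmetic sketch. The figures below are hypothetical; only the mechanism, in which newly created deposits are partly used to buy newly issued bank equity, is asserted in the text:

```python
# Stylised illustration: a bank subject to an 8% capital requirement
# expands its loan book; part of the newly created deposit money buys
# newly issued bank shares, so the requirement stays satisfied even as
# the balance sheet grows. All figures are hypothetical.
capital = 8.0    # own capital
assets = 100.0   # loans
print(round(capital / assets, 3))  # 0.08 -- requirement met

new_loans = 50.0      # new credit, creating 50.0 of new deposits
equity_issued = 4.0   # part of that new money buys new bank equity
assets += new_loans
capital += equity_issued
print(round(capital / assets, 3))  # 0.08 -- still met after expansion
```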

5.4.4. Monetary reform
The Bank of England’s (2014a, 2014b) recent intervention has triggered a public debate about whether the privilege of banks to create money should in fact be revoked (Wolf, 2014). The reality of banks as creators of the money supply does raise the question of the ideal type of monetary system. Much research is needed on this account. Among the many different monetary system designs tried over the past 5000 years, very few have met the requirement for a fair, effective, accountable, stable, sustainable and democratic creation and allocation of money.

The view of the author, based on more than twenty-three years of research on this topic, is that the safest bet is to ensure that the awesome power to create money is returned directly to those to whom it belongs: ordinary people, not technocrats. This can be ensured by the introduction of a network of small, not-for-profit local banks across the nation. Most countries do not currently possess such a system. However, it is at the heart of the successful German economic performance of the past 200 years. It is the very Raiffeisen, Volksbank or Sparkasse banks, the smaller the better, that were helpful in the implementation of this empirical study that should serve as the role model for future policies concerning our monetary system. In addition, one can complement such local public bank money with money issued by local authorities that is accepted for the payment of local taxes, namely a local public money that has not come about by creating debt, but that is created for services rendered to local authorities or the community. Both forms of local money creation together would create a decentralised and more accountable monetary system that should perform better (based on the empirical evidence from Germany) than the unholy alliance of central banks and big banks, which have done much to create unsustainable asset bubbles and banking crises (Werner, 2013a, Werner, 2013b).

Appendix 1. Sequence of steps for the extension of a loan Raiffeisenbank Wildenberg e.G.
Negotiations concerning the details of the loan.

Receipt of KYC information and opening of a new customer file (new customer).

Opening of a current account (new customer).

Calculation of the loan and repayment schedule, model calculation, European required customer notification information, record of customer advisory.

Entry of loan application into the bank IT system.

Check of ability to service and repay the loan/conducting liquidity calculation in loan application.

Credit rating of customer, entry into customer file.

Search of customer data on central bank data base for singular economic dependencies and entry of results into bank IT.

Bank board recommendation on loan application with justification (2 directors).

Print out of loan contract, general loan conditions, with handover receipted by customer.

Print out of the protocol of the loan process.

Approval of credit by bank directors by signing the protocol and the loan contract.

Creation of loan account in the IT system.

Establishment of credit limit and availability of credit.

Appointment with customer.

Customer signs credit documents.

Payment of loan at the value date, in exchange for evidence of use of the loan in line with the declared use in the loan application.

Appendix 2. Letter of confirmation of facts by Raiffeisenbank Wildenberg e.G. (Translation; original in online Appendix 3).
10 June 2014

Dear Prof. Dr. Werner,

Confirmation of Facts

In connection with the extension of credit to you in August 2013 I am pleased to confirm that neither I as director of Raiffeisenbank Wildenberg eG, nor our staff checked either before or during the granting of the loan to you, whether we keep sufficient funds with our central bank, DZ Bank AG, or the Bundesbank. We also did not engage in any such related transaction, nor did we undertake any transfers or account bookings in order to finance the credit balance in your account. Therefore we did not engage in any checks or transactions in order to provide liquidity.

Yours sincerely,

M. Rebl,

Director, Raiffeisenbank Wildenberg e.G.

Supplementary material 1.

Supplementary material 2. The account statement of the borrower.

Supplementary material 3. Bank balance sheet before extending the loan.

Supplementary material 4. Bank balance sheet after extending the loan.

Supplementary material 5. Scan of the letter from Raiffeisenbank Wildenberg.

Alhadeff, 1954
David A. Alhadeff
The Rise of Commercial Banking
University of California Press, Berkeley (1954)
(reprinted in 1980 by Arno Press as: Monopoly and Competition in Banking)
Allen et al., 2014
Franklin Allen, Elena Carletti, Douglas Gale
Money, financial stability and efficiency
Journal of Economic Theory, Vol. 149(C), Elsevier (2014), pp. 100-127
Allen and Gale, 2000
F. Allen, D. Gale
Financial contagion
Journal of Political Economy, 108 (2000), pp. 1-33
Allen and Gale, 2004a
F. Allen, D. Gale
Financial intermediaries and markets
Econometrica, 72 (2004), pp. 1023-1061
Allen and Gale, 2004b
F. Allen, D. Gale
Competition and financial stability
Journal of Money, Credit and Banking, 36 (2004), pp. 453-480
Allen and Santomero, 2001
F. Allen, A.M. Santomero
What do financial intermediaries do?
Journal of Banking & Finance, 25 (2001), pp. 271-294
Baltensperger, 1989
Ernst Baltensperger
John Eatwell, Murray Milgate, Peter Newman (Eds.), The New Palgrave Money, Macmillan, Basingstoke (1989)
Bank of England, 2014a
Bank of England
The Bank of England’s forecasting platform: COMPASS, MAPS, EASE and the suite of models
Bank of England working paper no. 471 (2014)
Bank of England, 2014b
Bank of England
Money in the modern economy: An introduction, by Michael McLeay, Amar Radia and Ryland Thomas of the bank’s monetary analysis directorate
Quarterly Bulletin, Q1 (2014)
BBC, 2013, February 26
Negative interest rates idea floated by Bank’s Paul Tucker
BBC News (2013, February 26), accessed online
BCBS, 1999
Capital requirements and bank behaviour: The impact of the Basel Accord
BCBS working paper no. 1, BIS, Basel (1999, April)
Bencivenga and Smith, 1991
V.R. Bencivenga, B. Smith
Financial intermediation and endogenous growth
Review of Economic Studies, 58 (1991), pp. 195-209
Benes and Kumhof, 2012
Jaromir Benes, Michael Kumhof
The Chicago plan revisited
IMF, Washington, DC (2012)
Bernanke, 1993
Ben Bernanke
Credit in the macroeconomy
FRBNY Quarterly Review (1993), pp. 50-70
(Spring 1992–93)
Bernanke and Blinder, 1989
Ben Bernanke, Alan S. Blinder
Credit, money and aggregate demand
American Economic Review, 78 (1989), pp. 435-439
Bernanke and Gertler, 1995
Ben Bernanke, Mark Gertler
Inside the black box: The credit channel of monetary policy transmission
Journal of Economic Perspectives, 9 (4) (1995), pp. 27-48
Branson, 1968
William Branson
Financial Flows in the US Balance of Payments
North-Holland, Amsterdam (1968)
Casu and Girardone, 2006
Barbara Casu, Claudia Girardone
Bank competition, concentration and efficiency in the single European market
The Manchester School, 74 (4 Special Issue) (2006), pp. 441-468
Crick, 1927
W.F. Crick
The genesis of bank deposits
Economica (1927), pp. 191-202
Davenport, 1913
Herbert J. Davenport
The Economics of Enterprise
Augustus M. Kelley, New York (1913)
Dewatripont et al, 2010
Mathias Dewatripont, Jean-Charles Rochet, Jean Tirole
Balancing the Banks: Global Lessons from the Financial Crisis
Princeton University Press, Princeton (2010)
Diamond, 1984
Douglas W. Diamond
Financial intermediation as delegated monitoring
Review of Economic Studies, 51 (1984), pp. 393-414
Diamond, 1991
Douglas W. Diamond
Monitoring and reputation: The choice between bank loans and directly placed debt
Journal of Political Economy, 99 (4) (1991), pp. 689-721
Diamond, 2007, Spring
Douglas W. Diamond
Banks and liquidity creation: A simple exposition of the Diamond-Dybvig model
Economic Quarterly, 93 (2) (2007, Spring), pp. 189-200
Diamond and Dybvig, 1983
Douglas W. Diamond, P.H. Dybvig
Bank runs, deposit insurance, and liquidity
Journal of Political Economy, 91 (3) (1983), pp. 401-419
Diamond and Rajan, 2001
Douglas W. Diamond, Raghuram G. Rajan
Banks, short-term debt and financial crises: Theory, policy implications and applications
Carnegie-Rochester Conference Series on Public Policy, Vol. 54(1), Elsevier (2001, June), pp. 37-71
Douglas, 1924
C.H. Douglas
Social Credit
Eyre & Spottiswoode, London (1924)
Eatwell et al., 1989
John Eatwell, Murray Milgate, Peter Newman (Eds.), The New Palgrave Money, Macmillan, Basingstoke (1989)
Fama, 1985
Eugene Fama
What’s different about banks?
Journal of Monetary Economics, 15 (1985), pp. 29-39
Gertler and Kiyotaki, 2011
M. Gertler, N. Kiyotaki
Financial intermediation and credit policy in business cycle analysis
B. Friedman, M. Woodford (Eds.), Handbook of Monetary Economics, Vol. 3A, Elsevier, North Holland (2011)
Goodfriend, 1991, Jan/Feb
Marvin Goodfriend
Money, credit, banking, and payments system policy
Economic Review, Federal Reserve Bank of Richmond (1991, Jan/Feb), pp. 7-23
Goodhart, 1989
C.A.E. Goodhart
John Eatwell, Murray Milgate, Peter Newman (Eds.), The new Palgrave money, Macmillan, Basingstoke (1989)
Gorton and Pennacchi, 1990
Gary Gorton, George Pennacchi
Financial intermediaries and liquidity creation
The Journal of Finance, 45 (1) (1990, Mar.), pp. 49-71
Gurley and Shaw, 1955
John G. Gurley, E.S. Shaw
Financial aspects of economic development
American Economic Review, XLV (1955, September), pp. 515-528
Gurley and Shaw, 1960
John G. Gurley, E.S. Shaw
Money in a Theory of Finance
The Brookings Institution, Washington, D.C. (1960)
Guttentag and Lindsay, 1968, Sep.–Oct
Jack M. Guttentag, Robert Lindsay
The uniqueness of commercial banks
Journal of Political Economy, 76 (5) (1968, Sep.–Oct.), pp. 991-1014
Hahn, 1920
Albert C. Hahn
Volkswirtschaftliche Theorie des Bankkredits
J.C.B. Mohr, Tübingen (1920)
Hawtrey, 1919
Ralph George Hawtrey
Currency and Credit
Longmans, Green and Co. (1919)
Heffernan, 1996
Shelagh Heffernan
Modern Banking in Theory and Practice
John Wiley and Sons, Chichester (1996)
Hellwig, 1977
Martin Hellwig
A model of borrowing and lending with bankruptcy
Econometrica, 45 (1977), pp. 1879-1906
Hellwig, 1991
Martin Hellwig
Banking, financial intermediation and corporate finance
Alberto Giovannini, Colin Mayer (Eds.), European Financial Integration, Cambridge University Press, Cambridge (1991)
Hellwig, 2000
Martin F. Hellwig
Financial intermediation with risk aversion
Review of Economic Studies, 67 (4) (2000), pp. 719-742
Hoshi and Kashyap, 2004
Takeo Hoshi, Anil K. Kashyap
Japan’s financial crisis and economic stagnation
Journal of Economic Perspectives, 18 (1) (2004), pp. 3-26
Howe, 1915
Robert Harrison Howe
The evolution of banking; A study of the development of the credit system
C.H. Kerr & Company, Chicago (1915)
Kashyap, 2002
Anil K. Kashyap
Sorting out the Japanese financial crisis
Federal Reserve Bank of Chicago Economic Perspectives, 26 (4) (2002), pp. 42-55
Kashyap et al., 2002
A. Kashyap, R. Rajan, J. Stein
Banks as liquidity providers: An explanation for the coexistence of lending and deposit-taking
Journal of Finance, 57 (2002), pp. 33-73
Keynes, 1924
John Maynard Keynes
Tract on Monetary Reform
Macmillan, London (1924)
Keynes, 1930
John Maynard Keynes
A Treatise on Money
Macmillan, London (1930)
Keynes, 1936
John Maynard Keynes
The General Theory of Employment, Interest and Money
Macmillan, London (1936)
Kiyotaki and Moore, 1997
Nobuhiro Kiyotaki, John Moore
Credit cycles
Journal of Political Economy, 105 (1997), pp. 211-248
Klein, 1971
Michael A. Klein
A theory of the banking firm
Journal of Money, Credit and Banking, 3 (1971, May), pp. 205-218
Knapp, 1905
Georg Friedrich Knapp
Staatliche Theorie des Geldes
Duncker & Humblot, Leipzig (1905)
Kohn, 2009, October 9
Donald Kohn
Monetary policy research and the financial crisis: Strength and shortcomings
Speech at the Federal Reserve Conference on key developments in monetary policy, Washington, D.C (2009, October 9)
Koo and Fujita, 1997, March 26
Richard Koo, Shigeru Fujita
Koka asai ryoteki kin’yu kanwa [Quantitative monetary easing is hardly effective]
Keizai Kyoshitsu, Nikkei (1997, March 26), p. 31
Krugman and Obstfeld, 2000
Paul R. Krugman, Maurice Obstfeld
International Economics, Theory and Policy
Addison-Wesley Publishing Company, Reading, MA (2000)
Law, 1705
John Law
Money and Trade Consider’d with a Proposal for Supplying the Nation with Money
R. & A. Foulis, Glasgow (1705)
Lutz, 1939
F.A. Lutz
Velocity analysis and the theory of the creation of deposits
Economica, 6 (22) (1939, May), pp. 156-169
MacLeod, 1855–6
Henry Dunning Macleod
The Theory and Practice of Banking, in 2 volumes
Citations from the 6th ed., Longman, Greens and Co., London (1855–6)
MacLeod, 1891
Henry Dunning Macleod
Theory of Credit, Vol. 2 (1891)
Macmillan Committee, 1931
Macmillan Committee
British Parliamentary Reports on International Finance: The Report of the Macmillan Committee
H.M. Stationery Office, London (1931)
Marshall, 1888
Alfred Marshall
Report by the Gold and Silver Commission of 1887, command 5512xxx
Matthews and Thompson, 2005
Kent Matthews, John Thompson
The Economics of Banking
John Wiley and Sons, Chichester (2005)
Mayer, 1988
Colin Mayer
New issues in corporate finance
European Economic Review, 32 (1988), pp. 1167-1188
Miller and VanHoose, 1993
Roger L. Miller, David D. VanHoose
Modern Money and Banking, International Editions
(3rd ed.), McGraw-Hill, New York (1993)
Mints, 1945
Lloyd W. Mints
A History of Banking Theory in Great Britain and the United States
University of Chicago Press, Chicago (1945)
Moeller, 1925
Hero Moeller
Die Lehre vom Gelde
Quelle und Meyer, Leipzig (1925)
Monti, 1972
Mario Monti
Deposit, credit and interest rates determination under alternative bank objective functions
G.P. Szegö, K. Shell (Eds.), Mathematical Methods in Investment Finance, North-Holland, Amsterdam (1972)
Müller, 1816
Adam Heinrich Müller
Versuch einer neuen Theorie des Geldes, mit besonderer Rücksicht auf Großbritannien
Leipzig: F. A. Brockhaus (1816)
Myers and Rajan, 1998
S.C. Myers, R.G. Rajan
The paradox of liquidity
The Quarterly Journal of Economics, 113 (3) (1998), pp. 733-771
Phillips, 1920
Chester A. Phillips
Bank Credit
Macmillan, New York (1920)
Pigou, 1927
Arthur C. Pigou
Industrial Fluctuations
Ryan-Collins et al., 2011
Josh Ryan-Collins, Tony Greenham, Richard A. Werner, Andrew Jackson
Where does Money Come From?
New Economics Foundation, London (2011)
Robertson, 1926
Dennis Robertson
Banking Policy and the Price Level
Romer, 2006
David Romer
Advanced Macroeconomics
(3rd ed.) (2006)
Ryan-Collins et al., 2012
Josh Ryan-Collins, Tony Greenham, Richard A. Werner, Andrew Jackson
Where does Money Come From?
(2nd ed.), New Economics Foundation, London (2012)
Samuelson, 1948
Paul Samuelson
Economics
McGraw-Hill, New York (1948)
Samuelson and Nordhaus, 1995
Paul Samuelson, William Nordhaus
Economics
McGraw-Hill, New York (1995)
Saving, 1977
Thomas R. Saving
A theory of the money supply with competitive banking
Journal of Monetary Economics, 3 (1977), pp. 289-303
Schumpeter, 1912
Joseph Alois Schumpeter
Die Theorie der wirtschaftlichen Entwicklung.
Harvard University Press, Boston (1912)
Schumpeter, 1926
Joseph A. Schumpeter
Theorie der wirtschaftlichen Entwicklung
(2nd edition), Duncker & Humblot, Leipzig (1926)
Schumpeter, 1954
Joseph Alois Schumpeter
History of Economic Analysis
Oxford University Press, New York (1954)
Sealey and Lindley, 1977
C. Sealey, J.T. Lindley
Inputs, outputs and a theory of production and cost at depositary financial institutions
Journal of Finance, 32 (1977), pp. 1251-1266
Sheard, 1989
Paul Sheard
The mainbank system and corporate monitoring and control in Japan
Journal of Economic Behavior and Organization, 11 (1989), pp. 399-422
Smith, 1776
Adam Smith
An Inquiry into the Nature and Causes of the Wealth of Nations
Soddy, 1934
Frederick Soddy
The Role of Money
Elkin Mathews & Marrot, London (1934)
Stamp, 1927
Josiah Stamp
Industrial fluctuations, by A.C. Pigou, review by: J.C. Stamp
The Economic Journal, 37 (147) (1927, Sep.), pp. 418-424
Stein, 2014, January 3
Jeremy C. Stein
Banks as patient debt investors
American Economic Association/American Finance Association Joint Luncheon, Philadelphia, Pennsylvania (2014, January 3)
Steuart, 1767
James Steuart
An inquiry into the principles of political economy
Stiglitz, 1997
Joseph Stiglitz
Economics
(2nd ed.), W.W. Norton, New York (1997)
Thornton, 1802
Henry Thornton
An Enquiry into the Nature and Effects of the Paper Credit of Great Britain
Tobin, 1963
James Tobin
Commercial banks as creators of ‘money’
D. Carson (Ed.), Cowles foundation paper 205, Banking and Monetary Studies, Irwin, Homewood (1963)
Tobin, 1969
James Tobin
A general equilibrium approach to monetary theory
Journal of Money, Credit and Banking, 1 (1969), pp. 15-29
Tobin and Brainard, 1963
James Tobin, William C. Brainard
Financial intermediaries and the effectiveness of monetary controls
American Economic Review, 53 (1963), pp. 383-400
Tooke, 1838
Thomas Tooke
An Inquiry into the Currency Principle
Walsh, 2003
Carl Walsh
Monetary Theory and Policy
MIT Press, Cambridge (2003)
Werner, 1992
Richard A. Werner
Towards a quantity theory of disaggregated credit and international capital flows
Paper presented at the Royal Economic Society Annual Conference, York, April 1993 and at the 5th Annual PACAP Conference on Pacific–Asian Capital Markets in Kuala Lumpur, June 1993, reviewed by the Economist on 19 June 1993 in the Economics Focus (1992)
Werner, 1997
Richard A. Werner
Towards a new monetary paradigm: A quantity theorem of disaggregated credit, with evidence from Japan
Kredit und Kapital, 30 (2) (1997, July), pp. 276-309
Werner, 2002
Richard A. Werner
Monetary policy implementation in Japan: What they say versus what they do
Asian Economic Journal, 16 (2) (2002), pp. 111-151
Werner, 2003a
Richard A. Werner
Princes of the Yen, Japan’s Central Bankers and the Transformation of the Economy
M.E. Sharpe, New York (2003)
Werner, 2003b
Richard A. Werner
A discussion of Anil K. Kashyap’s paper ‘Sorting out Japan’s financial crisis’
The Japanese Economy, Vol. 30 (no. 4), M.E. Sharpe, New York (2003)
Werner, 2005
Richard A. Werner
New paradigm in macroeconomics
Palgrave Macmillan, Basingstoke (2005)
Werner, 2010a
Richard A. Werner
Comment on “Strengthening the resilience of the banking sector”
Official submission to public call for comments on ‘Strengthening the resilience of the banking sector, consultative document’ by the Basel Committee on Banking Supervision, September 2009, Bank for International Settlements, Basel (2010)
(Submitted 16 April 2010)
Werner, 2010b
Richard A. Werner
Towards stable and competitive banking in the UK — Evidence for the ICB
University of Southampton CBFSD Discussion Paper 2010, submitted to the Independent Commission on Banking, UK (Chair: Professor Sir John Vickers), 19 November 2010
Werner, 2012
Richard A. Werner
Towards a new research programme on ‘Banking and the economy’ — Implications of the quantity theory of credit for the prevention and resolution of banking and debt crises
International Review of Financial Analysis, 25 (2012), pp. 94-105, 10.1016/j.irfa.2012.06.002
Werner, 2013a
Richard A. Werner
Towards a more stable and sustainable financial architecture — A discussion and application of the quantity theory of credit
Kredit und Kapital, 46 (3) (2013), pp. 353-389
Werner, 2013b
Richard A. Werner
Commentary: Crises, the spatial distribution of economic activity and the geography of banking
Environment & Planning A, 45 (2013), pp. 2789-2796
Wicksell, 1898
Knut Wicksell
Geldzins und Güterpreise
Wicksell, 1907
Knut Wicksell
The influence of the rate of interest on prices
Economic Journal, 17 (1907), pp. 213-220
Wicksell, 1922
Knut Wicksell
Vorlesungen über Nationalökonomie auf Grundlage des Marginalprinzipes/Bd. 2
Geld und Kredit, Jena (1922)
Wicksell, 1935
Knut Wicksell
Lectures on Political Economy
Routledge, London (1935)
Withers, 1909
Hartley Withers
The Meaning of Money
Withers, 1916
Hartley Withers
The Business of Finance
Wolf, 2014, Apr. 24
Martin Wolf
Strip private banks of their power to create money
Financial Times (2014, Apr. 24)
Wolfe, 1997
Simon Wolfe
An Economic Analysis of Financial Institutions’ Accounting Practice
Dissertation, Department of Management, Faculty of Social Sciences, University of Southampton (1997)
Woodford, 2003
Michael Woodford
Interest and Prices
Princeton University, Princeton (2003)
Yohe, 1995
William P. Yohe
Interactive Money and Banking
Blackwell, Oxford (1995)

The author wishes to acknowledge excellent research support from Dr. Kostas Voutsinas and Shamsher Dhanda. Moreover, the author is grateful to the many bank staff at numerous banks involved in this study, who have given their time for meetings and interviews. Most of all, the author would like to thank Mr. Marco Rebl, Director of Raiffeisenbank Wildenberg e.G., for his cooperation and arranging the cooperation of his colleagues in conducting the empirical examination of bank credit creation and making the facilities, accounts and staff of his bank accessible to the researcher. Finally, should grains of wisdom be found in this article, the author wishes to attribute them to the source of all wisdom (Jeremiah 33:3).

Translated into English by the author. See also Wicksell (1935).

Federal Reserve Vice-Chairman Kohn (2009) bemoaned this issue. Examples of leading macroeconomic and monetary models without any banks include Walsh (2003) and Woodford (2003), but this problem applies to all the conventional macromodels proposed by the major conventional schools of thought, such as the classical, Keynesian, monetarist and neo-classical theories, including real business cycle and DSGE models.

The ‘Basel’ approach to bank regulation focuses on regulation of capital adequacy. Werner (2010a) has argued that this is based on economic theories that do not feature a special role for banks. For an overview and critique, see Werner (2012).

One older attempt that has stood up to the test of time is Werner (1997).

See, for instance, the first BCBS Working Paper (BCBS, 1999), looking back on the first decade of experience with Basel I for insights into the thinking of the Basel bank regulators. In a section headlined ‘Do fixed minimum capital requirements create credit crunches affecting the real economy?’, the authors argue: “It would in fact be strange if fixed minimum capital requirements did not bite in some periods, thereby constraining the banks, given that the purpose of bank [capital] requirements is to limit the amount of risk that can be taken relative to capital. However, for this to have an effect on output, it would have to be true that any shortfall in bank lending was not fully made up through lending by other intermediaries or by access to securities markets.” This statement presupposes that the financial intermediation theory holds. If banks are the creators of the money supply, and in this role unique and different from non-bank financial intermediaries, as the other two hypotheses maintain, then a reduction in bank credit (creation) must have effects that non-bank financial intermediaries cannot compensate for.
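The constraint the BCBS authors describe can be put in one line of arithmetic: with a fixed minimum capital ratio, the capital base caps the risk-weighted assets (and hence the lending) a bank may hold. A minimal numeric sketch with hypothetical figures; the 8% floor echoes the Basel I standard, and the helper name is illustrative, not taken from the BCBS paper:

```python
# Illustrative sketch: how a fixed minimum capital ratio caps bank lending.
# All figures are hypothetical; 8% echoes the Basel I minimum.

def max_risk_weighted_assets(capital, min_capital_ratio=0.08):
    """Largest risk-weighted asset book the capital base can support."""
    return capital / min_capital_ratio

capital = 10.0  # bank capital, in billions (hypothetical)
rwa_limit = max_risk_weighted_assets(capital)
print(round(rwa_limit, 2))  # 125.0 - lending capacity before the floor binds

# A loss that erodes capital shrinks permissible lending multiplicatively:
rwa_after_loss = max_risk_weighted_assets(capital - 2.0)
print(round(rwa_limit - rwa_after_loss, 2))  # 25.0 - forced contraction in RWA
```

The second figure shows why the requirement can ‘bite’: a 2-unit hit to capital forces a 25-unit contraction in permissible risk-weighted assets, unless, as the BCBS statement presupposes, other intermediaries or securities markets fill the gap.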

See, for instance, Werner, 2005, Werner, 2010a.

As seen in the work of the Independent Commission on Banking (ICB, 2011), also known as the Vickers Commission. For contributions to the consultation of the ICB, see, for instance, Werner (2010b). The recommendations therein, especially the recommendation to discard the financial intermediation theory, were not heeded.

The practice of issuance of promissory notes by commercial banks has continued for far longer in Scotland and Northern Ireland — namely until today. This did not seem, however, to result in a sizeable literature on bank money creation in the UK throughout the 20th century.

Referring to the issuance of bank notes that circulate as paper money, Smith comments “The banks, when their customers apply to them for money, generally advance it to them in their own promissory notes” (p. 242). … “It is chiefly by discounting bills of exchange, that is, by advancing money upon them before they are due, that the greater part of banks and bankers issue their promissory notes. … The banker, who advances to the merchant whose bill he discounts, not gold and silver, but his own promissory notes, has the advantage of being able to discount to a greater amount by the whole value of his promissory notes, which he finds, by experience, are commonly in circulation. He is thereby enabled to make his clear gain of interest on so much a larger sum” (Smith, 1776, p. 241).

“Jeder Provinzialbanquier strebt dahin, sein Privatgeld zum Nationalgelde zu erheben: er strebt nach der größtmöglichen und möglichst allgemeinen Umsetzbarkeit seines Privatgeldes. Es ist in England nicht bloß die Regierung, welche Geld macht, sondern die Bank von England, jede Privatbank, ja jede einzelne Haushaltung (ohne gerade bestimmte Noten auszugeben, aber, in wie fern sie sich an eine bestimmte Bank thätig anschließt) helfen das Geld machen” (Müller, 1816, p. 240).

“Sobald die Regierung also die Geldzeichen mechanisch vermehrt, ohne in demselben Maaße jene andern Organe, denen die Vortheile der Geldvermehrung nur indirekt zu gute kommen, zu stärken, ohne um so kräftiger und gerechter das Ganze zu umfassen, so überträgt sie im Grunde nur das Privilegium der Gelderzeugung, das sie im Nahmen des Ganzen ausübt, auf ein einzelnes Organ. … sollte sie [die Regierung] also ihr Privilegium der Gelderzeugung nicht bloß aufheben, sondern das bisher erzeugte Geld zurück nehmen, so gibt sie damit nur dem Privatcredit, das heißt, dem verwöhnten verderbten Privatcredit, oder dem Wucher die förmliche Befugniß in die Hände, die Lücken zu ergänzen, selbst Geldmarken zu machen, und somit seinen verderblichen und vernichtenden Einfluß auf das Ganze nun erst recht zu äußern” (Müller, 1816, p. 305).

There is also another group of writers who to some extent agree with this description, but one way or another downplay its role or importance in practice. In terms of the history of economic thought, it can be said that this latter group laid the groundwork for, and were the founding fathers of, the fractional reserve theory. To the extent that they recognise the creation of credit by banks out of nothing under certain circumstances, one might argue that they could be classified as supporters of either the credit creation theory or the fractional reserve theory; to minimise confusion, however, they are included here in the latter theory, on account of the impact their work has had in its common interpretation and of their emphasis on reserves as a key mechanism.

An Inn of Court with the status of a local authority, inside the territory of the City of London Corporation.

This paper was read by Wicksell in London in the Economic Section of the British Association in 1906, and it is recorded in the Economic Journal that Palgrave and Edgeworth commented on it. There is no mention of any objections to the claims about the ability of banks to create money out of nothing.

“Since, then, variations in the quantity of currency have these widespread effects, it is a matter which bankers have to consider seriously, how far it is possible from them to apply some scientific regulation to the volume of currency, and whether it is possible to modify the evils that follow from wide fluctuations in prices by some such regulation” (p. 55). For a more recent application and more precise formulation of this principle, see Werner’s Quantity Theory of Credit (Werner, 1992, Werner, 1997, Werner, 2005, Werner, 2012).

“… the most important of the modern forms of currency, namely the cheque, is, in effect, manufactured for the use of its customers by banks; and, further, that since the volume of currency has an important effect upon raising prices, the extent to which currency is thus created is a responsibility which has to be seriously considered by those who work the financial machine. This manufacture of currency is worked through the granting of credit, and credit may thus be defined, for the purposes of this inquiry, as the process by which finance makes currency for its customers. As we saw in the last chapter, deposits, which are potential currency as they carry with them the right to draw a cheque, are produced largely through the loans, discounts and investments made by bankers” (p. 63).

“The creation of credit is thus seen clearly to result in the manufacture of currency whenever the banks buy bills of exchange … or make an advance …. In either case the banks give somebody the right to draw cheques. … When a bank makes an advance to a stock broker the result is exactly the same …. The same result, in rather a different form, happens when a bank makes investments on its own account. … There has thus been, in each case, an increase in deposits through the operation of the bank in lending, discounting, or investing. If we can imagine all the banks suddenly selling all their investments and bills of exchange and calling in all their advances, the process could only be brought about by the cancelling of deposits, their own and one another’s” (p. 72).

“Etwas Ähnliches wie eine Bescheinigung künftiger Produkte oder wie die Verleihung von Zahlkraft an die Versprechungen des Unternehmers gibt es nun wirklich. Das ist der Dienst, den der Bankier dem Unternehmer erweist und um den sich der Unternehmer an den Bankier wendet. … so wäre er nicht Zwischenhändler, sondern Produzent von Kredit, d.h. er würde die Kaufkraft, die er dem Unternehmer leiht, selbst schaffen …. Man könnte ohne große Sünde sagen, daß der Bankier Geld schaffe” (S. 197). Translated by author.

“Die fiktive Bescheinigung von Produkten, die die Kreditzahlungsmittel sozusagen ursprünglich darstellten, ist zur Wahrheit geworden” (Schumpeter, 1912, S. 223). Translated by author.

For instance, Moeller (1925) states that “In the modern monetary system the creation of new paper or bank accounting currency (‘Buchungsgeld’, or ‘bank book money’) is primarily in the hands of the banks. … For the deposit money the same largely applies as for paper money …” (pp. 177 ff.).

“Jeder Kredit der gegeben wird, erzeugt seinerseits ein Deposit und damit die Mittel zu seiner Unterbringung. … Die Folgerung aus dem skizzierten Vorgang kann man auch umgekehrt ausrücken, indem man sagt – und dieser Schluß ist ebenso zwingend – , daß jedes irgendwie und irgendwo in der Volkswirtschaft vorhandene Scheck- oder Ueberweisungsguthaben sein Entstehen einer vorausgegangenen Kreditgewährung, einem zuvor eingeräumten Kredit zu verdanken hat” (S. 28). Translated by author.

“Wir behaupten also im Gegensatz zu der gesamten, in dieser Beziehung so gut wie einigen Bank- und Kreditliteratur, daß nicht das Passivgeschäft der Banken, insbesondere das Depositengeschäft das Primäre ist, sondern daß allgemein und in jedem einzelnen Falle ein Aktivgeschäft einer Bank vorangegangen sein muß, um erst das Passivgeschäft einer Bank möglich zu machen und es hervorzurufen: Das Passivgeschäft der Banken ist nichts anderes als ein Reflex vorangegangener Kreditgewährung. Die entgegengesetzte Ansicht beruht auf einer Art optischer Täuschung …” (S. 29). Translated by author.

See, for instance, Phillips (1920, p. 72, p. 119).

This is in line with the credit supply determination view proposed by Werner, 1997, Werner, 2005 and his Quantity Theory of Credit, as opposed to the endogenous credit supply view of many post-Keynesians.

His analysis was based on the “overlooked … pivotal fact that an addition to the usual volume of a bank’s loans tends to result in a loss of reserve for that bank only somewhat less on average than the amount of the additional loans. … Manifold loans are not extended by an individual bank on the basis of a given amount of reserve” (Phillips, 1920, p. 73).

It should be noted here that Phillips' (1920) work can be interpreted in a more differentiated manner. For instance, Phillips also pointed out that if all banks increased their lending at roughly the same pace, each bank would, after all, be able to create credit without losing reserves or cash on balance (pp. 78 ff.). However, subsequent writers citing Phillips usually do not mention this. A more detailed discussion of Phillips is beyond the scope of this paper; it is merely claimed here that Phillips' argument was an important stepping stone towards the formulation of the fractional reserve theory of banking, which is unequivocal in treating individual banks as mere financial intermediaries without the power to create credit or money individually under any circumstances, even though Phillips himself may not have agreed with this in all respects.

In the Introduction, Robertson says: “I have had so many discussions with Mr. J. M. Keynes on the subject matter of chapters V and VI, and have rewritten them so drastically at his suggestion, that I think neither of us now knows how much of the ideas therein contained is his, and how much is mine (p. 5).” (As cited in Keynes, 1930.)

On Paul Tucker’s proposal, see BBC (2013), and also the critique by Werner (2013a). Negative rates on bank reserves at the central bank were actually imposed by the Swedish central bank in 2009, the Danish central bank in 2012 and for the first time by the Swiss central bank in 1978 on deposits by foreign banks.

A closer reading of Alhadeff (1954) shows, though, that the author agreed that, under certain circumstances, banks can create credit and money: “In certain cases, the proportion between the legal reserve ratio and residual deposits is such that even a single bank can expand its deposits to a somewhat greater amount than its primary deposits. … Again, it might be possible for a very large bank, or a bank in an isolated community with few business connections with outside banks, literally to create money because of flow back deposits. [Footnote: ‘Flow-back deposits refer to the circulation of deposits among the depositors of the same bank.’] In either case, this amounts to a partial reduction in the average cost of producing credit (making loans), at least in terms of the raw material costs …” (Alhadeff, 1954, p. 7). Although Alhadeff, if studied closely, could thus be said to have agreed that an individual bank can create credit out of nothing, he clearly regarded this as a special case without practical relevance; normally, in his view, it is only the banking system in aggregate that creates credit.

Moreover, the original Samuelson (1948: 331) offered an important (even though not prominently displayed) section headed ‘Simultaneous expansion or contraction by all banks’, which provided the caveat that each individual bank could, after all, create deposits, if only all banks did the same at the same rate (thus outflows being on balance cancelled by inflows, as Alhadeff, 1954, also mentioned). There is no such reference in the modern, ‘up-to-date’ textbook.
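The single-bank versus whole-system distinction running through this and the preceding footnotes can be sketched with the textbook fractional reserve arithmetic. A minimal illustration with hypothetical numbers; the function names are mine, not Phillips', Alhadeff's or Samuelson's:

```python
# Textbook fractional reserve arithmetic, with hypothetical numbers.

def system_deposit_multiplier(reserve_ratio):
    """Aggregate deposits created per unit of new reserves once the
    expansion has worked its way through the whole banking system."""
    return 1.0 / reserve_ratio

def single_bank_loanable(new_deposit, reserve_ratio):
    """What one bank that expects to lose the loan proceeds to other
    banks can lend from a new deposit: the deposit less required reserves."""
    return new_deposit * (1.0 - reserve_ratio)

r = 0.10  # hypothetical required reserve ratio
print(round(system_deposit_multiplier(r), 2))    # 10.0 - system-wide
print(round(single_bank_loanable(100.0, r), 2))  # 90.0 - one bank acting alone
```

On these numbers a lone bank can relend only 90 of a 100 deposit, while the system as a whole multiplies reserves tenfold; Samuelson's caveat is that when all banks expand in step, each bank's outflows return as inflows, so even an individual bank expands like the system.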

The conclusion of Tobin’s paper: “According to this approach, the principal way in which financial policies and events affect aggregate demand is by changing the valuations of physical assets relative to their replacement costs. Monetary policies can accomplish such changes, but other exogenous events can too. In addition to the exogenous variables explicitly listed in the illustrative models, changes can occur, and undoubtedly do, in the portfolio preferences – asset demand functions – of the public, the banks, and other sectors. These preferences are based on expectations, estimates of risk, attitudes towards risk, and a host of other factors. In this complex situation, it is not to be expected that the essential impact of monetary policies and other financial events will be easy to measure in the absence of direct observation of the relevant variables (q in the models). There is no reason to think that the impact will be captured in any single exogenous or intermediate variable, whether it is a monetary stock or a market interest rate” (Tobin, 1969, p. 29).

This also means that the innumerable PhD theses and Masters dissertations produced in this area in the last thirty years or so are mainly based on the financial intermediation theory. For instance, Wolfe (1997) states: “Banks possess the power of intermediation, which is the ability to transform deposits into loans. Deposits with one set of characteristics are transformed into assets with other or different characteristics” (p. 12).

See Werner (2003b) for a detailed critique of Kashyap (2002).

Though with the caveat that several of his statements, made at the same time, seem to support different theories of banking.

It is of interest that the last step expressly requires the bank staff implementing this credit procedure to pay out the loan only for the agreed purpose; as evidence, a receipt for any purchases undertaken with the loan funds is demanded by the bank. This demonstrates that the implementation of policies of credit guidance by purpose of the loan is practically possible, since such data is available and the use of the loan is monitored and enforced by each bank.

Thanks to Charlie Haswell for the ‘fairy dust’ allegory.

“There is a common element in the theories of nearly all monetary heretics. Their theories of Money and Credit are alike in supposing that in some way the banks can furnish all the real resources which manufacture and trade can reasonably require without real cost to anyone …. For they argue thus. Money (meaning loans) is the life-blood of industry. If money (in this sense) is available in sufficient quantity and on easy terms, we shall have no difficulty in employing to the full the entire available supply of the factors of production. For the individual trader or manufacturer “bank credit” means “working capital”; a loan from his bank furnishes him with the means to pay wages, to buy materials and to carry stocks. If, therefore, sufficient bank credit was freely available, there need never be unemployment. Why then, he asks, if the banks can create credit, should they refuse any reasonable request for it? And why should they charge a fee for what costs them little or nothing? … There can only be one answer: the bankers, having a monopoly of magic, exercise their powers sparingly in order to raise the price. … Where magic is at work, the public do not get the full benefit unless it is nationalised. Our heretic admits, indeed, that we must take care to avoid “inflation”; but that only occurs when credit is created which does not correspond to any productive process. To create credit to meet a genuine demand for working capital can never be inflationary; for such a credit is “self-liquidating” and is automatically paid off when the process of production is finished. … If the creation of credit is strictly confined within these limits, there can never be inflation. Further, there is no reason for making any charge for such credit beyond what is required to meet bad debts and the expense of administration. Not a week, perhaps not a day or an hour, goes by in which some well-wisher of mankind does not suddenly see the light — that here is the key to Utopia” (vol. 2, pp. 217 ff.).

Copyright © 2014 Published by Elsevier Inc.


Thread by @morganknutson: “Now that Google+ has been shuttered, I should air my dirty laundry on how awful the project and exec team was. I’m still pissed about the ba […]”
9 days ago, 149 tweets, 23 min read 22,896 views
Now that Google+ has been shuttered, I should air my dirty laundry on how awful the project and exec team was.

I’m still pissed about the bait and switch they pulled by telling me I’d be working on Chrome, then putting me on this god forsaken piece of shit on day one.
This will be a super slow burn that goes back many years. I’ll continue to add to over the next couple of days. I’ll preface it with a bunch of backstory and explain what I had left behind, which made me more unhappy about the culture I had come into.
I spent most of my early career working for two radical sister non-profit orgs. I was the only designer working on
anywhere from 4-5 different products at the same time. All centered around activism and used by millions of people.
It’s how I cut my teeth. Learned to be the designer that I am today. Most importantly, the people I worked for are imho some of the greatest people on the planet. Highly intelligent, empathetic, caring, and true role models for a young me. I adore them.
You might not know who they are, but if you’re reading this then you have definitely seen their work. Maybe OpenCongress, or Miro, or maybe Amara which is Vimeo’s partner transcription service. Definitely Fight for the Future, our internet defenders, which was shortly after me.
I married the love of my life in 2008, started a family, and at some point realized that I simply needed to make a better living. No matter how prolific, non-profits usually can’t provide the type of income that you need for a growing family with huge ambitions.
So as I gained visibility – via @dribbble – I started to field recruiters and consider new opportunities. Mostly little startups. I interviewed at one (Rockmelt) and they passed on me (hi, @iamxande 🤗).
Got an email from Kickstarter (hi, @amotion 🤗). Schlepped to New York and wasted days of time to be passed on by their founders. Then they unfollowed me on twitter. At least I ate some deli. 😂
Then Google reached out. I remember that “holy shit” moment. “Me!? Are they kidding?? The schmuck who tested out of high school and dropped out of college??” They told me I’d interview to work on Chrome. I was over the moon. I remember Manda tearing up. God I love her.
They gave me a little bit of time for a design exercise. You can see it here in all its dated glory: Click and hold for the overlay. More schlepping from LA and an interview at their silly college-like campus. I was a nervous wreck.
The process felt very haphazard. At one point a front-end dev with a bow-tie grilled me on CSS and asked some super dumb questions. My advocate (a sweetheart named Peter) seemed to be rushing people through, quelling their fears. I still appreciate his belief in me to this day.
I felt like I had done ok. The last two interviews that I failed at were real shots to the heart. I took this one incredibly seriously. I wanted this job so badly. I wanted to prove I was worthy.
Weeks went by and I heard nothing. I accepted the inevitable and started responding to other recruiters. It was ok. I wasn’t joining the big leagues. I could play triple-A ball for longer. As long as I got up to SF where the opportunities were.
I took a gig with a failing news startup (lol) called Ongo (hi, @bethdean 🤗). They got me up here. I guess it was a bit of a Hail Mary for them. In a couple of months I knocked out more work than they could have built in a year with their eng team. Then…
Google got back in touch almost 3-4 months after the interview (who does this??).

I got the job.

To be continued…
Day one was so weird. It’s exactly what I’d imagined a freshman orientation at a prestigious college would be like. I dropped out of art school, so this was foreign to me.
I was with a big group of “nooglers” (so lame). We were led into a large room that looked like it was set up for a time-share pitch. I found my seat. Sat down and read the paper that was in front of me.
There was some kind of codename for the team I’d landed on that I don’t recall.

I was told it meant I’d be working on Google+.

Fuck. “Whatever”, I thought, “I’ll just do my best and move to Chrome or something cooler after a while” Heh, so naive.
I later found out that one of the interviewers who I had liked the most – let’s call him Chuck – was the design manager on Plus and that he had fought to bring me into his team. Bittersweet.
Chuck was a sweet guy. A bit of an OG at Google. Super chill, super kind, and really funny. Seemingly non-political. My kind of manager. We’ll talk about him more later.
At some point I was given a badge and shown around the building I’d be working in. This was the first indication for me that something was awry.

Aside: The building design could only be described as kitsch. Goofy colored furniture. A slide. Crap…everywhere.
Google+ was situated in THE main building. 1900. A floor away from Larry’s office (CEO). If you were one of the 12,000+ people at google in MV who didn’t work on Plus, then you didn’t have access to these floors.
The CEO didn’t just have an office. The entire floor was his. We all had access to it and were encouraged to use it sparingly. A “war room” here and there.

We had access to “his” cafe too. A super fancy vegan cafe called “cloud” that wouldn’t be sustainable in the real world.
Why this exclusivity? What made this project so special? Why was it held so closely to Google’s chest? I’d find out later that the SVP of Plus used his clout to swing all of this.

His name was Vic Gundotra.
He was relatively charismatic. I remember him frequently flirting with the women on the team. Gave me a compounded horrible impression of him.
My desk was directly next to Vic’s glass-walled office. He would walk by my desk dozens of times during the day. He could see my screen from his desk.
During the 8 months I was there, culminating in me leading the redesign of his product, Vic didn’t say a word to me. No hello. No goodbye, or thanks for staying late. No handshake. No eye contact.
Vic’s product vision was fear-based. “Google built the knowledge graph, and Facebook swooped in and built the social graph. If we don’t own the social graph then we can’t claim to have indexed ALL the world’s data.”
It made sense at the time. That was a valuable dataset that Google would never be able to leverage.
Vic was powerful at Google. He had buy-in from the top and he wielded that stick aggressively. He made Plus as pervasive as he could. Each product org had a mandate to integrate its social features.
If your team, say on Gmail or Android, was to integrate Google+’s features then your team would be awarded a 1.5-3x multiplier on top of your yearly bonus. Your bonus was already something like 15% of your salary.
You read that correctly. A fuck ton of money to ruin the product you were building with bloated garbage that no one wanted 😂 No one really liked this. People drank the kool-aid though, but mostly because it was green and made of paper.
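To make the “fuck ton of money” concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the base salary ($115k) and ~15% bonus rate mentioned elsewhere in the thread; the assumption (not spelled out in the thread) is that the 1.5–3x multiplier applies straight on top of the standard bonus.

```python
def integration_bonus(salary: float, bonus_rate: float = 0.15,
                      multiplier: float = 1.0) -> float:
    """Yearly bonus with an optional Google+ integration multiplier applied."""
    return salary * bonus_rate * multiplier

# Baseline bonus vs. the stated 1.5x-3x range, at a $115k salary.
baseline = integration_bonus(115_000)                  # $17,250
low_end = integration_bonus(115_000, multiplier=1.5)   # $25,875
high_end = integration_bonus(115_000, multiplier=3.0)  # $51,750

print(f"baseline: ${baseline:,.0f}; with multiplier: ${low_end:,.0f}-${high_end:,.0f}")
```

Even under these rough assumptions, the multiplier alone could add tens of thousands of dollars per employee per year, which explains why product teams bolted Plus features on regardless of fit.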
This made Plus the center of the Google universe and made Vic feel invincible, I presume. Once, I had to hold back laughter after he announced his “brilliant” idea to redesign the product from the ground up…every 6 months. lol
Vic left the company in 2014. Maybe because of this type of thing? Hard to say.

To be continued…

A former Google employee claims she was reprimanded for speaking out about sexual harassment
“‘He feels like you humiliated him in front of his reports.’ Something HR actually f—ing said to me.”
Google, like many companies, has different tracks and levels for different disciplines. They of course asked me how much I made in my previous role. I made substantially below market rate, but *amazing* for non-profits.
They low-balled me. My offer was $115k a year with $100k in stock vesting over 4 years. It was way more than I’d ever made, but still below market rate. I accepted with no negotiating. My title was UI Designer Level II. Also low.
Aside: Never do this. Always negotiate. Never tell a prospective employer how much you currently make. Tell them what you want. Their goal is to save money and yours is to make it. Your best interest is not theirs.
On my second day, I found out that I was sitting next to another designer and I was so stoked!

I had been solo & remote for 3+ years with the non-profits since I had left my second job, a little agency in LA.

(still reminisce about the old days, @WesOHaire?🤗)
I introduced myself.

This was their first job out of an Ivy League. They were one level below me. They were working on a tiny sliver of a sidebar tucked away on an internal page of Plus. Games, or something. “Inconsequential” is a good description.
My first thought was “wtf? how is this a job? they pay you for this??”

I was kind, of course, and let them know if they needed any help to just let me know.

Now my second indication that Google wasn’t what I expected.
Thought this was the pros.

Never would’ve imagined that I was joining a team of 50+ designers where a bunch of them had never designed before.

And I was “evaluated” at *about* their level? These weren’t interns, these were designers in their very first roles ever…at Google.
I was first placed on the Google Photos team which had been swallowed up by Plus. It had some seriously good front-end devs (and good people). It was a small team within a very large team.
My first project was to redesign the photos lightbox. I introduced some new and basic patterns, and drew all the iconography in the given aesthetic. Made some prototypes. Eng started building. Non-controversial.
This was a matter of weeks. But…now what? I didn’t have anything else lined up in the sprint.

My desk mate was still cranking away on their little area.
Well, I didn’t know the rest of the team at all, so I figured I’d go around meeting folks and offer to help with anything they needed.

I’d grab a seat and draw some ui, or an icon, or rattle off thoughts. Whatever they needed.
I got to know a few people, but most importantly I got to know what they were working on, and it wasn’t pretty.

Everything being produced felt disjointed or siloed. Not part of the whole. The M.O. was build and copy as much shit as possible. “Win the race.”
There was a distinct lack of a grand vision.
None of it had been made with the consideration of all the products in the Google ecosystem. Just a bunch of “UX designers” not caring about the actual customer experience. Just focusing on their silos because that’s how you complete tasks and play the game.
It’s now November and I’m tasked with designing the opt-in UI, and parts of the functional UI for facial recognition in photos. That was about a week worth of work.

FB copied some of my visuals on this, but whatevs our whole platform was a ripoff of theirs. 😆😅
I went back to knocking things out for other people.

Designed some community branding, did a sweater design for SXSW, drew some visuals for people. The entire time I was also noodling about the disparate stuff I was seeing.
I think it was around this time that Chuck, my manager who wouldn’t micro-manage but pumped you up and encouraged you to shoot for the moon (I truly liked him), got replaced by a guy that he was managing. An awfully bad designer with a love for bureaucracy.
Let’s call him Greg because his real name is just as vanilla.

He was a smarmy, politically motivated little fella who had no intentions of ever leaving Google. He told me that. I didn’t like him from the moment I met him and the feeling was mutual.
He was now my manager.

I knew it wouldn’t go well.
Half the team was out of office beginning early to mid-Dec.

I took the standard vacation time towards the end of the month, except I didn’t really vacation, I worked through all of it.

I had a vision.

To be continued…
The common thread between 99.8% of the people that I interacted with at Google is that they were ethical, highly intelligent, and hard working.

I had a lot of admiration and respect for many of them and wish more of them had stayed in touch.
The design organization was massive and spanning almost every product, of course. (Some products didn’t have designers) It felt like design had many rival factions split by not just certain product orgs, but between skill and schools of thought.
There was the “creative team” which was a service team that would create icons or other assets based on the outdated style guides. You’d have to submit a form and wait for them. They were a bit surly.
Then there was a team that was iterating on Kennedy, the name for the new Google UI style. It was as if they were on a mountain, secluded. This is the style that Gmail and Calendar have had for years, now replaced with some Frankenstein version of Material.
The craziest thing about this team is that one of the most pivotal players was a…contractor. I was blown away when I found this out.
Anyhow, it wasn’t super clear what other teams were doing. It wasn’t clear who I should have been collaborating with to achieve my loftier goals of affecting all of the UI patterns at Google. Nothing was ever clear.
Which is weird because Google seemed to be very transparent. I mean, you could pop open a web page and see every detail about every employee, from their level to their desk location.
Back at home, during my break, I was cranking away. Playing with my baby girl in between trying to rationalize everything going on in the project. Real Time Communications (RTC) was hugely important to the team. Hangouts came from Plus.
Greg was managing that team originally. Part of why he was respected. Hangouts was the only good thing about this shitty product. It was sort of at odds with the current RTC paradigms. Chat Moles.
Moles were the name for the little chat boxes that pop up from the bottom of gmail, enabling you to chat with someone. They’re an effective UI paradigm.
None of this stuff was tightly integrated. More of a layer on top of everything. I wanted to change that. This was Plus when I joined. Lots of sections. Lots of junk. Bad navigation. Left-aligned content.

I wanted to make RTC a first-class citizen. I designed a responsive layout that would give you a list of friends on the right, and if your screen was large enough, you’d also get all of your message threads. Almost identical UI to messages on OSX (slightly before it existed).
That top gray bar was called the Sandbar. Just kind of slotted in there. I integrated it with a new navigation on the left side with a rounded edge. Drew a new set of icons that fit with an overall polished aesthetic. I put a lot of attention into crafting their various states.
The whole vision was to integrate google’s other apps into this sidebar navigation. Obviously it’s a bit of a technical challenge, but this would have been a viable UI framework to work towards.
No separate websites for each product e.g. gmail, calendar, plus, just one place to go with everything seamlessly integrated. I also put a new coat of paint on literally everything. Polished it up nicely.
It’s now January of 2012. I’m back in the office. I have a little something up my sleeve. I show it around to various folks and people seem to like it. Awesome.
Even Greg reluctantly liked it. Someone (maybe him) told me I needed to talk to another guy to get buy-in on my chat plans. He was the exec that was responsible for it.

I found him and was able to give him a quick elevator pitch on what I was going for with RTC.
He said “Haha, there’s no way we’re doing that. I/O is coming up and we’ve just spent the last 4 months building a Chrome extension that does something similar. If Plus has this, then we’re going to get laughed off the stage.”
I was like, “…what? But shouldn’t these all work together in a sort of ‘your conversations are available wherever you are’ kind of way?”

“No, we’re not doing that.”
I was floored. Did he say chrome extension? Did he even understand what I had explained and showed him? Was he a huge fucking idiot? Sigh. Whatever, I pushed forward.
Sitting behind me was the most badass group of people on the team. The User Experience Engineering (UXE) team. They made things come to life. Wielded front-end code like no other. I showed them my work and they were legitimately stoked.
This team was led by the inimitable Andy Hertzfeld. A god damned saint. Honestly, working with Andy was the highlight of my short tenure. He seemed genuinely sad the day I told him I was leaving. I was too.
Anyhow, one guy on the team offered to build a prototype. His name was Chris, and without him this would have never gotten off the ground. He was incredible. Kind and generous, and he put in his all.
We collaborated for a couple of weeks and when he was done, you could replace your hash (unique identifier for your account) in the URL and use the prototype with your data. It was superb.
Greg used it to pitch the redesign to Vic. Made me feel shut out and like a disposable employee due to the fact that I wasn’t involved. What was said in that meeting? Was it pitched right? Was the whole vision conveyed? So much anxiety.
It didn’t matter. Vic bought in.

The whole team would rally around my work.

To be…

Just kidding I’ll keep going. 😂
Everything at google has to have a code name in order to be taken seriously. I had huge ambitions for this work and the paradigms it introduced. I wanted it to be the North Star. Arrogant right? 😆
I told a handful of people I wanted to call it North Star while trying to gauge their reaction. The consensus was pretty much “yeah, I guess.”

Greg tells me a couple days later “Vic wants to call it North Star. He thinks it will be foundational.”

🤦‍♂️ OK.
I quickly designed a logo for the project. It was essentially a compass rose. I printed a huge version and stuck it on the designated war room. I’d put my flag in the ground.
The team huddled together and started phasing the project. Areas of the product were divvied up, people took ownership, everyone worked in tandem. I was the go-to for any questions and direction. I was responsible for finishing the visuals that I had started.
One of the other designers on the team – we’ll call him Jim – had worked on Plus from its inception. He was one of the better visual designers on the team. He had done most of the icon work on the project. He was understandably proud and a bit protective.
He’s a timid and generally kind person (running theme). But also not sure of himself. Not confident in his work. Also understandable considering it wasn’t the best. Often times he seemed anxiety stricken.
I made him a bit uneasy whenever I was around him. I could tell, and I did my damndest to be sweet to him to try to make him feel comfortable. I wanted him to like me. I want everyone to like me. It’s a problem.
Aside: I was tasked earlier with redesigning the +1 button. I absolutely hated the Plus logo. It was compositionally unbalanced, and the rationale was ridiculous. “The + hangs off the edge to signify that there’s more that’s unseen.”
Aside: Nah man, it just looks like an amateur made it. I fixed the composition. Centered the “+” on the “g”. Jim and another designer fought me hard on it. I relented. I made the dumbass button with their bad logo.
Aside: They shipped my version of the logo after I left. It looked way better. They didn’t care about what was better. They just wanted THEIR work to be used, or to be able to take credit for it. I hate that weak ass shit.
Anyhow, my wholesale redesign of all of Jim’s work was obviously making him feel bad.
One day, he came up to me totally flustered in one of the micro-kitchens. He said “People aren’t liking these icons.” I said, “Oh, ok, let me know who and I’ll collect their feedback and we can make them better.”
Jim: “Just lots of people.”

Me: ??? “I’m going to need to know who, or at least what the feedback is…”

Jim: “They’re just not working and a lot of people are saying that.”

Me: “Dude, what? How am I supposed to work with this?”

Jim: “I don’t know I…”
I interrupted. I had a feeling it was actually just him and another gal that considered herself a master visual designer. They were both frustrated that I had come in and basically taken over. I got it.
Me: “OK, no worries man. Would it make you feel better if I put something together that explains my decision making and then you and whomever else can punch holes in it, and give me direct feedback?”

Jim: “Yeah, ok, sounds good. I’ll put something on the calendar.”
9am the next morning. A dick move.
I went home and got to work.

In the early evening I got a call from my dad. My grandmother’s health took a turn for the worse. They weren’t sure she’d make it past the evening.
I couldn’t grieve. I needed to make this happen. These two designers were beloved by Greg. I had to win them over or they’d screw everything up for me. Everything I’d worked for could come crashing down due to their pettiness.
I forged ahead. At about 9 or 10 I checked Twitter. It just so happened that there was a party going on. A bunch of people from Facebook and Path were there. Competitors to Plus, obviously.
So was Jim. Jim tweeted about having so much fun with all these Path and Facebook people.

I was working my ass off, stifling the grief of my dying grandmother, all because of his passive aggressive scheduling. And he was out with our competitors??

Angry is an understatement.
She passed away at about 11:30pm. Fuck, man 😞 I lost it. I couldn’t hold it back anymore. She was gone. I wasn’t able to tell her I loved her one last time. I’d never see her again.

I finished working through tears. I got it done and fell asleep at about 4am.
When I woke up the next morning I had an email from Jim. “Hey Morgan, I’m really sorry, I’m dealing with a headache and won’t be able to make it in. Can we reschedule for tomorrow?”

I took some deep breaths. I contemplated how I should react. I wanted to physically fight him.

“No worries Jim, let’s do it tomorrow.”

I had to do something about this. This was totally unacceptable to me. I forwarded the email to Greg, like the idiot that I am.
“Greg, I had to work most of the night because of Jim, and he canceled our meeting because he was partying with our competitors. In my book, this is totally unacceptable. What am I supposed to do?”

No response. On my commute in, I got a notification of a meeting.
Title: Morgan
Time: 5pm
Location: Greg’s Desk
What? Am I being fired? There’s never a time when a room is not booked for a meeting. This felt like I was in trouble over something. Like a grade-schooler.
5pm rolled around and I schlepped to Greg’s desk. He gave me his usual smug smile and his manufactured calmness and pointed to an empty room near him.
Greg: “So what happened?”

I told him. He patronized me with his stupid fucking nods and his shit-eating grin.

Greg: “So I know that you have big ambitions here. I know you think you might want to manage this team.”

I never said that shit.
Greg: “Well, it’s not going to go as you planned. In fact, I think I’m going to make Jim your manager after we launch.”

Me: “Uhhhh, ok? But why aren’t you addressing what I told you? You’re making me feel like I don’t belong here.”

Greg: “I’m not sure you do.”
Me: “Soooo, are you firing me?”

He laughed and said, “No, but you can go home now.”

What. The. Fuck.
Google was completely and utterly ruined for me after this moment. I became very depressed. I didn’t want to go in to the office. It was clear that I was not welcome.

To be continued…
Aside: Some people might be turned off by my use of profanity. That’s ok. This may not be for you. Admittedly, I get passionate when recounting this story.

I may sound petty and bitter when referring to Greg and Jim. I am. I’m a human and I wear my heart on my sleeve.
Aside: Some folks have misread why I mentioned the co’s that passed on me earlier in my career. This was to show my journey to Google. The people I mentioned are my friends to this day. Not shameful call outs.
I still went in. I’m not a TOTAL piece of shit. My heart was no longer in it though. My work suffered. My relationships suffered. A vicious cycle.

People could tell I wasn’t happy. I could tell they weren’t happy with me anymore. I’d try to see the project through to launch.
I mentioned Andy Hertzfeld earlier. He really liked the design. We, and Chris and another guy named Matt, collaborated heavily on a bunch of the micro-interactions. These guys were so good.
I mentioned the rounded corner that I added to the Sandbar. Well, I also added a “beak” to point to the active navigation item. Pretty standard stuff.
Andy spent 3 days writing custom canvas animations to make it so that the beak would slide up and around that corner to point at the search box when you clicked into it.
It was gorgeous. It made me feel good to work with people that cared as much as I did about the minutia.

These fleeting moments kept me from quitting each day.
Back to Jim. We eventually had that meeting. Sure enough it was just him and the other person who I had expected. Let’s call her Jane.

I gave my whole spiel and rattled through the deck I had made. Took about 10 minutes.
He said, “ok, now Jane has some stuff to show us.”

No acknowledgement of anything I’d just presented.
She pulled out a deck and started walking us through it. It was basically a series of images of products from Muji. She explained how she liked these forms.

I said, “Ok, I’m having trouble seeing how this translates, but I’d love to see what you can do with it.”
She did nothing.

I moved on and continued to do my work.
Google+ was such a massive waste of resources. For example, every person at Google gets a corporate card.

The entire design team was given a $500 allowance to buy any device they wanted. 🤦‍♂️
At one point I bought some shit I shouldn’t have. I just didn’t care anymore. Greg brought me to HR and tried to have me fired for it. HR was like, “Uhh, I think he understands not to do that again.” 😆
To get the redesign done, many more engineers were onboarded to the team. It didn’t even matter what type of experience they had.

The conventional thinking was that “If they made it through the hiring process, then they can figure anything out.” This isn’t true.
An engineer was tasked with building out the new “share box.” He was an infrastructure engineer. Terrible mismanagement of resources. He never should have been put on this task.
I felt badly for him. I sat next to him and wrote the CSS in a chat window, which he would then add to the app. A highly dysfunctional way of writing code.
The marketing team was stellar! They would make these beautiful animated shorts that would show off how to use new features.

I was a huge fan of the “Dear Sophie” ad. Makes me cry every time I see it.
I wanted our marketing team to be set up for success. They needed the entirety of the UI recreated in Illustrator.

No one else on the team could do it, so I did. It was a trial. Took so much effort. Didn’t matter in the scheme of things.
The redesign of Google+ launched. People thought it was pretty good. I was thankful that it wasn’t panned.

Now I needed to figure out what was next. I wasn’t planning on leaving. Google is a massive company and it’s relatively trivial to switch teams.
The way you do that is by sending an email to a google group and various teams will reach out if they’re interested. I drafted a very brief email.

“Hi all, I’m looking to join another team. Thanks for your consideration.”
I held it in my drafts for a couple of days. Everyone on the team would see it. I felt really bad. I didn’t want to make everyone else feel bad too.

I sent it. It did make people feel bad 😔
I got an offer to interview for the Fiber TV team. I met them and I was blown away. They had produced a beautiful and usable product, with no designer.

Designing a TV experience has always been a minor fantasy of mine. This team was impressive and I wanted to join them.
A few days later I got an email from the head of product at Dropbox. There were two other designers there. He wanted to know if I wanted to interview. I immediately responded with an emphatic yes.
Another designer at google hit me up around this time and wanted to know if I wanted to join his crack team for a special project. It was to design a system of iOS components for use by all of google. I agreed.
Apparently, other teams were complaining that they didn’t have the resources to build iOS apps. He wanted to solve that.
I interviewed at Dropbox shortly after. It was a pleasant experience. These people weren’t mired in bullshit and politics. They were at the top of their games. I was so stoked to interview.
About a week later I got on a plane to Google NY to collaborate on this special project. Ironically, the person sitting next to me also worked at Google. They didn’t care. Super antisocial.

I thought, “I hate this company so much.”
Showed up to Google NY and joined the other people. It was truly an all-star design team. I was the schlubiest person there.
The engineers helping with the project worked on Drive. During our first meeting we all shared the homework we had done. Mine was on Dropbox. I didn’t like Drive. They asked me to use Drive.
Getting my work into Drive was a shitshow. I felt bad that it failed as they watched. I felt bad that I’d just interviewed with the company that Drive was a carbon copy of.
Halfway through the meeting I got a call. It was the Dropbox recruiter. He said everyone loved me and I got the job. Ughh, what a relief.

I negotiated hard. I got what I felt I was worth. They gave me a signing bonus worth almost as much as the 4-year equity grant from Google.
Aside: Dropbox was one of the best jobs I’ve ever had. They treated me with so much love and respect. I did a massive amount of work for them. Was leading teams. Hired a ton. Was there for years.
Aside: I was on the Forbes 30 under 30 list while at Dropbox. Someone there added me to the list and advocated for me. Superfluous, but it felt amazing.
Aside: After Dropbox I co-founded Shift, “the completely reimagined way to buy or sell a used car” — skip the dealership; peer-to-peer car buying meets certified quality, for thousands less, with the no-obligation test drive brought to you. Shift just raised a $140M Series D. I’m currently working on another startup.
I got back to the Google Mountain View office after working with the guys in NY.

I sent a resignation email to Greg. He forwarded it to HR within seconds.
I sent an email to the company saying goodbye (standard practice). I got hundreds of heartfelt replies from google employees. Felt great and terrible at the same time.
On my way out, Greg tried to chit-chat and shake my hand. It took everything in me not to tell him to go fuck himself.

I walked past his extended hand, and said “Nah, man.”
He said, “Pfftt, really!?”

I turned around and looked him in the eye as I backed out of the door.

“Yeah, really.”

The end –

Related threads
Profile picture
K🇺🇸🇪🇺 #QueenofEngland
6 days ago
The Spaces Between [THREAD] So, I’ve been thinking and agonizing over all this for a good long while. The situation we find ourselves in is causing serious damage. Women are being debased & demeaned. TS, those males we loved & accepted among us are being used by TRAs.
There are oceans of pain underlying this all. Vast oceans. We women, who have been treated as the lesser, the other, the inferior, are under attack by men (TRAs) who claim to be us. They are not us. Transsexuals who wanted to identify with us, now suffer backlash.
Transsexuals, males who have gotten as close as they can to the female form, are not women, but yet they don’t belong in the category of man. They exist in the in between. TRAs have forced a division between us that didn’t need to exist.
Read 11 tweets
Profile picture
Noah Toly
20 days ago
I don’t know what happened at Kavanaugh’s high school and college parties, but I do know this, if you think these accusations are just manufactured in order to prevent his confirmation, you really have to answer this, “Why weren’t accusations manufactured against Gorsuch?”
Maybe you have an answer, but if you don’t or if it’s not a convincing one, maybe you should drop that point. Our ability even to have this conversation is so strained by our collective stupor. Are we actually going to think? Or are we going to be cogs in a propaganda machine?
I’ll explain further what I mean here. The ONLY reason someone could give for people manufacturing allegations against Kavanaugh when they did not do so for Gorsuch is the fact that Kavanaugh’s appointment would move the court further right, bc he’s replacing Kennedy.
Read 6 tweets
Profile picture
Pip Taylor
23 days ago
My Love letter to Christian White men….. I have a message for Christian white men about the lessons that you taught me.
I am going to start with me. White Christian men are really all I know. I am a white middle class suburban girl from a ‘nice’ family, so it was all I was exposed to. I have loved you all my life.
Every significant man in my life was a white man. Very different level of religiosity, but all self-identifying as Christian, at least culturally. I tell you this so you understand – this is not a “my one man friend” I have a lifetime of loving and being loved by men.
Read 25 tweets
Profile picture
2 months ago
I’ve been very vaguely implicated in the most bat crap crazy thread of all time. So here it is:…
My involvement in this was being informed by Mike that Bay and Jagger were catfish accounts. Then liked a couple of tweets making that information public.
Mike was angry it was made public. As you can see by his thread, that’s because he had sent explicit messages to those accounts. Something I didn’t know, but assumed after his completely irrational response to the information being made public.
Read 12 tweets
Profile picture
G ⚡️
3 months ago
I want to share something kind of personal that I don’t really talk about
I have really bad eczema on my arms, hands, chest, and sometimes even on my face. I’ve had it since I was a child and it flares up so much, I’m bringing this up because it’s flaring right now but I’ve realized that this is a part of me. I’ve used so many ointments with steroids
On my skin for over a decade that it’s made some parts of my skin lighter than other parts which is usually where I flare. I used to be so insecure as a kid about showing my hands or my arms. I would wear long sleeves year round to hide myself
Read 8 tweets
Profile picture
Graham Sutherland [Polynomial^DSS]
9 months ago
It was known by Intel, AMD, Google Project Zero, and a few others during the multiparty responsible disclosure process, which took a couple of months. There’s some speculation that Intel’s CEO made his recent share divestment decision based on the disclosure timeline.
Now that it has been disclosed I fully expect others to focus on more microarchitectural vulnerability research on x86, ARM, and other architectures. I think a lot of this was spurred by recent developments in cache side-channel attacks, so we’re likely to see a repeat pattern.
Of course we’re also likely to see refinement of the current exploitation approaches too. These attacks will continue to get better and will probably adapt to work around the latest updates to Intel’s microcode, and new kernel patches.
Read 4 tweets
Trending hashtags
#Manafort #McConnell #BecauseofJesus #vivajacky #FOIA #badumching #mentalheathparity #Putin #VisitTheUSA #president #hospitality #thedonald #Kennedy #Qanon8chan #IIBAWC18 #oligarchs #osint #madeamericagreatagain #FOIAFriday #JosiahsThreads #CPAC2018 #WERKforConsent #TBT #nofilter #psyops
Give feedback
Help | About | TOS | Privacy
Did Thread Reader help you today?
Support us: We are indie developers! Read more about the story
Become a 💎 Premium member ($30.00/year) and get exclusive features!
Too expensive?
Make a small donation instead. Buy us a coffee ($5) or help for the server cost ($10):
Donate with 😘 Paypal or Become a Patron 😍 on
This website uses cookies. Learn more about Thread Reader Privacy Policy

Ken Shirriff’s blog
Xerox Alto restoration, IC reverse engineering, chargers, and whatever

Two bits per transistor: high-density ROM in Intel’s 8087 floating point chip
The 8087 chip provided fast floating point arithmetic for the original IBM PC and became part of the x86 architecture used today. One unusual feature of the 8087 is it contained a multi-level ROM (Read-Only Memory) that stored two bits per transistor, twice as dense as a normal ROM. Instead of storing binary data, each cell in the 8087’s ROM stored one of four different values, which were then decoded into two bits. Because the 8087 required a large ROM for microcode1 and the chip was pushing the limits of how many transistors could fit on a chip, Intel used this special technique to make the ROM fit. In this article, I explain how Intel implemented this multi-level ROM.

Intel introduced the 8087 chip in 1980 to improve floating-point performance on the 8086 and 8088 processors. Since early microprocessors operated only on integers, arithmetic with floating point numbers was slow and transcendental operations such as trig or logarithms were even worse. Adding the 8087 co-processor chip to a system made floating point operations up to 100 times faster. The 8087’s architecture became part of later Intel processors, and the 8087’s instructions (although now obsolete) are still a part of today’s x86 desktop computers.

I opened up an 8087 chip and took die photos with a microscope yielding the composite photo below. The labels show the main functional blocks, based on my reverse engineering. The die of the 8087 is complex, with 40,000 transistors.2 Internally, the 8087 uses 80-bit floating point numbers with a 64-bit fraction (also called significand or mantissa), a 15-bit exponent and a sign bit. (For a base-10 analogy, in the number 6.02×10²³, 6.02 is the fraction and 23 is the exponent.) At the bottom of the die, “fraction processing” indicates the circuitry for the fraction: from left to right, this includes storage of constants, a 64-bit shifter, the 64-bit adder/subtracter, and the register stack. Above this is the circuitry to process the exponent.

Die of the Intel 8087 floating point unit chip, with main functional blocks labeled.

An 8087 instruction required multiple steps, over 1000 in some cases. The 8087 used microcode to specify the low-level operations at each step: the shifts, adds, memory fetches, reads of constants, and so forth. You can think of microcode as a simple program, written in micro-instructions, where each micro-instruction generated control signals for the different components of the chip. In the die photo above, you can see the ROM that holds the 8087’s microcode program. The ROM takes up a large fraction of the chip, showing why the compact multi-level ROM was necessary. To the left of the ROM is the “engine” that ran the microcode program, essentially a simple CPU.

The 8087 operated as a co-processor with the 8086 processor. When the 8086 encountered a special floating point instruction, the processor ignored it and let the 8087 execute the instruction in parallel.3 I won’t explain in detail how the 8087 works internally, but as an overview, floating point operations were implemented using integer adds/subtracts and shifts. To add or subtract two floating point numbers, the 8087 shifted the numbers until the binary points (i.e. the decimal points but in binary) lined up, and then added or subtracted the fraction. Multiplication, division, and square root were performed through repeated shifts and adds or subtracts. Transcendental operations (tan, arctan, log, power) used CORDIC algorithms, which use shifts and adds of special constants, processing one bit at a time. The 8087 also dealt with many special cases: infinities, overflows, NaN (not a number), denormalized numbers, and several rounding modes. The microcode stored in ROM controlled all these operations.

Implementation of a ROM
The 8087 chip consists of a tiny silicon die, with regions of the silicon doped with impurities to give them the desired semiconductor properties. On top of the silicon, polysilicon (a special type of silicon) formed wires and transistors. Finally, a metal layer on top wired the circuitry together. In the photo below, the left side shows a small part of the chip as it appears under a microscope, magnifying the yellowish metal wiring. On the right, the metal has been removed with acid, revealing the polysilicon and silicon. When polysilicon crosses silicon, a transistor is formed. The pink regions are doped silicon, and the thin vertical lines are the polysilicon. The small circles are contacts between the silicon and metal layers, connecting them together.

Structure of the ROM in the Intel 8087 FPU. The metal layer is on the left and the polysilicon and silicon layers are on the right.

While there are many ways of building a ROM, a typical way is to have a grid of “cells,” with each cell holding a bit. Each cell can have a transistor for a 0 bit, or lack a transistor for a 1 bit. In the diagram above, you can see the grid of cells with transistors (where silicon is present under the polysilicon) and missing transistors (where there are gaps in the silicon). To read from the ROM, one column select line is energized (based on the address) to select the bits stored in that column, yielding one output bit from each row. You can see the vertical polysilicon column select lines and the horizontal metal row outputs in the diagram. The vertical doped silicon lines are connected to ground.

The schematic below (corresponding to a 4×4 ROM segment) shows how the ROM functions. Each cell either has a transistor (black) or no transistor (grayed-out). When a polysilicon column select line is energized, the transistors in that column turn on and pull the corresponding metal row outputs to ground. (For our purposes, an NMOS transistor is like a switch that is open if the input (gate) is 0 and closed if the input is 1.) The row lines output the data stored in the selected column.

Schematic of a 4×4 segment of a ROM.

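The column-select/row-output behavior in the schematic can be sketched as a behavioral toy model (the array layout and names are my own, not Intel’s circuit):

```python
# Behavioral toy model of a one-bit-per-cell NMOS ROM grid.
# grid[row][col] is True where a transistor is present; that transistor
# pulls its row line to ground, so the cell reads out as 0.
def read_column(grid, col):
    """Energize one column select line; each row outputs 0 if its
    transistor conducts, else 1 (the pull-up holds the line high)."""
    return [0 if grid[row][col] else 1 for row in range(len(grid))]

# 4x4 example: selecting column 2 yields one bit per row.
grid = [
    [False, True,  True,  False],
    [True,  False, False, False],
    [False, False, False, True],
    [True,  True,  True,  False],
]
print(read_column(grid, 2))  # -> [0, 1, 1, 0]
```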
The column select signals are generated by a decoder circuit. Since this circuit is built from NOR gates, I’ll first explain the construction of a NOR gate. The schematic below shows a four-input NOR gate built from four transistors and a pull-up resistor (actually a special transistor). On the left, all inputs are 0 so all the transistors are off and the pull-up resistor pulls the output high. On the right, an input is 1, turning on a transistor. The transistor is connected to ground, so it pulls the output low. In summary, if any inputs are high, the output is low so this circuit implements a NOR gate.

4-input NOR gate constructed from NMOS transistors.

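Behaviorally, the circuit above is just a NOR gate, which a one-line model (purely illustrative) captures:

```python
# NMOS NOR gate: any high input turns on its pull-down transistor and
# forces the output low; with all inputs low, the pull-up resistor wins.
def nmos_nor(*inputs):
    return 0 if any(inputs) else 1

print(nmos_nor(0, 0, 0, 0))  # all transistors off -> output high (1)
print(nmos_nor(0, 1, 0, 0))  # one transistor on  -> output low  (0)
```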
The column select decoder circuit takes the incoming address bits and activates the appropriate select line. The decoder contains an 8-input NOR gate for each column, with one NOR gate selected for the desired address. The photo shows two of the NOR gates generating two of the column select signals. (For simplicity, I only show four of the eight inputs.) Each column uses a different combination of address lines and complemented address lines as inputs, selecting a different address. The address lines are in the metal layer, which was removed for the photo below; the address lines are drawn in green. To determine the address associated with a column, look at the square contacts associated with each transistor and note which address lines are connected. If all the address lines connected to a column’s transistors are low, the NOR gate will select the column.

Part of the address decoder. The address decoder selects odd columns in the ROM, counting right to left. The numbers at the top show the address associated with each output.

The photo below shows a small part of the ROM’s decoder with all 8 inputs to the NOR gates. You can read out the binary addresses by carefully examining the address line connections. Note the binary pattern: a1 connections alternate every column, a2 connections alternate every two columns, a3 connections every four columns, and so forth. The a0 connection is fixed because this decoder circuit selects the odd columns; a similar circuit above the ROM selects the even addresses. (This split was necessary to make the decoder fit on the chip because each decoder column is twice as wide as a ROM cell.)

Part of the address decoder for the 8087’s microcode ROM. The decoder converts an 8-bit address into column select signals.

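One way to see why the NOR wiring decodes addresses: each column’s gate takes, for each address bit, either the address line or its complement, chosen so every input is low exactly when the address matches that column. A sketch (my own formulation, not the 8087’s split odd/even layout):

```python
def column_select(address, n_bits=4):
    """One-hot decoder: column c's NOR gate sees, for each bit, the
    address line (if c's bit is 0) or its complement (if c's bit is 1),
    so all of its inputs are low only when address == c."""
    outputs = []
    for c in range(2 ** n_bits):
        inputs = []
        for bit in range(n_bits):
            a = (address >> bit) & 1
            inputs.append(a if (c >> bit) & 1 == 0 else 1 - a)
        outputs.append(0 if any(inputs) else 1)  # the NOR gate
    return outputs

sel = column_select(5)
print(sel.index(1))  # -> 5: only column 5 is selected
```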
The last component of the ROM is the set of multiplexers that reduces the 64 output rows down to 8 rows.4 Each 8-to-1 multiplexer selects one of its 8 inputs, based on the address. The diagram below shows one of these row multiplexers in the 8087, built from eight large pass transistors, each one connected to one of the row lines. All the transistors are connected to the output so when the selected transistor is turned on, it passes its input to the output. The multiplexer transistors are much, much larger than the transistors in the ROM to reduce distortion of the ROM signal. A decoder (similar to the one discussed earlier, but smaller) generates the eight multiplexer control lines from three address lines.

One of eight row multiplexers in the ROM. This shows the poly/silicon layers, with metal wiring drawn in orange.

To summarize, the ROM stores bits in a grid. It uses eight address bits to select a column in the grid. Then three address bits select the desired eight outputs from the row lines.
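Putting the column selects and row multiplexers together, the addressing can be sketched like this (single-level cells for simplicity; the bit-field positions are my guess at an illustration, not the 8087’s actual address wiring):

```python
# rom[group][row][column]: 8 output groups, each with an 8:1 row
# multiplexer over 8 row lines, as described in the text.
def read_word(rom, address):
    column = address & 0xFF         # 8 bits pick the column
    row_sel = (address >> 8) & 0x7  # 3 bits drive the row multiplexers
    return [rom[group][row_sel][column] for group in range(8)]

rom = [[[0] * 256 for _row in range(8)] for _group in range(8)]
rom[3][5][42] = 1                       # set a single bit
print(read_word(rom, (5 << 8) | 42))    # group 3's output is 1
```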

The multi-level ROM
The discussion so far explained a typical ROM that stores one bit per cell. So how did the 8087 store two bits per cell? If you look closely, the 8087’s microcode ROM has four different transistor sizes (if you count “no transistor” as a size).6 With four possibilities for each transistor, a cell can encode two bits, approximately doubling the density.7 This section explains how the four transistor sizes generate four different currents, and how the chip’s analog and digital circuitry converts these currents into two bits.

A closeup of the 8087’s microcode ROM shows four different transistor sizes. This allows the ROM to store two bits per cell.

The size of the transistor controls the current through the transistor.8 The important geometric factor is the varying width of the silicon (pink) where it is crossed by the polysilicon (vertical lines), creating transistors with different gate widths. Since the gate width controls the current through the transistor, the four transistor sizes generate four different currents: the largest transistor passes the most current and no current will flow if there is no transistor at all.

The ROM current is converted to bits in several steps. First, a pull-up resistor converts the current to a voltage. Next, three comparators compare the voltage with reference voltages to generate digital signals indicating if the ROM voltage is lower or higher. Finally, logic gates convert the comparator output signals to the two output bits. This circuitry is repeated eight times, generating 16 output bits in total.

The circuit to read two bits from a ROM cell.

The circuit above performs these conversion steps. At the bottom, one of the ROM transistors is selected by the column select line and the multiplexer (discussed earlier), generating one of four currents. Next, a pull-up resistor12 converts the transistor’s current to a voltage, resulting in a voltage depending on the size of the selected transistor. The comparators compare this voltage to three reference voltages, outputting a 1 if the ROM voltage is higher than the reference voltage. The comparators and reference voltages require careful design because the ROM voltages could differ by as little as 200 mV.

The reference voltages are mid-way between the expected ROM voltages, allowing some fluctuation in the voltages. The lowest ROM voltage is lower than all the reference voltages so all comparators will output 0. The second ROM voltage is higher than Reference 0, so the bottom comparator outputs 1. For the third ROM voltage, the bottom two comparators output 1, and for the highest ROM voltage all comparators output 1. Thus, the three comparators yield four different output patterns depending on the ROM transistor. The logic gates then convert the comparator outputs into the two output bits.10
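The level-slicing step can be sketched numerically. The voltages below are invented for illustration; only their ordering matters (the text notes the real margins could be as small as 200 mV):

```python
# A bigger ROM transistor sinks more current, pulling the sensed node
# lower; "none" leaves the node highest.  Values are illustrative only.
CELL_VOLTAGE = {"large": 1.2, "medium": 1.8, "small": 2.4, "none": 3.0}
REFERENCES = [1.5, 2.1, 2.7]  # ref0..ref2, midway between adjacent levels

def comparator_bank(v_cell):
    """Each comparator outputs 1 if the ROM voltage exceeds its reference."""
    return [1 if v_cell > ref else 0 for ref in REFERENCES]

for size, v in CELL_VOLTAGE.items():
    print(size, comparator_bank(v))
# large  -> [0, 0, 0]    medium -> [1, 0, 0]
# small  -> [1, 1, 0]    none   -> [1, 1, 1]
```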

The design of the comparator is interesting because it is the bridge between the analog and digital worlds, producing a 1 or 0 if the ROM voltage is higher or lower than the reference voltage. Each comparator contains a differential amplifier that amplifies the difference between the ROM voltage and the reference voltage. The output from the differential amplifier drives a latch that stabilizes the output and converts it to a logic-level signal. The differential amplifier (below) is a standard analog circuit. A current sink (symbol at the bottom) provides a constant current. If one of the transistors has a higher input voltage than the other, most of the current passes through that transistor. The voltage drop across the resistors will cause the corresponding output to go lower and the other output to go higher.

Diagram showing the operation of a differential pair. Most of the current will flow through the transistor with the higher input voltage, pulling the corresponding output lower. The double-circle symbol at the bottom is a current sink, providing a constant current I.

The photo below shows one of the comparators on the chip; the metal layer is on top, with the transistors underneath. I’ll just discuss the highlights of this complex circuit; see the footnote12 for details. The signal from the ROM and multiplexer enters on the left. The pull-up circuit12 converts the current into a voltage. The two large transistors of the differential amplifier compare the ROM’s voltage with the reference voltage (entering at top). The outputs from the differential amplifier go to the latch circuitry (spread across the photo); the latch’s output is in the lower right. The differential amplifier’s current source and pull-up resistors are implemented with depletion-mode transistors. Each output circuit uses three comparators, yielding 24 comparators in total.

One of the comparators in the 8087. The chip contains 24 comparators to convert the voltage levels from the multi-level ROM into binary data.

Each reference voltage is generated by a carefully-sized transistor and a pull-up circuit. The reference voltage circuit is designed as similar as possible to the ROM’s signal circuitry, so any manufacturing variations in the chip will affect both equally. The reference voltage and ROM signal both use the same pull-up circuit. In addition, each reference voltage circuit includes a very large transistor identical to the multiplexer transistor, even though there is no multiplexing in the reference circuit, just to make the circuits match. The three reference voltage circuits are identical except for the size of the reference transistor.9

Circuit generating the three reference voltages. The reference transistors are sized between the ROM’s transistor sizes. The oxide layer wasn’t fully removed from this part of the die, causing the color swirls in the photo.

Putting all the pieces together, the photo below shows the layout of the microcode ROM components on the chip.12 The bulk of the ROM circuitry is the transistors holding the data. The column decoder circuitry is above and below this. (Half the column select decoders are at the top and half are at the bottom so they fit better.) The output circuitry is on the right. The eight multiplexers reduce the 64 row lines down to eight. The eight rows then go into the comparators, generating the 16 output bits from the ROM at the right. The reference circuit above the comparators generates the three reference voltages. At the bottom right, the small row decoder controls the multiplexers.

Microcode ROM from the Intel 8087 FPU with main components labeled.

While you’d hope for the multi-level ROM to be half the size of a regular ROM, it isn’t quite that efficient because of the extra circuitry for the comparators and because the transistors were slightly larger to accommodate the multiple sizes. Even so, the multi-level ROM saved about 40% of the space a regular ROM would have taken.

Now that I have determined the structure of the ROM, I could read out its contents simply (but tediously) by looking at the size of each transistor under a microscope. But without knowing the microcode instruction set, the contents aren’t useful.

The 8087 floating point chip used an interesting two-bit-per-cell structure to fit the microcode onto the chip. Intel re-used the multi-level ROM structure in 1981 in the doomed iAPX 432 system.11 As far as I can tell, interest in ROMs with multiple-level cells peaked in the 1980s and then died out, probably because Moore’s law made it easier to gain ROM capacity by shrinking a standard ROM cell rather than designing non-standard ROMs requiring special analog circuits built to high tolerances.14

Surprisingly, the multi-level concept has recently returned, but this time in flash memory. Many flash memories store two or more bits per cell.13 Flash has even achieved a remarkable 4 bits per cell (requiring 16 different voltage levels) with “quad-level cell” consumer products announced recently. Thus, an obscure technology from the 1980s can show up again decades later.

I announce my latest blog posts on Twitter, so follow me at @kenshirriff for future 8087 articles. I also have an RSS feed. Thanks to Jeff Epler for suggesting that I investigate the 8087’s ROM.

Notes and references
The 8087 has 1648 words of microcode (if I counted correctly), with 16 bits in each word, for a total of 26368 bits. The ROM size didn’t need to be a power of two since Intel could build it to the exact size required. ↩
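The arithmetic checks out, and with two bits per cell the cell count is half the bit count (assuming every word lives in the multi-level array):

```python
# Verify the footnote's numbers (1648 words x 16 bits) and derive the
# cell count under the two-bits-per-cell scheme described in the text.
words, bits_per_word = 1648, 16
total_bits = words * bits_per_word  # 26368 bits, as the footnote states
cells = total_bits // 2             # two bits per cell -> 13184 cells
print(total_bits, cells)
```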

Sources provide inconsistent values for the number of transistors in the 8087: Intel claims 40,000 transistors while Wikipedia claims 45,000. The discrepancy could be due to different ways of counting transistors. In particular, since the number of transistors in a ROM, PLA or similar structure depends on the data stored in it, sources often count “potential” transistors rather than the number of physical transistors. Other discrepancies can be due to whether or not pull-up transistors are counted and if high-current drivers are counted as multiple transistors in parallel or one large transistor. ↩

The interaction between the 8086 processor and the 8087 floating point unit is somewhat tricky; I’ll discuss some highlights. The simplified view is that the 8087 watches the 8086’s instruction stream, and executes any instructions that are 8087 instructions. The complication is that the 8086 has an instruction prefetch buffer, so the instruction being fetched isn’t the one being executed. Thus, the 8087 duplicates the 8086’s prefetch buffer (or the 8088’s smaller prefetch buffer), so it knows what the 8086 is doing. Another complication is the complex addressing modes used by the 8086, which use registers inside the 8086. The 8087 can’t perform these addressing modes since it doesn’t have access to the 8086 registers. Instead, when the 8086 sees an 8087 instruction, it does a memory fetch from the addressed location and ignores the result. Meanwhile, the 8087 grabs the address off the bus so it can use the address if it needs it. If there is no 8087 present, you might expect a trap, but that’s not what happens. Instead, for a system without an 8087, the linker rewrites the 8087 instructions, replacing them with subroutine calls to the emulation library. ↩

The reason ROMs typically use multiplexers on the row outputs is that it is inefficient to make a ROM with many columns and just a few output bits, because the decoder circuitry will be bigger than the ROM’s data. The solution is to reshape the ROM, to hold the same bits but with more rows and fewer columns. For instance, the ROM can have 8 times as many rows and 1/8 the columns, making the decoder 1/8 the size.

In addition, a long, skinny ROM (e.g. 1K×16) is inconvenient to lay out on a chip, since it won’t fit as a simple block. However, a serpentine layout could be used. For example, Intel’s early memories were shift registers; the 1405 held 512 bits in a single long shift register. To fit this onto a chip, the shift register wound back and forth about 20 times (details). ↩

Some IBM computers used an unusual storage technique to hold microcode: Mylar cards had holes punched in them (just like regular punch cards), and the computer sensed the holes capacitively (link). Some computers, such as the Xerox Alto, had some microcode in RAM. This allowed programs to modify the microcode, creating a new instruction set for their specific purposes. Many modern processors have writeable microcode so patches can fix bugs in the microcode. ↩

I didn’t notice the four transistor sizes in the microcode ROM until a comment on Hacker News mentioned that the 8087 used two-bit-per-cell technology. I was skeptical, but after looking at the chip more closely I realized the comment was correct. ↩

Several other approaches were used in the 1980s to store multiple bits per cell. One of the most common was used by Mostek and other companies: transistors in the ROM were doped to have different threshold voltages. By using four different threshold voltages, two bits could be stored per cell. Compared to Intel’s geometric approach, the threshold approach was denser (since all the transistors could be as small as possible), but required more mask layers and processing steps to produce the multiple implantation levels. This approach used the new (at the time) technology of ion implantation to carefully tune the doping levels of each transistor.

Ion implantation’s biggest impact on integrated circuits was its use to create depletion transistors (transistors with a negative threshold voltage), which worked much better as pull-up resistors in logic gates. Ion implantation was also used in the Z-80 microprocessor to create some transistor “traps”, circuits that looked like regular transistors under a microscope but received doping implants that made them non-functional. This served as copy protection since a manufacturer that tried to produce clones on the Z-80 by copying the chip with a microscope would end up with a chip that failed in multiple ways, some of them very subtle. ↩

The current through the transistor is proportional to the ratio between the width and length of the gate. (The length is the distance between the source and drain.) The ROM transistors (and all but the smallest reference transistor) keep the length constant and modify the width, so shrinking the width reduces the current flow. For MOSFET equations, see Wikipedia. ↩
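To first order, the footnote’s relation I ∝ W/L means fixed-length cells only need different widths; a small sketch (the constant k is a stand-in for process and bias terms, identical for cells on the same die):

```python
def on_current(width, length, k=1.0):
    """First-order MOSFET on-current ~ k * W / L, per the footnote;
    k lumps process and bias factors common to all cells on one die."""
    return k * width / length

# Three widths at a fixed gate length give three distinct currents; the
# fourth level, "no transistor", contributes zero current.
currents = [on_current(w, 4.0) for w in (8.0, 5.0, 2.0)] + [0.0]
print(currents)  # -> [2.0, 1.25, 0.5, 0.0]
```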

The gate of the smallest reference transistor is made longer rather than narrower, due to the properties of MOS transistors. The problem is that the reference transistors need to have sizes between the sizes of the ROM transistors. In particular, Reference 0 needs a transistor smaller than the smallest ROM transistor. But the smallest ROM transistor is already as small as possible using the manufacturing techniques. To solve this, note that the polysilicon crossing the middle reference transistor is much thicker horizontally. Since a MOS transistor’s properties are determined by the width to height ratio of its gate, expanding the polysilicon is as good as shrinking the silicon for making the transistor act smaller (i.e. lower current). ↩

The ROM logic decodes the transistor size to bits as follows: No transistor = 00, small transistor = 01, medium transistor = 11, large transistor = 10. This bit ordering saves a few gates in the decoding logic; since the mapping from transistor to bits is arbitrary, it doesn’t matter that the sequence is not in order. (See “Two Bits Per Cell ROM”, Stark for details.) ↩
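The mapping can be verified with two gates per output bit. The specific gate choice below is mine, not necessarily Intel’s exact logic, but it reproduces the stated assignment (00, 01, 11, 10) from the three comparator outputs:

```python
# Comparator patterns (c0, c1, c2), where 1 = ROM node voltage above that
# reference.  No transistor leaves the node highest, so all three read 1.
COMPARATORS = {
    "none":   (1, 1, 1),
    "small":  (1, 1, 0),
    "medium": (1, 0, 0),
    "large":  (0, 0, 0),
}

def decode(c0, c1, c2):
    """Two output bits (high, low) from the thermometer code; this gate
    choice is illustrative but matches the footnote's mapping."""
    return (1 - c1, c0 ^ c2)

for size in ("none", "small", "medium", "large"):
    print(size, decode(*COMPARATORS[size]))
# none -> (0, 0), small -> (0, 1), medium -> (1, 1), large -> (1, 0)
```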

Intel’s iAPX 43203 interface processor (1981) used a multiple-level ROM very similar to the one in the 8087 chip. For details, see “The interface processor for the Intel VLSI 432 32 bit computer,” J. Bayliss et al., IEEE J. Solid-State Circuits, vol. SC-16, pp. 522-530, Oct. 1981.
The 43203 interface processor provided I/O support for the iAPX 432 processor. Intel started the iAPX 432 project in 1975 to produce a “micromainframe” that would be Intel’s revolutionary processor for the 1980s. When the iAPX 432 project encountered delays, Intel produced the 8086 processor as a stopgap, releasing it in 1978. While the Intel 8086 was a huge success, leading to the desktop PC and the current x86 architecture, the iAPX 432 project ended up a failure and ended in 1986. ↩

The schematic below (from “Multiple-Valued ROM Output Circuits”) provides details of the circuitry to read the ROM. Conceptually the ROM uses a pull-up resistor to convert the transistor’s current to a voltage. The circuit actually uses a three transistor circuit (T3, T4, T5) as the pull-up. T4 and T5 are essentially an inverter providing negative feedback via T3, making the circuit less sensitive to perturbations (such as manufacturing variations). The comparator consists of a simple differential amplifier (yellow) with T6 acting as the current source. The differential amplifier output is converted into a stable logic-level signal by the latch (green).

Diagram of 8087 ROM output circuit. ↩
Flash memories are categorized as SLC (single level cell—one bit per cell), MLC (multi level cell—two bits per cell), TLC (triple level cell—three bits per cell) and QLC (quad level cell—four bits per cell). In general, flash with more bits per cell is cheaper but less reliable, slower, and wears out faster due to the smaller signal margins. ↩

The journal Electronics published a short article “Four-State Cell Doubles ROM Bit Capacity” (p39, Oct 9, 1980), describing Intel’s technique, but the article is vague to the point of being misleading. Intel published a detailed article “Two bits per cell ROM” in COMPCON (pp209-212, Feb 1981). An external group attempted to reverse engineer more detailed specifications of the Intel circuits in “Multiple-valued ROM output circuits” (Proc. 14th Int. Symp. Multivalue Logic, 1984). Two papers describing multiple-value memories are A Survey of Multivalued Memories (IEEE Transactions on Computers, Feb 1986, pp 99-106) and A review of multiple-valued memory technology (IEEE Symposium on Multiple-Valued Logic, 1998). ↩

Labels: 8087, electronics, reverse-engineering
Peter Ibbotson said…
Microsoft and Borland both used int 34h-3eh followed by the 8087 opcodes, and IIRC if a co-processor was found the int xx instruction was overwritten with a NOP, so this was done at runtime rather than at link time.

I believe that both of the emulators were written by “Tanj Bennett” but I could be wrong.

September 30, 2018 at 1:22 PM
Gregory P. Smith said…
Such a code modification would normally be done by the executable loader today, if DOS had had that concept. Otherwise, the fix-up could just be in the executable itself as the first code to be run.

September 30, 2018 at 2:20 PM
KE5FX said…
As I recall, Dave Stafford (now at Amazon) wrote an x87 emulator for Borland. Whether it was used for Turbo Pascal, Turbo C, or both, I’m not sure.

September 30, 2018 at 5:03 PM
Josh O said…
Great write-up. I love these techniques that were used to squeeze the most performance out of limited technology. Coming from early days programming 6502 ASM as a teen, and then later using microcontrollers for some projects at work, I’ve always had a tendency to not waste resources, which I fear is lost on newer developers who think nothing of creating large buffers or using data-types that are much larger than needed (though matching the native word size of the CPU may be more important depending on whether you are targeting speed or memory use). I think every young developer should be taught a history of computing where they learn about such techniques.

I love the idea of multiplexing multiple bits into a cell. Reminds me of QAM which besides the phase of the wave at a particular moment also uses varying amplitudes/signal-strengths to encode a “bit” more data on each symbol.

September 30, 2018 at 7:37 PM
Anonymous said…
Hi, iAPX432 circuit designer here. The active-mask programmed 2 bits/cell NMOS ROM had two more disadvantages that you didn’t mention. First, it was slower. It had 3X as many sense amps as a 1 bit/cell ROM, so each SA was operated at far less current, making them slower. And because the voltage windows around each of the 4 possible bitline voltages were smaller, you need to wait longer (greater N in the expression Twait = N*Rcell*Cbitline) before latching the sensed value, in order to guarantee success in the worst case of imperfect mask alignment and linewidth variation.

Second, it required one metal-to-active contact per two cells. The next generation mask ROM cell layout (at 1 bit/cell) only required one metal-to-active contact per four cells, so it was a lot smaller — small enough to negate the 2b/cell ROM’s supposed area advantages when looking at the entire ROM including decoders & sense amps. And the nextgen 1b/cell ROM was faster too. And it was implant programmed too (rather than active programmed), which is later in the process, which means quicker turnaround for a code change.

October 1, 2018 at 7:56 AM
Jimmy_James Jams_A_Lot said…
Thank you very much! So after I spend another eight months wrapping my brain around this very comprehensive article, I’ll be off to Santa Clara to sell my newly acquired skill set to Intel, where I’ll likely amaze their staff with my prolific knowledge of their chips.

October 1, 2018 at 8:08 AM
Zom-B said…
Where can we find the die images at the same resolution as the close-ups, in both pre- and post-etching versions?

October 1, 2018 at 11:46 AM
Ageev Sergei said…
This comment has been removed by the author.
October 2, 2018 at 10:57 AM
Ageev Sergei said…
Exactly. The 1b ROM may be comparable by area, if drawn carefully, but has much better performance.

October 2, 2018 at 11:00 AM
Cole Johnson said…
NOOOOOO!! My perfect world of 1s and 0s has been ruined! HOW DARE THEY MIX PURE DIGITAL LOGIC WITH ANALOG MIRE!!

Sigh… I guess our world isn’t made of just ONs and OFFs, but of noise, timing, parasitic properties and uncertainty. It’s hard to accept for people who started out in programming and digital logic.

I’m wondering how many ROMs were made using this technology. For the people who decode ROMs that can’t be obtained any way other than decapping, this could make the process more difficult and error-prone, but not impossible.

Thanks Anonymous for your knowledge on this subject. I really appreciate the people in the comments section of Ken’s blog and CuriousMarc’s channel who share their knowledge from old jobs or experiences, much of which is at risk of disappearing as the years go on. Sometimes the discussion is almost as informative as the post.

There’s one thing I can add to the discussion: one of the things I’ve been working on recently is a simulation of the SP0256 speech chip. This variant contains a 2048-cell ROM with normal 1-bit cells. The simulation is here. The (7 bit) decoder is above the ROM, the (4 bit) vertical multiplexer is on the right, and the bits leave into a shift register on the left. The full chip does not work correctly yet, due to a(t least one) bug somewhere, but the ROM and its supporting circuitry seem to function as expected.

October 2, 2018 at 3:48 PM

Thomas Wolf
Natural Language Processing, Deep learning and Computational Linguistics – Science Lead @ Huggingface
Oct 15

💥 Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups
I’ve spent most of 2018 training neural networks that push the limits of my GPUs. Whether it was a 150-million-parameter language model like OpenAI’s huge Generative Pre-trained Transformer (or the recent and similar BERT model) or a meta-learning neural net fed with 30-million-element inputs like the one in our ICLR ’18 paper, I could barely fit more than a few training samples on a GPU.

But most of the time stochastic gradient descent algorithms require larger batches than just a handful of examples to get decent results.

How can you train your model on large batches when your GPU can’t hold more than a few samples?
There are several tools, tips and tricks you can use to do that and I thought it would be nice to gather all the things I use and learned in a post.

In this post I will mainly talk about the PyTorch framework. Some of these tools are not in PyTorch yet (as of 1.0) so I include some custom code as well.

In particular, we’ll talk about:

How you can train a model on a single- or multi-GPU server with batches larger than the GPU’s memory, or even when a single training sample won’t fit (!),
How you can make the most efficient use of a multi-GPU machine, and
The simplest way to train using several machines in a distributed setting.
Let’s start by the simplest trick: gradient accumulation.

⌛️Large batches on one or several GPU(s)
So, you’ve built a nice model that might be the new SOTA on this neat task, but every time you try to stack more than a few samples in a batch you get a CUDA RuntimeError: out of memory.

But you’re pretty sure that doubling the batch size will improve the results.

How can you do that?
There is an easy solution to this problem: accumulating gradients. Here is a quick reminder on how stochastic gradient descent works from my earlier post on meta-learning:

The 5-steps of a gradient descent optimization algorithm
The PyTorch code equivalent of these 5 steps can also be written in 5 lines:
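The original gist is not reproduced in this copy; a minimal sketch of those five lines, with a toy linear model and random data standing in for the real network and inputs, might look like:

```python
import torch

model = torch.nn.Linear(10, 2)                                 # toy stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs, labels = torch.randn(8, 10), torch.randint(0, 2, (8,))

predictions = model(inputs)                                    # 1. forward pass
loss = torch.nn.functional.cross_entropy(predictions, labels)  # 2. compute the loss
optimizer.zero_grad()                                          # 3. reset the gradients
loss.backward()                                                # 4. backward pass: fills parameter.grad
optimizer.step()                                               # 5. update the parameters
```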

During the loss.backward() operation, gradients are computed for each parameter (in green on our animation) and stored in a tensor associated with each parameter: parameter.grad (the middle graph on our animation).

Accumulating gradients just means that, before calling optimizer.step() to perform a step of gradient descent, we will sum the gradients of several backward operations in the parameter.grad tensors. This is straightforward to do in PyTorch as the gradient tensors are not reset unless we call model.zero_grad() or optimizer.zero_grad(). We’ll also need to divide by the number of accumulation steps if our loss is averaged over the training samples.

Here is a simple gist for training a model using gradient accumulation. In this example we can train with a batch size that is accumulation_steps times larger than the maximum size that fits on our GPU(s):
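The gist itself is not embedded in this copy; a minimal sketch of the accumulation loop, with a toy model and toy mini-batches standing in for the real ones, might be:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)                      # toy stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# toy "data loader": 8 mini-batches of 4 samples each
loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]

accumulation_steps = 4                              # effective batch size: 4 * 4 = 16
model.zero_grad()                                   # start with clean gradients
for i, (inputs, labels) in enumerate(loader):
    loss = F.cross_entropy(model(inputs), labels)
    loss = loss / accumulation_steps                # normalize: the loss is averaged per mini-batch
    loss.backward()                                 # gradients add up in parameter.grad
    if (i + 1) % accumulation_steps == 0:           # every accumulation_steps mini-batches...
        optimizer.step()                            # ...take one optimizer step
        model.zero_grad()                           # ...and reset the accumulated gradients
```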

😱 Pushing that to the extreme
Can you train a model for which not even a single sample can fit on a GPU?

Well, if your architecture doesn’t have too many skip connections, yes, it’s possible! The solution is to trade compute for memory using gradient checkpointing.

Basically, the idea is to back-propagate the gradients in small chunks along the model, trading the memory needed to store a full back propagation graph with the additional compute of a partial forward pass associated to each chunk. This is a rather slow method as we add additional compute to reduce the memory requirements but it can be interesting in some settings, e.g. to train RNN models over very long sequences (see for example my previous introduction to meta-learning).
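PyTorch ships an implementation of this in torch.utils.checkpoint. A minimal sketch, with a toy stack of linear layers standing in for a real deep model: the forward pass runs in 2 segments, only the segment boundaries keep their activations, and the inner activations are recomputed during the backward pass.

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

# toy deep model: 10 stacked linear layers
model = torch.nn.Sequential(*[torch.nn.Linear(100, 100) for _ in range(10)])
inputs = torch.randn(4, 100, requires_grad=True)

# checkpoint_sequential(functions, segments, input): trade compute for memory
out = checkpoint_sequential(model, 2, inputs)
out.sum().backward()   # inner activations are recomputed chunk by chunk here
```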

I won’t go into more details here and will just refer you to the relevant links:

PyTorch doc:

A “Memory-poor” strategy that needs O(1) memory (but requires O(n²) computation steps) — From Yaroslav Bulatov’s nice post:
🕰 Making the best of a multi-GPU machine
Now let’s talk more specifically about training models on multiple GPUs.

The go-to strategy to train a PyTorch model on a multi-GPU server is to use torch.nn.DataParallel. It’s a container which parallelizes the application of a module by splitting the input across the specified devices, chunking along the batch dimension.

DataParallel is very easy to use: we just add one line to encapsulate the model:
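That one line, sketched with a toy model (note that on a machine without GPUs, DataParallel simply falls through to the wrapped module):

```python
import torch

model = torch.nn.Linear(10, 2)                 # toy stand-in for the real model
parallel_model = torch.nn.DataParallel(model)  # the one added line

inputs = torch.randn(8, 10)
predictions = parallel_model(inputs)           # input split across GPUs, outputs gathered on GPU-1
```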

However one issue can arise with DataParallel: unbalanced GPU usage.

Under some settings GPU-1 will be used a lot more than the other GPUs.
Where does this come from? I made an illustration to better explain what DataParallel does under the hood:

Forward and Backward passes with torch.nn.DataParallel
During step 4 of the forward pass (top-right), the results of all the parallel computations are gathered on GPU-1. This is fine for many classification problems, but it can become problematic when you train a language model on large batches, for example.

Let’s quickly compute the size of the output for a language model:

Number of elements in the output of a language model
If we assume a 40k vocabulary, 250 tokens in our sequences, 32 samples per batch and 4 bytes to store each element in memory, the output of our model takes about 1.2 GB. We need to double that to store the associated gradient tensors, so our model output requires 2.4 GB of memory!
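The arithmetic behind these numbers (the exact product is 1.28 GB, rounded to 1.2 GB above):

```python
vocab_size, seq_len, batch_size, bytes_per_float = 40_000, 250, 32, 4

output_bytes = vocab_size * seq_len * batch_size * bytes_per_float
print(output_bytes / 1e9)        # 1.28 -- GB for the output logits alone
print(2 * output_bytes / 1e9)    # 2.56 -- GB once the gradient tensor is counted
```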

That’s a significant portion of a typical 10 GB of GPU memory, and it means that GPU-1 will be over-used relative to the other GPUs, limiting the effect of the parallelization.

We cannot easily reduce the number of elements in this output without tweaking the model and/or optimization scheme. But we can make sure the memory load is more evenly distributed among the GPUs.

⚖️ Balanced load on a multi-GPU machine
The solution is to keep each partial output on its own GPU instead of gathering all of them on GPU-1. We will need to distribute our loss criterion computation as well, to be able to compute and backpropagate our loss.

Thankfully for us, Hang Zhang (张航) has open-sourced a nice PyTorch package called PyTorch-Encoding which comprises these custom parallelization functions.

I’ve extracted and slightly adapted this module, and you can download it here as a gist to include and call from your code. It mainly comprises two modules, DataParallelModel and DataParallelCriterion, which are made to be used as follows:

The difference between DataParallelModel and torch.nn.DataParallel is just that the output of the forward pass (predictions) is not gathered on GPU-1 and is thus a tuple of n_gpu tensors, each tensor being located on a respective GPU.

The DataParallelCriterion container encapsulates the loss function and takes as input the tuple of n_gpu tensors and the target labels tensor. It computes the loss function in parallel on each GPU, splitting the target label tensor the same way the model input was chunked by DataParallel.
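The embedded usage gist is missing from this copy; based on the description above, the intended pattern is roughly as follows (a sketch only — it is not runnable without the gist’s DataParallelModel and DataParallelCriterion classes):

```python
parallel_model = DataParallelModel(model)                  # outputs stay on their own GPUs
parallel_loss = DataParallelCriterion(torch.nn.CrossEntropyLoss())

predictions = parallel_model(inputs)                       # tuple of n_gpu tensors, one per GPU
loss = parallel_loss(predictions, labels)                  # loss computed in parallel on each GPU
loss.backward()
```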

I made an illustration of DataParallelModel/DataParallelCriterion internals:

Using DataParallelModel and DataParallelCriterion
Here is how to handle two particular cases you may encounter:

Your model outputs several tensors: you likely want to disentangle them with output_1, output_2 = zip(*predictions).
Sometimes you don’t want to use a parallel loss function: you can gather all the tensors on the CPU with gathered_predictions = parallel.gather(predictions).
⏰ Distributed training: training on several machines
Now how can we harness the power of several servers to train on even larger batches?

The simplest option is to use PyTorch DistributedDataParallel which is meant to be almost a drop-in replacement for DataParallel discussed above.

But be careful: while the code looks similar, training your model in a distributed setting will change your workflow because you will actually have to start an independent python training script on each node (these scripts are all identical). As we will see, once started, these training scripts will be synchronized together by PyTorch distributed backend.

In practice, this means that each training script will have:

its own optimizer, and it performs a complete optimization step at each iteration; no parameter broadcast (step 2 in DataParallel) is needed, and
an independent Python interpreter: this also avoids the GIL freeze that can come from driving several parallel execution threads in a single Python interpreter.
Models that make heavy use of Python loops/calls in their forward passes can be slowed down by the Python interpreter’s GIL when several parallel forward calls are driven by a single interpreter. In these settings, DistributedDataParallel can advantageously replace DataParallel even in a single-machine setup.
Now let’s just dive straight in the code and usage.

DistributedDataParallel is built on top of the torch.distributed package, which provides low-level primitives for synchronizing distributed operations and can make use of several backends (tcp, gloo, mpi, nccl) with different capabilities.

In this post I will select one simple way to use it out of the box, but you should read the doc and this nice tutorial by Séb Arnold to dive deeper into this module.

We will consider a simple but general setup with two 4-GPU servers (nodes):

The main server (server 1) has an accessible IP and an open port for communication.
🏃 Adapting our Python training script for distributed training
First we need to adapt our script so that it can be run separately on each machine (node). We are actually going to go fully distributed and run a separate process for each GPU of each node, so 8 processes in total.

Our training script is a bit longer, as we need to initialize the distributed backend for synchronization, encapsulate the model, and prepare the data to train each process on a separate subset of the data (every process is independent, so we have to take care of that ourselves). Here is the updated code:
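The updated script was embedded as a gist; a minimal standalone sketch of those three changes is below. The model and data are toy stand-ins, and the gloo backend plus the manually set environment variables let the sketch run as a one-process "cluster" on CPU (with torch.distributed.launch, the environment variables are set for you, and you would use the nccl backend with device_ids=[args.local_rank] on GPU nodes):

```python
import os
import torch
import torch.distributed as dist

# torch.distributed.launch normally sets these; defaults let the sketch run standalone
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("WORLD_SIZE", "1")
os.environ.setdefault("RANK", "0")

# 1. Initialize the distributed backend for synchronization
dist.init_process_group(backend="gloo", init_method="env://")

# 2. Encapsulate the model
model = torch.nn.Linear(10, 2)  # toy stand-in for the real model
model = torch.nn.parallel.DistributedDataParallel(model)

# 3. Give each process its own non-overlapping subset of the data
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
sampler = torch.utils.data.distributed.DistributedSampler(dataset)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, sampler=sampler)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for inputs, labels in loader:
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    optimizer.zero_grad()
    loss.backward()          # DistributedDataParallel all-reduces gradients here
    optimizer.step()

dist.destroy_process_group()
```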

✨ Launching multiple instances of our Python training script
We are almost done now. We just have to start an instance of our training script on each server.

To run our script, we’ll use the torch.distributed.launch utility of PyTorch. It takes care of setting the environment variables and calls each script with the right local_rank argument.
The first machine will be our master; it needs to be accessible from all the other machines and thus needs an accessible IP address (… in our example) and an open port (1234 in our case). On this first machine, we run our training script using torch.distributed.launch:

python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=0 --master_addr="…" --master_port=1234 (--arg1 --arg2 --arg3 and all other arguments of our training script)
On the second machine we similarly start our script:

python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=1 --master_addr="…" --master_port=1234 (--arg1 --arg2 --arg3 and all other arguments of our training script)
These two commands are identical except for the --node_rank argument, which is set to 0 on the first machine and 1 on the second (and would be 2 on an additional server, etc.).

The process of running a bunch of almost identical commands on a cluster of machines might look a bit tedious, so now is probably a good time to learn about the magic of… GNU parallel:

One exciting improvement of PyTorch v1.0 is the release of the c10d backend for the distributed module. I will update this simple introduction when v1.0 is released with more details on the new backend 🔥

This concludes our quick post on a few tips, tricks and tools to train your model on larger batches in a variety of settings.

I hope you enjoyed this more technical post!

Clap 👏 a couple of times if you liked it and want us to post more of these!

Tags: Machine Learning, NLP, PyTorch, AI, Tutorial
Brian said…
Oct 15
Thank you for the great explanation of how to train large models.

Conversation between Avnish Kumar and Thomas Wolf.
Avnish Kumar
Oct 15
This was a really helpful article Thomas. Thank you.
One question for you – what is “2,4 Go of memory”?

Thomas Wolf
Oct 15
Oh yeah that’s GB (GigaBytes). Corrected that, thanks!
