CyberWizard Institute Training Material

Welcome, friends.

Cyber Wizard Institute is an open, collaborative, and free programming school based out of the Sudo Room hackerspace in Oakland, CA.

We ran two sessions of Cyber Wizard in 2014 and 2015. The first session was a month long, meeting on weekdays from 12PM to 6PM. The second was two weeks long, meeting on weekday evenings and during the day on weekends (because it was only two weeks, we met every day).

There is a session of CWI happening in Prishtina, Kosovo at the Prishtina hackerspace from June 6 to June 23, 2017.

We have provided all of our materials for anyone who would like to learn and for anyone who is interested in running their own Cyber Wizard sessions.
Course Materials
intro to unix & command line notes video
html and css notes video
text editors: vim notes video
text editors: emacs no notes available video
intro to networking notes video
intro to javascript notes video
intro to python notes video
intro to node.js notes video
git and github notes video
the DOM notes video
intro to SQL no notes available video
leveldb notes video
streams in node.js notes video
regular expressions notes video
svg notes video
npm and using npm modules notes video
turing machines no notes available video
user experience (UX) notes video
user interface (UI) notes video
mobile app development and cordova notes video
using software to make physical things happen no notes available video
security and penetration testing notes no video available
building a web app notes video 1, video 2
browserify no notes available video
fuzzy logic and operator overloading notes video
intro to markdown notes no video available
writing modules with regular expressions no notes available video
set theory video 1, video 2
algorithmic complexity notes no video available
map reduce notes no video available
intro to data analysis notes
data and data types in js notes no video available
screen notes no video available
fun with synths no notes available video

The above is a partial list. See all lecture notes and all videos.
Learn more

See our github organization.

On IRC we are in #cyberwizard on freenode.

IQ is largely a pseudoscientific swindle

Background: “IQ” is a stale test meant to measure mental capacity but in fact mostly measures extreme unintelligence (learning difficulties), as well as, to a lesser extent (with a lot of noise), a form of intelligence, stripped of 2nd order effects — how good someone is at taking some type of exams designed by unsophisticated nerds. It is via negativa not via positiva. Designed for learning disabilities, and given that it is not too needed there (see argument further down), it ends up selecting for exam-takers, paper shufflers, obedient IYIs (intellectuals yet idiots), ill adapted for “real life”. (The fact that it correlates with general incompetence makes the overall correlation look high, even when it is random, see Figures 1 and 2.) The concept is poorly thought out mathematically by the field (commits a severe flaw in correlation under fat tails; fails to properly deal with dimensionality; treats the mind as an instrument not a complex system), and seems to be promoted by

racists/eugenists, people bent on showing that some populations have inferior mental abilities based on IQ test=intelligence; those have been upset with me for suddenly robbing them of a “scientific” tool, as evidenced by the bitter reactions to the initial post on twitter/smear campaigns by such mountebanks as Charles Murray. (As the great Karl Popper observed, psychologists have a tendency to pathologize people who bust them by tagging them with some type of disorder or personality flaw such as “childish”, “narcissist”, “egomaniac”, or something similar.)
psychometrics peddlers looking for suckers (military, large corporations) buying the “this is the best measure in psychology” argument when it is not even technically a measure — it explains at best between 2 and 13% of the performance in some tasks (those tasks that are similar to the test itself) [see interpretation of .5 correlation further down], minus the data massaging and statistical cherrypicking by psychologists; it doesn’t satisfy the monotonicity and transitivity required to have a measure (at best it is a concave measure). No measure that fails 80–95% of the time should be part of “science” (nor should psychology — owing to its sinister track record — be part of science (rather scientism), but that’s another discussion).

Typical confusion: graphs in Intelligence showing an effect of IQ on income for a large cohort. Even ignoring circularity (test takers get clerical and other boring jobs), injecting noise would show the lack of information in the graph. Note that the effect shown is lower than the variance between tests for the same individual!
Fig 1: The graph that summarizes the first flaw (assuming thin tailed situations), showing that “correlation” is meaningless in the absence of symmetry. We construct (in red) an intelligence test (horizontal), that is 100% correlated with negative performance (when IQ is, say, below 100) and 0% with upside, positive performance. We progressively add noise (with a 0 mean) and see correlation (on top) drop but shift to both sides. Performance is on the vertical axis. The problem gets worse with the “g” intelligence based on principal components. By comparison we show (graph below) the distribution of IQ and SAT scores. Most “correlations” entailing IQ suffer the same pathology. Note: this is in spite of the fact that IQ tests overlap with the SAT! (To echo Haldane, one ounce of rigorous algebra is worth more than a century of verbalistic statisticopsycholophastering).
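To make the figure’s construction concrete, here is a minimal simulation sketch in Python (the threshold, the 100/15 scale, and the noise levels are illustrative assumptions): performance is fully determined by the score below the threshold and is pure noise above it, yet the headline correlation stays high.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    iq = rng.normal(100, 15, n)  # the test score, Gaussian by construction

    # Via negativa: below the threshold the score fully determines performance;
    # above it, the score carries no information at all.
    for noise_sd in (0, 5, 15, 30):
        performance = np.where(iq < 100, iq - 100, 0.0) + rng.normal(0, noise_sd, n)
        r = np.corrcoef(iq, performance)[0, 1]
        print(f"noise sd {noise_sd:>2}: correlation {r:.2f}")

With zero noise the correlation already comes out near 0.86, despite the test being pure noise for the entire upper half; no single correlation number distinguishes this one-sided relationship from genuine two-sided predictive power.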

It is at the bottom an immoral measure that, while not working, can put people (and, worse, groups) in boxes for the rest of their lives.
There is no significant statistical association between IQ and hard measures such as wealth. Most “achievements” linked to IQ are measured in circular stuff s.a. bureaucratic or academic success, things for test takers and salary earners in structured jobs that resemble the tests. Wealth may not mean success but it is the only “hard” number, not some discrete score of achievements. You can buy food with $30, not with other “successes” s.a. rank, social prominence, or having had a selfie with the Queen.

The informational interpretation of correlation, in terms of “how much information do I get about A knowing B”. Add to that the variance in results of IQ tests for the very same person.
An extension of the first flaw that shows how correlations are overestimated. Probability is hard.
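For jointly Gaussian variables the informational reading of correlation has a closed form: I(A;B) = -1/2 · log2(1 - ρ²) bits. A quick sketch using that textbook information-theory formula (the formula is standard, not taken from the original post):

    import math

    def gaussian_mutual_info_bits(rho: float) -> float:
        # Mutual information between two jointly Gaussian variables, in bits.
        return -0.5 * math.log2(1 - rho ** 2)

    for rho in (0.2, 0.5, 0.8):
        print(f"rho = {rho}: {gaussian_mutual_info_bits(rho):.2f} bits")

A correlation of .5 delivers only about 0.21 bits of information about the other variable (a perfect binary predictor delivers 1 bit), and that is before the within-person retest variance discussed below is accounted for.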

Psychologists do not realize that the effect of IQ (if any, ignoring circularity) is smaller than the difference between IQ tests for the same individual.
Some argue that IQ measures intellectual capacity — real world results come from, in addition, “wisdom” or patience, or “conscientiousness”, or decision-making or something of the sort. No. It does not even measure intellectual capacity/mental powers.

If you want to detect how someone fares at a task, say loan sharking, tennis playing, or random matrix theory, make him/her do that task; we don’t need theoretical exams for a real world function by probability-challenged psychologists. Traders get it right away: hypothetical P/L from “simulated” paper strategies doesn’t count. Performance=actual. What goes in people’s head as a reaction to an image on a screen doesn’t exist (except via negativa).
IQ and wealth at low scale (outside the tail). Mostly noise and no strikingly visible effect above $40K, but huge noise. Psychologists responding to this piece do not realize that statistics is about not interpreting noise. From Zagorsky (2007).
There is little information in IQ/income above $45K (assume 2007 dollars). Even in situations showing the presence of a correlation, you see MONSTROUS noise, even at low IQ and low income! Shows IQ is designed for subservient low-salary earners. From Zagorsky (2007). This truncates the big upside, so we are not even seeing the effect of fat tails.

Fat Tails If IQ is Gaussian by construction (well, almost) and if real world performance were, net, fat tailed (it is), then either the covariance between IQ and performance doesn’t exist or it is uninformational. It will show a finite number in sample but doesn’t exist statistically — and the metrics will overestimate the predictability. Another problem: when they say “black people are x standard deviations away”, they don’t know what they are talking about. Different populations have different variances, even different skewness, and these comparisons require richer models. These are severe, severe mathematical flaws (a billion papers in psychometrics wouldn’t count if you have such a flaw). See the formal treatment in my next book.
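A minimal sketch of that instability (Python; the Student-t setup, the 0.3 coefficient, and the sample sizes are illustrative assumptions): resample the same fat-tailed data-generating process and watch the “finite number in sample” swing.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_corr(df, n=2_000):
        # Both variables share a Student-t driver; df controls tail fatness.
        x = rng.standard_t(df, n)
        y = 0.3 * x + rng.standard_t(df, n)
        return np.corrcoef(x, y)[0, 1]

    for df in (2, 3, 30):  # df=2: infinite variance; df=30: near-Gaussian
        corrs = np.array([sample_corr(df) for _ in range(500)])
        print(f"df={df:>2}: mean r {corrs.mean():.2f}, spread (sd) {corrs.std():.2f}")

With near-Gaussian tails the sample correlation is stable from sample to sample; at df=2 (infinite variance) every sample still reports some finite correlation, but the number is a lottery, which is exactly what “shows a finite number in sample but doesn’t exist statistically” means.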
Mensa members: typically high “IQ” losers in Birkenstocks.

But the “intelligence” in IQ is determined by academic psychologists (no geniuses), like the “paper trading” we mentioned above, via statistical constructs s.a. correlation that I show here (see Fig. 1) that they patently don’t understand. It does correlate to very negative performance (as it was initially designed to detect learning special needs), but then any measure would work there. A measure that works in the left tail, not the right tail (IQ decorrelates as it goes higher), is problematic. We have gotten similar results since the famous Terman longitudinal study, even with massaged data for later studies. To get the point, consider that if someone has mental needs, there will be 100% correlation between performance and IQ tests. But the performance doesn’t correlate as well at higher levels, though, unaware of the effect of the nonlinearity, psychologists will think it does. (The statistical spin, as a marketing argument, is that a person with an IQ of 70 cannot prove theorems, which is obvious for a measure of unintelligence — but they fail to reveal how many people with IQs of 150 are doing menial jobs.)

It is a false comparison to claim that IQ “measures the hardware” rather than the software. It can measure some arbitrarily selected mental abilities (in a testing environment) believed to be useful. However, if you take a Popperian-Hayekian view on intelligence, you would realize that to measure fitness for future needs you would need to know the mental skills required in a future ecology, which requires predictability of said future ecology. It also requires the skills to make it to the future (hence the need for mental biases for survival).
The Best Map Fallacy (Technical Incerto)

Real Life: In academia there is no difference between academia and the real world; in the real world there is. 1) When someone asks you a question in the real world, you focus first on “why is he/she asking me that?”, which shifts you to the environment (see Fat Tony vs Dr John in The Black Swan) and distracts you from the problem at hand. Philosophers have known about that problem forever. Only suckers don’t have that instinct. Further, take the sequence {1,2,3,4,x}. What should x be? Only someone who is clueless about induction would answer 5 as if it were the only answer (see Goodman’s problem in a philosophy textbook or ask your closest Fat Tony) [Note: we can also apply here Wittgenstein’s rule-following problem, which states that any of an infinite number of functions is compatible with any finite sequence. Source: Paul Boghossian]. Not only clueless, but obedient enough to want to think in a certain way. 2) Real life never offers crisp questions with crisp answers (most questions don’t have answers; perhaps the worst problem with IQ is that it seems to select for people who don’t like to say “there is no answer, don’t waste time, find something else”). 3) It takes a certain type of person to waste intelligent concentration on classroom/academic problems. These are lifeless bureaucrats who can muster sterile motivation. Some people can only focus on problems that are real, not fictional textbook ones (see the note below where I explain that I can only concentrate on real, not fictional, problems). 4) IQ doesn’t detect convexity of mistakes (by an argument similar to bias-variance, you need to make a lot of small inconsequential mistakes in order to avoid a large consequential one; see Antifragile and how any measure of “intelligence” w/o convexity is sterile: edge.org/conversation/n…). To do well you must survive; survival requires some mental biases directing us to some errors. 5) Fooled by Randomness: seeing shallow patterns is not a virtue — it leads to naive interventionism. Some psychologist wrote back to me: “IQ selects for pattern recognition, essential for functioning in modern society”. No. Not seeing patterns except when they are significant is a virtue in real life. 6) To do well in life you need depth and the ability to select your own problems, and to think independently.
This is no longer a regression. It is scientific fraud. A few random points from the same distribution can invert the slope of the regression. (From Jones and Schneider, 2010 attempting to make sense of the race-motivated notion of Average National IQ).
Upper bound: discount the massaging and correlation effects. Note that 50% correlation corresponds to 13% improvement over random picks. Figure from the highly unrigorous Intelligence: All That Matters by S. Ritchie.

National IQ is a Fraud. From engaging participants (who throw buzzwords at you), I realized that the concept has huge variance, enough to be uninformative. See graph. And note that the variance within populations is not used to draw conclusions (you average over functions, don’t use the function over averages) — a problem acute for tail contributions.
Notice the noise: the top 25% of janitors have higher IQ than the bottom 25% of college professors, even counting the circularity. The circularity bias shows most strikingly with MDs as medical schools require a higher SAT score.

Recall from Antifragile that if wealth were fat tailed, you’d need to focus on the tail minority (for which IQ has unpredictable payoff), never the average. Further, it leads to racist imbeciles who think that if a country has an IQ of 82 (assuming it is true and not the result of a lack of such training), it means politically that all the people there have an IQ of 82, hence let’s ban them from immigrating. As I said, they don’t even get elementary statistical notions such as variance. Some people use National IQ as a basis for genetic differences: it doesn’t explain the sharp changes in Ireland and Croatia upon European integration, or, in the other direction, the difference between Israeli and U.S. Ashkenazis.

Additional Variance: Unlike measurements of height or wealth, which carry a tiny relative error, many people get yuugely different results for the same IQ test (I mean the same person!), up to 2 standard deviations as measured across people, higher than sampling error in the population itself! This additional source of sampling error weakens the effect by propagation of uncertainty way beyond its predictability when applied to the evaluation of a single individual. It also tells you that you as an individual are vastly more diverse than the crowd, at least with respect to that measure!
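The standard propagation-of-uncertainty result here is Spearman’s attenuation formula: if a test measures its target with reliability R (the correlation of the test with itself on retest), any observed correlation with an outcome shrinks by a factor of √R. A sketch with hypothetical numbers:

    # Spearman attenuation: observed r = true r * sqrt(reliability).
    # The reliabilities below are hypothetical, for illustration only.
    true_r = 0.5
    for reliability in (1.0, 0.8, 0.5):
        observed = true_r * reliability ** 0.5
        print(f"reliability {reliability}: observed r ~ {observed:.2f}")

Large retest scatter for the same person means low reliability, which by itself shrinks whatever correlation exists and makes the score far less informative about any single individual than about a population average.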

Biases in Research: If, as psychologists show (see figure), MDs and academics tend to have a higher “IQ” that is slightly informative (higher, but on a noisy average), it is largely because to get into schools you need to score on a test similar to “IQ”. The mere presence of such a filter increases the visible mean and lowers the visible variance. Probability and statistics confuse fools.
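The filtering effect is easy to demonstrate: truncating a Gaussian at an admission cutoff mechanically raises the visible mean and shrinks the visible variance, with no change in the underlying population. A minimal sketch (the cutoff is hypothetical):

    import numpy as np

    rng = np.random.default_rng(2)
    scores = rng.normal(100, 15, 1_000_000)
    admitted = scores[scores > 115]  # hypothetical admission cutoff at +1 sd

    print(scores.mean(), scores.std())      # ~100, ~15
    print(admitted.mean(), admitted.std())  # ~123, ~6.7: higher mean, much smaller spread

So a group selected through a test similar to “IQ” will look both smarter and more homogeneous on that test, whatever the test is actually worth.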

Functionary Quotient: If you renamed IQ from “Intelligence Quotient” to FQ “Functionary Quotient” or SQ “Salaryperson Quotient”, then some of the stuff would be true. It measures best the ability to be a good slave confined to linear tasks. “IQ” is good for @davidgraeber’s “BS jobs”.

Metrification: If someone came up w/ a numerical “Well Being Quotient” WBQ or “Sleep Quotient” SQ, trying to mimic temperature or a physical quantity, you’d find it absurd. But put enough academics w/ physics envy and race hatred on it and it will become an official measure.
Notes And Technical Notes

The argument by psychologists to make IQ useful is of the sort: who would you like to do brain surgery on you/who would you hire in your company/who would you recommend, someone with a 90 IQ or one with 130? It is …academic. Well, you pick people on task-specific performance, which should include some filtering. In the real world you interview people from their CV (not from some IQ number sent to you as in a thought experiment), and, once you have their CV, the 62 IQ fellow is naturally eliminated. So the only thing for which IQ can select, the mentally disabled, is already weeded out in real life: he/she can’t have a degree in engineering or medicine. Which explains why IQ is unnecessary, and using it is risky because you miss out on the Einsteins and Feynmans.
“IQ” is most predictive of performance in military training, with correlation ~.5 (which is circular, since hiring isn’t random and training is another test).
There are contradictory stories about whether IQ ceases to work past a threshold, since Terman’s longitudinal study of “geniuses”. What these researchers don’t get is these contradictions come from the fact that the variance of the IQ measure increases with IQ. Not a good thing.
The argument that “some races are better at running” hence [some inference about the brain] is stale: mental capacity is much more dimensional and not defined in the same way running 100 m dash is.
I have no psychological references in this piece (except via negativa, taking their “best”): simply, the field is bust. So far ~50% of the research does not replicate, and the papers that do show weaker effects. Not counting the poor transfer to reality (psychological papers are ludic). On how P values are often — rather, almost always — fraudulent: my paper arxiv.org/pdf/1603.07532…
The Flynn effect should warn us not just that IQ is somewhat environment dependent, but that it is at least partly circular.
Verbalism: Psychologists have a skin-deep statistical education & can’t translate something as trivial as “correlation” or “explained variance” into meaning, esp. under nonlinearities (see paper at the end).
The “best measure” charlatans: IQ is reminiscent of risk charlatans insisting on selling “value at risk”, VaR, and RiskMetrics, saying “it’s the best measure”. That “best” measure, being unreliable, blew them up many, many times. Note the class of suckers for whom a bad measure is better than no measure across domains.
You can’t do statistics without probability.
Much of the stuff about IQ of physicists is suspicious, from self-reporting biases/selection in tests.
If you looked at Northern Europe from Ancient Babylon/Ancient Med/Egypt, you would have written the inhabitants off as losers who are devoid of potential… Then look at what happened after 1600. Be careful when you discuss populations.
The same people hold that IQ is heritable, that it determines success, that Asians have higher IQs than Caucasians, degrade Africans, then don’t realize that China for about a century had one order of magnitude lower GDP than the West.

Responses by Psychologists

Alt-Right figures such as James Thompson

Reactions to this piece in the Alt-Right Media: all they got is a psychologist who still hasn’t gotten to the basics of elementary correlation and noise/signal. The fact that psychologists selected him to defend them (via retweets) speaks volumes about their sophistication.

Hack job by one Jonatan Pallesen, full of mistakes about this piece (and the “empiricism”), promoted by mountebanks such as Murray. He didn’t get that of course one can produce “correlation” from data. It is the interpretation of these correlations that is full of BS. Pallesen also produces some lies about what I said, which have been detected in online comments (e.g. the quiz I gave and using Log vs X).

The Democratization of Censorship

John Gilmore, an American entrepreneur and civil libertarian, once famously quipped that “the Internet interprets censorship as damage and routes around it.” This notion undoubtedly rings true for those who see national governments as the principal threats to free speech.

However, events of the past week have convinced me that one of the fastest-growing censorship threats on the Internet today comes not from nation-states, but from super-empowered individuals who have been quietly building extremely potent cyber weapons with transnational reach.

More than 20 years after Gilmore first coined that turn of phrase, his most notable quotable has effectively been inverted — “Censorship can in fact route around the Internet.” The Internet can’t route around censorship when the censorship is all-pervasive and armed with, for all practical purposes, near-infinite reach and capacity. I call this rather unwelcome and hostile development “The Democratization of Censorship.”

Allow me to explain how I arrived at this unsettling conclusion. As many of you know, my site was taken offline for the better part of this week. The outage came in the wake of a historically large distributed denial-of-service (DDoS) attack which hurled so much junk traffic at Krebsonsecurity.com that my DDoS protection provider Akamai chose to unmoor my site from its protective harbor.

Let me be clear: I do not fault Akamai for their decision. I was a pro bono customer from the start, and Akamai and its sister company Prolexic have stood by me through countless attacks over the past four years. It just so happened that this last siege was nearly twice the size of the next-largest attack they had ever seen before. Once it became evident that the assault was beginning to cause problems for the company’s paying customers, they explained that the choice to let my site go was a business decision, pure and simple.

Nevertheless, Akamai rather abruptly informed me I had until 6 p.m. that very same day — roughly two hours later — to make arrangements for migrating off their network. My main concern at the time was making sure my hosting provider wasn’t going to bear the brunt of the attack when the shields fell. To ensure that absolutely would not happen, I asked Akamai to redirect my site to 127.0.0.1 — effectively relegating all traffic destined for KrebsOnSecurity.com into a giant black hole.
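Mechanically, such a black-hole redirect can be as simple as pointing the site’s DNS A record at the loopback address, so each client’s requests go to the client’s own machine and never reach any server. A hypothetical BIND-style record (the TTL is illustrative):

    krebsonsecurity.com.    300    IN    A    127.0.0.1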

Today, I am happy to report that the site is back up — this time under Project Shield, a free program run by Google to help protect journalists from online censorship. And make no mistake, DDoS attacks — particularly those the size of the assault that hit my site this week — are uniquely effective weapons for stomping on free speech, for reasons I’ll explore in this post.
Google’s Project Shield is now protecting KrebsOnSecurity.com

Why do I speak of DDoS attacks as a form of censorship? Quite simply because the economics of mitigating large-scale DDoS attacks do not bode well for protecting the individual user, to say nothing of independent journalists.

In an interview with The Boston Globe, Akamai executives said the attack — if sustained — likely would have cost the company millions of dollars. In the hours and days following my site going offline, I spoke with multiple DDoS mitigation firms. One offered to host KrebsOnSecurity for two weeks at no charge, but after that they said the same kind of protection I had under Akamai would cost between $150,000 and $200,000 per year.

Ask yourself how many independent journalists could possibly afford that kind of protection money? A number of other providers offered to help, but it was clear that they did not have the muscle to be able to withstand such massive attacks.

I’ve been toying with the idea of forming a 501(c)3 non-profit organization — ‘The Center for the Defense of Internet Journalism’, if you will — to assist Internet journalists with obtaining the kind of protection they may need when they become the targets of attacks like the one that hit my site. Maybe a Kickstarter campaign, along with donations from well-known charitable organizations, could get the ball rolling. It’s food for thought.
CALIBRATING THE CANNONS

Earlier this month, noted cryptologist and security blogger Bruce Schneier penned an unusually alarmist column titled, “Someone Is Learning How to Take Down the Internet.” Citing unnamed sources, Schneier warned that there was strong evidence indicating that nation-state actors were actively and aggressively probing the Internet for weak spots that could allow them to bring the entire Web to a virtual standstill.

“Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services,” Schneier wrote. “Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that.”

Schneier continued:

“Furthermore, the size and scale of these probes — and especially their persistence — points to state actors. It feels like a nation’s military cyber command trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.”

Whether Schneier’s sources were accurate in their assessment of the actors referenced in his blog post is unknown. But as my friend and mentor Roland Dobbins at Arbor Networks eloquently put it, “When it comes to DDoS attacks, nation-states are just another player.”

“Today’s reality is that DDoS attacks have become the Great Equalizer between private actors & nation-states,” Dobbins quipped.

UM…YOUR RERUNS OF ‘SEINFELD’ JUST ATTACKED ME

What exactly was it that generated the record-smashing DDoS of 620 Gbps against my site this week? Was it a space-based weapon of mass disruption built and tested by a rogue nation-state, or an arch villain like SPECTRE from the James Bond series of novels and films? If only the enemy here was that black-and-white.

No, as I reported in the last blog post before my site was unplugged, the enemy in this case was far less sexy. There is every indication that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things” (IoT) devices — mainly routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords. Most of these devices are available for sale on retail store shelves for less than $100, or — in the case of routers — are shipped by ISPs to their customers.

Some readers on Twitter have asked why the attackers would have “burned” so many compromised systems with such an overwhelming force against my little site. After all, they reasoned, the attackers showed their hand in this assault, exposing the Internet addresses of a huge number of compromised devices that might otherwise be used for actual money-making cybercriminal activities, such as hosting malware or relaying spam. Surely, network providers would take that list of hacked devices and begin blocking them from launching attacks going forward, the thinking goes.

As KrebsOnSecurity reader Rob Wright commented on Twitter, “the DDoS attack on @briankrebs feels like testing the Death Star on the Millennium Falcon instead of Alderaan.” I replied that this maybe wasn’t the most apt analogy. The reality is that there are currently millions — if not tens of millions — of insecure or poorly secured IoT devices that are ripe for being enlisted in these attacks at any given time. And we’re adding millions more each year.

I suggested to Mr. Wright perhaps a better comparison was that ne’er-do-wells now have a virtually limitless supply of Stormtrooper clones that can be conscripted into an attack at a moment’s notice.

A scene from the 1977 movie Star Wars, in which the Death Star tests its firepower by blowing up a planet.
SHAMING THE SPOOFERS

The problem of DDoS conscripts goes well beyond the millions of IoT devices that are shipped insecure by default: Countless hosting providers and ISPs do nothing to prevent devices on their networks from being used by miscreants to “spoof” the source of DDoS attacks.

As I noted in a November 2015 story, The Lingering Mess from Default Insecurity, one basic step that many ISPs can but are not taking to blunt these attacks involves a network security standard that was developed and released more than a dozen years ago. Known as BCP38, its use prevents insecure resources on an ISP’s network (hacked servers, computers, routers, DVRs, etc.) from being leveraged in such powerful denial-of-service attacks.

Using a technique called traffic amplification and reflection, the attacker can reflect his traffic from one or more third-party machines toward the intended target. In this type of assault, the attacker sends a message to a third party, while spoofing the Internet address of the victim. When the third party replies to the message, the reply is sent to the victim — and the reply is much larger than the original message, thereby amplifying the size of the attack.
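The arithmetic is what makes reflection attractive: the attacker’s own bandwidth gets multiplied by the response-to-request size ratio of whatever service is abused. A rough sketch using approximate bandwidth amplification factors cited in US-CERT alert TA14-017A (the 1 Gbps uplink is a hypothetical figure):

    # Approximate bandwidth amplification factors (per US-CERT alert TA14-017A).
    amplification = {"DNS": 54, "NTP": 557, "SSDP": 31, "CharGEN": 359}

    attacker_uplink_gbps = 1.0  # hypothetical spoofed-request budget
    for proto, factor in amplification.items():
        victim_gbps = attacker_uplink_gbps * factor
        print(f"{proto:>7}: 1 Gbps of spoofed requests -> ~{victim_gbps:.0f} Gbps at the victim")

None of this works unless the attacker can forge the victim’s address as the source of the requests, which is precisely what BCP38 is meant to prevent.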

BCP38 is designed to filter such spoofed traffic, so that it never even traverses the network of an ISP that’s adopted the anti-spoofing measures. However, there are non-trivial economic reasons that many ISPs fail to adopt this best practice. This blog post from the Internet Society does a good job of explaining why many ISPs ultimately decide not to implement BCP38.
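At its core, BCP38 is source-address validation at the network edge: drop any packet whose source address could not legitimately have originated from the interface it arrived on. A minimal sketch of the decision (Python; the prefixes and addresses are documentation ranges, purely illustrative):

    import ipaddress

    # Prefixes actually assigned to the customer behind this edge port.
    customer_prefixes = [ipaddress.ip_network("198.51.100.0/24")]

    def permit(source_ip: str) -> bool:
        # BCP38-style ingress filter: forward only if the claimed source
        # address belongs to a prefix assigned to this customer.
        src = ipaddress.ip_address(source_ip)
        return any(src in net for net in customer_prefixes)

    print(permit("198.51.100.7"))  # True: legitimate customer source
    print(permit("203.0.113.9"))   # False: spoofed source, dropped at the edge

In practice this logic lives in router ACLs or unicast reverse-path forwarding checks rather than application code, but the decision being made is exactly this one.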

Fortunately, there are efforts afoot to gather information about which networks and ISPs have neglected to filter out spoofed traffic leaving their networks. The idea is that by “naming and shaming” the providers who aren’t doing said filtering, the Internet community might pressure some of these actors into doing the right thing (or perhaps even offer preferential treatment to those providers who do conduct this basic network hygiene).

A research experiment by the Center for Applied Internet Data Analysis (CAIDA) called the “Spoofer Project” is slowly collecting this data, but it relies on users voluntarily running CAIDA’s software client to gather that intel. Unfortunately, a huge percentage of the networks that allow spoofing are hosting providers that offer extremely low-cost, virtual private servers (VPS). And these companies will never voluntarily run CAIDA’s spoof-testing tools.
CAIDA’s Spoofer Project page.


As a result, the biggest offenders will continue to fly under the radar of public attention unless and until more pressure is applied by hardware and software makers, as well as ISPs that are doing the right thing.

How might we gain a more complete picture of which network providers aren’t blocking spoofed traffic — without relying solely on voluntary reporting? That would likely require a concerted effort by a coalition of major hardware makers, operating system manufacturers and cloud providers, including Amazon, Apple, Google, Microsoft and entities which maintain the major Web server products (e.g., Apache and Nginx), as well as the major Linux and Unix operating systems.

The coalition could decide that they will unilaterally build such instrumentation into their products. At that point, it would become difficult for hosting providers or their myriad resellers to hide the fact that they’re allowing systems on their networks to be leveraged in large-scale DDoS attacks.

To address the threat from the mass-proliferation of hardware devices such as Internet routers, DVRs and IP cameras that ship with default-insecure settings, we probably need an industry security association, with published standards that all members adhere to and are audited against periodically.

The wholesalers and retailers of these devices might then be encouraged to shift their focus toward buying and promoting connected devices which have this industry security association seal of approval. Consumers also would need to be educated to look for that seal of approval. Something like Underwriters Laboratories (UL), but for the Internet, perhaps.
THE BLEAK VS. THE BRIGHT FUTURE

As much as I believe such efforts could help dramatically limit the firepower available to today’s attackers, I’m not holding my breath that such a coalition will materialize anytime soon. But it’s probably worth mentioning that there are several precedents for this type of cross-industry collaboration to fight global cyber threats.

In 2008, the United States Computer Emergency Readiness Team (CERT) announced that researcher Dan Kaminsky had discovered a fundamental flaw in DNS that could allow anyone to intercept and manipulate most Internet-based communications, including email and e-commerce applications. A diverse community of software and hardware makers came together to fix the vulnerability and to coordinate the disclosure and patching of the design flaw.

In 2009, Microsoft heralded the formation of an industry group to collaboratively counter Conficker, a malware threat that infected tens of millions of Windows PCs and held the threat of allowing cybercriminals to amass a stupendous army of botted systems virtually overnight. A group of software and security firms, dubbed the Conficker Cabal, hashed out and executed a plan for corralling infected systems and halting the spread of Conficker.

In 2011, a diverse group of industry players and law enforcement organizations came together to eradicate the threat from the DNS Changer Trojan, a malware strain that infected millions of Microsoft Windows systems and enslaved them in a botnet that was used for large-scale cyber fraud schemes.

These examples provide useful templates for a solution to the DDoS problem going forward. What appears to be missing is any sense of urgency to address the DDoS threat on a coordinated, global scale.

That’s probably because at least for now, the criminals at the helm of these huge DDoS crime machines are content to use them to launch petty yet costly attacks against targets that suit their interests or whims.

For example, the massive 620 Gbps attack that hit my site this week was an apparent retaliation for a story I wrote exposing two Israeli men who were arrested shortly after that story ran for allegedly operating vDOS — until recently the most popular DDoS-for-hire network. The traffic hurled at my site in that massive attack included the text string “freeapplej4ck,” a reference to the hacker nickname used by one of vDOS’s alleged co-founders.

Most of the time, ne’er-do-wells like Applej4ck and others are content to use their huge DDoS armies to attack gaming sites and services. But the crooks maintaining these large crime machines haven’t just been targeting gaming sites. OVH, a major Web hosting provider based in France, said in a post on Twitter this week that it was recently the victim of an even more massive attack than hit my site. According to a Tweet from OVH founder Octave Klaba, that attack was launched by a botnet consisting of more than 145,000 compromised IP cameras and DVRs.

I don’t know what it will take to wake the larger Internet community out of its slumber to address this growing threat to free speech and ecommerce. My guess is it will take an attack that endangers human lives, shuts down critical national infrastructure systems, or disrupts national elections.

But what we’re allowing by our inaction is for individual actors to build the instrumentality of tyranny. And to be clear, these weapons can be wielded by anyone — with any motivation — who’s willing to expend a modicum of time and effort to learn the most basic principles of their operation.

The sad truth these days is that it’s a lot easier to censor the digital media on the Internet than it is to censor printed books and newspapers in the physical world. On the Internet, anyone with an axe to grind and the willingness to learn a bit about the technology can become an instant, self-appointed global censor.

I sincerely hope we can address this problem before it’s too late. And I’m deeply grateful for the overwhelming outpouring of support and solidarity that I’ve seen and heard from so many readers over the past few days. Thank you.


Digital Attack Map


Top daily DDoS attacks worldwide


Digital Attack Map is a live data visualization of DDoS attacks around the globe, built through a collaboration between Google Ideas and Arbor Networks. The tool surfaces anonymous attack traffic data to let users explore historic trends and find reports of outages happening on a given day.
Why?
DDoS Attacks Matter

Distributed Denial of Service (DDoS) attacks can be used to make important online information unavailable to the world. Sites covering elections are brought down to influence their outcome, media sites are attacked to censor stories, and businesses are taken offline by competitors looking for a leg up. Protecting access to information is important for the Internet and important for free expression.
Visualizing Trends

Understanding the raw data behind DDoS Attacks is not intuitive. As a result, the impact, scale and scope of the challenge can be easily overlooked. We hope this tool allows more people to understand the challenges posed by DDoS attacks. We also hope it triggers a dialogue about how we can work together to reduce the threat of DDoS Attacks, improving the Internet for everyone.
Who?

Google Ideas is a think/do tank at Google that explores how technology can enable people to confront threats in the face of conflict, instability or repression. We connect users, experts and engineers to research and seed new technology-driven initiatives. Google Ideas worked in partnership with Google’s Big Picture Team to design and develop the Digital Attack Map.

The “Big Picture” team, a part of Google research, creates interactive visualizations to captivate and delight users. They blend algorithmic, data-driven approaches with fluid design to make complex data more accessible. They portray large-scale data sets — such as books, videos, images, news, or social behavior — in creative ways that simultaneously educate and entertain.

DDoS attack data is provided by Arbor Networks. Established in 2000, Arbor provides network security and management solutions for some of the world’s largest and most complex networks. DDoS attack data is sourced from Arbor’s ATLAS® global threat intelligence system. To learn more, visit the ATLAS Threat Portal.
Powered by Google Ideas. DDoS data ©2013, Arbor Networks, Inc.

Embedded Linux / ARM consultant living and working in Taipei

warmcat git – libwebsockets – libwebsockets git
Posts

February 1, 2017
RISC-V and Microsemi Polarfire on Fedora 27
September 5, 2017
MIPI I3C
August 18, 2017
SLA 3D printing
August 13, 2017
Implementing ssh and scp serving with libwebsockets
August 12, 2017
Mailman and captcha
Nov 21, 2016
Let’s play, “What’s my ESD rating”
Nov 6, 2016
ICE5 FPGA Where did all the LUTs go?
Nov 1, 2016
Visualizing bulk samples with a statistical summarizer
Oct 17, 2016
Generic MOSFET Power Switching
Oct 7, 2016
Advantages, Limitations and Costs of Galvanic Isolation
Oct 4, 2016
Driving Piezo Sounders
Sep 26, 2016
SPI as video and alpha compositor
Sep 20, 2016
Hyperram Bus Interface Unit and Arbitrator
Sep 19, 2016
ICE5 Hyperbus DDR, PLLs Implementation tips
Sep 8, 2016
In Praise of Kicad
Sep 7, 2016
Lattice’s Unintended Darwinism in Tech Support
Sep 2, 2016
Hyperbus and Hyperram
Aug 26, 2016
ST7735 TFT LCD Goodness
Aug 15, 2016
Getting started with ICE40 Ultra FPGAs
Jul 22, 2016
ESP8266 Wifi module on Linux
Mar 29, 2016
Silego GreenPAK crosses Analogue, CPLD and FPGA
Dec 21, 2015
Hall-effect current sensing
Nov 29, 2015
mbed3 libwebsockets port
Nov 23, 2015
HDMI Audio on Hikey
Nov 3, 2015
Mbed3 starting libwebsockets port
Nov 1, 2015
Mbed3 diving into network
Oct 31, 2015
Mbed3 registry and deps
Oct 31, 2015
Mbed3 fixing the client app and library
Oct 30, 2015
Mbed3 and Minar
Oct 29, 2015
The mbed maze
Oct 25, 2015
HDMI Capture and Analysis FPGA Project 6
Oct 25, 2015
HDMI Capture and Analysis FPGA Project 5
Oct 23, 2015
HDMI Capture and Analysis FPGA Project 4
Oct 22, 2015
HDMI Capture and Analysis FPGA Project 3
Oct 21, 2015
HDMI Capture and Analysis FPGA Project 2
Oct 20, 2015
HDMI Capture and Analysis FPGA Project
Sep 23, 2013
Nokia Msft Lol
Sep 19, 2013
Adios WordPress
Jan 12, 2013
libwebsockets.org
Mar 6, 2011
libwebsockets new features
Feb 11, 2011
Nokia failure
Jan 22, 2011
libwebsockets now with 04 protocol and simultaneous client / server
Nov 29, 2010
New NXP LPC32x0 in Qi bootloader
Nov 8, 2010
libwebsockets now with SSL / WSS
Nov 1, 2010
libwebsockets – HTML5 Websocket server library in C
Feb 12, 2010
Don’t let Production Test Be Special
Feb 8, 2010
Fosdem and the Linux Cross Niche
Feb 8, 2010
Bootloader Envy
May 21, 2009
Whirlygig Verification and rngtest analysis
May 21, 2009
Whirlygig PCB
May 23, 2008
Exhaustion and the GPL
Nov 24, 2007
Whirlygig GPL’d HWRNG
Nov 15, 2007
FIPS-140-2 and ENT validation vs ring RNG
Nov 14, 2007
Diehard validation vs ring RNG
Nov 12, 2007
Ring oscillator RNG performance
Nov 7, 2007
Adding entropy to /dev/random
Oct 25, 2007
Drumbeat
Oct 25, 2007
CE Technical Documentation
Sep 28, 2007
Heading deeper into the noise
Sep 18, 2007
QPSK demodulator / slicer / correlator vs noise
Sep 18, 2007
Magic Correlator and baseband QPSK
Sep 17, 2007
AT91RM9200 FIQ FAQ and simple Example code / patch
Sep 12, 2007
Magic correlator code analysis
Sep 12, 2007
Autocorrelation code and weak signal recovery
Sep 6, 2007
Embedded procmail and dovecot
Sep 5, 2007
selinux magic for gitweb
Sep 5, 2007
Forcing 1&1 to make F7
Jul 30, 2007
It’s Fedora, Jim: but not as we know it
Jul 16, 2007
mac80211 Injection patches accepted in Linus git tree
Jun 8, 2007
Jamendobox
May 25, 2007
The Alignment Monster
Mar 31, 2007
Bonsai code-kittens
Mar 4, 2007
Nasty Crash at Luton Airport
Mar 3, 2007
Out of your tree
Jan 27, 2007
Octotux Packaged Linux Distro
Oct 3, 2006
Your code might be Free, but what about your data?
Sep 21, 2006
Rights and Wrongs of Hacking Source Licenses at Distribution Time
Sep 19, 2006
GPL2 “or later” distributor sends mixed signals when distributing as GPL2
Sep 18, 2006
I’ll make you free if I have to lock you up!
Sep 14, 2006
Old Tech
Sep 14, 2006
Next Generation
Aug 28, 2006
libtool and .la files
Aug 16, 2006
Greylisting is back in town
Aug 14, 2006
Dead Languages
Aug 14, 2006
Conexant ADSL Binary driver damage
Aug 11, 2006
Autotools crosscompile hall of shame
Aug 9, 2006
RT73 Belkin stick depression
Aug 9, 2006
Postfix relaying for Dynamic clients
Jul 27, 2006
VMware networking in Fedora
Jul 16, 2006
Behind the Embedded sofa
Jul 15, 2006
Yahoeuvre broken by Yahoo changes
Jul 12, 2006
Interesting AT91 clock quirk
Jul 11, 2006
Chip of weirdness
Jul 11, 2006
Broadcomm and WPA
Jul 9, 2006
Cursed AMD64 box
Jul 9, 2006
Coolest Mailserver
Jul 9, 2006
Blog logic

subscribe via RSS
Warmcat

andy@warmcat.com

lws-team


How Qualcomm shook down the cell phone industry for almost 20 years

Ars Technica


We did a deep-dive into the 233-page ruling declaring Qualcomm a monopolist.

by Timothy B. Lee – May 30, 2019 10:00pm CEST

In 2005, Apple contacted Qualcomm as a potential supplier for modem chips in the first iPhone. Qualcomm’s response was unusual: a letter demanding that Apple sign a patent licensing agreement before Qualcomm would even consider supplying chips.

“I’d spent 20 years in the industry, I had never seen a letter like this,” said Tony Blevins, Apple’s vice president of procurement.

Most suppliers are eager to talk to new customers—especially customers as big and prestigious as Apple. But Qualcomm wasn’t like other suppliers; it enjoyed a dominant position in the market for cellular chips. That gave Qualcomm a lot of leverage, and the company wasn’t afraid to use it.

Blevins’ comments came when he testified earlier this year in the Federal Trade Commission’s blockbuster antitrust case against Qualcomm. The FTC filed this lawsuit in 2017 partly at the urging of Apple, which had chafed under Qualcomm’s wireless chip dominance for a decade.

Last week, a California federal judge provided the FTC and Apple with sweet vindication. In a scathing 233-page opinion [PDF], Judge Lucy Koh ruled that Qualcomm’s aggressive licensing tactics had violated American antitrust law.

I read every word of Judge Koh’s book-length opinion, which portrays Qualcomm as a ruthless monopolist. The legal document outlines a nearly 20-year history of overcharging smartphone makers for cellular chips. Qualcomm structured its contracts with smartphone makers in ways that made it almost impossible for other chipmakers to challenge Qualcomm’s dominance. Customers who didn’t go along with Qualcomm’s one-sided terms were threatened with an abrupt and crippling loss of access to modem chips.

“Qualcomm has monopoly power over certain cell phone chips, and they use that monopoly power to charge people too much money,” says Charles Duan, a patent expert at the free-market R Street Institute. “Instead of just charging more for the chips themselves, they required people to buy a patent license and overcharged for the patent license.”

Now, all of that dominance might be coming to an end. In her ruling, Koh ordered Qualcomm to stop threatening customers with chip cutoffs. Qualcomm must now re-negotiate all of its agreements with customers and license its patents to competitors on reasonable terms. And if Koh’s ruling survives the appeals process, it could produce a truly competitive market for wireless chips for the first time in this century.
Qualcomm’s perfect profit machine

Different cellular networks operate on different wireless networking standards, and these standards change every few years. For much of the last 20 years, Qualcomm has enjoyed a lead—and in some cases a stranglehold—on chips that support major cellular standards. So if a smartphone company aspired to sell its wares around the world, it had little choice but to do business with Qualcomm.

For example, in the early 2010s Qualcomm enjoyed a big lead on chips for the CDMA standards favored by Verizon and Sprint in the US and some other carriers overseas. Qualcomm Chief Technology Officer James Thompson bluntly explained in an internal 2014 email to CEO Steve Mollenkopf how this gave the company leverage over Apple.

“We are the only supplier today that can give them a global launch,” Thompson wrote, according to court documents. “In fact, without us they would lose big parts of North America, Japan and China. That would really hurt them.”

It wasn’t just Apple. BlackBerry was in a similar predicament around 2010. In a deposition, BlackBerry executive John Grubbs stated that without access to Qualcomm’s chips, “30 percent of our device sales would have gone away overnight if we couldn’t have supplied CDMA devices.”

Over the last two decades, Qualcomm has had deals in place with most of the leading cell phone makers, including LG, Sony, Samsung, Huawei, Motorola, Lenovo, ZTE, and Nokia. These deals gave Qualcomm enormous leverage over these companies—leverage that allowed Qualcomm to extract patent royalty rates that were far higher than those earned by other companies with similar patent portfolios.

Qualcomm’s patent licensing fees were calculated based on the value of the entire phone, not just the value of chips that embodied Qualcomm’s patented technology. This effectively meant that Qualcomm got a cut of every component of a smartphone—most of which had nothing to do with Qualcomm’s cellular patents.

“Qualcomm charges us more than everybody else put together,” Apple executive Jeff Williams said. “We’ve never seen such a significant licensing fee tied to any other IP we license,” said Motorola’s Todd Madderom.

Internal Qualcomm documents supported these claims. One showed that Qualcomm’s patent licensing operation brought in $7.7 billion in 2016—more than the combined patent licensing revenue of 12 other companies with significant patent portfolios.
No license, no chips

These high royalties reflected an unusual negotiating tactic called “no license, no chips.” No one could buy Qualcomm’s cellular chips unless they first signed a license to Qualcomm’s patent portfolio. And the terms of these patent deals were heavily tilted in Qualcomm’s favor.

Once a phone maker had signed its first deal with Qualcomm, Qualcomm gained even more leverage. Qualcomm had the right to unilaterally terminate a smartphone maker’s chip supply once the patent licensing deal expired.

“If we are unable to source the modem, we are unable to ship the handset,” said Motorola executive Todd Madderom in a deposition. “It takes many months of engineering work to design a replacement solution, if there is even a viable one on the market that supports the need.”

That made Qualcomm’s customers extremely vulnerable as they neared the expiration of a patent licensing deal. If a customer tried to negotiate more favorable terms—to say nothing of formally challenging Qualcomm’s patent claims in court—Qualcomm could abruptly cut off the company’s chip supply.

“We explained that we were contemplating terminating the license,” Lenovo executive Ira Blumberg testified during the trial. A senior Qualcomm executive “was very calm about it, and said we should feel free to do that, but if we did, we would no longer be able to purchase Qualcomm chips.”

“You’re looking at months and months, if not a year or more, without supply,” Blumberg said in a deposition. That “would be, if not fatal, then nearly fatal to almost any company in this business.”

Judge Koh found that Qualcomm used this tactic over and over again over the last 20 years: Qualcomm threatened to cut off Samsung’s chip supply in 2001, LG’s chip supply in 2004, Sony and ZTE’s chip supplies in 2012, Huawei and Lenovo’s chip supplies in 2013, and Motorola’s chip supply in 2015.
Qualcomm’s chip deals boxed out competitors

An obvious question is how Qualcomm maintained its stranglehold over the supply of modem chips. Partly, Qualcomm employed talented engineers and spent billions of dollars keeping its chips on the cutting edge.

Qualcomm also bolstered its dominant position by selling systems on a chip that included a CPU and other functions as well as modem functionality. This yielded significant cost and power savings, and it was hard for smaller chipmakers to compete with.

But besides these technical reasons, Qualcomm also structured its agreements with customers to make it difficult for other companies to break into the cellular modem chip business.

Qualcomm’s first weapon against competitors: patent licensing terms requiring customers to pay a royalty on every phone sold—not just phones that contained Qualcomm’s wireless chips. This gave Qualcomm an inherent advantage in competition with other chipmakers. If another chipmaker tried to undercut Qualcomm’s chips on price, Qualcomm could easily afford to cut the price of its own chips, knowing that the customer would still be paying Qualcomm a hefty patent licensing fee on every phone.
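A toy example with hypothetical numbers makes the asymmetry concrete: because the royalty is owed on every phone no matter whose modem is inside, a rival must survive on its chip price alone, while Qualcomm competes backed by a guaranteed royalty on the same phone.

    # All numbers are hypothetical, for illustration only.
    phone_price = 400.0
    royalty = phone_price * 0.0325  # $13 owed to Qualcomm on EVERY phone sold

    rival_chip_price = 18.0
    qualcomm_chip_price = 17.0      # Qualcomm undercuts the rival

    # Qualcomm's take per phone under each outcome:
    take_if_rival_wins = royalty                           # $13 with zero chip revenue
    take_if_qualcomm_wins = qualcomm_chip_price + royalty  # $30

    # The rival's ceiling is its chip price; Qualcomm's floor is the royalty.
    print(take_if_rival_wins, take_if_qualcomm_wins, rival_chip_price)

Qualcomm can cut its chip price below a rival’s and still come out ahead on the bundle, while the rival collects nothing at all from phones it doesn’t win.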

Judge Koh draws a direct parallel to licensing behavior that got Microsoft in legal trouble in the 1990s. Microsoft would offer PC makers a discount if they agreed to pay Microsoft a licensing fee for every PC sold—whether or not the PC shipped with a copy of MS-DOS. This effectively meant that a PC maker had to pay twice if it shipped a PC running a non-Microsoft operating system. In 1999, a federal judge ruled that a reasonable jury could conclude this arrangement violated antitrust law by making it difficult for Microsoft’s competitors to break into the market.

And some of Qualcomm’s licensing deals included terms that explicitly discouraged companies from using non-Qualcomm wireless chips. Qualcomm would offer cell phone makers rebates on every Qualcomm chip they bought. But cell phone makers would only get those rebates if they used Qualcomm chips for at least 85 percent — or in some cases even 100 percent — of the phones they sold.

For example, Apple signed a deal with Qualcomm in 2013 that effectively guaranteed that Apple would exclusively use Qualcomm’s wireless chips. Under the deal, Qualcomm paid Apple hundreds of millions of dollars in rebates and marketing incentives between 2013 and 2016. However, Qualcomm would stop making those payments if Apple started selling an iPhone or iPad with a non-Qualcomm cellular chip.

Apple was even required to pay back some of those funds if it used non-Qualcomm cellular chips before February 2016. One internal Qualcomm email calculated that Apple would owe $645 million if it launched an iPhone with a non-Qualcomm cellular chip in 2015.

Qualcomm made similar deals with other major cell phone makers. In 2003, Qualcomm signed a 10-year deal granting Huawei a reduced royalty rate of 2.65 percent if Huawei purchased 100 percent of its CDMA chips for the Chinese market from Qualcomm. If Huawei bought non-Qualcomm CDMA chips, the royalty rate jumped to five percent or more.

A 2004 deal gave LG rebates if LG purchased at least 85 percent of its CDMA chips from Qualcomm. The deal also required LG to pay a higher patent royalty rate when it sold phones with non-Qualcomm cellular chips. A 2018 deal makes incentive payments to Samsung if the company buys 100 percent of its “premium” cellular chips from Qualcomm—as well as lower thresholds (the exact percentages are redacted) for lower-tier chips.

“It is unlikely there will be enough standalone modem volume”

These exclusive or near-exclusive terms were important because huge scale is required to profitably enter the cellular modem business. It costs hundreds of millions of dollars to design a competitive cellular chip from scratch. And designs are only useful for a few years before they become obsolete.

This means that it only makes sense for a company to enter this business if it already has some major customers lined up—customers willing and able to order millions of chips in the first year. And there are only a few customers capable of placing those kinds of orders.

Qualcomm’s executives clearly understood this. In a 2010 internal email, Qualcomm’s Steve Mollenkopf wrote that “there are significant strategic benefits” to signing an exclusive deal with Apple because “it is unlikely that there will be enough standalone modem volume to sustain a viable competitor without that slot.”

This was more than a theoretical issue. Apple hated being dependent on Qualcomm and was looking to cultivate a second source for modem chips. The strongest candidate was Intel—which didn’t have a significant modem chip business but was interested in building one. By 2012, Apple was already planning to have Intel design a cellular chip for the 2014 iPad.

Apple’s 2013 deal with Qualcomm forced the company to put that plan—and its larger relationship with Intel’s cellular team—on the back burner. Apple’s Blevins testified that “we cut off the work we were doing with Intel on an iPad” after the deal was signed. And without Apple as an anchor customer, Intel had to shelve its own modem chip work as well.

Intel and Apple resumed their collaboration ahead of the 2016 expiration of Apple’s deal with Qualcomm. That year Apple introduced the iPhone 7. Some units shipped with Qualcomm modems while others used new Intel modems.

Apple’s commitment to buy millions of Intel wireless chips allowed Intel to pour resources into its development efforts. After securing its deal with Apple, Intel acquired VIA Telecom, one of the few companies struggling to compete with Qualcomm in the CDMA chip market. Intel needed CDMA chips to make its wireless offerings competitive worldwide and lacked the capacity to develop them internally on the schedule Apple demanded. Acquiring VIA helped Intel accelerate its CDMA work. But Intel’s own projections showed that the VIA acquisition would not have been financially viable without the volume of business Apple promised to Intel.

The relationship with Apple helped Intel in other ways, too. The knowledge that the next iPhone would sport Intel cellular chips motivated network operators to help Intel test its chips on their networks. Intel also found that its status as an Apple supplier gave it more clout in standard-setting organizations.
The empire strikes back

Apple’s deal with Intel posed a serious threat to Qualcomm’s dominance of the cellular chip business. Once Intel developed the full range of cellular chips Apple needed for the iPhone, Intel could turn around and offer the same chips to other smartphone makers. That would improve every smartphone maker’s leverage when it came time for them to renew their patent licenses with Qualcomm. So, Qualcomm went to war with Apple and Intel.

Freed of Qualcomm’s chip supply threat, Apple began to challenge Qualcomm’s high patent royalty rates. Qualcomm responded by cutting Apple off from access to Qualcomm’s chips for new iPhone models, forcing Apple to rely entirely on Intel for the cellular chips in its 2018 models. Qualcomm sued Apple for patent infringement in courts around the world, while Apple pressed the Federal Trade Commission to investigate Qualcomm’s business practices.

The dispute put both Apple and Intel in a precarious position. Qualcomm was trying to use its patent arsenal to get iPhone sales banned in jurisdictions around the world. If Qualcomm scored a win in a major market, it could force Apple to come to the table. Then Qualcomm might force Apple to buy fewer Intel chips, endangering Intel’s wireless chip business—especially since other potential customers would be wary of leaping in front of Qualcomm’s patent buzzsaw.

At the same time, Apple was relying on Intel to keep its phones on the cutting edge of wireless technology. Intel successfully developed modem chips suitable for the 2017 and 2018 iPhone models, but the wireless industry is due to make a transition to 5G wireless technology over the next couple of years. The iPhone is a premium product that needs to support the latest wireless standards. If Intel failed to develop 5G chips quickly enough for use in the 2020 iPhone model, it could put Apple in an untenable position.

It appears that this latter scenario is what ultimately happened. Last month, Apple announced a wide-ranging settlement with Qualcomm that required Apple to pay for a six-year license to Qualcomm’s patents. Hours later, Intel announced that it was canceling work on 5G modem chips.

While we don’t know all the behind-the-scenes details, it appears that earlier this year Apple started to doubt Intel’s ability to deliver 5G modem chips quickly enough to meet Apple’s needs. That made Apple’s confrontational posture toward Qualcomm unviable, and Apple decided to cut a deal while it still had some leverage. Apple’s decision to make peace with Qualcomm instantly cut the legs out from under Intel’s modem chip efforts.
Qualcomm has long refused to license its patents to competitors

The story of Qualcomm’s battle with Apple and Intel illustrates how Qualcomm has used its patent portfolio to buttress its chip monopoly.

Chipmakers are ordinarily expected to acquire patents related to their chips and indemnify their customers for patent problems. But Qualcomm refused to license its patents to competitors, putting them in a difficult position.

“The prevailing message from all of the customers I engaged with was that they expected us to have a license agreement with Qualcomm before they would consider purchasing 3G chipsets from MediaTek,” said Finbarr Moynihan, an executive at chipmaker MediaTek.

If a chipmaker asked to license Qualcomm’s patents, Qualcomm would only offer a promise not to sue the chipmaker itself—not the chipmaker’s customers. Qualcomm also demanded that chipmakers—its own competitors—only sell chips to a Qualcomm-supplied list of “Authorized Purchasers” who had already licensed Qualcomm’s patents.

Needless to say, this put Qualcomm’s competitors—and would-be competitors—at a disadvantage. Qualcomm’s patent licensing regime not only allowed it to impose a de facto tax on its competitors’ sales, it effectively let Qualcomm choose its competitors’ customers. Indeed, Qualcomm demanded that other chipmakers provide it with data on how many chips they had sold to each of their customers—sensitive commercial data that would allow Qualcomm to figure out exactly how much pressure it needed to apply to prevent a rival from gaining traction.

An internal Qualcomm presentation prepared within days of a 2009 deal with MediaTek (“MTK” in this slide) provides a comically candid visualization of Qualcomm’s anticompetitive approach:
[Slide: an internal Qualcomm presentation on the 2009 MediaTek deal]

“WCDMA SULA” refers to a Qualcomm patent license. Qualcomm believed that limiting MediaTek to Qualcomm-licensed companies would prevent MediaTek from getting more than 50 customers for its forthcoming 3G chips. Meanwhile, Qualcomm aimed to deprive MediaTek of cash it could invest in the chips.

A few smaller chipmakers like MediaTek and VIA agreed to Qualcomm’s one-sided terms. Even more significant, a number of more formidable companies were deterred from entering the market—or encouraged to exit—by Qualcomm’s tactics.

Qualcomm twice refused to grant patent licenses to Intel—in 2004 and 2009—delaying Intel’s entry into the wireless modem business. A joint chip venture between Samsung and NTT DoCoMo called Project Dragonfly was rebuffed by Qualcomm in 2011; Samsung wound up making some modem chips for its own use but not offering them to others. Qualcomm refused LG a patent license for a potential modem chip in 2015.

Qualcomm refused patent licenses to Texas Instruments and Broadcom ahead of their departures from the modem business in 2012 and 2014, respectively.
Fair, reasonable, and non-discriminatory

When a standards group is developing a new wireless standard, it assembles a list of patents that are essential to implement the standard—these are known as standards essential patents. It then asks patent holders to promise to license those patents on fair, reasonable, and non-discriminatory (FRAND) terms. Patent holders usually agree to these terms because incorporating a patent into a standard enhances its value.

But Qualcomm doesn’t seem to be honoring its FRAND commitments. FRAND patents are supposed to be available on the same terms to anyone who wants to license them—either customers or competitors. But Qualcomm refuses to license its standards-essential patents to other chipmakers.

And when handset manufacturers tried to license Qualcomm’s standard-essential patents, Qualcomm usually bundled them together with its larger patent portfolio, which included patents that were not subject to FRAND commitments and in many cases had nothing to do with modem chips. As a result, handset makers effectively had to pay inflated prices for Qualcomm’s standards-essential patents.

But no one was in a good position to challenge Qualcomm’s creative interpretation of FRAND requirements. Qualcomm didn’t directly sue other chipmakers, so there was no easy way for them to challenge Qualcomm’s policies. Meanwhile, Qualcomm’s chip supply threats deterred customers from challenging Qualcomm’s licensing practices.

Judge Koh ruled that Qualcomm’s failure to honor its FRAND commitments was a violation of antitrust law. Qualcomm had an obligation to license its patents to anyone who wanted a license, she ruled, and an obligation to do so at reasonable rates—rates far lower than those Qualcomm has been charging in recent years.
No more “no license, no chips”

Judge Koh ordered several changes designed to stop Qualcomm’s anticompetitive conduct and restore some competitive balance to the marketplace.

The most important change is to decouple Qualcomm’s patent licensing efforts from its chip business. Koh ordered Qualcomm not to “condition the supply of modem chips on a customer’s patent license status.” Qualcomm must renegotiate all of its patent licenses without threatening anyone’s supply of modem chips.

Koh also ordered Qualcomm to license its standards-essential patents to other chipmakers on FRAND terms, submitting to arbitration, if necessary, to determine fair royalty rates. These licenses must be “exhaustive”—meaning that Qualcomm is precluded from suing a chipmaker’s customers for violating patents licensed by the chipmaker.

Third, Koh banned Qualcomm from entering into exclusivity deals with customers. That means no more rebates if a customer buys 85 or 100 percent of its chips from Qualcomm.

Patent expert Charles Duan argues that Koh’s ruling “deals with the largest problems that people have observed in terms of Qualcomm’s behavior.”

A big winner here could be Samsung, one of the few major technology companies to have retained significant in-house modem capabilities. In recent years, Samsung has often shipped smartphones with its own Exynos chips in some markets while shipping Qualcomm-based versions in others—particularly the United States and China. It’s not clear exactly why it does this, but a reasonable guess is that Samsung believes it’s more vulnerable to Qualcomm’s patent threats in those countries.

Now it’ll be easier for Samsung to use its own chips worldwide, simplifying product design and giving the company greater economies of scale for its own chips. Eventually, Samsung might start offering those chips to other smartphone makers—as it tried to do back in 2011.

On the other hand, Koh’s ruling might come too late for Intel, which announced it was shuttering its 5G chip efforts last month and may not have the appetite (or enough time) to restart them.

Koh’s most important requirement, however, may be her mandate for seven years of monitoring by the FTC and the courts.

“I imagine that over the next year or so Qualcomm will come up with some new way to get back to [its] old revenue model,” Duan told Ars in an email. It will take continued vigilance by the authorities to ensure Qualcomm complies with both the letter and the spirit of Koh’s ruling.

But first the ruling must survive an appeal to the Ninth Circuit Court of Appeals. On Tuesday, Qualcomm asked Koh to put her ruling on hold until the appeals court has a chance to weigh in. Qualcomm’s customers and competitors won’t be able to truly breathe easy until the appeals process is over.
Promoted Comments

Chipotle
A brief response from Qualcomm: https://www.qualcomm.com/ftc
(for any of you interested in it)
johnsonwax
bigmushroom wrote:
This is a bit of a hit piece. The author doesn’t even mention the license fee.

It seems that Qualcomm charges Apple $7.50 while Apple wanted to only pay $1.50.

http://fortune.com/2019/04/16/apple-qua … ettlement/

Is $7.50 really inappropriate for a $1,000 smartphone whose primary function is to connect to cellular networks?

And what’s wrong with charging a percentage of the entire device? Why shouldn’t Apple pay more than a cheaper manufacturer? All kinds of fees in the real world are percentages, such as sales taxes.

I have no doubt that Qualcomm is a bit greedy but I don’t think their practices are outrageous.

Well, let’s think about this. If Apple needed to license 500 patents, and each wanted 1% of the device price, then Apple would owe $5,000 in licensing for a $1,000 device. If they raised the price to $5,000 in order to break even, they’d owe $25,000. So, yeah, I’d say that’s unreasonable. By licensing only against the value of the components the patent contributes to, the royalties can’t exceed the value of the device.

Also keep in mind that Apple is selling 250 million units per year, so we’re not talking about a disagreement over $6, but over $1.5B. In all likelihood, Apple could have replicated Qualcomm’s research for that IP for $1.5B, but the federal government prohibits them from avoiding that license fee. Qualcomm should certainly be able to recover their R&D costs, but that recovery is supposed to be a shared effort – that’s the point of the patent – to share the load. If a single customer is being held to the entire cost, then we’re well into rent-seeking territory, because it likely would have been better for the customer if the patent had never been granted, and that can’t possibly be the goal of the patent system.
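
That royalty-stacking arithmetic can be reproduced as a short sketch (the 500-patent, 1-percent-each scenario is the commenter's hypothetical, not an actual license structure):

```python
# Royalty stacking when every license takes a cut of the device price.
# 500 patents at 1% each is the comment's hypothetical scenario.
n_patents = 500
rate_per_patent = 0.01

def royalty_owed(device_price: float) -> float:
    return device_price * rate_per_patent * n_patents

print(royalty_owed(1_000))   # 5000.0: owed on a $1,000 device
print(royalty_owed(5_000))   # 25000.0: raising the price raises the royalty
# Pricing the royalties into the device only increases what is owed,
# so a whole-device royalty base can never be covered; a component-value
# base caps total royalties at the value of the components themselves.
```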
Timothy B. Lee, senior tech policy reporter
Wildbiftek wrote:

There’s no doubt that given time, some other (possibly free) standard would have emerged but the industry adopted Qualcomm’s patents DESPITE the fact that they were royalty encumbered because faster speed to market and doing the foundational technical things well are definitely worth something.

Given Apple’s efforts with the backing of Intel’s warchest, much of the time not paying Qualcomm royalties, it’s unlikely you’d have seen a 4G iPhone before 2016 and a 5G one before 2021. The value of these capabilities is substantial, as important apps such as Uber or Google Maps would be far less compelling if one had to find WiFi and acquire the password before using those functionalities.

Maybe I’m misunderstanding things, but didn’t “the industry adopt Qualcomm’s patents” (as well as patents from a bunch of other companies) in exchange for promises from those companies to license them on fair, reasonable, and non-discriminatory terms? Qualcomm could have refused to participate in that process, in which case the industry likely would have chosen a different set of standards that aren’t encumbered with Qualcomm’s patents.

Instead, Qualcomm chose to participate in the standards process and get its own patents included in industry wireless standards. But then when people asked Qualcomm to live up to its FRAND commitments, Qualcomm threatened to cut off the chip supplies of anyone who made trouble.

One other relevant episode in the Koh ruling that I didn’t include in my article: around 2007 Intel was trying to convince the industry to adopt WiMax, which posed a threat to Qualcomm. Qualcomm signed a deal with Apple in which Apple would publicly announce it would not be using WiMax. Qualcomm also paid Apple rebates it would have to pay back if Apple started selling handsets that used WiMax.

So the industry partly adopted Qualcomm’s technology because it was faster than what came before. But the industry also adopted Qualcomm technologies partly because Qualcomm was able to throw its weight around and discourage the adoption of promising alternatives.
mebeSajid
Tim Lee wrote:

I’m not a lawyer but I don’t think this aspect of Qualcomm’s conduct would have violated antitrust law on its own. If you own a patent you’re supposed to be able to collect a royalty from everyone who uses the technology it covers.

The problem is the other stuff Qualcomm did in conjunction with its patents—threatening to cut off chip supplies, reneging on FRAND commitments, refusing to license to competitors, etc. Those tactics helped Qualcomm secure much higher royalties than they would have gotten without them, which in turn gave them a lot more leeway to offer customers rebates that undercut competitors’ prices. In some cases Qualcomm also tied the rebates specifically to customers promising not to use competitors’ products.

Microsoft did some stuff like this in the 1990s. I would expect they’re more careful about it today, though I haven’t looked into it in any detail.

Exactly. I’m not an antitrust lawyer (I am a patent litigator, however), but Qualcomm’s use of its position as the leader in making baseband chips to force customers to take a patent license agreement seems like a pretty clearcut Sherman Act violation. On top of that, cutting off supply if customers didn’t take a license agreement, and using that to impose fairly onerous licensing terms, seems like a pretty straightforward antitrust case. If you read the opinion (not specifically directed to Tim), you’ll see that Judge Koh pointed out very early on that Qualcomm straight up lied when it said that it had never cut off supply to a customer.

I’ve litigated multiple SEP cases – both Qualcomm’s rates and insistence on using the entire handset as the royalty base are pretty shocking to me. I’d be shocked if a Court imposed either that royalty rate or that royalty base in litigation.
mebeSajid
Wildbiftek wrote:

What would guarantee the absence of competition would be if Qualcomm kept all of its patents to itself and didn’t submit them to standards committees for open licensing. You would get a completely vertically integrated giant like Intel who had a controlling stake in CPU standards and implementations with minimal competition for some 2 decades, during which the quality of CPUs languished while pricing was high. There was no competition due to largely gate-keeping patents (as well as the difficulty of rewriting decades of software…) and little innovation from Intel itself from the early 2000’s until around 2015.

This manages to both miss the point and get basic facts wrong. First, there isn’t a CPU “standard”, at least as we’re using the term here. There’s an x86 instruction set with contributions from both Intel and AMD (more Intel than AMD), but its origin is largely proprietary: nothing in x86 has gone through a standard-setting organization.

In contrast, there are robust standard-setting organizations that participate in setting standards for wireless communication, and there are very important reasons for having those standards: interoperability and predictability. Because of this, being able to introduce one’s technology into a standard is quite lucrative (aside from any direct product revenue), and lots of parties try to get their IP into standards. Infringement of IP that reads on a standard also tends to make for a fairly straightforward patent case.

For this reason, the tradeoff for participating in the standard-setting process is an obligation to license any IP on fair, reasonable, and non-discriminatory terms. Standard-setting organizations don’t want any one party to be able to “hold up” an industry.

Quote:
An open FRAND based licensing scheme for standards in the case of cellular standards allowed for vibrant competition in SoC implementations from multiple companies, a fact that was distorted by the FTC and missed by Koh because of their myopic focus on “premium standalone modems” where only Apple was a big customer. There is also value inherent to good standards which are not cheap to develop and may not ultimately be adopted either.

The market for premium standalone modems was, by itself, worth several billion dollars. Further, while standards certainly aren’t cheap to develop, Qualcomm was far from the only party to contribute to those standards. You make it sound like Qualcomm alone developed 3G and LTE. That is not the case.

Timothy B. Lee / Timothy is a senior reporter covering tech policy, blockchain technologies and the future of transportation. He lives in Washington DC.