Embedded Linux / ARM consultant living and working in Taipei

Posts

February 1, 2017
RISC-V and Microsemi Polarfire on Fedora 27
September 5, 2017
MIPI I3C
August 18, 2017
SLA 3D printing
August 13, 2017
Implementing ssh and scp serving with libwebsockets
August 12, 2017
Mailman and captcha
Nov 21, 2016
Let’s play, “What’s my ESD rating”
Nov 6, 2016
ICE5 FPGA Where did all the LUTs go?
Nov 1, 2016
Visualizing bulk samples with a statistical summarizer
Oct 17, 2016
Generic MOSFET Power Switching
Oct 7, 2016
Advantages, Limitations and Costs of Galvanic Isolation
Oct 4, 2016
Driving Piezo Sounders
Sep 26, 2016
SPI as video and alpha compositor
Sep 20, 2016
Hyperram Bus Interface Unit and Arbitrator
Sep 19, 2016
ICE5 Hyperbus DDR, PLLs Implementation tips
Sep 8, 2016
In Praise of Kicad
Sep 7, 2016
Lattice’s Unintended Darwinism in Tech Support
Sep 2, 2016
Hyperbus and Hyperram
Aug 26, 2016
ST7735 TFT LCD Goodness
Aug 15, 2016
Getting started with ICE40 Ultra FPGAs
Jul 22, 2016
ESP8266 Wifi module on Linux
Mar 29, 2016
Silego GreenPAK crosses Analogue, CPLD and FPGA
Dec 21, 2015
Hall-effect current sensing
Nov 29, 2015
mbed3 libwebsockets port
Nov 23, 2015
HDMI Audio on Hikey
Nov 3, 2015
Mbed3 starting libwebsockets port
Nov 1, 2015
Mbed3 diving into network
Oct 31, 2015
Mbed3 registry and deps
Oct 31, 2015
Mbed3 fixing the client app and library
Oct 30, 2015
Mbed3 and Minar
Oct 29, 2015
The mbed maze
Oct 25, 2015
HDMI Capture and Analysis FPGA Project 6
Oct 25, 2015
HDMI Capture and Analysis FPGA Project 5
Oct 23, 2015
HDMI Capture and Analysis FPGA Project 4
Oct 22, 2015
HDMI Capture and Analysis FPGA Project 3
Oct 21, 2015
HDMI Capture and Analysis FPGA Project 2
Oct 20, 2015
HDMI Capture and Analysis FPGA Project
Sep 23, 2013
Nokia Msft Lol
Sep 19, 2013
Adios WordPress
Jan 12, 2013
libwebsockets.org
Mar 6, 2011
libwebsockets new features
Feb 11, 2011
Nokia failure
Jan 22, 2011
libwebsockets now with 04 protocol and simultaneous client / server
Nov 29, 2010
New NXP LPC32x0 in Qi bootloader
Nov 8, 2010
libwebsockets now with SSL / WSS
Nov 1, 2010
libwebsockets – HTML5 Websocket server library in C
Feb 12, 2010
Don’t let Production Test Be Special
Feb 8, 2010
Fosdem and the Linux Cross Niche
Feb 8, 2010
Bootloader Envy
May 21, 2009
Whirlygig Verification and rngtest analysis
May 21, 2009
Whirlygig PCB
May 23, 2008
Exhaustion and the GPL
Nov 24, 2007
Whirlygig GPL’d HWRNG
Nov 15, 2007
FIPS-140-2 and ENT validation vs ring RNG
Nov 14, 2007
Diehard validation vs ring RNG
Nov 12, 2007
Ring oscillator RNG performance
Nov 7, 2007
Adding entropy to /dev/random
Oct 25, 2007
Drumbeat
Oct 25, 2007
CE Technical Documentation
Sep 28, 2007
Heading deeper into the noise
Sep 18, 2007
QPSK demodulator / slicer / correlator vs noise
Sep 18, 2007
Magic Correlator and baseband QPSK
Sep 17, 2007
AT91RM9200 FIQ FAQ and simple Example code / patch
Sep 12, 2007
Magic correlator code analysis
Sep 12, 2007
Autocorrelation code and weak signal recovery
Sep 6, 2007
Embedded procmail and dovecot
Sep 5, 2007
selinux magic for gitweb
Sep 5, 2007
Forcing 1&1 to make F7
Jul 30, 2007
It’s Fedora, Jim: but not as we know it
Jul 16, 2007
mac80211 Injection patches accepted in Linus git tree
Jun 8, 2007
Jamendobox
May 25, 2007
The Alignment Monster
Mar 31, 2007
Bonsai code-kittens
Mar 4, 2007
Nasty Crash at Luton Airport
Mar 3, 2007
Out of your tree
Jan 27, 2007
Octotux Packaged Linux Distro
Oct 3, 2006
Your code might be Free, but what about your data?
Sep 21, 2006
Rights and Wrongs of Hacking Source Licenses at Distribution Time
Sep 19, 2006
GPL2 “or later” distributor sends mixed signals when distributing as GPL2
Sep 18, 2006
I’ll make you free if I have to lock you up!
Sep 14, 2006
Old Tech
Sep 14, 2006
Next Generation
Aug 28, 2006
libtool and .la files
Aug 16, 2006
Greylisting is back in town
Aug 14, 2006
Dead Languages
Aug 14, 2006
Conexant ADSL Binary driver damage
Aug 11, 2006
Autotools crosscompile hall of shame
Aug 9, 2006
RT73 Belkin stick depression
Aug 9, 2006
Postfix relaying for Dynamic clients
Jul 27, 2006
VMware networking in Fedora
Jul 16, 2006
Behind the Embedded sofa
Jul 15, 2006
Yahoeuvre broken by Yahoo changes
Jul 12, 2006
Interesting AT91 clock quirk
Jul 11, 2006
Chip of weirdness
Jul 11, 2006
Broadcomm and WPA
Jul 9, 2006
Cursed AMD64 box
Jul 9, 2006
Coolest Mailserver
Jul 9, 2006
Blog logic


Ars Technica


How Qualcomm shook down the cell phone industry for almost 20 years
We did a deep-dive into the 233-page ruling declaring Qualcomm a monopolist.

by Timothy B. Lee – May 30, 2019 10:00pm CEST

In 2005, Apple contacted Qualcomm as a potential supplier for modem chips in the first iPhone. Qualcomm’s response was unusual: a letter demanding that Apple sign a patent licensing agreement before Qualcomm would even consider supplying chips.

“I’d spent 20 years in the industry, I had never seen a letter like this,” said Tony Blevins, Apple’s vice president of procurement.

Most suppliers are eager to talk to new customers—especially customers as big and prestigious as Apple. But Qualcomm wasn’t like other suppliers; it enjoyed a dominant position in the market for cellular chips. That gave Qualcomm a lot of leverage, and the company wasn’t afraid to use it.

Blevins’ comments came when he testified earlier this year in the Federal Trade Commission’s blockbuster antitrust case against Qualcomm. The FTC filed this lawsuit in 2017 partly at the urging of Apple, which had chafed under Qualcomm’s wireless chip dominance for a decade.

Last week, a California federal judge provided the FTC and Apple with sweet vindication. In a scathing 233-page opinion [PDF], Judge Lucy Koh ruled that Qualcomm’s aggressive licensing tactics had violated American antitrust law.

I read every word of Judge Koh’s book-length opinion, which portrays Qualcomm as a ruthless monopolist. The legal document outlines a nearly 20-year history of overcharging smartphone makers for cellular chips. Qualcomm structured its contracts with smartphone makers in ways that made it almost impossible for other chipmakers to challenge Qualcomm’s dominance. Customers who didn’t go along with Qualcomm’s one-sided terms were threatened with an abrupt and crippling loss of access to modem chips.

“Qualcomm has monopoly power over certain cell phone chips, and they use that monopoly power to charge people too much money,” says Charles Duan, a patent expert at the free-market R Street Institute. “Instead of just charging more for the chips themselves, they required people to buy a patent license and overcharged for the patent license.”

Now, all of that dominance might be coming to an end. In her ruling, Koh ordered Qualcomm to stop threatening customers with chip cutoffs. Qualcomm must now re-negotiate all of its agreements with customers and license its patents to competitors on reasonable terms. And if Koh’s ruling survives the appeals process, it could produce a truly competitive market for wireless chips for the first time in this century.
Qualcomm’s perfect profit machine

Different cellular networks operate on different wireless networking standards, and these standards change every few years. For much of the last 20 years, Qualcomm has enjoyed a lead—and in some cases a stranglehold—on chips that support major cellular standards. So if a smartphone company aspired to sell its wares around the world, it had little choice but to do business with Qualcomm.

For example, in the early 2010s Qualcomm enjoyed a big lead on chips for the CDMA standards favored by Verizon and Sprint in the US and some other carriers overseas. Qualcomm Chief Technology Officer James Thompson bluntly explained in an internal 2014 email to CEO Steve Mollenkopf how this gave the company leverage over Apple.

“We are the only supplier today that can give them a global launch,” Thompson wrote, according to court documents. “In fact, without us they would lose big parts of North America, Japan and China. That would really hurt them.”

It wasn’t just Apple. BlackBerry was in a similar predicament around 2010. In a deposition, BlackBerry executive John Grubbs stated that without access to Qualcomm’s chips, “30 percent of our device sales would have gone away overnight if we couldn’t have supplied CDMA devices.”

Over the last two decades, Qualcomm has had deals in place with most of the leading cell phone makers, including LG, Sony, Samsung, Huawei, Motorola, Lenovo, ZTE, and Nokia. These deals gave Qualcomm enormous leverage over these companies—leverage that allowed Qualcomm to extract patent royalty rates that were far higher than those earned by other companies with similar patent portfolios.

Qualcomm’s patent licensing fees were calculated based on the value of the entire phone, not just the value of chips that embodied Qualcomm’s patented technology. This effectively meant that Qualcomm got a cut of every component of a smartphone—most of which had nothing to do with Qualcomm’s cellular patents.

“Qualcomm charges us more than everybody else put together,” Apple executive Jeff Williams said. “We’ve never seen such a significant licensing fee tied to any other IP we license,” said Motorola’s Todd Madderom.

Internal Qualcomm documents supported these claims. One showed that Qualcomm’s patent licensing operation brought in $7.7 billion in 2016—more than the combined patent licensing revenue of 12 other companies with significant patent portfolios.
No license, no chips

These high royalties reflected an unusual negotiating tactic called “no license, no chips.” No one could buy Qualcomm’s cellular chips unless they first signed a license to Qualcomm’s patent portfolio. And the terms of these patent deals were heavily tilted in Qualcomm’s favor.

Once a phone maker had signed its first deal with Qualcomm, Qualcomm gained even more leverage. Qualcomm had the right to unilaterally terminate a smartphone maker’s chip supply once the patent licensing deal expired.

“If we are unable to source the modem, we are unable to ship the handset,” said Motorola executive Todd Madderom in a deposition. “It takes many months of engineering work to design a replacement solution, if there is even a viable one on the market that supports the need.”

That made Qualcomm’s customers extremely vulnerable as they neared the expiration of a patent licensing deal. If a customer tried to negotiate more favorable terms—to say nothing of formally challenging Qualcomm’s patent claims in court—Qualcomm could abruptly cut off the company’s chip supply.

“We explained that we were contemplating terminating the license,” Lenovo executive Ira Blumberg testified during the trial. A senior Qualcomm executive “was very calm about it, and said we should feel free to do that, but if we did, we would no longer be able to purchase Qualcomm chips.”

“You’re looking at months and months, if not a year or more, without supply,” Blumberg said in a deposition. That “would be, if not fatal, then nearly fatal to almost any company in this business.”

Judge Koh found that Qualcomm used this tactic over and over again over the last 20 years: Qualcomm threatened to cut off Samsung’s chip supply in 2001, LG’s chip supply in 2004, Sony and ZTE’s chip supplies in 2012, Huawei and Lenovo’s chip supplies in 2013, and Motorola’s chip supply in 2015.
Qualcomm’s chip deals boxed out competitors

An obvious question is how Qualcomm maintained its stranglehold over the supply of modem chips. Partly, Qualcomm employed talented engineers and spent billions of dollars keeping its chips on the cutting edge.

Qualcomm also bolstered its dominant position by selling systems on a chip that included a CPU and other functions as well as modem functionality. This yielded significant cost and power savings, and it was hard for smaller chipmakers to compete with.

But besides these technical reasons, Qualcomm also structured its agreements with customers to make it difficult for other companies to break into the cellular modem chip business.

Qualcomm’s first weapon against competitors: patent licensing terms requiring customers to pay a royalty on every phone sold—not just phones that contained Qualcomm’s wireless chips. This gave Qualcomm an inherent advantage in competition with other chipmakers. If another chipmaker tried to undercut Qualcomm’s chips on price, Qualcomm could easily afford to cut the price of its own chips, knowing that the customer would still be paying Qualcomm a hefty patent licensing fee on every phone.

Judge Koh draws a direct parallel to licensing behavior that got Microsoft in legal trouble in the 1990s. Microsoft would offer PC makers a discount if they agreed to pay Microsoft a licensing fee for every PC sold—whether or not the PC shipped with a copy of MS-DOS. This effectively meant that a PC maker had to pay twice if it shipped a PC running a non-Microsoft operating system. In 1999, a federal judge ruled that a reasonable jury could conclude this arrangement violated antitrust law by making it difficult for Microsoft’s competitors to break into the market.

And some of Qualcomm’s licensing deals included terms that explicitly discouraged companies from using non-Qualcomm wireless chips. Qualcomm would offer cell phone makers rebates on every Qualcomm chip they sold. But cell phone makers would only get those rebates if they used Qualcomm chips for at least 85 percent—or in some cases even 100 percent—of the phones they sold.

For example, Apple signed a deal with Qualcomm in 2013 that effectively guaranteed that Apple would exclusively use Qualcomm’s wireless chips. Under the deal, Qualcomm paid Apple hundreds of millions of dollars in rebates and marketing incentives between 2013 and 2016. However, Qualcomm would stop making those payments if Apple started selling an iPhone or iPad with a non-Qualcomm cellular chip.

Apple was even required to pay back some of those funds if it used non-Qualcomm cellular chips before February 2016. One internal Qualcomm email calculated that Apple would owe $645 million if it launched an iPhone with a non-Qualcomm cellular chip in 2015.

Qualcomm made similar deals with other major cell phone makers. In 2003, Qualcomm signed a 10-year deal granting Huawei a reduced royalty rate of 2.65 percent if Huawei purchased 100 percent of its CDMA chips for the Chinese market from Qualcomm. If Huawei bought non-Qualcomm CDMA chips, the royalty rate jumped to five percent or more.

A 2004 deal gave LG rebates if LG purchased at least 85 percent of its CDMA chips from Qualcomm. The deal also required LG to pay a higher patent royalty rate when it sold phones with non-Qualcomm cellular chips. A 2018 deal makes incentive payments to Samsung if the company buys 100 percent of its “premium” cellular chips from Qualcomm—as well as lower thresholds (the exact percentages are redacted) for lower-tier chips.

“It is unlikely there will be enough standalone modem volume”

These exclusive or near-exclusive terms were important because huge scale is required to profitably enter the cellular modem business. It costs hundreds of millions of dollars to design a competitive cellular chip from scratch. And designs are only useful for a few years before they become obsolete.

This means that it only makes sense for a company to enter this business if it already has some major customers lined up—customers willing and able to order millions of chips in the first year. And there are only a few customers capable of placing those kinds of orders.

Qualcomm’s executives clearly understood this. In a 2010 internal email, Qualcomm’s Steve Mollenkopf wrote that “there are significant strategic benefits” to signing an exclusive deal with Apple because “it is unlikely that there will be enough standalone modem volume to sustain a viable competitor without that slot.”

This was more than a theoretical issue. Apple hated being dependent on Qualcomm and was looking to cultivate a second source for modem chips. The strongest candidate was Intel—which didn’t have a significant modem chip business but was interested in building one. By 2012, Apple was already planning to have Intel design a cellular chip for the 2014 iPad.

Apple’s 2013 deal with Qualcomm forced the company to put that plan—and its larger relationship with Intel’s cellular team—on the back burner. Apple’s Blevins testified that “we cut off the work we were doing with Intel on an iPad” after it was signed. And without Apple as an anchor customer, Intel had to put its own modem chip work on the back burner as well.

Intel and Apple resumed their collaboration ahead of the 2016 expiration of Apple’s deal with Qualcomm. That year Apple introduced the iPhone 7. Some units shipped with Qualcomm modems while others used new Intel modems.

Apple’s commitment to buy millions of Intel wireless chips allowed Intel to pour resources into its development efforts. After securing its deal with Apple, Intel acquired VIA Telecom, one of the few companies struggling to compete with Qualcomm in the CDMA chip market. Intel needed CDMA chips to make its wireless offerings competitive worldwide and lacked the capacity to develop them internally on the schedule Apple demanded. Acquiring VIA helped Intel accelerate its CDMA work. But Intel’s own projections showed that the VIA acquisition would not have been financially viable without the volume of business Apple promised to Intel.

The relationship with Apple helped Intel in other ways, too. The knowledge that the next iPhone would sport Intel cellular chips motivated network operators to help Intel test its chips on their networks. Intel also found that its status as an Apple supplier gave it more clout in standard-setting organizations.
The empire strikes back

Apple’s deal with Intel posed a serious threat to Qualcomm’s dominance of the cellular chip business. Once Intel developed the full range of cellular chips Apple needed for the iPhone, Intel could turn around and offer the same chips to other smartphone makers. That would improve every smartphone maker’s leverage when it came time for them to renew their patent licenses with Qualcomm. So, Qualcomm went to war with Apple and Intel.

Freed of Qualcomm’s chip supply threat, Apple began to challenge Qualcomm’s high patent royalty rates. Qualcomm responded by cutting Apple off from access to Qualcomm’s chips for new iPhone models, forcing Apple to rely entirely on Intel for the cellular chips in its 2018 models. Qualcomm sued Apple for patent infringement in courts around the world, while Apple pressed the Federal Trade Commission to investigate Qualcomm’s business practices.

The dispute put both Apple and Intel in a precarious position. Qualcomm was trying to use its patent arsenal to get iPhone sales banned in jurisdictions around the world. If Qualcomm scored a win in a major market, it could force Apple to come to the table. Then Qualcomm might force Apple to buy fewer Intel chips, endangering Intel’s wireless chip business—especially since other potential customers would be wary of leaping in front of Qualcomm’s patent buzzsaw.

At the same time, Apple was relying on Intel to keep its phones on the cutting edge of wireless technology. Intel successfully developed modem chips suitable for the 2017 and 2018 iPhone models, but the wireless industry is due to make a transition to 5G wireless technology over the next couple of years. The iPhone is a premium product that needs to support the latest wireless standards. If Intel failed to develop 5G chips quickly enough for use in the 2020 iPhone model, it could put Apple in an untenable position.

It appears that this latter scenario is what ultimately happened. Last month, Apple announced a wide-ranging settlement with Qualcomm that required Apple to pay for a six-year license to Qualcomm’s patents. Hours later, Intel announced that it was canceling work on 5G modem chips.

While we don’t know all the behind-the-scenes details, it appears that earlier this year Apple started to doubt Intel’s ability to deliver 5G modem chips quickly enough to meet Apple’s needs. That made Apple’s confrontational posture toward Qualcomm unviable, and Apple decided to cut a deal while it still had some leverage. Apple’s decision to make peace with Qualcomm instantly cut the legs out from under Intel’s modem chip efforts.
Qualcomm has long refused to license its patents to competitors

The story of Qualcomm’s battle with Apple and Intel illustrates how Qualcomm has used its patent portfolio to buttress its chip monopoly.

Chipmakers are ordinarily expected to acquire patents related to their chips and indemnify their customers for patent problems. But Qualcomm refused to license its patents to competitors, putting them in a difficult position.

“The prevailing message from all of the customers I engaged with was that they expected us to have a license agreement with Qualcomm before they would consider purchasing 3G chipsets from MediaTek,” said Finbarr Moynihan, an executive at chipmaker MediaTek.

If a chipmaker asked to license Qualcomm’s patents, Qualcomm would only offer a promise not to sue the chipmaker itself—not the chipmaker’s customers. Qualcomm also demanded that chipmakers—its own competitors—only sell chips to a Qualcomm-supplied list of “Authorized Purchasers” who had already licensed Qualcomm’s patents.

Needless to say, this put Qualcomm’s competitors—and would-be competitors—at a disadvantage. Qualcomm’s patent licensing regime not only allowed it to impose a de facto tax on its competitors’ sales, it effectively let Qualcomm choose its competitors’ customers. Indeed, Qualcomm demanded that other chipmakers provide it with data on how many chips they had sold to each of their customers—sensitive commercial data that would allow Qualcomm to figure out exactly how much pressure it needed to apply to prevent a rival from gaining traction.

An internal Qualcomm presentation prepared within days of a 2009 deal with MediaTek (“MTK” in this slide) provides a comically candid visualization of Qualcomm’s anticompetitive approach:

“WCDMA SULA” refers to a Qualcomm patent license. Qualcomm believed that limiting MediaTek to Qualcomm-licensed companies would prevent MediaTek from getting more than 50 customers for its forthcoming 3G chips. Meanwhile, Qualcomm aimed to deprive MediaTek of cash it could invest in the chips.

A few smaller chipmakers like MediaTek and VIA agreed to Qualcomm’s one-sided terms. Even more significant, a number of more formidable companies were deterred from entering the market—or encouraged to exit—by Qualcomm’s tactics.

Qualcomm twice refused to grant patent licenses to Intel—in 2004 and 2009—delaying Intel’s entry into the wireless modem business. A joint chip venture between Samsung and NTT DoCoMo called Project Dragonfly was rebuffed by Qualcomm in 2011; Samsung wound up making some modem chips for its own use but not offering them to others. Qualcomm refused LG a patent license for a potential modem chip in 2015.

Qualcomm refused patent licenses to Texas Instruments and Broadcom ahead of their departures from the modem business in 2012 and 2014, respectively.
Fair, reasonable, and non-discriminatory

When a standards group is developing a new wireless standard, it assembles a list of patents that are essential to implement the standard—these are known as standards-essential patents. It then asks patent holders to promise to license those patents on fair, reasonable, and non-discriminatory (FRAND) terms. Patent holders usually agree to these terms because incorporating a patent into a standard enhances its value.

But Qualcomm doesn’t seem to be honoring its FRAND commitments. FRAND patents are supposed to be available on the same terms to anyone who wants to license them—either customers or competitors. But Qualcomm refuses to license its standards-essential patents to other chipmakers.

And when handset manufacturers tried to license Qualcomm’s standard-essential patents, Qualcomm usually bundled them together with its larger patent portfolio, which included patents that were not subject to FRAND commitments and in many cases had nothing to do with modem chips. As a result, handset makers effectively had to pay inflated prices for Qualcomm’s standards-essential patents.

But no one was in a good position to challenge Qualcomm’s creative interpretation of FRAND requirements. Qualcomm didn’t directly sue other chipmakers, so there was no easy way for them to challenge Qualcomm’s policies. Meanwhile, Qualcomm’s chip supply threats deterred customers from challenging Qualcomm’s licensing practices.

Judge Koh ruled that Qualcomm’s failure to honor its FRAND commitments was a violation of antitrust law. Qualcomm had an obligation to license its patents to anyone who wanted to, she ruled, and Qualcomm had an obligation to do so at reasonable rates—rates far lower than those Qualcomm has been charging in recent years.
No more “no license, no chips”

Judge Koh orders several changes that are designed to stop Qualcomm’s anticompetitive conduct and restore some competitive balance to the marketplace.

The most important change is to decouple Qualcomm’s patent licensing efforts from its chip business. Koh ordered Qualcomm not to “condition the supply of modem chips on a customer’s patent license status.” Qualcomm must renegotiate all of its patent licenses without threatening anyone’s supply of modem chips.

Koh also ordered Qualcomm to license its standards-essential patents to other chipmakers on FRAND terms, submitting to arbitration, if necessary, to determine fair royalty rates. These licenses must be “exhaustive”—meaning that Qualcomm is precluded from suing a chipmaker’s customers for violating patents licensed by the chipmaker.

Third, Koh bans Qualcomm from entering into exclusivity deals with customers. That means no more rebates if a customer buys 85 or 100 percent of its chips from Qualcomm.

Patent expert Charles Duan argues that Koh’s ruling “deals with the largest problems that people have observed in terms of Qualcomm’s behavior.”

A big winner here could be Samsung, one of the few major technology companies to have retained significant in-house modem capabilities. In recent years, Samsung has often shipped smartphones with its own Exynos chips in some markets, while selling Qualcomm chips in others—particularly the United States and China. It’s not clear exactly why it does this, but a reasonable guess is that Samsung believes that it’s more vulnerable to Qualcomm’s patent threats in those countries.

Now it’ll be easier for Samsung to use its own chips worldwide, simplifying product design and giving the company greater economies of scale for its own chips. Eventually, Samsung might start offering those chips to other smartphone makers—as it tried to do back in 2011.

On the other hand, Koh’s ruling might come too late for Intel, which announced it was shuttering its 5G chip efforts last month and may not have the appetite (or enough time) to restart them.

Koh’s most important requirement, however, may be her mandate for seven years of monitoring by the FTC and the courts.

“I imagine that over the next year or so Qualcomm will come up with some new way to get back to [its] old revenue model,” Duan told Ars in an email. It will take continued vigilance by the authorities to ensure Qualcomm complies with both the letter and the spirit of Koh’s ruling.

But first the ruling must survive an appeal to the Ninth Circuit Court of Appeals. On Tuesday, Qualcomm asked Koh to put her ruling on hold until the appeals court has a chance to weigh in. Qualcomm’s customers and competitors won’t be able to truly breathe easy until the appeals process is over.
Promoted Comments

Chipotle Ars Scholae Palatinae et Subscriptor
A brief response from Qualcomm: https://www.qualcomm.com/ftc
(for any of you interested in it)
johnsonwax Ars Tribunus Angusticlavius
bigmushroom wrote:
This is a bit of a hit piece. The author doesn’t even mention the license fee.

It seems that Qualcomm charges Apple $7.50 while Apple wanted to only pay $1.50.

http://fortune.com/2019/04/16/apple-qua … ettlement/

Is $7.50 really inappropriate for a $1,000 smartphone whose primary function is to connect to cellular networks?

And what’s wrong with charging a percentage of the entire device. Why shouldn’t Apple pay more than a cheaper manufacturer? All kinds of fees in the real world are percentages such as sales taxes etc.

I have no doubt that Qualcomm is a bit greedy but I don’t think their practices are outrageous.

Well, let’s think about this. If Apple needed to license 500 patents, and each wanted 1% of the device price, then Apple would owe $5,000 in licensing for a $1,000 device. If they raised the price to $5,000 in order to break even, then they’d owe $25,000. So, yeah, I’d say that’s unreasonable. If the license is instead priced on the value of only the components its technology contributes to, the total can never exceed the value of the device.

Also keep in mind that Apple is selling 250 million units per year, so we’re not talking about a disagreement over $6, but over $1.5B. In all likelihood, Apple could have replicated Qualcomm’s research for that IP for $1.5B, but the federal government prohibits them from avoiding that license fee. Qualcomm should certainly be able to recover their R&D costs, but that recovery is supposed to be a shared effort – that’s the point of the patent – to share the load. If a single customer is being held to the entire cost, then we’re well into rent-seeking territory, because it likely would have been better for the customer if the patent had never been granted, and that can’t possibly be the goal of the patent system.
Timothy B. Lee Senior tech policy reporter
Wildbiftek wrote:

There’s no doubt that given time, some other (possibly free) standard would have emerged but the industry adopted Qualcomm’s patents DESPITE the fact that they were royalty encumbered because faster speed to market and doing the foundational technical things well are definitely worth something.

Given Apple’s efforts with the backing of Intel’s war chest, much of the time not paying Qualcomm royalties, it’s unlikely you’d have seen a 4G iPhone before 2016 and a 5G one before 2021. The value of these capabilities is substantial, as important apps such as Uber or Google Maps would be far less compelling if one had to find WiFi and acquire the password before using them.

Maybe I’m misunderstanding things, but didn’t “the industry adopt Qualcomm’s patents” (as well as patents from a bunch of other companies) in exchange for promises from those companies to license them on fair, reasonable, and non-discriminatory terms? Qualcomm could have refused to participate in that process, in which case the industry likely would have chosen a different set of standards that aren’t encumbered with Qualcomm’s patents.

Instead, Qualcomm chose to participate in the standards process and get its own patents included in industry wireless standards. But then when people asked Qualcomm to live up to its FRAND commitments, Qualcomm threatened to cut off the chip supplies of anyone who made trouble.

One other relevant episode in the Koh ruling that I didn’t include in my article: around 2007 Intel was trying to convince the industry to adopt WiMax, which posed a threat to Qualcomm. Qualcomm signed a deal with Apple in which Apple would publicly announce it would not be using WiMax. Qualcomm also paid Apple rebates it would have to pay back if Apple started selling handsets that used WiMax.

So the industry partly adopted Qualcomm’s technology because it was faster than what came before. But the industry also adopted Qualcomm technologies partly because Qualcomm was able to throw its weight around and discourage the adoption of promising alternatives.
mebeSajid Ars Praefectus
Tim Lee wrote:

I’m not a lawyer but I don’t think this aspect of Qualcomm’s conduct would have violated antitrust law on its own. If you own a patent you’re supposed to be able to collect a royalty from everyone who uses the technology it covers.

The problem is the other stuff Qualcomm did in conjunction with its patents—threatening to cut off chip supplies, reneging on FRAND commitments, refusing to license to competitors, etc. Those tactics helped Qualcomm secure much higher royalties than they would have gotten without them, which in turn gave them a lot more leeway to offer customers rebates that undercut competitors’ prices. In some cases Qualcomm also tied the rebates specifically to customers promising not to use competitors’ products.

Microsoft did some stuff like this in the 1990s. I would expect they’re more careful about it today, though I haven’t looked into it in any detail.

Exactly. I’m not an antitrust lawyer (I am a patent litigator, however), but Qualcomm’s use of its position as the leader in making baseband chips to force customers to take a patent license agreement seems like a pretty clear-cut Sherman Act violation. On top of that, cutting off supply if customers didn’t take a license agreement, and using that to impose fairly onerous licensing terms, seems like a pretty straightforward antitrust case. If you read the opinion (not specifically directed at Tim), you’ll see that Judge Koh pointed out very early on that Qualcomm straight up lied when it said that it had never cut off supply to a customer.

I’ve litigated multiple SEP cases – both Qualcomm’s rates and insistence on using the entire handset as the royalty base are pretty shocking to me. I’d be shocked if a Court imposed either that royalty rate or that royalty base in litigation.
mebeSajid Ars Praefectus
Wildbiftek wrote:

What would guarantee the absence of competition would be if Qualcomm kept all of its patents to itself and didn’t submit them to standards committees for open licensing. You would get a completely vertically integrated giant like Intel who had a controlling stake in CPU standards and implementations with minimal competition for some 2 decades, during which the quality of CPUs languished while pricing was high. There was no competition due to largely gate-keeping patents (as well as the difficulty of rewriting decades of software…) and little innovation from Intel itself from the early 2000’s until around 2015.

This manages to both miss the point and get basic facts wrong. First, there isn’t a CPU “standard”, at least as we’re using the term here. There’s an x86 instruction set with contributions from both Intel and AMD (more Intel than AMD), but its origin is largely proprietary: nothing in x86 has gone through a standard setting organization.

In contrast, there are robust standard-setting organizations that participate in standards for wireless communication, and there are very important reasons for having those standards: interoperability and predictability. Because of this, being able to introduce one’s technology into a standard is quite lucrative (aside from any IP revenue), and lots of parties try to get their IP into standards. Having IP that reads on a standard is quite lucrative, and asserting it tends to be a fairly straightforward patent infringement case.

For this reason, the tradeoff for participating in the standards setting process is an obligation to license any IP on Fair, Reasonable, and Non-Discriminatory terms. Standards Setting Organizations don’t want any one party to be able to “hold up” an industry.

Quote:
An open FRAND based licensing scheme for standards in the case of cellular standards allowed for vibrant competition in SoC implementations from multiple companies, a fact that was distorted by the FTC and missed by Koh because of their myopic focus on “premium standalone modems” where only Apple was a big customer. There is also value inherent to good standards which are not cheap to develop and may not ultimately be adopted either.

Premium standalone modems were, by themselves, a several-billion-dollar market. Further, while standards certainly aren’t cheap to develop, Qualcomm was far from the only party to contribute to those standards. You make it sound like Qualcomm alone developed 3G and LTE. That is not the case.


Timothy B. Lee / Timothy is a senior reporter covering tech policy, blockchain technologies and the future of transportation. He lives in Washington DC.


SKS Keyserver Network Under Attack


This work is released under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Terminological Note

“OpenPGP” refers to the OpenPGP protocol, in much the same way that HTML refers to the standard that specifies how to write a web page. “GnuPG”, “SequoiaPGP”, “OpenPGP.js”, and others are implementations of the OpenPGP protocol, in the same way that Mozilla Firefox, Google Chromium, and Microsoft Edge are software packages that process HTML data.
Who am I?

Robert J. Hansen. I maintain the GnuPG FAQ and unofficially hold the position of crisis communicator. This is not an official statement of the GnuPG project, but does come from someone with commit access to the GnuPG git repo.
Executive Summary

In the last week of June 2019 unknown actors deployed a certificate spamming attack against two high-profile contributors in the OpenPGP community (Robert J. Hansen and Daniel Kahn Gillmor, better known in the community as “rjh” and “dkg”). This attack exploited a defect in the OpenPGP protocol itself in order to “poison” rjh and dkg’s OpenPGP certificates. Anyone who attempts to import a poisoned certificate into a vulnerable OpenPGP installation will very likely break their installation in hard-to-debug ways. Poisoned certificates are already on the SKS keyserver network. There is no reason to believe the attacker will stop at just poisoning two certificates. Further, given the ease of the attack and the highly publicized success of the attack, it is prudent to believe other certificates will soon be poisoned.

This attack cannot be mitigated by the SKS keyserver network in any reasonable time period. It is unlikely to be mitigated by the OpenPGP Working Group in any reasonable time period. Future releases of OpenPGP software will likely have some sort of mitigation, but there is no time frame. The best mitigation that can be applied at present is simple: stop retrieving data from the SKS keyserver network.
How Keyservers Work

When Phil Zimmermann first developed PGP (“Pretty Good Privacy”) in the early 1990s there was a clear chicken and egg problem. Public key cryptography could revolutionize communications but required individuals possess each other’s public keys. Over time terminology has shifted: now public key cryptography is mostly called “asymmetric cryptography” and public keys are more often called “public certificates”, but the chicken-and-egg problem remains. To communicate privately, each party must have a small piece of public data with which to bootstrap a private communication channel.

Special software was written to facilitate the discovery and distribution of public certificates. Called “keyserver software”, it can be thought of as analogous to a telephone directory. Users can search the keyserver by a variety of different criteria to discover public certificates which claim to belong to the desired user. The keyserver network does not attest to the accuracy of the information, however: that’s left for each user to ascertain according to their own criteria.

Once a user has verified a certificate really and truly belongs to the person in question, they can affix an affidavit to the certificate attesting that they have reason to believe the certificate really belongs to the user in question.

For instance: John Hawley (john@example.org) and I (rjh@example.org) are good friends in real life. We have sat down face-to-face and confirmed certificates. I know with complete certainty a specific public certificate belongs to him; he knows with complete certainty a different one belongs to me. John also knows H. Peter Anvin (hpa@example.org) and has done the same with him. If I need to communicate privately with Peter, I can look him up in the keyserver. Whichever certificate bears an attestation by John, I can trust really belongs to Peter.
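As a concrete sketch of that lookup-and-check flow with GnuPG (using the example.org addresses above, which are placeholders; exact output and behaviour vary with GnuPG version and configured keyserver):

# ask the configured keyserver for certificates claiming to be Peter's
gpg --search-keys hpa@example.org

# after importing a candidate, list who has signed (attested to) it
gpg --check-sigs hpa@example.org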
Keyserver Design Goals

In the early 1990s we were concerned repressive regimes would attempt to force keyserver operators to replace certificates with different ones of the government’s choosing. (I speak from firsthand experience. I’ve been involved in the PGP community since 1992. I was there for these discussions.) We made a quick decision that keyservers would never, ever, ever, delete information. Keyservers could add information to existing certificates but could never, ever, ever, delete either a certificate or information about a certificate.

To meet this goal, we started running an international network of keyservers. Keyservers around the world would regularly communicate with each other to compare directories. If a government forced a keyserver operator to delete or modify a certificate, that would be discovered in the comparison step. The maimed keyserver would update itself with the content in the good keyserver’s directory. This was a simple and effective solution to the problem of government censorship.
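A toy model of why this reconciliation scheme makes deletion impossible: each round effectively replaces every server's directory with the union of what it and its peers hold, so anything removed on one server flows straight back from the others. Sketched with plain files standing in for directories (my own illustration, not the actual SKS algorithm, which reconciles far more efficiently):

printf 'certA\ncertB\ncertC\n' > server1.txt   # an untouched keyserver
printf 'certA\ncertC\n' > server2.txt          # certB was forcibly removed here

# "reconciliation": keep everything either side has ever seen
sort -u server1.txt server2.txt > merged.txt
mv merged.txt server2.txt                      # certB is back; deletions never stick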

In the early 1990s this design seemed sound. It is not sound in 2019. We’ve known it has problems for well over a decade.
Why Hasn’t It Been Fixed?

There are powerful technical and social factors inhibiting further keyserver development.

The software is Byzantine. The standard keyserver software is called SKS, for “Synchronizing Key Server”. A bright fellow named Yaron Minsky devised a brilliant algorithm that could do reconciliations very quickly. It became the keystone of his Ph.D thesis, and he wrote SKS originally as a proof of concept of his idea. It’s written in an unusual programming language called OCaml, and in a fairly idiosyncratic dialect of it at that. This is of course no problem for a proof of concept meant to support a Ph.D thesis, but for software that’s deployed in the field it makes maintenance quite difficult. Not only do we need to be bright enough to understand an algorithm that’s literally someone’s Ph.D thesis, but we need expertise in obscure programming languages and strange programming customs.

The software is unmaintained. Due to the above, there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase.

Changing a design goal is not the same as fixing a bug. The design goal of the keyserver network is “baked into” essentially every part of the infrastructure. This isn’t a case where there’s a bug that’s inhibiting the keyserver network from functioning correctly. Bugs are generally speaking fairly easy to fix once you know where the problem is. Changing design goals often requires an overhaul of such magnitude it may be better to just start over with a fresh sheet of paper.

There is no centralized authority in the keyserver network. The lack of centralized authority was a feature, not a bug. If there is no keyserver that controls the others, there is no single point of failure for a government to go after. On the other hand it also means that even after the software is overhauled and/or rewritten, each keyserver operator has to commit to making the upgrade and stomping out the difficulties that inevitably arise when new software is fielded. The confederated nature of the keyserver network makes changing the design goals even harder than it would normally be—and rest assured, it would normally be very hard!

The Vulnerabilities

The keyserver network is susceptible to a variety of attacks as a consequence of its write-only design. The keyserver network can be thought of as an extremely large, extremely reliable, extremely censorship-resistant distributed filesystem which anyone can write to.

Imagine if Dropbox not only allowed any Tom, Dick, or Harry to put information in your public Dropbox folder, but also made it impossible for you to delete it. How would everyone from spammers to child pornographers abuse this?

Many of the same attacks are possible on the keyserver network. We have known about these vulnerabilities for well over a decade. Fixing the keyserver network is, however, problematic for the reasons listed above.

To limit the scope of this document, a detailed breakdown of only one such vulnerability will be presented below.
The Certificate Spamming Attack

Consider public certificates. In order to make them easier to use, they have a list of attestations: statements from other people, represented by their own public certificates, that this certificate really belongs to the individual in question. In my example from before, John Hawley attested to H. Peter Anvin’s certificate. When I looked for H. Peter Anvin’s certificate I checked all the certificates which claimed to belong to him and selected the one John attested as being really his.

These attestations — what we call certificate signatures — can be made by anyone for any purpose. And once made, they never go away. Ever. Even when a certificate signature gets revoked the original remains on the certificate: all that happens is a second signature is affixed saying “don’t trust the previous one I made”.

The OpenPGP specification puts no limitation on how many signatures can be attached to a certificate. The keyserver network handles certificates with up to about 150,000 signatures.

GnuPG, on the other hand … doesn’t. Any time GnuPG has to deal with such a spammed certificate, GnuPG grinds to a halt. It doesn’t stop, per se, but it gets wedged for so long it is for all intents and purposes completely unusable.

My public certificate as found on the keyserver network now has just short of 150,000 signatures on it.

Further, pay attention to that phrase any time GnuPG has to deal with such a spammed certificate. If John were to ask GnuPG to verify my signature on H. Peter Anvin’s certificate, GnuPG would attempt to comply and in the course of business would have to deal with my now-spammed certificate.
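If you want a rough sense of whether a certificate already in your keyring has been spammed, counting its signatures is one crude check (my own suggestion, not part of the original advisory; the user ID is a placeholder, the count includes a few non-signature lines, and the listing itself will be slow on a badly poisoned certificate):

gpg --list-sigs rjh@example.org | wc -l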
The Consequences

We’ve known for a decade this attack is possible. It’s now here and it’s devastating. There are a few major takeaways and all of them are bad.

If you fetch a poisoned certificate from the keyserver network, you will break your GnuPG installation.
Poisoned certificates cannot be deleted from the keyserver network.
The number of deliberately poisoned certificates, currently at only a few, will only rise over time.
We do not know whether the attackers are intent on poisoning other certificates.
We do not even know the scope of the damage.

That last one requires some explanation. Any certificate may be poisoned at any time, and is unlikely to be discovered until it breaks an OpenPGP installation.

The number one use of OpenPGP today is to verify downloaded packages for Linux-based operating systems, usually using a software tool called GnuPG. If someone were to poison a vendor’s public certificate and upload it to the keyserver network, the next time a system administrator refreshed their keyring from the keyserver network the vendor’s now-poisoned certificate would be downloaded. At that point upgrades become impossible because the authenticity of downloaded packages cannot be verified. Even downloading the vendor’s certificate and re-importing it would be of no use, because GnuPG would choke trying to import the new certificate. It is not hard to imagine how motivated adversaries could employ this against a Linux-based computer network.
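To make that failure mode concrete, a hand-rolled version of the verify-a-download flow looks roughly like this (hypothetical addresses and file names; apt, dnf and friends wrap the same idea). If the vendor certificate pulled in by the first command has been poisoned, that refresh is where GnuPG wedges, and the verification step is never reached:

# refresh the vendor's signing certificate from the configured keyserver
gpg --refresh-keys security@vendor.example

# check the detached signature on the downloaded package
gpg --verify package.tar.gz.asc package.tar.gz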
Mitigations

At present I (speaking only for myself) do not believe the global keyserver network is salvageable. High-risk users should stop using the keyserver network immediately.

Users who are confident editing their GnuPG configuration files should follow this process:

Open gpg.conf in a text editor. Ensure there is no line starting with keyserver. If there is, remove it.
Open dirmngr.conf in a text editor. Add the line keyserver hkps://keys.openpgp.org to the end of it.

keys.openpgp.org is a new experimental keyserver which is not part of the keyserver network and has some features which make it resistant to this sort of attack. It is not a drop-in replacement: it has some limitations (for instance, its search functionality is sharply constrained). However, once you make this change you will be able to run gpg --refresh-keys with confidence.
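Concretely, for a stock GnuPG 2.x setup the end state of those two edits looks something like the sketch below (the paths and the gpgconf restart are my assumptions about a typical installation; killing dirmngr by hand works just as well):

# ~/.gnupg/gpg.conf: should contain no "keyserver" line at all

# ~/.gnupg/dirmngr.conf
keyserver hkps://keys.openpgp.org

# restart dirmngr so it picks up the new configuration, then refresh
gpgconf --kill dirmngr
gpg --refresh-keys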
Repairs

If you know which certificate is likely poisoned, try deleting it: this normally goes pretty quickly. If your OpenPGP installation becomes usable again, congratulations. Acquire a new unpoisoned copy of the certificate and import that.

If you don’t know which certificate is poisoned, your best bet is to get a list of all your certificate IDs, delete your keyrings completely, and rebuild from scratch using known-good copies of the public certificates.
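A sketch of that rebuild-from-scratch route (key IDs and file names are placeholders; take a backup first, and note that the known-good certificate copies must come from somewhere other than the SKS network):

gpg --list-keys > keylist.txt          # 1. record which certificates you currently have
cp -a ~/.gnupg ~/.gnupg.bak            # 2. back up the whole GnuPG home directory
gpg --delete-keys 0xDEADBEEFDEADBEEF   # 3. drop the poisoned certificate
gpg --import vendor-cert.asc           # 4. re-import a known-good copy obtained out of band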
A Personal Postscript

dkg wrote a blog post about this. He sums up my feelings pretty well, so I’m going to quote him liberally with only a trivial correction.

I’ve spent a significant amount of time over the years trying to push the ecosystem into a more responsible posture with respect to OpenPGP certificates, and have clearly not been as successful at it or as fast as I wanted to be. Complex ecosystems can take time to move.

To have my own certificate directly spammed in this way felt surprisingly personal, as though someone was trying to attack or punish me, specifically. I can’t know whether that’s actually the case, of course, nor do I really want to. And the fact that Robert J. Hansen’s certificate was also spammed makes me feel a little less like a singular or unique target, but I also don’t feel particularly proud of feeling relieved that someone else is also being “punished” in addition to me.

But this report wouldn’t be complete if I didn’t mention that I’ve felt disheartened and demotivated by this situation. I’m a stubborn person, and I’m trying to make the best of the situation by being constructive about at least documenting the places that are most severely broken by this. But I’ve also found myself tempted to walk away from this ecosystem entirely because of this incident. I don’t want to be too dramatic about this, but whoever did this basically experimented on me (and Rob) directly, and it’s a pretty shitty thing to do.

If you’re reading this, and you set this off, and you selected me specifically because of my role in the OpenPGP ecosystem, or because I wrote the abuse-resistant-keystore draft, or because I’m part of the Autocrypt project, then you should know that I care about making this stuff work for people. If you’d reached out to me to describe what you were planning to do, we could have done all of the above bug reporting and triage using demonstration certificates, and worked on it together. I would have happily helped. I still might! But because of the way this was done, I’m not feeling particularly happy right now. I hope that someone is, somewhere.

To which I’d like to add: I have never in my adult life wished violence on any human being. I have witnessed too much of it and its barbaric effects, stood by the graves of too many people cut down too young. I do not hate you and I do not wish any harm to befall you.

But if you get hit by a bus while crossing the street, I’ll tell the driver everyone deserves a mulligan once in a while.

You fool. You absolute, unmitigated, unadulterated, complete and utter, fool.

Peace to everyone — including you, you son of a bitch.

— Rob
idvorkin commented 6 days ago • edited 6 days ago

For those interested in the source: https://bitbucket.org/skskeyserver/sks-keyserver/src
troyengel commented 6 days ago

My dirmngr was failing while attempting to complete the TLS handshake, as it could not validate the Let’s Encrypt-based certificate. Generally it looked like this:

Jun 29 10:40:25 grimm dirmngr[1382]: TLS connection authentication failed: General error
Jun 29 10:40:25 grimm dirmngr[1382]: error connecting to ‘https://keys.openpgp.org:443’: General error
Jun 29 10:40:26 grimm dirmngr[1382]: TLS verification of peer failed: status=0x0042
Jun 29 10:40:26 grimm dirmngr[1382]: TLS verification of peer failed: The certificate is NOT trusted. The certificate issuer is unknown.
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: expected hostname: keys.openpgp.org
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: BEGIN Certificate ‘server[0]’:
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: serial: 03D804FE5B5614E157F04E714A7BEBC64E91
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: notBefore: 2019-06-07 13:24:06
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: notAfter: 2019-09-05 13:24:06
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: issuer: CN=Let’s Encrypt Authority X3,O=Let’s Encrypt,C=US
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: subject: CN=keys.openpgp.org
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: aka: (8:dns-name16:keys.openpgp.org)
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: hash algo: 1.2.840.113549.1.1.11
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: SHA1 fingerprint: 163C12F44D3D2597904C2A86C091B9A7464E8BC4
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: END Certificate
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: BEGIN Certificate ‘server[1]’:
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: serial: 0A0141420000015385736A0B85ECA708
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: notBefore: 2016-03-17 16:40:46
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: notAfter: 2021-03-17 16:40:46
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: issuer: CN=DST Root CA X3,O=Digital Signature Trust Co.
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: subject: CN=Let’s Encrypt Authority X3,O=Let’s Encrypt,C=US
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: hash algo: 1.2.840.113549.1.1.11
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: SHA1 fingerprint: E6A3B45B062D509B3382282D196EFE97D5956CCB
Jun 29 10:40:26 grimm dirmngr[1382]: DBG: END Certificate
Jun 29 10:40:26 grimm dirmngr[1382]: TLS connection authentication failed: General error
Jun 29 10:40:26 grimm dirmngr[1382]: error connecting to ‘https://keys.openpgp.org:443’: General error

I went to the LE certificate page, saved the Root CA and both signed X3 intermediate certificates (active) to a single PEM file in ~/.gnupg/ and added this to dirmngr.conf:

hkp-cacert /.gnupg/le.pem
keyserver hkps://keys.openpgp.org

Killed the running dirmgr process and retried the refresh and everything is happy. Hope this helps someone else having the same problem trying to test out the new keyserver.
@Mikotochan
Mikotochan
commented 6 days ago

Not only do we need to be bright enough to understand an algorithm that’s literally someone’s Ph.D thesis, but we need expertise in obscure programming languages and strange programming customs.

A lot of popular algorithms were first introduced as part of someone’s Ph.D thesis. There is nothing that makes them inherently more difficult to understand. In addition to that, calling OCaml obscure is silly.

Due to the above, there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase.

If they feel that way then they might also not be qualified to fiddle in cryptography. Just a thought.

How would everyone from spammers to child pornographers abuse this?

“How would everyone from spammers to homosexual pornographers abuse this?” is the version from 80 years ago. Maybe it is time for people to accept that nobody is harmed by people masturbating to certain sequences of bits. There is no need to be a bigot in an irrelevant topic.

Poisoned certificates cannot be deleted from the keyserver network.

The solution (for the server-side) is simple: each server individually handles spam. Although I would argue that the issue is not with the servers. I might have legitimate reasons for adding >150000 signatures to a certificate.

We’ve known for a decade this attack is possible

The fact that it was not fixed for the past decade might mean that the people behind the standard and its implementations are not qualified to develop cryptographic software.

But if you get hit by a bus while crossing the street, I’ll tell the driver everyone deserves a mulligan once in a while.
You fool. You absolute, unmitigated, unadulterated, complete and utter, fool.

That person actually deserves a reward, if only for bringing the attention of the community at large to this easy-to-exploit DoS attack that has been known for a decade. Even if you are not qualified to fix it, this might at least be the way to finally educate people about the dangers of OpenPGP and GPG.

@ghuntley
I am actually amused by the fact that he is using docx despite his involvement in a GNU project.
@cridenour
cridenour commented 6 days ago • edited 6 days ago

How would everyone from spammers to child pornographers abuse this?

“How would everyone from spammers to homosexual pornographers abuse this?” is the version from 80 years ago. Maybe it is time for people to accept that nobody is harmed by people masturbating to certain sequences of bits. There is no need to be a bigot in an irrelevant topic.

@Mikotochan Are you seriously defending CP? You do understand how those sequences of bits are made, right?
@lfam
lfam
commented 5 days ago

These comments are mostly disgraceful and should be closed.
@Mikotochan
Mikotochan commented 5 days ago • edited 5 days ago

@lfam
May I ask why you are defending censorship? I believe that there is a great opportunity for discussion concerning the issues of OpenPGP and how to get past them.

For example, concerning the point of package signatures for distributions, I believe that a minimalistic signature-verification tool (such as signify from OpenBSD) and fetching the signatures via package updates (just like Debian does via the debian-keyring package) would be a great solution to the issue raised in the article above. In general I believe that avoiding big and monolithic software with a bad security record (see https://dev.gnupg.org/T1579 as an example), such as GPG, would be the best solution for anyone interested in a secure framework for future projects.

@cridenour
I would rather not pollute the thread with offtopic comments, especially with a topic that controversial. If you wish to engage in that discussion we could talk via some other medium.

Edit: There is a new article for anyone interested https://gist.github.com/rjhansen/f716c3ff4a7068b50f2d8896e54e4b7e
@qptain-Nemo
qptain-Nemo
commented 5 days ago

Wouldn’t it be possible for the PGP implementations to skip processing redundant signatures in an intelligent way?
@dmerillat
dmerillat
commented 5 days ago

I’m missing something: I understand the keyservers have to synchronize with each other, but couldn’t you help clients by limiting results to the oldest 10,000 signatures (or whatever GPG can handle) until a client asks with “I am a fixed version, give me the full set”?
@DigitalBrains1
DigitalBrains1
commented 5 days ago

Due to the above, there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase.

If they feel that way then they might also not be qualified to fiddle in cryptography. Just a thought.

I think the context is that we’re talking about modifying the reconciliation algorithm.

And yes, indeed. I do like to see my cryptographic algorithms modified only by people with a PhD in the field, and then peer-reviewed extensively before I use it. People modifying cryptographic algorithms outside those conditions are what I call “rolling your own crypto”.

I get the feeling you’re thinking about using an algorithm. Using an algorithm doesn’t need intelligence, otherwise computers would be bad at it (since artificial intelligence is an algorithm, it’s turtles all the way down).
@judge2020
judge2020 commented 5 days ago • edited 5 days ago

@lfam Unless we get @nat /other staff in here, authors can’t close gist comments.
@dmerillat
dmerillat
commented 5 days ago

I’m missing something: I understand the keyservers have to synchronize with each other, but couldn’t you help clients by limiting results to the oldest 10,000 signatures (or whatever GPG can handle) until a client asks with “I am a fixed version, give me the full set”?

Replying to myself:
The most obvious problem is flooding a certificate with bogus signatures so that future signature revocations don’t “take”, but given that GPG is completely unusable with a flooded cert until something is fixed, it’s a reasonable risk for the mitigation.

Secondly, the “attackers” were being generous by targeting the keys of the maintainers instead of something that would cause a widespread outage. What would happen if they flooded a distribution signing key, the keys of some prominent Linux kernel developers, or really anything that is used constantly, every single day, for automated verification? There was the potential to cause real, widespread harm here, which they clearly knew how to do and chose not to.
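
As a rough illustration of the capping idea in the comment above, here is a minimal sketch in Python, assuming an abstract in-memory representation of a certificate’s signatures; the Signature record, the 10,000 default cap, and the “full set” flag are illustrative assumptions, not part of any existing keyserver protocol:

from dataclasses import dataclass

DEFAULT_CAP = 10_000  # arbitrary; "whatever GPG can handle"

@dataclass(frozen=True)
class Signature:
    issuer_fpr: str   # fingerprint of the key that made the certification
    created: int      # signature creation time, seconds since the epoch
    packet: bytes     # raw OpenPGP signature packet

def signatures_for_client(all_sigs: list[Signature],
                          owner_fpr: str,
                          client_wants_full_set: bool = False,
                          cap: int = DEFAULT_CAP) -> list[Signature]:
    # Self-signatures are always required for the certificate to be usable.
    self_sigs = [s for s in all_sigs if s.issuer_fpr == owner_fpr]
    others = [s for s in all_sigs if s.issuer_fpr != owner_fpr]
    if client_wants_full_set:
        return self_sigs + others
    # Oldest-first cap, per the suggestion above; note that an attacker
    # could still back-date signature creation times, so this is a
    # mitigation for unpatched clients, not a fix.
    others.sort(key=lambda s: s.created)
    return self_sigs + others[:cap]

The server would still store and reconcile the full signature set; only what is handed to a client that has not asked for everything would change.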
@Mikotochan
Mikotochan
commented 5 days ago

@DigitalBrains1

I get the feeling you’re thinking about using an algorithm. Using an algorithm doesn’t need intelligence, otherwise computers would be bad at it (since artificial intelligence is an algorithm, it’s turtles all the way down).

Producing implementations of crypto algorithms should also only be done by experts. Just take a look at all side-channels that libgcrypt (the library that gpg uses) had, here is a relatively recent example: https://eprint.iacr.org/2017/627.pdf.
@diafygi
diafygi
commented 5 days ago

Howdy all, I’m interested in writing an sks-compatible keyserver implementation that can handle poisoned keys gracefully. I understand the HKP protocol, but I can’t seem to find any documentation on the gossip/reconciliation protocol. All I can find is the academic paper, which is very hard to understand.

Does anyone understand the gossip protocol on a technical level and would you be willing to mentor me as I learn it?
@IzzySoft
IzzySoft
commented 5 days ago

@troyengel on Linux Mint 18.3 I wasn’t able to get the DBG lines. Not even the TLS ones. Only gpg: keyserver refresh failed: General error. Thought it couldn’t hurt, followed your advice, and it worked like a charm. Thanks!
@sundhaug92
sundhaug92
commented 5 days ago

Seems Kristian Fiskerstrand (0x0B7F8B60E3EDFAE3), who runs the SKS pool, might also be affected
@BrainBlasted
BrainBlasted
commented 5 days ago

For those interested in the source: https://bitbucket.org/skskeyserver/sks-keyserver/src

Is this the source to the code running on keys.openpgp.org or the unmaintained code?
@dmbaturin
dmbaturin
commented 5 days ago

@Mikotochan

A lot of popular algorithms were first introduced as part of someone’s Ph.D thesis. There is nothing that makes them inherently more difficult to understand. In addition to that, calling OCaml obscure is silly.

From a quick look, there’s no “idiosyncratic dialect” of it either, in the current codebase anyway. There’s some camlp4 extension usage here and there, antiquated build system, and deprecated modules, but it doesn’t look much worse than most other projects started more than a few years ago in the pre-OPAM, pre-PPX, pre safe string era. The only thing that caught my eye as peculiar is declaring things as mutually recursive with “let … and …” without an obvious reason, but that’s still clear and not harmful.
In the commit history there are quite recent signs of modernization effort by a well-known OCaml community member too.
@ageis
ageis commented 5 days ago • edited 4 days ago

I’m pretty sure I’ve already fallen victim, due to my tendency to refresh keys periodically. Question: can this “attack” manifest in messages about an “invalid packet”? My suspicion is that people might be injecting signatures containing invalid GPG packets, which also causes keyring corruption.

For a specific example, take a look at the Tor Project signing key:

$ apt-key adv --recv-keys --keyserver keys.gnupg.net 886DDD89
gpg: requesting key 886DDD89 from hkp server keys.gnupg.net
gpg: packet(13) too large
gpg: read_block: read_error: invalid packet
gpg: Total number processed: 0
gpg: no valid OpenPGP data found.

@DigitalBrains1
DigitalBrains1
commented 5 days ago

Producing implementations of crypto algorithms should also only be done by experts.

I fully agree. The latter bit was tongue-in-cheek, but as you point out, that unfortunately just made it wrong.
@optmzr
optmzr
commented 5 days ago

@Mikotochan

How would everyone from spammers to child pornographers abuse this?

“How would everyone from spammers to homosexual pornographers abuse this?” is the version from 80 years ago. Maybe it is time for people to accept that nobody is harmed by people masturbating to certain sequences of bits. There is no need to be a bigot in an irrelevant topic.

This must be the most ignorant comment I’ve ever read on GitHub.

The thing is, someone is harmed, because you create a demand for new content. And this content is seldom created with consent (and there are many cases of sick abuse).

And then you go on and say:

I would rather not pollute the thread with offtopic comments, especially with a topic that controversial. If you wish to engage in that discussion we could talk via some other medium.

Well, weren’t you the one who began by defending this controversial topic? Don’t mention it if you’re not interested in defending your stance here.
@Mikotochan
Mikotochan
commented 5 days ago

@optmzr

Well, weren’t you the one who began by defending this controversial topic?

Very well then, let me fix it: “I would rather not pollute the thread anymore with offtopic comments”. If you are truly interested in a debate make a new gist or something and I will join you.
@yminsky
yminsky
commented 5 days ago

I’ve been mostly uninvolved in SKS and the OpenPGP world more generally for 15 years or so, but I thought I would pipe in with a few quick thoughts.

Some points have been made about the difficulty of getting into this codebase. Some of the concerns are about the complexity of the math in the papers that describe the underlying synchronization techniques, and some have to do with the language and the way it was written.

I think the concerns about the complexity of the math are mostly misplaced. The math is all there to do one simple thing: quickly discover the set of keys that are different between two servers, so they can exchange the missing data. That’s not the bit that would need to be fixed here.

The concerns about how the code is written are a bit off, I think. The code is definitely old, and uses some not-well-supported bits of technology (the build system, the now mostly-deprecated camlp4), but it’s not written in a style that would be foreign to most OCaml programmers. (It’s not as well tested or documented as I would like, but that hardly distinguishes it from, say, PKS, the software it largely replaced.) OCaml itself is, obviously, not widely known, and the community would do better if it could attract some interest from the OCaml community in helping maintain it. One avenue might be to reach out to http://discuss.ocaml.org and see if you can attract some interest there. I think modernizing the codebase so it didn’t depend on camlp4 and built cleanly with Dune would be a good start.

But the most interesting questions here are really ones of policy. How should deletion work? SKS is based on a notion of monotonicity: you need to have a notion of what it means to make progress. Currently, that notion of progress is just merging all the data together. If you have two copies of the same key with different signatures, just merge them. If there’s a key you don’t know about, add it.

One way you could move forward would be to allow the owner of a key to have the discretion of deleting signatures on that key, by dint of creating a signed instruction to remove particular signatures. A harder question is how you decide to delete keys that are themselves malicious. Does one create a central authority for that? Or does one allow individual keyservers to just delete keys autonomously, and share the deleted nature of that key with others?

From my perspective, if the OpenPGP community (which I no longer really count myself among) wants to make this infrastructure work, it needs to find people who are willing to invest real time in it; either by building a new keyserver codebase with a different approach to replication, or by working through the problems with the SKS model. And it’s not clear to me that SKS’s model is the right one. SKS errs on the side of making replication highly reliable; but that has downsides, and in particular requires some thought as to how to make deletion work. There are other designs that are less insistent on getting all the data everywhere, but that make deletion simpler.
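
A minimal sketch of the signed-deletion-instruction idea above, purely as illustration: the JSON request layout, the digest-keyed signature store, and the verify_owner_sig callback (which would have to check an OpenPGP signature made by the certificate’s own primary key, for example via an OpenPGP library or gpg) are all assumptions, not an existing protocol:

import json
from typing import Callable

# One certificate's third-party signatures, keyed by a digest of each packet.
SignatureStore = dict[str, bytes]

def apply_deletion_request(store: SignatureStore,
                           request: bytes,
                           request_sig: bytes,
                           owner_fingerprint: str,
                           verify_owner_sig: Callable[[bytes, bytes, str], bool]
                           ) -> SignatureStore:
    # Only the key owner may prune certifications from their own certificate.
    if not verify_owner_sig(request, request_sig, owner_fingerprint):
        raise PermissionError("request not signed by the certificate owner")
    doomed = set(json.loads(request).get("delete_signature_digests", []))
    return {digest: packet for digest, packet in store.items()
            if digest not in doomed}

Because the instruction itself is signed data, it could be gossiped between servers and re-applied whenever the deleted signatures reappear through reconciliation, which is one way to keep the model monotone.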
@hackbunny
hackbunny
commented 5 days ago

@lfam Unless we get @nat /other staff in here, authors can’t close gist comments.

How appropriate for an article on abuse of write-only storage
@dmbaturin
dmbaturin
commented 5 days ago

@yminsky I’m now seriously considering a pull request to modernize the codebase to build with 4.08: no camlp4, no mutable-strings default. Since yesterday I’m the self-proclaimed coordinator of the Concerned CAMLers Against Camlp4 project, which so far consists of one person and has a whole one pull request in its track record. 😉
But that’s not the point.

If anyone in the PGP community is interested in the experience of a “moron in a hurry” who only occasionally used key servers by hand to look up keys: I had no idea the system was this byzantine. I never wondered whether I could run my own key server, because nothing on openpgp.org or any key server website ever says you can actually join the network. For this reason I always thought that replication was something the key servers did among themselves.
Even if I had thought to follow the SKS repo link, a large project with just a repo and no website or docs other than a README screams “you won’t be able to run it”. I know it’s a faulty heuristic, but I’m just as prone to faulty heuristics as any other human.
@pmetzger
pmetzger
commented 5 days ago

@dmbaturin Please do that. It would be hard to maintain if it doesn’t build against at least 4.07 after all.

And all, I’m both an OCamler and a cryptoplumber. I might be able to spare a little time assisting with code cleanup if someone tells me what needs cleaning. (No guarantees I have real time to devote, but I can at least look.)
@XVilka
XVilka
commented 5 days ago

@dmbaturin – you are not the only one https://discuss.ocaml.org/t/camlp5-and-ocaml-4-08/3985
@thorsummoner
thorsummoner
commented 4 days ago

Without knowing all the details, does it do anything to have a client electively check only the signatures of trusted keys (or keys of a specific/minimum trust level, or specified keys; whitelisting)?
The database is built to distribute full certificates, so I expect that downloading 150,000 signatures isn’t really the problem, and any kind of blacklist is contrary to the design anyway. The problem manifests in that a client will both fetch an unbounded set of referential keys and signatures and check those signatures, which puts undue load on systems and probably causes faults in software that wasn’t tested at this signing scale; that’s how I interpret the problem anyway, lmk.
A communal whitelist/blacklist approach seems like the first line of defense to me.

thanks for the great write up and your support
@subscribernamegoeshere
subscribernamegoeshere
commented 4 days ago

General remark: PGP is not only meant for securing email communication. The PGP ID is not to be regarded as an email address of any particular nature; it does not need to be an email address at all. It is just to be regarded as a label. Possession of the public and private key combination is what results in actual ownership of a communication channel, not a mere email address, or a string that calls itself an email address.

Is nobody but me using PGP IDs in other places with little or no connection to email? I use PGP in fora and many other places where no SMTP and no email is involved.

Think again, people. This bug, and the whole idea of “validating email addresses” or trying to tie PGP first and foremost to email, is completely insane and laughable. 😦
@Nihlus
Nihlus
commented 4 days ago

I’m no expert in any way, shape, or form, but I do see one way for us to start making a directory of affected keys by intentionally invoking the fault. Setting up a series of distributed virtualized machines (Docker, or something similar) that continually trawl the published keys from the keyserver system and attempt to utilize the keys could give us a way to mark poisoned keys – if it can be synchronized and utilized, it’s green, and if the container breaks, it’s red. Both results go into an independent database that can be utilized to verify that a key is “safe” before use. Perhaps something to prototype and utilize as a mitigation?
@pmetzger
pmetzger
commented 4 days ago

Setting up a series of distributed virtualized machines (Docker, or something similar) that continually trawl the published keys from the keyserver system and attempt to utilize the keys could give us a way to mark poisoned keys – if it can be synchronized and utilized, it’s green, and if the container breaks, it’s red.

Given that access to the underlying database is possible and that a relatively simple algorithm will tell you if a key is impacted, this seems like a very heavyweight solution to a relatively straightforward problem.
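
For anyone who wants to check their own local keyring rather than the server database, here is one possible sketch of such a simple check, counting signature packets per certificate; it assumes GnuPG 2.1 or later on PATH, the 10,000-packet threshold is arbitrary, and gpg itself can already be painfully slow once a flooded key is in the keyring:

import subprocess

THRESHOLD = 10_000  # arbitrary; healthy keys rarely carry more than a few hundred

def primary_fingerprints() -> list[str]:
    # With --with-colons, an "fpr" record follows each "pub" record;
    # only those are primary-key fingerprints (subkeys get "fpr" records too).
    out = subprocess.run(["gpg", "--list-keys", "--with-colons"],
                         capture_output=True, check=True).stdout.decode()
    fprs, last = [], ""
    for line in out.splitlines():
        kind = line.split(":", 1)[0]
        if kind == "fpr" and last == "pub":
            fprs.append(line.split(":")[9])
        last = kind
    return fprs

def signature_count(fpr: str) -> int:
    # Export the certificate and count signature packets in the packet dump.
    cert = subprocess.run(["gpg", "--export", fpr],
                          capture_output=True, check=True).stdout
    dump = subprocess.run(["gpg", "--list-packets"], input=cert,
                          capture_output=True, check=True).stdout
    return sum(1 for line in dump.decode(errors="replace").splitlines()
               if line.startswith(":signature packet:"))

if __name__ == "__main__":
    for fpr in primary_fingerprints():
        n = signature_count(fpr)
        marker = "  <-- possibly poisoned" if n > THRESHOLD else ""
        print(f"{fpr}: {n} signature packets{marker}")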
@Nihlus
Nihlus
commented 4 days ago

Setting up a series of distributed virtualized machines (Docker, or something similar) that continually trawl the published keys from the keyserver system and attempt to utilize the keys could give us a way to mark poisoned keys – if it can be synchronized and utilized, it’s green, and if the container breaks, it’s red.

Given that access to the underlying database is possible and that a relatively simple algorithm will tell you if a key is impacted, this seems like a very heavyweight solution to a relatively straightforward problem.

Agreed, now that I’ve had more than five seconds to think about it 😛 Having a directory of affected keys for mitigation purposes may still be a reasonable idea, though.
@ssaavedra
ssaavedra
commented 4 days ago

Putting the blame on the fact that the old SKS is written in OCaml, while writing the new one in Rust, seems somewhat counterintuitive for long-term sustainability.

I’m sure there are people already more engaged on this, but if you need help with the OCaml SKS code, I could chime in a bit.
@sipa
sipa commented 3 days ago • edited 3 days ago

This is probably not useful information unless someone is already committed to writing new keyserver infrastructure, but better set reconciliation algorithms exist now than the one used by SKS (which is based on cpisync, I believe). Me and a few other contributors have been working on a high-performance C-callable library that implements a more performant one (called pinsketch): https://github.com/sipa/minisketch. It seems like a good choice to base a new keyserver design on.

Feel free to reach out if you have any questions.
@NullFlex
NullFlex
commented 3 days ago

Not only do we need to be bright enough to understand an algorithm that’s literally someone’s Ph.D thesis, but we need expertise in obscure programming languages and strange programming customs.

A lot of popular algorithms were first introduced as part of someone’s Ph.D thesis. There is nothing that makes them inherently more difficult to understand. In addition to that, calling OCaml obscure is silly.

Due to the above, there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase.

If they feel that way then they might also not be qualified to fiddle in cryptography. Just a thought.

How would everyone from spammers to child pornographers abuse this?

“How would everyone from spammers to homosexual pornographers abuse this?” is the version from 80 years ago. Maybe it is time for people to accept that nobody is harmed by people masturbating to certain sequences of bits. There is no need to be a bigot in an irrelevant topic.

Poisoned certificates cannot be deleted from the keyserver network.

The solution (for the server-side) is simple: each server individually handles spam. Although I would argue that the issue is not with the servers. I might have legitimate reasons for adding >150000 signatures to a certificate.

We’ve known for a decade this attack is possible

The fact that it was not fixed for the past decade might mean that the people behind the standard and its implementations are not qualified to develop cryptographic software.

But if you get hit by a bus while crossing the street, I’ll tell the driver everyone deserves a mulligan once in a while.
You fool. You absolute, unmitigated, unadulterated, complete and utter, fool.

That person actually deserves a reward, if only for bringing the attention of the community at large to this easy-to-exploit DoS attack that has been known for a decade. Even if you are not qualified to fix it, this might at least be the way to finally educate people about the dangers of OpenPGP and GPG.

@ghuntley
I am actually amused by the fact that he is using docx despite his involvement in a GNU project.

shut the fuck up stupid anime pedo
@remowxdx
remowxdx
commented 3 days ago

I have no idea about the algorithms, software, protocols, feasibility… but wouldn’t it make sense if the keyservers only published a key’s attestations to clients when the keys are reciprocally attested?
If Alice signs Bob’s key, the servers should wait until Bob also signs Alice’s key, so they know Alice is not a spammer.
Also add a flag/command/whatever to get the old behaviour.
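
As a toy sketch of the reciprocity rule proposed above (a keyserver would only expose Bob’s certification on Alice’s key once Bob’s key carries one from Alice in return), with the dict-of-sets representation being purely an illustrative assumption:

def reciprocal_certifications(signers_of: dict[str, set[str]]) -> dict[str, set[str]]:
    # signers_of maps a key fingerprint to the set of fingerprints that have
    # certified it; keep only the certifications that are returned in kind.
    return {
        key: {signer for signer in signers
              if key in signers_of.get(signer, set())}
        for key, signers in signers_of.items()
    }

# Example: Alice and Bob cross-sign; a spammer signs Alice unilaterally.
example = {
    "ALICE": {"BOB", "SPAMMER"},
    "BOB": {"ALICE"},
    "SPAMMER": set(),
}
assert reciprocal_certifications(example)["ALICE"] == {"BOB"}

This would block drive-by flooding, but it would also hide every legitimate one-way certification, which is presumably why a flag for the old behaviour is suggested.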
@Disasm
Disasm commented 3 days ago • edited 3 days ago

You can detect infected keys with this script: https://gist.github.com/Disasm/dc44684b1f2aa76cd5fbd25ffeea7332
4E2C6E8793298290 (Tor Browser Developers) is infected.
@co-dan
co-dan
commented 3 days ago

Why does GnuPG choke on the flooded keys? It doesn’t seem like it’s an unprocessable amount of data, at least at the moment.
@rozzin
rozzin
commented 3 days ago

Howdy all, I’m interested in writing an sks-compatible keyserver implementation that can handle poisoned keys gracefully. I understand the HKP protocol, but I can’t seem to find any documentation on the gossip/reconciliation protocol. All I can find is the academic paper, which is very hard to understand.

Does anyone understand the gossip protocol on a technical level and would you be willing to mentor me as I learn it?

@diafygi, you (and maybe @rjhansen?) might find this other keyserver implementation, Hockeypuck, of interest—looks like it’s supposed to be protocol-compatible with SKS, but written in Go instead of OCaml: https://hockeypuck.github.io/

Hockeypuck implements the HKP draft protocol specification as well as
several extensions to the protocol supported by SKS.
[…]
Hockeypuck can synchronize public key material with SKS and other Hockeypuck servers.
Recon protocol support is provided with the Conflux package.

@pmetzger
pmetzger
commented 2 days ago

Let me repeat that you have several OCaml types who have volunteered in the thread to help with the code.
@rozzin
rozzin
commented 2 days ago

Let me repeat that you have several OCaml types who have volunteered in the thread to help with the code.

@pmetzger, if that’s a response to my referencing Hockeypuck…, that wasn’t really the point I was trying to make. Just that people who have difficulty understanding an algorithm or protocol from one particular implementation, or as implemented in one particular language, often benefit from having other references to read and compare. I’ve been in that situation enough times, myself. :\

“unfamiliar algorithm in an unfamiliar language” can be hard, and having either of those pieces swapped out even temporarily can help convert the situation into something more like “familiar algorithm in not-so-unfamiliar language”.

Thank you for summarizing the OCaml situation as well 🙂

Such an effort does actually need someone to lead it, though; @dmbaturin, if you’re taking the lead on this, thank you!
@tlhonmey
tlhonmey
commented 1 day ago

Reading through all the ideas, I have to say that allowing signed instructions for modifying keys is probably a good idea. It would be just as secure as a revocation certificate in terms of censorship resistance, and with a little capacity for simple logic, like Ethereum’s smart contracts, it would allow mitigation of this attack and probably others as well, without needing further modification of the server software.

Implementing this wouldn’t necessarily have to be done in the server software itself either. The current system prevents censorship by resynchronizing data deleted from one (or a few) servers. However, modifications could still be made by having all server operators make the modifications between synchronization runs. All that is needed is an automated way to verify a signed command to replace a certificate’s data with a clean copy (or even just delete it and let the original user re-upload), and a communications framework allowing the servers to coordinate running it simultaneously.
@dadosch
dadosch
commented 1 day ago

As @Disasm wrote, key 4E2C6E8793298290 of the Tor Project also got spammed. This means that gpg hangs, even when you try to list your own secret keys.

You can delete such a key using gpg --delete-key 4E2C6E8793298290

To sort keys by size, use the command from here: https://dev.gnupg.org/T3972#127356
@rozzin
rozzin commented 1 day ago • edited 1 day ago

Is there a discussion somewhere I can read about the idea of using RFC 6091 to restrict keyservers to only accept certificate-uploads from clients with access to the private part of the key being certified (e.g. “you can only attach signatures to your own key”), and why maybe that’s not a good idea or not workable?
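
One possible shape for that restriction is a challenge-response proof of possession at upload time; the sketch below only illustrates the idea raised above and is not tied to RFC 6091 or any deployed keyserver. The server imports the uploaded certificate into a throwaway GnuPG home directory and accepts the upload only if the client returns a detached signature over a server-chosen nonce that verifies against that certificate (GnuPG 2.x on PATH is assumed):

import secrets
import subprocess
import tempfile

def issue_nonce() -> bytes:
    # Handed to the client along with the upload challenge.
    return secrets.token_bytes(32)

def proves_possession(uploaded_cert: bytes, nonce: bytes,
                      detached_sig: bytes) -> bool:
    with tempfile.TemporaryDirectory() as home:
        def gpg(*args: str, **kwargs) -> subprocess.CompletedProcess:
            return subprocess.run(["gpg", "--homedir", home, "--batch",
                                   *args], capture_output=True, **kwargs)
        # The throwaway keyring contains only the certificate being uploaded,
        # so any valid signature must have been made by that certificate.
        gpg("--import", input=uploaded_cert)
        with tempfile.NamedTemporaryFile() as data_f, \
             tempfile.NamedTemporaryFile() as sig_f:
            data_f.write(nonce); data_f.flush()
            sig_f.write(detached_sig); sig_f.flush()
            result = gpg("--status-fd", "1", "--verify",
                         sig_f.name, data_f.name)
        return any(line.startswith("[GNUPG:] VALIDSIG")
                   for line in result.stdout.decode(errors="replace").splitlines())

The obvious trade-off, already hinted at elsewhere in the thread, is that third parties could then no longer upload someone else’s certificate or fresh signatures on it, which is part of how the network is used today.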
@hunterpayne
hunterpayne commented 1 day ago • edited about 24 hours ago

Perhaps I have a solution. The SKS software replicates in a write-only pattern. What if a new feature were added to the protocol: a blacklist which can be synced between servers just like the whitelist is now. Then, when authorizing a cert, authorize it against both the whitelist and the blacklist, and only approve it if it passes the whitelist and is not on the blacklist. Then put poisoned certs on the blacklist.

Putting a cert on the blacklist might require proof that the cert was yours or belonged to your organization somehow. I’m not a security researcher, so I’m sorry if this suggestion is a bit naive, but it seems like at least part of a successful plan to address this.

Good luck and thanks for the software PGP folks,
Hunter
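
A minimal sketch of the blacklist idea above, assuming nothing more than a set of fingerprints gossiped alongside the key data; how a fingerprint legitimately gets onto the list (owner proof, operator consensus) is exactly the open policy question and is not answered here:

from typing import Optional

def merge_blacklists(local: set[str], remote: set[str]) -> set[str]:
    # A blacklist can reconcile the same way the key data does: by union,
    # so once a poisoned certificate is listed anywhere, the entry spreads.
    return local | remote

def serve_certificate(fingerprint: str,
                      store: dict[str, bytes],
                      blacklist: set[str]) -> Optional[bytes]:
    # Blacklisted certificates can still be held for reconciliation,
    # but are never handed out to clients.
    if fingerprint in blacklist:
        return None
    return store.get(fingerprint)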
@Mikaela
Mikaela
commented about 21 hours ago

Hi, I think the mitigations section is missing a third step: killall -HUP dirmngr, so it reloads its config file and the keyserver change comes into force.
@yawpitch
yawpitch
commented about 19 hours ago

How would everyone from spammers to child pornographers abuse this?

“How would everyone from spammers to homosexual pornographers abuse this?” is the version from 80 years ago. Maybe it is time for people to accept that nobody is harmed by people masturbating to certain sequences of bits. There is no need to be a bigot in an irrelevant topic.

By confusing those who create those sequences of bits with those merely masturbating to them, you abandon credibility. By pretending those masturbating to those sequences of bits aren’t complicit in, and party to, the crimes of those creating them, you dig the hole even further. And by drawing a line of moral equivalence between sequences of bits involving consenting adults and sequences of bits involving a consenting adult and children who cannot consent, you abandon humanity.

Absent either credibility or humanity your views on any further pollution of a thread you polluted will be ignored.
@webdawg
webdawg
commented about 15 hours ago

make any address one time use, and sign all future updates/requests for that same address w/ previous certs…even if they are expired.

done.
@Ridderby
Ridderby
commented about 12 hours ago

@webdawg

make any address one time use, and sign all future updates/requests for that same address w/ previous certs…even if they are expired.

In times of ransomware it is reasonable that one might need to start all over without any access to earlier private keys. The same applies to a disk crash, an unintentionally formatted disk, or a re-installation of Linux because of this very problem, unless you are better than averagely skilled at navigating the dot-directories. I am; my wife is not (still a KDE lover, though).
@tessus
tessus
commented about 11 hours ago

We’ve known for a decade this attack is possible.

Hmm, I guess I stop right there. Well, maybe one little remark: Ok, so what the heck did you expect?

Also, it would have been useful to provide a list of poisoned keys and/or a way to detect them.
@stevefan1999-personal
stevefan1999-personal commented about 4 hours ago • edited about 4 hours ago

No, it’s all about your ignorance in turning a blind eye to potential attacks, since you claim to have known about them for decades. Now we users, with less expert hands at this, have to pay for this turmoil.
@webdawg
webdawg
commented about 2 hours ago

@webdawg

make any address one time use, and sign all future updates/requests for that same address w/ previous certs…even if they are expired.

In times of ransomware it is reasonable that one might need to start all over without any access to earlier private keys. The same applies to a disk crash, an unintentionally formatted disk, or a re-installation of Linux because of this very problem, unless you are better than averagely skilled at navigating the dot-directories. I am; my wife is not (still a KDE lover, though).

How about the possibility of just signing previous keys… at a bare minimum you could follow a cert chain… attacks could be smaller because the chain would be broken by any public disclosure of a private key, but how is that different from now?