Updated: Mar 9, 2020
“Given enough eyeballs, all bugs are shallow.” - Eric S. Raymond, The Cathedral and the Bazaar
Raymond’s thesis, advanced in 1999, is that open source code, reviewable and reviewed by many reviewers, will be so well scrutinized that all deep, major flaws will be discovered. The thesis is attractive and remains partially correct, but it is flawed.
The fundamental fallacy that prevents all bugs from being shallow is that there are often too few eyeballs, even on open source code run by millions of people on millions of machines over a long period. Most security vulnerabilities are bugs, though some software is vulnerable by design. X Window System code I wrote 35 years ago still occasionally generates security vulnerability reports, as fresh eyeballs and tools stumble over the source. Computer language design and choice have a substantial effect on bugs: while C was arguably the least evil of our alternatives in 1985, I would not choose C in 2020. Who, today, wants to specialize in debugging code written in a language popular nearly four decades ago? Yet economics often dictates that “working code” is never rewritten, though we have learned much since the 1980s about how to build secure software.
Many of our attitudes about fixing bugs and handling security vulnerabilities, often formed in the era of Brooks’s seminal “The Mythical Man-Month,” are badly informed. One eye-opening paper is “Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities” by Clark, Blaze, Frei, and Smith. It should be read by all software engineers, system managers, and policy makers. Our software maintenance biases, often formed in the last century before the Internet was widespread, no longer hold.
Since we cannot prevent new vulnerabilities from occurring in our code and being discovered, what then?
There are a number of conclusions to be drawn from experience:
There is no bug- or vulnerability-free code, even 35 years after it was written and in constant use; perhaps particularly 35 years after it was written.
Open source code has an advantage over closed source in that if people care, there might be fresh eyes applied to the code.
Old code, even code in widespread use, is not therefore free of security vulnerabilities; age is no guarantee of safety.
Threat models, computers, and languages change: some serious issues today were irrelevant 35 years ago.
Old code with no active maintainer is dangerous. How do we ensure ongoing maintenance of code?
Dan Geer argues cogently in Lawfare, in “Heartbleed as Metaphor,” that few options are open to us, and the best of the unpalatable options is rapid repair of vulnerabilities. We must, in his words, “drive the mean time to repair failures to zero.” The essential problem with security vulnerabilities is knowing you have them in the first place. We can conclude:
Any Internet-connected device without ongoing software maintainers and update streams is hazardous.
Timely detection of vulnerabilities is essential: spending our CPU cycles on virus checkers without checking the integrity of our code is closing the barn door long after the horses have bolted. Better to spend more of the computer’s resources ensuring our systems are up to date and have not been tampered with.
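To make the integrity-checking point concrete, here is a minimal sketch, in Python, of the kind of check implied above: hash an installed file and compare it against a trusted, separately distributed digest. The file contents and digest here are illustrative stand-ins, not any real package’s; real systems (package managers, dm-verity, and the like) do this with signed metadata rather than a bare hash.

```python
import hashlib
import hmac
import tempfile

def sha256_of_file(path, chunk_size=65536):
    """Stream the file through SHA-256 so large binaries need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_digest(path, expected_hex):
    """Compare the on-disk file against a digest obtained from a trusted source.

    hmac.compare_digest does a constant-time comparison, so the check does not
    leak where the two digests first diverge."""
    return hmac.compare_digest(sha256_of_file(path), expected_hex)

# Illustrative use: "install" a file, record its digest, then detect tampering.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is an installed binary")
    path = f.name

trusted = sha256_of_file(path)          # in practice, shipped signed by the vendor
assert matches_published_digest(path, trusted)

with open(path, "ab") as f:             # simulate tampering with the file
    f.write(b"!")
assert not matches_published_digest(path, trusted)
```

The design point is that the trusted digest must come from somewhere other than the machine being checked; a digest stored next to the file can be rewritten by the same attacker who modified the file.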
How do we solve these issues? Not all of the solution is technical; some of it requires social and legal change. But as technologists, we will describe some technological aids in this blog series before tackling the social and legal reforms needed.