There's been a lot of debate among security practitioners about the impact of open source approaches on security. One of the key issues is that open source exposes the source code to examination by everyone, both attackers and defenders, and reasonable people disagree about the ultimate impact of this situation.
Here are a few quotes from people who've examined the topic. Bruce Schneier argues that smart engineers should ``demand open source code for anything related to security'' [Schneier 1999], and he also discusses some of the preconditions which must be met to make open source software secure. Vincent Rijmen, a developer of the winning Advanced Encryption Standard (AES) encryption algorithm, believes that the open source nature of Linux provides a superior vehicle for making security vulnerabilities easier to spot and fix: ``Not only because more people can look at it, but, more importantly, because the model forces people to write more clear code, and to adhere to standards. This in turn facilitates security review'' [Rijmen 2000]. Elias Levy (Aleph1) discusses some of the problems in making open source software secure in his article "Is Open Source Really More Secure than Closed?". His summary is:

``So does all this mean Open Source Software is no better than closed source software when it comes to security vulnerabilities? No. Open Source Software certainly does have the potential to be more secure than its closed source counterpart. But make no mistake, simply being open source is no guarantee of security.''

John Viega's article "The Myth of Open Source Security" also discusses issues, and summarizes things this way:

``Open source software projects can be more secure than closed source projects. However, the very things that can make open source programs secure -- the availability of the source code, and the fact that large numbers of users are available to look for and fix security holes -- can also lull people into a false sense of security.''

Michael H. Warfield's "Musings on open source security" is much more positive about the impact of open source software on security. Fred Schneider doesn't believe that open source helps security, saying ``there is no reason to believe that the many eyes inspecting (open) source code would be successful in identifying bugs that allow system security to be compromised'' and claiming that ``bugs in the code are not the dominant means of attack'' [Schneider 2000]. He also claims that open source rules out control of the construction process, though in practice there is such control - all major open source programs have one or a few official versions with ``owners'' whose reputations are at stake. Peter G. Neumann discusses ``open-box'' software (in which source code is available, possibly only under certain conditions), saying ``Will open-box software really improve system security? My answer is not by itself, although the potential is considerable'' [Neumann 2000]. Natalie Walker Whitlock's IBM DeveloperWorks article discusses the pros and cons as well.
Sometimes it's noted that a vulnerability that exists but is unknown can't be exploited, so the system is ``practically secure''. In theory this is true, but the problem is that once someone finds the vulnerability, the finder may just exploit it instead of helping to fix it. Having unknown vulnerabilities doesn't really make the vulnerabilities go away; it simply means that the vulnerabilities are a time bomb, with no way to know when they'll be exploited. Fundamentally, the problem of someone exploiting a vulnerability they discover is a problem for both open and closed source systems. It's been argued that a system without source code is more secure in this sense because, since there's less information available to an attacker, it would be harder for an attacker to find the vulnerabilities. A counter-argument is that attackers generally don't need source code, and if they want something like source code they can use disassemblers to re-create it from the binary. See Flake [2001] for one discussion of how closed code can still be examined for security vulnerabilities (e.g., using disassemblers). In contrast, defenders won't usually look for problems if they don't have the source code, so not having the source code puts defenders at a disadvantage compared to attackers.
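To make the point concrete, here is a minimal sketch of how little code an attacker needs to start poking at a closed source binary: it simply prints every run of printable ASCII characters found in a file, much as the standard Unix strings(1) utility does. It is purely illustrative (the minimum run length and buffer size are arbitrary choices for this sketch), not a real analysis tool; a serious attacker would reach for strings(1), a disassembler, or a decompiler instead.

    /* mini_strings.c - a minimal sketch of the strings(1) idea:
     * print every run of at least MINLEN printable ASCII characters
     * found in an arbitrary (e.g., closed source) binary.
     * Illustrative only. */
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MINLEN 6                     /* ignore runs shorter than this */

    int main(int argc, char **argv)
    {
        FILE *f;
        int c;
        char buf[4096];
        size_t len = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s BINARY\n", argv[0]);
            return EXIT_FAILURE;
        }
        f = fopen(argv[1], "rb");
        if (!f) {
            perror(argv[1]);
            return EXIT_FAILURE;
        }
        while ((c = fgetc(f)) != EOF) {
            if (isprint(c) && len < sizeof(buf) - 1) {
                buf[len++] = (char)c;    /* extend the current run */
            } else {
                if (len >= MINLEN) {     /* print a long enough run */
                    buf[len] = '\0';
                    puts(buf);
                }
                len = 0;
            }
        }
        if (len >= MINLEN) {             /* print any final run */
            buf[len] = '\0';
            puts(buf);
        }
        fclose(f);
        return EXIT_SUCCESS;
    }

Run against a typical server binary, even a toy like this turns up file paths, format strings, error messages, and sometimes embedded credentials - information that is not meaningfully protected by withholding the source code.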
It's sometimes argued that open source programs, because there's no enforced control by a single company, permit people to insert Trojan horses and other malicious code. This is true, but it's equally true for closed source programs: a disgruntled or bribed employee can insert malicious code, and in many organizations it's even less likely to be found (since no one outside the organization can review the code, and few companies review their code internally). And the notion that a closed-source company could be sued later offers little protection in practice; nearly all licenses disclaim all warranties, and courts have generally not held software development companies liable.
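As a purely hypothetical illustration of why such insertions are hard to catch under any development model, consider how innocuous a malicious change can look. In the sketch below (the structure, names, and magic option value are all invented for this example), a single ``='' where ``=='' was intended turns what reads like an error check into a silent privilege escalation: whenever the magic option value is passed, the caller's stored user ID is set to 0 and nothing visibly unusual happens.

    /* subtle_backdoor.c - hypothetical, illustrative example only. */
    #include <sys/types.h>

    #define OPT_MAGIC 0x5a5a          /* made-up option value */

    struct session {
        uid_t uid;                    /* user ID recorded for this session */
    };

    int handle_options(struct session *s, int options)
    {
        /* Reads like input validation; is actually a back door.
         * When options == OPT_MAGIC, the expression (s->uid = 0)
         * assigns 0 (root) to the session's user ID and evaluates to
         * 0 (false), so the 'return -1' never executes and the call
         * appears to succeed normally. */
        if ((options == OPT_MAGIC) && (s->uid = 0))
            return -1;
        return 0;
    }

A one-character change like this is easy to miss whether the reviewers are company employees or outside volunteers; the defense in both cases is careful review, not the development model by itself.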
Borland's InterBase server is an interesting case in point. Some time between 1992 and 1994, Borland inserted an intentional ``back door'' into their database server, InterBase. This back door allowed any local or remote user to manipulate any database object and install arbitrary programs, and in some cases could lead to controlling the machine as ``root''. This vulnerability stayed in the product for at least 6 years - no one else could review the product, and Borland had no incentive to remove the vulnerability. Then Borland released its source code in July 2000. The "Firebird" project began working with the source code and uncovered this serious security problem with InterBase in December 2000. In January 2001, CERT announced the existence of this back door as CERT advisory CA-2001-01. What's discouraging is that the back door could be easily found simply by looking at an ASCII dump of the program (a common cracker trick). Once this problem was found by open source developers reviewing the code, it was patched quickly. You could argue that, by keeping the password unknown, the program stayed safe, and that opening the source made the program less secure. I think this is nonsense, since ASCII dumps are trivial to do and well-known as a standard attack technique, and not all attackers have sudden urges to announce vulnerabilities - in fact, there's no way to be certain that this vulnerability has not been exploited many times. It's clear that after the source was opened, the source code was reviewed over time, and the vulnerabilities were found and fixed. One way to characterize this is to say that the original code was vulnerable, its vulnerabilities became easier to exploit when the source was first released, and then finally these vulnerabilities were fixed.
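To see why an ASCII dump is all it takes, recall that every string literal compiled into a C program is stored verbatim in the binary (typically in its read-only data section). The fragment below is purely illustrative - the account name and password are invented, not the actual InterBase values - but it shows the pattern: hardcoded credentials that the compiler cannot hide from anyone who runs strings(1) (or the scanner sketched earlier) over the executable.

    /* hardcoded_credentials.c - illustrative only; the credentials
     * below are invented and are NOT the InterBase values. */
    #include <stdio.h>
    #include <string.h>

    /* Both literals end up verbatim in the compiled binary, so an
     * ASCII dump of the executable reveals them immediately. */
    static int is_backdoor_login(const char *user, const char *pass)
    {
        return strcmp(user, "maintenance") == 0 &&
               strcmp(pass, "letmein-2000") == 0;
    }

    int check_login(const char *user, const char *pass)
    {
        if (is_backdoor_login(user, pass))
            return 1;                 /* back door: bypass normal checks */
        /* normal authentication would go here */
        return 0;
    }

    int main(void)
    {
        /* The back door grants access without any real account. */
        printf("%d\n", check_login("maintenance", "letmein-2000"));
        return 0;
    }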
So, what's the bottom line? I personally believe that when a program is first made open source, it often starts out less secure for its users (through exposure of vulnerabilities), and over time (say a few years) it has the potential to become much more secure than a closed program. Just making a program open source doesn't suddenly make it secure; several conditions have to be met for an open source program's security potential to be realized:
First, people have to actually review the code. This is one of the key points of debate - will people really review code in an open source project? All sorts of factors can reduce the amount of review: being a niche or rarely-used product (where there are few potential reviewers), having few developers, and use of a rarely-used computer language.
One factor that can particularly reduce the likelihood of review is not actually being open source. Some vendors like to present their ``disclosed source'' (also called ``source available'') programs as open source, but since the program owner has extensive exclusive rights, others have far less incentive to work ``for free'' for the owner on the code. Even open source licenses with unusually asymmetric rights (such as the MPL) have this problem. After all, people are less likely to voluntarily participate if someone else will have rights to their results that they don't have (as Bruce Perens says, ``who wants to be someone else's unpaid employee?''). In particular, since the most motivated reviewers tend to be people trying to modify the program, this disincentive to participate reduces the number of ``eyeballs''. Elias Levy made this mistake in his article about open source security; his examples of software that had been broken into (e.g., TIS's Gauntlet) were not, at the time, open source.
Second, the people developing and reviewing the code must know how to write secure programs. Hopefully the existence of this book will help. Clearly, it doesn't matter if there are ``many eyeballs'' if none of the eyeballs know what to look for.
Third, once found, these problems need to be fixed quickly and their fixes distributed. Open source systems tend to fix the problems quickly, but the distribution is not always smooth. For example, the OpenBSD developers do an excellent job of reviewing code for security flaws - but they don't always report the identified problems back to the original developer. Thus, it's quite possible for there to be a fixed version in one system, but for the flaw to remain in another.
In short, the effect of open source software on security is still a major debate in the security community, though a large number of prominent experts believe that it has great potential to be more secure.