The Linux community has had major successes over the past few years, and many of these are a direct result of the high quality and stability of the code base. Linux server deployments have made the platform indispensable to everyone from governments to major corporations, but it was not so long ago that the battle for the desktop seemed lost. Today, Firefox and OpenOffice have made major inroads into territory once completely within Microsoft's grip, and projects like X.org, Gnome, and Beryl have brought the look and feel of the Linux desktop to an unprecedented level of polish.

How does this all relate to security? Allow me to offer a few perspectives. First, it is easier to secure systems that are less complex, and server systems designed to perform a specific set of functions (say, serving webpages) are generally less complex than desktop systems loaded with software derived from large code bases. This has helped Linux, together with high-quality projects such as Apache, at least avoid a poor reputation for security as a server platform (and that is a drastic understatement). Securing systems against client-side vulnerabilities is harder because the complexity of the target is greater, but this is where the open source development model has a huge role to play.
Consider for a moment the power of the scientific and academic communities around the world. Why does science really work? The main reason is the principle of peer review. Research that is reviewed by knowledgeable peers is worth more than research performed in a vacuum. Peer review makes community endeavors stronger, and it is a driving factor behind Wikipedia's meteoric success. Similarly, code that is reviewed by many developers (something only the open source model makes possible at scale) is of higher quality than closed-source code written by a single entity. A great example of this is the Firefox web browser: by nearly every measure, security not least, it is a better alternative on Windows systems than Microsoft's own IE browser.