Fixing the broken windows of software security

Last month I discussed how we can fix software security issues only by making sure we use libraries and frameworks that don’t allow classes of vulnerability to exist. Of course, that’s easier said than done. Although we have plenty of people who understand this today, they are not necessarily in positions of power where they can make a difference. This is a great opportunity for the idealists among us, who are not afraid to architect their lives so that they end up wielding sufficient influence to attempt to solve the problem of insecure software. We have people like that, but there aren’t enough of them. Thus, to increase our chances of succeeding, we need other strategies.

Although last month I dismissed education as futile in the context of completely solving software security problems, education is actually a crucial part of the overall process. The obvious benefit is an increase in the quality of the software produced by organizations that invest in education. In the short term, that’s probably the best return on investment you can expect.

More interesting, perhaps, is how education changes the culture and guides developers toward an understanding that security should be an inherent attribute of all software they produce. This cultural shift is significant because developers, both within an organization and globally, set standards for other developers. They also educate newcomers, reducing the need for education over time.

This idea is not new and even has a name; it’s called the Broken Windows theory. The name comes from an article of the same name published in The Atlantic in 1982. I encourage you to read the article in its entirety, but here’s the main observation:

Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. […] one unrepaired broken window is a signal that no one cares, and so breaking more windows costs nothing.

In the context of software security, developing new software without paying attention to security is a signal that no one cares, and that’s where many organizations are today. But, with sufficient effort sponsored from the highest levels, the culture can change, leading to a much improved security posture for the entire organization. Not overnight, of course.

If enough organizations start treating software security seriously, they will start to affect the global development culture, paving the way for lasting systemic improvements. First, security-aware developers are more likely to embrace more secure libraries, even if they are more difficult to use. Second, security leaders are more likely to emerge, increasing the pace of improvement and leading to even better security. Third, in a culture that prefers secure software, key industry players are more likely to make the tough decision to reject backward compatibility in favor of future security.

I have observed the effects of caring about security in the SSL/TLS space, where my attention has been focused for the last five years. When I began my work on SSL Labs, the quality of SSL configuration was generally poor and almost no one paid any attention to it. But, very slowly, things began to change. The key contributing events were the discoveries of several critical flaws, which threw Internet PKI into the spotlight. These discoveries served as the initial sparks, igniting the interest of wider audiences and eventually leading to a critical mass of people determined to fix encryption on the Internet.

The good news is that we can do the same for software security as a whole, although some aspects will take longer than others. We’re generally on the right path, and we mustn’t be discouraged by the fact that things are improving ever so slowly. As long as they’re improving, we’ll be all right.
