You have suddenly been given the ultimate power to change the world in one of two ways: either you can make all programmers into perfect coders, or you can make all users knowledgeable about the security implications of their actions. Which do you choose?
This is a tough question. Even if all software were completely secure, you would still have a VEBKAC situation. (Vulnerability exists between keyboard and chair.) How many people click 'ok' when their browser says the remote site's SSL certificate isn't valid? How many use the same password for their home email as for their work accounts, and type it over unencrypted connections? And it's probably 'password' anyway.
Then again, even a user who knows the correct response to any security-related prompt is no better off if the underlying software is built poorly. The only safe response would be to not use any software at all.
So I'd have to pick the former: magically modify all programmers to be flawless security geniuses. The best of both worlds could still be achieved, though, because these uber-programmers would prevent user cluelessness from subverting security. The user would no longer have the opportunity to just click 'ok'; instead, they'd get a dialog box like this:
"The site to which you've connected does not have a certificate that is signed by a trusted CA. If you'd like to continue anyway, please explain the security ramifications of this decision and why you consider it necessary."
No "ok" button, just an empty text field. The user has a chance to provide a correct and applicable response, such as "The server's certificate is signed by my company's CA, not a global CA. I have verified the CA's information, including fingerprint, against the signed laminated card provided personally by my company's system administrator and they match, so I am reasonably sure that there is no chance of a man-in-the-middle attack."
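For the record, that kind of out-of-band fingerprint check doesn't require anything exotic. As a sketch (assuming the OpenSSL command-line tool; the self-signed certificate generated here is a stand-in for the company CA's real certificate), comparing against the laminated card boils down to:

```shell
# Generate a throwaway self-signed certificate to play the role of the
# company CA certificate (in real life you'd already have the CA's PEM file).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Example Corp CA" \
    -keyout /tmp/ca-key.pem -out /tmp/ca-cert.pem 2>/dev/null

# Print the certificate's fingerprint. This is the value to compare,
# character by character, against the one printed on the laminated card.
openssl x509 -in /tmp/ca-cert.pem -noout -fingerprint -sha256
```

If the two fingerprints match, you've verified the CA certificate out of band and can trust certificates it signs; if they differ, someone may be sitting in the middle of your connection.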
Perhaps I should call this 'security through bullying the user into not accepting insecure modes of operation.'
Some believe the best security model is to have highly audited and secure code. Others believe that you should employ kernel modifications that go beyond the standard Unix security model. What are your thoughts?
The bare minimum standard for any secure system is that the code itself is secure. Code auditing is one of the central tenets of OpenBSD, for example, and they've had bragging rights for many years by being able to point to internal code audits that fixed bugs before they were found to be exploitable, often years before. Unfortunately, in the Linux community there are fewer folks taking on this task of proactive code auditing. The Linux distro with the greatest emphasis on code audits is Openwall GNU/*/Linux, aka Owl. The Openwall folks, who include such experts as Solar Designer, have stressed code audits to the point that their code produces no warnings, even trivial ones, when compiled with 'gcc -Wall'.
Having secure, audited code is a must. However, I still prefer to use kernel security patches when possible. Audited code helps keep the bad guys off the machine, but advanced kernel patches can both prevent them from getting in and limit the damage they can do if they find a way in anyway. Even if the software on your machine is completely locked down, a malicious cracker can find some avenue onto your system. For example, if one of the administrators' desktop machines is broken into and that administrator then connects to the secured server, the cracker can ride in along with them.