Interview with Arne Vidstrom, technical editor of the "HackNotes Windows Security Portable Reference"
by Mirko Zorz - Monday, 20 October 2003.
I've seen this question in lots of places, and it is usually answered with a few more or less specific technical tips, like "apply updates regularly", "turn off services that aren't necessary" and so on. I would like to point out some completely different things that I think apply to everybody involved in security. They aren't necessarily the most important things, but I know they cause a lot of problems out there when done wrong.
  • Never overestimate your own knowledge and capability, or underestimate the need for knowledge. There is a lot for all of us to learn, we all make mistakes, and we all have some imbalance in the weight we put on various aspects of security. Work hard to keep a proper balance - don't overemphasize some things while underemphasizing or completely leaving out others. Learn enough so you *really* know what you're doing, including at least some of what's "under the hood" of the programs you use. Check what you have just done - or even better, have someone else check it. Don't check from just one point of view - for example, if you have made a configuration change, check that the change itself is correct, but also use some tool to confirm that the system actually behaves the way you configured it to (see the sketch after this list). And never, ever think that buying a security product can compensate for a lack of knowledge.
  • If you work at a technical level, then learn to work toward the main goals of your organization. Your security work should, in some way, make it possible for those goals to be fulfilled, usually through a chain of sub-goals. If you apply too little security, bad things will happen; but if you apply too much security, or apply it in the wrong places, you will spend time and money that could be spent on better things. Maximum security everywhere is almost never a main goal in itself, but that is too often forgotten. Learn this and you will get more respect from management people - and then they will listen more to what you say about the technology needs.
  • If you work at more of a management level, then learn never to underestimate the importance of details. No amount of paperwork will make your systems and networks secure unless there is someone who is capable of performing, and actually performs, the final steps of securing them. Too many times I've heard people say "technology is the easy part, that's no problem", and ironically those people often have systems that look like Swiss cheese - and they're not even aware of it (guess why). Learn this and you will get more respect from the technical people - and then they will listen more to what you say about the management needs.
  • Never forget that if the users don't understand security, or don't understand what you're trying to accomplish, your security work is often of no use - or at least of much less use than it could be. Make sure people understand *why* they should do things in a certain way and why they are forbidden to do them in other ways. That will make them much more motivated than simply handing them a long list of "don'ts". And never make users afraid of you, for instance by making them feel bad about their mistakes, because if you do, they'll never come back to ask for help when they run into trouble - which will only make things worse.
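
As a concrete illustration of the first point's advice to check a change from more than one point of view, here is a minimal sketch, assuming Python is available; after disabling a network service in the configuration, it tests whether the port still accepts connections, rather than trusting the configuration file alone. The host and port are hypothetical examples, not anything specific from the interview.

    import socket

    def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical example: after disabling Telnet in the configuration,
    # confirm the system actually behaves as configured.
    if port_is_open("127.0.0.1", 23):
        print("WARNING: port 23 still accepts connections - the change did not take effect")
    else:
        print("OK: port 23 is closed - observed behaviour matches the configuration")

The point is that the configuration file tells you what you *intended*; only an independent test tells you what the system actually does.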
Based on your experiences, do you find proprietary software or open source software to be more secure?

Personally, I try not to think that way at all. My view is that people don't really use software because it is secure - they use it because they have something they want done and the software helps them do it. So the main thing is the functionality needed. Unfortunately, because some people will attack the software, we also need a proper level of security - but the main thing is still that people need software that can do the things they want done. If either proprietary or open source software fulfills those goals, then they should pick the one they need.

Also, the people who develop software usually don't do it in order to make secure software - they want to earn money, or do something fun, or get the feeling that they have given something useful to society. Things like that. I don't think anybody should be forced to develop either proprietary or open source software; they should be able to pick one according to their own goals.

Since I hold this view, I see security as something that has to, in a way, come second. That doesn't mean security should be neglected or applied as an afterthought. It just means that *some* decisions are rightfully made *before* decisions about security, most of the time and by most people. The choice between proprietary and open source software is one of them. I also think that both kinds of software can have good security and both can have bad security. It depends on who develops the software, and on the way it is developed, tested, supported, documented, and so on. There are many important variables other than proprietary versus open source that can be adjusted to increase security without really affecting the other major goals of users and developers.

What do you see as the major problems in online security today?

I think it's hard to pick just a few, and I would rather think about how to minimize the problems. I wish more people would learn more about security. I also think that some software needs to be shipped with more secure defaults. For example, operating systems shouldn't have a single open port by default. Mail clients should be much simpler in design - very few people really need anything more than a program that can send and receive plaintext mail with attached files (and of course it mustn't be possible to open dangerous attachments with just a mouse click). In general, I would like to see much more simplicity in software's interfaces to the "outside world". Complexity is a huge problem if you want few vulnerabilities in software - though on the other hand, complexity isn't that much of a security problem as long as it isn't exposed to input from a potential attacker.
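
As a rough illustration of how small that core mail functionality really is, here is a minimal sketch in Python of sending a plaintext mail with one attachment using only the standard library. The server name, port, addresses, and file name are hypothetical placeholders, not anything from the interview.

    import smtplib
    from email.message import EmailMessage

    # Hypothetical addresses and server - replace with your own values.
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Plain text plus one attachment"
    msg.set_content("The body is plain text only - no HTML, no scripting.")

    # Attach a file as inert data, without executing or rendering it.
    with open("report.txt", "rb") as f:
        msg.add_attachment(f.read(), maintype="text", subtype="plain",
                           filename="report.txt")

    # Deliver over SMTP with STARTTLS.
    with smtplib.SMTP("mail.example.org", 587) as server:
        server.starttls()
        server.send_message(msg)

Everything a mail client adds beyond a core like this - inline HTML rendering, scripting, one-click execution of attachments - widens the interface to the outside world, and with it the attack surface.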

What's your take on the full disclosure of vulnerabilities?

I'm completely for full disclosure as long as some conditions are met. For example, vendors must be given enough time to provide a patch before the vulnerability details are published. I also think that any example exploit code should be as harmless as possible - for example, it shouldn't give the attacker a remote shell, but instead just pop up a window on the target computer, or something similar. Of course, if a vulnerability is important enough, someone will probably implement something nasty anyway, but there is no reason for the discoverer to do so. In general, I think the discoverer's publication of a new vulnerability should focus on providing the information needed by everyone on the "good side" - no more information than that, but also no less.
