Interview with Brian Hatch, author of “Hacking Exposed Linux”

Who is Brian Hatch? Introduce yourself to our readers.

Brian Hatch is a hacker in the positive sense – a coder, tinkerer, and tester. I love to prod software into doing things it shouldn’t be able to, be it for good or ill.

My love of Linux comes from the fact that all the code is there for your perusal, modifications, and bastardizations. I’m constantly testing and breaking my laptop, putting backdoors and Trojans on it, and occasionally I need to reinstall my system from scratch to be sure I haven’t irrevocably destroyed any hope of stability and security in my quest to do weird things.

When I’m not tinkering in Linux or security, I’m… Hmmn. Wait, I can’t think of a time that I’m not tinkering in security. I should never have gotten a phone with an SSH client.

How long have you been working with Linux?

I first started playing with it in 1993, but it became my primary desktop OS in 1995. That was a laptop, and damn but was that a tricky beast to set up back then. It originally ran via loadlin and everything lived on a DOS partition because I needed to have Windows available for corporate email (Lotus Notes). Luckily I left that company and was able to ditch Windows for good. Of course I’d been using GNU software for a long time before I had Linux on my desktop. SunOS/Solaris and IRIX machines were my usual stomping grounds — I still have my Indy somewhere in the attic.

The beautiful thing about Linux is that the entire kernel is Free Software/Open Source, as are most of the userland tools. Having the entire code base of your software makes tweaking possible, and allows me to have complete control of my system. For example, I’ve occasionally modified the ‘crypt’ password hashing function on my systems. Since most password crackers run offline, often with crypt rewritten in optimized assembly, their results would never be valid against my machine’s hashes. This is the kind of ability you have when a system’s source is completely available to you. I can’t imagine going back to using something where I can’t see each and every line of code.
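
To make the idea concrete, here is a minimal sketch of one way such a modification could work; the pepper constant, the function name, and the approach itself are illustrative assumptions, not the actual change described above. A stock cracker computing plain crypt(password, salt) will never match hashes produced this way (build with -lcrypt on glibc systems):

    #include <crypt.h>   /* glibc: declares crypt(); link with -lcrypt */
    #include <stdio.h>

    /* Hypothetical site secret, prepended before hashing. */
    static const char PEPPER[] = "not-the-real-secret";

    /* Use glibc's MD5 crypt ($1$...) salts so the whole peppered
       string is hashed; classic DES crypt truncates at 8 chars. */
    char *site_crypt(const char *password, const char *salt)
    {
        char peppered[256];
        snprintf(peppered, sizeof(peppered), "%s%s", PEPPER, password);
        return crypt(peppered, salt);  /* standard hashing underneath */
    }

    int main(void)
    {
        /* Same password and salt, different hash than stock crypt(). */
        printf("stock: %s\n", crypt("hunter2", "$1$ab$"));
        printf("site:  %s\n", site_crypt("hunter2", "$1$ab$"));
        return 0;
    }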

How did you get interested in computer security?

I don’t ever remember getting interested in it — it seemed to be one of my innate desires for as long as I can remember. I guess I was always paranoid and mistrusting.

Back when I had my first Apple ][ machine, you’d need to boot off the floppy drive or tape. The computer would run the program called ‘hello’ on the floppy if it was available. Well, I sure didn’t want anyone looking at my files and programs, so my hello program was this paranoid thing that required two correct passwords (each more than 10 characters long) to get in or it would reboot the machine. If you correctly authenticated, the thing had a fully functional text file management/program execution environment.

Now, not only was this exceedingly paranoid (aside from hello, the only other program on the disk was Snake Byte), but it was still vulnerable. Anyone could simply boot a different disk and then stick mine in to access what was on it. So I learned to modify the disk structure to foil that avenue of attack. Of course, anyone with a disk editor could still figure out what was on it if they tried hard enough. I considered adding some sort of encryption to the mix, but never got around to it, and likely I would have fallen prey to the holy grail of all newbie cryptographers: XOR with a short ASCII key.
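
For anyone curious why that scheme is the classic newbie downfall, here is a toy sketch (the message and key are invented, purely for illustration): with a repeating ASCII key, anyone who can guess a few bytes of plaintext recovers the key directly, since key = plaintext XOR ciphertext.

    #include <stdio.h>
    #include <string.h>

    /* Repeating-key XOR: the same operation encrypts and decrypts. */
    static void xor_crypt(unsigned char *buf, size_t len,
                          const char *key, size_t keylen)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= key[i % keylen];
    }

    int main(void)
    {
        unsigned char msg[] = "CATALOG OF SECRET FILES";
        const char key[] = "ABC";              /* the short ASCII key */
        size_t len = strlen((char *)msg);

        xor_crypt(msg, len, key, strlen(key)); /* "encrypt" */

        /* Attacker guesses the file starts with "CATALOG" and XORs
           the guess against the ciphertext: the key falls right out. */
        const char guess[] = "CATALOG";
        for (size_t i = 0; i < strlen(guess); i++)
            putchar(msg[i] ^ guess[i]);        /* prints ABCABCA */
        putchar('\n');
        return 0;
    }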

Do you have any favourite security tools? Which are they?

A lot of tools that are considered ‘security’ tools are excellent for general network connectivity testing and debugging. For example, I use nmap every day in a non-security-specific context, yet it is generally considered a security tool.

If I were stranded alone on a desert island with only one tool from each of the major security categories, I’d use Nmap for port scanning, p0f for passive OS detection, Nessus for vulnerability testing, Snort for intrusion detection, Hogwash for inline intrusion filtering, GnuPG for file encryption, OpenSSL for crypto libraries, OpenSSH for file transfer / remote login / remote execution / X11 forwarding / secure port forwarding, Netfilter/iptables for firewalls and ACLs, vi for file editing, and Netcat and Perl for everything else.
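
As a flavour of what the first of those actually does under the hood, here is a minimal TCP connect() scan of localhost, the same technique as Nmap’s -sT mode: a completed three-way handshake means the port is open. This is purely an illustrative sketch; Nmap itself is vastly faster and smarter.

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        /* Try to complete a TCP handshake on each well-known port. */
        for (int port = 1; port <= 1024; port++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            addr.sin_port = htons(port);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                printf("port %d open\n", port);
            close(fd);
        }
        return 0;
    }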

You have suddenly been given the ultimate power to change the world in one of two ways: either you can make all programmers into perfect coders, or you can make all users knowledgeable about the security implications of their actions. Which do you choose?

This is a tough question. Even if all software is completely secure, you still have a VEBKAC situation. (Vulnerability exists between keyboard and chair.) How many people click ‘ok’ when their browser says the remote site’s SSL certificate isn’t valid? How many use the same password for their home email as for their work accounts, type it over unencrypted connections, and pick ‘password’ anyway?

Then again, if the user knows the correct response to any security-related action, that does no good if the underlying software is built poorly. Their only available response would be to not use any software at all.

So I’d need to pick the former: magically modify all programmers to be flawless security geniuses. However, the best of both worlds could still be achieved. These uber-programmers would prevent user cluelessness from subverting security. The user will no longer have the opportunity to just click ‘ok’; instead you’ll get a dialog box like this:

“The site to which you’ve connected does not have a certificate that is signed by a trusted CA. If you’d like to continue anyway, please explain the security ramifications of this decision and why you consider it necessary.”

No “ok” button, just an empty text field. The user has a chance to provide a correct and applicable response, such as “The server’s certificate is signed by my company’s CA, not a global CA. I have verified the CA’s information, including fingerprint, against the signed laminated card provided personally by my company’s system administrator and they match, so I am reasonably sure that there is no chance of a man-in-the-middle attack.”

Perhaps I should call this ‘security through bullying the user into not accepting insecure modes of operation.’
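
For reference, the fingerprint check that laminated card implies is mechanical and easy to script. Here is a sketch using OpenSSL’s library (the file name is a hypothetical placeholder; build with -lcrypto): it prints the certificate’s SHA-1 fingerprint for comparison by hand.

    #include <openssl/evp.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>
    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("company-ca.pem", "r");  /* hypothetical path */
        if (!fp) { perror("fopen"); return 1; }

        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        if (!cert) return 1;

        /* The fingerprint is just a hash of the DER-encoded cert. */
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len;
        X509_digest(cert, EVP_sha1(), md, &len);

        for (unsigned int i = 0; i < len; i++)
            printf("%02X%c", md[i], i + 1 < len ? ':' : '\n');

        X509_free(cert);
        return 0;
    }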

Some believe the best security model is to have highly audited and secure code. Others believe that you should employ kernel modifications that go beyond the standard Unix security model. What are your thoughts?

The bare minimum standard for any secure system would be that the code itself is secure. Code auditing is one of the central tenets of OpenBSD, for example, and they’ve had bragging rights for many years by being able to point to internal code audits that fixed bugs before they were found to be exploitable, often years before. Unfortunately, in the Linux community there are fewer folks taking on this task of proactive code auditing. The Linux distro with the greatest emphasis on code audits is Openwall GNU/*/Linux, aka Owl. The Openwall folks, who include such experts as Solar Designer, have stressed code audits to the point that their code produces no warnings, even trivial ones, when compiled with ‘gcc -Wall’.
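
To see why even ‘trivial’ warnings matter, here is a toy example of the class of bug that gcc -Wall flags and a clean-compiling code base never ships (the scenario is invented, not taken from any real audit):

    #include <stdio.h>

    int main(void)
    {
        int authenticated = 0;

        /* BUG: '=' where '==' was intended, so every user is
           authenticated. With -Wall, gcc warns: "suggest parentheses
           around assignment used as truth value". */
        if (authenticated = 1)
            printf("access granted\n");

        return 0;
    }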

Having secure, audited code is a must. However, I still prefer to use kernel security patches when possible. Audited code helps keep the bad guys off the machine; advanced kernel patches can both prevent them from getting in and limit the damage if they do find a way in. Even if the software on your machine is completely locked down, a malicious cracker can find some avenue onto your system. For example, if one of the administrators’ desktop machines is broken into and that administrator then accesses the secured server, the cracker can ride that access in.

An advanced security kernel patch can protect your machines beyond what the traditional Unix model offers. If a cracker, even after getting in as root, cannot remount the partitions in read-write mode, cannot stop or start your daemons, cannot bind any network sockets or make outbound connections, and cannot read protected files, they’ll probably move on to easier pickings.
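
As a taste of that ‘even root can’t’ idea, here is a sketch using the Linux capability bounding set; that particular mechanism reached the mainline kernel only later, while the patches of this era (grsecurity and LIDS are well-known examples) imposed broader restrictions of the same flavour. The demonstration assumes it is run as root:

    #include <linux/capability.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Drop CAP_SYS_ADMIN from the bounding set (needs
           CAP_SETPCAP). No program exec'd from here on can ever
           hold that capability again, root included. */
        if (prctl(PR_CAPBSET_DROP, CAP_SYS_ADMIN, 0, 0, 0) != 0) {
            perror("prctl");
            return 1;
        }

        /* Even as uid 0, the shell below cannot remount
           filesystems: mount(2) now fails with EPERM. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }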

How long did it take you to write “Hacking Exposed Linux” and what was it like? Any major difficulties?

Well, first of all, I still call it “Hacking Linux Exposed”, and you can read my rant on the topic if you want…

As with HLEv1, one of the biggest problems with writing was the fact that the publisher required everything in Word. Yes, that’s right, we had to write a Linux book in a proprietary document format. While VMware (for HLEv1) and Crossover Office (for HLEv2) allowed me to run Word on my Linux box, it was still no more stable than Word on Windows, which meant that I frequently lost huge chunks of my content and had to rewrite them from scratch. Again, I’ll stop ranting about this difficulty; I’ve managed to mostly suppress those memories.

We had a lot of organisation issues in HLEv1 that we wanted to fix, plus we wanted to re-write some of the content that was written by the original contributing editors. Then, naturally, we wanted to add a lot of new content. For example, the ‘post intrusion’ chapter grew so much that it became three chapters of its own, covering more back doors, encrypted access methods, Trojans, and loadable kernel modules.

I didn’t want to create a book that was just umpteen hundred pages of “here’s how tool foo works; now, here’s how tool bar works.” At heart, I am a teacher, and though I could have had fun showing each and every security-related tool out there, it wouldn’t have taught the concepts I wanted to get across. Instead, I wanted to teach the theory of security, and the specific tools and methods to achieve it in the Linux/Unix world, illustrating it all with actual exploits and defences. The only way to really learn is by doing, and I wanted to get folks interested in going out and probing their own systems, testing or compromising their own security, to understand things more intimately.

Overall, HLEv2 probably took nine months for James and me to write. That would have been shortened to six months or so had we been able to write in a useful format, such as LaTeX, which would also have allowed us authors and the editors to use CVS more fully.

What’s your take on the adoption of Linux in the enterprise? Do you think it will give a boost to security?

The talk in all the trade rags and media is that there’s an increased focus on security throughout the business world. Unfortunately, from what I’ve seen recently, it seems that the focus is on “hoping no one realizes our company is only giving it lip service.” While some businesses are taking great steps, most are putting security on the back burner until the economy turns around. (I’m speaking from a US-centric position here because that’s where I am. Much of the world is taking better steps toward security than the US.)

The mentality is that security is second tier, a nice added bonus for when times are going well and you can afford to ‘waste money’ on something that doesn’t bring in revenue. How many typical companies are currently growing their security teams, or for that matter employing people who are dedicated to nothing but security?

Security is still not part of ‘doing business as usual’ in the minds of most companies. When convenient, it is listed as a supplemental bullet point in their considerations, but it is never part of their primary decision. This extends from how they design and protect their networks to the desktop products they purchase. Internet Explorer comes with their Windows desktop, so it’s the standard, regardless of how filthy its track record is.

Those companies that are taking security seriously, that are working to make it part of the mentality, are going to be finding that closed source operating systems cannot provide the same level of configurability and security as the open operating systems. Take a BSD or Linux machine and you can audit or change absolutely anything you wish. You can always investigate any worry you have. Think the vendor has left themselves a back door? Check the source. Think that something isn’t working properly, or the documentation is not accurate? Check the source. Found a bug but the vendor isn’t getting around to fixing it fast enough? You have the source for everything, and you can make any change you want.

This ultimate configurability gives you control over every aspect of your systems, and cannot be paralleled in closed source environments. Security-conscious environments will be drawn toward open source systems. And open software is appealing for general business reasons as well. When you have the entire code base for your software, no vendor can manipulate you into licensing fees and costly upgrades. Your software could be desupported by its creator, just as occurs with proprietary software, but you have the code, so you can keep it functioning without any vendor assistance. You can’t be locked out of code you possess that is under one of the Open Source compatible licenses, such as the GPL or BSD licenses.

Of course, that last point is important: just having the source is not sufficient if you do not have the rights to do what you want with it. Some companies are making source code available in limited cases under extremely restrictive licenses. Having the source is not enough; you must have the ability to modify and rebuild from the source. Don’t get suckered into anything less.

What do you think about the full disclosure of vulnerabilities?

I believe in full disclosure when done in a responsible manner. If someone finds a bug in a software product, they should not go blabbing it to the world; they should contact the maintainers of that software, be it open or closed source, to get the problem fixed. Having exploits out in the wild before the maintainer can provide a patch does not help the security of our environment. There are several disclosure policies available that provide suggested timelines for working with a vendor before going public. My preference is still for Rain Forest Puppy’s RFPolicy, as it strikes the right balance between giving the vendor a chance to fix the problem and forcing them to do so in a timely manner.

I adamantly do not support hiding or denying vulnerabilities. A vendor who gets a report about a vulnerability and attempts to convince the researcher to keep quiet is reprehensible. If one person can find a bug, someone else can too, so likely there’s a malicious cracker out there who has also discovered the vulnerability and will exploit it until it is fixed. Public disclosure of the vulnerability is often the only way to force the hand of the vendor, to shame them into fixing the problem. It is up to the discoverer to decide when a vendor is not living up to its obligations and to go public with the vulnerability before the vendor has a patch.

So the last facet of this question is this: once a problem has been discovered, and a patch or upgrade made available, should full exploit details be provided to the public? This is one of the main points on which security folks disagree. I believe that there is no reason not to provide sufficient details and/or exploit code to prove the vulnerability. When a vulnerability is known, you need to upgrade, regardless of whether there is ready-to-run code to exploit it. Providing that code can force the hands of administrators. It also allows administrators to test their patch procedures and verify they were successful; in this way, proof-of-concept code is actually a beneficial thing. Finally, it gives programmers, security-proficient or not, a way to learn what programming mistakes can be made and how they can be tested. This benefits everyone by hopefully producing better programmers and code over time.

In your opinion, where does Linux need the most software development at the moment?

There are two main places that Linux machines can live: in the server closet and on the desktop. Each has very different needs. My personal theory is that most Linux programmers are more interested in writing things such as web or mail or file servers, where your ‘bragging rights’ come from how fast and well it does what it needs to do. I know I’d be much more interested in creating a secure network socket between two custom embedded Linux devices than creating a friendly GUI user interface; I’d not be at all qualified for the latter. My normal usage involves 15 text TTYs, most running screen, and use of X11 only when I need to visit websites that require JavaScript.

However, I’m extremely glad there are programmers out there who have the desire and abilities to create the software that will be needed to make Linux a solid desktop operating system: easy-to-use window managers, word processors, spreadsheets, and the like. Unfortunately, that still leaves a lot of products we don’t have covered. We don’t have the specialty software, such as billing systems, payment processing, and the other products you need for the accounting department, for example. Some of these can be run under WINE/WineX/Crossover or virtual machines such as VMware. However, the more things you run in emulation, the less appealing Linux is and the more ammunition closed-minded managers and IT folk have to prevent the switch to Linux.

The problem is that creating these tools takes time and manpower. I think in the future we will see more companies providing programmers to open source projects; those companies will see the features they need implemented first, yet still get a global set of testers and users to find bugs. This is a change in business operations that will take a bit of time to sink in, but the result will be a global situation in which companies all benefit and have control of their environment. No more vendor lock-in, closed document formats, or forced upgrades. Life will be good.

Linux is already ready for prime time in any server closet. You can get a Linux box to be your firewall, mail server, file server, web server, anything that involves the word ‘server’, quickly, easily, and securely. Most Linux distributions configure their software to be secure in their default configurations. (Those that do not are at least trending that way over time.) Since Linux processes are discrete and isolated, unable to clobber or adversely affect each other without such interaction being set up ahead of time, you have more innate stability. Should your mail server process die, it won’t hurt your web server at all. No GUI is required to configure your server software, so you can make configuration changes from your desk, from your home, or on a wireless network at Starbucks. Linux has enough server software to replace almost anything in any organisation today. While replacing legacy servers with Linux may be scary or unnecessary, new installations that use Linux are invariably more manageable and scalable.

What are your plans for the future? Any exciting new projects?

I’ve got a book or two in my head, should I ever have time to get them down on paper, and a Linux security project that I can’t yet talk about; it would be extremely fun for me and useful to the community, and will hopefully begin soon. I also have a number of Stunnel (SSL wrapper) features and patches in my head that I want to get integrated into the default code base.

However the most visible project for the next 9 months will be the development of a new child process that my fiancee and I have forked. The expected release date is late February, but things may fluctuate two weeks in either direction. We’ll be adding this to our current ptree listing which contains one leaf process Reegen (currently version 3.0). Ahh, life is good being a $PPID.
