According to an April 2006 report from the Yankee Group, the various security investments enterprises have made do, indeed, make it more difficult for “criminals, spies and miscreants” to break into corporate networks. However, the report says the criminal element is focusing on new attack strategies, one of which is “quickly creating and launching exploits to vulnerabilities before enterprises can patch against them.”
The so-called zero-day (0-day) attack, in which an exploit is launched against a vulnerability before a patch exists to plug it, has long been a great fear of security professionals. With the criminal element actively seeking out opportunities for such exploits, it's more important than ever for organizations to take stock of their patching strategy.
In so doing, you are likely to come across an age-old argument regarding which type of patching solution is more effective, agent-based or agentless. In many respects, the argument is a red herring, because you can do most of the same things with one architecture that you can with the other. They are simply different ways of performing the same job: either installing a small software "agent" on each target system, or polling targets from a central location, to collect data on the target system.
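The "same job, different mechanism" point can be made concrete with a minimal sketch. The function and host names below are hypothetical, and real products would use WMI, SSH, or a proprietary protocol for the actual transport; the point is only that both models hand the scan engine identical data.

```python
# Minimal sketch contrasting the two data-collection models.
# All names here are hypothetical illustrations, not a real product's API.

def collect_inventory(host_state):
    """Gather the same patch-relevant facts either way:
    the host's name and its installed package versions."""
    return {"host": host_state["name"],
            "packages": dict(host_state["packages"])}

def agent_report(host_state):
    """Agent-based model: a resident process on each host
    reports its own inventory."""
    return collect_inventory(host_state)

def central_poll(hosts):
    """Agentless model: a central server polls each host
    remotely for the same facts."""
    return [collect_inventory(h) for h in hosts]

hosts = [{"name": "web01", "packages": {"openssl": "0.9.8b"}},
         {"name": "db01",  "packages": {"openssl": "0.9.8a"}}]

# Both models yield identical records for the scan engine to analyze.
assert agent_report(hosts[0]) == central_poll(hosts)[0]
```

What differs between the architectures is where the collection code runs, not what it can see.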
Despite these similarities, many myths are floating about that need dispelling. Here we focus on five, which fall into the categories of accuracy, scalability, bandwidth, speed and coverage.
Myth No. 1: Agent-based systems are more accurate.
Some IT professionals believe that being a resident on the client or server enables agent-based systems to collect more information, and ensures they won’t miss machines that are turned off.
Accuracy has to do with the logic behind the scan engine and the quality of the data sent to it for analysis. You can gather exactly the same data from a client or server by polling as you can with a resident agent; it's what you do with that data that matters. And while it's true that an agentless architecture cannot poll a machine that's turned off, it's also true that end users can, and do, disable software-based agents. Additionally, if a user attaches a rogue machine to the network, it won't have an agent and may not be found unless the company has another means of detecting such machines.
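The rogue-machine blind spot can be sketched in a few lines. The addresses are made up, and a real deployment would build the first set from something like a ping or ARP sweep; the logic is just a set difference between what the network shows and what the agent roster knows.

```python
# Hypothetical sketch: a rogue machine attached to the network has no agent,
# so an agent-only view misses it. A network-level sweep (here, a hard-coded
# stand-in for a ping/ARP discovery pass) can still surface it.

discovered_on_network = {"10.0.0.5", "10.0.0.6", "10.0.0.99"}
hosts_with_agents = {"10.0.0.5", "10.0.0.6"}

rogue = discovered_on_network - hosts_with_agents
# rogue == {"10.0.0.99"}  -- visible to the sweep, invisible to the agents
```

An agentless scanner gets this discovery pass as a side effect of how it works; an agent-based shop needs a separate mechanism for it.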
Myth No. 2: Agent-based systems are more scalable.
Another argument for agent-based systems is that agents help perform some of the processing load, thereby enabling these systems to handle larger networks. But with the power of today’s servers, configuring one that can handle agentless scans for any number of machines is not a problem. In fact, large organizations are more likely to struggle with installing and managing agents on all their machines.
Myth No. 3: Agent-based systems consume less bandwidth.
An agent-based system still needs to evaluate the data its agents collect, which means that data must flow over the network at some point, so it does consume a certain amount of bandwidth. And while some older agentless systems did consume significant bandwidth, because they had to read entire copies of files across the network to check versions, more advanced agentless systems have overcome this shortcoming and now consume only moderate amounts of bandwidth.
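The difference between the older and newer agentless approaches is easy to quantify with a toy sketch. The file size and version string below are invented, and the "remote" side is simulated with local variables; the comparison of bytes on the wire is the point.

```python
# Toy illustration (all values hypothetical): checking one file's version
# by copying the whole file versus requesting only its version metadata.

REMOTE_FILE = b"\x00" * 5_000_000   # a 5 MB binary on the target machine
REMOTE_VERSION = "5.1.2600"         # the version string embedded in it

def old_style_check():
    """Older agentless scan: pull the entire file across the network,
    then inspect the copy locally."""
    transferred = REMOTE_FILE
    return len(transferred), REMOTE_VERSION

def lean_check():
    """Leaner scan: ask the remote side for just the version string."""
    transferred = REMOTE_VERSION.encode()
    return len(transferred), REMOTE_VERSION

old_bytes, _ = old_style_check()
new_bytes, _ = lean_check()

# Same answer either way, but vastly less data crosses the wire.
assert new_bytes < old_bytes
```

Both checks reach the same verdict; only the bytes moved to reach it differ, which is exactly the shortcoming the newer systems fixed.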
It's a trade-off: whether agent-based or agentless, a thorough, accurate scan requires more bandwidth than a superficial one, but which would you rather have?
Myth No. 4: Agent-based systems are faster.
You may hear that agentless systems scan each machine sequentially, meaning they take longer to scan an entire network than an agent-based system does and are less able to patch a critical vulnerability in a timely fashion.
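The sequential-scan claim assumes the central server polls one host at a time, which nothing in the architecture requires. As a hedged sketch, with an invented `scan_host` standing in for a remote vulnerability check, a worker pool scans many machines concurrently:

```python
# Sketch only: scan_host is a hypothetical stand-in for a remote check,
# with a short sleep simulating network round-trip time.
from concurrent.futures import ThreadPoolExecutor
import time

def scan_host(host):
    time.sleep(0.05)            # simulated remote vulnerability check
    return host, "scanned"

hosts = [f"host{i:02d}" for i in range(20)]

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = dict(pool.map(scan_host, hosts))
elapsed = time.time() - start

# 20 hosts at 0.05 s each would take about 1 s one at a time;
# with 10 concurrent workers, wall-clock time is a fraction of that.
assert elapsed < 0.5
```

Because the checks are network-bound rather than CPU-bound on the server, concurrency is what sets the scan's overall speed, not whether an agent is resident on each machine.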