Traditional vs. Non-Traditional Database Auditing
by Michael Semaniuk - Compliance and Security Solutions Architect at Tizor Systems - Monday, 29 July 2008.
Another problem with native auditing is manageability, and it grows in direct proportion to the number of systems we have in place. If we only have a single system to worry about, it is not a concern: a bit more table space to manage or a single data source to analyze means we can probably find the time to deal with it. Multiply that by two, ten, fifty or a hundred systems, however, and we have a nightmare. Monitoring systems to make sure they are functioning optimally for our users, system design, capacity planning and other duties tend to be the priorities in most environments, and they will suffer if there are a large number of audit tables to maintain. Add the consolidation and analysis of the collected data and it gets to be a very messy problem. Furthermore, native auditing tools aren't designed to do high-end analysis for trends, anomalous activity or the infamous needle-in-the-haystack data we are interested in finding. They are not intelligent analytical tools; they are designed to collect data. So in order to make sense of the data, we need to build or purchase tools that can handle the analysis of the piles of data we've collected.
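To make the consolidation burden concrete, here is a minimal sketch of pulling each server's native audit table into a central store. It uses Python's built-in sqlite3 purely as a stand-in for real production databases, and the audit_log schema is hypothetical:

```python
import sqlite3

def consolidate(audit_sources, central):
    """Copy each server's local audit_log rows into one central table.

    audit_sources: dict mapping a server name to an open connection.
    central: connection to the consolidation database.
    """
    central.execute(
        "CREATE TABLE IF NOT EXISTS audit_central "
        "(server TEXT, username TEXT, stmt TEXT, ts TEXT)"
    )
    for server, conn in audit_sources.items():
        # Pull every audit row and tag it with the server it came from.
        rows = conn.execute("SELECT username, stmt, ts FROM audit_log")
        for username, stmt, ts in rows:
            central.execute(
                "INSERT INTO audit_central VALUES (?, ?, ?, ?)",
                (server, username, stmt, ts),
            )
    central.commit()
```

Every new server adds another source to feed in, another copy job to schedule and another pile of rows to analyze; that is the manageability problem in miniature.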

As I mentioned earlier, an often-overlooked problem with native auditing is information glut. Where do we put the data that we’ve gathered? The last thing we need is another database so that we can dig through mounds of data to find what we’re looking for. While disk space is fairly inexpensive these days, it isn’t easy to add additional storage to a system or to introduce a new server into the environment to simply act as the repository for audit data.

The DAM Cure for Performance and Manageability Issues

DAM technologies address these problems. The DAM market comprises a number of vendors using a variety of methods to enable database auditing. Some concentrate on databases, while others go beyond structured data to audit file share activity. For now, let's focus on monitoring activity in databases. As with native audit tools, there are pluses and minuses to each method. Let's review how a single or hybrid approach to monitoring activity can be accomplished without the performance issues, management overhead or information glut associated with native auditing.

The Many Flavors of DAM Deployment

Generally, there are three DAM deployment methods that can be leveraged to monitor activity within a database environment: network, agent and agentless auditing. The network-based approach typically uses a network appliance to monitor the activity occurring via the SQL protocol within the target environment, and stores that information on the appliance for reporting and alerting purposes. Proper deployment requires a bit of forethought, as there are two deployment models for network-based monitoring: inline and passive.

Inline is exactly what it sounds like: the appliance resides between the target database(s) and the network infrastructure, so all SQL activity passes through the appliance before it reaches the database server. This gives the appliance a high degree of control over the traffic. For instance, if a query extracts 1,000 rows of data but the client is only allowed to retrieve 100, an inline device can typically truncate the response to 100 rows. Many inline devices bring additional security-like functions to the table, such as protocol validation and blocking of activity. The goal of protocol validation is to help prevent worm or zero-day attacks against the database server, and blocking is the capability to shut the user down.
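The response-trimming behavior described above amounts to something like the following sketch. The client names and limits are hypothetical, and a real appliance does this at the wire-protocol level rather than on in-memory lists:

```python
# Hypothetical per-client row limits an inline appliance might enforce.
ROW_LIMITS = {"reporting_app": 100, "dba": None}  # None means unlimited

def enforce_row_limit(client, rows):
    """Return the result set trimmed to the client's allowed row count."""
    limit = ROW_LIMITS.get(client, 0)  # unknown clients get no rows back
    if limit is None:
        return rows
    return rows[:limit]
```

So a 1,000-row result requested by "reporting_app" comes back as 100 rows, exactly the policy-driven alteration an inline device can perform because every response passes through it.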

There are advantages to inline solutions, such as the ability to take action, alter a response and validate the protocol. The downsides are latency, service interruptions and limited scalability. Any device that acts as a forwarding element (any piece of network equipment, a patch panel, or an inline device such as an Intrusion Prevention System (IPS) or database firewall) introduces latency into the environment. If performance is a concern, carefully weigh the cost of running inline. An inline device also requires a service interruption to install, remove or upgrade, so your mission-critical applications may see more downtime than is acceptable to the business. Finally, inline devices are limited in the total number of connections that can pass through them, keeping the number of databases each device can protect rather low. This can be acceptable for a point solution but not necessarily for an enterprise deployment.

Copyright 1998-2014 by Help Net Security.