Traditional vs. Non-Traditional Database Auditing

Traditional native audit tools and methods are useful for diagnosing problems at a given point in time, but they typically do not scale across the enterprise. The auditing holes that are left in their wake leave us blind to critical activities being performed within the systems that contain our most coveted trade secrets, customer lists, intellectual property, and more.

Would we be happy if our bank allowed people into the vault that contains our money without a camera monitoring their activity? Would we want to share our most personal data with a company that isn’t a good fiduciary of our information? The odds are we wouldn’t want to participate in either scenario, but the reality is that this is what happens to our most private data all the time. We simply aren’t aware of it because in the world of electronic data, we don’t “see” what is going on. Employees and partners of companies have the ability to access our personal information in databases all over the world. And, although many of those companies have traditional security in place, most don’t know what is actually happening with our data—and the data of millions of other individuals.

In the recent past, native audit tools such as SQL Profiler, trace functions, and triggers were all we had. But they are no longer the only game in town. A new category of technology has emerged that empowers enterprises to “see” and immediately analyze what is going on with sensitive data. This new technology, called Data Activity Monitoring (DAM), monitors sensitive data as it is accessed on data servers and analyzes the activity to determine whether the user, or the particular access, has the potential to endanger data or create a non-compliant situation.

We have historically shied away from performing extensive monitoring and auditing within our database environments because of performance and manageability issues and something that I call “information glut.” We can gather all sorts of interesting data with native auditing tools, but the result has always been slower systems, more management overhead and so much raw data that making sense of it is nearly impossible.

Performance and native auditing have always been at odds. The more knobs and switches we enable within a tool like SQL Profiler, the more overhead we introduce. This is an inherent problem because native tools consume the same CPU and disk I/O as our production systems. While performance degradation is the downside, the upside is the wealth of data we can extract. With auditing we can capture successful and unsuccessful access attempts, stored procedure activity, transaction duration, and almost anything else we can think of in relation to the activities taking place within the target environment. But the costs of native auditing have typically outweighed the benefits.
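To make that concrete, here is a minimal sketch, in Python, of the kind of raw event data a native trace hands back. It assumes a SQL Server instance (here called "db01") reachable through pyodbc with the default trace enabled; the server name and driver string are illustrative, not prescriptive.

```python
# A minimal sketch, assuming a SQL Server instance named "db01" reachable over
# ODBC with the default trace enabled. It simply reads back the kind of raw
# event data a native trace collects: logins, statement text, duration, success.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db01;Trusted_Connection=yes"
)
cursor = conn.cursor()

# Locate the file backing the default trace. Reading and writing this data uses
# the same CPU and disk I/O the production workload depends on, which is the
# core tradeoff described above.
cursor.execute("SELECT path FROM sys.traces WHERE is_default = 1")
trace_path = cursor.fetchone()[0]

cursor.execute(
    "SELECT StartTime, LoginName, Duration, Success, TextData "
    "FROM fn_trace_gettable(?, DEFAULT)",
    trace_path,
)
for started, login, duration, success, text in cursor.fetchmany(25):
    print(started, login, success, duration, (text or "")[:60])
```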

Another problem with native auditing is manageability, and it grows in direct proportion to the number of systems in place. If we have only a single system to worry about, it is not a concern. Having a bit more table space to manage or a single data source to analyze means we can probably find the time to deal with it; multiply that by two, ten, fifty, or a hundred systems, however, and we have a nightmare. Monitoring systems to make sure they are performing well for users, system design, capacity planning, and other duties tend to be the priorities in most environments, and those duties suffer when there are a large number of audit tables to maintain. Add the work of consolidating and analyzing the collected data, and it becomes a very messy problem. Native auditing tools also aren't designed for high-end analysis, spotting trends, flagging anomalous activity, or finding the infamous needle in the haystack. They are not intelligent analytical tools; they are designed to collect data. So to make sense of the data, we need to build or purchase tools that can analyze the piles of data we've collected.

As I mentioned earlier, an often-overlooked problem with native auditing is information glut. Where do we put the data we've gathered? The last thing we need is another database to dig through to find what we're looking for. While disk space is fairly inexpensive these days, it isn't easy to add storage to a system or to introduce a new server into the environment simply to act as the repository for audit data.

The DAM Cure for Performance and Manageability Issues
DAM technologies address these problems. The DAM market comprises a number of vendors using a variety of methods to enable database auditing. Some concentrate on databases, while others go beyond structured data to audit file share activity. For now, let's focus on monitoring activity in databases. As with native audit tools, each method has its pluses and minuses. Let's review how a single or hybrid approach to monitoring activity can be accomplished without the performance issues, management overhead, or information glut associated with native auditing.

The Many Flavors of DAM Deployment
Generally, there are three DAM deployment methods that can be leveraged to monitor activity within a database environment: network-based, agent-based, and agent-less auditing. The network-based approach typically uses a network appliance to monitor the SQL-protocol activity occurring within the target environment, and it stores that information on the appliance for reporting and alerting purposes. Proper deployment requires a bit of forethought, as there are two deployment models for network-based monitoring: inline and passive.

Inline is exactly what it sounds like: the appliance resides between the target database(s) and the network infrastructure, and all SQL activity passes through the appliance before it reaches the database server. Because it sits in the data path, the appliance can take direct action on the traffic. For instance, if a query extracts 1,000 rows of data but the client is only allowed to retrieve 100 rows, an inline device can typically alter the response to 100 rows. Many inline devices bring additional security-like functions to the table, such as protocol validation and blocking. The goal of protocol validation is to help prevent worm or zero-day attacks against the database server, and blocking is the ability to shut a user's activity down outright.
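As a purely illustrative sketch of that row-limiting decision, consider the policy check below. The RowLimitPolicy class and the list-of-rows result representation are hypothetical stand-ins, not any vendor's API; they simply show the kind of rule an inline device can enforce because the response passes through it.

```python
# Purely illustrative sketch of an inline policy check: the device sits in the
# data path, so it can rewrite a result set before it reaches the client.
# RowLimitPolicy and the list-of-rows result representation are hypothetical.
from dataclasses import dataclass

@dataclass
class RowLimitPolicy:
    client: str        # application or login the rule applies to
    max_rows: int      # largest result set this client may receive

    def apply(self, client: str, rows: list) -> list:
        """Truncate the response if the policy matches and the limit is exceeded."""
        if client == self.client and len(rows) > self.max_rows:
            return rows[: self.max_rows]   # e.g. 1,000 rows in, 100 rows out
        return rows

policy = RowLimitPolicy(client="billing_app", max_rows=100)
result = policy.apply("billing_app", [("row", i) for i in range(1000)])
assert len(result) == 100
```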

There are advantages to inline solutions, such as the ability to take action, alter a response, and validate the protocol. The downsides are latency, service interruptions, and limited scalability. Any device that acts as a forwarding element (a piece of network equipment, or an inline device such as an Intrusion Prevention System (IPS) or database firewall) introduces latency into the environment, so if performance is a concern, carefully weigh the cost of running inline. An inline device also requires a service interruption to install, remove, or upgrade, so mission-critical applications may see more downtime than the business will accept. Finally, inline devices are limited in the total number of connections that can pass through them, so the number of database servers a single device can protect is rather low. That can be fine for a point solution but not necessarily for an enterprise deployment.

The second type of network-based solution is passive. The network appliance monitors activity by capturing and evaluating copies of the data stream between clients and the database servers, as presented by the network infrastructure of the target environment, much the way a network engineer uses a sniffer to monitor traffic. Like the inline approach, it monitors via the network and analyzes the SQL protocol to determine what is relevant and what is not, but its deployment model is fundamentally different. Because it is not in the traffic path, a single passive appliance can scale to cover a large number of database servers, eliminates the latency an inline solution could introduce, and can be installed without any service interruption.
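A minimal sketch of passive capture might look like the following. It assumes the Python scapy library and SQL Server's default port 1433, both of which are illustrative choices; a real appliance would decode the database wire protocol rather than just count payload bytes.

```python
# A minimal passive-capture sketch using scapy (an assumption; any packet
# capture library would do). It watches copies of traffic to SQL Server's
# default port 1433 from a SPAN/mirror port; nothing here sits in the data path.
from scapy.all import sniff, IP, TCP, Raw

def observe(packet):
    """Record who is talking to the database and how much data is moving."""
    if packet.haslayer(IP) and packet.haslayer(TCP) and packet.haslayer(Raw):
        print(f"{packet[IP].src}:{packet[TCP].sport} -> "
              f"{packet[IP].dst}:{packet[TCP].dport}, "
              f"{len(packet[Raw].load)} bytes of SQL protocol payload")

# Monitor only the mirrored database traffic; a real DAM appliance would go on
# to parse the SQL statements out of the payload and analyze them.
sniff(filter="tcp port 1433", prn=observe, store=False)
```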

There are also tradeoffs with a passive solution: it cannot alter a response or block activity in the data path. Inline and passive solutions therefore handle threats somewhat differently. If an inline solution sees a username and application combination it has been told to intercept, it prevents the network packets from reaching the target. A passive solution typically resets the session instead, sending a reset packet to both the client and the server and accomplishing the same goal in a different way.
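For illustration only, that session-reset mechanism can be sketched roughly as follows; the sequence numbers would have to come from the live session the appliance is already observing, and scapy is again an assumed stand-in for the appliance's own packet engine.

```python
# Purely illustrative sketch of the passive "kill the session" response: forge
# a TCP RST toward each side of an observed connection. The sequence numbers
# must be taken from the monitored session, which is why this only works while
# the appliance is watching the traffic.
from scapy.all import IP, TCP, send

def reset_session(client_ip, client_port, server_ip, server_port,
                  client_seq, server_seq):
    # RST toward the server, appearing to come from the client...
    send(IP(src=client_ip, dst=server_ip) /
         TCP(sport=client_port, dport=server_port, flags="R", seq=client_seq),
         verbose=False)
    # ...and toward the client, appearing to come from the server.
    send(IP(src=server_ip, dst=client_ip) /
         TCP(sport=server_port, dport=client_port, flags="R", seq=server_seq),
         verbose=False)
```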

Network-based appliances do all of the heavy lifting in typical database environments, monitoring and auditing activity as it happens without impacting the server itself. But what about encrypted network communications, or local console activity such as database management on the local system via secure shell (SSH), remote desktop protocol (RDP), or the local keyboard and mouse? Network-based solutions generally understand only the native database protocols themselves, leaving a hole in the audit trail. But there are other methods that capture the activity that network-based monitoring misses.

Agent-based and agent-less auditing fill in the gaps left by network-based auditing. Like network-based auditing, agent-based auditing comes in a couple of flavors, depending on the goal of the deployment. Agents can parse database logs (e.g., redo logs), act as a proxy for console or encrypted activity, act as a sniffer that captures console or network activity, or operate as a hybrid that combines several of these methods. Agents, especially hybrids, can potentially do the same things the network-based approach does by monitoring clients as they access the environment over the network, and they can also capture local and encrypted activity, which gives the agent-based approach a wide set of capabilities. Additionally, some agent technology can present before and after values for fields and, beyond monitoring SQL activity, may also be able to monitor database configuration files. On the downside, agents can be very heavy on a system. If they do all of the capture themselves and aren't supplemented by a network-based solution, they will negatively impact the target system, consuming significant CPU, disk, and network I/O to monitor the activity of interest, and on occasion they may still require the use of native auditing, not to mention another database to store the audit data.
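As a purely illustrative example of the before-and-after values an agent can surface, the sketch below diffs a change record. The dictionary-based record format is hypothetical; real agents parse vendor-specific redo or transaction logs to reconstruct the old and new row images.

```python
# Purely illustrative sketch of surfacing before/after values from a change
# record. The dict-based record format is hypothetical; real agents derive the
# old and new row images from vendor-specific redo or transaction logs.
def before_after_diff(change_record: dict) -> dict:
    """Return {column: (old_value, new_value)} for every column that changed."""
    before = change_record["before"]
    after = change_record["after"]
    return {
        column: (before.get(column), after.get(column))
        for column in set(before) | set(after)
        if before.get(column) != after.get(column)
    }

# Example: an UPDATE that quietly raises a credit limit.
record = {
    "table": "customers",
    "before": {"customer_id": 42, "credit_limit": 5000},
    "after":  {"customer_id": 42, "credit_limit": 50000},
}
print(before_after_diff(record))   # {'credit_limit': (5000, 50000)}
```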

Agent-less auditing is similar to the agent-based approach, but it has a significant advantage: it does not always require deploying software. Agent-less auditing is essentially native auditing, but the difference is in the details. Agent-less DAM solutions focus the native audit tools on capturing only the data of interest and then manage that data once it is collected. SQL Profiler can be a powerful tool, and when it is tuned to capture just console activity rather than all network activity, overhead is kept to a minimum. Combine that with some form of automated data management and we can capture the encrypted or console activity. This approach also allows visibility into the DML activity that may occur when stored procedures are called. As with all of the approaches, there are downsides. Separation of duties is challenging when we use native tools, and many industry regulations require monitoring of DBA activity. Tuning the native tools can also be a challenge, because databases from different vendors use different audit methods; this can lead to capturing more than just the encrypted or console activity and overtaxing the servers.
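A rough sketch of that agent-less collection loop appears below. It assumes SQL Server, pyodbc, and a trace already scoped to the data of interest; the trace file path, the monitored server name, the central repository server, and the audit_events table are all hypothetical. The point is simply that the DAM tool drives the native tooling and then manages the collected data itself.

```python
# A minimal agent-less collection sketch, assuming SQL Server and pyodbc. The
# trace file path, server names, and audit_events table are hypothetical; the
# idea is to pull only console-originated events into a central repository.
import pyodbc

DB_HOST = "db01"   # the monitored database server; console sessions originate here

monitored = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db01;Trusted_Connection=yes"
)
repository = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=audit-repo;Trusted_Connection=yes"
)

# Keep only console activity: events whose HostName is the database server itself.
rows = monitored.cursor().execute(
    "SELECT StartTime, LoginName, HostName, TextData "
    "FROM fn_trace_gettable(?, DEFAULT) WHERE HostName = ?",
    r"D:\traces\console_activity.trc", DB_HOST,
).fetchall()

# Ship the events to the central repository so the audit data does not pile up
# on the production server.
repo_cursor = repository.cursor()
for started, login, host, text in rows:
    repo_cursor.execute(
        "INSERT INTO audit_events (event_time, login_name, host_name, statement) "
        "VALUES (?, ?, ?, ?)", started, login, host, text,
    )
repository.commit()
```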

Finding the Right Solution for Your Database Auditing Situation
So how do we sort through the alternatives to arrive at the best solution—one that balances our need for intelligence about data activity with our need to keep business systems humming along? The key is to enable the visibility needed for data protection and compliance initiatives while protecting the performance of systems (and our sanity) by leveraging the strengths of multiple approaches.

The network-based approach alone can handle the network activity but not the console and encrypted activity. The 100% agent approach introduces data management issues and may put significant strain on the database server itself. A 100% agent-less approach may introduce the same issues as the agent-based approach.

The key is a flexible, multi-pronged approach to database auditing. Network-based solutions are critical to the overall auditing effort, but encrypted and console activity call for additional functionality. So most network-based appliances are complemented by an agent-based solution, or in some cases both an agent-based and an agent-less solution. This combination allows the appliance to do the heavy lifting by monitoring the client/server traffic that originates from network-based clients, while the agent and/or agent-less functionality covers encrypted and console activity. Combining the strengths of these different, non-traditional approaches allows for the deployment of a comprehensive audit and monitoring solution with all of the upside and none of the downside. Now we can get the visibility into data activity that we need without the issues associated with traditional, native auditing.

Michael Semaniuk (CISSP, SSCP, MCSE, CCSE+) is the Compliance and Security Solutions Architect at Tizor Systems. Mr. Semaniuk has been involved with hundreds of installation and consulting engagements, including vulnerability and penetration testing, gap analysis and security product deployments. His professional experience includes: Triumph Technologies, Interliant, Akibia, Top Layer Networks and Tizor Systems.
