Organizations need all the help they can get if they’re to adequately fight back against malware proliferation and malicious activity. We’re about to witness a new dawn for digital forensics.
We’re all familiar with the risks our enterprises face from rogue or untrained IT administrators gaining access to the corporate servers and wreaking havoc. This can be anything from accidental and/or unwanted changes and bad IT practices to corporate espionage and malicious revenge attacks.
This has been a key driver for organisations to develop and store an audit trail of privileged activity across the network, providing clear visibility of what's taking place and who is performing it. More recently, this trail has also been critical to verify an organisation's compliance with legislation.
These activity logs, often touted to auditors as irrefutable evidence of the organisation's regulatory stance, are, to all intents and purposes, examples of digital forensics in action.
Digital forensics can be split into two practices – proactive and reactive forensics. Let’s look at the evidence:
As the name suggests, reactive forensics looks at something that has already happened and then, retrospectively, conducts a post-mortem, analysing the witnessed behaviour to glean what can be learned to prevent it happening again. Often considered the more traditional approach to security, it forms the bedrock of a number of security applications - such as firewalls and anti-virus software.
Conversely, proactive forensics is the practice of looking for something in advance, based on high-level predictive rules. Rather than responding to a situation, proactive forensics can be used as an early warning system, using key characteristics to identify behavioural changes in applications, detect anomalies in network traffic or spot unexpected alterations to system configurations. It requires a very high-level view of everything that's going on across the entire network. However, to be truly effective it must also be capable of issuing timely alerts when something anomalous occurs.
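To make this concrete, the early-warning idea above can be sketched as a small rule engine that scans incoming events and raises timely alerts. This is a minimal, hypothetical illustration: the event fields, rule predicates and the traffic threshold are all assumptions, not a real product's API.

```python
# Minimal sketch of a proactive rule engine (illustrative only).
# Event fields, rules and thresholds are hypothetical assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    source: str   # e.g. "network", "config", "process"
    detail: str
    value: float  # e.g. bytes transferred

# A rule is simply a predicate over an event.
Rule = Callable[[Event], bool]

def traffic_spike(e: Event) -> bool:
    # Flag unusually large transfers (threshold is an assumption)
    return e.source == "network" and e.value > 1_000_000

def config_change(e: Event) -> bool:
    # Any unexpected configuration alteration warrants a timely alert
    return e.source == "config"

RULES: List[Rule] = [traffic_spike, config_change]

def triage(events: List[Event]) -> List[str]:
    """Return alert messages for events matching any rule."""
    return [f"ALERT: {e.source}: {e.detail}"
            for e in events
            if any(rule(e) for rule in RULES)]

events = [
    Event("network", "host A -> external host", 5_000_000),
    Event("process", "scheduled backup", 0),
    Event("config", "sshd config altered outside change window", 1),
]
for alert in triage(events):
    print(alert)
```

In practice the rule set would be far richer and fed by correlated audit-trail data, but the shape is the same: rules run ahead of time over live activity, rather than after an incident.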
The way I see it, both elements go hand in hand. You can't build good proactive monitoring systems without first knowing what to look for. However, that's only half the picture: a monitoring system is only as strong as the rules you use to analyse the information that's coming back.
And therein lies the problem - they're both based on rules. Unfortunately, malicious code writers and insider attackers don't play by the rules, so it's always going to be an ongoing struggle.
Ultimately, what it boils down to is the organisation's ability to create and effectively use an intelligent set of rules to filter the evidence digital forensics correlates, looking for pre-determined behaviour or system configuration changes that it is not expecting.
For example, the use of a privileged identity can be a key indicator of suspicious activity, especially in applications that would not normally require admin rights to run. Take a web browser, for instance: if it were to ask for admin rights, any early warning system should flag that something untoward may be about to occur.
From this proactive position, the system should then reactively qualify the request to determine its legitimacy. It could be something benign - such as installing a trusted ActiveX control - or it could be sinister - such as a drive-by download that is trying to gain admin rights to take control of the system.
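The flag-then-qualify flow described above can be sketched in a few lines. Everything here is a hedged assumption for illustration: the process names, the list of normally unprivileged applications and the "trusted reasons" set are all hypothetical, not drawn from any real endpoint product.

```python
# Hypothetical sketch: proactively flag an admin-rights request from an
# application that normally runs unprivileged, then reactively classify it.
# Process names and trusted-reason strings are illustrative assumptions.
NORMALLY_UNPRIVILEGED = {"browser.exe", "mailclient.exe"}
TRUSTED_ELEVATION_REASONS = {"signed ActiveX control install"}

def assess_elevation(process: str, reason: str) -> str:
    """Two-stage check on a request for admin rights."""
    if process not in NORMALLY_UNPRIVILEGED:
        # Expected elevation, e.g. an installer: no early-warning flag raised
        return "not flagged"
    # Proactive stage: the early warning system has flagged this request.
    # Reactive stage: qualify it against known-benign reasons.
    if reason in TRUSTED_ELEVATION_REASONS:
        return "flagged: benign"       # e.g. a trusted ActiveX control
    return "flagged: investigate"      # e.g. a possible drive-by download

print(assess_elevation("browser.exe", "unknown binary requests admin"))
```

The design point is the separation: the proactive rule is cheap and broad (browsers should not elevate), while the reactive qualification does the finer-grained post-mortem on each flagged request.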