Top 3 biggest mistakes enterprises make in application security

Enterprise information security encompasses a broad set of disciplines and technologies, but at the highest level it can be broken down into three main categories: network security, endpoint security and application security. Network security and endpoint security have advanced greatly in the last few years, and enterprises and government agencies have invested in them appropriately.

Hackers, meanwhile, have switched their focus to a softer target: applications. Enterprise data, and most importantly customer information, is the pot of gold at the end of the rainbow. Applications are the way in.

While there is an increasing recognition in many organizations that application security is an important piece of the overall puzzle, putting together a comprehensive strategy that will effectively reduce an organization’s risk profile is not easy.

In this post, we unpack the top three mistakes enterprises make when implementing an application security strategy.

1. Assuming that a one-size-fits-most perimeter approach can protect today’s distributed applications
Try to draw a border to mark where exactly your perimeter starts and ends, and you'll likely encounter the same problem most major enterprises are now confronting: it's impossible. One way or another, we're all beginning to shoulder a shared risk by building applications that have numerous entry points, leverage web and cloud services, use APIs, and integrate with third-party applications, databases and feeds.

Additionally, any attempt at active prevention that takes place outside of the application itself has no context. Applications transform data by definition. Sophisticated hackers create never-before-seen, dynamic attacks that go undetected by scanners and perimeter defenses, only becoming malicious when reconstructed by the application. In other cases, an innocent but malformed payload may appear harmful to a one-size-fits-all firewall, resulting in false positives. Without knowing what the application is going to do with a piece of data, it is impossible to know whether that data is really malicious or not.
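
As a hedged illustration of that dynamic (the filter and handler below are invented for this post, not taken from any product), consider a payload that carries no recognizable attack signature at the perimeter and only takes its malicious form once the application decodes it:

    import base64

    def perimeter_filter(raw_value: str) -> bool:
        """Naive signature matching, roughly what a context-free perimeter check does."""
        signatures = ["' or '1'='1", "union select", "<script>"]
        return not any(sig in raw_value.lower() for sig in signatures)

    def application_handler(raw_value: str) -> str:
        """The application reconstructs the real value before using it."""
        return base64.b64decode(raw_value).decode("utf-8")

    payload = base64.b64encode(b"' OR '1'='1' -- ").decode("ascii")
    print(perimeter_filter(payload))     # True: the encoded form matches no signature
    print(application_handler(payload))  # ' OR '1'='1' --  (malicious only after decoding)

The perimeter check passes the request because the signature it is looking for simply does not exist until the application has done its own transformation of the data.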

2. Mistaking vulnerability management for application security
There is no question that secure software development is difficult. Application developers are hugely valuable assets to enterprises building critical business software, but they are not security experts. So implementing a Secure Software Development Lifecycle (SSDLC) is extremely challenging for security and application development teams alike.

A fully baked SSDLC will take full advantage of modern static and dynamic application security testing tools (SAST, DAST) to identify known vulnerabilities. This is good. But too often, organizations stop there when in fact there are two glaring issues:

  • The key phrase is “known vulnerabilities”. What about unknown vulnerabilities? After all, that’s what a sophisticated hacker will use.
  • Just finding vulnerabilities doesn’t actually fix anything. The task of remediation falls back on the development team, which is incentivized to release new applications and features quickly. This is how we end up with bulging vulnerability backlogs and un-remediated applications running in production (a typical finding, and its fix, is sketched after this list).
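
To make the distinction concrete, here is a minimal, hypothetical example of the kind of known vulnerability a SAST tool reports, alongside the parameterized fix that typically waits in the remediation backlog (the function and table names are illustrative only):

    import sqlite3

    def find_user_vulnerable(conn: sqlite3.Connection, username: str):
        # Flagged by SAST: attacker-controlled input concatenated into the query text.
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_fixed(conn: sqlite3.Connection, username: str):
        # The fix: bind the input as a parameter so it is never parsed as SQL.
        return conn.execute("SELECT id, email FROM users WHERE name = ?", (username,)).fetchall()

Unknown vulnerabilities, by definition, follow no pattern a scanner has been taught to recognize, which is why finding and fixing known issues is necessary but not sufficient.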

To make things worse, SSDLC methodologies are typically applied only to new application development. One ubiquitous frustration for developers is being asked to go back to old code in legacy applications, which they may not even have written, and attempt to fix vulnerabilities there. This impacts schedules for delivering new applications, ultimately affecting business momentum.

3. Having no visibility into production application attacks
Since analysis of network traffic doesn’t provide any clues as to what an application will do with the data when it executes, security operations teams typically have no way to monitor what is happening inside an application. Successful attacks go undetected.

Ideally, adequate testing for vulnerabilities would happen before applications reach production, which would certainly reduce the risk of exposure. In many cases, though, the pressure to deliver new features and applications quickly means that applications are released to production un-scanned or with un-remediated vulnerabilities.

For larger organizations with hundreds or even thousands of applications, remediating identified vulnerabilities is a huge resource drain and therefore a non-trivial business problem. The burden could be greatly reduced if an organization could prioritize based on which applications are actually under attack and which are not.

What can be done to avoid or mitigate these mistakes
The good news is that there is a new class of security products that can help organizations secure modern application environments.

Runtime Application Self-Protection (RASP) products work inside the production application itself and, unlike perimeter-based security products such as web application firewalls (WAFs), use the context of the application together with the content the application is processing to accurately identify an attack and, in protection mode, automatically neutralize it.
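
As a rough, hypothetical sketch of the idea (a wrapper class stands in for the runtime agent; no real product’s API is shown), an in-app guard sees the final statement the application is about to run, not just the bytes that crossed the network:

    import sqlite3

    class GuardedConnection:
        """Stand-in for a runtime agent: wraps the real connection and inspects
        every statement in the context of the application issuing it."""

        def __init__(self, conn: sqlite3.Connection):
            self._conn = conn

        def execute(self, sql: str, *params):
            lowered = sql.lower()
            # Tautologies or injected unions indicate that user input has changed
            # the structure of the statement the application intended to run.
            if " or '1'='1" in lowered or "union select" in lowered:
                raise PermissionError("blocked by runtime protection: SQL injection")
            return self._conn.execute(sql, *params)

        def __getattr__(self, name):
            return getattr(self._conn, name)

In a real deployment the agent instruments the runtime rather than relying on a wrapper like this, so the application keeps calling its database driver exactly as before and the guard only intervenes when the reconstructed statement deviates from what the code intended.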

Importantly, the attack information is a much higher-order piece of security intelligence than a known vulnerability alone. Identifying the applications actually under attack, and therefore most at risk, allows remediation activities to be prioritized appropriately. If a vulnerability is never exploited and the application is protected via RASP, do you really need to spend the time and money remediating it?

Since no code changes to the application are required, this is an extremely low-risk way to protect applications while enabling development teams to do what they do best: build and innovate.
