Understanding Technical vs. Logical Vulnerabilities

On November 11, 2003, the chess-playing machine X3D Fritz tied grandmaster and former world champion Garry Kasparov in a four-game match. In this classic contest of Man vs. Machine, X3D Fritz performed so impressively that the match was heralded as a victory for artificial intelligence. X3D Fritz’s powerful play was achieved by calculating millions of moves per second, backed by gigabytes of stored positions. Each time Kasparov moved a chess piece, X3D Fritz would analyze the board, drawing upon its vast knowledge base to select the best possible move.

What do chess, the world’s most dominant computer chess machine, and Garry Kasparov have to do with Web application security?

For many years, security professionals have believed there would come a day when technology alone could identify all Web application vulnerabilities and prevent all attacks, eliminating the need for the Kasparovs of the world. What we’ve come to understand is that Web application security is a fundamentally different game than chess, or even network security. It’s highly unlikely that machines will ever completely replace man in the process of assessing Web site security. What’s important to understand is why.

Chess is a straightforward game. The board presents a finite number of legal moves and a limited number of end-game positions. With chess it is mathematically possible to calculate every move that may result from a given position, and every position “n” moves further into the future. Since the game itself is well defined and finite, albeit extremely large, the path to victory can be completely automated and followed precisely. Eventually computers will win at chess every time rather than settling for a tie.
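To make that idea concrete, the following is a minimal sketch of this kind of exhaustive look-ahead. The game used here is a trivial invented stand-in rather than chess, and its rules and scoring are assumptions made purely for illustration:

    # Minimal sketch of exhaustive game-tree search to a fixed depth "n".
    # The game is a hypothetical stand-in (players take turns adding 1, 2, or 3,
    # trying to land on 10), not chess; the point is only that a finite, fully
    # defined rule set can be searched mechanically.

    def legal_moves(position):
        # Hypothetical rules: from any position, a player may add 1, 2, or 3.
        return [position + step for step in (1, 2, 3)]

    def evaluate(position):
        # Hypothetical scoring: a position is worth 1 if it has reached exactly 10, else 0.
        return 1 if position == 10 else 0

    def search(position, depth, maximizing):
        """Enumerate every line of play `depth` moves ahead and pick the best outcome."""
        if depth == 0 or position >= 10:
            return evaluate(position)
        scores = [search(move, depth - 1, not maximizing) for move in legal_moves(position)]
        return max(scores) if maximizing else min(scores)

    print(search(position=0, depth=6, maximizing=True))

However simple, the point carries over: when the rules are finite and fully defined, a machine can enumerate every line of play and choose mechanically.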

Web sites are at the opposite end of the spectrum. They maintain an open-door policy with regard to user interaction, rarely follow Internet standards, and never operate the same way twice. Simple tasks such as shopping online or Web banking are implemented in drastically different ways, both functionally and architecturally, from one site to the next. Web application vulnerability scanners operate in a complicated environment where the end result of a process is anything but obvious.

Web application vulnerability scanners depend on the relative predictability of Web sites to identify security issues. Using a loose set of rules, scanners function by simulating Web attacks and analyzing the responses for telltale signs of weakness. From experience, we know how a Web site will normally react when a security issue is present. We know that if sending a Web site certain meta-characters produces a database ODBC error message, a SQL Injection issue has likely been detected. At WhiteHat Security we call these “technical vulnerabilities,” and scanners have become fairly proficient at identifying them. But as Web sites become increasingly sophisticated, yesterday’s telltale signs are today’s false positives. As a result, we are no longer guaranteed that a specific result indicates a security issue is present. This has made the automated process of finding simple vulnerabilities hard, and finding difficult ones impossible.
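As a rough illustration of this kind of signature matching, consider the sketch below. The payloads, error strings, and use of Python’s requests library are illustrative assumptions on our part, not a description of any particular scanner’s internals:

    # Minimal sketch of signature-based detection of a "technical" vulnerability.
    # Payloads and error strings below are illustrative assumptions only.
    import requests

    SQL_META_CHARACTERS = ["'", "\"", "';--"]
    ERROR_SIGNATURES = [
        "Microsoft OLE DB Provider for ODBC Drivers",
        "ODBC SQL Server Driver",
        "Unclosed quotation mark",
    ]

    def looks_like_sql_injection(url, param, baseline_value):
        """Send meta-characters in one parameter and look for telltale database errors."""
        for payload in SQL_META_CHARACTERS:
            response = requests.get(url, params={param: baseline_value + payload})
            if any(signature in response.text for signature in ERROR_SIGNATURES):
                return True   # telltale sign found -- likely SQL Injection
        return False          # no signature seen -- proves nothing by itself

    # e.g. looks_like_sql_injection("http://example/order.asp", "item", "50")

Note that both the True and the False outcomes are only hints; as described above, the same signatures that once signaled a vulnerability can just as easily be false positives on a modern site.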

Consider the following example. Suppose we visit a Web site and are presented with the following URL: http://example/order.asp?item=50&price=300.00

Can we guess what the application order.asp, combined with the parameters item and price, does? Using intelligence unique to humans, we can quickly deduce their purpose with relative certainty. This is a product-ordering application. The item parameter is the particular product we are interested in; in our case, let’s say an iPod. The price parameter is the amount we are going to pay for our portable music player. What happens if we change the price from 300.00 to 100.00? Or to 1.00? Does the Web site still sell us the iPod? If so, we can easily understand that the Web site should not have allowed the price alteration. As humans, we possess a natural ability to assess context, and we aptly refer to these types of issues as “logical vulnerabilities,” issues that only humans can identify.
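To make the tampering concrete, here is a minimal sketch of the requests a human tester might replay against the URL above (the use of Python’s requests library, and the idea of simply inspecting the responses by hand, are assumptions for illustration):

    # Replaying the order request with a tampered price -- a "logical" test.
    # Only a human who understands what `item` and `price` are *supposed* to mean
    # can judge whether the second response represents a vulnerability.
    import requests

    original = requests.get("http://example/order.asp", params={"item": 50, "price": "300.00"})
    tampered = requests.get("http://example/order.asp", params={"item": 50, "price": "1.00"})

    # A scanner sees two valid-looking HTML pages; it has no rule that says an iPod
    # should never sell for a dollar. The tester must read the responses and decide.
    print(tampered.status_code, len(tampered.text))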

Now, if an automated scanner attempted the very same attack in a generic fashion, how would it decide whether a custom Web site’s response was good or bad? How would it know if the attack worked or was adequately defended? Or what the item and price parameters were supposed to do in the first place? The answer is clear: scanners cannot reliably make these assumptions. The numbers in the URL could easily have meant something else entirely when presented in a different context. The rules for what is supposed to happen on Web sites are not defined as they are in chess. These decisions require contextual knowledge of the system, plus the ability to “logically” understand any number of previously undefined results.

In mathematics and computer science, this very large obstacle is commonly referred to as undecidability. An undecidable problem is a problem that cannot be solved for all cases by any algorithm (or computer program). Chess IS NOT an undecidable problem, since every position can, in principle, be enumerated and evaluated. Fully analyzing custom Web application software for vulnerabilities IS an undecidable problem. That is why the game of chess can be fully automated by a computer and identifying vulnerabilities in custom Web applications cannot be. There are unique aspects of the human mind that computers have yet to duplicate. WhiteHat’s statistics, based on aggregate data from thousands of assessments, indicate that only about half of the possible Web application security issues can be tested for in a completely automated fashion. The remaining tests, for logical issues, require the involvement of a Web application security Garry Kasparov.
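For readers who want the formal flavor of this claim, the usual argument, sketched here informally in LaTeX rather than quoted from any particular proof, leans on Rice’s theorem:

    % Informal sketch, not part of the original text.
    \textbf{Rice's theorem.} For any non-trivial semantic property $P$ of program behavior,
    the set $\{\,\langle M \rangle : \text{program } M \text{ has property } P\,\}$ is undecidable.

    \emph{Applied here (informally):} ``accepts some input that subverts the application's
    intended business logic'' is a non-trivial property of behavior, so no single algorithm can
    decide it for every custom Web application. Chess, by contrast, is a finite game: every
    position can in principle be enumerated, so best play is computable even though the game
    tree is astronomically large.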

In a thorough Web application security assessment, potentially hundreds of thousands of customized tests must be performed. By hand, even the world’s best experts would never be able to complete this much work in a feasible amount of time.

Similar to X3D Fritz, a truly enterprise-class vulnerability scanner greatly decreases the workload by performing the monotonous tasks that can be automated. Scanners are great at tackling technical vulnerabilities such as cross-site scripting and SQL injection, but not effective at identifying price list modification, credential/session prediction, insufficient authorization, and other logical vulnerabilities.

The industry has come to acknowledge that a comprehensive, consistent, and efficient testing methodology, backed by experienced security professionals, is a best practice for ensuring complete vulnerability coverage. To build such a program, our customers use WhiteHat Sentinel, a turnkey approach to continuous Web site vulnerability assessment and management.

While artificially intelligent computers like HAL 9000 may one day arrive, or someone may achieve the mathematical breakthrough of the century, for now technology alone is no match for the human mind.
