As there is a lot of data, the conclusions you reach will depend largely on what you are looking for in the first place. In this article I've documented some of the findings I feel may be of general interest.
I would like to preface this article by saying that it is human nature to find the bad news more interesting than the good news. In our tests, we saw many impressive consultants. We tested companies which acted professionally and competently throughout, and there are consultancies we admire and respect as a result of working either directly or indirectly with them. Many other companies could have set up the Sentinel program, and we don't place ourselves above our peers; it just so happened that it was Matta that was asked to do it. Typically, the clients who run Sentinel programs are either looking for a global Penetration Testing supplier - which Matta is not - or running internal accreditation schemes. Our reports have always been considered objective, and if we have something subjective to say, it goes on a separate page in the report, clearly marked as a subjective observation.
Looking at some findings, then: the first, and perhaps the most startling of all, is that every consultant who has gone through the test has found vulnerabilities with their tools which then failed to make it onto their final report.
We sniff and log all the network traffic during the test, and are often required to demonstrate to the vendor that they did indeed find the issue that was then absent from their report.
Clearly, there is a real problem with time-limited tests and the work required to sift through reams of unqualified data to separate the real issues from the false positives. Things just get missed. Importantly, at least in our tests, something has been left out on every occasion. Our tests are intense and time-limited, so perhaps a fair conclusion is that if the consultant is under similar pressure, whether internal or from the client, you should expect incomplete results.
In some cases, though, we also feel that the consultants would probably have missed the vulnerabilities regardless of the time limit. We believe that the output from some common tools is not easy to read for those with less experience. The second finding, and for me the most baffling, is that some consultants - a small minority, but enough to be significant - don't seem to read their briefing notes. This is really concerning. Each test we run has an engagement protocol: the vendor is given a set of briefing notes with key information, perhaps including login credentials for a web application, or a request to treat a database exactly as if it were a production system.
So whilst most consultants had no trouble executing the tests within these instructions, one consultant repeatedly crashed the database to get debug information - not something you would want to do on a production database! On a similar note, we heard a real-life story from a client in which a penetration tester had tried to drop a database to prove he had effected a compromise. Fortunately, due to mitigating factors, he was unable to drop it, but the client was less than happy, and I don't believe they required his services again.