Increasing the relevance and value of Penetration Testing (why don’t people fix issues?)

In an Agile world, pen testing has become something of a lumbering exception. Where the development process has matured and changed with changing business demand, pen testing has lagged behind in a world of static reports, out-of-date findings and limited successful remediation. If this sounds harsh, it is meant to, and I speak as a former pen tester and CHECK Team Leader, and a principal of a UK security consultancy which ran a very successful penetration testing team for nearly 20 years.

The reasons are many and varied. For the most part, I think the definition of success in the pen testing process is rarely agreed and often incorrect or incomplete. The pen testing company will usually think that a successful pen test is one in which all possible vulnerabilities are found and identified; the customer will often think that success means having the test completed and the report available to whatever interested party has demanded it; and regulators and the board will often simply want to know that it has been done. The real definition of success, of course, is that vulnerabilities are found and remediated in a timely fashion in order to support the operational security of the system in question. For most engagements – and I have seen this from both the customer and supplier perspective across thousands of tests – the engagement is seen to have ended at the point the report is delivered and, perhaps, a wrap-up meeting attended. In reality, this is only the start of the real work, and it is critical that the pen testing company remain engaged throughout the remediation and validation cycle.

The second issue is that maintaining stuff is boring and patching is a mug’s game. It’s a bit like when the tyre pressure warning light comes on in my car. It’s reasonably tedious to go and pump up the tyre; nothing dramatic seems to happen if I leave it for a while; and I also remember the occasion when I was diligent and my badly designed tyre valve snapped off while I was pumping the tyre up, leaving an unusable tyre and an afternoon of panic before a family weekend away. So, for the practitioner, the process of patching (or fixing a bad configuration) can be time-consuming, dull and risky, and in the meantime nothing bad seems to happen if the patch is not applied. We all know that this is a false impression, and that few things are more effective at reducing cyber security risk than an effective patching regime. Nonetheless, patching and other forms of security remediation are often neglected and de-prioritised, and while the importance of these activities is universally recognised, the practicalities – and, indeed, the psychology – of actually getting them done are rarely focused on and poorly understood.

Another issue is one of clarity and prioritisation of results. To briefly revisit the tyre analogy, all I have is a warning light and a number. I have no idea if this number represents a tyre that is about to fail or one that is slightly off optimum but can happily do a 200-mile journey. And since it is often the latter, I get accustomed to the warning light to the point where it ceases to have meaning even when it should. Think then of a pen test report running to hundreds of pages, identifying traffic-light-scored issues across hundreds of IP addresses. When this lands on the sysadmin’s desk, where do they start? Who decides the priority? Is it the low-hanging fruit first? The critical issues? Issues on critical systems (which, if they are critical, presumably have a limited maintenance window)? Where does this sit against the other tasks and priorities of the day? And how are these priorities managed in a larger environment where the work is delegated across teams? In a report where large numbers of findings are – in effect – given exactly the same priority, where does one start?
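To make the prioritisation problem concrete, here is a minimal sketch of one way to break those ties: weight the tester’s severity score by a business-assigned asset criticality, so that two findings with the same severity no longer rank equally. The field names, weights and example findings are all illustrative assumptions, not a standard methodology.

```python
# Hypothetical prioritisation sketch: rank pen test findings by combining
# the reported severity with the business criticality of the affected asset.
# All names, weights and data below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    title: str
    cvss: float             # severity as reported, 0.0 to 10.0
    asset_criticality: int  # business-assigned, 1 (low) to 5 (critical)

def priority(f: Finding) -> float:
    # Weight severity by how much the business cares about the asset,
    # so identical CVSS scores on different systems rank differently.
    return f.cvss * f.asset_criticality

findings = [
    Finding("10.0.0.5", "Outdated TLS configuration", 5.3, 5),
    Finding("10.0.0.9", "SQL injection in login form", 9.8, 2),
    Finding("10.0.0.7", "Default SNMP community string", 7.5, 4),
]

# Print the worklist in descending order of business-weighted priority.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.host}  {f.title}")
```

Note the effect: the SQL injection has the highest raw severity, but on a low-criticality asset it drops below a weaker finding on a business-critical system, which is exactly the kind of context a flat traffic-light report cannot express.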

And when one has started, how does one validate that the issue is indeed fixed? In an ideal world this will be part of the engagement, and the pen testing team will be called upon after remediation to validate that the issue no longer exists. In practice, though, this often doesn’t happen, for any number of reasons (usually the time taken to fix and the availability of said team). A shift in attitude is imperative: the pen testing company should be seen as part of the process until the issue is fixed, rather than taking its leave of the process when the report is complete.

Add to this the fact that, by the time issues are finally addressed and, perhaps, validated, the report is out of date and the environment has moved on, and we clearly have a process in need of urgent change if it is to stay relevant.


We need an agile pen testing process in which tests can be spun up on demand, the pen testers remain engaged throughout the fix and validation cycle, remediation can be clearly prioritised in a way that is relevant to the business, and the whole procedure is carried out in a manner that reverses the psychology of remediation, making it engaging and rewarding rather than risky and dull.

The answers therefore lie not only in a change of attitude, process and approach, but in toolset and presentation. It is not enough to change the process to encompass validation of fixes and pen tester engagement with this if the fixes still aren’t happening because of an unengaging 500-page report and the lack of an easy way to engage the testing team to validate results.

I am a firm believer that gamification and visualisation have a huge part to play here, especially if allied to a platform that can support the process and make engagement with all parties easy and immediate. Yu-kai Chou points to the benefits of gamification in industries that are extremely important but relatively mundane, and the same surely goes for tasks – patching being a prime example. If, in gamifying the process, we can make it engaging and rewarding, put it on a platform where clear visualisation of data allows for easy prioritisation, and further use that platform to engage all the players in the process, then I think we can make a huge leap forward in the relevance and value of pen testing.

Kevin Dowd
