It's aliiiiiive! Documenting for security as a process

There's something about the standard practice of how we document security assessments that's been bothering me for a while now.
If I had to summarize it, I'd say it's about treating the documentation of a security audit as a final product rather than an input to an ongoing process.

Let me illustrate what I mean.
Pretty much every report template I've ever been given has been structured around listing the vulnerabilities identified during the audit.
On some engagements, there was no expectation of a final report; instead, the work products were issues filed in a bug tracker. In these cases, the number and severity of issues filed were also used as a measure of productivity.

But when I document my security assessments, my process looks fairly different in that I also keep copious notes on coverage: What have I looked at (even if I didn't find anything there)? What were the attack vectors that I investigated, including the ones that didn't pan out? Why did they not pan out? What's the evidence that convinced me there's no issue there? Which parts of a source code base have I looked into, which parts did I examine extra-thoroughly, which ones did I only skim? What was my rationale for giving some parts of the code more attention than others? What features are there, how are they accessed, where are they implemented? Are there potential issues that are not a vulnerability right now but would instantly become a problem if certain features are added later (features that are likely to be added, based on what I know about the plans for the product)?
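
To make that a bit more concrete: if I had to force one of these coverage entries into a structured form, it might look roughly like the sketch below. This is purely illustrative - in practice my notes live in free-form notebooks - and the class name, field names, and example values are just my own shorthand for the questions above, not a standard or a tool I actually use.

    from dataclasses import dataclass, field

    # Illustrative sketch only: one "coverage entry" for one part of the system.
    @dataclass
    class CoverageEntry:
        component: str        # which part of the code base / system this covers
        depth: str            # "skimmed", "reviewed", or "deep dive"
        rationale: str        # why this part got more (or less) attention
        # Attack vectors tried, including the ones that didn't pan out.
        vectors_investigated: list[str] = field(default_factory=list)
        # Vector -> the evidence that convinced me it's a non-issue.
        ruled_out_because: dict[str, str] = field(default_factory=dict)
        # Not a vulnerability today, but worth revisiting if certain features land.
        future_concerns: list[str] = field(default_factory=list)

    # A made-up example entry, just to show the shape of the thing:
    entry = CoverageEntry(
        component="session handling",
        depth="deep dive",
        rationale="handles authentication tokens; changed heavily since the last release",
        vectors_investigated=["token replay", "session fixation"],
        ruled_out_because={"token replay": "tokens are bound to a server-side nonce checked in middleware"},
        future_concerns=["if a 'remember me' feature is added, token lifetime assumptions need a second look"],
    )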

All of this information is valuable, especially if you think of software/networks/IT infrastructure as living artifacts that keep changing and evolving. And as they change and evolve, so does their security posture. New features may eliminate old risks, and they may introduce new ones. The same is true of refactoring. Many software projects and systems I have reviewed have received several rounds of security audits over time, and each time I was given a project that I knew had been reviewed before, I was frustrated by how little information there was about those previous audits. Yes, sometimes I got to read the earlier reports and the vulnerabilities filed back then, but these capture just a tiny part of what the previous testers must have known about the project by the time they were done testing.

I don't blame the testers for this: such thorough documentation is usually not asked for, and it may even be indirectly discouraged. It does cost time to document an audit thoroughly, and that time could be used to hunt for more bugs, right? Which is all the more important since the number (and relative badass-ness) of vulnerabilities found is often treated as the most important measure of an assessment's productivity and value.
Which it is, if you're doing rubber-stamp security: Hey, we had an audit, it turned up this many vulnerabilities, we fixed them, bam. Case closed.

The thing is, I've become disillusioned with rubber-stamp security. I think it's a waste of time, and I try to avoid these kinds of projects if I can. My own understanding of a job well done on a security assessment has changed over time, and I now think of "lotsa cool bugs" as insufficient. I'm still working out how to incorporate this into my work and actually make it useful, though.
Frequently, it's not even clear where to put this "extra" documentation. I often ended up awkwardly tacking a "Coverage" section onto a report template that wasn't designed to incorporate such information, or filing "Informational" findings in bug trackers just to capture it somewhere. I wouldn't recommend the latter approach - it occasionally led to developers being confused or even pissed off because there were items in their issue tracker that weren't really work items for them. So, uh, maybe don't do that, I guess. On some occasions, I added a separate document to the actual report, but I'm not sure anyone ever used those.

There was one occasion where I handed off all my files to the client at the end of a lengthy source code audit - including the OneNote notebooks where I kept all my notes - and two years later, a tester who had been tasked with re-reviewing the same component made a point of writing me an email to let me know that my notes had really helped him with his audit. That was nice, but it was the only time this ever happened in fourteen years of doing security audits.
When a client recently started making coverage documentation a standard requirement for every review, I may have squealed with joy a bit.

That this is the exception rather than the rule is a bit of a shame, because taking a more thorough approach to documenting security assessments could accomplish several things:

  • It can reinforce the perception of security as an ongoing process around ever-changing artifacts. 
  • It can force the tester to approach their reviews more deliberately and methodically, which can be especially helpful for newcomers. 
  • It can create accountability, which can increase trust. 
  • It can increase transparency. This may be especially helpful if there weren't that many actual vulnerabilities - yeah, it happens. In this case, it also reduces the temptation of "padding" a report with weak findings just so you don't have to hand in what looks like an empty report. 
  • It can help testers learn from each other by making the process of conducting a security assessment more visible and explicit.

I'm aware that all of this may sound like it turns security audits from an exciting bug-hunting adventure into yet another boring engineering task, but eh. To me, it's more like this: I get to keep the bug-hunting adventure, but I also get to do what I can to support the massive and ongoing team effort that is building secure systems.
