
Is there an inverse to Boehm’s curve?


So Diana posited something really interesting a few months ago and I've been meaning to blog about it ever since; it took a while because (as you'll see) it involved creating graphics, and I'm no great shakes when it comes to artistry. So apologies for the delay in putting it out there, but I keep coming back to it.

Anyway, she and I were talking about application security testing, and specifically about why so many organizations fail to implement processes like threat modeling, security requirements analysis, source code evaluation, etc. as part of their development hygiene. After all, we know that testing earlier costs less, right? So if that's true, wouldn't organizations tend to naturally gravitate toward doing it even if they didn't necessarily understand exactly why it was cheaper? Seems like they might… Sometimes the wisdom of the crowd is a useful barometer of a larger relationship at work.

As background, most folks know about the precept (sometimes called Boehm’s Curve) that stipulates that the cost of fixing a defect increases over time as software goes through its natural development lifecycle.  Some speculate (as I think is the case) that it is non-linear — meaning, it is exponentially more expensive the longer the defect is allowed to persist.  Essentially, Boehm’s famous graph looks similar to this one below:

[Figure: Boehm's curve, showing the cost to fix a defect rising across the development lifecycle]

 

So you’ve probably only seen that like a million times, right?

Anyway, I want to be clear that there's no argument with the validity of this graph. I think it is demonstrably true, and better minds than mine have unpacked it time and again, evaluated it empirically, and done more justice to it than I ever could. That said, there are a few things this graph doesn't account for: for example, the cost of finding an individual defect. Meaning, this graph outlines the relationship between the cost to fix a defect once you find it and the time at which you do so. But what does it take to find it? Is that cost constant throughout the lifecycle? Or does it change over time?

Consider the methods that we have at our disposal to find defects across each phase of the lifecycle:

  • Requirements
    • Requirements analysis
  • Design
    • Threat modeling
  • Development
    • Code review/analysis
    • Unit testing
    • Fuzzing
  • QA
    • Structured testing
    • Mis-use testing
  • Production
    • Public scrutiny
    • Structured testing

Do you notice anything about this list when considered on the basis of cost alone? One thing that stands out to me (having priced application consulting services in past lives) is that the stuff at the beginning tends to be more expensive and time-consuming than the stuff at the end. Requirements analysis tends to be more expensive than threat modeling; threat modeling tends to be more expensive than source code testing; and so on. I'm not saying that's hard and fast, just that it "tends to be."

If that’s the case, could it be that there’s another property at work here that involves cost to discover?  If there were, it would certainly explain why people tend to do their testing when they do (even though they may not have thought about it that way.)  If we were to graph the expense of discovery on the same graph, maybe it might look something like this:

 

[Figure: discovery cost curve falling over the lifecycle, crossing the rising remediation cost curve]

In that model, the important factor wouldn't necessarily be the remediation curve or the discovery curve (assuming it is a curve); instead, it'd be the crossover point where they meet, since that would be the optimal balance between discovery cost and remediation cost. Interesting, right? Of course, demonstrating it empirically is a whole different matter… but I just thought this was worth throwing out there.
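To make the crossover idea a bit more concrete, here's a minimal Python sketch with entirely invented parameters: remediation cost is modeled as growing by a constant factor per phase, discovery cost as shrinking, and the "optimal" phase is simply the one where the combined cost bottoms out.

# Minimal sketch of the crossover idea, with made-up numbers throughout.
# Remediation cost grows per phase (the Boehm's-curve side); discovery
# cost shrinks per phase (the hypothesized inverse side).  The cheapest
# place to test, in this toy model, is where the total cost is lowest.
PHASES = ["requirements", "design", "development", "qa", "production"]

def remediation_cost(phase_index, base=1.0, growth=3.0):
    # Cost to fix a defect, assumed to grow by a constant factor per phase.
    return base * growth ** phase_index

def discovery_cost(phase_index, start=40.0, decay=0.5):
    # Cost to *find* a defect, assumed to shrink as cheaper techniques
    # become available later in the lifecycle.
    return start * decay ** phase_index

def total_cost(phase_index):
    return remediation_cost(phase_index) + discovery_cost(phase_index)

if __name__ == "__main__":
    for i, phase in enumerate(PHASES):
        print(f"{phase:>12}: find={discovery_cost(i):6.1f}  "
              f"fix={remediation_cost(i):6.1f}  total={total_cost(i):6.1f}")
    best = min(range(len(PHASES)), key=total_cost)
    print(f"Cheapest phase under these invented numbers: {PHASES[best]}")

Under those made-up parameters the minimum lands mid-lifecycle, which is exactly the kind of balance point the crossover graph above is gesturing at; change the assumed growth and decay rates and the sweet spot moves accordingly.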

<The views expressed are mine and do not necessarily reflect those of my employer.>

