SciCast: Toward a better model for prediction

So, it’s the new year again!  I know this because of the many corrections I’ve had to make when filling out anything with a “date” field on it.  So “welcome, 2014”: I’ll get used to calling you by name eventually.

And with the new year come the many analyst predictions about what 2014 might bring.  There’s a ton of them: a casual Google search at the time of this writing yields just over a quarter million hits.  For folks who are long-time followers of this blog, you know that I’ve historically been skeptical of these predictions.  Why the skepticism?  Because though we might be reasonably sure that something will happen, it’s very hard to know precisely when.  As anyone who regrets not having a flying car or a household butler android can tell you, very seldom are both the event and the timing correct.

It’s even harder when you’re looking at events that impact a particular industry.  The concept might be right, but so many variables influence when something is likely to occur, and it’s hard to account for them all ahead of time.  As an example of what I mean, remember when Gartner said IDS would be dead by 2005?  Or when McAfee said 2006 was the year of mobile malware?  It’s not that these predictions were “wrong” per se: IDS (at least the way we do it now) does in fact have a lifespan (and I can prove it, but will spare you the thousand-ish words to do so unless someone challenges me), and mobile malware is, in fact, an emerging (though as of now nascent) problem.  But did IDS die in 2005?  Was mobile malware a huge problem in 2006?  Nope.

But it’s a dilemma: in practice, predictions are very seldom accurate, yet ignoring them entirely isn’t an option.  Why not?  Because prediction is fundamental to the scientific process.  All scientists predict; they have to, not only because predictability is a fundamental component of testing a hypothesis under the scientific method (it goes to testability), but also because (I think) it’s a fundamental component of how we know things.  By this I mean that there’s an epistemological argument (epistemology = the philosophy of how we know what we know), which I happen to subscribe to, that we know something is true when we can predict an outcome.  A person in this camp might say, for instance, that we know gravity is true because we can predict the apple will fall from the tree (prediction), not because we can come up with some rationale to explain the falling after it happens (accommodation).  If you want to delve deeper into that concept, a relatively concise paper supporting this particular position would be this one; read it: it’s more approachable than the title suggests.

So, long story short: predictions are good, but historically our industry has been challenged to do them well.  In part I think this is because the mechanism by which we’ve made predictions has been pretty “loosey goosey”: there are literally hundreds of predictions out there, they rarely agree, and nobody is really tracking whether these things actually come to pass.  Meaning, somebody makes predictions (using as input almost solely their own point of view), people form opinions based on the subset of predictions they happen to be exposed to, and by the next year’s cycle most everyone has either forgotten what the last ones were or no longer cares.
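
As an aside, the “tracking” piece is the easy part to fix; the machinery for scoring probabilistic predictions after the fact has been around for decades.  Here’s a rough Python sketch using the Brier score (the example predictions and probabilities below are entirely made up by me for illustration, not taken from any actual forecast):

```python
# A minimal sketch of scoring year-end predictions after the fact.
# 0.0 is a perfect score; 0.25 is what you get for hedging everything at 50/50.

def brier_score(forecasts):
    """Mean squared error between predicted probability and outcome (0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical predictions: (probability assigned up front, what actually happened)
predictions = [
    (0.90, 1),  # "a major retail breach will make headlines"
    (0.70, 0),  # "mobile malware becomes a top enterprise problem"
    (0.20, 0),  # "IDS disappears from enterprise networks"
]

print(f"Brier score: {brier_score(predictions):.3f}")  # ~0.180
```

Do that consistently across a year’s worth of predictions and you at least have a number to argue about the following January.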

The problem with this cycle is that it ensures there’s little consensus or objectivity in the result.  I realize that sounds harsh, but it’s not intended to be a condemnation or criticism; it’s just my personal opinion on why things are the way they are.  But I think this could be on the cusp of changing, because we’re starting to see some tools emerge that could help us predict in an objective way.  Case in point: have you seen SciCast?  Check it out… Short story is that it’s a prediction market targeted toward science and technology.  Cool, right?  Unless you’re one of the highly cynical who speculate that, because it’s funded by IARPA, it carries some taint from the whole US intelligence debacle.
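
I won’t claim to know exactly what machinery SciCast runs under the hood, but prediction markets of this kind are commonly built on a market scoring rule such as Hanson’s LMSR, where the market maker’s prices double as consensus probabilities.  Here’s a rough Python sketch of that idea; the function names and the liquidity parameter b are my own illustration, not SciCast’s actual implementation:

```python
import math

# Logarithmic market scoring rule (LMSR) market maker, sketched for a single
# question with mutually exclusive outcomes.  q[i] is the number of shares of
# outcome i sold so far; b controls how quickly prices move as people trade.

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_prices(q, b=100.0):
    """Current prices, which read directly as the market's probabilities."""
    weights = [math.exp(x / b) for x in q]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(q, outcome, shares, b=100.0):
    """What a trader pays the market maker to buy `shares` of `outcome`."""
    new_q = list(q)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b)

# A yes/no question that starts out at 50/50.
q = [0.0, 0.0]
print(lmsr_prices(q))        # [0.5, 0.5]
print(trade_cost(q, 0, 50))  # cost of buying 50 "yes" shares (~28.1)
q[0] += 50
print(lmsr_prices(q))        # "yes" now trades above 0.5 (~0.62)
```

The nice property is that every trade moves the price, so the current prices are a running, auditable consensus rather than one person’s finger in the wind.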

A tool like this one opens up new possibilities for objective forecasting.  It’s a barometer of sorts: rather than one or two people’s “finger in the wind”, we now have a tool based on well-understood principles to provide feedback.  Is it perfect?  No.  Is it only one data point among many?  Sure.  But it’s objective, and it’s transparent.  Principles like these are the cornerstone of building a better understanding of forecasting.  So props to them.

In point of fact, tools like this open up avenues to do even more cool stuff.  For example, there’s a project underway that I’m particularly close to that requires objective and transparent data like this as input, which means SciCast makes it more achievable now than it was just a few months ago.  Now, it’s too early to spill too many beans about the specifics (though if you’re particularly curious, I’d encourage you to check out Bhavesh Bhagat’s teaser video up on YouTube alluding to some of the plans in the works), but suffice it to say that it’s more possible now because of this effort, and it’s probably not the only such project out there.

Anyway, I’ve written more than I wanted to about that today… so the TL;DR: be skeptical of the predictions, check out SciCast if you have some time, and watch where it takes us, since it’s opening new doors for predictions industry-wide.

<The views presented are my own and not necessarily those of my employer>

