No True Infosec…

Recent news coverage about West Virginia’s decision to use a smartphone blockchain voting system (plot twist: calling it “blockchain” might be a stretch) is causing a stir on social media amongst the infosec community.  This XKCD cartoon is a popular one:

This spawned a lot of thoughtful discussions and debates, and a fair number of ad hominem comments, per usual.  This was a particularly interesting thread (multiple levels deep):

And an equally interesting reply from Rob Graham:

There’s a lot covered in the layers of those threads, and they’re worth a read.  This got me to thinking about how cyber security fits into the world.  It seems that a lot of the struggle comes from attempting to find analogies for cyber security in other aspects of the world, like aviation, building codes, and war. Certainly, aspects of each of these apply, but none are a perfect fit. I previously wrote a little bit about the problem of comparing cyber security to kinetic concepts.

Designing software and IT systems is similar to, but not the same as, designing physical structures, and it can likely benefit from the concept of common standards.  The cyber domain can also learn from the continual improvement seen in the aviation industry, where failures are scrutinized and industry-wide fixes are implemented, whether the cause is something tangible, like a defective or worn-out component, or intangible, like the command structure of personnel in a cockpit.

So much of this seems immensely sensible.  But there are sharp edges to the concept.  As pointed out in the Twitter threads above, weather does not evolve to defeat improvements made to aircraft the way adversaries do in the cyber domain.  The same is true for many things in the kinetic world: buildings, elevators, fire suppression systems, and so on.  All are critical, and all need to follow certain standards to help reduce the likelihood someone will be hurt, though these standards often vary somewhat by jurisdiction.  In general, most of these things are not designed to survive an intelligent adversary intent on subverting the system.  That’s not completely true, though: we know that certain structures, such as skyscrapers, are often designed to withstand a certain level of malicious intent.  But only to a point, and ready examples should come to mind where this breaks down.

I’ve been thinking about the ways that threat actors affect physical systems (assuming no electronic/remote-access component), and I think the relationship looks approximately like this:

Here, the level of black indicates the strength of the linkage between the motivation and the proximity.  It’s not perfect, and I’m sure that if I think about it for a bit, I’ll come up with contradictory examples.

With regard to cyber issues, the “can be anywhere” column turns black at least for the malicious, terroristic, and war rows.  We simply don’t design our elevators, airplanes, or cars with the thought that anyone, anywhere in the world, is a potential threat actor.  Certainly that’s changing as we IoT-ify everything and put it on the Internet.
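Since the original figure isn’t reproduced here, here is a minimal sketch in Python of the kind of matrix I mean.  The motivation and proximity categories, and the shading values, are my own illustrative assumptions rather than anything definitive; the one point taken from the argument above is that connecting a system to the Internet darkens the “can be anywhere” column for deliberate adversaries:

```python
# A rough sketch of the motivation-vs-proximity idea described above.
# Every category name and "shade" value here is an illustrative
# assumption, not data from the original figure.

MOTIVATIONS = ["accidental", "opportunistic", "malicious", "terroristic", "war"]
PROXIMITIES = ["physical contact", "nearby", "can be anywhere"]

# shade[motivation][proximity]: 0 = little/no linkage, 2 = strong linkage
# (the "level of black" in the figure).  For purely physical systems,
# the "can be anywhere" column stays light across the board.
physical_world = {
    m: {"physical contact": 2, "nearby": 1, "can be anywhere": 0}
    for m in MOTIVATIONS
}

def cyberize(shade):
    """For Internet-connected systems, the 'can be anywhere' column
    darkens for deliberate adversaries: distance no longer constrains
    the threat."""
    adjusted = {m: dict(cols) for m, cols in shade.items()}
    for motive in ("malicious", "terroristic", "war"):
        adjusted[motive]["can be anywhere"] = 2
    return adjusted

if __name__ == "__main__":
    for name, table in [("physical", physical_world),
                        ("cyber", cyberize(physical_world))]:
        print(name)
        for m in MOTIVATIONS:
            row = "  ".join(str(table[m][p]) for p in PROXIMITIES)
            print(f"  {m:13s} {row}")
```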

So, all this is to say we spend too much time arguing about which analogies are appropriate.  In two hundred years, I assume that someone will analogize the complicated societal problem of the day to that of primitive cyber security, and someone else will complain that cyber security is close to, but not the same as, that modern problem.

It seems intuitive that we *should* look across many different fields, inside and outside IT, for lessons to learn, including things like:

  • Epidemiology
  • Aviation
  • Civil engineering
  • Architecture
  • War fighting
  • Chemistry
  • Sociology
  • Psychology
  • Law enforcement
  • Fire fighting

…but it’s naive to expect that we can apply what worked in these areas to the cyber security problem without significant adaptation.  Rather than bicker over whether software development needs a set of building codes, or whether we should apply the aviation disaster-response model to cyber security incidents, in my estimation we ought to be selecting the relevant parts of many different disciplines to create a construct that makes sense in the context of cyber security and all that it entails.

We have to accept that there *will* be electronic voting.  We have to accept that our refrigerators, toasters, toilets, and gym shoes *will* be connected to the Internet someday, if they aren’t already.  We don’t have to like these things, and they may scare the hell out of us.  But as the saying goes, progress happens one funeral at a time; someday, I will be gone.  My kids’ kids will be voting from their smartwatches.  Technological advances are an unrelenting, irreversible tide.  Life goes on.  There are big problems that have to be dealt with in the area of technology.  We need new ways to reduce the macro risk, but we must be cognizant that risk will never be zero.

I watch people I respect in the industry lying down on the tracks in front of the e-voting train, attempting to save us from the inevitable horrors to come.  Honestly, this has echoes of the IT security teams of old (and maybe of today) saying “no” to business requests to do some particular risky thing.  There’s a reason those teams said “no”: what the business was attempting to do was likely dangerous and held hidden consequences that weren’t apparent to the requester.  But over time, those teams were marginalized to the point where even people in the industry make jokes about the unhelpful “department of no” that IT security used to be.  The world moved on, and the department of no was (mostly) run out of town.  I don’t think we should expect a different outcome here.

While we are busy grousing about the validity of an XKCD cartoon, or whether building codes or aviation is the more representative model, companies like Voatz are off selling their wares to the government.
