The Trouble With Applying Kinetic Metaphors to Cyber

I was having a good debate with some of my Twitter friends that started off with this tweet:

At one point in the discussion, @JeffWilsontech brought up safes:

That got me thinking.  In the info/cyber security world, we draw a lot of comparisons to the physical world.  Who hasn’t seen this image in a security presentation?

As humans, we learn by building associations with things we already know.  Kinetic-space security concepts are relatively easy to grasp and there are intuitive relationships between security in cyber space and security in kinetic space.  For example, a firewall is like the walls of a building, and openings through the firewall are like doors and windows.

My observation is that the intuitiveness of this analogy can lead us astray when we think about IT security defenses, though.  For example, consider safes, as Jeff mentioned above.  Commercial safes carry a rating that denotes the amount of time they will resist attacks from picks and mechanical and electrical tools – usually less than an hour.  Breaking into a safe generally requires the adversary, even a knowledgeable one, to work through a time-consuming process.  The logical equivalent to think of here is some encrypted data, not network or system security.  To make the safe analogy match a cyber attack, we would have to imagine an attacker who, from anywhere in the world, can almost instantly teleport the safe to the business end of a 1000T hydraulic press in an evil lair’s machine shop.  The safe is immediately popped open by the press without regard for its security rating.
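Incidentally, the “time to defeat” idea does translate to encryption, and we can put rough numbers on it.  Below is a small back-of-the-envelope sketch in Python; the guess rate is an assumption I picked purely for illustration (a very well-funded brute-force rig), not a measured figure.  The point is only that encrypted data, like a safe, can carry a meaningful time rating, while a network or system facing the instant, remote attack described above generally cannot.

    # Back-of-the-envelope "time rating" for encrypted data, the closest cyber
    # analogue to a safe's resistance rating.  The guess rate is an illustrative
    # assumption, not a measured or vendor-supplied figure.
    GUESSES_PER_SECOND = 1e12          # assume a very well-funded brute-force rig
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def years_to_exhaust(keyspace: float) -> float:
        """Worst-case years to try every key or password at the assumed rate."""
        return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

    print(f"AES-128 key:                 {years_to_exhaust(2 ** 128):.1e} years")
    print(f"8-character lowercase pass:  {years_to_exhaust(26 ** 8):.1e} years")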

In the case of the walls being like a firewall, the building’s doors could all be locked from the outside, i.e. nothing is allowed in via the firewall.  However, people from all over the world are able to watch people coming and going from the building, and to hide in the briefcase of one of the building’s inhabitants while she is out for a walk.  Once the unsuspecting person is back inside the building, the intruders surreptitiously exit the briefcase and are now able to come and go from the building as they please.

These are pretty dull examples that I suspect are intuitive to most of you.  However, I see many people in the industry drawing parallels to kinetic-space constructs such as insurance, building codes, fire codes, and electrical codes as a means to improve security through regulation.  I am in the camp that believes security will not generally improve unless there is a regulatory obligation to do so.  The free market simply does not provide an incentive for organizations to either produce secure products or design and operate secure applications, systems, and networks.  The challenge with this approach is that it’s fundamentally incompatible with the current philosophy of IT system design and operation, and with the threats to those systems.  Fire codes, for example, define a pretty objective set of requirements that are able to address a broad swath of risks: sprinklers work pretty well at putting down a fire (almost) regardless of the cause.  Even so, there is a structured process behind developing and enforcing those requirements.

Electrical codes seem conceptually similar to IT: anyone with some amount of electrical knowledge can wire up a building, similar to how anyone with some IT abilities can create an IT system.  From here, though, the two diverge.  There is a pretty rigid set of electrical standards, typically based on the National Electrical Code.  Municipal fire and electrical codes do not allow for much “innovation” in the way that is practiced in the IT world.

The “Underwriters Laboratories” approach to cyber security seems intuitively sensible, but we have to remember that it will necessarily have a negative impact on innovation in the IT product market, which I know many will not see as a bad thing.  It also does not address the consumer/integrator side of the equation, which I argue is where much of the problem comes from.  Then there are complicated questions about things like open source software, the Raspberry Pi, and so on.

Pairing an “Underwriters Laboratories” approach with a “National Cyber Code” would seem to provide a more secure world, but it would come with a pretty steep cost.

A significant headwind against this approach is, well, the whole economy of existing producers and consumers of IT products and services, consulting companies, integrators, and so on.  We can’t discount the influence these entities have on the regulatory process, even if to varying degrees in different countries.  Even in countries with very progressive data protection laws, we can see the desire for regulations to provide latitude in IT.  The GDPR, which in my view is the only regulation with the *potential* to drive major changes in security, is quite abstract in its data security obligations:

  1. Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

    1. the pseudonymisation and encryption of personal data;
    2. the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
    3. the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
    4. a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
  2. In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.

  3. Adherence to an approved code of conduct as referred to in Article 40 or an approved certification mechanism as referred to in Article 42 may be used as an element by which to demonstrate compliance with the requirements set out in paragraph 1 of this Article.

  4. The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law.

The problem with wording such as this is that it doesn’t define what is actually needed to protect data – that is left to the data controller and data processor to decide, presumably so they can each continue to “innovate” in their respective IT programs.  Now, I also know one of the objectives of this particular wording is to help the regulation remain relevant over time.  A regulation that is too specific may quickly become out of date due to the emergence of new threats.

Finally, another challenge I see with the “UL + NEC” approach is that the prescriptive IT regulations that do exist, such as HIPAA and PCI DSS*, haven’t proven to be very effective at protecting data: organizations that have been certified as compliant still end up being breached occasionally.  Then again, we see structures with NEC-compliant electrical systems and UL-reviewed appliances burn down periodically, too.

It seems to me that another component of the solution, at least for protecting consumer data, is to limit the use of such data, as the GDPR also does, and to reduce the value of that data.  In economic terms, that means hitting both the supply side (data-hoarding companies) and the demand side (data-thieving criminals).  For example, credit card data is valuable because it can be used to steal money.  What if we changed the way credit worked so that it did not rely on a static number that can be stolen?  Clearly that can’t work for everything, but sucking the value out of stealing personal data would go a long way.
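To make the “static number” idea a bit more concrete, here is a minimal sketch, in Python, of what non-static credit could look like: the issuer mints single-use tokens bound to one merchant, amount, and expiry, so a stolen token is worthless for any other transaction.  The token format, field names, and HMAC construction are hypothetical illustrations of the concept, not any real payment network’s protocol.

    # Hypothetical sketch: replace a static card number with single-use,
    # transaction-bound tokens.  Nothing here reflects a real payment network.
    import hashlib
    import hmac
    import secrets
    import time

    ISSUER_KEY = secrets.token_bytes(32)   # known only to the card issuer
    _seen_nonces = set()                   # issuer-side replay protection

    def issue_token(account_id: str, merchant: str, amount_cents: int,
                    ttl_seconds: int = 300) -> dict:
        """Mint a token that authorizes exactly one payment, then expires."""
        nonce = secrets.token_hex(8)
        expires = int(time.time()) + ttl_seconds
        msg = f"{account_id}|{merchant}|{amount_cents}|{expires}|{nonce}".encode()
        mac = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
        return {"merchant": merchant, "amount_cents": amount_cents,
                "expires": expires, "nonce": nonce, "mac": mac}

    def redeem_token(account_id: str, token: dict, merchant: str,
                     amount_cents: int) -> bool:
        """Issuer-side check: right transaction, not expired, never used before."""
        if token["nonce"] in _seen_nonces or time.time() > token["expires"]:
            return False
        msg = (f"{account_id}|{merchant}|{amount_cents}|"
               f"{token['expires']}|{token['nonce']}").encode()
        good = hmac.compare_digest(
            token["mac"], hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest())
        if good:
            _seen_nonces.add(token["nonce"])
        return good

    # A stolen token buys the thief nothing beyond the one purchase it was minted for.
    t = issue_token("acct-123", "example-merchant", 2500)
    assert not redeem_token("acct-123", t, "other-merchant", 2500)    # wrong merchant
    assert redeem_token("acct-123", t, "example-merchant", 2500)      # the one valid use
    assert not redeem_token("acct-123", t, "example-merchant", 2500)  # replay rejected

The mechanism itself matters less than the effect: once a credential only works for one transaction, bulk theft of “card numbers” stops being profitable.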

* Yes, I know PCI DSS is not a regulation.


2 thoughts on “The Trouble With Applying Kinetic Metaphors to Cyber”

  1. I absolutely agree with you that meatspace analogies are of limited use in discussing IT security or, more broadly, tech policy with people who are deep in tech or in policy.

    We were talking with another chap and he mentioned that consumers & users of technology products and services suffer from an information asymmetry problem when it comes to determining the security bona fides of a product or service. I totally agree; for any given productized ‘stack’ in corporate or consumer IT, there are no human-readable norms, standards, certifications or Surgeon General’s warnings on the product which might help the buyer make an informed decision. I like to think of this as a Padlock vs Purses problem: before transacting business with a remote host, you and I know to look for a green padlock in the URL bar, which proves our data is secure in transit and validates the remote host’s identity. But all my mom sees is a strange green Purse.*

    There I go again with the metaphors/similes and analogies! Dammit, sorry about that.

    The point is, info asymmetry is definitely part of the problem for IT buyers*, so when Fernando brought it up, I thought of my cool how-long-it-takes-to-crack-my-kinetic-safe-certificate example as one way to solve it. The idea would be that industry or government establish some kind of certification program that denotes the security durability of a technology product or service… knock-off Android tablets from China running Android KitKat in 2017 get a big fat red F- on the box (Time to Pwn: Less than 1 day!) while an iPad gets an A+ (Time to Pwn: >30 days!). I know that’s silly, but the idea is to make it stupid simple for the buyer to select the better product, even if it’s a bit more expensive. Of course such a label wouldn’t fully disclose the true security nature of the product or the various ways its user could be tricked into doing insecure things, but it doesn’t matter. Step 1 is to assign some sort of grade or value to insecure products and make the buyer aware of it so that s/he makes an informed decision.

    This includes IT, by the way. There should be a Surgeon General-style warning on any storage array that gives its buyers root access to the array! I shouldn’t have root access to such devices; I want that function abstracted as much as possible so I can get busy bringing value to the organization instead of dicking around in the CLI. But there’s no such warning on these products currently; I buy NetApp, I get root. I buy Nimble, I don’t get root… who follows this stuff? Nerds like me and bad guys.

    I’m cognizant of the concern that we’ll harm innovation by regulating security… hence my preference for light-touch / market-based regulation. The best light-touch government regulations involve putting prices on bad behaviors we don’t like as a people/society. We fixed acid rain in this way (capping the amount of SO2 factories could release over time + a market to trade SO2 pollution credits) and that was great. Acid rain is gone and we still have factories. My preference would be to find some way to put a cost on the bad behaviors/practices that result in my family’s data getting traded and sold both on the dark web and by giant multinationals.

    Of course, as an IT pro, I’d also love it if the NSA IAD best practices guide (or the Aussie equivalent) had the force of law so I could push back against bosses who don’t value security practices, but those guides carry no such weight.

    To your point about the operator-as-a-weak-link: much of that problem is solved if the operator doesn’t get root access, right? Least Privilege security will be on my tombstone baby 🙂

    * and Information Asymmetry is a traditional marker of market failure
    * I totally ripped off this padlock/purse metaphor from a podcast, but can’t remember whose

    P.S. How can you adopt the Pentagon-esque term ‘kinetic’ but throw shade at ‘kill chain’?

  2. I was going to use “meat space” instead of kinetic, but it just didn’t feel right. I don’t know what my aversion to kill chain is – I think it’s like an irrational dislike of certain words, such as “moist”. It’s probably more than that, though: I think that, like many frameworks, it oversimplifies things to a fault.

    I think that is the crux of my crankiness: everything gets simplified to a point where any usefulness has been squeezed out, and people end up doing what they want anyhow.
