I was having a good debate with some of my Twitter friends that started off with this tweet:
At one point in the discussion, @JeffWilsontech brought up safes:
That got me thinking. In the info/cyber security world, we draw a lot of comparisons to the physical world. Who hasn’t seen this image in a security presentation?
As humans, we learn by building associations with things we already know. Kinetic-space security concepts are relatively easy to grasp and there are intuitive relationships between security in cyber space and security in kinetic space. For example, a firewall is like the walls of a building, and openings through the firewall are like doors and windows.
My observation, though, is that the intuitiveness of this analogy can lead us astray when we think about IT security defenses. For example, consider safes, as Jeff mentioned above. Commercial safes carry a rating that denotes how long they will resist attack from picks and mechanical and electrical tools – usually less than an hour. Attacks on safes generally require the adversary, even a knowledgeable one, to work through a time-consuming process to break in. The logical equivalent to think of here is encrypted data, not network or system security. For an equivalent attack on a safe, we would have to imagine an attacker, located anywhere in the world, who can almost instantly teleport the safe to the business end of a 1,000-ton hydraulic press in an evil lair’s machine shop. The press immediately pops the safe open without regard for its security rating.
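To make the rating comparison concrete, here is a minimal Python sketch of what a time-based “rating” for encrypted data looks like. The guess rate is a made-up assumption for illustration, not a benchmark of any real attacker:

```python
# A toy calculation of the "safe rating" analogy for encrypted data:
# how long a brute-force attack on a key takes at a given guess rate.
# The rate below is an assumption for illustration, not a measurement.

GUESSES_PER_SECOND = 1e12  # hypothetical attacker capability

def brute_force_years(key_bits: int) -> float:
    """Expected years to search half the keyspace for a key of key_bits bits."""
    keyspace = 2 ** key_bits
    seconds = (keyspace / 2) / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

for bits in (40, 56, 128):
    print(f"{bits}-bit key: ~{brute_force_years(bits):.3g} years on average")
```

Unlike a safe’s sub-one-hour rating, the “rating” of a modern 128-bit key is measured in quintillions of years, and there is no hydraulic press to teleport the ciphertext to.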
In the case of walls being like a firewall, the building’s doors could all be locked from the outside, i.e. nothing is allowed in through the firewall. However, people from all over the world can watch who comes and goes from the building, and one of them hides in the briefcase of an inhabitant who is out for a walk. Once the unsuspecting person is back inside, the intruder slips out of the briefcase unnoticed and can now come and go from the building as they please.
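In network terms, the briefcase is any connection initiated from the inside. Here is a toy Python model (not a real firewall; the names and rule logic are invented for illustration) of why a stateful firewall that denies all inbound connections still passes the reply half of a flow that started from within:

```python
# Toy model of a stateful firewall with "all doors locked from outside":
# inbound connections are denied, but reply traffic for any flow that was
# initiated from the inside is allowed through.

established = set()  # flows initiated by inside hosts

def outbound(src: str, dst: str) -> bool:
    """An inside host opens a connection out; allowed in this model."""
    established.add((src, dst))
    return True

def inbound(src: str, dst: str) -> bool:
    """Outside traffic passes only as a reply to an inside-initiated flow."""
    return (dst, src) in established

print(inbound("evil.example", "laptop"))   # False: the locked door holds
outbound("laptop", "evil.example")         # the victim "goes for a walk"
print(inbound("evil.example", "laptop"))   # True: the reply channel is open
```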
These are pretty dull examples that I suspect are intuitive to most of you. However, I see many people in the industry drawing parallels to kinetic-space constructs such as insurance, building codes, fire codes, and electrical codes as a means to improve security through regulation. I am in the camp that security will not generally improve unless there is a regulatory obligation to do so. The free market simply does not provide an incentive for organizations to produce secure products or to design and operate secure applications, systems, and networks. The challenge with this approach is that it’s fundamentally incompatible with the current philosophy of IT systems design and operation, and with the threats to those systems. Fire codes, for example, define a fairly objective set of requirements that address a broad swath of risks: sprinklers work pretty well at putting down a fire (almost) regardless of the cause. Electrical codes seem conceptually similar to IT: anyone with some amount of electrical knowledge can wire up a building, much as anyone with some IT ability can create an IT system. From here, though, the two diverge. There is a fairly rigid set of electrical standards, typically based on the National Electrical Code, and municipal fire and electrical codes do not allow for much “innovation” in the way it is practiced in the IT world.
The “Underwriters Laboratories” approach to cyber security seems intuitively sensible, but we have to remember that it will necessarily dampen innovation in the IT product market (which I know many will not see as a bad thing), and it also does not address the consumer/integrator side of the equation, which I argue is where much of the problem comes from. Then there are complicated questions about things like open source software, the Raspberry Pi, and so on.
Pairing an “Underwriters Laboratories” approach with a “National Cyber Code” would seem to make for a more secure world, but it would come at a pretty steep cost.
A significant headwind against this approach is, well, the whole economy of existing producers and consumers of IT products and services, consulting companies, integrators, and so on. We can’t discount the influence these entities have on the regulatory process, even if to varying degrees in different countries. Even in countries with very progressive data protection laws, we can see the desire for regulations to provide latitude in IT. The GDPR, which in my view is the only regulation with the *potential* to drive major changes in security, is quite abstract in its data security obligations (Article 32):
1. Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:
   - the pseudonymisation and encryption of personal data;
   - the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
   - the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
   - a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
2. In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.
3. Adherence to an approved code of conduct as referred to in Article 40 or an approved certification mechanism as referred to in Article 42 may be used as an element by which to demonstrate compliance with the requirements set out in paragraph 1 of this Article.
4. The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law.
The problem with wording like this is that it doesn’t define what is actually needed to protect data; that is left to the data controller and data processor to decide, presumably so each can continue to “innovate” in their respective IT programs. Now, I also know that one objective of this particular wording is to help the regulation remain relevant over time: a regulation that is too specific may quickly become outdated as new threats emerge.
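To illustrate how much is left to the controller, here is one possible reading, and only a sketch, of the “pseudonymisation” measure named above: replace a direct identifier with a keyed hash, so the data set no longer names the person while the key holder can still re-link records. The key handling and record layout are invented for the example, and nothing in the regulation says this particular construction suffices:

```python
# An illustrative (not GDPR-endorsed) pseudonymisation sketch: a keyed
# hash replaces the raw identifier; only whoever holds the key can
# recompute the mapping and re-link records to a person.
import hashlib
import hmac

PSEUDONYM_KEY = b"keep-this-secret-and-stored-separately"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "book"}
record["email"] = pseudonymise(record["email"])
print(record)  # the stored record no longer contains the raw identifier
```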
Finally, another challenge I see with the “UL + NEC” approach is that the prescriptive IT regulations that do exist, such as HIPAA and PCI DSS*, haven’t proven very effective at protecting data: organizations certified as compliant still end up being breached. Then again, structures with NEC-compliant electrical systems and UL-reviewed appliances burn down periodically, too.
It seems to me that another component of the solution, at least for protecting consumer data, is to limit the use of such data, as the GDPR also does, and to reduce the value of that data: in economic terms, hitting both the supply side (data-hoarding companies) and the demand side (data-thieving criminals). For example, credit card data is valuable because it can be used to steal money. What if we changed the way credit worked so that it did not rely on a static number that can be stolen? Clearly that can’t work for everything, but sucking the value out of stolen personal data would go a long way.
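As a sketch of what that might look like, conceptually similar to the per-transaction cryptograms EMV chip cards already generate, here is a toy Python example in which every payment produces a different one-time code, so a stolen static number is worthless on its own. The key, counter handling, and message format are all invented for illustration:

```python
# Toy per-transaction payment code: instead of a reusable static number,
# the card signs (counter, amount) with a per-card secret. A thief who
# captures one code cannot replay it, because the issuer tracks the counter.
import hashlib
import hmac
import struct

CARD_KEY = b"per-card-secret-provisioned-by-issuer"  # hypothetical key

def transaction_code(counter: int, amount_cents: int) -> str:
    msg = struct.pack(">QQ", counter, amount_cents)
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()[:8]

# Each payment yields a different code, even for the same amount.
print(transaction_code(41, 1999))
print(transaction_code(42, 1999))
```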
* Yes, I know PCI DSS is not a regulation.