Concentration of Risk and Internally Inconsistent Regulations

Concentration of Risk

Last week, I wrote a post on why we shouldn’t feel bad for cyber insurance carriers, because they do a pretty good job of limiting their obligations in the event of a loss.  Since then, I was reminded of a story about concentration risk among some insurance providers in Florida, and started wondering whether a similar challenge is likely to evolve in the cyber insurance arena.  The problem in the Florida case, as far as I can recall from the story, was that a particular carrier insured many houses in Florida.  During a particularly bad year for hurricanes, the carrier went bankrupt because it wasn’t able to cover the claims from its many customers.

As I understand it, larger carriers mitigate this risk by maintaining a broad portfolio of insured clients across geographically distributed areas, and limit exposure to any given area through rate adjustments.  Insurance only works when there are many more people paying in than cashing out through claims.  Of course, there are reinsurers who can also help cover losses for a carrier, but they, too, have to monitor their risk concentrations.

Hopefully it’s obvious where I’m going here.  In the cyber security world, risk is concentrated not along geographic boundaries but along technical ones.  I suspect that the cyber actuaries at insurance providers already have this problem well in hand, but I can see that it will be difficult to avoid risk concentration in cyber insurance, particularly given the monocultures we have in technology, which can lead to events such as WannaCry and NotPetya rapidly causing massive damage across geographic and industry lines.  While cyber insurance carriers may be able to limit their exposure on any given policy through liability caps, if all or most of a carrier’s clients need to make claims simultaneously, we will likely see something similar to the Florida hurricane situation.  From a macro perspective, it seems to me that cyber insurance carriers may be better served by focusing on this technology monoculture problem than on monitoring the controls of any given organization.
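To make the arithmetic behind this concrete, here is a toy simulation (all figures are hypothetical, chosen only for illustration). It contrasts a diversified book, where each insured's loss is independent, with a monoculture book, where a single shared event (think a wormable flaw) triggers claims from every policyholder at once:

```python
import random

PREMIUM = 1_000    # hypothetical annual premium per policy
CLAIM = 50_000     # hypothetical payout per claim
POLICIES = 10_000
RESERVES = POLICIES * PREMIUM * 5   # five years of premiums banked
P_LOSS = 0.01      # 1% annual chance any one insured suffers a loss

def year_of_claims(correlated: bool) -> int:
    """Count claims in one simulated year."""
    if correlated:
        # Monoculture: one shared event either hits everyone or no one.
        return POLICIES if random.random() < P_LOSS else 0
    # Diversified: each insured's loss is an independent coin flip.
    return sum(random.random() < P_LOSS for _ in range(POLICIES))

def solvent_after(years: int, correlated: bool, seed: int = 1) -> bool:
    """Collect premiums, pay claims; report whether reserves survive."""
    random.seed(seed)
    cash = RESERVES
    for _ in range(years):
        cash += POLICIES * PREMIUM - year_of_claims(correlated) * CLAIM
        if cash < 0:
            return False
    return True
```

With these numbers, the diversified carrier expects about 100 claims a year (5M in payouts against 10M in premiums) and comfortably stays solvent, while a single correlated catastrophe year costs 500M, far beyond reserves plus a year of premiums. The per-policy odds are identical in both cases; only the correlation differs, which is the point.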

Internally Inconsistent Regulations

The GDPR has been on my mind a lot lately.  The GDPR mandates that the personal data of EU citizens be both properly protected and accessible.  Exactly what that means is left to the Data Controller and Data Processor to guess, presuming that the data protection authorities will tell them, in the form of a potentially existential fine, if they guess wrong.  The recently disclosed Meltdown and Spectre CPU speculative execution “vulnerabilities” raise an interesting paradox: it may not be possible for an organization to both protect the data AND ensure it is available, at least on certain timelines.  I certainly don’t believe that Meltdown or Spectre create this situation, but they highlight that the situation could exist.

The vendor response to Meltdown and Spectre has been disastrous: vendors are releasing patches, only to later retract them; some software is incompatible with fixes from other vendors; some vendors recommend not installing other vendors’ fixes; and so on.  And there is no meaningful recourse for technology users beyond waiting for software fixes that potentially cause significant performance impacts and/or random system reboots: hardware vendors have yet to release new processors that fully fix the problem.

If Meltdown and Spectre were more serious vulnerabilities, in terms of the potential risk to personal data, how would Data Controllers and Data Processors balance the risk between the dual mandates of security and availability in the GDPR, with such a potentially significant stick hanging over their heads?  The “regulatory doves”, no doubt, will respond that “regulators would take a measured view in any kind of enforcement situation”.  Maybe they would, and maybe they would decide that an organization chose badly when determining whether to shut off its systems to protect the personal data, or keep them running (and the data vulnerable).

Hopefully this remains in the domain of an academic thought experiment; however, with the trend line of EternalBlue, Intel AMT, Intel ME, and now Meltdown/Spectre, it seems highly likely that some organizations will face this decision in the not-so-distant future.
