Thoughts on Incentives Driving Bad Security Behavior

Harry Truman once said, “Give me a one-handed economist! All my economists say, ‘on the one hand…on the other hand…’”  I am quite frustrated by the state of affairs in the IT world, and I gave a presentation on it at the Tactical Edge conference in Colombia.  (Note: this was my first conference presentation and I consider my thoughts on the matter to be half-baked at best right now, so adjust expectations appropriately.)  The premise of my presentation is that we need more sophisticated IT and IT security people who are able to effectively understand and communicate risk.  In particular, I believe that many IT people, and even many IT security people, do not have imaginations sufficient to envision the ways things can fail and the extent of the harm that can result.

Since giving the presentation, I’ve talked with many people about my ideas and their experiences in various organizations, and I’m beginning to realize that there may not be much desire to improve the situation.  In many organizations, performance is judged by accomplishments and efficiency rather than by some obscure, hard-to-measure thing like “security”.  Spending too much time assessing risks slows progress on important projects, and trying to account for the many “bad things” that can happen to a system, but probably won’t, is not efficient.  The situation is a gamble: accepting the perceived remote possibility of a bad thing happening in exchange for avoiding the certain consequences of not meeting business objectives.  Viewed through this lens, the less IT managers know about risks, the better off they are, since ignoring a known risk moves a person out of the realm of “ignorance” and into the realm of “negligence”.

I’ve had this view that we collectively want and try to improve, but now I am not so sure.

If you’re interested, here is the video and the slides from my presentation.  I am going to make some significant updates to the content in the coming months, as well as improve my presentation abilities, and hopefully deliver this again at some upcoming conference.

Video:

Slides:


More Effective Security Policies

On my evening walk with my best friend, I pondered the disconnect between security policies and security outcomes.  Every organization I’m aware of has well-intentioned security policies that enumerate important security objectives, for example the maximum amount of time allowed to apply security patches to systems and applications.

My hypothesis is that information security staff believe IT and business teams will translate those policy requirements into functional requirements when developing systems, while IT and business teams are actually developing systems against a set of business objectives.  Hard security requirements, such as minimum password lengths, usually do end up in IT’s design requirements, whereas process-based security requirements are either left out or not included in any meaningful way.

Let’s consider patching.  For the sake of argument, let’s assume our security policy mandates applying high-severity patches on Internet-facing systems within 48 hours.  What does the IT team do with this requirement when developing a new system?  Likely not much.  Patching is an operational process, not a requirement on how systems are built.  All systems need to be patched, they all can be patched, we have teams to apply patches, good enough.

In practice, though, I don’t know of a single organization that doesn’t struggle with applying patches on time.  There aren’t enough people, or the system can only be taken down once a quarter and never between Thanksgiving and Christmas, and so on.  This made me wonder: would adding policy requirements that enumerate operational expectations, in addition to traditional security objectives, help this situation?

For example, rather than a policy that says:

“Apply high-severity patches to Internet-facing systems within 48 hours”

We instead have one that says:

“Internet-facing systems must be designed in a manner that enables applying high-severity patches within 48 hours, including during change freezes.

Support teams must be appropriately staffed to consistently perform patch testing and apply patches on all relevant Internet-facing systems within 48 hours.

Operational processes, including change management, must support the 48-hour patching requirement for Internet-facing systems.

Appropriate test environments must be maintained to support the 48-hour patching requirement for Internet-facing systems.”

This is a simplistic example, but it provides objective requirements that can be included in IT development plans.
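Taking the idea one step further, operational expectations like these could be verified continuously rather than audited once a year.  Here is a minimal sketch, in Python, of what such a check might look like; the system names, dates, and data source are entirely made up for illustration, and in reality the inventory would come from a vulnerability scanner or CMDB:

```python
# Hypothetical illustration: treating the 48-hour patching expectation as a
# measurable, recurring check rather than a sentence in a policy document.
from datetime import datetime, timedelta

PATCH_SLA = timedelta(hours=48)

# Hard-coded purely to show the shape of the check; real data would come
# from a scanner or CMDB.
internet_facing_systems = [
    {"name": "web-proxy-01", "patch_released": datetime(2017, 11, 1, 9, 0),
     "patch_applied": datetime(2017, 11, 2, 14, 0)},
    {"name": "mail-gw-01", "patch_released": datetime(2017, 11, 1, 9, 0),
     "patch_applied": None},  # still unpatched
]

def sla_violations(systems, now):
    """Return systems where a high-severity patch exceeded the 48-hour SLA."""
    violations = []
    for s in systems:
        applied = s["patch_applied"] or now
        if applied - s["patch_released"] > PATCH_SLA:
            violations.append(s["name"])
    return violations

print(sla_violations(internet_facing_systems, now=datetime(2017, 11, 5, 9, 0)))
# ['mail-gw-01']
```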

Does anyone do such a thing?  Has it helped?

Thoughts On Cyber Insurance and Ponemon Surveys

Insurance

I read this post last week about expectations that cyber insurance will shape the future of cyber security.  At one point, I held the same view: there is a strategic advantage for a company, or an insurer, in developing optimized models of cyber security investment.  I’ve come to accept that, as with forecasting the weather, there are just too many variables in IT for such a construct to take hold, at least for the foreseeable future.  Reports I read about cyber insurance typically dwell on three things:

  1. How does the nebulous concept of “loss” from a cyber security event get calculated?  Some losses can be huge.  Where does the line get drawn?
  2. Partly because of #1, and partly because of the difficulty in predicting the rate at which cyber incident-related losses will happen, cyber insurance carriers are very likely carrying a lot of risk.
  3. Insurance companies are going to drive security discipline in their clients through variable rates, or withholding coverage, based on the clients’ hygiene.

Worry over the health of cyber insurance companies due to the perceived dual unknowns of loss magnitude and frequency seems misplaced, because insurance carriers do not offer uncapped damages in their policies, at least none that I am aware of.  Indeed, the caps are relatively low, and the premiums are quite expensive.  Given that, insurers shouldn’t need to define loss rates or magnitudes with great precision to avoid losing money overall – they just need to make an assumption about how many clients will file a claim in a given year and set premiums accordingly.  Beyond that, carriers cover themselves through their own policies with reinsurance carriers.
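A back-of-the-envelope sketch of that argument, with numbers invented purely for illustration:

```python
# Back-of-the-envelope illustration of capped-policy premium math.
# All numbers are invented; real actuarial models are far more involved.
clients = 1_000          # assumed policyholders
claim_rate = 0.05        # assume 5% file a claim in a given year
policy_cap = 1_000_000   # assumed payout cap per claim, in dollars

worst_case_payouts = clients * claim_rate * policy_cap   # every claim hits the cap
breakeven_premium = worst_case_payouts / clients          # = claim_rate * policy_cap
print(breakeven_premium)  # 50000.0 -- price above this, plus expenses and margin
```

The point is simply that with a hard cap, the carrier’s exposure is bounded by claim frequency and the cap, not by the open-ended question of what a breach “really” costs the victim.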

Where things will get interesting is when/if the cyber insurance market becomes highly competitive and carriers are competing on premium rates.  I expect that we will indeed see carriers trying to drive the hygiene of their clients, and to some extent we are seeing this already through various partnerships between cyber security companies and some insurers, though I suspect that is more a statement about the smooth sales pitches of those security firms than about a significant need of the carriers.  In the US, at least, Progressive Insurance offers customers a discount on auto insurance if they are permitted to monitor the driver’s behavior through a device that plugs into the car’s OBD-II port.  I can see a cyber equivalent, though I am not sure exactly what form it will take.  And to be honest, I am not sure how helpful the data would be in measuring the likelihood of a company being breached.

Net: don’t cry for insurance companies; don’t expect insurance companies to deliver the IT industry from our breachy ways.

Surveys

I saw several posts this week about Ponemon’s latest survey on data breaches.  I still contend that these surveys are not very helpful for prioritizing security programs because they are not statistically valid; they are backward-looking, subjective opinion surveys.  So why do people pay Ponemon to do them?  It hit me earlier this week: Ponemon reports, and reports like them, are not intended for infosec people.  They are intended to help infosec vendors understand the buyers of their wares.  I suspect most of the rest of the world already recognized this, but I am not always the sharpest knife in the drawer.

Random Thoughts From The O’Reilly Security Conference 2017

I had a chance to attend the O’Reilly Security Conference earlier this week.  I find that when I am at these conferences, I get into a mode of thinking that is more open and creative.  Here are some random thoughts I noted during the conference which I may write more about in the near future:

  • The somewhat unspoken theme of the conference – at least several of the keynotes – was on reducing the friction of security to the point where, hopefully, doing some given task the desired “secure” way is easier and/or faster than doing it some other way.  I really like that concept, but I think it likely requires talent and investment that a lot of companies don’t have available.  A great example was one of the presenters discussing how their company’s security team modified operating system libraries to implement a more streamlined user experience for logins.  Great in concept, but I suspect that idea doesn’t scale down very well to organizations that don’t have that kind of talent or ability to manage such customized code.
  • When I go to a security conference, I have, let’s say, 99 security problems.  By the end of the conference, I have 111 security problems.  By that I mean that security conference presentations are good at defining problems I previously didn’t know I had.
  • There is almost certainly a selection bias in the presentations picked by security conferences: talks are generally about problems that the presenters have solved, or mostly solved.  Those presenters, their problems, and their solutions exist in an ecosystem largely defined by their culture, skills, risk appetite, and so on.  I rarely get “actionable” information out of conference presentations.  For me, the most interesting part of security conferences is looking at the logic and creativity behind how the presenter got to their solution.  That feels like the important takeaway, and I wonder if conference presenters ought to play up their thought processes as much as their solutions.
  • Thinking about named vulnerabilities like KRACK, Shellshock, and Heartbleed, I’m reminded that we have a pretty immature threat prioritization problem, which has been made worse, in some instances anyhow, by effective vulnerability marketing programs.  With the recent spate of high-profile worms (if three can be considered a spate), it seems likely that we should inject a “wormability” factor into the vulnerability assessment score (a toy sketch of the idea follows this list).  I am sure it’s already represented, at least in part, but it seems intuitive, at least to me, that not all CVSS 10.0 vulnerabilities are created equal – some are much more pressing than others.  ETERNALBLUE/ETERNALROMANCE/MS17-010, which enabled the WannaCry outbreak, is a good example.  That presumes we get enough information with the vulnerability disclosure to make such an assessment.  It’s also clear to me that we have “self-constructed vulnerabilities” in our environments that are wormable, but for which there is no patch.  NotPetya and Bad Rabbit seem like good examples.  I could have powered a small city with the energy spent on hand-wringing when I mentioned there is no “patch” for those two issues.  As I’ve written on this site in the past, these techniques are commonly exploited by more focused attackers; however, there has been some success at automating them, and I see no reason that trend won’t continue.  We have the CWE concept, but I don’t think it hits the mark for “self-constructed vulnerabilities”.  I think this is more like an “OWASP Top 10” for infrastructure.  Anyhow, I’m not aware of anything that uniformly identifies/measures/rates such “self-constructed vulnerabilities”.
  • I see a lot of focus on automation and orchestration creeping into infosec conferences, which I think is a good thing.  There was a presentation at this conference on “InSpec” compliance as code.  I also recently read the book “Infrastructure as Code”, which is pretty enlightening and makes my mind spin with possibilities for having “IT as Code”, which would include things like “Infrastructure as Code”, “Security as Code”, “Compliance as Code”, “Resiliency/Redundancy/Recovery as Code”, and so on.  I wonder if we will get to the point where our IT is defined in a configuration file that specifies basic organizational parameters which are interpreted and orchestrated into a more or less fully automated, self-checking, self-monitoring, self-healing, and self-recovering infrastructure.  This seems inevitable.
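Here is the toy sketch I promised on the “wormability” idea.  The vulnerabilities, scores, and weights are all invented for illustration; this is only meant to show how two findings with the same base score could be prioritized very differently, not to propose an actual scoring standard.

```python
# Toy prioritization sketch: same base score, very different urgency once a
# crude "wormability" multiplier is applied. All values here are invented.
vulns = [
    {"id": "Remote RCE, no user interaction, exploit public", "cvss": 10.0,
     "network_reachable": True, "no_user_interaction": True, "public_exploit": True},
    {"id": "RCE requiring local access and user interaction", "cvss": 10.0,
     "network_reachable": False, "no_user_interaction": False, "public_exploit": False},
]

def wormability(v):
    """Crude 1.0-2.0 multiplier: remotely reachable, zero-interaction,
    already-weaponized bugs float to the top of the queue."""
    factor = 1.0
    if v["network_reachable"]:
        factor += 0.5
    if v["no_user_interaction"]:
        factor += 0.3
    if v["public_exploit"]:
        factor += 0.2
    return factor

for v in sorted(vulns, key=lambda x: x["cvss"] * wormability(x), reverse=True):
    print(f'{v["id"]}: priority {v["cvss"] * wormability(v):.1f}')
```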

That’s it.  Any thoughts on these?


Why Putting Tape Over Your Webcam Might Make Sense

I will admit that I roll my eyes, even if only on the inside sometimes, when I see people with tape or some other device covering the webcam on their laptop.  My self-righteous logic goes like this: most people I interact with use the computers I see them using for business purposes, and those machines likely aren’t perched on a nightstand or bathroom counter in the evenings or early mornings.  If the webcam on my laptop were hijacked, the perpetrator would be exposed to hours upon hours of me making faces in reaction to emails and instant messages from co-workers.  Audio is a much, much larger threat to confidentiality, and I have yet to see anyone taking action against the built-in microphone on their laptop.  Maybe as humans we feel that someone secretly watching us is more of an invasion of privacy, but it doesn’t take a lot of thought to conclude that an attacker would obtain far more value from listening than from looking.  That is, unless the attacker is doing it for blackmail or out of some twisted, possibly perverted obsession with spying on people.

A few days back, The Verge posted the following video on Twitter:

A casual listen to the video left me laughing it off: haha – tape on the webcam won’t really do a lot, but it may make you feel better.  I listened to it again, though, and caught something I missed the first time.  The narrator interviewed Tod Beardsley from Rapid7.  Kudos to Tod for giving what I thought was an amazingly insightful reason for covering a webcam.  In the video, I believe Tod called it “superstitious”; however, the point he was making is very important and accurate.  If we believe something about ourselves, we generally act accordingly.  Tod is explaining that if I put a piece of tape on my webcam, that tape serves as a constant reminder throughout my work day that I am a security-minded person.  One of the really interesting findings in behavioral psychology is that our mindset is often based on a perception we have of ourselves that is shaped by what we have previously done.  I must be security minded – I put tape on my webcam, after all.  And that constant reminder will permeate the decisions I make that have security consequences, such as picking a better password than I otherwise would, or thinking twice before clicking on a link.  As technologists, that idea probably doesn’t sit well because we expect that it wouldn’t work on “us”.  However, in the world of psychology, unlike the world of computers, things are not deterministic and are more about averages.  So yes, this phenomenon will not work on everyone, or to the same extent every time, but on average it likely does have some beneficial effect, and therefore I am going to stop rolling my eyes when I see tape over webcams.

As I learn more about behavioral psychology, it’s clear that there is a lot of opportunity to explore potential benefits for making security improvements.  If you are interested in learning more, I recommend reading books by Dan Ariely, Daniel Kahneman, Richard Thaler, and Tom Gilovich.

*note: some of my twitter friends pointed out that they tape their webcams to ensure they are not caught by surprise when joining WebEx-style meetings. That makes sense.

Game Theoretic Impacts of NotPetya and Bad Rabbit

The lateral movement techniques used by NotPetya and Bad Rabbit are not new, particularly to those of us who have to clean up the mess following breaches perpetrated by “sophisticated actors”.  Those techniques, in fact, are a pretty common feature in many targeted attacks.  Until recently, however, they have been carried out by a person, or team, sitting at a keyboard, meaning the damage of a single campaign was more or less contained to a single organization, usually with the intention of surreptitiously stealing data, rather than wanton destruction.  Many such breaches likely either go unnoticed or unreported.

NotPetya and Bad Rabbit are changing the economics of these techniques.  What was the domain of “sophisticated actors” targeting a specific entity has now been largely automated in a manner that can target an arbitrary number of victims simultaneously.  The move to widespread system and data destruction rather than targeted exfiltration means that organizations generally can’t hide the fact that they’ve been compromised.

NotPetya was seeded into victim organizations through a tainted auto-update to relatively obscure tax software used by some Ukrainian organizations, and yet it had dramatic impacts around the world.  Bad Rabbit was seeded into victim organizations through compromised web servers pushing fake Flash updates.  While both NotPetya and Bad Rabbit are alleged to have come from the same actor, it’s not a far leap to expect copycats using all manner of entry techniques, like exploit kits, trojaned downloads, and USB drives in the parking lot, to deliver malware that drops RATs, garden-variety ransomware, data stealers, and so on, all using these automated lateral movement techniques to broadly infect victim systems.

As the apology letters of most breached organizations state, they take security very seriously.  Likely those letters really mean they take security seriously after the breach.  Most people in IT security can relate to this: it’s tough to get management to invest in mitigating a risk until after a loss is realized from that risk.  NotPetya- and Bad Rabbit-style attacks have the potential to change that dynamic, though.  These attacks are HIGHLY visible in the media, the victims for once weren’t “missing a patch” that “caused” the breach, and the damages publicly reported by victims are significant.  The perception of “vulnerability” is also dramatically different in these instances: traditional threats generally impacted only a single employee’s system, and maybe the data that employee had access to*.  (*Remember, no one thinks the “sophisticated actor” is coming after them.)

This type of attack also highlights one of the challenges with relying on the “human” firewall.  Exceptional organizations are able to get their rate of falling for phishing attacks down from the 30-40% average range to the low single digits.  In an organization of any size, though, that means at least a few people are likely to fall for any given campaign.  If the attack is one that moves laterally, such as Bad Rabbit, the 98% of people who do not fall for it don’t change the outcome.  It only takes one person to bring the house down.  This is likely true in many other types of targeted attacks on organizations, but that is a post for another day.
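To put a rough number on that intuition, here is a quick back-of-the-envelope calculation; the 2% click rate and 500-person headcount are assumptions made up for the example:

```python
# Probability that at least one employee clicks, assuming (for illustration)
# an independent 2% per-user click rate across a 500-person organization.
p_click = 0.02
employees = 500

p_at_least_one = 1 - (1 - p_click) ** employees
print(f"{p_at_least_one:.5f}")  # ~0.99996 -- effectively certain for any given campaign
```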

The challenge, as always, is figuring out what to do.  Fortunately or unfortunately, depending on your perspective, robust architecture design and solid operational processes are an effective mitigant for the types of attacks we have seen thus far.  Security is hard, and remains so.  Possibly NotPetya and Bad Rabbit, and the inevitable next volleys that follow in their footsteps, will begin to raise interest in making the fundamental improvements necessary to avoid becoming another statistic in these attacks.


Cyber Security Awareness Month 2017

Cyber Security Awareness Month is the time when many organizations run their internal security awareness programs for employees, and the time when those of us in the security industry are encouraged to help raise awareness of cyber threats among friends and family using tools like SANS’ most excellent OUCH! newsletter.  While I think those are great things to do, I propose we consider some new traditions for CSAM.  To maximize security improvements, we should apply effort to raising awareness in populations that offer more significant points of leverage than rank-and-file employees.  These are communities that we typically do not consider the primary targets for CSAM, such as:

  • IT staff, including developers, architects, engineers, network, database, and systems admins
  • Infosec staff
  • Internal audit staff

In my experience, the most common root cause of, and significant contributing factor to, security incidents is IT environments designed, implemented, and operated by people who don’t understand how technology can be, and indeed is being, abused. Let’s consider phishing for a moment. While training employees to recognize phishing emails is beneficial, it should be intuitive that, over time, people WILL occasionally fall for phishes, and blaming an ensuing breach on an employee’s failure to recognize a phish is not helpful. In addition to training employees to recognize phishing emails, we should also provide ongoing training to the IT staff who design and operate the mail, workstation, and network environments so that they understand how these attacks work, new techniques, prevention mechanisms, and detection strategies. Phishing, of course, is just an example, and there is much to learn and stay on top of across the infosec spectrum.

I’m not aware of any such training that is readily available in the format I’m describing, so this is an aspirational idea, not necessarily something we can run out and implement tomorrow.  A good, and usually free, source for this is security conference videos.  As much as I like them, though, they are not the most efficient means of getting an overview across a broad set of topics.  I do suspect we can tailor the content to roles; for instance, developers and network administrators likely won’t benefit from the same types of information on attacks.

Training needs to be ongoing, too. Tactics and threat actors evolve. The continuing education model of certifications seems like a good avenue for keeping people accountable, however the things counted as “continuing education” can be more than a bit dubious. Another trap to watch out for is “training” provided by infosec vendors, such as webinars, that are effectively just marketing vehicles for the vendors’ offerings. Remember that vendors are in the business to sell, and part of doing that is to convince us that a) we have a problem that they can solve and b) they can solve our problem better/faster/cheaper than anyone else.

I am not proposing that we make these groups of people experts in offensive security tactics, but rather that we provide a periodic, up-to-date overview of how adversaries use those tactics so that our employees will be able to make more informed decisions when performing their jobs, in the same way that we expect regular awareness training to help an employee identify a phishing email.

Bloodletting and Ransomware

I just read this post on “How to protect your network from ransomware.” The post doesn’t contain advice that will prevent modern ransomware attacks, though. I do not intend to pick on the author or Network World; I know they are trying to help, and the advice is certainly sound general security hygiene.

Until about a hundred years ago, bloodletting was a pretty common medical treatment for many kinds of diseases. Looking back at it now, the practice is pretty disturbing and counterproductive. But at the time, the treatment appeared to work great. People were treated and either the bloodletting worked (i.e., they recovered) or it didn’t work (they died). Patients that recovered were held as evidence the treatment worked, and patients that died were simply considered to have been too far gone for anything to have helped.

I see a lot of the same faulty logic in security advice. No ransomware outbreak means the advice worked, and an outbreak is attributed to some issue so extraordinary that no advice could have helped. Attacks that successfully trick our fully phishing-awareness-trained staff and evade our antivirus applications are so cutting-edge that nothing we could have done would have prevented them anyway. Right?

Why don’t we write guides that contain advice on actually preventing ransomware attacks?

Major Cyber Attacks As A Source of Renewal

It is pretty well accepted that, while devastating, some types of natural disasters, such as forest fires, have the effect of allowing new life to take root and flourish.

I’ve often lamented how difficult it can be, particularly in larger organizations, to make significant security enhancements because of the costs involved and the requisite interruption of business operations.  We’ve now witnessed a number of pretty high-profile cases where the IT environments of organizations were all but destroyed and had to be rebuilt, such as with Saudi Aramco and Sony, and most recently with NotPetya’s effect on companies around the world.  I am not intending to minimize the devastation to these companies; however, these types of events seem similar to the forest fire analogy, providing an opportunity in the midst of disaster to make strategic improvements.

I wonder, though: can an organization take meaningful advantage of this bad situation?  In the aftermath of such an event, the priority is almost certainly on restoring functionality as quickly as possible, and the straightest line to get there is likely to implement things as they previously worked, with some slight adjustments to account for the perceived cause of the problems.  Many organizations have disaster recovery and business continuity plans, and some of those plans are starting to incorporate the concept of recovering from a “cyber disaster”; however, those plans all deal with getting back to operations quickly by recreating the existing functionality.  I am thinking that such plans may benefit from keeping a punch list of “things we would do differently if we could start over”.  We all have those lists, if only in our heads, and the utility of documenting such a list isn’t limited to these mega-bad recovery scenarios – such lists are also useful in normal planning cycles, technology refreshes, and so on.

What do you think?

The Trouble With Applying Kinetic Metaphors to Cyber

I was having a good debate with some of my twitter friends that started off with this tweet:

At one point in the discussion, @JeffWilsontech brought up safes:

That got me thinking.  In the info/cyber security world, we draw a lot of comparisons to the physical world.  Who hasn’t seen this image in a security presentation?

As humans, we learn by building associations with things we already know.  Kinetic-space security concepts are relatively easy to grasp and there are intuitive relationships between security in cyber space and security in kinetic space.  For example, a firewall is like the walls of a building, and openings through the firewall are like doors and windows.

My observation is that the intuitiveness of this analogy can lead us astray when we think about IT security defenses, though.  For example, consider safes, as Jeff mentioned above.  Commercial safes have a rating that denotes the amount of time they will resist attacks from picks and mechanical and electrical tools – usually less than an hour.  Attacks on safes generally require the adversary, even a knowledgeable one, to run through a time-consuming process to break in.  The logical equivalent to think of here is some encrypted data, not network or system security.  To construct an equivalent attack on a safe, we would need to imagine an attacker, residing anywhere in the world, almost instantly teleporting the safe to the business end of a 1000T hydraulic press in an evil lair’s machine shop.  The safe is immediately popped open by the press without regard for its security rating.

In the case of the walls being like a firewall, the building’s doors could all be locked from the outside, i.e., nothing is allowed in via the firewall.  However, people from all over the world are able to watch others coming and going from the building, and one can hide in the briefcase of one of the building’s inhabitants while that person is out for a walk.  Once the unsuspecting person is back inside the building, the intruder surreptitiously exits the briefcase and can now come and go from the building as they please.

These are pretty dull examples that I suspect are intuitive to most of you.  However, I see many people in the industry drawing parallels to kinetic-space constructs such as insurance, building codes, fire codes, and electrical codes as a means to improve security through regulation.  I am in the camp that security will not generally improve unless there is a regulatory obligation to do so; the free market simply does not provide an incentive for organizations to either produce secure products or design and operate secure applications, systems, and networks.  The challenge with this approach is that it’s fundamentally incompatible with the current philosophy of IT systems design and operation, and with the threats to those systems.  Fire codes, for example, define a pretty objective set of requirements that address a broad swath of risks: sprinklers work pretty well at putting down a fire (almost) regardless of the cause.  Electrical codes seem conceptually similar to IT: anyone with some amount of electrical knowledge can wire up a building, similar to how anyone with some IT abilities can create an IT system.  From here, though, the two diverge.  There is a pretty rigid set of electrical standards, typically based on the National Electrical Code.  Municipal fire and electrical codes do not allow for much “innovation” in the way that is practiced in the IT world.

The “Underwriters Laboratories” approach to cyber security seems intuitively sensible, but we have to remember that it will necessarily have a negative impact on innovation in the IT product market (which I know many will not see as a bad thing), and it also does not address the consumer/integrator side of the equation, which I argue is where much of the problem comes from.  Then there are complicated questions about things like open source software, the Raspberry Pi, and so on.

Pairing an “Underwriters Laboratories” approach with a “National Cyber Code” would seem to provide a more secure world, but it would come with a pretty steep cost.

A significant headwind against this approach is, well, the whole economy of existing producers and consumers of IT products and services, consulting companies, integrators, and so on.  We can’t discount the influence these entities have on the regulatory process, even if to varying degrees in different countries.  Even in countries with very progressive data protection laws, we can see the desire for regulations to provide latitude in IT.  The GDPR, which is in my view the only regulation with the *potential* to drive major changes in security, is quite abstract in its data security obligations:

  1. Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

    1. the pseudonymisation and encryption of personal data;
    2. the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
    3. the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
    4. a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
  2. In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.

  3. Adherence to an approved code of conduct as referred to in Article 40 or an approved certification mechanism as referred to in Article 42 may be used as an element by which to demonstrate compliance with the requirements set out in paragraph 1 of this Article.

  4. The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law.

The problem with wording such as this is that it doesn’t define what is actually needed to protect data – that is left to the data controller and data processor to decide, presumably so they can each continue to “innovate” in their respective IT programs.  Now, I also know one of the objectives of this particular wording is to help the regulation remain relevant over time.  A regulation that is too specific may quickly become out of date due to the emergence of new threats.

Finally, another challenge I see with the “UL + NEC” approach is that the prescriptive IT regulations that do exist, such as HIPAA and PCI DSS*, haven’t proven very effective at protecting data, since organizations that have been certified as compliant still end up being breached occasionally.  Then again, we see structures with NEC-compliant electrical systems and UL-reviewed appliances burn down periodically, too.

It seems to me that another component of the solution, at least for protecting consumer data, is to limit the use of such data, as the GDPR also does, and to reduce the value of that data – in economic terms, hitting both the supply side (data-hoarding companies) and the demand side (data-thieving criminals).  For example, credit card data is valuable because it can be used to steal money.  What if we changed the way credit worked so it didn’t rely on a static number that can be stolen?  Clearly that can’t work for everything, but sucking the value out of stealing personal data would go a long way.
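As a rough illustration of what “sucking the value out” might look like, here is a toy sketch of the single-use-token idea; the function names and flow are invented for the example and gloss over everything a real payment network would have to handle:

```python
# Toy sketch of the "no static number" idea: the merchant only ever sees a
# single-use token, so a stolen or replayed token is worthless after one use.
import secrets

vault = {}  # token -> account mapping, held only by the issuer

def issue_token(account_id):
    """Issuer mints a one-time token bound to the real account."""
    token = secrets.token_hex(8)
    vault[token] = account_id
    return token

def redeem(token):
    """A token authorizes exactly one charge, then becomes worthless."""
    return vault.pop(token, None)  # second use returns None

t = issue_token("account-42")
print(redeem(t))  # 'account-42' -- first use succeeds
print(redeem(t))  # None -- the stolen/replayed token has no value
```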

* Yes, I know PCI DSS is not a regulation.