More Effective Security Policies

On my evening walk with my best friend, I pondered the disconnect between security policies and security outcomes.  Every organization I’m aware of has well-intentioned security policies that enumerate important security objectives, for example, the maximum amount of time allowed to apply security patches to systems and applications.

My hypothesis is that information security staff believe IT and business teams will translate those policy requirements into functional requirements when developing systems, while IT and business teams are actually developing systems based on a set of business objectives.  Hard security requirements, for example minimum password lengths, usually do end up in IT’s design requirements, whereas process-based security requirements are either not included or not included in any meaningful way.

Let’s consider patching.  For the sake of argument, let’s assume our security policy mandates applying high severity patches on Internet-facing systems within 48 hours.  What does the IT team do with this requirement when developing a new system?  Likely not much.  Patching is an operational process, not a requirement on how systems are built.  All systems need to be patched; they all can be patched; we have teams to apply patches; good enough.

In practice, though, I don’t know of a single organization that doesn’t struggle with applying patches on time.  There aren’t enough people, or the system can only be taken down once a quarter and never between Thanksgiving and Christmas, and so on.  This made me wonder: would adding policy requirements that enumerate operational expectations, in addition to traditional security objectives, help this situation?

For example, rather than a policy that says:

“Apply high severity patches to Internet-facing systems within 48 hours”

We instead have one that says:

“Internet-facing systems must be designed in a manner that enables applying high severity patches within 48 hours, including during change freezes.

Support teams must be appropriately staffed to consistently perform patch testing and apply patches on all relevant Internet-facing systems within 48 hours.

Operational processes, including change management, must support the 48-hour patching requirement for Internet-facing systems.

Appropriate test environments must be maintained to support the 48-hour patching requirement for Internet-facing systems.”

This is only a simplistic example; however, it provides objective requirements that can be included directly in IT development plans.
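To show just how objective such requirements can become, here is a minimal sketch in Python of measuring the 48-hour patch SLO as code; the inventory fields, timestamps, and system names are all hypothetical:

```python
from datetime import datetime, timedelta

SLO = timedelta(hours=48)

# Hypothetical inventory: when each high severity patch was released for an
# Internet-facing system, and when (if ever) it was applied.
systems = [
    {"name": "web-01", "released": datetime(2018, 2, 1, 9, 0), "applied": datetime(2018, 2, 2, 8, 0)},
    {"name": "web-02", "released": datetime(2018, 2, 1, 9, 0), "applied": None},  # still unpatched
]

def slo_violations(systems, now):
    """Return names of systems that missed, or are past, the 48-hour window."""
    violations = []
    for s in systems:
        deadline = s["released"] + SLO
        if s["applied"] is None and now > deadline:
            violations.append(s["name"])        # unpatched and overdue
        elif s["applied"] is not None and s["applied"] > deadline:
            violations.append(s["name"])        # patched, but too late
    return violations

print(slo_violations(systems, now=datetime(2018, 2, 5)))  # ['web-02']
```

A check like this could run continuously against the asset inventory, turning the policy statement into something design and operations teams can actually be measured against.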

Does anyone do such a thing?  Has it helped?

Thoughts on Autosploit

The announcement of Autosploit created quite a stir in the infosec community last week.

Much of the debate centered on the concern that any real use of the tool is likely to be illegal and that the tool has no particular security research utility; all it serves to do is make it simple for script kiddies to break into our systems.

Many people have rightly pointed out that this tool isn’t enabling anything, in terms of breaking into vulnerable systems, that isn’t already possible.  Said another way, we shouldn’t see the tool as the problem – we should see the vulnerable devices as the problem, and if the tool can affect your devices because they are vulnerable, a) that’s not the tool’s fault, and b) you’re probably already pwnt or are living on borrowed time.

I think there are two big issues raised by Autosploit that I haven’t seen discussed (not to say someone hasn’t brought them up before me):

  1. Autosploit will likely serve as a framework for future automated exploitation, and using Shodan for targeting effectively allows an attacker to efficiently target all vulnerable systems accessible from the Internet which haven’t blocked Shodan’s scanners (see the sketch after this list).  This means we should expect the marginal time to compromise of our vulnerable Internet-connected systems to drop precipitously for certain types of vulnerabilities.
  2. Largely because of #1 above, most of us should probably fear the second-order impacts of Autosploit more than the first-order impacts.  By that I mean even if we are diligent in rapidly patching (or otherwise mitigating) our vulnerable systems, the ability of the baddies to quickly create new botnets that can be used to perpetrate attacks against other Internet infrastructure, as we notably saw with Mirai, creates problems that are much harder and more expensive for us to mitigate than simply patching systems.  And we, unfortunately, don’t get to choose when everyone else patches their systems.
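To illustrate how little effort the Shodan half of this workflow takes (and why defenders should run the same query against their own footprint), here is a minimal sketch using the shodan Python library; the API key and the banner query are placeholders:

```python
import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

# An attacker would search for a vulnerable product/version banner;
# a defender would search for their own netblocks or organization name.
results = api.search('product:"Apache httpd"', limit=50)

print(f"{results['total']} matching hosts indexed by Shodan")
for match in results["matches"]:
    print(match["ip_str"], match.get("port"), match.get("org"))
```

That is essentially the entire target-acquisition phase; everything after it is just iterating exploits over the list.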

Autosploit-style tools are inevitable, and indeed, as some people have pointed out, the technique is not new.  While that is true, Autosploit may well accelerate some of the “innovation” (even for as simple a code base as it is), and that is going to drive us defenders to likewise accelerate our innovation.  In the long run, tools like Autosploit, which drive attack efficiency, will very likely change the way IT and infosec have to operate, both from a first-order and a second-order defense perspective.

Concentration of Risk and Internally Inconsistent Regulations

Concentration of Risk

Last week, I wrote a post on why we shouldn’t feel bad for cyber insurance carriers, because they do a pretty good job of limiting their obligations in the event of a loss.  Since then, I recalled reading a story about concentration risk among some insurance providers in Florida and started wondering whether a similar challenge is likely to evolve in the cyber insurance arena.  The problem in the Florida instance was, as far as I can recall from the story, that a particular carrier insured many houses in Florida.  During a particularly bad year for hurricanes, the carrier went bankrupt because it wasn’t able to cover the claims from its many customers.

As I understand it, larger carriers mitigate this risk by maintaining a broad portfolio of insured clients in geographically distributed areas and limiting exposure to any given area through rate adjustments.  Insurance only works when there are many more people paying in than cashing out through claims.  Of course, there are reinsurers who can also help cover losses from a carrier, but they, too, have to monitor their risk concentrations.

Hopefully it’s obvious where I’m going here.  In the cyber security world, risk is concentrated not along geographic boundaries, but along technical boundaries.  I suspect that the cyber actuaries at insurance providers already have this problem well in hand, but I can see that it will be difficult to avoid risk concentration in cyber insurance, particularly given the mono-cultures we have in technology, which can lead to events such as WannaCry and NotPetya rapidly causing massive damage across geographic and industry lines.  While cyber insurance carriers may be able to limit their exposure on any given policy through liability caps, if all or most of a carrier’s clients simultaneously need to make a claim, we will likely see something similar to the Florida hurricane situation.  From a macro perspective, it seems to me that cyber insurance carriers may be better served by focusing on this technology mono-culture problem than on monitoring the controls of any given organization.
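A toy Monte Carlo sketch makes the concentration point concrete (every number here is invented for illustration): when claims are independent, premiums comfortably cover losses; add a low-probability monoculture event that hits most of the book at once, and the carrier takes a catastrophic loss in a meaningful fraction of years:

```python
import random

POLICIES = 1000
PREMIUM = 50_000      # annual premium per policy (invented)
CAP = 1_000_000       # per-policy payout cap (invented)
P_CLAIM = 0.03        # independent annual claim probability (invented)
P_SYSTEMIC = 0.02     # annual chance of a WannaCry-style event (invented)
HIT_FRACTION = 0.6    # share of the book affected by a systemic event

def annual_result(monoculture_risk):
    claims = sum(random.random() < P_CLAIM for _ in range(POLICIES))
    if monoculture_risk and random.random() < P_SYSTEMIC:
        claims += int(POLICIES * HIT_FRACTION)  # correlated claims, all at once
    return POLICIES * PREMIUM - claims * CAP

for label, risky in (("independent claims", False), ("plus monoculture event", True)):
    years = [annual_result(risky) for _ in range(10_000)]
    print(f"{label}: P(losing year) = {sum(y < 0 for y in years) / len(years):.2%}")
```

With these made-up numbers, the independent book almost never loses money, while the correlated book loses catastrophically in roughly one year in fifty, which is the Florida hurricane dynamic in technical rather than geographic terms.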

Internally Inconsistent Regulations

The GDPR has been on my mind a lot lately.  It mandates that the personal data of EU citizens be both properly protected and accessible.  Exactly what that means is left to the Data Controller and Data Processor to guess, presuming that the data protection authorities will tell them, in the form of a potentially existential fine, if they guess wrong.  The recently disclosed Meltdown and Spectre CPU speculative execution “vulnerabilities” raise an interesting paradox: it may not be possible for an organization to both protect the data AND ensure it is available, at least on certain timelines.  I certainly don’t believe that Meltdown or Spectre create this situation, but they highlight that the situation could exist.

The vendor response to Meltdown and Spectre has been disastrous: vendors are releasing patches, only to later retract them; some software creates incompatibilities with fixes from other vendors; some vendors recommend not installing other vendors’ fixes; and so on.  And there is no meaningful recourse for technology users beyond waiting for software fixes that potentially cause significant performance impacts and/or random system reboots: hardware vendors have yet to release new processors that fully fix the problem.

If Meltdown and Spectre were more serious vulnerabilities, in terms of the potential risk to personal data, how would Data Controllers and Data Processors balance the risk between the dual mandates of security and availability in the GDPR, with such a potentially significant stick hanging over their heads?  The “regulatory doves”, no doubt, will respond that “regulators would take a measured view in any kind of enforcement situation”.  Maybe they would, and maybe they would decide that an organization chose badly when determining whether to shut off its systems to protect the personal data, or keep them running (and the data vulnerable).

Hopefully this remains in the domain of an academic thought experiment; however, with the trend line of EternalBlue, Intel AMT, Intel ME, and now Meltdown/Spectre, it seems highly likely that some organizations will face this decision in the not-so-distant future.

Thoughts On Cyber Insurance and Ponemon Surveys

Insurance

I read this post last week about expectations of cyber insurance shaping the future of cyber security.  At one point, I held the same view: there is a strategic advantage to a company, or an insurer, that develops optimized models of cyber security investment.  I’ve come to accept that, like forecasting the weather, there are just too many variables in IT for such a construct to take hold, at least for the foreseeable future.  Reports I read about cyber insurance typically cogitate on three things:

  1. How does the nebulous concept of “loss” from a cyber security event get calculated?  Some losses can be huge.  Where does the line get drawn?
  2. Partly because of #1, and partly because of the difficulty in predicting the rate at which cyber incident-related losses will happen, cyber insurance carriers are very likely carrying a lot of risk.
  3. Insurance companies are going to drive security discipline in their clients through variable rates, or withholding coverage, based on the clients’ hygiene.

Worry over the health of cyber insurance companies due to the perceived dual unknowns of loss magnitude and frequency seems misplaced, because insurance carriers do not offer uncapped damages in their policies, at least none that I am aware of.  Indeed, the caps are relatively low, and the premiums are quite expensive.  Given that, insurers shouldn’t need to define loss rates or magnitudes with significant precision to avoid losing money overall – they just need to make an assumption about how many clients will file a claim in a given year and set premiums accordingly.  Even then, carriers cover themselves through their own policies with reinsurance carriers.
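A back-of-the-envelope sketch of that pricing logic, with every figure invented for illustration: given a payout cap and an assumed claim rate, the break-even premium is simple arithmetic, and no precise loss model is required:

```python
expected_claim_rate = 0.05   # assume 5% of clients file a claim in a year
cap = 2_000_000              # per-policy payout cap
loading = 1.4                # multiplier for overhead, reinsurance, and profit

break_even_premium = expected_claim_rate * cap   # expected payout per policy
quoted_premium = break_even_premium * loading

print(f"break-even premium: ${break_even_premium:,.0f}")  # $100,000
print(f"quoted premium:     ${quoted_premium:,.0f}")      # $140,000
```

As long as the cap holds and the claim-rate guess isn’t wildly off, the carrier stays solvent; the precision lives in the cap, not the model.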

Where things will get interesting is when/if the cyber insurance market becomes highly competitive and carriers are competing on premium rates.  I expect that we will indeed see carriers trying to drive client hygiene, and to some extent we are seeing this already through various partnerships between cyber security companies and some insurers, though I suspect that is more a statement about the smooth sales pitches of those security firms than about any significant need of the carriers.  In the US, at least, Progressive Insurance offers customers a discount on auto insurance if they are permitted to monitor the driver’s behavior through a device that plugs into the car’s OBD-II port.  I can see a cyber equivalent, though I am not sure exactly what form it will take.  And to be honest, I am not sure how helpful the data would be in measuring the likelihood of a company being breached.

Net: don’t cry for insurance companies; don’t expect insurance companies to deliver the IT industry from our breachy ways.

Surveys

I saw several posts this week about Ponemon’s latest survey on data breaches.  I still contend that such surveys are not very helpful for prioritizing security programs because they are backward-looking, subjective opinion surveys with no statistical validity.  So why do people pay Ponemon to do them?  It hit me earlier this week: Ponemon reports, and reports like them, are not intended for infosec people.  They are intended to help infosec vendors understand the buyers of their wares.  I suspect most of the rest of the world already recognized this, but I am not always the sharpest knife in the drawer.

Thoughts on Cloud Computing In The Wake of Meltdown

One of the promises of cloud computing, particularly IaaS, is that the providers operate at a scale that affords certain benefits that are hard to justify and implement for all but the largest private datacenter operations.  For example, most cloud datacenters have physical security and power redundancy that is likely cost prohibitive for most companies whose primary business is not running a datacenter.  Meltdown and Spectre highlighted some additional benefits of operating in the cloud, and also some potential downsides.

First, since managing servers is the very business of cloud providers, and they tend to have very large numbers of physical servers, it seems that most cloud companies were able to gain early insight into the issues and perform early testing of patches.

Second, because cloud providers have so many systems to manage and the name of the game is efficiency, they tend to be highly automated, and so most of the major cloud providers were able to patch their estates either well before the disclosure or shortly after it.  That’s a good thing, as many companies continue to struggle with obtaining firmware fixes from their hardware vendors nearly two weeks later.  Of course, to fully address the vulnerability, cloud customers also have to apply operating system patches to their virtual server instances.

There are some downsides, however.

First, Meltdown apparently made it possible for a guest in one virtual machine to read the memory of a different virtual machine running on the same physical server.  This is a threat that doesn’t exist on private servers, and is much less concerning for private cloud.  This vulnerability existed for many years, and we may never know if it was actually used for this purpose; however, once it was disclosed, cloud providers (generally) applied fixes before any known exploitation was seen in the wild.

Second, another benefit of cloud from the consumer perspective is “buying only what you need”.  In the case of dedicated servers, we traditionally size the server to accommodate the maximum load it will need to handle while providing the required performance.  Cloud, though, gives us the ability to add capacity on demand, and because, on a clock cycle-by-clock cycle basis, cloud is more expensive than a physical server in the long run, we tend to buy only the capacity we need at the time.  After cloud providers applied the Meltdown patches, at least some kinds of workloads saw a dramatic increase in the compute capacity required to maintain the same level of performance.  One of the big downsides to cloud, therefore, seems to be the risk of a sudden change in the operating environment that results in higher cloud service costs.  As problematic as that might be, firing an API call to increase the execution cap or add CPUs to a cloud server is logistically much simpler than replacing private physical servers that experience the same performance hit, which requires the arduous process of obtaining approval for a new server, placing the order, waiting, racking, cabling, setup, and so on.
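For example, here is a minimal boto3 sketch of that API call on AWS EC2 (the instance ID and target size are placeholders, and a resize requires stopping the instance first):

```python
import boto3  # pip install boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# Absorb a post-patch performance hit by moving to a larger instance type.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},  # placeholder larger size
)
ec2.start_instances(InstanceIds=[instance_id])
```

Minutes of automation versus weeks of procurement: that is the trade, even if the monthly bill goes up.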


The Problem With Breach Surveys

I just read this alarming post citing a survey performed by an insurance company indicating that 29% of US businesses suffered a data breach.  I suspect that most people are well aware that such a survey of 403 senior executives almost certainly can’t be extrapolated in any meaningful way*.  The more important issue with these numbers, as compared to, say, the rate of US companies suffering losses from a fire, is that the harm isn’t necessarily recognizable as such.  By that I mean that businesses generally can tell with a high degree of precision whether they had a fire in the past year, but that isn’t necessarily true of breaches.  The only reliable response is an affirmative one (“why yes, we did have a breach last year”).  Any other response is really saying “no, we didn’t have a breach that we know about”.  This is pretty significant, because it means that the real rate of corporate breaches is likely much higher.  We have numerous examples (read: Yahoo!) of larger, more sophisticated companies going years before recognizing that a breach occurred.  Depending on the intentions of the adversary involved, less sophisticated companies might never come to realize they were breached.

That raises an interesting question: if a tree falls in the woods and no one hears it, did it make a sound?  I mean, if a company is breached and never recognizes that it happened (and hence never suffers any recognized ill consequences from the breach), does anyone really care?  Sure, the harm may be subtle, such as a foreign competitor releasing a competing product without having to invest the R&D expended by the breached company, or the breached company’s clients being harmed in a way that isn’t attributed to that company.

Tautologically, we don’t know how often these happen because we don’t know when they happen.  I strongly suspect that if we were able to deploy the “BREACH DETECTOR 9000 NOW WITH REDUNDANT BLINKY LIGHTS(tm)” that could identify every single breach on every company network, we would find that the rate of breaches is far higher, possibly “almost certain”.  Would security programs and IT departments act differently if the report instead read that “95% of US companies were breached in 2017”?
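The arithmetic here is straightforward: a survey can only observe detected breaches, so the true rate is the observed rate divided by the probability of detection.  A quick sketch, with the detection probabilities invented for illustration:

```python
observed_rate = 0.29  # share of respondents reporting a known breach

# True breach rate implied by various assumed detection probabilities.
for p_detect in (1.0, 0.75, 0.5, 0.3):
    true_rate = min(observed_rate / p_detect, 1.0)
    print(f"detection probability {p_detect:.0%} -> implied true breach rate {true_rate:.0%}")
```

At a 30% detection rate, the implied true rate is already around 97%, which is roughly the “95% of US companies” scenario above.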

* Ok, so maybe it would be valid as “29% of US companies whose senior executives we were able to obtain a response from experienced a breach in 2017”.

Prioritizing Infosec Programs

What follows is a barely intelligible, Christmas cookie-induced attention deficit rant on the state of the industry.

The most excellent Jake Williams wrote an interesting post on his company’s blog about a Twitter survey he ran, asking whether network or endpoint visibility is more important for detecting APT intrusions.  Jake points out there really isn’t a strong consensus among the over 1,100 people who voted in the survey, nor in the responses to it, and that there may be a cyclical nature to the way infosec people would rank order these controls over time.

I continue to grow increasingly interested in the psychological aspects of security and perceptions of risk among IT and infosec people, and Jake’s post is a good example of why.  There is no objectively “right” answer to Jake’s question, but that doesn’t stop us from forming a strong narrative in our minds that leads us to an answer we feel is correct.  I suspect that each of us applies particular context to such questions when forming a position.  For some people who work in organizations with highly diffuse / cloud-y IT, the concept of monitoring networks might not make any sense at all…  Which network would you monitor?  Monitoring the endpoint in this case is the only approach that makes any sense.  Other people point out that IoT devices are becoming more attractive APT targets and endpoint security tools do not (and likely never will) work on those devices, hence the network is the only place that makes sense to monitor.  Still others point out that the “right” answer is to get 100% coverage using whichever approach can accommodate that level of coverage.

I know that Jake intentionally framed this question in an unnatural way that yielded these results.  We can intuitively look at this situation and see that everyone is right, and that no organization would go all in on endpoint security or network security alone.  While that may be true, this example does highlight the varied thought processes each of us uses as we approach such questions, and that almost certainly influences how we approach questions of security investment prioritization – you know, the exercises many of us perform where we rank order risks, possibly using addition, multiplication, weighting, and really nice-looking heat maps, tweaking the numbers until they match our expected view of reality and hence our view of what controls we should be implementing where.

An intuitively “right” way to approach this is to consider whether each asset has the proper level of visibility – in some cases that may be through endpoint controls because the devices are not on a central network, and in others it might be network controls because we have IoT devices not supported by endpoint security solutions.  I don’t believe this is the right way to think about the problem: in my 20 years of working in and around infosec, the complaint has always been that we try to bolt on security rather than bake it in, yet I see us continuing to perpetuate it – possibly even embracing the notion of “bolt-on security” for a variety of reasons.  In my estimation, the objectively “right” solution is to take a more systems-oriented approach to designing our IT systems in the first place.  We can’t use network controls to monitor diffuse IT environments because there is no logical network location to monitor.  What happens when IoT devices are added to that environment?  Where does the network control go?

Clearly this is far outside the bounds of the two answers Jake’s survey permitted.  Though I will hammer on one more point.  Jake’s specific question was “…which one matters more for detecting APT intrusions?”  A number of comments pointed out that “it’s not a breach until the data gets out”, and therefore network detection is critical for the final determination.  Schrödinger’s Breach, I suppose.  What concerns me with this line of thought is the implication that the only harm a threat actor can exact on a victim is data theft.  The question posed wasn’t specific to a “data breach”, but rather an “APT intrusion”.  We have seen cases like Saudi Aramco, Sony, and the Dark Seoul attacks where the end game was destruction.  WannaCry and NotPetya likewise were not intending to exfiltrate data.  Under HIPAA and other data protection laws, data doesn’t have to be exfiltrated in order to be reportable (and potentially fine-able) as a data breach.  Plenty of other harms can befall an organization, such as impacting the availability of an application, physically damaging equipment, and so on.

To sum up, I think we have a lot of growing ahead of us as an industry, in terms of how we think about controls, risks, and terminology.

Random Thoughts From The O’Reilly Security Conference 2017

I had a chance to attend the O’Reilly Security Conference earlier this week.  I find that when I am at these conferences, I get into a mode of thinking that is more open and creative.  Here are some random thoughts I noted during the conference which I may write more about in the near future:

  • The somewhat unspoken theme of the conference – at least several of the keynotes – was on reducing the friction of security to the point where, hopefully, doing some given task the desired “secure” way is easier and/or faster than doing it some other way.  I really like that concept, but I think it likely requires talent and investment that a lot of companies don’t have available.  A great example was one of the presenters discussing how their company’s security team modified operating system libraries to implement a more streamlined user experience for logins.  Great in concept, but I suspect that idea doesn’t scale down very well to organizations that don’t have that kind of talent or ability to manage such customized code.
  • When I go to a security conference, I have, let’s say, 99 security problems.  By the end of the conference, I have 111 security problems.  By that I mean that security conference presentations are good at defining problems I previously didn’t know I had.
  • There is almost certainly a selection bias in the presentations that are picked by security conferences: talks are generally about problems that the presenters have solved, or mostly solved.  Those presenters, their problems, and their solutions exist in an ecosystem largely defined by their culture, skills, risk appetite, and so on.  I rarely get “actionable” information out of conference presentations.  For me, the most interesting part of security conferences is looking at the logic and creativity behind how the presenter got to their solution.  That feels like the important takeaway, and I wonder if conference presenters ought to play up their thought processes as much as their solutions.
  • Thinking about named vulnerabilities like KRACK, Shellshock, and Heartbleed, I’m reminded that we have a pretty immature threat prioritization problem, which has been made worse, in some instances anyhow, by effective vulnerability marketing programs.  With the recent spate of high profile worms (if three can be considered a spate), it seems likely that we should inject a “wormability” factor into the vulnerability assessment score (see the sketch after this list).  I am sure it’s already represented, at least in part, but it seems intuitive, at least to me, that not all CVSS 10.0 vulnerabilities are created equal – some are much more pressing than others.  ETERNALBLUE/ETERNALROMANCE/MS17-010, which enabled the WannaCry outbreak, is a good example.  That presumes we get enough information with the vulnerability disclosure to make such an assessment.  It’s also clear to me that we have “self-constructed vulnerabilities” in our environments that are wormable, but for which there is no patch.  NotPetya and Bad Rabbit seem like good examples.  I could have powered a small city with the energy spent on hand wringing when I mentioned there is no “patch” for those two issues.  As I’ve written on this site in the past, these techniques are commonly exploited by more focused attackers; however, there’s been some success at automating them, and I see no reason that trend won’t continue.  We have the CWE concept, but I don’t think that hits the mark for “self-constructed vulnerabilities”.  I think this is more like an “OWASP Top 10” for infrastructure.  Anyhow, I’m not aware of anything that uniformly identifies/measures/rates such “self-constructed vulnerabilities”.
  • I see a lot of focus on automation and orchestration creeping into infosec conferences, which I think is a good thing.  There was a presentation at this conference on InSpec and compliance as code.  I also recently read the book “Infrastructure as Code”, which is pretty enlightening and makes my mind spin with possibilities for having “IT as Code”, which would include things like “Infrastructure as Code”, “Security as Code”, “Compliance as Code”, “Resiliency/Redundancy/Recovery as Code”, and so on.  I wonder if we will get to the point where our IT is defined in a configuration file that specifies basic organizational parameters which are interpreted and orchestrated into a more or less fully automated, self-checking, self-monitoring, self-healing, and self-recovering infrastructure.  This seems inevitable.
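To make the “wormability” idea concrete, here is a toy prioritization sketch; the flags and the boost factor are my invention, not part of CVSS, but they capture the rough preconditions for a worm (network-reachable, no authentication, no user interaction, ubiquitous service):

```python
def worm_adjusted_priority(cvss, network_vector, no_auth, no_user_interaction, ubiquitous):
    """Toy priority score: boost the base CVSS when worm preconditions hold."""
    wormable = network_vector and no_auth and no_user_interaction
    return cvss * (1.5 if wormable and ubiquitous else 1.0)

# MS17-010 / ETERNALBLUE: remote, unauthenticated, no user interaction, SMB everywhere.
print(worm_adjusted_priority(8.1, True, True, True, True))     # 12.15 -> patch first
# A hypothetical CVSS 10.0 bug in a rare console that needs user interaction.
print(worm_adjusted_priority(10.0, True, True, False, False))  # 10.0 -> high, not wormable
```

The point isn’t the specific multiplier; it’s that a wormable 8.1 can rationally outrank a non-wormable 10.0 in the patch queue.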

That’s it.  Any thoughts on these?

Why Putting Tape Over Your Webcam Might Make Sense

I will admit that I roll my eyes, even if only on the inside sometimes, when I see people with tape or some other device covering the webcam on their laptop.  My self-righteous logic goes like this: most people I interact with are using their computers for business purposes, and those computers likely aren’t perched on a nightstand or bathroom counter in the evenings or early mornings.  If the webcam on my laptop were hijacked, the perpetrator would be exposed to hours upon hours of me making faces in reaction to emails and instant messages from co-workers.  Audio is a much, much larger threat to confidentiality, and I have yet to see anyone taking action against the built-in microphone on their laptop.  Maybe as humans we feel that someone secretly watching us is more of an invasion of privacy, but it doesn’t take a lot of thought to conclude that an attacker would obtain far more value from listening than from looking.  That is, unless the attacker is doing it for blackmail or out of some twisted, possibly perverted obsession with spying on people.

A few days back, The Verge posted a video on Twitter about covering webcams.

A casual listen to the video left me laughing it off: haha – tape on the webcam won’t really do a lot, but it may make you feel better.  I listened to it again, though, and caught something I missed the first time.  The narrator interviewed Tod Beardsley from Rapid7.  Kudos to Tod for giving what I thought was an amazingly insightful reason for covering a webcam.  In the video, I believe Tod called it “superstitious”; however, the point he was making is very important and accurate.

If we believe something about ourselves, we generally act accordingly.  Tod is explaining that if I put a piece of tape on my webcam, that tape serves as a constant reminder throughout my work day that I am a security-minded person.  One of the really interesting findings in behavioral psychology is that our mindset is often based on a perception we have of ourselves formed by what we have previously done.  I must be security minded – I put tape on my webcam, after all.  And that constant reminder will permeate the decisions I make that have security consequences, such as picking a better password than I otherwise would, or thinking twice before clicking on a link.  As technologists, that idea probably doesn’t sit well, because we expect that it wouldn’t work on “us”.  However, in the world of psychology, unlike the world of computers, things are not deterministic and are more about averages.  So yes, this phenomenon will not work every time, for everyone, or to the same extent, but on average it likely does have some beneficial effect, and therefore I am going to stop rolling my eyes when I see tape over webcams.

As I learn more about behavioral psychology, it’s clear that there is a lot of opportunity to explore potential benefits for making security improvements.  If you are interested in learning more, I recommend reading books by Dan Ariely, Daniel Kahneman, Richard Thaler, and Tom Gilovich.

*note: some of my Twitter friends pointed out that they tape their webcams to ensure they are not caught by surprise when joining WebEx-style meetings.  That makes sense.

Game Theoretic Impacts of NotPetya and Bad Rabbit

The lateral movement techniques used by NotPetya and Bad Rabbit are not new, particularly to those of us who have had to clean up the mess following breaches perpetrated by “sophisticated actors”.  Those techniques are, in fact, a pretty common feature of many targeted attacks.  Until recently, however, they have been carried out by a person, or team, sitting at a keyboard, meaning the damage of a single campaign was more or less contained to a single organization, usually with the intention of surreptitiously stealing data rather than wanton destruction.  Many such breaches likely go either unnoticed or unreported.

NotPetya and Bad Rabbit are changing the economics of these techniques.  What was the domain of “sophisticated actors” targeting a specific entity has now been largely automated in a manner that can target an arbitrary number of victims simultaneously.  The move to widespread system and data destruction rather than targeted exfiltration means that organizations generally can’t hide the fact that they’ve been compromised.

NotPetya was seeded into victim organizations through a tainted auto-update to relatively obscure tax software used by some Ukrainian organizations, and yet it had dramatic impacts around the world.  Bad Rabbit was seeded into victim organizations through compromised web servers pushing fake Flash updates.  While both NotPetya and Bad Rabbit are alleged to have come from the same actor, it’s not a far leap to expect copycats using all manner of entry techniques, like exploit kits, trojaned downloads, and USB drives in the parking lot, to deliver malware – RATs, garden variety ransomware, data stealers, and so on – that uses these automated lateral movement techniques to broadly infect victim systems.

As the apology letters of most breached organizations state, they take security very seriously.  Likely those letters really mean they take security seriously after the breach.  Most people in IT security can relate to this: it’s tough to get management to invest in mitigating a risk until after a loss is realized from that risk.  NotPetya and Bad Rabbit style attacks have the potential to change that dynamic, though.  These attacks are HIGHLY visible in the media; for once, the victims weren’t “missing a patch” that “caused” the breach; and the damages publicly reported by victims are significant.  The perception of “vulnerability” is also dramatically different in these instances: traditional threats generally impacted only a single employee’s system, and maybe the data that employee had access to*.  (*Remember, no one thinks the “sophisticated actor” is coming after them.)

This type of attack also highlights one of the challenges with relying on the “human firewall”.  Exceptional organizations are able to get their rate of falling for phishing attacks down from the 30-40% average range to the low single digits.  In an organization of any size, though, that means at least a few people are likely to fall for any given campaign.  If the attack is one that moves laterally, such as Bad Rabbit, the 98% of people who do not fall for the attack don’t change the outcome.  It only took one person to bring the house down.  This is likely true in many other types of targeted attacks on organizations, but that is a post for another day.
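The arithmetic behind “it only took one person” is unforgiving.  Even at a low single digit per-person failure rate, the probability that at least one employee falls for a given campaign climbs toward certainty as headcount grows; a quick sketch:

```python
# P(at least one victim) = 1 - (1 - p)^n
p = 0.02  # per-person phish failure rate for an exceptional organization
for n in (10, 50, 100, 500, 1000):
    print(f"{n:5d} employees -> {1 - (1 - p) ** n:.1%} chance of at least one victim")
```

At 100 employees the chance of at least one victim is already about 87%, and at 500 it is effectively 100% – which is all a laterally moving worm needs.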

The challenge, as always, is figuring out what to do.  Fortunately or unfortunately, depending on your perspective, robust architecture design and solid operational processes are an effective mitigant for the types of attacks we have seen thus far.  Security is hard, and remains so.  Possibly NotPetya and Bad Rabbit, and the inevitable next volleys that follow in their footsteps, will begin to raise interest in making the fundamental improvements necessary to avoid becoming another statistic in these attacks.