Speed of Patching Versus Breach Likelihood

I am a big fan of the Verizon DBIR.  I was just reading this interview with Mike Denning from Verizon on Deloitte’s website about this year’s report.  The whole article is worth reading, but I want to focus on one comment from Mr. Denning:

One of the biggest surprises was the finding that 99.9 percent of the exploited vulnerabilities had occurred more than a year after a patch, which quite possibly would have prevented them, had been published. Organizations are finding it difficult to maintain the latest patch releases. Additionally, the finding speaks to the challenges of endpoint security.

Today, coverage is more important than speed because, through scanning and other methods, attackers are able to find the weakest link in the chain and then quickly move laterally within the organization. …

This comment brought back some thoughts I had when I initially read the 99.9% statistic in the 2015 DBIR.  That number, while a bit surprising, fits the intuition most of us in the field have.  My concern, however, is that this may be interpreted as meaning the following:

“we can exclude ourselves from 99.9% of breaches by just ensuring we keep up with our patching.  After all, we should be able to meet the goal of applying patches no later than, say, 11 months after release.  Or 6 months.”

I see two problems with this thinking:

  1. Few organizations can apply EVERY patch to EVERY system.  Sometimes we consciously “exempt” systems from a patch for various business reasons, sometimes we simply don’t know about the systems or the patches.  If this is the case in your organization, and you get compromised through such a missing patch, you are part of the 99.9%.  You don’t get credit for patching 99.9%.  I wonder how many organizations in the 99.9% statistic thought they were reasonably up-to-date with patches?
  2. Outside of commodity/mass attacks, adversaries are intelligent.  If the adversary wants YOUR data specifically, he won’t slam his hands on the keyboard in exasperation because none of his year-plus-old exploit code works and then decide the job at McDonald’s is a better way to make a living.  He’ll probably just try newer exploits until he finds one that works.  Or maybe not.

My point is not to diminish the importance of patching – clearly it is very important.  My point is that, as with any single control, expecting it to deliver dramatic and sweeping improvements on its own is probably a fallacy.

Post Traumatic Vulnerability Disorder

I’ve talked pretty extensively on the Defensive Security Podcast about the differences between patch management and vulnerability management.  We’ve now had two notable situations within six months where a significant vulnerability left a portion of our infrastructure estate exposed to a serious threat.  And no patches.  At least for a while.

In both the Heartbleed and ShellShock cases, the vulnerabilities were disclosed suddenly, and exploit code was readily available, trivial to use, and nearly undetectable (prior to implementing strategies to detect it, at least).
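To make “trivial” concrete: the widely circulated self-test for the original ShellShock bug (CVE-2014-6271) checks whether bash keeps parsing past a function definition stored in an environment variable and executes the trailing command.  A minimal sketch of that check, assuming bash is on the PATH:

```shell
#!/bin/sh
# ShellShock (CVE-2014-6271) self-test: a vulnerable bash executes the
# "echo vulnerable" smuggled in after the function definition "() { :;}"
# stored in an ordinary environment variable.
out=$(env x='() { :;}; echo vulnerable' bash -c 'echo test' 2>/dev/null)
case "$out" in
  *vulnerable*) echo "bash is VULNERABLE to CVE-2014-6271" ;;
  *)            echo "bash appears patched for CVE-2014-6271" ;;
esac
```

A patched bash treats the environment variable as inert data and prints only the inner `test`; a vulnerable one prints `vulnerable` first.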

And in both cases we have been stuck wringing our hands waiting for patches for the systems and applications we use.  In the case of Heartbleed, we sometimes waited for weeks.  Or even months.

Circumstances like Heartbleed and ShellShock highlight the disparity between patch management and vulnerability management.  However, being aware of the difference doesn’t tell us what actions we can or should take.  I’ve been thinking about our options in this situation, and I see three broad options to choose from:

1. Accept the risk of continuing to operate without patches until patches are available

2. Disconnect or isolate systems that are vulnerable

3. Something in the middle

…and we may choose a combination of these depending on risks and circumstances.  #1 and #2 are self-explanatory and may be preferable in some situations.  I believe that #1 is somewhat perilous, because we may not fully understand the actual likelihood or impact.  However, organizations choose to take risks all the time.

#3 is where many people will want to be.  This is where it helps to have smart people who can quickly figure out how the vulnerability works and how your existing security infrastructure can be mobilized to mitigate the threat.

In the current case of ShellShock, some of the immediate options are to:

1. Implement mod_security rules to block the strings used in the attacks

2. Implement IPS or WAF rules to prevent shell command injection

3. Implement iptables rules to filter out connections containing the strings used in the attacks

4. Increase proactive monitoring, backups and exorcisms on vulnerable systems

…and so on.  So, there ARE indeed things that can be done while waiting for a patch.  But they are most often going to be highly specific to your environment and the defensive capabilities you have in place.
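As an illustration of options #1 and #3 above, rules along these lines circulated widely at disclosure time.  These are configuration sketches, not drop-in mitigations: the port, chain, rule ID, and exact match strings are assumptions you would need to adapt to your own environment.

```shell
# Illustrative only -- adapt chains, ports, and strings to your environment.
# The "() {" marker is the function-definition prefix that ShellShock
# payloads rely on.

# Option 3: iptables string match, dropping inbound HTTP traffic carrying
# the ShellShock marker (requires the xt_string match module):
iptables -A INPUT -p tcp --dport 80 \
  -m string --algo bm --string '() {' -j DROP

# Option 1: a ModSecurity (v2) rule blocking the same marker in request
# headers, where CGI scripts would otherwise pick it up (hypothetical id):
# SecRule REQUEST_HEADERS "\(\)\s*{" \
#   "phase:1,deny,status:403,id:1000001,log,\
#    msg:'Possible ShellShock (CVE-2014-6271) attempt'"
```

Note the trade-off: blunt string matching will also drop any legitimate traffic that happens to contain `() {`, which is one reason these stopgaps are environment-specific.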

Heartbleed and ShellShock should show us that we need the talent and capabilities to respond intelligently and effectively to emerging threats like these, without resorting to hand-wringing while waiting for patches.

Think about what you can do better next time.  Learn from these experiences.  The next one is probably not that far away.

H/T to my partner in crime, Andy Kalat (@lerg) for the title.