Infosec #FakeNews

In the infosec industry, much of the thought leadership, news, and analysis comes from organizations with something to sell.  I do not believe these groups generally act with an intent to deceive, but we need to be on guard for data that can pollute and pervert our understanding of reality.  Two recent infosec-related posts caught my attention that, in my view, warrant discussion.  First is a story about a study that indicates data breaches affect stock prices in the long run.

Here is the story: https://www.zdnet.com/article/data-breaches-affect-stock-performance-in-the-long-run-study-finds/

Here is the study: https://www.comparitech.com/blog/information-security/data-breach-share-price-2018/

Most of us who work in the security world struggle to justify new and continued investment in IT security initiatives, and the prospect of a direct link between data breaches and stock price declines is a wonderful thing to include in our PowerPoint presentations.  As humans, we are tuned to look for information that confirms our view of the world, and the results of this study seem intuitively correct to most of us.  We WANT this study to be true.

But as with so many things in this world, it’s really not true.  To their credit, the study’s authors include a section on its limitations, but that doesn’t really detract from the headline, does it?  So, I propose an alternate headline: “Data Breach proves to be a boon for LNKD shareholders!”.

In addition to the issues identified in the “limitations” section, there are other confounding factors to consider:

  1. They all had data breaches.  I know that sounds dull, but consider running a weight-loss study that only includes people leaving a local gym in the evening.  Do companies that experience data breaches have some other attributes in common, such as weak leadership or a culture of accepting too many risks?  Might these factors also manifest themselves in bad decisions in other aspects of the business that could result in a declining stock price?  We don’t actually know, because the only way to know for sure is through experiments that would be highly unethical, even if immensely fun.
  2. Averages don’t work very well for small data sets.  Consider the following situation (see the quick sketch just after this list):
    • Companies A, B, C, and D all suffer a data breach on the same day
    • Companies A, B, and C each see their stock rise by 2% the week after their respective breaches
    • Company D sees its stock decline by 20% the week after its breach
    • The average change for this group is a 3.5% decline the week after their breaches.  But that doesn’t tell the whole story, does it?
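To make the arithmetic concrete, here’s a quick sketch in Python; the numbers are the hypothetical ones from the list above:

```python
# Hypothetical one-week stock returns following a breach, per the example above.
returns = {"A": 0.02, "B": 0.02, "C": 0.02, "D": -0.20}

# The average works out to a 3.5% decline...
mean_return = sum(returns.values()) / len(returns)
print(f"Average return: {mean_return:.1%}")  # -3.5%

# ...yet three of the four stocks actually rose.
winners = [name for name, r in returns.items() if r > 0]
print(f"Companies whose stock rose: {winners}")  # ['A', 'B', 'C']
```

With only four data points, one outlier dominates the mean; the median here is +2%, which tells the opposite story.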

I’m not saying that breaches don’t cause stock prices to decline.  I am saying that I’ve not yet seen good evidence for that, and that is because we can’t fork the universe and run experiments on alternate realities and compare the results.  If we could, this would not be among the first experiments I’d propose.

Like a good Ponemon study, this study is great fodder for executive meetings, but be aware that you are not on firm ground if you get challenged.  As an anecdote, I used to be a pretty active investor, and while I did not get the Ferrari, I did learn a few things:

  • I am apparently supposed to buy low and sell high, not the other way around
  • Breaches, from a pure investor standpoint, are generally viewed as a one-time charge, and (generally) do not change the underlying fundamentals of the company.  When investing in a company, it’s the fundamentals that matter, such as: are sales going up and the cost of sales going down?

 

Next is a story about a study that indicates 90% of retailers “fail PCI”.

Here is the story: https://www.infosecurity-magazine.com/news/over-90-of-us-retailers-fail-pci/

Here is the study: https://explore.securityscorecard.com/rs/797-BFK-857/images/2018-Retail-Cybersecurity-Report.pdf

Unfortunately, the authors of this report don’t include a description of its limitations, but I think we can infer a lot about them based on the type of testing this organization performs to gather the data.  The company gathers and collates open source intelligence, seemingly similar to what other players like BitSight are doing.  What the report actually supports, I would assert, is that retailers are among the worst industries at patch management, based on the data this organization gathered.  Without knowing the details of each company in the study, we can’t know whether the environment analyzed was part of the PCI DSS Cardholder Data Environment (CDE) for a given retailer.  Asserting that an organization that seemingly must comply with PCI DSS is violating its obligations, based on a review of the organization’s “digital footprint”, is not appropriate.  I am not defending these organizations’ lack of patching; I am pointing out that patching all of an organization’s systems is not a PCI DSS requirement, though maybe it should be.

The downside of this sort of report is that it likely “normalizes” non-compliance with PCI DSS.  If I’m spending a tremendous amount of time, energy, and money to keep my environment in the right shape for PCI, but then see that 90% of others in my sector are not doing this, how motivated will I or my management team be?  The “right” thing to do clearly doesn’t change, but this study changes our perception of what is going on in the world.

I had a math teacher in high school who told us to keep an open mind, but not so open that people throw their trash in.  Remember to maintain a healthy level of skepticism when reading infosec articles, reports, and studies… And yes, even blog posts like this one.

A Compelling Case For DevOps?

Since I first learned of its existence, I have had a mental image of devops that looks, to me, like a few classes of sugar-laced kindergartners running around the playground with scissors, certain that someone would end up hurt pretty badly.  While I certainly think there is an opportunity for bad behavior, like using devops purely as cover to reduce costs, resulting in important steps being skipped, the recent spate of vulnerabilities in Apache Struts has me wondering if NOT going the devops direction is the riskier path.

Traditionally, business applications that use components like Apache Struts tend to be pretty important to operations, and therefore changes are very measured – often allowing only a few change windows per year.  Making so few changes per year causes a few problems:

  1. When a critical vulnerability is announced, like we have with Struts, the next change window may be a long way off; performing an interim change is politically difficult, so waiting becomes the path of least resistance
  2. Application teams make changes to the application environment so infrequently that testing plans may not be well refined, making a delay until the next change window the most appealing plan

In our current world, we need the agility and confidence to rapidly address critical fixes, like we continue to see with Struts, despite the complexity of the environments that Struts tends to be part of.  One small step in that direction is a simple dependency gate in the build pipeline, sketched below.
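As an illustration only – the property name, file path, and version threshold below are my own assumptions, not a prescription – here is a minimal sketch of a CI gate that fails a build when a pinned Struts version is older than a known-fixed release:

```python
# Minimal sketch of a CI gate: fail the build if the pinned Struts version
# is older than the release that fixes a critical CVE. The <struts2.version>
# property and the threshold are illustrative assumptions.
import re
import sys

MIN_FIXED = (2, 5, 17)  # e.g., a release line that fixed a critical 2018 Struts RCE

def parse_version(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

with open("pom.xml") as f:
    pom = f.read()

match = re.search(r"<struts2\.version>([\d.]+)</struts2\.version>", pom)
if match and parse_version(match.group(1)) < MIN_FIXED:
    minimum = ".".join(map(str, MIN_FIXED))
    sys.exit(f"Struts {match.group(1)} is below {minimum}; update before shipping.")
```

The point isn’t this particular script; it’s that a team deploying continuously already has the testing and release muscle to act on a check like this the day an advisory lands, rather than waiting months for a change window.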

No True Infosec…

Recent news coverage about West Virginia’s decision to use a smartphone blockchain voting system (plot twist: calling it “blockchain” might be a stretch) is causing a stir on social media amongst the infosec community.  This XKCD cartoon is a popular one: 

This spawned a lot of thoughtful discussions and debates, and a fair number of ad hominem comments, per usual.  This was a particularly interesting thread (multiple levels deep):

And an equally interesting reply from Rob Graham:

There’s a lot covered in the layers of those threads, and they’re worth a read.  This got me thinking about how cyber security fits into the world.  It seems that a lot of the struggle comes from attempting to find analogies for cyber security in other aspects of the world, like aviation, building codes, and war. Certainly, aspects of each of these apply, but none is a perfect fit. I previously wrote a little bit about the problem of comparing cyber security to kinetic concepts.

Designing software and IT systems is similar to, but not the same as, designing physical structures, and it can likely benefit from the concept of common standards. The cyber domain can likely learn from the continual improvements seen in the aviation industry, where failures are scrutinized and industry-wide fixes are implemented, whether for something tangible like a defective or worn-out component, or something intangible like the command structure of personnel in a cockpit.

So much of this seems immensely sensible.  But there are sharp edges on the concept.  As pointed out in the twitter threads above, weather does not evolve to defeat improvements made to aircraft, in the way that adversaries do in the cyber domain.  The same is true for many things in the kinetic world: buildings, elevators, fire suppression systems, and so on.  All are critical, and all need to follow certain standards to help reduce the likelihood that someone will be hurt, though these standards often vary somewhat by jurisdiction.  In general, most of these things are not designed to survive an intelligent adversary intent on subverting the system.  That’s not completely true, though: certain buildings, such as skyscrapers, are often designed to withstand a certain level of malicious intent.  But only to a point, and ready examples should come to mind where this breaks down.

I’ve been thinking about a lot of the ways that threat actors affect physical systems (assuming no electronic/remote access component) and I think it looks approximately like this:

Where the level of black indicates the strength of the linkage between the motivation and the proximity.  It’s not perfect, and I’m sure if I think about it for a bit, I’ll come up with counterexamples.

With regard to cyber issues, the “can be anywhere” column turns black at least for malicious, terroristic, and war.  We simply don’t design our elevators, airplanes, or cars with the thought that anyone anywhere in the world is a potential threat actor.  Certainly that’s changing as we IoT-ify everything and put it on the Internet.

So, all this is to say that we spend too much time arguing about which analogies are appropriate.  In two hundred years, I assume that someone will analogize the complicated societal problem of the day to primitive cyber security, and someone else will complain about how cyber security is close to, but not the same as, that modern problem.

It seems intuitive that we *should* look across many different fields, inside and outside IT, for lessons to learn, including things like:

  • Epidemiology
  • Aviation
  • Civil engineering
  • Architecture
  • War fighting
  • Chemistry
  • Sociology
  • Psychology
  • Law enforcement
  • Fire fighting

…but it’s naive to expect that we can apply what worked in these areas to the cyber security problem without significant adaptation.  Rather than bicker over whether software development needs a set of building codes, or whether we should apply aviation disaster response to cyber security incidents, in my estimation we ought to be selecting the relevant parts of many different disciplines to create a construct that makes sense in the context of cyber security and all that it entails.

We have to accept that there *will* be electronic voting.  We have to accept that our refrigerator, toaster, toilet, and gym shoes *will* be connected to the Internet some day, if not already.  We don’t have to like these things, and they may scare the hell out of us.  But as the saying goes, progress happens one funeral at a time – some day, I will be gone.  My kids’ kids will be voting from their smart watches.  Technology advances are an unrelenting, irreversible tide.  Life goes on.  There are big problems that have to be dealt with in the area of technology.  We need new ways to reduce the macro risk, but must be cognizant that risk will never be zero.

I watch people I respect in the industry lying down on the tracks in front of the e-voting train, attempting to save us from the inevitable horrors to come.  Honestly, this has echoes of the IT security teams of old (and maybe of today) saying “no” to business requests to do some particular risky thing.  There’s a reason those teams said “no”: what the business was attempting to do was likely dangerous and held hidden consequences that weren’t apparent to the requester.  But over time, those teams were sidelined to the point where even people in the industry make jokes about the unhelpful “department of no” that IT security used to be.  The world moved on, and the department of no was marginalized and (mostly) run out of town.  I don’t think we should expect a different outcome here.

While we are busy grousing about the validity of an XKCD cartoon, or whether building codes or aviation is the more representative model, companies like Voatz are off selling their wares to the government.

 

I Was Wrong About Worms Making A Comeback

My friend and co-host @lerg pointed me to this post on Errata Security, heralding the one-year anniversary of NotPetya and, in particular, pointing out that reports about NotPetya still mischaracterize the cause as a lack of patching.  That blog post is well worth a read, and I don’t have much to add beyond it.

In the wake of WannaCry, and later NotPetya, I made the call a number of times on DefSec that we were likely seeing the start of a comeback for network-based worms.  Well, it’s been a year, and there haven’t been any more notable worms.  That is obviously good, but I think it’s also bad in a way.  I strongly believe one of the reasons NotPetya and WannaCry were so devastating is that we had not seen worms in so long; network-propagating worms hadn’t been firmly in the threat models of many people and organizations for some time.  That led to some complacency, both on patching and on ID management/trust relationships, as the post above describes.  My fear is that, because worms are fading out of our consciousness again, the next batch of worms in the coming months or years will again be devastating – even more so, as we become more and more reliant on pervasive IT connected by crappily designed networks.

Hardware Messes As An Opportunity

I’ve been in IT for a long time.  I’ve designed and built datacenters, and I’ve created network operations teams.  Not so long ago, the thought of moving my organization’s sensitive data and servers to some 3rd party was a laughable joke to me.  But times have changed, and I hope that I’ve changed some, too.

In the past year, we have seen a spate of significant hardware vulnerabilities, from embedded debug ports, to Meltdown/Spectre, to vulnerable lights-out management interfaces, and now the news about TLBleed.  I suspect that each new hardware vulnerability identified creates an incentive for other smart people to start looking for more.  And it appears that there is no near-term end to the hardware bugs waiting to be found.

In the aftermath of Meltdown/Spectre, I wrote a bit about the benefits of cloud, specifically that most cloud providers had already implemented mitigations by the time news of the vulnerabilities became public.  There seem to be many benefits to moving infrastructure to the cloud, and TLBleed looks like another example: if necessary, we can transfer the capital costs of procuring replacement servers to our providers.  (Note: I am not convinced TLBleed is an issue that rises to that level of importance.)  We do, however, need to ensure that the provider has taken the appropriate steps to address the problems.

 

A Modest Proposal To Reduce Password Reuse

Many of us are well aware of the ongoing problem of password reuse between online services.  Billions of account records, including clear text and poorly hashed passwords, are publicly accessible for use in attacks on other services.  Verizon’s 2017 DBIR noted that operators of websites that use standard email address and password authentication need to be wary of the impact of breaches at other sites on their own site, due to the extensive problem of password reuse.  The authors of the DBIR, and indeed many in the security industry, including me, recommend combating the problem with two factor authentication.  That is certainly good advice, but it’s not practical for every site and every type of visitor.  As an alternative, I propose that websites begin offering randomized passwords to those creating accounts.  The site can offer the visitor an opportunity to easily change that password to something of his or her choosing.  Clearly this won’t end password reuse outright, but it will likely make a substantial dent in it without much, if any, of the additional cost or complexity associated with two factor authentication.  An advantage of this approach is that it allows “responsible” sites to minimize the likelihood of accounts on their own site being breached by attackers using credentials harvested from other sites.
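Here’s a minimal sketch of the idea, using Python’s standard secrets module; the length and alphabet choices are just illustrative assumptions:

```python
import secrets
import string

def default_account_password(length: int = 16) -> str:
    """Generate a random starter password for a newly created account.

    secrets (rather than random) is used because it draws from a
    cryptographically strong source, which matters for passwords.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# At signup: assign this password, display it once to the visitor, and
# offer an easy "change password" option for those who insist on their own.
print(default_account_password())
```

Because the starter password is unique to the site, credentials harvested elsewhere won’t open accounts here, and credentials stolen here won’t open accounts elsewhere.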

 

What are your thoughts?

Regulation, Open Source, Diversity and Immunity

When the Federal Financial Institutions Examination Council released its Cybersecurity Assessment Tool in 2016, I couldn’t quite understand the intent behind open source software being called out as one of the inherent risks.

Recently, I was thinking about factors that likely impact the macro landscape of cyber insurance risk.  By that I mean how cyber insurers would go about measuring the likelihood of a catastrophic scenario that harmed most or all of their insured clients at the same time.  Such a thing is not unreasonable to imagine, given the homogeneous nature of IT environments.  The pervasive use of open source software, both as a component in commercial and other open source products and directly by organizations, expands the potential impact of a vulnerability in an open source component, as we saw with Heartbleed, Shellshock, and others.  It’s conceivable that all layers of protection in a “defense in depth” strategy contain the same critical vulnerability because they all contain the same vulnerable open source component.
In a purely proprietary software ecosystem, it’s much less likely that software and products from different vendors will all contain the same components, as each vendor writes its own implementation.  This creates more diversity in the ecosystem, making a single exploit that impacts many targets at once far less likely.  I don’t mean to imply that proprietary is better, but it’s hard to work around this particular aspect of risk given the state of the IT ecosystem.
I don’t know if this is why the FFIEC called out open source as an inherent risk.  I am hopeful their reasoning is similar to this, rather than some assumption that open source software has more vulnerabilities than proprietary software.

Asymptotic Vulnerability Remediation

I was just reading this story indicating that there are still close to 200,000 web sites on the Internet that are vulnerable to Heartbleed, and recalled the persistent stories of decade-old malware still turning up in honeypot logs on the SANS Internet Storm Center podcast.  It seems that vulnerability remediation must follow an asymptotic decay over time (a toy model of which is sketched below).  This has interesting implications when it comes to things like vulnerable systems being recruited into botnets and the like: there is no real need to innovate if you can just be the Pied Piper to the many long tails of old vulnerabilities.
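To illustrate what I mean by asymptotic decay, here is a toy model – every number below is invented for illustration, not fit to any real data – in which the vulnerable population decays toward a stubborn floor of hosts that will simply never be patched:

```python
import math

initial_vulnerable = 500_000  # hosts vulnerable at disclosure (invented)
never_patched = 150_000       # abandoned hosts that will never be fixed (invented)
half_life_days = 90           # assumed remediation half-life (invented)

def still_vulnerable(days: float) -> float:
    """Exponential decay toward a floor of never-patched hosts."""
    decay = math.exp(-math.log(2) * days / half_life_days)
    return never_patched + (initial_vulnerable - never_patched) * decay

for years in (0, 1, 2, 5):
    count = still_vulnerable(365 * years)
    print(f"Year {years}: ~{count:,.0f} hosts still vulnerable")
```

The curve drops quickly at first and then flattens; years later, that floor is what keeps turning up in honeypot logs.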

Also interesting to note is that 75,000 of the vulnerable devices are on AWS.  I wonder if providers will, at some point, begin taking action against wayward hosting customers who are potentially putting both their platform and reputation at risk.

I’m also left wondering what the story is behind these 200,000 devices: did the startup go belly up? did the site owner die? is it some crappy web interface on an embedded device that will never get an update again?

#patchyourshit

What Does It Take To Secure PHI?

I was reading an article earlier today called “Why Hackers Attack Healthcare Data, and How to Protect It”, and I realized that this may well be the one-thousandth such story I’ve read on how to protect PHI.  I also realized that I can’t recall any of the posts I’ve read being particularly helpful: most contain a few basic security recommendations, usually aligned with the security offerings of the author’s employer.  It’s not that the authors of these posts, such as the one I linked to above, are wrong, but if we think of defending PHI as a metaphorical house, these authors are describing the view they see when looking through one particular window of the house.  I am sure this is driven by the need for security companies to publish think pieces to help establish credibility with clients.  I’m not sure how well that works in practice, but it leaves the rest of us swimming in a rising tide of fluffy advice posts proclaiming to have the simple answer to your PHI protection woes.

I’m guessing you have figured out by now that this is bunk.  Securing PHI is hard and there isn’t a short list of things to do to protect PHI.  First off, you have to follow the law, which prescribes a healthy number of mandatory, as well as some addressable, security controls.  But we all know that compliance isn’t security, right?  If following HIPAA were sufficient to prevent leaking PHI, then we probably wouldn’t need all those thought-leadership posts, would we?

One of the requirements in HIPAA is to perform risk assessments.  The Department of Health and Human Services has a page dedicated to HIPAA risk analysis.  I suspect this is where a lot of organizations go wrong, and probably the thing that all the aforementioned authors are trying to influence in some small way.

Most of the posts I read talk about the epidemic of PHI theft, and PHI being sold in the underground market, and then focus on some countermeasures to prevent PHI from being hacked.  But let’s take a step back for a minute and think about the situation here.

HIPAA is a somewhat special case in the world of security controls: its requirements are pretty prescriptive and apply uniformly.  But we know that companies continue to leak PHI.  We keep reading about these incidents in the news and reading blog posts about how to ensure our firm’s PHI doesn’t leak.  We should be thinking about why these incidents are happening, to help us figure out where we should be applying focus, particularly in the area of the required risk assessments.

HHS has a great tool to help us out with this, lovingly referred to as the “wall of shame”.  This site contains a downloadable database of all known PHI breaches involving more than 500 records, and there is a legal requirement to report any such breach, so while there are undoubtedly yet-to-be-discovered breaches, the 1800+ entries give us a lot of data to work with.

Looking through the data, it quickly becomes apparent that hacking isn’t the most significant avenue of loss (a rough sketch of slicing the data follows below).  Over half of the incidents arise from lost or stolen devices, or paper/film documents.  This should cause us to consider whether we encrypt all the devices that PHI can be copied onto: server drives, desktop drives, laptop drives, USB drives, backup drives, and so on.  Encryption is an addressable control in the HIPAA regulations, and one that many firms seemingly decide to dance around.  How do I know this?  It’s right there in the breach data.  There are tools, though expensive and onerous, that can help ensure data is encrypted wherever it goes.
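For those who want to check my math, here is a rough sketch of slicing the downloaded data with pandas.  I’m assuming a CSV export from the HHS portal; the column name “Type of Breach” matches the export as I recall it, but verify it against your own copy:

```python
import pandas as pd

# Assumes you've downloaded the breach archive from the HHS portal as CSV.
df = pd.read_csv("breach_report.csv")

# Tally incidents by breach type (Theft, Loss, Hacking/IT Incident, etc.)
print(df["Type of Breach"].value_counts())

# What share of incidents involved lost or stolen assets?
lost_or_stolen = df["Type of Breach"].str.contains("Theft|Loss", na=False)
print(f"Lost/stolen share of incidents: {lost_or_stolen.mean():.0%}")
```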

The next most common loss vector is unauthorized access, which includes misdirected email, physical mail, leaving computers logged in, granting excessive permissions, and so on.  No hacking here*, just mistakes and some poor operational practices.  Notably, at least 100 incidents involved email, presumably misdirected email.  There are many subtle and common failure modes that can lead to this, some as basic as email address auto-completion.  There likely is not a single best method to handle it – anything from an email DLP system quarantining detected PHI transmissions for secondary review, to disabling email address auto-complete, may be appropriate, based on the operations of the organization (a minimal sketch of the former follows).  This is an incredibly easy way to make a big mistake, and it deserves some air time in your risk assessments.
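As a sketch of the DLP idea – the patterns below are toy examples, nowhere near what a real deployment needs – an outbound mail hook could hold messages that look like they contain PHI for a secondary review:

```python
import re

# Toy detection patterns; real PHI detection is far more involved.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
    re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.I),  # medical-record-number-like
]

def hold_for_review(message_body: str) -> bool:
    """Return True if the outbound message should be quarantined."""
    return any(p.search(message_body) for p in PHI_PATTERNS)

print(hold_for_review("Attached: patient MRN 00123456"))  # True -> quarantine
```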

The above loss types make up roughly 1500 of the 1800 reported breaches.

Now, we get into hacking.  HHS’ data doesn’t have a great amount of detail, but “network server” accounts for 170 incidents, which likely make up the majority of the situations we read about in the news.  There are 42 incidents each involving email and PCs.  Since there isn’t a lot of detail, we don’t really know what happened, but we can infer that most PC-related PHI leaks were from malware of some form, and most network server incidents were from some form of actual “hacking”.  The Anthem incident, for example, was categorized as hacking on a network server, though the CHS breach was categorized as “theft”.

Dealing with the hacking category falls squarely into the “work is hard” bucket, but we don’t need new frameworks or new blog posts to help us figure out how to solve it.  There’s a great document that already does this, which I am sure you are already familiar with: the CIS Top 20 Critical Security Controls.

But which of the controls are really important?  They all are.  In order to defend our systems, we need to know what systems we have that contain PHI.  We need to understand what applications are running, and prevent unauthorized code from running on our devices storing or accessing PHI.  We need to make sure people accessing systems and data are who they say they are.  We need to make sure our applications are appropriately secured, our employees are trained, access is limited properly, and all of this is tested for weaknesses periodically.  It’s the cost of doing business and keeping our name off the wall of shame.

* Well, there appears to be a small number of miscategorized records in the “theft” category, including CHS, and a few others involving malware on servers.

Get On It

Days like today are a harsh reminder that we have limited time to accomplish what we intend to accomplish in our lives.  We have limited time with our friends and relatives.

Make that time count.  It’s easy to get into the mode of drifting through life, but before we know it, the kids are grown, or our parents are gone, or a friend has passed away, or we just don’t have the energy to write that book.

Get on it.  Go make the world a better place.