Since I first learned of its existence, I had a mental image of devops that looked to me like a few classes of sugar-laced kindergartners running around the playground with scissors, certain that someone would end up badly hurt. While I certainly think there is an opportunity for bad behavior, like using devops purely as a cover to reduce costs, resulting in important steps being skipped, the recent spate of vulnerabilities in Apache Struts has me wondering if NOT going the devops direction is the riskier path.
Traditionally, business applications that use components like Apache Struts have tended to be pretty important to operations, and therefore changes are very measured – often allowing only a few change windows per year. Making so few changes per year causes a few problems:
When a critical vulnerability is announced, as we have with Struts, the next change window may be a long way off; performing an interim change is politically difficult, and waiting becomes the path of least resistance
Application teams make changes to the application environment so infrequently that testing plans may not be well refined, making a delay until the next change window the most appealing plan
In our current world, we need the agility and confidence to rapidly address critical fixes, like we continue to see with Struts, despite the complexity of environments that Struts tends to be part of.
This spawned a lot of thoughtful discussions and debates, and a fair number of ad hominem comments, per usual. This was a particularly interesting thread (multiple levels deep):
This is wildly disingenuous, I speak as a flight instructor and major IT incident investigator. Modern software authors have the professional discipline of a cute puppy in comparison to aviation practitioners. https://t.co/6GzCqLNpcl
There’s a lot covered in the layers of those threads, and they’re worth a read. This got me to thinking about how cyber security fits into the world. It seems that a lot of the struggle comes from attempting to find analogies for cyber security in other aspects of the world, like aviation, building codes, and war. Certainly, aspects of each of these apply, but none are a perfect fit. I previously wrote a little bit about the problem of comparing cyber security to kinetic concepts.
Designing software and IT systems is similar to, but not the same as, designing physical structures, and can likely benefit from the concept of common standards. The cyber domain can likely learn from the continual improvements seen in the aviation industry, where failures are scrutinized and industry-wide fixes are implemented, whether for something tangible like a defective or worn-out component, or intangible like the command structure of personnel in a cockpit.
So much of this seems immensely sensible. But there are sharp edges on the concept. As pointed out in the twitter threads above, weather does not evolve to defeat improvements made to aircraft, in the way that adversaries do in the cyber domain. The same is true for many things in the kinetic world: buildings, elevators, fire suppression systems, and so on. All are critical, and all need to follow certain standards to help reduce the likelihood someone will be hurt, though these standards often vary somewhat by jurisdiction. In general, most of these things are not designed to survive an intelligent adversary intent on subverting the system. That’s not completely true, though: we know that certain structures, such as skyscrapers, are often built to codes that anticipate a certain level of malicious intent. But only to a point, and ready examples should come to mind where this breaks down.
I’ve been thinking about a lot of the ways that threat actors affect physical systems (assuming no electronic/remote access component) and I think it looks approximately like this:
Where the level of black indicates linkage between the motivation and the proximity. It’s not perfect, and I’m sure if I think about it for a bit, I’ll come up with contradictory examples.
With regard to cyber issues, the “can be anywhere” column turns black at least for malicious, terroristic, and war. We simply don’t design our elevators, airplanes, or cars with the thought that anyone anywhere in the world is a potential threat actor. Certainly that’s changing as we IoT-ify everything and put it on the Internet.
So, all this to say we spend too much time arguing which analogies are appropriate. In two hundred years, I assume that someone will analogize the complicated societal problem of the day with that of primitive cyber security, and someone else will complain about how cyber security is close, but not the same as that modern problem.
It seems intuitive that we *should* look across many different fields, inside and outside IT, for lessons to learn, including things like:
…but it’s naive to expect that we can apply what worked in these areas to the cyber security problem without significant adaptation. Rather than bicker whether or not software development needs a set of building codes, or that we should apply the aviation disaster response to cyber security incidents, in my estimation, we ought to be selecting the relevant parts of many different disciplines to create a construct that makes sense in the context of cyber security and all that it entails.
We have to accept that there *will* be electronic voting. We have to accept that our refrigerator, toaster, toilet, and gym shoes *will* be connected to the Internet some day, if not already. We don’t have to like these things, and they may scare the hell out of us. But as the saying goes, progress happens one funeral at a time – some day, I will be gone. My kids’ kids will be voting from their smart watches. Technology advances are an unrelenting, irreversible tide. Life goes on. There are big problems that have to be dealt with in the area of technology. We need new ways to reduce the macro risk, but must be cognizant that risk will never be zero.
I watch people I respect in the industry laying down on the tracks in front of the e-voting train, attempting to save us from the inevitable horrors to come. Honestly, this has echoes of the IT security teams of old (and maybe of today) saying “no” to business requests to do some particular risky thing. There’s a reason those teams said “no”: what the business was attempting to do was likely dangerous and held hidden consequences that weren’t apparent to the requester. But over time, those teams were marginalized to the point where even people in the industry make jokes about the unhelpful “department of no” that IT security used to be. The world moved on, and the department of no was marginalized and (mostly) run out of town. I don’t think we should expect a different outcome here.
While we are busy grousing about the validity of an XKCD cartoon, or whether building codes or aviation is the more representative model, companies like Voatz are off selling their wares to the government.
My friend and co-host @lerg pointed me to this post on Errata Security, heralding the one year anniversary of NotPetya, and in particular, pointing out that reports about NotPetya still mischaracterize the cause as lack of patching. That blog post is well worth a read and I don’t have much to add beyond it.
In the wake of WannaCry, and later NotPetya, I made the call a number of times on DefSec that we were likely seeing the start of a trend that would see network-based worms making a comeback. Well, it’s been a year, and there haven’t been any more notable worms. That is obviously good, but I think it’s also bad in a way. I strongly believe one of the reasons NotPetya and WannaCry were so devastating is that we had not seen worms in so long, and so network propagating worms really haven’t been firmly in the threat models of many people/organizations for some time. That led to some complacency, both on patches and also on ID management/trust relationships, as the post above describes. My fear is that, because worms are fading out of our consciousness again, the next batch of worms in the coming months or years will again be devastating, but even more so, as we become more and more reliant on pervasive IT connected by crappily designed networks.
I’ve been in IT for a long time. I’ve designed and built datacenters and I’ve created network operations teams. Not so long ago, the thought of moving my organization’s sensitive data and servers to some 3rd party was a laughable joke to me. But times have changed, and I hope that I’ve changed some, too.
In the past year, we have seen a spate of significant hardware vulnerabilities, from embedded debug ports, to Meltdown/Spectre, to vulnerable lights-out management interfaces, and now the news about TLBleed. I suspect that each new hardware vulnerability identified creates incentive for other smart people to start looking for more. And it appears that there is no near-term end of hardware bugs to find.
In the aftermath of Meltdown/Spectre, I wrote a bit about the benefits of cloud, specifically that most cloud providers had already implemented mitigations by the time news of the vulnerabilities became public. There seem to be many benefits to moving infrastructure to the cloud, and TLBleed looks like another example: we can transfer the capital costs of procuring replacement servers to our providers, if necessary. (note: I am not convinced TLBleed is an issue that rises to that level of importance) We do, however, need to ensure that the provider has taken the appropriate steps to address the problems.
Many of us are well aware of the ongoing problem of password reuse between online services. Billions of account records, including clear text and poorly hashed passwords, are publicly accessible for use in attacks on other services. Verizon’s 2017 DBIR noted that operators of websites that use standard email address and password authentication need to be wary of the impact of other sites being breached on their own site due to the extensive problem of password reuse. The authors of the DBIR, and indeed many in the security industry including me, recommend combating the problem with two-factor authentication. That is certainly good advice, but it’s not practical for every site and every type of visitor. As an alternative, I propose that websites begin offering randomized passwords to those creating accounts. The site can offer the visitor an opportunity to easily change that password to something of his or her choosing. Clearly this won’t end password reuse outright, but it will likely make a substantial dent in it without much, if any, of the additional cost or complexity associated with two-factor authentication. An advantage of this approach is that it allows “responsible” sites to minimize the likelihood of accounts on their own site being breached by attackers using credentials harvested from other sites.
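A minimal sketch of what generating such a randomized initial password might look like, using Python’s standard-library `secrets` module for cryptographic randomness (the length and alphabet here are illustrative choices, not a standard):

```python
import secrets
import string

def generate_initial_password(length: int = 16) -> str:
    """Generate a random initial password for a new account.

    Uses the stdlib `secrets` module, which draws from the OS's
    cryptographically secure random source.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_initial_password())
```

The site would display this once at account creation and store only a hash of it, exactly as with a user-chosen password.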
When the Federal Financial Institutions Examination Council released its Cybersecurity Assessment Tool in 2016, I couldn’t quite understand the intent behind open source software being called out as one of the inherent risks.
Recently, I was thinking about factors that likely impact the macro landscape of cyber insurance risk. By that I mean how cyber insurers would go about measuring the likelihood of a catastrophic scenario that harmed most or all of their insured clients at the same time. Such a thing is not unreasonable to imagine, given the homogeneous nature of IT environments. The pervasive use of open source software, both as a component in commercial and other open source products and used directly by organizations, expands the potential impact of a vulnerability in an open source component, as we saw with Heartbleed, Shellshock and others. It’s conceivable that all layers of protection in a “defense in depth” strategy contain the same critical vulnerability because they all contain the same vulnerable open source component.
In a purely proprietary software ecosystem, it’s much less likely that software and products from different vendors will all contain the same components, as each vendor writes its own implementation. This creates more diversity in the ecosystem, making a single exploit that impacts many products at once far less likely. I don’t mean to imply that proprietary is better, but it’s hard to work around this particular aspect of risk given the state of the IT ecosystem.
I don’t know if this is why the FFIEC called out open source as an inherent risk. I am hopeful their reasoning is similar to this, rather than some assumption that open source software has more vulnerabilities than proprietary software.
I was just reading this story indicating that there are still close to 200,000 web sites on the Internet that are vulnerable to Heartbleed, and recalled the persistent stories of decade-old malware still turning up in honeypot logs on the SANS Internet Storm Center podcast. It seems that vulnerability remediation must follow an asymptotic decay over time. This has interesting implications when it comes to things like vulnerable systems being used in botnets and the like: no real need to innovate, if you can just be the Pied Piper to the many long tails of old vulnerabilities.
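A toy model of that asymptotic decay, purely my own assumption and not fitted to any real data: suppose the vulnerable population halves on some fixed interval. It shrinks quickly at first but never quite reaches zero, which is the long tail attackers keep feeding on:

```python
def remaining_vulnerable(initial: int, half_life_years: float, years: float) -> int:
    """Toy model: the vulnerable population halves every
    `half_life_years`, asymptotically approaching (but never
    reaching) zero."""
    return round(initial * 0.5 ** (years / half_life_years))

# Hypothetical: 600,000 hosts vulnerable at disclosure, halving every 2 years.
for y in (0, 2, 4, 8):
    print(y, remaining_vulnerable(600_000, 2.0, y))
```

Even after four half-lives the model leaves tens of thousands of hosts exposed, roughly the picture the Heartbleed numbers suggest four years on.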
Also interesting to note is that 75,000 of the vulnerable devices are on AWS. I wonder if providers, at some point, begin taking action against wayward hosting customers who are potentially putting both their platform and reputation at risk.
I’m also left wondering what the story is behind these 200,000 devices: did the startup go belly up? did the site owner die? is it some crappy web interface on an embedded device that will never get an update again?
I was reading an article earlier today called “Why Hackers Attack Healthcare Data, and How to Protect It” and I realized that this may well be the one-thousandth such story I’ve read on how to protect PHI. I also realized that I can’t recall any of the posts I’ve read being particularly helpful: most contain a few basic security recommendations, usually aligned with the security offerings of the author’s employer. It’s not that the authors of the posts, such as the one I linked to above, are wrong, but if we think of defending PHI as a metaphorical house, these authors are describing the view they see when looking through one particular window of the house. I am sure this is driven by the need for security companies to publish think pieces to help establish credibility with clients. I’m not sure how well that works in practice, but it leaves the rest of us swimming in a rising tide of fluffy advice posts proclaiming to have the simple answer to your PHI protection woes.
I’m guessing you have figured out by now that this is bunk. Securing PHI is hard and there isn’t a short list of things to do to protect PHI. First off, you have to follow the law, which prescribes a healthy number of mandatory, as well as some addressable, security controls. But we all know that compliance isn’t security, right? If following HIPAA were sufficient to prevent leaking PHI, then we probably wouldn’t need all those thought-leadership posts, would we?
One of the requirements in HIPAA is to perform risk assessments. The Department of Health and Human Services has a page dedicated to HIPAA risk analysis. I suspect this is where a lot of organizations go wrong, and probably the thing that all the aforementioned authors are trying to influence in some small way.
Most of the posts I read talk about the epidemic of PHI theft, and PHI being sold in the underground market, and then focus on some countermeasures to prevent PHI from being hacked. But let’s take a step back for a minute and think about the situation here.
HIPAA is a somewhat special case in the world of security controls: its controls are pretty prescriptive and apply uniformly. But we know that companies continue to leak PHI. We keep reading about these incidents in the news and reading blog posts about how to ensure our firm’s PHI doesn’t leak. We should be thinking about why these incidents are happening to help us figure out where we should be applying focus, particularly in the area of the required risk assessments.
HHS has a great tool to help us out with this, lovingly referred to as the “wall of shame”. This site contains a downloadable database of all known PHI breaches of over 500 records, and there is a legal requirement to report any such breach, so while there are undoubtedly yet-to-be-discovered breaches, the 1800+ entries give us a lot of data to work with.
Looking through the data, it quickly becomes apparent that hacking isn’t the most significant avenue of loss. Over half of the incidents arise from lost or stolen devices, or paper/film documents. This should cause us to consider whether we encrypt all the devices that PHI can be copied onto: server drives, desktop drives, laptop drives, USB drives, backup drives, and so on. Encryption is an addressable control in the HIPAA regulations, and one that many firms seemingly decide to dance around. How do I know this? It’s right there in the breach data. There are tools, though expensive and onerous, that can help ensure data is encrypted wherever it goes.
The next most common loss vector is unauthorized access which includes misdirected email, physical mail, leaving computers logged in, granting excessive permissions, and so on. No hacking here*, just mistakes and some poor operational practices. Notably, at least 100 incidents involved email; presumably misdirected email. There are many subtle and common failure modes that can lead to this, some as basic as email address auto-completion. There likely is not a single best method to handle this – anything from email DLP system quarantining detected PHI transmissions for a secondary review, to disabling email address auto-complete may be appropriate, based on the operations of the organization. This is an incredibly easy way to make a big mistake, and deserves some air time in your risk assessments.
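The DLP-quarantine idea mentioned above can be sketched very simply. This is a deliberately naive rule, matching only one SSN-like pattern; real PHI detection must cover far more identifiers (MRNs, names plus diagnoses, and so on), and the pattern here is purely illustrative:

```python
import re

# Hypothetical DLP rule: flag outbound mail whose body matches an
# SSN-like pattern (###-##-####) for secondary review before sending.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def should_quarantine(body: str) -> bool:
    """Return True if the message body looks like it may contain PHI."""
    return bool(SSN_PATTERN.search(body))

print(should_quarantine("Patient SSN: 123-45-6789"))   # flagged
print(should_quarantine("Meeting at 10am tomorrow"))   # passes
```

In practice a rule like this sits in the mail gateway, holding flagged messages in a review queue rather than blocking them outright.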
The above loss types make up roughly 1500 of the 1800 reported breaches.
Now, we get into hacking. HHS’ data doesn’t have a great amount of detail, but “network server” accounts for 170 incidents, which likely make up the majority of the situations we read about in the news. There are 42 incidents each involving email and PCs. Since there isn’t a lot of detail, we don’t really know what happened, but we can infer that most PC-related PHI leaks were from malware of some form, and most network server incidents were from some form of actual “hacking”. The Anthem incident, for example, was categorized as hacking on a network server, though the CHS breach was categorized as “theft”.
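Tallying the wall-of-shame data by breach type is a one-liner once the file is downloaded. The snippet below runs against a toy inline excerpt because the real download’s column names differ from what I show here; the category labels are drawn from the HHS reporting categories, but treat the schema as an assumption:

```python
import csv
from collections import Counter
from io import StringIO

# Toy stand-in for the HHS breach-report CSV; the real file's column
# names differ, this only sketches the tally.
sample = """type,location
Theft,Laptop
Hacking/IT Incident,Network Server
Theft,Paper/Films
Unauthorized Access,Email
Hacking/IT Incident,Network Server
"""

counts = Counter(row["type"] for row in csv.DictReader(StringIO(sample)))
for breach_type, n in counts.most_common():
    print(breach_type, n)
```

Swapping `StringIO(sample)` for an `open()` call on the downloaded file gives the breakdown discussed above.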
Dealing with the hacking category falls squarely into the “work is hard” bucket, but we don’t need new frameworks or new blog posts to help us figure out how to solve it. There’s a great document that already does this, which I am sure you are already familiar with: the CIS Top 20 Critical Security Controls.
But which of the controls are really important? They all are. In order to defend our systems, we need to know what systems we have that contain PHI. We need to understand what applications are running, and prevent unauthorized code from running on our devices storing or accessing PHI. We need to make sure people accessing systems and data are who they say they are. We need to make sure our applications are appropriately secured, and our employees are trained, access is limited properly, and all of this is tested for weakness periodically. It’s the cost of doing business and keeping our name off the wall of shame.
* Well, there appears to be a small number of miscategorized records in the “theft” category, including CHS, and a few others involving malware on servers.
Days like today are a harsh reminder that we have limited time to accomplish what we intend to accomplish in our lives. We have limited time with our friends and relatives.
Make that time count. It’s easy to get into the mode of drifting through life, but before we know it, the kids are grown, or our parents are gone, or a friend passes away, or we just don’t have the energy to write that book.
Dan Geer wrote an essay for the National Science Foundation on whether Cyber Security can be considered a science. The short version is this: what constitutes a “science” is somewhat loose, however based on some commonly held dimensions, cyber security is not yet a science, and most likely could be considered a proto-science. Mr. Geer’s essay is worth reading for yourself, since there is far more nuance than this post will cover.
Similarly, Alex Hutton has also stated in some previous talks that information security is something of a trade craft and not a science. Information security, cyber security, or whatever moniker we want to assign it, does indeed seem to be more of a trade craft than a science or engineering discipline.
Mr. Geer’s essay points out a few unique challenges in cyber space relative to other scientific disciplines: a major part of the “thing” being modeled is sentient adversaries that can adapt, learn, and deceive, and the rapid evolution of technology.
There seem to be other confounding factors as well: the “constituent components” of cyber security are arbitrary and implemented in wildly different fashions by different people and organizations with different levels of skill and incentives, to different specifications, with non-obvious defects, and so on. Translating just a slice of the challenges in cyber security to civil engineering would yield that some timbers used in construction might objectively look similar but have hidden flaws that manifest under certain circumstances, placing a structure’s integrity at risk. The flaws with the timber are not apparent and not easily detectable without incurring extraordinary expense, and even so, not all flaws are likely to be uncovered.
With respect to technology producers, the “building materials” we have to work with in information technology are flawed in many ways, most of which are unseen. With respect to the implementers of technology, the ways in which systems are architected and implemented are generally arbitrary and utilitarian, and do not, in any appreciable way, reflect the uncertainty inherent in the technology being used.
If timbers were so structurally flawed, civil engineering, building codes, architecture, engineering and so on would need to accommodate for the uncertainty that comes with building a structure that relies on such timbers. Information technology very inconsistently deals with this uncertainty. The constant spate of breaches seems to indicate that the uncertainty is often not properly accounted for.
Information technology, and by extension information security, is currently a craft. Some are exceptionally good at their craft, and some are quite poor. The proliferation of information technology into daily lives has, in my view, created a somewhat low barrier to entry into this craft. As a result, we have an extremely wide variation in the quality and care with which information technology is implemented. Similar to furniture or jewelry created by craftsmen, some of it is exceedingly well designed and built and others are complete crap.
Evolving information security into a science has been a personal interest of mine for some time. I would propose that a key aspect, though not the only aspect by far, of translating information security into a science is a more objective approach to designing and implementing “systems” that are inherently resilient to failure within certain parameters. Failure to properly engineer at a “system level” view of information technology is what I see most often leading to the most complex security issues. This will very likely mean that some current technical implementations don’t economically fit into a more scientific future state, which will mean that technology producers will need to adapt accordingly to support the market.
A significant part of this will be clearly understanding the limitations of technology components and designing in a safety margin and detective capabilities that indicate failure.
This is a complicated topic. I certainly do not think I have the answers, but I believe I can see the problem, or at least some manifestations of the problem. As Mr. Geer points out in his essay, the way forward is through continued research, continued evolution of our understanding, better defining the “puzzles” that need to be solved and searching for a paradigm that addresses those puzzles, as well as ensuring that practitioners have a common level of competence.
The question is how to start taking those steps.
Thanks to my Twitter friend Rob Lewis (@infosec_tourist) for the link to Mr. Geer’s essay and his constant needling of me in this direction.