NCSAM Day 1: Multifactor Authentication

Enable multifactor authentication everywhere it is feasible to do so.  Where it’s not feasible, figure out how to do it anyway, for example by placing an authenticated firewall in front of a device that doesn’t support MFA.

For many years, sophisticated adversaries have leveraged legitimate credentials in their attacks.  At the same time, organizations have struggled mightily to get their employees to pick “strong” passwords through such clever devices as a password policy that includes a minimum length and a certain diversity of character types, giving rise to the infamous “Password1”.  This problem holds true for server, network, and database administrators, too.

Three shifts in the threat landscape make multifactor authentication more important than ever:

  1. The number of ways that an adversary can obtain a password continues to grow, from dumping credentials out of memory with mimikatz to cracking captured hashes offline with hashcat.
  2. The techniques that were once the domain of sophisticated adversaries are diffusing into the broader cyber crime ecosystem, such as we see with SamSam.
  3. The move to borderless IT – cloud, SaaS, and so on – means that the little safety nets our firewalls once provided are all but gone. Microsoft recently announced that it is deprecating passwords in favor of multifactor authentication on some of its cloud services, such as Azure Active Directory.

Multifactor authentication is not cutting edge.  This is 2018.  I first used a multifactor authenticated system in 1998 and it worked well back then.
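
As a concrete illustration of what a second factor actually is, here’s a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps.  It uses the pyotp library; the account name, issuer, and secret handling are purely illustrative, not a production enrollment flow.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# The account name, issuer, and secret handling are illustrative only.
import pyotp

# Enrollment: generate a per-user secret and hand it to the user's
# authenticator app, typically as a QR code of this provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the password is factor one; the 6-digit code from the app is factor two.
code_from_user = input("Enter the code from your authenticator app: ")
if totp.verify(code_from_user, valid_window=1):  # tolerate one 30-second step of clock drift
    print("Second factor accepted")
else:
    print("Second factor rejected")
```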

Some gotchas to be aware of:

  • As has been widely reported, SMS-based multifactor authentication is not advised, due to the numerous ways adversaries can defeat it, such as SIM swapping.
  • Any multifactor scheme that either stores the second factor on the computer being authenticated from (such as a certificate) or delivers the second factor to it (such as an email) is less than ideal, given that a major use case is one where that workstation is compromised. Adversaries can use certificates stored on the system, along with a captured password, to do their deeds.
  • A common adversarial technique for getting around multifactor authentication is social engineering the helpdesk. Be sure to develop a reasonably secure process for authenticating employees who are having trouble, and for providing an alternate authentication method if, for example, someone loses their phone.

P.S. Authentication is pronounced Auth-en-ti-cation, not Auth-en-tif-ication.  Thank you.

Cyber Security Awareness Month

Tomorrow starts National Cyber Security Awareness Month (NCSAM).  I’m going to take a break from my normal complaining about what does not work and attempt to write a post per day for the next month with suggestions for making improvements based on things I’ve learned the hard way.  NCSAM normally focuses on the “user” experience, but in keeping with the intent of this site, I’ll be focusing on improvements to organizational IT and IT security.  I hope that none of what I will post is new or revolutionary to anyone who is familiar with IT security; however, a reminder and some additional context never hurt.

Stay tuned…

Assessing Risk Assessments

I recently finished listening to the book titled “Suggestible You”.  The book is fascinating overall, but one comment the author made repeatedly is that the human brain is a “prediction machine”.  Our brains are hardwired to make constant snap predictions about the future as a means of surviving in the world.

That statement got me to thinking about IT security, as most things do.  We make predictions based on our understanding of the world…  can I cross the street before that car gets here?  I need to watch out for that curb.  If I walk by the puddle, a car will probably splash me…  and so on.  IT risk assessments are basically predictions, and we are generally quite confident in our ability to make such predictions.  We need to recognize, however, that our ability to predict is limited by our previous experiences and how often those experiences have entered our awareness.  I suspect this is closely related to the concept of availability bias in behavioral economics, where we give more weight to things that are easier to bring to mind.

In the context of an IT risk assessment, limited knowledge of different threat scenarios is detrimental to a quality result.  Our challenge, then, is that the threat landscape has become incredibly complex, meaning that it’s difficult, and possibly just not practical, to know about and consider all threats to a given system. And consider that we generally are not aware of our blind spots: we *think* we have enumerated and considered all of the threats in the proper order, but we have not.

This thought drives me back to the concept of standard “IT building blocks” that have well-documented best practices, risk enumerations, and interfaces with other blocks.  It’s a highly amorphous idea right now, but I don’t see a better way to manage the complexity we are currently faced with.

Thoughts appreciated.  More to come as time permits.

Infosec #FakeNews

In the infosec industry, much of the thought leadership, news, and analysis comes from organizations with something to sell.  I do not believe these groups generally act with an intent to deceive, though we need to be on guard for data that can pollute and pervert our understanding of reality.  Two recent infosec-related posts caught my attention that, in my view, warrant a discussion.  First is a story about a study that indicates data breaches affect stock prices in the long run.

Here is the story: https://www.zdnet.com/article/data-breaches-affect-stock-performance-in-the-long-run-study-finds/

Here is the study: https://www.comparitech.com/blog/information-security/data-breach-share-price-2018/

Most of us who work in the security world struggle to justify the importance of new and continued investment and focus on IT security initiatives, and the prospect of a direct linkage between data breaches and stock price declines is a wonderful thing to include in our powerpoint presentations.  As humans, we are tuned to look for information that confirms our views of the world, and the results of this study seem intuitively correct to most of us.  We WANT this study to be true.

But as with so many things in this world, it’s not nearly that clear-cut.  To the credit of the study’s authors, the study includes a section on its limitations, but that really doesn’t detract from the headline, does it?  So, I propose an alternate headline: “Data Breach Proves to Be a Boon for LNKD Shareholders!”

In addition to the issues identified in the “limitations” section, there are other confounding factors to consider:

  1. They all had data breaches.  I know that sounds dull, but consider running a study of people who lost weight, and only including in the study people who are leaving a local gym in the evening.  Do companies that experience data breaches have some other attributes in common, such as weak leadership or a culture of accepting too many risks?  Might these factors also manifest themselves in bad decisions in other aspects of the business that might result in a declining stock price?  We don’t actually know, because the only way to know for sure is through experiments that would be highly unethical, even if immensely fun.
  2. Averages don’t work very well for small data sets.  Consider the following situation (a quick calculation follows this list):
    • Companies A, B, C, and D all suffer a data breach on the same day
    • Companies A, B, and C all see their stock rise by 2% the week after their respective breaches
    • Company D sees its stock decline by 20% the week after its breach
    • The average change for this group of companies is a 3.5% decline the week after their breaches.  But that doesn’t tell the whole story, does it?
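
A quick back-of-the-envelope calculation makes the point.  This is plain Python over the hypothetical numbers from the scenario above:

```python
# Hypothetical one-week stock moves from the scenario above, in percent.
changes = {"A": +2.0, "B": +2.0, "C": +2.0, "D": -20.0}

average = sum(changes.values()) / len(changes)
print(f"Average change: {average:+.1f}%")  # -3.5%, i.e. an average decline of 3.5%

# The average hides the distribution: three of the four stocks actually rose.
gainers = [name for name, pct in changes.items() if pct > 0]
print(f"{len(gainers)} of {len(changes)} companies gained the week after their breach")
```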

I’m not saying that breaches don’t cause stock prices to decline.  I am saying that I’ve not yet seen good evidence for that, and that is because we can’t fork the universe and run experiments on alternate realities and compare the results.  If we could, this would not be among the first experiments I’d propose.

Like a good Ponemon study, this study is great fodder for executive meetings, but beware that you are not on firm ground if you get challenged.  As an anecdote, I used to be a pretty active investor, and while I did not get the Ferrari, I did learn a few things:

  • I am apparently supposed to buy low and sell high, not the other way around
  • Breaches, from a pure investor standpoint, are generally viewed as a one-time charge, and (generally) do not change the underlying fundamentals of the company.  When investing in a company, it’s the fundamentals that matter, such as: are their sales going up and cost of sales going down?


Next is a story about a study that indicates 90% of retailers “fail PCI”.

Here is the story: https://www.infosecurity-magazine.com/news/over-90-of-us-retailers-fail-pci/

Here is the study: https://explore.securityscorecard.com/rs/797-BFK-857/images/2018-Retail-Cybersecurity-Report.pdf

Unfortunately, the authors of this report don’t give a description of its limitations, but I think we can infer a lot about them based on the type of testing this organization performs to gather the data.  That company gathers and collates open source intelligence, seemingly similar to what other players like BitSight are doing.  I would assert that what the report actually finds is that retailers are among the worst industries at patch management, based on the data this organization gathered.  Without knowing the details of each company in the study, we can’t know whether the environment analyzed was part of the PCI DSS Cardholder Data Environment (CDE) for a given retailer.  Asserting that an organization that seemingly must comply with PCI DSS is violating its obligations, based on a review of the organization’s “digital footprint”, is not appropriate.  I am not defending these organizations’ lack of patching; I’m just pointing out that patching all of an organization’s systems is not a PCI DSS requirement, though maybe it should be.

The downside of this sort of report is that it likely “normalizes” non-compliance with PCI DSS.  If I’m spending a tremendous amount of time, energy, and money to keep my environment in the right shape for PCI, but then see that 90% of others in my sector are not doing this, how motivated will I or my management team be?  The “right” thing to do clearly doesn’t change, but this study changes our perception of what is going on in the world.

I had a math teacher in high school who told us to keep an open mind, but not so open that people throw their trash in.  Remember to maintain a healthy level of skepticism when reading infosec articles, reports, and studies… And yes, even blog posts like this one.

A Compelling Case For DevOps?

Since I first learned of its existence, I have had a mental image of devops that looks, to me, like a few classes of sugar-laced kindergartners running around the playground with scissors, certain that someone would end up hurt pretty badly.  While I certainly think there is an opportunity for bad behavior, like using devops purely as a cover to reduce costs, resulting in important steps being skipped, the recent spate of vulnerabilities in Apache Struts has me wondering if NOT going the devops direction is the riskier path.

Traditionally, business applications that use components like Apache Struts have tended to be pretty important to operations, and therefore changes are very measured – often allowing only a few change windows per year.  Making so few changes per year causes a few problems:

  1. When a critical vulnerability is announced, like we have with Struts, the next change window may be a long way off, performing an interim change is politically difficult to do, and waiting becomes the path of least resistance
  2. Application teams make changes to the application environment so infrequently that testing plans may not be well refined, making a delay until the next change window the most appealing plan

In our current world, we need the agility and confidence to rapidly address critical fixes, like we continue to see with Struts, despite the complexity of environments that Struts tends to be part of.

Prioritizing Vulnerability Remediation

As we’ve seen in past events such as WannaCry and the Equifax breach, timely vulnerability remediation is a challenge for many organizations.  Ideally, all vulnerabilities would be fixed as soon as they are discovered, and patches applied immediately upon release; however, that’s often not an option.  For example, patches often need to be tested to ensure nothing breaks, and patching often requires reboots or service restarts, which must be done during a change window.  All of this takes coordination and limits the throughput of applying patches, and so organizations end up adopting prioritization schemes.  Most organizations prioritize remediation based on a combination of the severity of the vulnerability (CVSS score) and the exposure of assets (such as Internet-facing); however, the vast majority of vulnerabilities are never exploited in the wild.  The team at Kenna Security published a paper that indicates less than two percent of vulnerabilities end up being exploited in the wild and proposes some alternative attributes to help more effectively prioritize remediation.  This is an excellent paper, but the challenge remains: it’s difficult to predict which vulnerabilities will actually end up being exploited.
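
To make the trade-off concrete, here is a rough sketch of the CVSS-plus-exposure style of prioritization described above, with a “known exploited” signal added on top.  The weights and the second CVE are invented for illustration; only CVE-2017-5638 (the Struts flaw behind the Equifax breach) is real.

```python
# Crude vulnerability prioritization sketch.  The weights and the second CVE
# below are invented for illustration; they are not from any standard or vendor.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float               # CVSS base score, 0.0 - 10.0
    internet_facing: bool
    known_exploited: bool     # e.g. seen in exploit kits or threat intel feeds

def priority(f: Finding) -> float:
    score = f.cvss
    if f.internet_facing:
        score += 3.0          # exposure bumps the severity-only score
    if f.known_exploited:
        score += 5.0          # evidence of real-world exploitation trumps both
    return score

findings = [
    Finding("dmz-web01", "CVE-2017-5638", 10.0, True, True),    # Apache Struts, exploited in the wild
    Finding("hr-app02",  "CVE-2099-0001", 9.8,  False, False),  # hypothetical, internal-only
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.host:10s}  {f.cve}")
```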

Last week, a researcher posted a link to a USENIX paper on prioritizing vulnerabilities to the Security Metrics mailing list.  The paper describes a method of rapidly detecting vulnerability exploitation, within 10 days of vulnerability disclosure, by comparing known vulnerable hosts to reputation blacklists (RBLs), on the theory that most vulnerability exploitation that happens in the wild ends with the compromised host sending spam.  The authors claim to achieve 90% accuracy in predicting whether there is active exploitation of a vulnerability under analysis.
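
The general idea, as I understand it, boils down to a set intersection: hosts known to be running a vulnerable version that subsequently show up on spam blacklists are a signal of in-the-wild exploitation.  The toy sketch below assumes two plain-text files of IP addresses; it is an illustration of the concept, not the authors’ methodology.

```python
# Toy illustration of the RBL-intersection idea, not the paper's methodology.
# Assumes two plain-text input files with one IP address per line.

def load_ips(path: str) -> set[str]:
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

# Hosts observed (e.g. via internet-wide scanning) running the vulnerable version.
vulnerable_hosts = load_ips("vulnerable_hosts.txt")

# Hosts currently listed on one or more reputation blacklists (RBLs).
blacklisted_hosts = load_ips("rbl_listed.txt")

overlap = vulnerable_hosts & blacklisted_hosts
rate = len(overlap) / max(len(vulnerable_hosts), 1)

# A sharply rising overlap rate shortly after disclosure would suggest the
# vulnerability is being exploited, e.g. to build spam-sending botnets.
print(f"{len(overlap)} of {len(vulnerable_hosts)} vulnerable hosts "
      f"({rate:.1%}) appear on blacklists")
```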

While I see a few potential issues with the approach, caveated by the fact that I am nowhere near as smart as the authors of this paper, this is the sort of approach that we need to be developing and refining, rather than the haruspicy we currently use to prioritize vulnerability remediation.

No True Infosec…

Recent news coverage about West Virginia’s decision to use a smart-phone blockchain voting system (plot twist: calling it “blockchain” might be a stretch) is causing a stir on social media amongst the infosec community.  This XKCD cartoon is a popular one: 

This spawned a lot of thoughtful discussions and debates, and a fair number of ad hominem comments, per usual.  This was a particularly interesting thread (multiple levels deep):

And an equally interesting reply from Rob Graham:

There’s a lot covered in the layers of those threads, and they’re worth a read.  This got me to thinking about how cyber security fits into the world.  It seems that a lot of the struggle comes from attempting to find analogies for cyber security in other aspects of the world, like aviation, building codes, and war. Certainly, aspects of each of these apply, but none are a perfect fit. I previously wrote a little bit about the problem of comparing cyber security to kinetic concepts.

Designing software and IT systems is similar to, but not the same as, designing physical structures, and it can likely benefit from the concept of common standards. The cyber domain can likely learn from the continual improvements seen in the aviation industry, where failures are scrutinized and industry-wide fixes are implemented, whether something tangible like a defective or worn-out component, or intangible like the command structure of personnel in a cockpit.

So much of this seems immensely sensible.  But there are sharp edges on the concept.  As pointed out in the twitter threads above, weather does not evolve to defeat improvements made to aircraft the way that adversaries do in the cyber domain.  The same is true for many things in the kinetic world: buildings, elevators, fire suppression systems, and so on.  All are critical, and all need to follow certain standards to help reduce the likelihood someone will be hurt, though these standards often vary somewhat by jurisdiction.  In general, most of these things are not designed to survive an intelligent adversary intent on subverting the system.  That’s not completely true, though, as we know that certain structures, such as skyscrapers, are often designed to withstand a certain level of malicious intent.  But only to a point, and ready examples should come to mind where this breaks down.

I’ve been thinking about a lot of the ways that threat actors affect physical systems (assuming no electronic/remote access component) and I think it looks approximately like this:

Where the level of black indicates linkage between the motivation and the proximity.  It’s not perfect, and I’m sure if I think about it for a bit, I’ll come up with contradictory examples.

With regard to cyber issues, the “can be anywhere” column turns black at least for malicious, terroristic, and war.  We simply don’t design our elevators, airplanes, or cars with the thought that anyone anywhere in the world is a potential threat actor.  Certainly that’s changing as we IoT-ify everything and put it on the Internet.

So, all this to say that we spend too much time arguing about which analogies are appropriate.  In two hundred years, I assume that someone will analogize the complicated societal problem of the day to primitive cyber security, and someone else will complain about how cyber security is close to, but not the same as, that modern problem.

It seems intuitive that we *should* look across many different fields, inside and outside IT, for lessons to learn, including things like:

  • Epidemiology
  • Aviation
  • Civil engineering
  • Architecture
  • War fighting
  • Chemistry
  • Sociology
  • Psychology
  • Law enforcement
  • Fire fighting

…but it’s naive to expect that we can apply what worked in these areas to the cyber security problem without significant adaptation.  Rather than bicker about whether software development needs a set of building codes, or whether we should apply the aviation disaster response model to cyber security incidents, in my estimation we ought to be selecting the relevant parts of many different disciplines to create a construct that makes sense in the context of cyber security and all that it entails.

We have to accept that there *will* be electronic voting.  We have to accept that our refrigerator, toaster, toilet, and gym shoes *will* be connected to the Internet some day, if not already.  We don’t have to like these things, and they may scare the hell out of us.  But as the saying goes, progress happens one funeral at a time – some day, I will be gone.  My kids’ kids will be voting from their smart watches.  Technology advances are an unrelenting, irreversible tide.  Life goes on.  There are big problems that have to be dealt with in the area of technology.  We need new ways to reduce the macro risk, but must be cognizant that risk will never be zero.

I watch people I respect in the industry lying down on the tracks in front of the e-voting train, attempting to save us from the inevitable horrors to come.  Honestly, this has echoes of the IT security teams of old (and maybe of today) saying “no” to business requests to do some particular risky thing.  There’s a reason those teams said “no”: what the business was attempting to do was likely dangerous and held hidden consequences that weren’t apparent to the requester.  But over time, those teams were marginalized to the point where even people in the industry make jokes about the unhelpful “department of no” that IT security used to be.  The world moved on, and the department of no was (mostly) run out of town.  I don’t think we should expect a different outcome here.

While we are busy grousing about the validity of an XKCD cartoon, or about whether building codes or aviation is the more representative model, companies like Voatz are off selling their wares to the government.


I Was Wrong About Worms Making A Comeback

My friend and co-host @lerg pointed me to this post on Errata Security, marking the one-year anniversary of NotPetya and, in particular, pointing out that reports about NotPetya still mischaracterize the cause as a lack of patching.  That blog post is well worth a read and I don’t have much to add beyond it.

In the wake of WannaCry, and later NotPetya, I made the call a number of times on DefSec that we were likely seeing the start of a trend that would see network-based worms making a comeback.  Well, it’s been a year, and there haven’t been any more notable worms.  That is obviously good, but I think it’s also bad in a way.  I strongly believe one of the reasons NotPetya and WannaCry were so devastating is that we had not seen worms in so long, and so network propagating worms really haven’t been firmly in the threat models of many people/organizations for some time.  That led to some complacency, both on patches and also on ID management/trust relationships, as the post above describes.  My fear is that, because worms are fading out of our consciousness again, the next batch of worms in the coming months or years will again be devastating, but even more so, as we become more and more reliant on pervasive IT connected by crappily designed networks.

Seven Critical Things To Protect Your Infrastructure and Data

Given some recent happenings in the world, I felt it important to get the word out on a few really key things we need to do/stop doing/do differently as we manage our infrastructure to help prevent data breaches.  This is probably more relevant to IT people, like sysadmins, so here goes…

  1. KEEP A FREAKING INVENTORY OF YOUR SYSTEMS, THEIR IP ADDRESSES, THEIR FUNCTIONS, AND WHO TO CONTACT.  Why is this so hard?  Keep it up to date.  By the way, we know this is hard, because it is the #1 control on the CIS Top 20 Critical Cyber Security Controls.  If you’re all cloud-y, I’m sure you can find a way to stick some inventory management into your Jenkins pipeline.
  2. Monitor the antivirus running on your servers.  Unless the server is a file server, if your AV detects an infection, one that you’re reasonably confident is not a false positive, you should proceed immediately to freak-out mode.  While workstations ideally wouldn’t be exposed to viruses, we intuitively know that the activities of employees, like browsing the internet, opening email attachments, connecting USB drives, and so on, will cause a workstation to be in contact with a steady stream of malware.  And so, seeing AV detections and blocks on workstations gives us a bit of comfort that the controls are working.  You should not feel that level of comfort with AV hits on servers.  Move your servers to different group(s) in the AV console, create custom reports, integrate the console with an Arduino to make a light flash or to electrify Sam’s chair – I don’t really care how you are notified, but pay attention to those events and investigate what happened.  It’s not normal, and something is very wrong when it does happen.
  3. If you have determined that a server is/was infected with malware, please do not simply install the latest DAT file into your AV scanner and/or run Malwarebytes on the server and put the system back into production.  I know we are measured by availability, but I promise you that, on average, taking the time to do it right will cause you far, far, far less pain and downtime than the alternative.  When a server is infected, isolate it from the network and try to figure out what happened, but do not put it back into production.  You might be able to clean the malware with some tool like Malwarebytes, but you have no idea if there is a dropper still present, or what else was changed on the system, or what persistence mechanisms may have been implanted.  Build a new system, restore the data, and move on, while trying to figure out how this happened in the first place.  This is a great advantage of virtualized infrastructure, by the way.
  4. If you have an infected or compromised system in the environment, check other systems for evidence of similar activity.  If the environment uses Active Directory, quickly attempt to determine if any administrative accounts are likely compromised, and if so, it’s time to start thinking about all those great ideas you’ve had… you know the ones about how you would do things differently if you were able to start over?  This is probably the point at which you will want to pull in outside help for guidance, but there is little that can be done to assure the integrity of a compromised domain.  Backups, snapshots, and good logging of domain controllers can help more quickly return to operations, but you will need to be wary about any domain-joined system that wasn’t rebuilt.
  5. Periodically validate that you are collecting logs from all the systems that you think you should be (a rough sketch of such a check follows this list), and ensure you have the ability to access those logs quickly.  Major incidents rarely happen on a Tuesday morning in January.  They usually happen late on the Friday of a long weekend, and if Sally is the only person who has access to the log server and she just left for a 7-day cruise, you’re going to be hurting.
  6. Know who to call when you are in over your head.  If you’re busy trying to figure out if someone stole all your nuclear secrets, the last thing you want to be doing is trying to interview incident response vendors, get quotes, and then approval for a purchase order.  Work that stuff out ahead of time.  Most 3rd party incident response companies offer retainer agreements.
  7. Know when you are in over your head.  The average IT person believes they have far above average knowledge of IT[1], but the tactics malware and attackers use may not make sense to someone not familiar with such tactics.  This, by the way, is why I am a strong advocate for IT staff, and network/system admins in particular, spending some time learning about red team techniques.  Note, however, that this can have a significant downside[2].
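
Here is the rough sketch promised in item 5: compare your system inventory against the hosts your log platform has actually heard from recently.  The file names, column names, and CSV format are assumptions; adapt them to however your inventory and SIEM export their data.

```python
# Sketch for item 5: which inventoried systems have gone quiet in the logs?
# File names and column names here are assumptions, not any product's format.
import csv
from datetime import datetime, timedelta, timezone

def inventory_hosts(path: str = "inventory.csv") -> set[str]:
    """Hostnames we believe should be sending logs (one row per system)."""
    with open(path, newline="") as fh:
        return {row["hostname"].lower() for row in csv.DictReader(fh)}

def recently_logging_hosts(path: str = "log_sources.csv", max_age_hours: int = 24) -> set[str]:
    """Hostnames whose last log event is newer than max_age_hours.
    Expects columns: hostname, last_seen_iso (ISO 8601 timestamp)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    seen = set()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            last_seen = datetime.fromisoformat(row["last_seen_iso"])
            if last_seen.tzinfo is None:          # treat naive timestamps as UTC
                last_seen = last_seen.replace(tzinfo=timezone.utc)
            if last_seen >= cutoff:
                seen.add(row["hostname"].lower())
    return seen

silent = inventory_hosts() - recently_logging_hosts()
for host in sorted(silent):
    print(f"WARNING: no logs from {host} in the last 24 hours")
```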


1. Yes, I made that up, but Dunning-Kruger tells me I’m probably right.  Or maybe I am just overconfident in my knowledge of human behavior…

2. Red team is sexy, and exposing sysadmins to those tactics may cause a precipitous drop in the number of sysadmins and a sudden glut of penetration testers. Caveat Emptor.

Hardware Messes As An Opportunity

I’ve been in IT for a long time.  I’ve designed and built datacenters and I’ve created network operations teams.  Not so long ago, the thought of moving my organization’s sensitive data and servers to some 3rd party was a laughable joke to me.  But times have changed, and I hope that I’ve changed some, too.

In the past year, we have seen a spate of significant hardware vulnerabilities, from embedded debug ports, to Meltdown/Spectre, to vulnerable lights-out management interfaces, and now the news about TLBleed.  I suspect that each new hardware vulnerability identified creates incentive for other smart people to start looking for more.  And it appears that there is no near-term end to the hardware bugs waiting to be found.

In the aftermath of Meltdown/Spectre, I wrote a bit about the benefits of cloud, specifically that most cloud providers had already implemented mitigations by the time news of the vulnerabilities became public.  There seem to be many benefits to moving infrastructure to the cloud, and TLBleed looks like another example, because we can transfer the capital cost of procuring replacement servers to our providers, if necessary.  (Note: I am not convinced TLBleed is an issue that rises to that level of importance.)  We do, however, need to ensure that the provider has taken the appropriate steps to address the problems.