Post Traumatic Vulnerability Disorder

I’ve talked pretty extensively on the Defensive Security Podcast about the differences between patch management and vulnerability management.  We’ve now had two notable situations within six months where a significant portion of our infrastructure estate was exposed to a serious threat.  And no patches.  At least for a while.

In both the Heartbleed and Shellshock cases, the vulnerabilities were disclosed suddenly, exploit code was readily available, the flaws were trivial to exploit, and attacks were nearly undetectable (prior to implementing strategies to detect them, at least).

And in both cases we have been stuck wringing our hands waiting for patches for the systems and applications we use.  In the case of Heartbleed, we sometimes waited for weeks.  Or even months.

Circumstances like Heartbleed and Shellshock highlight the disparity between patch management and vulnerability management.  However, being aware of the difference isn’t informative of the actions we can or should take.  I’ve been thinking about our options in this situation.  I see three broad options we can choose from:

1. Accept the risk of continuing to operate without patches until patches are available

2. Disconnect or isolate systems that are vulnerable

3. Something in the middle

…and we may choose a combination of these depending on risks and circumstances.  #1 and #2 are self-explanatory and may be preferable.  I believe that #1 is somewhat perilous, because we may not fully understand the actual likelihood or impact.  However, organizations choose to take risks all the time.

#3 is where many people will want to be.  This is where it helps to have smart people who can help to quickly figure out how the vulnerability works and how your existing security infrastructure can be mobilized to help mitigate the threat.

In the current case of ShellShock, some of the immediate options are to:

1. Implement mod_security rules to block the strings used in the attacks

2. Implement IPS or WAF rules to prevent shell command injection

3. Implement iptables rules to filter out connections containing the strings used in the attacks

4. Increase proactive monitoring, backups and exorcisms on vulnerable systems

…and so on.  So, there ARE indeed things that can be done while waiting for a patch.  But they are most often going to be highly specific to your environment and what defensive capabilities are in place.
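As a concrete illustration of options #1 and #3 above, the Shellshock attack strings all carry the Bash function-definition marker `() {`, which makes them easy to match on.  The following is a minimal detection sketch (my own illustrative helper, not a drop-in mod_security or iptables rule):

```python
import re

# Shellshock payloads begin with a Bash function definition, e.g. "() { :;};",
# typically smuggled in via HTTP headers that get passed to CGI scripts.
SHELLSHOCK_PATTERN = re.compile(r"\(\s*\)\s*\{")

def is_suspicious(headers: dict) -> bool:
    """Return True if any HTTP header value contains the Shellshock marker."""
    return any(SHELLSHOCK_PATTERN.search(value) for value in headers.values())

benign = {"User-Agent": "Mozilla/5.0"}
attack = {"User-Agent": "() { :;}; /bin/cat /etc/passwd"}
print(is_suspicious(benign))  # False
print(is_suspicious(attack))  # True
```

The same pattern is what the published mod_security and iptables string-match mitigations keyed on; filtering on it buys time, but it is a signature for known attack strings, not a fix for the underlying Bash flaw.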

HeartBleed and ShellShock should show us that we need to ensure we have the talent and capabilities to intelligently and effectively respond to emerging threats such as these, without having to resort to hand wringing while waiting for patches.

Think about what you can do better next time.  Learn from these experiences.  The next one is probably not that far away.

H/T to my partner in crime, Andy Kalat (@lerg) for the title.

Compliance Isn’t Security, or Why It’s Important To Understand Offensive Techniques

The security vs. compliance debate continues, though I’ve lost track of who is actually  still arguing that compliance is security.  Maybe it’s auditors?  Or managers?  This topic comes up a lot for me, both at work and from listeners of my security podcast.

The “compliance” mindset often leads to a belief that if some particular scenario is not prohibited by policy, it must be safe.  After all, if it weren’t safe, it would be explicitly addressed in policy.

I can look around and see compliance thinking behind many breaches: “If it isn’t safe for card readers to pass unencrypted credit card information to a POS terminal, surely PCI DSS would not permit it!”

For many in the information security field, this is a quaint discussion.  In my advanced age, I’m becoming more convinced that “compliance” is not just a fun philosophical debate with auditors, but rather an active contributor to the myriad security issues we wrestle with.

Clearly, the better path to take is one of assessing risks in IT systems and addressing those risks.  However, we must keep in mind that we are often asking for that assessment to be performed by the same people who are apt to accept policy as the paragon of protection.

As we debate the perils of focusing too much on offensive security and not enough on defense, it’s important to point out that, without understanding offensive techniques, our imaginations are hobbled as we evaluate risks to IT systems.

For example, it probably seems reasonable to the average IT person to connect an Internet-facing Windows web server to her organization’s Active Directory domain.  There are no policies or regulations that prohibit such a thing.  In fact, AD provides many great security capabilities, like the ability to centrally provision and remove user IDs.  Without the context of how this situation can be, and indeed often is, leveraged by attackers in significant breaches, the perceived benefits outweigh the risks.

I see value in exposing general IT workers, not just information security personnel, to offensive techniques.  I am not advocating that we teach a Windows administrator how to perform code injection into running processes, but rather teach that code CAN be injected into running processes.  And so on.

To me, this lack of understanding is a major contributor to the issues underlying the “security vs. compliance” discussion.  Organizations spend quite a lot of effort on security awareness programs for employees, but I see almost no focus on educating IT staff on higher-order security threats.  I don’t expect much to change until we change this state of affairs.

Human Nature and Cyber Security

This has been a particularly active year for large scale, public breaches in the news. Next year’s Data Breach Investigations Report from Verizon should provide some context on whether we are experiencing a “shark attack” phenomenon of continued media coverage of each new breach, or if this is really an exceptional year.

Regardless of whether we are trending above average or not, it’s pretty clear that a lot of companies are experiencing data breaches.

Information security is a series of trade-offs: investment vs. security, ease of use vs. security, operational costs vs. security and so on.  This isn’t a new or revolutionary concept.  Groups like SIRA focus on higher order efforts to quantify information risk to inform security strategy, justify investment in security programs and so on.

At a lower level, making intelligent decisions on the trade-offs involved in IT systems projects requires a well-informed assessment of the risks involved.  However, experiments in cognitive psychology and behavioral economics consistently demonstrate that humans have a raft of cognitive biases which impact decision making.  For instance, we are generally overconfident in our knowledge and abilities, and we tend to think about likelihood in the context of what we have personally experienced.  Uncertainty, inexperience or ignorance about exactly how IT system security can fail may lead to an improper assessment of risk.  If risks are not clearly understood, decisions made using these assessments will not be as accurate as expected.

Douglas Hubbard writes extensively on the topic of “expert calibration” in his book “How To Measure Anything”.  In this book, calibration involves training experts to more clearly understand and articulate their level of uncertainty when making assessments of likelihoods or impacts of events.  While it doesn’t eliminate error from subjective assessments, Mr. Hubbard claims that it demonstrably improves estimates provided by calibrated experts.  This calibration process likely makes these “experts” more aware of their cognitive biases.  Regardless of the exact mechanism, measurably improving estimates used in decision making is a good thing.
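As a toy illustration of the calibration idea (my own example, not one from Hubbard’s book): a calibrated expert’s 90% confidence intervals should contain the true value roughly 90% of the time, and a simple scoring function exposes overconfidence immediately:

```python
# Score an expert's 90% confidence intervals against actual outcomes.
# A calibrated expert should land near a 90% hit rate; overconfident
# experts give intervals that are too narrow and score much lower.
def hit_rate(estimates):
    """estimates: list of (low, high, actual) tuples for 90% CIs."""
    hits = sum(1 for low, high, actual in estimates if low <= actual <= high)
    return hits / len(estimates)

# A hypothetical overconfident expert: intervals too narrow.
answers = [
    (10, 20, 25),     # miss
    (100, 150, 120),  # hit
    (1, 5, 7),        # miss
    (30, 60, 45),     # hit
    (2, 4, 3),        # hit
]
print(f"hit rate: {hit_rate(answers):.0%}")  # 60% -- well short of 90%
```

The gap between the stated confidence level and the measured hit rate is the thing calibration training tries to close.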

Information security could benefit from a similar calibration concept.  Understanding the mechanisms through which IT systems can be breached underpins our ability to make reasonable assessments about the risks and likelihood of a breach in a given environment.

To pick on Target for a minute:

Would a clear understanding of the mechanisms by which the external vendor application could be exploited have changed the decision to have the server authenticate against the company’s Active Directory system?  An application to coordinate the activities of the myriad vendors a company the size of Target has is almost certainly a necessity, but would a better understanding of the ways that a vendor management server could be exploited have made a case for isolating the application from the rest of the Target network, with the tradeoff of higher operational costs?  Clearly, that question can only be answered by those present when the decision was made.

Daniel Kahneman, in his book “Thinking, Fast and Slow”, describes a cognitive bias he calls the availability heuristic. Essentially this idea posits that people judge concepts and likelihoods based on their ability to recall something from memory, and if it can’t be recalled, it is not important. Similarly, Thomas Schelling, a Nobel Prize-winning economist, wrote:

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nate Silver’s book “The Signal and the Noise” has an excellent chapter on this concept (Chapter 13).

To become calibrated experts who can clearly assess security risks arising from systems, the IT industry would seemingly benefit from a broader understanding of the methods used to penetrate systems and networks.  Certainly this will not “solve” the problem of breaches, however it should help us make better-informed decisions regarding IT security tradeoffs.

Nor does this mean that organizations will or should always choose the least risky or most secure path.  Businesses have to deal with risk all the time and often have to accept risk in order to move forward.  The point here is that organizations are often seemingly not fully cognizant of risks they accept when making IT decisions, due to human biases, conflicts and ignorance.

A popular blog post by Wendy Nather recently pushed back on the offensive security effort, pointing out that things will not get better by continuing to point out what is wrong.  Rather, the way forward is to start fixing things.  My view is that both the offensive and defensive sides are important to the security ecosystem.  Certainly things will NOT get better until we start fixing them.  However, “we” is a limited population.  To tackle the fundamental problems with security, we need to engage the IT industry – not just those people with “security” in their titles.  And we need those that do have “security” in their titles to be more consistently aware of threats.  Focusing solely on defense, as this blog post urges, will yield some short term improvements in some organizations.  However, building consistent awareness of IT security risks, particularly in those people responsible for assessing such risks, should help all organizations not be surprised when Brian Krebs calls them up with unfortunate news.

Pay Attention To Anti-Virus Logs

I’m often quite critical of anti-virus and its poor ability to actually detect most of the viruses that a computer is likely to see in normal operation.  Anti-virus can detect what it can detect, and that means that generally if the AV engine detects malware, the malware was probably blocked from getting a foothold on the computer.  In my experience, that has led to apathy toward anti-virus logs: like firewall logs full of blocked connections, AV logs show you what was successfully blocked.  As I’ve mentioned on my cyber security podcast a number of times, there are a few important reasons to pay attention to those AV logs.

First, AV logs that show detected malware on servers, particularly where the server is not a file server, should prompt some investigation.  Frequently, some of the tools an attacker will try to push to a target server will be caught by an AV engine and deleted or quarantined.  The attacker may have to iterate through a few different tools to find one that is not detected prior to moving forward in the attack.  Paying attention to AV logs in this circumstance provides an opportunity to identify an attack during the early stages.  I’ve seen this technique most effectively used on Internet-facing web servers, where almost any AV detection is bound to be an indication of an active attack.

Second, on workstations, AV detection events will necessarily be more common than on non-interactive servers, due to the nature of email attachments, web browsing, downloads, USB drives and so on.  In this case, it is more reasonable to accept that AV blocked a particular piece of malware, and generally unworkable to chase after each detected event.  However, there are two opportunities to leverage AV logs in this circumstance to shut down infections.  If a particular workstation is detecting many pieces of malware over a relatively short time, this may be an indication that the person using the workstation is doing something inappropriate or that the system has some other undetected malware infection and AV is catching some second-order infection attempt.  In either case, the workstation likely deserves a look.

Additionally, on workstations, certain kinds of malware detection events uncovered during full drive scans should warrant a look at the computer.  Frequently, a piece of malware will not be detected at first, but as other organizations find and submit samples of the malware, AV detection will improve and a previously undetected infection is suddenly detected.

I think it’s important to reiterate that AV is not all that effective at preventing malware infections; however, most of us have significant investments in our AV infrastructure and we ought to be looking for ways to ensure we are getting the best leverage out of the tools that we have deployed in our environments.

Have you found a clever way to use AV?  Post a message below.

Something is Phishy About The Russian CyberVor Password Discovery

If you’re reading this, you are certainly aware of the story of Hold Security’s recent announcement of 1,200,000,000 unique user IDs and passwords being uncovered.

I’m not going to pile on to the stories that assert this is a PR stunt by Hold.  In fact, I think Hold has done some great things in the past, in conjunction with Brian Krebs in uncovering some significant breaches.

However, there are a few aspects of Hold’s announcement that just don’t make sense… At least to me:

The announcement is that 1.2B usernames and passwords were obtained through a combination of pilfering other data dumps – presumably from the myriad of breaches we know of, like eBay, Adobe, and so on, but also from a botnet that ran SQL injection attacks on web sites visited by the users of infected computers which apparently resulted in database dumps from many of those web sites.  420,000 of them, in fact.

That seems like a plausible story.  The SQL injection attack most likely leveraged some very common vulnerabilities – probably in WordPress plugins or in Joomla or something similar.  However, nearly all of the passwords obtained, certainly the ones from the SQL injection attacks, would be hashed in some manner.  Even the Adobe and eBay password dumps were at least “encrypted” – whatever that means.

The assertion is that there were 4.5B “records” found, which netted out to 1.2B unique credentials, belonging to 500M unique email addresses.

I contend that this Russian gang having brute forced 1.2B hashed and/or encrypted passwords is quite unlikely.  The much more likely case is that the dump contains 1.2B email addresses and hashed or encrypted passwords…  Still not a great situation, but not as dire as portrayed, at least for the end users.

If the dump does indeed have actual plain text passwords, which again is not clear from the announcement, I suspect the much more likely source would be phishing campaigns and/or keyloggers, potentially run by that botnet.  However, I believe that Hold would probably have seen evidence if that were the case and would most likely have said as much in the announcement, since it would be an even more interesting story.

Hold is clearly in communication with some of the organizations that records were stolen from, as indicated in the announcement.  What isn’t clear is whether contact was attempted with all of the recognizable organizations, or only the largest, or only those that had a previous agreement in place with Hold.  Certainly Hold has found an interesting niche and is attempting to capitalize on it – and that makes sense to me.  However, it’s going to be a controversial business model, one that requires organizations to pay Hold in order to be notified if or when Hold finds evidence that the organization’s records have been found.  I’m not going to pass judgement yet.

Perspective on the Microsoft Weak Password Report

Researchers at Microsoft and Carleton University released a report that has gotten a lot of attention, with media headlines like “Why 123456 is a great password”.

The report is indeed interesting: mathematically modelling the difficulty of remembering complex passwords and optimizing the relationship between expected loss resulting from a breached account and the complexity of passwords.

The net finding is that humans have limitations on how much they can remember, and that is at odds with the current guidance of using a strong, unique password for each account.  The suggestion is that accounts should be grouped by loss characteristics, with those accounts that have the highest loss potential getting the strongest password, and the least important having something like “123456”.

The findings of the report are certainly interesting, however there seem to be a number of practical elements not considered, such as:

  • The paper seems focused on the realm of “personal use” passwords, however many people have to worry about both passwords for personal use and for “work” use.
  • Passwords used for one’s job usually have to be changed every 90 days, and are expected to be among the most secure passwords a person would use.
  • People generally do not invest much intellectual energy into segmenting accounts into high risk/low risk when creating passwords.  Often, password creation is done on the fly and stands in the way of some larger, short term objective, such as ordering flowers or getting logged in to send an urgent email to the boss.
  • The loss potential of a given account is not always obvious.
  • The loss potential of a given account likely does not remain constant over time.
  • There are many different minimum password requirements across different services that probably work against the idea of using simple passwords on less important sites.  For example, I have a financial account that does not permit letters in the password, and I have created accounts on trivial web forums that require at least 8 character passwords, with complexity requirements.

It’s disappointing that password managers were dismissed by the report’s authors as too risky because they represent a concentration of passwords which could itself fall victim to password guessing attacks when hosted “in the cloud”, leading to the loss of all passwords.  Password managers seem to me to be the only viable alternative for managing the proliferation of passwords many of us need to contend with.  A password manager removes the need to consider the relative importance of a new service and can create random, arbitrarily long and complex passwords on the fly, without the user needing to worry about remembering them – for either important or unimportant accounts.
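To sketch what a password manager does when generating a credential (an illustration of the concept, not any particular product’s implementation), a cryptographically random password takes only a few lines:

```python
import secrets
import string

# Draw each character from a CSPRNG over the full printable set, the way
# a password manager would: nothing memorable, nothing predictable.
def generate_password(length=24,
                      alphabet=string.ascii_letters + string.digits + string.punctuation):
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh 24-character password on every call
```

Because the manager stores the result, the length and character set can be cranked up arbitrarily without any memorization cost to the user.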

Now, not all password managers are created equally.  We recently saw a flurry of serious issues with online password managers.  Certainly diligence is required when picking a password manager, and that is certainly not a simple task for most people.  However, I would prefer to see a discussion on how to educate people on rating password managers than encouraging them to use trivial passwords in certain circumstances.

I don’t mean to be overly critical of the report.  I see some practical use for this research by organizations when considering their password strategies.  Specifically, it’s not reasonable to expect employees to pick strong passwords for a number of business-related accounts and then not write them down, record them somewhere, or create a predictable system of passwords.  It gets worse when those employees are also expected to change their passwords every 90 days and to use different passwords on different systems.  Finally, those same employees also have to remember “strong” passwords for some number of personal accounts, which adds to the burden of remembering ever more strong passwords.

In short, I think that this report highlights the importance of using password managers, both for business and for personal purposes.  And yes, I am ignoring multi-factor authentication schemes which, if implemented properly, would be a superior solution.

Why Changing Passwords Might Be A Good Idea After A Data Breach

During my daily reading today, I found this article titled Why changing passwords isn’t the answer to a data breach.  The post brings up a good point: breached organizations would serve their customers or users better if they gave more useful guidance after a breach, rather than just “change your passwords”.  The idea presented by the author is providing recommendations on how to pick a strong password, rather than simply changing it.

I think the author missed an important point though: it’s proving to be a bad idea to use the same password on different sites, no matter the strength of the password.  Possibly if customers or users had an indication of how the passwords were stored on a given site or service, they could make a judgement call of whether to use their strong password or to create a separate password for that site alone.  However, that’s not the world we live in.  We don’t normally get to know that the site we just signed up for stores passwords in plain text or as an md5 hash with no salt.
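The unsalted-hash problem mentioned above is easy to demonstrate (a toy example for illustration; real systems should use a slow, salted scheme like bcrypt or scrypt rather than MD5 in any form):

```python
import hashlib
import secrets

# Unsalted: every user with the same password gets the same hash, so one
# precomputed lookup table cracks all of them at once.
def md5_unsalted(pw):
    return hashlib.md5(pw.encode()).hexdigest()

# Salted: a random per-user salt makes identical passwords hash differently,
# defeating shared lookup tables (though fast hashes remain brute-forceable).
def md5_salted(pw):
    salt = secrets.token_hex(8)
    return salt + ":" + hashlib.md5((salt + pw).encode()).hexdigest()

print(md5_unsalted("123456") == md5_unsalted("123456"))  # True: identical hashes
print(md5_salted("123456") == md5_salted("123456"))      # False: salts differ
```

This is exactly the information a user would need to decide whether reusing a password at a given site is tolerable, and exactly the information sites never disclose.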

Passwords should be strong AND unique across sites, but those goals are seemingly at odds.  The passwords we see in password dumps are short and trivial for a reason: they are easy to remember!  If we want someone to create a password like this: co%rre£ctho^rseba&tteryst(aple, we have to accept that the average person is either not going to do it because it’s too hard to remember, or if they can remember it, that’ll be their password across sites – until, of course, they hit on a site that won’t accept certain characters.

The “best” answer is some form of multi-factor authentication, though it is by no means perfect.  The major problem with multi-factor authentication is that the services we use have to support it.  The next best thing is a password manager.  Password managers let users create a strong, unique password for each service and don’t require the person to remember multiple hard-to-crack passwords.  Certainly password managers are not perfect, and the good ones tend not to be free, either.

So, I would really like to see a breached organization that lost a password database encourage impacted users to use a strong, unique password on each site and to use a password manager.

Maybe we could see companies buying a year of 1Password or Lastpass* for affected customers rather than a year of credit monitoring.

One last thing that I want to mention: I hear time and again about how bad of an idea it is to pick a passphrase that consists of a series of memorable words, like “correcthorsebatterystaple” as presented in XKCD.  I’ve heard many hypotheses of why this is a bad idea, and the author points out that hashcat can make quick work of such a password.  However, this kind of idea is at the center of a password scheme called “Diceware”.  Diceware creates a passphrase by rolling dice to look up a sequence of words in a dictionary.  It’s not tough to think that “correcthorsebatterystaple” could be the output of Diceware.  And Diceware is indeed quite secure.  The trap I see most people fall into when disputing the approach is focusing on the number of words in the passphrase, with intuition sensibly telling us that there are not all that many ways to arrange 3 or 4 words.  However, when you consider it mathematically, you realize each word should be thought of as just a character – a character in a very large set.  A 12-character password drawn from the full 95-character printable set has 95^12 (~5×10^23) combinations, while a Diceware passphrase of 4 words, drawn from a dictionary of 7,776 words, has 7776^4 (~4×10^15) combinations – about 52 bits of entropy, vastly more than the handful of arrangements a dictionary attack on a few common words assumes, and each additional word multiplies the count by another 7,776.  Hopefully this will put the correcthorsebatterystaple story in a better light.
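The arithmetic is easy to sanity-check directly (the numbers assume the full 95-character printable ASCII set and the standard 7,776-word Diceware list):

```python
import math

# Treat each Diceware word as one "character" drawn from a
# 7,776-symbol alphabet, and compare against random characters.
printable = 95   # printable ASCII characters
words = 7776     # standard Diceware dictionary size

random_12 = printable ** 12   # ~5.4e23 combinations
diceware_4 = words ** 4       # ~3.7e15 combinations
diceware_5 = words ** 5       # ~2.8e19 combinations

print(f"12 random chars: {math.log2(random_12):.1f} bits")   # 78.8
print(f"4 Diceware words: {math.log2(diceware_4):.1f} bits")  # 51.7
print(f"5 Diceware words: {math.log2(diceware_5):.1f} bits")  # 64.6
```

Each word contributes log2(7776) ≈ 12.9 bits, so length in words, not cleverness of the words, is what drives the strength.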

* yes, I’m aware Lastpass just announced some vulnerabilities.

I Think I Was Wrong About Security Awareness Training

Andy and I had a bit of a debate on the usefulness of security awareness training in episode 75 of our podcast. The discussion came up while covering a story about ransom campaigns and how the author recommends amping up awareness training to avoid malware and spear phishing, the two main avenues of attack for these attackers.

I was on the side of there being some benefit and Andy on the side of it not being worthwhile.

The logic goes like this: attackers are becoming so sophisticated, that it isn’t practical to expect a lay person to be able to identify these attacks – technical controls are really the only thing that is going to be effective.

My thinking, at the time, was that awareness training is like anti-virus: you should have it in place to defend against those things that it can, but we all know there are plenty of attacks it won’t stop. I think that is still a reasonable assumption.

However, I’ve since thought about it some, and I think Andy is probably right…

Awareness training is about trying to establish some firewall rules in the minds of people in an organization. There’s an implicit hope that the training will avoid *some* number of attacks and an understanding that it won’t catch all of them.

However, people aren’t wired to be a control point. There is a lot of research that demonstrates this point, notably in Dan Ariely’s “Predictably Irrational” books. Focus, attention, diligence and even ethics are influenced by many factors, and awareness training would need to compete against the fundamental nature of people.

But it’s worse than just not effective, and that is why I think I’m wrong here. Awareness training *is* believed to be a security control by many. Awareness training is mandated by every security standard or framework I can think of, alongside antivirus, firewalls and the like. And because it is viewed as a control, we count on its effectiveness as part of our security program.

At least that is my intuition. I don’t have hard data to back it up, but that would be a pretty enlightening experiment – if it were done correctly, meaning not through an opinion survey.

Educating employees on company policies is clearly necessary. However, it seems that focusing on hard controls rather than awareness education would be a better investment. Those are things like:

  • Two-factor authentication or password managers and crazy password complexity requirements instead of trying to teach what a strong password is
  • Controls to prevent the execution of malware delivered through email instead of how to recognize malicious files
  • Controls to prevent browsing to phishing sites or exploit kits instead of how to recognize suspicious links
  • And so on.

How Bad Administrator Hygiene Contributes To Breaches

I recently wrote about the problems associated with not understanding common attack techniques when designing an IT environment.  I consistently see another factor in breaches: bad hygiene.  This encompasses things such as:

  • Missing patches
  • Default passwords
  • Weak passwords
  • Unmanaged systems
  • Bad ID management practices

My observation is that, at least in some organizations, many of these items are viewed as “compliance problems”.  Administrators often don’t see the linkage between bad hygiene and security breaches. For the most part, these hygiene problems will not enable an initial compromise, though they certainly do from time to time.  What I see much more frequently is that some unforeseen mechanism results in an initial intrusion, such as SQL injection, spear phishing or a file upload vulnerability, and the attacker then leverages bad administrator hygiene to move more deeply into an environment.

Most man-made disasters are not the product of a single problem, but rather a chain of failures that line up just right.  In the same way, many breaches are not the result of a single problem, but rather a number of problems that an attacker can uncover and exploit to move throughout an organization’s systems and ultimately accomplish their objective.

It’s important for network, server, application and database administrators to understand the implications of bad hygiene. Clearly, improving awareness doesn’t guarantee better diligence by those administrators.  However, drawing a clearer linkage between bad hygiene practices and their security consequences, rather than simply raising the ire of some auditors for violating a nebulous policy, should make some amount of improvement.  That is my intuition, anyhow.

Security awareness is a frequently discussed topic in the information security world. Such training is almost exclusively thought of in the context of training hapless users on which email attachments not to open.  Maybe it’s time to start educating the IT department on contemporary security threats and attacker tactics so that they can see the direct connection between their duties and the methods of those attackers.

Threat Modeling, Overconfidence and Ignorance

Attackers continue to refine their tools and techniques, and barely a day goes by without news of some significant breach.  I’ve noticed a common thread through many breaches in my experience of handling dozens of incidents and researching many more for my podcast: the organization has a fundamental misunderstanding of the risks associated with the technology it has deployed and, more specifically, the way in which it was deployed.

When I think about this problem, I’m reminded of Gene Kranz’s line from Apollo 13: “I don’t care what anything was designed to do.  I care about what it can do.”

My observation is that there is little thought given to how things can go wrong with a particular implementation design.  Standard controls, such as user ID rights, file permissions, and so on, are trusted to keep things secure.  Anti-virus and IPS are layered on as supplementary controls, and systems are segregated onto functional networks with access restrictions, all intended to create defense in depth.  Anyone familiar with the technology at hand who is moderately bright can cobble together what they believe is a robust design.  And the myriad of security standards will tend to back them up, by checking the boxes:

  • Firewalls in place? Check!
  • Software up-to-date? Check!
  • Anti-Virus installed and kept up-to-date? Check!
  • User ID’s properly managed? Check!
  • Systems segregated onto separate networks as needed? Check!

And so on.  Until one fateful day, someone notices, by accident, a Domain Admin account that shouldn’t be there.  Or a call comes in from the FBI about “suspicious activity” happening on the organization’s network.  Or the Secret Service calls to say that credit cards the organization processed were found on a carder forum.  And it turns out that many of the organization’s servers and applications have been compromised.

In nearly every case, there was a mix of operational and architectural problems that contributed to the breach.  However, the operational issues seem to be transient: maybe it’s poorly written ASP.net code that allows file uploads, or maybe someone used Password1 as her administrator password, and so on.  But the really serious contributor to the extent of a breach is architectural problems.  This involves things like:

  • A web server on an Internet DMZ making calls to a database server located on an internal network.
  • A domain controller on an Internet DMZ with two-way access to other DCs on other parts of the internal network.
  • Having a mixed Internet/internal server DMZ, where firewall rules govern what is accessible from the Internet.

…And so it goes.  The number of permutations of how technology can be assembled seems nearly infinite.  Without an understanding of how the particular architecture proposed or in place can be leveraged by an attacker, organizations are ignorant of the actual risk to their organization.

For this reason, I believe it is important that traditional IT architects responsible for developing such environments have at least a conceptual understanding of how technology can be abused by attackers.  Threat modeling is also a valuable activity to uncover potential weaknesses, however doing so still requires people who are knowledgeable about the risks.

I also see some value in establishing common “design patterns”, similar to those seen in programming, but at a much higher level, involving networked systems and applications, where well-thought-out designs could be a starting point for tweaking, rather than starting from nothing and trying to figure out the pitfalls of the new design along the way.  I suspect that would be difficult at best, given the extreme variability in business needs, technology choices and other constraints.