The Road To Breach Hell Is Paved With Accepted Risks

As the story about Sony Pictures Entertainment continues to unfold and we learn disturbing details, like the now infamous “password” directory, I am reminded of a problem I commonly see: risks being assessed and accepted in isolation, and those accepted risks materially contributing to a breach.

Organizations accept risk every day. It’s a normal part of existing. However, a fundamental requirement of accepting risk is understanding the risk, at least to some level. In many other aspects of business operations, risks are relatively clear-cut: we might lose our investment in a new product if it flops, or we may have to lay off newly hired employees if an expected contract falls through. IT risk is more complex, because the thing at risk is not well defined. The downside to a given IT tradeoff might appear low, but in the larger context of other risks and fundamental attributes of the organization’s IT environment, the risk can be much more significant.

Nearly all major man-made disasters are the result of a chain of problems that line up in just the right way, not of a single bad decision or stroke of bad luck. The most significant breaches I’ve witnessed had a similar set of weaknesses that lined up just so. Almost every time, at least some of the weaknesses were consciously accepted by management. However, managers would almost certainly not have made such tradeoff decisions if they understood that their decision could have led to such a costly breach.

The problem is compounded when multiple tradeoffs are made that appear to have no relationship with each other, yet interact.

The message here is pretty simple: we need to do a better job of conveying the real risks of a given tradeoff, without overstating them, so that better risk decisions can be made. This is HARD. But it is necessary.

I’m not proposing that organizations stop accepting risk, but rather that they do a better job of understanding what risks they are actually accepting, so management is not left saying: “I would not have made that decision if I knew it would result in this significant of a breach.”

Day 2: Awareness of Common Attack Patterns When Designing IT Systems

One of the most common traits underlying the worst breaches I’ve seen, and indeed many that are publicly disclosed, involves external attackers compromising a server that is joined to the organization’s Active Directory domain.

It seems that many an IT architect or Windows administrator is blind to the threat this poses. An application vulnerability, a misconfiguration, or the like can give an attacker a foothold from which to essentially take over the entire network.

This is just one example, but it’s a commonly used tactic. Staff members performing architecture-type roles really need some awareness and understanding of common attacker tactics in order to intelligently weigh design points in an IT system or network.

Shellshock Highlights Difficulty In Determining Exploitability

We are all familiar with the Shellshock issue in bash. We know that it’s exploitable via CGI on web servers and through the DHCP client on Linux systems, and that it can bypass restrictions in SSH. Yesterday, a new spate of techniques was discussed pretty widely, through OpenVPN, SIP, and more.

This isn’t necessarily a problem with OpenVPN or SIP. This is still a problem with bash. These discoveries should highlight the importance of patching quickly for a problem like Shellshock, rather than assuming a system is safe just because it isn’t running a web server, uses a static IP, and doesn’t rely on SSH restrictions. If it’s running OpenVPN with username/password authentication, it’s exploitable. The broader point, however, is that we could be finding innovative new ways to exploit Shellshock through different services for months.
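As a quick sanity check, the widely shared test for the original CVE-2014-6271 bug is to stuff a function definition plus a trailing command into an environment variable and see whether bash runs the command on startup. Here is a minimal sketch in Python, assuming bash lives at /bin/bash; note the later follow-on CVEs need their own tests:

    import subprocess

    # CVE-2014-6271: a vulnerable bash executes the trailing command while
    # importing the function definition from the environment.
    env = {"x": "() { :;}; echo VULNERABLE"}
    result = subprocess.run(["/bin/bash", "-c", "echo test"],
                            env=env, capture_output=True, text=True)
    if "VULNERABLE" in result.stdout:
        print("this bash is vulnerable to the original shellshock bug")
    else:
        print("no sign of CVE-2014-6271 (later variants need separate tests)")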

Just patch bash.  And get on with life.

Day 1: The Importance Of Workstation Integrity

We are pretty well aware of the malware risks that our users and family members face from spear phishing, watering holes, exploit kits, tainted downloads and so on.

As IT and security people, most of us like to think of ourselves as immune to these threats – we can spot a phish from a mile away. We would never download anything that would get us compromised. But the reality is that it does happen. To us. We don’t even realize that copy of WinRAR was trojaned. And now we are off doing our jobs. With uninvited visitors watching. It happens. I’ve been there to clean up the mess afterward, and it’s not pretty.

The computers that we use to manage IT systems and applications are some of the most sensitive in the average business. We ought to consider treating them appropriately.

Here are my recommendations:

  • Perform administrative functions on a PC that is dedicated to the task, not used to browse the Internet, check email or edit documents.
  • Isolate computers used for these administrative functions onto separate networks that have the minimum inbound and outbound access needed.
  • Monitor these computers closely for signs of command and control activity (a monitoring sketch follows this list).
  • Consider how to implement similar controls for performing such work from home.
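On the monitoring point, even a crude egress review helps: on a properly locked-down admin network, the list of legitimate destinations is short, so anything else stands out. A minimal sketch, assuming a hypothetical CSV export of outbound connections (admin_egress.csv with source, destination and port columns) and an allowlist you maintain:

    import csv

    # Hypothetical allowlist: the only destinations admin workstations should reach.
    ALLOWED = {("10.0.5.10", "443"), ("10.0.5.11", "22")}

    with open("admin_egress.csv", newline="") as f:
        for row in csv.DictReader(f):
            if (row["destination"], row["port"]) not in ALLOWED:
                # Unexpected egress from an admin box is a candidate C2 indicator.
                print(f"investigate: {row['source']} -> "
                      f"{row['destination']}:{row['port']}")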

What do you do to protect your IT users?

Post Traumatic Vulnerability Disorder

I’ve talked pretty extensively on the Defensive Security Podcast about the differences between patch management and vulnerability management.  We’ve now had two notable situations within six months where a significant vulnerability left a portion of our infrastructure estate exposed to a serious threat.  And no patches.  At least for a while.

In both the Heartbleed and Shellshock cases, the vulnerabilities were disclosed suddenly, and exploit code was readily available, trivial to use, and nearly undetectable (prior to implementing strategies to detect it, at least).

And in both cases, we were stuck wringing our hands, waiting for patches for the systems and applications we use.  In the case of Heartbleed, we sometimes waited for weeks.  Or even months.

Circumstances like Heartbleed and Shellshock highlight the disparity between patch management and vulnerability management.  However, being aware of the difference isn’t informative of the actions we can or should take.  I’ve been thinking about our options in this situation, and I see three broad options to choose from:

1. Accept the risk of continuing to operate without patches until patches are available

2. Disconnect or isolate systems that are vulnerable

3. Something in the middle

…and we may choose a combination of these depending on risks and circumstances.  #1 and #2 are self-explanatory and may be preferable.  I believe that #1 is somewhat perilous, because we may not fully understand the actual likelihood or impact.  However, organizations choose to take risks all the time.

#3 is where many people will want to be.  This is where it helps to have smart people who can quickly figure out how the vulnerability works and how your existing security infrastructure can be mobilized to mitigate the threat.

In the current case of ShellShock, some of the immediate options are to:

1. Implement mod_security rules to block the strings used in the attacks

2. Implement IPS or WAF rules to prevent shell command injection

3. Implement iptables rules to filter out connections containing the strings used in the attacks

4. Increase proactive monitoring, backups and exorcisms on vulnerable systems

…and so on.  So there ARE indeed things that can be done while waiting for a patch.  But they are most often going to be highly specific to your environment and to the defensive capabilities you have in place.
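To make options 1 and 3 concrete, the mitigations circulated when Shellshock broke looked roughly like the following; treat these as sketches to adapt and test rather than drop-in rules, since naive matches on the “() {” marker can false-positive (assumes Apache with mod_security, and an iptables build with the string match module):

    # mod_security (Apache): deny requests carrying the shellshock marker in any header
    SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'possible shellshock attempt'"

    # iptables: drop inbound HTTP packets containing the same marker
    iptables -I INPUT -p tcp --dport 80 -m string --algo bm --string '() {' -j DROP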

Heartbleed and Shellshock should show us that we need to ensure we have the talent and capabilities to intelligently and effectively respond to emerging threats such as these, without having to resort to hand wringing while waiting for patches.

Think about what you can do better next time.  Learn from these experiences.  The next one is probably not that far away.

H/T to my partner in crime, Andy Kalat (@lerg) for the title.

Compliance Isn’t Security, or Why It’s Important To Understand Offensive Techniques

The security vs. compliance debate continues, though I’ve lost track of who is actually still arguing that compliance is security.  Maybe it’s auditors?  Or managers?  This topic comes up a lot for me, both at work and from listeners of my security podcast.

The “compliance” mindset often leads to a belief that if some particular scenario is not prohibited by policy, it must be safe.  After all, if it weren’t safe, it would be explicitly addressed in policy.

I can look around and see compliance thinking behind many breaches: “If it isn’t safe for card readers to pass unencrypted credit card information to a POS terminal, surely PCI DSS would not permit it!”

For many in the information security field, this is a quaint discussion.  In my advanced age, I’m becoming more convinced that “compliance” is not just a fun philosophical debate with auditors, but rather an active contributor to the myriad security issues we wrestle with.

Clearly, the better path to take is one of assessing risks in IT systems and addressing those risks.  However, we must keep in mind that we are often asking for that assessment to be performed by the same people who are apt to accept policy as the paragon of protection.

As we debate the perils of focusing too much on offensive security and not enough on defense, it’s important to point out that, without understanding offensive techniques, our imaginations are hobbled as we evaluate risks to IT systems.

For example, it probably seems reasonable to the average IT person to connect an Internet-facing Windows web server to her organization’s Active Directory domain.  There are no policies or regulations that prohibit such a thing.  In fact, AD provides many great security capabilities, like the ability to centrally provision and remove user IDs.  Without the context of how this situation can be, and indeed often is, leveraged by attackers in significant breaches, the perceived benefits outweigh the risks.

I see value in exposing general IT workers, not just information security personnel, to offensive techniques.  I am not advocating that we teach a Windows administrator how to perform code injection into running processes, but rather that we teach her that code CAN be injected into running processes.  And so on.

To me, this lack of understanding is a major contributor to the issues underlying the “security vs. compliance” discussion.  Organizations spend quite a lot of effort on security awareness programs for employees, but I see almost no focus on educating IT staff on higher order security threats.  I don’t expect much to change until we change this state of affairs.

Human Nature and Cyber Security

This has been a particularly active year for large-scale, public breaches in the news. Next year’s Data Breach Investigations Report from Verizon should provide some context on whether we are experiencing a “shark attack” phenomenon of continued media coverage of each new breach, or if this is really an exceptional year.

Regardless of whether we are trending above average or not, it’s pretty clear that a lot of companies are experiencing data breaches.

Information security is a series of trade-offs: investment vs. security, ease of use vs. security, operational costs vs. security and so on.  This isn’t a new or revolutionary concept.  Groups like SIRA focus on higher order efforts to quantify information risk to inform security strategy, justify investment in security programs and so on.

At a lower level, making intelligent decisions on the trade-offs involved in IT systems projects requires a well-informed assessment of the risks involved.  However, experiments in cognitive psychology and behavioral economics consistently demonstrate that humans have a raft of cognitive biases that impact decision making.  For instance, we are generally overconfident in our knowledge and abilities, and we tend to think about likelihood in the context of what we have personally experienced.  Uncertainty, inexperience, or ignorance about exactly how IT system security can fail may lead to an improper assessment of risk.  If risks are not clearly understood, decisions made using these assessments will not be as accurate as expected.

Douglas Hubbard writes extensively on the topic of “expert calibration” in his book “How To Measure Anything”.  There, calibration involves training experts to more clearly understand and articulate their level of uncertainty when making assessments of the likelihood or impact of events.  While it doesn’t eliminate error from subjective assessments, Mr. Hubbard claims that it demonstrably improves estimates provided by calibrated experts.  This calibration process likely makes these “experts” more aware of their cognitive biases.  Regardless of the exact mechanism, measurably improving estimates used in decision making is a good thing.

Information security could benefit from a similar calibration concept.  Understanding the mechanisms through which IT systems can be breached underpins our ability to make reasonable assessments about the risks and likelihood of a breach in a given environment.

To pick on Target for a minute:

Would a clear understanding of the mechanisms by which the external vendor application could be exploited have changed the decision to have the server authenticate against the company’s Active Directory system?  An application to coordinate the activities of the myriad vendors a company the size of Target has is almost certainly a necessity, but would a better understanding of the ways that a vendor management server could be exploited have made the case for isolating the application from the rest of the Target network, with the tradeoff of higher operational costs?  Clearly, those questions can only be answered by those present when the decision was made.

Daniel Kahneman, in his book “Thinking, Fast and Slow”, describes a cognitive bias he calls the availability heuristic. Essentially, this idea posits that people judge concepts and likelihoods based on their ability to recall something from memory, and if it can’t be recalled, it is deemed unimportant. Similarly, Thomas Schelling, a Nobel Prize-winning economist, wrote:

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nate Silver’s book “The Signal and the Noise” has an excellent chapter on this concept (Chapter 13).

To become calibrated experts who can clearly assess security risks arising from systems, the IT industry would seemingly benefit from a broader understanding of the methods used to penetrate systems and networks.  Certainly this will not “solve” the problem of breaches, but it should help us make better-informed decisions regarding IT security tradeoffs.

Nor does this mean that organizations will or should always choose the least risky or most secure path.  Businesses have to deal with risk all the time and often have to accept risk in order to move forward.  The point here is that organizations are often seemingly not fully cognizant of risks they accept when making IT decisions, due to human biases, conflicts and ignorance.

A popular blog post by Wendy Nather recently pushed back on the offensive security effort, pointing out that things will not get better by continuing to point out what is wrong; rather, the way forward is to start fixing things.  My view is that both the offensive and defensive sides are important to the security ecosystem.  Certainly things will NOT get better until we start fixing them.  However, “we” is a limited population.  To tackle the fundamental problems with security, we need to engage the IT industry – not just those people with “security” in their titles.  And we need those who do have “security” in their titles to be more consistently aware of threats.  Focusing solely on defense, as that blog post urges, will yield some short-term improvements in some organizations.  However, building consistent awareness of IT security risks, particularly in those people responsible for assessing such risks, should help all organizations avoid being surprised when Brian Krebs calls them up with unfortunate news.

Pay Attention To Anti-Virus Logs

I’m often quite critical of anti-virus and its poor ability to actually detect most of the viruses that a computer is likely to see in normal operation.  Anti-virus can detect what it can detect, and that means that, generally, if the AV engine detects malware, the malware was probably blocked from getting a foothold on the computer.  In my experience, that has led to apathy toward anti-virus logs: like firewall logs of blocked connections, AV logs show you what was successfully blocked.  As I’ve mentioned on my cyber security podcast a number of times, there are a few important reasons to pay attention to those AV logs.

First, AV logs that show detected malware on servers, particularly where the server is not a file server, should prompt some investigation.  Frequently, some of the tools an attacker tries to push to a target server will be caught by the AV engine and deleted or quarantined.  The attacker may have to iterate through a few different tools to find one that is not detected before moving forward in the attack.  Paying attention to AV logs in this circumstance provides an opportunity to identify an attack during its early stages.  I’ve seen this technique used most effectively on Internet-facing web servers, where almost any AV detection is bound to be an indication of an active attack.

Second, on workstations, AV detection events will necessarily be more common than on non-interactive servers, due to the nature of email attachments, web browsing, downloads, USB drives and so on.  In this case, it is more reasonable to accept that AV blocked a particular piece of malware, and generally unworkable to chase after each detected event.  However, there are two opportunities to leverage AV logs in this circumstance to shut down infections.  If a particular workstation is detecting many pieces of malware over a relatively short time, this may be an indication that the person using the workstation is doing something inappropriate, or that the system has some other undetected malware infection and AV is catching a second-order infection attempt.  In either case, the workstation likely deserves a look.

Additionally, on workstations, certain kinds of malware detection events uncovered during full drive scans should warrant a look at the computer.  Frequently, a piece of malware will not be detected at first, but as other organizations find and submit samples of the malware, AV detection improves and a previously undetected infection suddenly comes to light.
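These triage patterns are easy to automate once events can be exported from the AV console. Here is a minimal sketch, assuming a hypothetical CSV export named av_events.csv with timestamp, host, role and signature columns; adjust to whatever your AV product actually emits:

    import csv
    from collections import Counter
    from datetime import datetime, timedelta

    with open("av_events.csv", newline="") as f:
        detections = list(csv.DictReader(f))

    # Rule 1: any detection on a server (file servers aside) merits investigation.
    for d in detections:
        if d["role"] == "server":
            print(f"server detection, investigate: {d['host']} - {d['signature']}")

    # Rule 2: a workstation racking up many detections in a short window may
    # indicate risky user behavior or an undetected primary infection.
    window_start = datetime.now() - timedelta(days=7)
    recent = Counter(
        d["host"]
        for d in detections
        if d["role"] == "workstation"
        and datetime.fromisoformat(d["timestamp"]) > window_start
    )
    for host, count in recent.items():
        if count >= 5:
            print(f"workstation {host}: {count} detections this week, take a look")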

I think it’s important to reiterate that AV is not all that effective at preventing malware infections.  However, most of us have significant investments in our AV infrastructure, and we ought to be looking for ways to ensure we are getting the best leverage out of the tools we have deployed in our environments.

Have you found a clever way to use AV?  Post a message below.

Why Changing Passwords Might Be A Good Idea After A Data Breach

During my daily reading today, I found an article titled Why changing passwords isn’t the answer to a data breach.  The post brings up a good point: breached organizations would serve their customers or users better if they gave more useful guidance after a breach, rather than just “change your passwords”.  The author’s idea is to provide recommendations on how to pick a strong password, rather than simply advising users to change it.

I think the author missed an important point, though: it’s proving to be a bad idea to use the same password on different sites, no matter the strength of the password.  Possibly, if customers or users had an indication of how passwords were stored on a given site or service, they could make a judgment call about whether to use their strong password or to create a separate password for that site alone.  However, that’s not the world we live in.  We don’t normally get to know that the site we just signed up for stores passwords in plain text, or as an MD5 hash with no salt.

Passwords should be strong AND unique across sites, but those goals are seemingly at odds.  The passwords we see in password dumps are short and trivial for a reason: they are easy to remember!  If we want someone to create a password like co%rre£ctho^rseba&tteryst(aple, we have to accept that the average person is either not going to do it because it’s too hard to remember, or, if they can remember it, that it will be their password across sites – until, of course, they hit a site that won’t accept certain characters.

The “best” answer is some form of multi-factor authentication, though it is by no means perfect.  The major problem with multi-factor authentication is that the services we use have to support it.  The next best thing is a password manager.  Password managers let users create a strong, unique password for each service without requiring the person to remember multiple hard-to-crack passwords.  Certainly password managers are not perfect, and the good ones tend not to be free, either.

So, I would really like to see a breached organization that lost a password database encourage impacted users to pick a strong, unique password for each site and to use a password manager.

Maybe we could see companies buying a year of 1Password or LastPass* for affected customers rather than a year of credit monitoring.

One last thing that I want to mention: I hear time and again about how bad of an idea it is to pick a passphrase that consists of a series of memorable words, like “correcthorsebatterystaple” as presented in XKCD.   I’ve heard many hypotheses of why this is a bad idea, and the author points out that hashcat can make quick work of such a password.  However, this kind of idea is at the center of a password scheme called “Diceware”.  Diceware creates a passphrase by rolling dice to look up a sequence of words in a dictionary.  It’s not tough to imagine that “correcthorsebatterystaple” could be the output of Diceware.  However, Diceware, done properly, is indeed quite secure.  The trap I see most people fall into when disputing the approach is focusing on the number of words in the passphrase, with intuition sensibly telling us that there are not all that many ways to arrange 3 or 4 words.  However, when you consider it mathematically, you realize each word should be thought of as just a character – a character in a very large set.  A 12-character password drawn from the ~95 printable characters has 95^12 (~5×10^23) combinations.  A Diceware passphrase of 4 words, drawn from a dictionary of 7,776 words, has 7776^4 (~4×10^15) combinations – roughly 52 bits of entropy, far beyond typical human-chosen passwords, and each additional word multiplies the total by another 7,776.  Hopefully this puts the correcthorsebatterystaple story in a better light.
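The arithmetic is easy to sanity-check yourself; a quick sketch:

    import math

    # Treat each Diceware word as one "character" from a 7,776-symbol alphabet.
    diceware_4 = 7776 ** 4    # four random words
    random_12 = 95 ** 12      # twelve characters from the ~95 printable ASCII set

    print(f"4-word Diceware: {diceware_4:.3e} combinations, {math.log2(diceware_4):.1f} bits")
    print(f"12-char random:  {random_12:.3e} combinations, {math.log2(random_12):.1f} bits")
    # Each added word contributes log2(7776) ~ 12.9 bits; six words exceed 77 bits.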

* Yes, I’m aware LastPass just announced some vulnerabilities.