JPMC Is Getting Off Easy

News today indicates that the JPMC breach discovered earlier in 2014 was the result of a neglected server that was not configured to require 2FA as it should have been.  That sounds like a pretty simple oversight, right?  Well, not so fast.  A number of other details that previously surfaced paint a more complicated picture.

– First, we know that the breach started via a vulnerability in a web application.

– Next, we know that the breach was only detected after JPMC’s corporate challenge site was compromised, prompting JPMC to examine its other networks for similar traffic, at which point it found the attackers were also on its own systems.

– We also know that “gigabytes” of data on 80 million US households was stolen.

– Finally, we know that the breach extended to at least 90 other servers in the JPMC environment.

Attributing the breach to missing 2FA on a server seems very incomplete.

Certainly we have seen a number of breaches attributed to unmanaged systems, such as Bit9 and BrowserStack. This is why inventory is the #1 critical cyber security control. Without it, we don’t know what needs to be secured.
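As a quick illustration of why inventory matters, here is a minimal sketch in Python of the underlying idea: diff the assets we think we own against what actually answers on the network. The CSV file name, its format and the subnet below are hypothetical, and a real environment would use proper discovery tooling rather than a crude TCP probe.

```python
# A minimal sketch of the idea, not a real discovery tool: diff the assets
# we *think* we own against what actually answers on the network.
# The CSV file name, its format and the subnet below are hypothetical.
import csv
import ipaddress
import socket


def load_inventory(path):
    """Read known asset IPs from a one-column CSV export (hypothetical format)."""
    with open(path, newline="") as f:
        return {row[0].strip() for row in csv.reader(f) if row}


def discover_live_hosts(cidr, port=443, timeout=0.5):
    """Very crude discovery: anything that accepts a TCP connection on `port`."""
    live = set()
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            with socket.create_connection((str(ip), port), timeout=timeout):
                live.add(str(ip))
        except OSError:
            pass  # closed, filtered or nothing there
    return live


if __name__ == "__main__":
    known = load_inventory("asset_inventory.csv")  # hypothetical export
    seen = discover_live_hosts("10.0.20.0/24")     # hypothetical subnet
    for ip in sorted(seen - known):
        print(f"Unmanaged host not in inventory: {ip}")
```

Anything that shows up in the scan but not in the inventory is, by definition, something nobody is patching or configuring.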

We can also add at least these deficiencies to the list:
– An exploitable application vulnerability
– Gigabytes of data exfiltrated without detection
– Attacker and command-and-control activity on 90 different servers going undetected
– Weak configuration management

This isn’t intended to drag JPMC through the mud; rather, it’s to point out that these larger breaches are the unfortunate alignment of a number of control deficiencies rather than a single, simple oversight in configuring a server.

Some Infosec Wins

This is the time of year when bloggers and media publish lists of the biggest breaches of the year, the biggest infosec fails of the year, and so on.  2014 certainly saw a distinguished list of failures.  But I’m feeling optimistic, so I want to write something about infosec wins.  Most of the time we don’t hear about infosec wins, for obvious reasons.  Occasionally we do, though.

Two that come to mind are the recent ICANN breach and the UPS Store breach from earlier in the year.  Both were indeed breached, but both also apparently discovered the breach in a timely manner and reacted to minimize the damage.  These two wins highlight an important capability organizations need to continue to refine: detecting breaches early, rather than relying on a phone call from Brian Krebs.

As my friend and co-host Andy Kakat says, we have to free some of our security staff from the daily grind of “addressing tickets” in order to focus on building these detection capabilities.  Hopefully 2015 will see more infosec wins.


The Legacy Of The Sony Pictures Entertainment Breach

The disclosures by Edward Snowden in 2013 drove a flurry of activity in many companies, much of which centered on keeping confidential information out of the hands of dirty contractors.

Much of enterprise risk management seems, to me at least, to follow the TSA playbook: consider the threat after it manifests itself somewhere, then become fixated on it.

Which leads me to wonder how the Sony Pictures Entertainment (SPE) attack will be ingested by ERM processes at large. Certainly, the threat of losing intellectual property has been a central fixture for many years, but I suspect this will add a new dimension. Information security threats, I suspect, are about to go from being a bothersome potential for lost IP to an existential threat.

The concept of a focused and competent attacker bent on dismantling and destroying the company likely hasn’t been considered very often, but that may now change, which will yield some interesting implications for IT generally. We certainly don’t have all the details about what happened to SPE yet, but it seems highly likely that common tactics were used, which we know from many other venues are very hard to defend against, particularly in large and complex IT environments.

Named Vulnerabilities and Dread Risk

In the middle of my 200-mile drive home today, it occurred to me that the reason Heartbleed, Shellshock and Poodle received so much focus and attention, both within the IT community and generally in the media, is the same reason that most people fear flying: something that Gerd Gigerenzer calls “dread risk” in his book “Risk Savvy: How to Make Good Decisions”.  The concept is simple: most of us dread the thought of dying in a spectacular terrorist attack or a plane crash, which are actually HIGHLY unlikely to kill us, while we have implicitly accepted the risks of the far more common yet mundane things that will almost certainly kill us: car crashes, heart disease, diabetes and so on (at least for those of us in the USA).

These named “superbugs” seem to have a similar impact on many of us: they are probably not the thing that will get our network compromised or data stolen, yet we talk and fret endlessly about them, while we implicitly accept the things that almost certainly WILL get us compromised: phishing, poorly designed networks, poorly secured systems and data, drive-by downloads, completely off-the-radar and unpatched systems hanging out on our network, and so on.  I know this is a bit of a tortured analogy, but similar to car crashes, heart disease and diabetes, these mundane weaknesses are much harder to fix, because addressing them requires far more fundamental changes to our routines and operations.  Changes that are painful and probably expensive.  So we latch on to these rare, high-profile named-and-logo’d vulnerabilities that show up on the 11 PM news and systematically drive them out of our organizations, feeling a sense of accomplishment once that last system is patched.  The systems that we know about, anyhow.

“But Jerry”, you might be thinking, “all that media focus and attention is the reason that everything was patched so fast and no real damage was done!”  There may be some truth to that, but I am skeptical…

Proof-of-concept code for Heartbleed was available nearly simultaneously with its disclosure.  Twitter was alight with people posting the contents of memory they had captured in the hours and days that followed.  There was plenty of time for this vulnerability to be weaponized before most vendors even had patches available, let alone before organizations had implemented them.

Similarly, proof-of-concept code for Shellshock was available right away.  Shellshock, in my opinion and in the opinion of many others, was FAR more significant than Heartbleed, since it allowed execution of arbitrary commands on the system being attacked, and yet there has been only one reported case of an organization being compromised using Shellshock – BrowserStack.  Notably, that attack happened against an old dev server that still hadn’t been patched quite some time after Shellshock was announced.  We know anecdotally that there are other servers out on the Internet that have been impacted by Shellshock, but as far as anyone can tell, these are nearly all abandoned web servers, which appear to have been conscripted into botnets for DDoS purposes.  Not great, but hardly the end of the world.

And then there’s Poodle.  I don’t even want to talk about Poodle.  If someone has the capability to pull off a Poodle attack, they can certainly achieve the same end far more easily using more traditional methods, such as pushing client-side malware or phishing pages.

The Road To Breach Hell Is Paved With Accepted Risks

As the story about Sony Pictures Entertainment continues to unfold and we learn disturbing details, like the now infamous “password” directory, I am reminded of a problem I commonly see: risks assessed and accepted in isolation, and those accepted risks materially contributing to a breach.

Organizations accept risk every day. It’s a normal part of existing. However, a fundamental requirement of accepting risk is understanding the risk, at least to some level. In many other aspects of business operations, risks are relatively clear cut: we might lose our investment in a new product if it flops, or we may have to lay off newly hired employees if an expected contract falls through. IT risk is a bit more complex, because the thing at risk is not well defined. The downside to a given IT tradeoff might appear low; however, in the larger context of other risks and of fundamental attributes of the organization’s IT environment, it could be much more significant.

Nearly all major man-made disasters are the result of a chain of problems that line up in such a way that allows or enables the disaster, not the result of a single bad decision or bad stroke of luck. The most significant breaches I’ve witnessed had a similar set of weaknesses that lined up just so. Almost every time, at least some of the weaknesses were consciously accepted by management. However, managers would almost certainly not have made such tradeoff decisions if they had understood that their decision could lead to such a costly breach.

The problem is compounded when multiple tradeoffs are made that have no apparent relationship with each other, yet are related.

The message here is pretty simple: we need to do a better job of conveying the real risks of a given tradeoff, without overstating them, so that better risk decisions can be made. This is HARD. But it is necessary.

I’m not proposing that organizations stop accepting risk, but rather that they do a better job of understanding what risks they are actually accepting, so management is not left saying: “I would not have made that decision if I knew it would result in this significant of a breach.”

Honey Employees

In between bouts of chasing a POODLE around the yard today, my mind wandered into the realm of honeypots, honey drives and honey records.  I had an idea about creating a fake employee, complete with a workstation, company email account, Facebook page and so on.

The fake employee would exist for the purpose of detecting spear phishing attempts, lateral movement to the workstation, access to the employee’s documents and email accounts, and so on.  Hence the name “honey employee”. This could serve as an early warning system and as a way to keep an eye on the tactics being used by miscreants trying to worm their way in through employees.
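For what it’s worth, here is a rough sketch of one small piece of that idea: watching the honey employee’s mailbox, on the theory that any mail sent to an employee who doesn’t exist deserves a look. The host and account below are hypothetical; in real life the password would come from a vault and the alert would go to your SIEM rather than stdout.

```python
# A rough sketch, assuming the fake account lives on a mail server reachable
# over IMAP. The host and account are hypothetical; in real life the password
# comes from a vault and the alert goes to your SIEM, not stdout.
import email
import imaplib

IMAP_HOST = "mail.example.com"       # hypothetical
HONEY_USER = "jane.doe@example.com"  # the fake employee
HONEY_PASS = "change-me"             # hypothetical; store it securely


def check_honey_mailbox():
    """Any mail addressed to an employee who doesn't exist deserves a look."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(HONEY_USER, HONEY_PASS)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            print(f"ALERT: honey employee received mail from {msg['From']!r}, "
                  f"subject {msg['Subject']!r}")


if __name__ == "__main__":
    check_honey_mailbox()
```

The same pattern extends naturally to honey documents and honey credentials: anything that should never be touched becomes a tripwire.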

Is anyone doing this already?

Day 2: Awareness of Common Attack Patterns When Designing IT Systems

One of the most common traits underlying the worst breaches I’ve seen, and indeed many that are publicly disclosed, is related to external attackers connecting to a server on the organization’s Active Directory domain.

It seems that many an IT architect or Windows administrator is blind to the threat this poses. An application vulnerability, a misconfiguration and so on can give an attacker a foothold from which to essentially take over the entire network.

This is just one example, but it’s a commonly exploited pattern. Staff members performing architecture-type roles really need some awareness and understanding of common attacker tactics in order to intelligently weigh design points in an IT system or network.
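One way to make that threat visible to architects is simply to enumerate it. Below is a hedged sketch, assuming the ldap3 Python library, a reachable domain controller and a known set of DMZ address ranges (all of the names and ranges are hypothetical): it pulls computer objects from Active Directory and flags any that resolve into an internet-facing subnet.

```python
# A hedged sketch, assuming the ldap3 library (pip install ldap3), a reachable
# domain controller and a known DMZ range. All names and addresses below are
# hypothetical; a real query would authenticate rather than bind anonymously.
import ipaddress
import socket

from ldap3 import ALL, Connection, Server

DC = "dc01.corp.example.com"                         # hypothetical DC
BASE_DN = "DC=corp,DC=example,DC=com"                # hypothetical base DN
DMZ_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical DMZ range


def domain_joined_hosts_in_dmz():
    """Return AD computer objects whose names resolve into a DMZ subnet."""
    conn = Connection(Server(DC, get_info=ALL), auto_bind=True)
    conn.search(BASE_DN, "(objectClass=computer)", attributes=["dNSHostName"])
    flagged = []
    for entry in conn.entries:
        host = str(entry.dNSHostName)
        try:
            addr = ipaddress.ip_address(socket.gethostbyname(host))
        except (socket.gaierror, ValueError):
            continue  # name doesn't resolve; skip it
        if any(addr in net for net in DMZ_NETS):
            flagged.append(host)
    return flagged


if __name__ == "__main__":
    for host in domain_joined_hosts_in_dmz():
        print(f"Domain-joined host in the DMZ: {host}")
```

Every host on that list is a potential bridge from an application vulnerability to the domain itself, and that is a conversation worth having at design time.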

Shellshock Highlights Difficulty In Determining Exploitability

We are all familiar with the Shellshock issue in bash.   We know that it’s exploitable via CGI on web servers and through the DHCP client on Linux systems, and that it can bypass restrictions in SSH.  Yesterday, a new spate of techniques was discussed pretty widely: through OpenVPN, SIP and more.

This isn’t necessarily a problem with OpenVPN and SIP.  It is still a problem with bash.  These discoveries should highlight the importance of patching quickly for a problem like Shellshock, rather than assuming our system is safe just because we are not running a web server, are using a static IP and don’t rely on SSH restrictions.  If a system is running OpenVPN, it’s exploitable (when username/password authentication is used).  The broader point, however, is that we could be finding innovative new ways to exploit Shellshock through different services for months.

Just patch bash.  And get on with life.
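If you want a quick way to confirm that a given host’s bash actually is patched, the widely circulated CVE-2014-6271 check works well enough. Here’s a small Python wrapper around it; it’s only a sketch, and it only covers the original Shellshock bug, not the follow-on CVEs.

```python
# A small wrapper around the widely circulated CVE-2014-6271 test: export a
# crafted function definition in the environment and see whether bash runs the
# trailing command while parsing it. This only covers the original Shellshock
# bug, not the follow-on CVEs.
import subprocess


def bash_is_vulnerable():
    env = {"PATH": "/usr/bin:/bin", "x": "() { :;}; echo VULNERABLE"}
    result = subprocess.run(
        ["bash", "-c", "echo shellshock check"],
        env=env,
        capture_output=True,
        text=True,
    )
    return "VULNERABLE" in result.stdout


if __name__ == "__main__":
    print("bash is vulnerable" if bash_is_vulnerable() else "bash looks patched")
```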

Day 1: The Importance Of Workstation Integrity

We are pretty well aware of the malware risks that our users and family members face from spear phishing, watering holes, exploit kits, tainted downloads and so on.

As IT and security people, most of us like to think of ourselves as immune to these threats – we can spot a phish from a mile away. We would never download anything that would get us compromised. But the reality is that it does happen. To us.  We don’t even realize that copy of WinRAR was trojaned. And now we are off doing our jobs. With uninvited visitors watching.  It happens.  I’ve been there to clean up the mess afterward, and it’s not pretty.

The computers that we use to manage IT systems and applications are some of the most sensitive in the average business.  We ought to treat them appropriately.

Here are my recommendations:

  • Perform administrative functions on a PC that is dedicated to the task, not used to browse the Internet, check email or edit documents.
  • Isolate computers used for these administrative functions onto separate networks that have the minimum inbound and outbound access needed.
  • Monitor these computers closely for signs of command and control activity (a rough sketch of one approach follows this list).
  • Consider how to implement similar controls for performing such work from home.
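On the monitoring point, here is a minimal sketch of the idea using the psutil Python library: flag any established outbound connection from the admin workstation to something that isn’t on its short allowlist of management hosts. The addresses are hypothetical, and a real deployment would ship findings to central logging rather than printing to the console.

```python
# A minimal sketch of the monitoring idea, assuming psutil is installed and
# that the allowlist below (jump hosts, patch server, etc.) is yours to define.
# The addresses are hypothetical; on most platforms this needs admin rights,
# and a real deployment would ship findings to central logging.
import psutil

ALLOWED_REMOTES = {"10.0.5.10", "10.0.5.11"}  # hypothetical management hosts


def unexpected_outbound():
    """Flag established TCP connections to anything not on the allowlist."""
    findings = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            if conn.raddr.ip not in ALLOWED_REMOTES:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                findings.append(f"{name} -> {conn.raddr.ip}:{conn.raddr.port}")
    return findings


if __name__ == "__main__":
    for finding in unexpected_outbound():
        print("Unexpected outbound connection:", finding)
```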

What do you do to protect your IT users?

Cyber Security Awareness Month

Tomorrow starts National Cyber Security Awareness Month. Many different organizations will be posting security awareness information to help your employees not get cryptolockered and to help your friends and family keep their private selfies private.

I’m going to take a different path with this site for the month of October. I’m going to talk about security awareness for US – IT and infosec people.

Crazy, right?

I have been working in this field for a long time. I see stunningly bad decisions by IT behind the worst incidents I’ve been involved in. These decisions weren’t malicious, but rather demonstrate a lack of awareness about how spectacularly IT infrastructures can fail when they are not designed well, when we misunderstand the limitations of technology and when we’re simply careless while exercising our administrative authority.