What The Target Breach Should Tell Us

Important new details have been emerging about the Target breach. First came news that Fazio Mechanical, an HVAC company, was the avenue of entry into the Target network, as reported by Brian Krebs.

This started a firestorm of speculation and criticism: that Fazio was remotely monitoring or otherwise accessing the HVAC units at Target stores; that Target had connected those HVAC units to the same networks as its POS terminals; and, by extension, that Target was not complying with the PCI requirement for two-factor authentication for access to the environment containing card data, as evidenced by Fazio’s stolen credentials giving the attackers access to the POS networks.

Fazio Mechanical later issued a statement indicating that they do not perform remote monitoring of Target HVAC systems and that “Our data connection with Target was exclusively for electronic billing, contract submission and project management.”

In a previous post on this story, I hypothesized that the method of entry was a compromised vendor with access to a partner portal, with the attacker leveraging this access to gain a foothold in the network. Based on the description of access in Fazio Mechanical’s statement, this indeed appears to be exactly what happened.

We still do not know how the attacker used Fazio’s access to Target’s partner systems to gain deeper access into Target’s network. Since the point of this post is not to speculate on what Target did wrong, but rather on what lessons we can draw from current events, I will go back to my own hypothetical retail chain, MaliciousCo (don’t let the name fool you; MaliciousCo is a reputable retailer of fine merchandise).

As described in my previous post, MaliciousCo has an extranet which includes a partner portal for vendors to interact with MaliciousCo, such as submitting invoices, processing payments, refunds and work orders. The applications on this extranet are not accessible from the Internet and require authenticated VPN access for entry. MaliciousCo’s IT operation has customized a number of applications used for conducting business with its vendors. Applications such as these are generally not intended to be accessible from the Internet and often don’t get much security testing to identify common flaws, and where security vulnerabilities are identified, patches can take considerable time for vendors to develop and even longer for customers to apply.

In MaliciousCo’s case, the extranet applications are considered “legacy”, meaning there is little appetite and no budget to invest in them, and because they were highly customized, applying security patches would take considerable development effort. Now, MaliciousCo has a robust security program which includes requirements for applying security patches in a timely manner. MaliciousCo’s IT team assessed the risk posed by not patching these applications and determined it to be minimal because of the following factors:

1. The applications are not accessible from the Internet.
2. Access to the extranet is limited to a set of vendors who MaliciousCo’s vendor management program screens for proper security processes.
3. There are a number of key financial controls outside of these applications that would limit the opportunity for financial fraud. An attacker couldn’t simply gain access to the application and start submitting invoices without tripping a reconciliation control point.
4. The applications are important for business, but down time can be managed using normal disaster recovery processes should some really bad security incident happen.

Given the desire to divert IT investment to strategic projects and the apparently small potential for impact, MaliciousCo decides against patching these extranet applications, even as its Internet-accessible applications continue to receive patches. Subsequently, MaliciousCo experiences a significant compromise when an attacker hijacks the extranet VPN account of a vendor. The attacker identified an application vulnerability which allowed a web shell to be uploaded to the server, then exploited an unpatched local privilege escalation vulnerability in the Windows OS hosting the extranet application and used those privileges to collect cached Active Directory credentials for logged-in administrators using a combination of mimikatz and JtR. While the extranet is largely isolated from other parts of the MaliciousCo network, certain network ports are open to internal systems to support functionality like Active Directory. From the compromised extranet application server, the attacker moves laterally, first to an extranet domain controller, then to other servers in the internal network environment. From here, the attacker is able to access nearly any system in the MaliciousCo environment: creating new Active Directory user IDs, establishing alternative methods of access into the MaliciousCo network using reverse-shell remote access trojans, mass-distributing malware to MaliciousCo endpoints, collecting and exfiltrating data, and so on.

MaliciousCo didn’t fully understand the potential impacts resulting from a compromise of its extranet applications when evaluating the security risks associated with those applications.

We don’t yet know what happened in the case of Target, and MaliciousCo is just a story. But this scenario has apparently played out at organizations like DigiNotar, the State of South Carolina and many others.

Why does this happen?

In my view, the problem is largely a failure to understand the capabilities and common tactics of our adversaries, along with an incomplete understanding of the interplay within and between complex IT systems, Active Directory in particular. I intently follow the gory details of publicly disclosed breaches, and it is clear to me that attackers are following a relatively common methodology which often involves:
– gaining initial entry through some mechanism (phishing, web app vulnerability, watering hole)
– stealing credentials
– lateral movement via systems which have connectivity with each other using stolen credentials
– establishing a ‘support infrastructure’ inside the victim network
– establishing persistence on victim systems
– identifying and compromising targets using stolen or maliciously created credentials, or via hijacking standard management tools employed by the victim
– exfiltration (or other malicious action)
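The credential-theft and lateral-movement steps in this methodology tend to leave traces in authentication logs. As a minimal sketch (the hostnames, account names and threshold below are invented for illustration, not drawn from any real incident), one crude detection is to flag accounts that authenticate to an unusual number of distinct hosts:

```python
from collections import defaultdict

# Hypothetical auth events: (account, source_host, dest_host).
# In practice these would come from Windows security logs (e.g. logon
# event 4624) forwarded to a SIEM; this data is illustrative only.
events = [
    ("svc_backup", "websrv01", "websrv01"),
    ("admin_j",    "websrv01", "dc01"),
    ("admin_j",    "dc01",     "fileshare01"),
    ("admin_j",    "dc01",     "possrv01"),
    ("clerk_b",    "pc042",    "fileshare01"),
]

def flag_lateral_movement(events, threshold=2):
    """Flag accounts that authenticate to more than `threshold` distinct
    destination hosts -- a crude lateral-movement signal."""
    dests = defaultdict(set)
    for account, _src, dest in events:
        dests[account].add(dest)
    return {a: sorted(d) for a, d in dests.items() if len(d) > threshold}

print(flag_lateral_movement(events))
# admin_j touches dc01, fileshare01 and possrv01 and gets flagged
```

A real deployment would baseline each account’s normal behavior rather than use a fixed threshold, but even a simple count like this can surface a stolen administrator credential being replayed across servers.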

While we don’t know the details of what happened in the case of Target, it seems quite clear that the attacker was able to move laterally from a partner application server onto networks where POS terminals reside. The specific means by which that happened are not clear, and indeed we may never know for sure.

I believe that we, as defenders, need to better understand the risks posed by situations like this. I am not proposing that such security risks must always require action. Rather, based on my experience in IT, I believe these risks often go unidentified, and so are implicitly accepted due to lack of awareness, rather than consciously evaluated.

In the next post, I cover what we can learn regarding the security of vendors based on what has been disclosed about the Target breach so far.

Thoughts On Avoiding The Complex POS Attacks

I have been watching the Target breach story unfold with great interest. In full disclosure, I have no insight into what has happened at Target, beyond the reports that are publicly available. What follows is purely hypothesis and speculation for the purposes of identifying potential mitigations for what may have happened.

Clearly, we don’t know the precise details of how the attack was carried out; however, there has been a lot of analysis of various aspects of what is known, including this report from SecureWorks. Malcovery has also released a report speculating on the method of entry based on the file hashes provided in an earlier report by iSight. I am most interested in identifying how the attack happened and what can be done to defend against such an attack. The SecureWorks report provides a good high-level list of activities, but not a lot of specificity. For example:

– Firewall ACLs — Access control lists (ACLs) at network borders can be an effective short-term mitigation technique against specific hosts during an active incident when response policies dictate that network traffic to a hostile host be terminated.
– Network segmentation — Organizations should segment PCI networks to restrict access to only authorized users and services.

These, among many others, are great concepts. However, my observation is that these are not deterministic states; rather, they are subjective. What is an “authorized user or service”?

Malcovery believes that the Target attack likely began with a web server being compromised with an SQL injection attack. Let’s assume for a moment this is true of my hypothetical retailer MaliciousCo (oddly, once again the victim). My web server is on a dedicated network segment. But my site is, of course, a web app connected to a database server. My web site needs to connect to my SQL server, but I don’t want my SQL server hanging out on a network that is accessible to the Internet, even if I don’t allow Internet-originated traffic to the SQL server itself. So I put it on an internal network, because I have other business applications and processes that need to access that same database server. Now, I have a legitimate case where my web site and SQL server are authorized to talk to one another. Because I am a diligent architect, I even route the traffic between the web and SQL servers through an IPS. However, I have created a path into my organization from my web server to my internal systems.
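Since the SQL injection theory is only speculation, the defensive point here is generic: the application layer should bind user input as parameters rather than concatenating it into query strings. A minimal illustration using Python’s built-in sqlite3 module (the table, data and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vendors (id INTEGER, name TEXT)")
conn.execute("INSERT INTO vendors VALUES (1, 'Fazio')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
injected = conn.execute(
    "SELECT * FROM vendors WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the payload as a literal value, not SQL.
parameterized = conn.execute(
    "SELECT * FROM vendors WHERE name = ?", (user_input,)
).fetchall()

print(injected)        # returns every row: the filter was bypassed
print(parameterized)   # returns nothing: the payload matched no name
```

Parameterization doesn’t remove the architectural problem described above (the web server can still reach the SQL server), but it closes the most common way in.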

One of the first lessons here, assuming this is the case, is that there should not be ANY connectivity between external server networks and internal networks. The one caveat I would extend is to allow INBOUND traffic from a limited set of internal hosts into the external networks. Outbound traffic from external networks into internal networks is not permitted: not for SQL, not for Active Directory, not for anything. Additionally, outbound traffic to the Internet should be blocked from hosts on Internet-accessible networks too, allowing only inbound connections from the Internet. The exception might be a very specific mechanism for accessing a payment gateway.

Having done this, any intrusion into my web server is contained on the server itself, along with any other systems that might be on that same network, and there is no practical avenue of lateral movement into the innards of my MaliciousCo network.
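A segmentation stance like this can even be audited mechanically. The sketch below is purely illustrative (the zone names, rule format and carve-out are invented, not taken from any real firewall product), but it shows the idea of scanning a rule set for paths that violate the “nothing initiates from the external networks inward” policy:

```python
# Each rule: (src_zone, dst_zone, port). Zone labels are hypothetical.
rules = [
    ("internal", "dmz",        443),   # internal hosts reach the web tier
    ("internet", "dmz",        443),   # inbound from the Internet
    ("dmz",      "internal",  1433),   # web tier to SQL: violates policy
    ("dmz",      "payment_gw", 443),   # explicit payment-gateway exception
]

ALLOWED_FROM_DMZ = {"payment_gw"}  # the single carve-out described above

def policy_violations(rules):
    """Return rules that let DMZ hosts initiate connections anywhere
    other than the explicitly allowed payment gateway."""
    return [
        (src, dst, port)
        for src, dst, port in rules
        if src == "dmz" and dst not in ALLOWED_FROM_DMZ
    ]

print(policy_violations(rules))  # flags the dmz -> internal SQL rule
```

Running a check like this against exported firewall configurations turns the subjective “authorized users and services” question into something a change-control process can enforce.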

Interestingly, in the case of Target, I doubt very highly that the problem involved the main web environment, which includes their online retail operation. We know that the breach didn’t involve the online part of Target’s business. We have also heard Target make reference to a vendor’s credentials being used to commit the breach. At this point, it’s not at all clear exactly what they meant, but I theorize that Target is referring to the BMC Patrol user ID and password seen hard-coded in the POS malware. However, this opens up another line of consideration: extranets, or vendor portals. I have no insight into whether Target actually has such a thing, but my hypothetical mega-retailer MaliciousCo does. This vendor portal is used by vendors to receive orders, submit invoices, communicate shipment information and so on. It is wholly separate from my main web presence. Access to the vendor portal is obtained via an authenticated VPN, and the portal isn’t accessible to the Internet at large.

If one of my suppliers becomes compromised, an attacker might have access to my vendor portal. Since I don’t have any direct control, or even indirect control over my vendor’s security posture (yes, I have them complete a checklist once per year, but we both know this is a Kabuki dance), I opt to treat the vendor portal exactly as I do my Internet sites by isolating them. This effectively restricts the ability of an attacker controlling my vendor portal from lateral movement into my network.

Having said all of this, we don’t actually know how Target was breached. We know that a number of major breaches in the past have happened as a result of SQL injection on web servers. But it’s also possible that the initial attack looked like a Syrian Electronic Army attack, relying on iteratively more sophisticated and deeper spear-phishing attacks. Or maybe it was perpetrated using a watering hole attack on a site of interest to the retail industry – after all, we are hearing that there are many retailers involved. Or maybe it was an attack on ColdFusion running somewhere in their environment. My point is that there are many windows of opportunity. If MaliciousCo does a stellar job of isolating the web environment, determined attackers are going to try another approach to get at my juicy POS terminals.

My POS terminals should be on a strictly isolated network, with all required supporting infrastructure contained on that network. The only exception should be specific access to a payment gateway.

Planning for the failure of other controls, my POS terminals themselves should be well locked down, using application whitelisting to block execution of any unknown software.
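Hash-based whitelisting is the simplest form of that control: execution is denied unless the binary’s hash appears on an approved list. A toy sketch of the deny-by-default logic (the byte strings below stand in for real binaries; actual products hook process creation in the OS rather than checking bytes in Python):

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for approved POS binaries.
ALLOWLIST = {
    hashlib.sha256(b"approved-pos-register-v2.1").hexdigest(),
}

def may_execute(binary_bytes):
    """Deny by default: only binaries whose hash is allowlisted may run."""
    return hashlib.sha256(binary_bytes).hexdigest() in ALLOWLIST

print(may_execute(b"approved-pos-register-v2.1"))  # True
print(may_execute(b"unknown-memory-scraper"))      # False
```

The appeal for POS terminals specifically is that their software load is small and changes rarely, so a tight allowlist is actually maintainable there in a way it isn’t on general-purpose workstations.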

Configuring and isolating environments like this is inefficient, cumbersome and expensive. Our adversaries are clever and highly motivated. I am not proposing that we have to take these drastic and costly precautions; we can continue to optimize the design and operation of our IT environments around the axis of efficiency, but we should not feign surprise when major breaches occur. Breaches are inevitable where we have the intersection of means, opportunity and incentives. We can’t do a lot about the means or incentives variables. But we do control the opportunity variable.

I’ll be following up later with a few more posts about other aspects we can learn from, such as monitoring.

By the way, I am not proposing that I have the only answer to this. This is a thought experiment and I encourage you to post your views, ideas or criticisms in the comments.

Clever New Phishing Lure

I thought I had seen every permutation of phishing lure imaginable.  Then, I looked at my email today and saw this:

[Image: screenshot of the phishing email – a fake funeral notice]

Pretty clever.  As usual, the trick is to overwhelm the skepticism a person might have with curiosity to click on the link.  Nothing says reputable funeral home like justaskfreddie.com.

*** edit January 17, 2014
This tactic is now being reported in more mainstream news sources, like this one.

A Different Perspective On The PTV Website Vulnerability Debacle

The story about Australian teenager Joshua Rogers who identified a vulnerability in the website of the Public Transport Victoria (PTV) and subsequently reported the vulnerability to PTV, and PTV’s now infamous response of reporting Joshua to the police has been well covered.

Critics of PTV’s reaction point out that their behavior is a continuation of the terrible idea of demonizing and criminalizing security research, similar to the story of First State Superannuation and, of course, Weev’s famous dust-up with AT&T, among other similar stories.

The common thinking, as is pointed out in a CSO article on the PTV situation, is that those people with the ability to hack are bad or dangerous, ignoring the fact that many are indeed good people who are trying to help out and earn an honest living.

I suspect that the security community expected PTV to not only fix the vulnerability, which they did, but also to thank Joshua, or even reward him; certainly not report him to the police. This seems sensible, but I’d like to offer a different perspective on how this may have come about.

First, though, I’d like to point out that I am not defending PTV’s actions. I generally find the behavior of companies who respond to above-board reports of vulnerabilities in their products with lawsuits and threats of legal action to be reprehensible and dangerous to society. However, in this case, I see some logic.

In this particular case, PTV was notified that someone had found a vulnerability that allowed access to a database containing customer information, including credit card numbers. This is a serious problem for PTV. I am not aware of the nature of the conversation or communication between PTV and Joshua, but I would bet PTV asked whether Joshua had accessed any records, made any copies of the data or communicated the vulnerability to anyone else. If that did happen, I would expect that Joshua, who appears to be an upstanding person, said no to those questions, assuming the answer really was no.

Would PTV be performing proper due diligence by accepting Joshua’s word? If you were a PTV customer whose information was exposed, or a bank that would have to eat the cost of resulting credit card fraud, or even PTV itself, which might be sued for damages, would you find this acceptable? Or would you rather demonstrate to those stakeholders that you took all reasonable actions by fixing the problem promptly and asking the police to investigate whether data may have been stolen? Wouldn’t it seem more acceptable to let Joshua convince the police that he did not do anything nefarious with the information he had access to?

There is not enough information to know whether this is really what motivated PTV, or whether it was indeed the normal knee-jerk reaction to a dangerous hacker-type defiling its reputation with the supposed misdeed of observing and reporting a security flaw. Time will likely tell.

***Update Jan 13, 2014
Dave Lewis posted the transcript of his interview with Joshua Rogers. The interview sheds more light on the situation, though it’s still hard to tell what PTV’s motivation for involving the police was. What is interesting, though, is that PTV never acknowledged Joshua’s report.


Waking Up To Hardware Threats

For many years, hardware-based attacks were the stuff of hypothetical conversations and security conference presentations. 2013 changed the nature of the game, and as an industry we are waking up to the real threats posed by hardware.

First, we learned about the ability to weaken the random number generators in Intel CPUs during the manufacturing process in a manner that is extremely difficult to detect.  Then, we had a report about a researcher creating malware resident on a video card – of course, the story there was that the researcher created a prototype anti-malware tool to detect malware-laden video cards.  While these were interesting stories, they were essentially more of the same – theoretical attacks.

Then, later in the year, Dragos Ruiu began discussing what he believed to be a potent new piece of malware that he named “badBIOS”. This created a firestorm of speculation, both that Dragos is crazy and that some government has it out for him. A lot was written about why this was and was not possible. badBIOS seemed to represent the worst-case scenario for malware: very hard to detect, persistent across operating system re-installations and able to communicate across air gaps. All with no indication of what the intended purpose of the malware is, if the malware is even real.

Most recently, we learned from leaked NSA documents reported by Der Spiegel that the NSA will intercept shipments of computers destined for target individuals or organizations and “…often seek to place their malicious code in BIOS, software located directly on a computer’s motherboard…” and “… also attack firmware on computer hard drives…” with “…spyware capable of embedding itself unnoticed into hard drives…” We appear to have jumped from the realm of hypothetical and theoretical attacks involving hardware into a world where this is apparently a commonplace and well established practice.

At the same time, we have seen a hardware hacker take control of a Western Digital hard drive and essentially install Linux on the embedded controller, theorizing that such a strategy would work to hide persistent malware or even destroy data on a disk that is being copied, but otherwise allowing normal access to data contained on the drive.

As well, at the annual CCC in Germany, a presentation was delivered on the ability to take control of the embedded microcontroller of SD cards, similarly offering the ability to hide malware or data.

Defending against hardware based attacks is going to be very challenging.  I see a lot of opportunity for security companies to create strategies to attest to the integrity of attached devices, like hard drives, BIOS and SD cards – not just the contents, but the actual controllers, if such a thing is even realistic to accomplish.
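At its simplest, attestation means comparing a freshly dumped firmware image against a known-good baseline hash. The sketch below (with invented byte strings standing in for firmware images) shows only that simplest form; as noted above, a compromised controller can lie about its own contents when asked for a dump, which is exactly why credible attestation likely needs hardware roots of trust rather than software checks alone:

```python
import hashlib

# Hypothetical known-good SHA-256 digest of the firmware image,
# recorded at install time from a trusted source.
baseline = hashlib.sha256(b"vendor-firmware-v1.0").hexdigest()

def firmware_changed(current_image, known_good_digest):
    """Compare a dumped firmware image against the recorded baseline."""
    return hashlib.sha256(current_image).hexdigest() != known_good_digest

print(firmware_changed(b"vendor-firmware-v1.0", baseline))       # False
print(firmware_changed(b"vendor-firmware-v1.0-evil", baseline))  # True
```

The hard part is not the comparison but obtaining an honest image of the controller’s contents, which is where the opportunity for security vendors lies.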

This is an interesting new world that we are waking up to, and I look forward to seeing how our industry will take on the challenges it presents.