Never Let A Serious Cyber Crisis Go To Waste

For nearly a week now, a non-stop parade of news reports has berated the UK’s NHS, which temporarily suspended operations at some hospitals due to WannaCry infections, for its continued use of Windows XP.  An example is this one.  That article points out, as many other news stories do, that Citrix obtained a response to a freedom of information (FoI) request indicating “that 90% of hospitals still had machines running on Windows XP”.  There is no indication of how big the problem really is, though.  If 90% of the hospitals each had a single XP-based ATM, that would be reported the same way as if each of those hospitals ran tens of thousands of XP systems.  Indeed, another report on the survey had this to say:

The trusts that provided details said Windows XP made up a small part of their overall PC estate — one said it was 50 out of 5,000 PCs, for example.

A bit of Googling reveals that Citrix sent an FoI request to 63 of the more than 200 NHS trusts and received responses from 42 (source).  So the “90%” figure describes roughly 38 of the 42 responding trusts, a small and possibly unrepresentative slice of the NHS overall.  The survey was almost certainly commissioned by Citrix for marketing purposes, to help sell more of its product.  We should be wary of that stat.

Now come reports that XP systems really were not commonly infected.  Various discussions on Twitter even indicate that XP SP3 is not susceptible to WannaCry infections.

There are people who are clearly angry with the NHS for its continued use of XP, and that anger is probably well founded.  However, the angry stories tenuously linking XP in 90% of NHS hospitals to the service outages experienced at those hospitals apparently miss the point entirely: if we want to beat up on the NHS about something related to its WannaCry woes, we should go after its failure to patch Windows 7 and/or Windows Server 2008 in a timely manner.

It’s All About The Benjamins… Or Why Details Matter

A team at Carnegie Mellon released a report a few weeks back detailing the results of an experiment to determine how many people would run a suspicious application at different levels of compensation. The report paints a pretty cynical picture of the “average” Internet user, one which generally meshes with our intuition.   Basically, the vast majority of participants ran suspicious code for $1, even though they knew it was dangerous to do so.  Worse, a significant number of participants ran the code for the low, low price of one cent.  This is a dire result, in the same vein as previous research in which subjects gave up passwords for a candy bar.

However, I noticed a potential problem with the report.  The researchers relied on Amazon’s Mechanical Turk service to find participants for the study.  When performing studies like this, it’s important that the population sampled is representative of the population the study intends to draw conclusions about.  If it isn’t, the results will be unreliable when extrapolated to that broader population.

Consider this scenario: I want to estimate the amount of physical activity the average adult in my city gets per day.  So I set up a stand at the entrance to a shopping center that contains a gym, and I survey those who enter the parking lot.  With this methodology, I will not end up with an average amount of physical activity for the city, because I have skewed the numbers by setting up shop near a gym.  I will only be able to estimate the amount of physical activity of the people who frequent this particular shopping center.

The researchers cite a previous study which determined that the “workers” of Mechanical Turk are more or less representative of the average users of the Internet at large based on a number of demographic dimensions, like age, income and gender.

I contend that this is akin to finding that the stores in my hypothetical shopping center draw visitors who are demographically representative of the city as a whole, and even confirming that result in my parking lot survey.  My results would still be unreliable, even though the visitors are, in fact, representative of the city.  Why is that?  Because hours of physical activity is (mostly) orthogonal to the demographic dimensions I checked: income, age and gender.
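To make the shopping center example concrete, here is a minimal simulation of the bias (all numbers are invented for illustration).  Activity level is independent of age, gender and income, but gym members are more active and far more likely to walk past my survey stand, so the sample’s demographics match the city while its activity estimate does not:

```python
import random

random.seed(1)

# Hypothetical city: demographics are independent of gym membership,
# and gym members are simply more active than non-members.
def make_resident():
    return {
        "age": random.choice(["18-34", "35-54", "55+"]),
        "gender": random.choice(["F", "M"]),
        "income": random.choice(["low", "mid", "high"]),
        "gym_member": random.random() < 0.15,  # 15% of the city
    }

def activity_minutes(person):
    # Members average 60 min/day of activity, non-members 20 min/day.
    base = 60 if person["gym_member"] else 20
    return max(0.0, random.gauss(base, 10))

city = [make_resident() for _ in range(100_000)]
true_avg = sum(activity_minutes(p) for p in city) / len(city)

# Surveying at the shopping center with a gym: members are much more
# likely to be sampled, though everyone occasionally shops there too.
sample = [p for p in city if random.random() < (0.8 if p["gym_member"] else 0.1)]
sample_avg = sum(activity_minutes(p) for p in sample) / len(sample)

print(f"True city average: {true_avg:5.1f} min/day")   # roughly 26
print(f"Survey estimate:   {sample_avg:5.1f} min/day")  # roughly 43
```

Checking the sample’s age, gender or income mix against census data would show nothing wrong, because the trait actually driving selection (gym membership) is invisible to those checks.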

In the same fashion, I contend that while the demographics of Mechanical Turk “workers” match those of the average Internet user, the results are similarly unreliable for estimating the behavior of all Internet users.  Mechanical Turk is a concentration of people who are willing to perform small tasks for small amounts of money.  I propose that the findings of the report are only representative of the population of Mechanical Turk users, not of the general population of Internet users.

It seems obvious that the average Internet user would indeed fall victim to this at some price, but we don’t know for sure what percentage and at what price points.

I still find the report fascinating, and it’s clear that someone with malicious intent can go to a marketplace like Mechanical Turk and make money by issuing “jobs” to run pay-per-install malware.

One Weird Trick To Secure Your PCs

Avecto released a report analyzing recent Microsoft vulnerabilities; it found that 92% of the vulnerabilities Microsoft rated Critical were mitigated when the exploit attempt happened on an account without local administrator permissions. Subsequently, there has been a lot of renewed discussion about removing admin rights as a mitigation for these kinds of vulnerabilities.

Generally, I think it’s a good idea to remove admin rights if possible, but there are a number of items to think about which I discuss below.

First, when a user does not have local administrator rights, a help desk person will generally need to perform software installs or other administrative activities remotely on the user’s behalf. This typically involves a support person logging on to the PC using some manner of privileged domain account configured with local administrator rights on the PCs. Once this happens, a cached copy of the login credentials used by the support staff is saved to the PC, albeit in hashed form. Should an attacker gain access to the PC through some form of malware, she may be able to either brute-force the password from that hash or use a pass-the-hash attack, which would grant her far broader permissions on the victim organization’s network than a standard user ID would. Additionally, an attacker who already has a presence on a PC may use a tool such as mimikatz to directly obtain the plain text password of the administrative account.
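As an aside, it is easy to audit which accounts hold local administrator rights on a given PC, which is a useful first step before worrying about cached credentials. Here is a minimal sketch in Python on Windows, shelling out to the built-in net.exe; the parsing is deliberately simplistic and assumes the default English group name:

```python
import subprocess

def local_admins():
    """Return the member list of the local Administrators group."""
    out = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = out.splitlines()
    # Member names appear between the dashed separator line and the
    # trailing "The command completed successfully." message.
    start = next(i for i, l in enumerate(lines) if l.startswith("---")) + 1
    return [l.strip() for l in lines[start:]
            if l.strip() and not l.startswith("The command completed")]

if __name__ == "__main__":
    for member in local_admins():
        # Domain accounts show up as DOMAIN\name; flag broad-scope groups.
        marker = "  <-- review" if "Domain Admins" in member else ""
        print(member + marker)
```

Run fleet-wide, even a crude check like this will surface PCs where highly privileged domain groups have been granted local admin rights.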

You might be thinking: “But if I remove administrator rights, attackers would be very unlikely to gain access to the PC in a manner that lets them steal hashes or run mimikatz, both of which require at least administrator-level access. What gives?”

That is a good question, and it dovetails into my second point. The Avecto report covers vulnerabilities that Microsoft rates Critical; however, most local privilege escalation vulnerabilities I could find are rated only Important. This means that even if you don’t have administrator rights, I can trick you into running a piece of code of my choosing, delivered through an email attachment or via a vulnerability in another piece of software like Java, Flash Player or a PDF reader. My code initially runs with your restricted permissions, but it can then leverage a privilege escalation flaw to obtain administrator or SYSTEM privileges. From there, I can steal hashes or run mimikatz. Chaining exploits in attacks is no longer uncommon, and we shouldn’t consider this scenario so unlikely that it isn’t worth our attention.

I’ll also point out that many organizations don’t patch local privilege escalation flaws quickly, because those flaws tend to carry a lower severity rating and intuitively seem less important to focus on than vulnerabilities rated Critical.

Lastly, many of the high-profile, pervasive breaches in recent history heavily leveraged Active Directory by means of credential theft and subsequent lateral movement using those stolen credentials. The know-how for navigating Active Directory environments through credential stealing is clearly out there.

Removing administrator rights is generally a prudent thing to do from a security standpoint. A spirited debate has raged for years over the economics: does removing administrator rights cost money, in the form of additional help desk staff who must now perform activities users used to do themselves, plus the productivity lost by users who now have to call the help desk? Or is it a net savings, because of fewer malware infections, fewer misconfigurations by users, lower incident response costs, and the associated higher user productivity? Or do the two simply cancel each other out? I can’t add much to that debate, as the economics will be very specific to each organization considering the move.

My recommendations for security precautions to take when implementing a program to remove admin rights are:
1. Prevent Domain Administrator or other accounts with high privileges from logging into PCs. Help desk technicians should be using a purpose-created account which only has local admin rights on PCs, and systems administrators should not be logging in to their own PCs with domain admin rights.
2. Do not disable UAC.
3. Patch local privilege escalation bugs promptly.
4. Use tools like EMET to prevent exploitation of some 0-day privilege escalation vulnerabilities.
5. Disable cached passwords if possible, noting that this isn’t practical in many environments. (This setting, along with UAC from item 2, can be audited with the registry sketch after this list.)
6. Use application whitelisting to block tools like mimikatz from running.
7. Follow a security configuration standard like the USGCB.

Please leave a comment below if you disagree or have any other thoughts on what can be done.

H/T @lerg for sending me the story and giving me the idea for this post.

What The Target Breach Should Tell Us

Important new details have been emerging about the Target breach. First came news that Fazio Mechanical, an HVAC company, was the avenue of entry into the Target network, as reported by Brian Krebs.

This started a firestorm of speculation and criticism: that Fazio was remotely monitoring or otherwise accessing the HVAC units at Target stores; that Target connected those HVAC units to the same networks as its POS terminals; and that, by extension, Target was not complying with the PCI requirement for two-factor authentication for access to the environment containing card data, as evidenced by Fazio’s stolen credentials leading to the attackers gaining access to the POS networks.

Fazio Mechanical later issued a statement indicating that they do not perform remote monitoring of Target HVAC systems and that “Our data connection with Target was exclusively for electronic billing, contract submission and project management.”

In a previous post on this story, I hypothesized that the method of entry was a compromised vendor with access to a partner portal, with the attacker leveraging this access to gain a foothold in the network. Based on the description of access in Fazio Mechanical’s statement, this indeed appears to be exactly what happened.

We still do not know how the attacker used Fazio’s access to Target’s partner systems to gain deeper access into Target’s network. Since the point of this post is not to speculate on what Target did wrong, but rather on what lessons we can draw from current events, I will go back to my own hypothetical retail chain, MaliciousCo (don’t let the name fool you, MaliciousCo is a reputable retailer of fine merchandise). As described in my previous post, MaliciousCo has an extranet which includes a partner portal for vendors to interact with MaliciousCo: submitting invoices, processing payments, refunds and work orders. The applications on this extranet are not accessible from the Internet and require authenticated VPN access for entry.

MaliciousCo’s IT operation has customized a number of the applications used for conducting business with its vendors. Applications such as these are generally not intended to be accessible from the Internet and often don’t get much security testing to identify common flaws; where security vulnerabilities are identified, patches can take considerable time for vendors to develop and even longer for customers to apply. In MaliciousCo’s case, the extranet applications are considered “legacy”, meaning there is little appetite and no budget to invest in them, and because they were highly customized, applying security patches would take a considerable development effort.

Now, MaliciousCo has a robust security program which includes requirements for applying security patches in a timely manner. MaliciousCo’s IT team assessed the risk posed by not patching these applications and determined it to be minimal because of the following factors:

1. The applications are not accessible from the Internet.
2. Access to the extranet is limited to a set of vendors who MaliciousCo’s vendor management program screens for proper security processes.
3. There are a number of key financial controls outside of these applications that would limit the opportunity for financial fraud. An attacker couldn’t simply gain access to the application and start submitting invoices without tripping a reconciliation control point.
4. The applications are important to the business, but downtime can be managed using normal disaster recovery processes should a serious security incident occur.

Given the desire to divert IT investment to strategic projects and the apparently small potential for impact, MaliciousCo decides against patching these extranet applications with the urgency its Internet-facing applications receive. Subsequently, MaliciousCo experiences a significant compromise when an attacker hijacks the extranet VPN account of a vendor. The attacker identifies an application vulnerability which allows a web shell to be uploaded to the server, then exploits an unpatched local privilege escalation vulnerability in the Windows OS hosting the extranet application and uses those privileges to collect cached Active Directory credentials for logged-in administrators, using a combination of mimikatz and JtR. While the extranet is largely isolated from other parts of the MaliciousCo network, certain network ports are open to internal systems to support functionality like Active Directory. From the compromised extranet application server, the attacker moves laterally, first to an extranet domain controller, then to other servers in the internal network. From there, the attacker is able to access nearly any system in the MaliciousCo environment: creating new Active Directory user IDs, establishing alternative methods of access into the network using reverse-shell remote access trojans, mass-distributing malware to MaliciousCo endpoints, collecting and exfiltrating data, and so on.
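One cheap detective control against the “creates new Active Directory user IDs” step in this scenario is to periodically query AD for recently created accounts and reconcile them against legitimate provisioning records. Here is a minimal sketch using the third-party ldap3 package; the server name, service account and base DN are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

from ldap3 import Connection, Server, SUBTREE

# Hypothetical environment; substitute your own DC, account and base DN.
server = Server("dc01.corp.example.com")
conn = Connection(server, user="CORP\\audit-svc", password="...", auto_bind=True)

# AD stores whenCreated in LDAP generalized-time format (YYYYMMDDHHMMSS.0Z).
since = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y%m%d%H%M%S.0Z")

conn.search(
    search_base="dc=corp,dc=example,dc=com",
    search_filter=f"(&(objectCategory=person)(objectClass=user)(whenCreated>={since}))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "whenCreated"],
)

for entry in conn.entries:
    # Anything here that the provisioning system didn't create is suspect.
    print(entry.sAMAccountName, entry.whenCreated)
```

Scheduled daily and diffed against the provisioning ticket queue, even this crude check would surface an attacker-created account within a day.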

MaliciousCo didn’t fully understand the potential impacts resulting from a compromise of its extranet applications when evaluating the security risks associated with those applications.

We don’t yet know what happened in the case of Target, and MaliciousCo is just a story.  But this scenario has apparently played out at organizations like DigiNotar, the State of South Carolina and many others.

Why does this happen?

In my view, the problem is largely a failure to understand the capabilities and common tactics of our adversaries, along with an incomplete understanding of the interplay within and between complex IT systems, Active Directory in particular.  I intently follow the gory details of publicly disclosed breaches, and it is clear to me that attackers are following a relatively common methodology which often involves:
– gaining initial entry through some mechanism (phishing, web app vulnerability, watering hole)
– stealing credentials
– lateral movement via systems which have connectivity with each other using stolen credentials
– establishing a ‘support infrastructure’ inside the victim network
– establishing persistence on victim systems
– identifying and compromising targets using stolen or maliciously created credentials, or via hijacking standard management tools employed by the victim
– exfiltration (or other malicious action)

While we don’t know the details of what happened in the case of Target, it seems quite clear that the attacker was able to laterally move from a partner application server onto networks where POS terminals reside. The specific means by which that happened are not clear and indeed we may never know for sure.

I believe that we, as defenders, need to better understand the risks posed by situations like this.  I am not proposing that such security risks must always require action.  Rather, based on my experience in IT, I believe these risks often go unidentified, and so are implicitly accepted through lack of awareness rather than consciously evaluated.

In the next post, I cover what we can learn regarding the security of vendors based on what has been disclosed about the Target breach so far.