Thoughts on Autosploit

The announcement of Autosploit created quite a stir in the infosec community last week.

Much of the debate centers on the concern that any real use of the tool is likely to be illegal and that it offers no particular security research utility; all the tool serves to do is make it simple for script kiddies to break into our systems.

Many people have rightly pointed out that this tool doesn’t enable anything, in terms of breaking into vulnerable systems, that isn’t already possible.  Said another way, we shouldn’t see the tool as the problem; we should see the vulnerable devices as the problem.  And if the tool can affect your devices because they are vulnerable, a) that’s not the tool’s fault, and b) you’re probably already pwnt or are living on borrowed time.

I think there are two big issues that Autosploit raises that I haven’t seen discussed (not to say someone hasn’t brought them up before me):

  1. Autosploit will likely serve as a framework for future automated exploitation, and using Shodan for targeting effectively allows an attacker to efficiently hit every vulnerable system that is accessible from the Internet and hasn’t blocked Shodan’s scanners.  This means that we should expect the marginal time to compromise of our vulnerable Internet-connected systems to drop precipitously for certain types of vulnerabilities.  (Defenders can flip that same targeting around on themselves; see the Shodan sketch after this list.)
  2. Largely because of #1 above, most of us should probably fear the second order impacts of autosploit possibly more than the first order impacts.  By that I mean even if we are diligent in rapidly patching (or otherwise mitigating) our vulnerable systems,  the ability for the baddies to quickly create new botnets that can be used to perpetrate attacks against other internet infrastructure, like we notably saw with Mirai, creates problems that are much harder and more expensive for us to mitigate than simply patching systems.  And we, unfortunately, don’t get to choose when everyone else patches their systems.
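As an aside for defenders: the same Shodan API that tools like Autosploit lean on can be pointed at your own footprint before someone else does it for you.  Below is a minimal sketch, assuming you have a Shodan API key and the official shodan Python library installed (pip install shodan); the query string is purely illustrative and should be replaced with your own organization name or netblocks.

```python
# shodan_exposure.py -- minimal sketch: enumerate your own Internet-exposed services via Shodan.
# Assumes a valid API key; the query below is a hypothetical example, not a recommendation.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"           # placeholder
QUERY = 'org:"Example Org" port:445'      # illustrative: SMB exposed under your org name

def list_exposed_hosts(api_key: str, query: str) -> None:
    api = shodan.Shodan(api_key)
    results = api.search(query)           # dict with 'matches' and 'total'
    for match in results["matches"]:
        product = match.get("product", "unknown")
        print(f'{match["ip_str"]}:{match["port"]}  {product}')
    print(f'Total results reported by Shodan: {results["total"]}')

if __name__ == "__main__":
    list_exposed_hosts(API_KEY, QUERY)
```

If anything shows up in a query like that, assume the folks running Autosploit-style tooling will find it too.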

Autosploit-style tools are inevitable, and indeed, as some people have pointed out, the technique is not new.  While that is true, Autosploit may well accelerate some of the “innovation” (even for as simple a code base as it is), and that is going to drive us defenders to accelerate our own innovation in turn.  In the long run, tools like Autosploit, which drive attack efficiency, will very likely change the way IT and infosec have to operate, from both a first-order and a second-order defense perspective.

What is with the DoublePulsar hoopla?

During the previous week, a number of security researchers took to scanning the Internet for Windows systems that lack the MS17-010 patch and are infected with the NSA’s DoublePulsar implant.  Results varied from researcher to researcher, from very few up to tens of thousands.  This article from Bleeping Computer indicates about 30,000 systems on the Internet are infected with the implant.

DoublePulsar is a non-persistent piece of malware that hooks into the SMB stack on infected systems, intercepting specially crafted SMB packets to run whatever code is sent to it.  This sort of thing is important in the context of a spying operation, where the objective is to blend in with the background and not raise suspicion.  Here is a great technical write up on DoublePulsar, in case you are interested in that sort of thing.

Here’s where I will probably get you shaking your fist at me: DoublePulsar is not the problem here.  Counting DoublePulsar-infected systems is interesting, but really isn’t that informative.  The reboot required when applying the MS17-010 patch drops DoublePulsar like a bad habit.  A system that is vulnerable to MS17-010 is susceptible to all manner of malware infections.  DoublePulsar itself is just a means by which to run other commands on an infected system and does not have its own nefarious instructions.  Metasploit, among many other offensive tools, has support for MS17-010 which allows implanting arbitrary payloads.  Here is a video of someone using MS17-010 to install Meterpreter on a vulnerable system.

In my view, vulnerable Windows systems exposed to the Internet after April 14 are likely infected by something and should be rebuilt.

One final note: WHY, OH WHY, are there over 5.5 million Windows systems with port 445 exposed to the Internet?

Central Banks and Used Switches

Salacious headlines are making the rounds, complete with possibly the worst stock hacker picture ever, indicating that the $81 million theft from the Central Bank of Bangladesh was pretty easy to pull off because the bank used “second hand routers”, and implying that the bank employed no firewall.

The money was stolen when criminals hijacked the SWIFT terminal(s) at the Central Bank of Bangladesh and proceeded to issue transfers totaling $1 billion to foreign bank accounts.  Fortunately, most of the transactions were cancelled after the attackers apparently made a spelling mistake in the name of one of the recipients.

We don’t know all that much about how the crime really happened.  A Reuters story gives a little more detail, but not much more, based on comments from an investigator.

What we know is the following:

  1. The Central Bank of Bangladesh has 4 “servers” that it keeps in an isolated room with no windows on the 8th floor of its office.
  2. Investigators commented that these 4 servers were connected to the bank network using second hand, $10 routers or switches (referred to as both in various sources).
  3. Investigators commented that the crime would have been more difficult if a firewall had been in place.

And so we end up with a headline that reads “Bank with No Firewall…” and “Bangladesh Bank exposed to hackers by cheap switches, no firewall: police”.

The implication is that the problem arose from the quality of the switches and the lack of a firewall.  Those factors are not the cause of the problem.  This bank could have spent a few thousand dollars on a managed switch, and a few tens of thousands on a fancy next gen firewall from your favorite vendor, and almost certainly they would have been configured in a manner that still let the hack happen.  If an organization does not have the talent and/or resources to design and operate a secure network, as is apparently the case here, then it will end up with the fancy managed switch configured as a dumb switch and a firewall policy that lets all traffic through in both directions.  We are pointing the finger at the technology used, but the state of the technology is a symptom, not the problem.

We can infer from the story that the four SWIFT servers in the isolated room are attached to a cheap 5 or 10 port switch, plugged into a jack that connects those systems to the broader, probably flat, bank network.  I strongly suspect that the bank does indeed have a firewall at its Internet gateway, but there was very likely nothing sitting between the football-watching, horoscope-checking, phishing-link-clicking masses of bank employee workstations and those delicious SWIFT terminals in the locked room*.  Or maybe the only place to browse the Internet in private at the bank is from the SWIFT terminals themselves.  After all, the room is small, locked and has no windows**.

It doesn’t take expensive firewalls or expensive switches to protect four systems in a locked room.  But, we apparently think of next gen firewalls as the APT equivalent of my tiger repellent rock***.

*I have no idea if they really do this, but it happens everywhere else, so I’m going with it.

** I have no idea if they did this, either, but I know people who would have done it, were the opportunity available to them.

***Go ahead and laugh.  I’ve NEVER been attacked by a tiger, though.

On The Sexiness of Defense

For years now, defenders have been trying to improve the perception of defense relative to offense in the IT security arena.  One only has to look at the schedule of talks at the average security conference to see that offense still draws the crowds.  I’ve discussed the situation with colleagues, who also point out that much of the entrepreneurial development in information security is on the offense/red team side of the fence.

That made me reflect on the many emails I receive from listeners of the security podcast I co-host.  Nearly everyone who has asked for advice was looking for guidance on getting into the offensive side of security.

I’ve been pondering why that is, and I have a few thoughts:

Offense captures the imagination

Let’s face it, hacking things is pretty cool.  Many people have pointed out that hackers are like modern-day witches, at least as viewed by some of the political establishment.

Offense is about technology.  We LOVE technology.  And we love to hate some of the technology.

Also, offensive activities make for great stories and conference talks, and can often be pretty easily demonstrated in front of an audience.

Offense has a short cycle time

From the perspective of starting a security business, the cycle time for developing an “offering” is far shorter than for a more traditional security product or service.  The service simply relies on the abilities and reputation of the people performing it.  I, of course, do not mean to downplay the significant talent and countless hours of experience such people have; I am pointing out that by the time such a venture is started, these individuals already possess much of the talent, as opposed to needing to go off and develop a new product.

Offense is deterministic (and rewarding)

Penetrating a system is deterministic; we can prove that it happened.  We get a sense of satisfaction.  Getting a shell probably gives us a bit of a dopamine rush (this would be an interesting experiment to perform in an MRI, in case anyone is looking for a research project).

We can talk about our offensive conquests

Those on offense are often able to discuss the details of their successes publicly, as long as certain information is obscured, such as the name of a customer.

If you know how to break it…

You must know how to defend it.  My observation is that many organizations seek out offense to help improve their defense.

…And then there is defense

Defense is more or less the opposite of the above.  If we are successful, there’s often nothing to say, at least nothing that would captivate an audience.  If we aren’t successful, we probably don’t want to talk about it publicly.  Unlike many people on the offense side, defenders are generally employees of the organization they defend, so if I get up and talk about my defensive antics, everyone will implicitly know which company the activity happened at, and my employer would not approve of such disclosure.  Defense is complicated and often relies on the consistent functioning of a mountain of boring operational processes, like patch management, password management, change management and so on.

Here’s what I think it would take to make defense sex[y|ier]

What we need, in my view, is to apply the hacker mindset to defensive technology.  For example, a script that monitors suspicious DNS queries and automatically initiates some activities such as capturing the memory of the offending device, moving the device to a separate VLAN, or something similar.  Or a script that detects outbound network traffic from servers and performs some automated triage and/or remedial activity.  And so on.
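To make the first idea concrete, here is a minimal sketch, assuming a resolver or sensor log with lines of the form “timestamp client_ip queried_domain” (adjust the parsing to whatever your resolver actually emits).  The quarantine() function is a hypothetical placeholder for whatever response your environment supports: a NAC call to move the port to an isolation VLAN, an EDR-triggered memory capture, or simply opening a ticket.

```python
# dns_watch.py -- minimal sketch: flag suspicious DNS queries and trigger a canned response.
import math
from collections import Counter

BLOCKLIST = {"evil.example", "c2.example"}   # illustrative indicators, not a real feed

def shannon_entropy(label: str) -> float:
    """Rough DGA heuristic: high-entropy hostnames deserve a second look."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def suspicious(domain: str) -> bool:
    if domain in BLOCKLIST:
        return True
    first_label = domain.split(".")[0]
    return len(first_label) >= 12 and shannon_entropy(first_label) > 3.5

def quarantine(client_ip: str, domain: str) -> None:
    # Placeholder: integrate with your NAC/switch API, EDR, or ticketing system here.
    print(f"[ACTION] would isolate {client_ip} after query for {domain}")

def watch(log_path: str) -> None:
    with open(log_path) as fh:
        for line in fh:
            try:
                _ts, client_ip, domain = line.split()
            except ValueError:
                continue                      # skip malformed lines
            if suspicious(domain.lower().rstrip(".")):
                quarantine(client_ip, domain)

if __name__ == "__main__":
    watch("dns_queries.log")                  # hypothetical log path
```

The point isn’t the specific heuristics; it’s that the glue code is not hard, and the response can be as conservative or aggressive as your environment tolerates.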

Certainly there are pockets of this happening, but not enough.  That is a bit surprising, too, since I would think such “defensive hackers” would be highly sought after by organizations looking to make significant enhancements to their security posture.

Having said all of that, I continue to believe that defenders benefit from having some level of understanding of offensive tactics – it is difficult to construct a robust defense if we are ignorant of the TTPs that attackers use.

Cyber Introspection: Look at the Damn Logs

I was talking to my good friend Bob today about whatever came of Dick Cheney’s weather machine when he interrupted with the following question:

“Why, as a community, are we constantly seeking better security technology when we aren’t using what we have?”

Bob conveyed the story of a breach response engagement he worked on for a customer involving a compromised application server.  The application hadn’t been patched in years and had numerous vulnerabilities waiting for anyone with an inclination to exploit them.  And exploited it was.  The server was compromised for months prior to being detected.

The malware dropped on the server for persistence and other activities was indeed sophisticated.  There was no obvious indication that the server had been compromised.  System logs were cleared from the time of the breach and subsequent logs had nothing related to the malicious activity on the system.

A look at the logs from a network IDS sensor which monitors the network connecting the server to the Internet showed nearly no alerts originating from that server until the suspected date of the intrusion, as determined by forensic analysis of the server.  On that day, the IDS engine started triggering many, many alerts as the server was attempting to perform different activities such as scanning other systems on the network.

But no one was watching the IDS alerts.

The discussion at the client quickly turned to new technologies to stop such attacks in the future and to allow fast reaction if another breach were to happen.

But no one talked about more fully leveraging the components already in place, like the IDS logs.  An IDS is an imperfect system that requires care and feeding (people); clearly an inferior option when compared to installing a fancy new advanced attack detection product.
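Even a crude daily roll-up of the existing alerts would have surfaced that server.  Here is a minimal sketch, assuming the IDS alerts can be exported as a CSV with date, src_ip and signature columns (ISO dates; adapt the parsing to whatever your sensor actually produces):

```python
# ids_alert_watch.py -- minimal sketch: flag hosts whose daily IDS alert count spikes
# far above their usual baseline.  Assumes a CSV export with date,src_ip,signature columns.
import csv
from collections import Counter, defaultdict

def alerts_per_host_per_day(csv_path: str):
    counts = defaultdict(Counter)             # {src_ip: Counter({date: alert_count})}
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            counts[row["src_ip"]][row["date"]] += 1
    return counts

def flag_spikes(counts, floor: int = 50) -> None:
    """Print hosts whose most recent day is well past their historical average."""
    for host, per_day in counts.items():
        days = sorted(per_day)                # works with ISO-formatted dates
        baseline = sum(per_day[d] for d in days[:-1]) / max(len(days) - 1, 1)
        latest = per_day[days[-1]]
        if latest > max(floor, 5 * baseline):
            print(f"{host}: {latest} alerts on {days[-1]} (baseline ~{baseline:.0f}/day)")

if __name__ == "__main__":
    flag_spikes(alerts_per_host_per_day("ids_alerts.csv"))   # hypothetical export path
```

It is not elegant, but it is the kind of thing the client could have had running the same afternoon.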

I previously wrote a similar post a while back regarding AV logs.

Why are we so eager to purchase and deploy yet more security solutions, which are undoubtedly imperfect and also undoubtedly require resources to manage, when we are often unable to get full leverage from the tools we already have running?

Maybe we should start by figuring out how to properly implement and manage our existing IT systems, infrastructure and applications.  And watch the damn logs.

JPMC Is Getting Off Easy

News today indicates that the JPMC breach, which was discovered earlier in 2014, was the result of a neglected server not being configured to require 2FA as it should have been.  That was a pretty simple oversight, right?  Well, not so fast.  There are a lot of other details that previously surfaced which paint a more complicated picture.

– First, we know that the breach started via a vulnerability in a web application.

– Next, we know that the breach was only detected after JPMC’s corporate challenge site was breached and JPMC started examining other networks for similar traffic, finding that the attackers were also on its systems.

– We also know that “gigabytes” of data on 80 million US households was stolen.

– Finally, we know that the breach extended to at least 90 other servers in the JPMC environment.

Attributing the breach to missing 2FA on a server seems very incomplete.

Certainly we have seen a number of breaches attributed to unmanaged systems, such as Bit9 and BrowserStack. This is why inventory is the #1 critical cyber security control. Without it, we don’t know what needs to be secured.

We can also include at least the following:
– An application vulnerability as the point of entry
– Gigabytes of data exfiltrated without detection
– Hacker activity and command-and-control traffic on 90 different servers, undetected
– Configuration management (the neglected server missing its 2FA requirement)

This isn’t intended to drag JPMC through the mud; rather, it’s to point out that these larger breaches are the unfortunate alignment of a number of control deficiencies rather than a single, simple oversight in configuring a server.

The Elephant In The Room With Preventing DDOS Attacks

DDOS attacks have been a regular fixture in infosec news for some time now.  Primarily, those attacks have used open DNS resolvers, though recently NTP flared up as a service of choice.  In a pretty short amount of time, the community dramatically reduced the number of NTP servers susceptible to being abused in DDOS attacks.  However, both open resolvers and NTP continue to be a problem, and other services, like SNMP, are likely to be targeted in the future.

One common theme is that these services are UDP-based, and so it’s trivial to spoof a source IP address and get significant traffic amplification directed toward the victims of these DDOS attacks.

While I think it’s necessary to focus on addressing the open resolver problem, NTP and similar issues, I’m very surprised that we, as a community, are not pushing to have ISPs implement a very basic control that would dramatically restrict these kinds of attacks: simple source address egress filtering.

Yes, this likely puts additional load on routers or firewalls, but it’s pretty basic hygiene not to allow packets out of a network with a source address that the ISP does not announce routes for.  I am sure there are some edge case exceptions, such as with asymmetric routing, but those should be manageable between the customer and the ISP.
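The check itself is almost embarrassingly simple.  Here is a minimal sketch of the BCP 38-style logic, with an illustrative prefix list standing in for whatever address space the ISP actually announces for that customer:

```python
# egress_filter.py -- minimal sketch of source-address egress filtering (BCP 38):
# only forward packets whose source address falls within prefixes this network originates.
from ipaddress import ip_address, ip_network

# Illustrative documentation prefixes; substitute the customer's real allocations.
ANNOUNCED_PREFIXES = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

def source_is_legitimate(src_ip: str) -> bool:
    addr = ip_address(src_ip)
    return any(addr in prefix for prefix in ANNOUNCED_PREFIXES)

def egress_decision(src_ip: str) -> str:
    return "forward" if source_is_legitimate(src_ip) else "drop (spoofed source)"

if __name__ == "__main__":
    for ip in ("198.51.100.7", "192.0.2.55"):
        print(f"{ip}: {egress_decision(ip)}")
```

In practice this is an ACL or uRPF setting on the edge router rather than a script, but the decision being made is exactly this one.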

So, each time we hear about a DDOS attack and ponder the pool of poorly configured DNS servers, I propose that we should also be pondering the ISPs who allow traffic out of their networks with a source address that is clearly spoofed.

Lessons From The Neiman Marcus Breach

Bloomberg released a story about a forensic report from Protiviti detailing the findings of their investigation into the Neiman Marcus breach. There are very few details in the story, but what is included is quite useful.

First, Protiviti asserts that the attackers who breached Neiman do not appear to be the same as those who breached Target. While this doesn’t seem like a major revelation to many of us, it does point out that there are numerous criminals with the ability to successfully pull off such attacks. And from this, we should consider that these “sophisticated” attacks are not all that hard to perpetrate given the relative reward.

Next, Protiviti was apparently unable to determine the method of entry used by the attackers. While that is unfortunate, we should not solely focus on hardening our systems against initial attack vectors, but also apply significant focus to protecting our important data and the systems that process and store that data. Criminals have a number of options to pick from for initial entry, such as spear phishing and watering hole attacks. We need to plan for failure when we design our systems and processes.

The activities of the attackers operating on Neiman systems apparently created nearly 60,000 “alerts” during the time of the intrusion.  It is very hard to draw specific conclusions because we don’t actually know what kind of alerts are being referenced.  I am going to speculate, based on other comments in the article, that the alerts were from anti-virus or application whitelisting:

…their card-stealing software was deleted automatically each day from the Dallas-based retailer’s payment registers and had to be constantly reloaded.

…the hackers were sophisticated, giving their software a name nearly identical to the company’s payment software so that any alerts would go unnoticed amid the deluge of data routinely reviewed by the company’s security team.

The company’s centralized security system, which logged activity on its network, flagged anomalous behavior of a malicious software program though it didn’t recognize the code itself as malicious or expunge it, according to the report. The system’s ability to automatically block the suspicious activity it flagged was turned off because it would have hampered maintenance, such as patching security holes, the investigators noted.

The 59,746 alerts set off by the malware indicated “suspicious behavior” and may have been interpreted as false positives associated with legitimate software.

However, some of these comments are a bit contradictory. For instance:

payment registers and had to be constantly reloaded

And

it didn’t recognize the code itself as malicious or expunge it

In any event, a key takeaway is that we often have the data we need to detect that an attack is underway.

Next is a comment that highlights a common weakness I covered in a previous post:

The server connected both to the company’s secure payment system and out to the Internet via its general purpose network.

Servers that bridge network “zones”, as this Neiman server apparently did, are quite dangerous and exploitation of them tends to be one of the common traits of many breaches. Such systems should be eliminated.

Finally, a very important point from the story to consider is this:

The hackers had actually broken in four months earlier, on March 5, and spent the additional time scouting out the network and preparing the heist…

This should highlight for us the importance of a robust ability to detect malicious activity on our networks and systems.  While some attacks will start and complete before anyone could react, many of these larger, more severe breaches tend to play out over a period of weeks or months.  This has been highlighted in a number of industry reports, such as the Verizon DBIR.

One Weird Trick To Secure Your PCs

Avecto released a report which analyzed recent Microsoft vulnerabilities and found that 92% of all critical vulnerabilities reported by Microsoft were mitigated when the exploit attempt happened on an account without local administrator permissions.  Subsequently, there has been a lot of renewed discussion about removing admin rights as a mitigation for these kinds of vulnerabilities.

Generally, I think it’s a good idea to remove admin rights if possible, but there are a number of items to think about which I discuss below.

First, when a user does not have local administrator rights, a help desk person will generally need to remotely perform software installs or other administrative activities on the user’s behalf.  This typically involves a support person logging on to the PC using some manner of privileged domain account which has been configured to have local administrator rights on the PCs.  Once this happens, a cached copy of the login credentials used by the support staff is saved to the PC, albeit in hashed form.  Should an attacker be able to obtain access to a PC using some form of malware, she may be able to either brute force the password from the hash or use a pass-the-hash attack, which would grant the attacker far broader permissions on the victim organization’s network than a standard user ID would.  Additionally, an attacker who already has a presence on a PC may use a tool such as mimikatz to directly obtain the plain text password of the administrative account.

You might be thinking “but if I remove administrator rights, attackers would be very unlikely to gain access to the PC in a manner that lets them steal hashes or run mimikatz, both of which require at least administrator-level access.  What gives?”

That is a good question, and it dovetails into my second point.  The Avecto report covers vulnerabilities which Microsoft deems critical in severity.  However, most local privilege escalation vulnerabilities I could find are rated only Important by Microsoft.  This means that even if you don’t have administrator rights, I can trick you into running a piece of code of my choosing, such as one delivered through an email attachment, or via a vulnerability in another piece of software like Java, Flash Player or a PDF reader.  My code would initially run with your restricted permissions, but it could then leverage a privilege escalation flaw to obtain administrator or SYSTEM privileges.  From there, I can steal hashes or run mimikatz.  Chaining exploits in attacks is not all that uncommon any longer, and we shouldn’t consider this scenario so unlikely that it isn’t worth our attention.

I’ll also point out that many organizations don’t quickly patch local privilege escalation flaws, because they tend to carry a lower severity rating and they intuitively seem less important to focus on, as compared to other vulnerabilities which are rated critical.

Lastly, many of the recent high profile, pervasive breaches in recent history heavily leveraged Active Directory by means of credential theft and subsequent lateral movement using those stolen credentials. This means that the know-how for navigating Active Directory environments through credential stealing is out there.

Removing administrator rights is generally a prudent thing to do from a security standpoint.  A spirited debate has been raging for years about the economics: does removing administrator rights cost money, in the form of additional help desk staff who now have to perform activities users used to do themselves, plus the related productivity loss for users who now have to call the help desk?  Or is it a net savings, because there are fewer malware infections, fewer misconfigurations by users, lower incident response costs, and correspondingly higher user productivity?  Or do the two simply cancel each other out?  I can’t add a lot to that debate, as the economics are going to be very specific to each organization considering removing administrator rights.

My recommendations for security precautions to take when implementing a program to remove admin rights are:
1. Prevent Domain Administrator or other accounts with high privileges from logging into PCs. Help desk technicians should be using a purpose-created account which only has local admin rights on PCs, and systems administrators should not be logging in to their own PCs with domain admin rights.
2. Do not disable UAC.
3. Patch local privilege escalation bugs promptly.
4. Use tools like EMET to prevent exploitation of some 0day privilege escalation vulnerabilities.
5. Disable cached passwords if possible, noting that this isn’t practical in many environments (a quick audit sketch follows this list).
6. Use application whitelisting to block tools like mimikatz from running.
7. Follow a security configuration standard like the USGCB.
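On item 5, here is a minimal sketch of how you might audit the cached-credential setting on a Windows machine with Python, assuming the standard CachedLogonsCount value under the Winlogon registry key (where current Windows versions keep it):

```python
# audit_cached_logons.py -- minimal sketch (Windows only): report how many domain logons
# this machine is configured to cache locally.
import winreg

WINLOGON = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

def cached_logons_count() -> int:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON) as key:
        value, _value_type = winreg.QueryValueEx(key, "CachedLogonsCount")
        return int(value)                     # stored as a REG_SZ string, e.g. "10"

if __name__ == "__main__":
    count = cached_logons_count()
    if count > 0:
        print(f"CachedLogonsCount is {count}; cached domain credentials are stored on this PC.")
    else:
        print("Credential caching is disabled on this machine.")
```

Rolling the setting out is, of course, a Group Policy exercise rather than a scripting one; the sketch is just a quick way to verify what a given machine is actually doing.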

Please leave a comment below if you disagree or have any other thoughts on what can be done.

H/T @lerg for sending me the story and giving me the idea for this post.

What The Target Breach Can Teach Us About Vendor Management

A recent report by Brian Krebs identified that Fazio Mechanical, the HVAC company that was compromised and used to attack Target, was breached through an “email attack” which allegedly stole Fazio’s credentials.

In my weekly security podcast, I rail pretty hard on workstation security, particularly for those systems which have access to sensitive systems or data, since attacking the workstation has become a common method for our adversaries. And hygiene on workstations is generally pretty terrible.

However, I am not going to pick on that aspect in this post. I want to explore this comment in Krebs’ story:
“But investigators close to the case took issue with Fazio’s claim that it was in full compliance with industry practices, and offered another explanation of why it took the Fazio so long to detect the email malware infection: The company’s primary method of detecting malicious software on its internal systems was the free version of Malwarebytes Anti-Malware.”

This assertion seems to be hearsay from unnamed sources rather than established fact, although Krebs’ sources have tended to be quite reliable and accurate in the past.  I am going to focus on a common problem I’ve seen, which is potentially demonstrated by this case.

I am not here to throw rocks at Target or Fazio. Both are victims of a crime, and this post is intended to be informative rather than making accusations, so I will go back to the fictitious retailer, MaliciousCo, for this discussion. We know that MaliciousCo was compromised through a vendor who was itself compromised. As I described in the last post, MaliciousCo has a robust security program, which includes vendor management. Part of the vendor management program includes a detailed questionnaire which is completed annually by vendors. A fictitious cleaning company, JanitorTech, was compromised and led to the breach of MaliciousCo.

Like Fazio, JanitorTech installed the free version of Malwarebytes (MBAM) on its workstations, and an IT person would run it manually on a system if a user complained about slowness, pop-ups or other issues.  When MaliciousCo sent out its annual survey, the JanitorTech client manager would come to a question that read “Vendor systems use anti-virus software to detect and clean malicious code?” and answer “yes” without hesitation, because she saw the MBAM icon on her computer desktop every day.  MaliciousCo saw nothing particularly concerning in the response; all of JanitorTech’s practices seemed to align well with MaliciousCo policies.  However, there is clearly a disconnect.

What’s worse is that MaliciousCo’s vendor management program seems to be oblivious to the current state of attack techniques. The reliance on anti-virus for preventing malicious code is a good example of that.

So, what should MaliciousCo ask instead? I offer this suggestion:
– Describe the technology and process controls used by the vendor to prevent, block or mitigate malicious code.

I have personally been on both sides of the vendor management questionnaire over the years.  I know well how a vendor will work hard to ‘stretch’ the truth in order to provide the expected answers.  I also know that vendor management organizations are quick to accept answers given without much evidence or inspection.  Finally, I have seen that vendor management questionnaires, and the programs behind them, tend not to get updated to incorporate the latest threats.

This should serve as an opportunity for us to think about our own vendor management programs: how up-to-date they are, and whether there is room for the kind of confusion demonstrated in the JanitorTech example above.