An Inconvenient Problem With Phishing Awareness Training

Snapchat recently disclosed that it was the victim of an increasingly common attack where someone in the HR department is  tricked into providing personal details of employees to someone purporting to be the company’s CEO.

In response, the normal calls for “security awareness training!” and “phishing simulations!” are making the rounds.  As I have said, I am in favor of security awareness training and phishing simulation exercises, but I am wary of people or organizations that believe this is a security “control”.

When organizations, information security people and management begin viewing awareness training and phishing simulations as a control, incidents like the one at Snapchat are viewed as a control failure.  Management may ask “did this employee not take the training, or was he just incompetent?”  I understand that your gut reaction may be to think such a reaction would not happen, but let me assure you that it does.  And people get fired for falling for a good phish.  Maybe not everywhere.  Investment in training is often viewed the same as investment in other controls.  When the controls fail, management wants to know who is responsible.

If you ask any phishing education company or read any of their reports, you will notice that there are times of day and days of the week where phishing simulations get more clicks than others, with everything else held constant.  The reason is that people are human.  Despite the best training in the world, factors like stress, impending deadlines, lack of sleep, time awake, hunger, impending vacations and many other factors will increase or decrease the likelihood of someone falling for a phishing email.  Phishing awareness training needs to be considered for what it is: a method to reduce the frequency, in aggregate, of employees falling for phishing attacks.

So, I do think that heads of HR departments everywhere should be having a discussion with their employees on this particular attack.  But, when a story like Snapchat makes news, we should be thinking about prevention strategies beyond just awareness training.  And that is hard because it involves some difficult trade-offs that many organizations don’t want to think about.  Not thinking about them, however, is keeping our heads in the sand.

Cyber Introspection: Look at the Damn Logs

I was talking to my good friend Bob today about whatever came of Dick Cheney’s weather machine when he interrupted with the following question:

“Why, as a community, are we constantly seeking better security technology when we aren’t using what we have?”

Bob conveyed the story of a breach response engagement he worked on for a customer involving a compromised application server.  The application hadn’t been patched in years and had numerous vulnerabilities waiting for anyone with some inclination to exploit them.  And exploited it was.  The server was compromised for months prior to being detected.

The malware dropped on the server for persistence and other activities was indeed sophisticated.  There was no obvious indication that the server had been compromised.  System logs were cleared from the time of the breach and subsequent logs had nothing related to the malicious activity on the system.

A look at the logs from a network IDS sensor which monitors the network connecting the server to the Internet showed nearly no alerts originating from that server until the suspected date of the intrusion, as determined by forensic analysis of the server.  On that day, the IDS engine started triggering many, many alerts as the server was attempting to perform different activities such as scanning other systems on the network.

But no one was watching the IDS alerts.

The discussion at the client quickly turned to new technologies to stop such attacks in the future and to allow fast reaction if another breach were to happen.

But no one talked about more fully leveraging the components already in place, like IDS logs.  IDS is an imperfect system that requires care and feeding (people); clearly an inferior option when compared to installing a fancy new advanced attack detection appliance.

I previously wrote a similar post a while back regarding AV logs.

Why are we so eager to purchase and deploy yet more security solutions, which are undoubtedly imperfect and also undoubtedly require resources to manage, when we are often unable to get full leverage from the tools we already have running?

Maybe we should start by figuring out how to properly implement and manage our existing IT systems, infrastructure and applications.  And watch the damn logs.
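Watching the logs does not have to mean a big program on day one.  As a rough sketch (assuming Snort-style “fast” alert lines; your IDS almost certainly logs in a different format, so adjust the regex), a handful of lines of Python is enough to summarize alerts per source host, so that a server suddenly generating hundreds of alerts stands out:

```python
import re
from collections import Counter

# Matches the source IP in a Snort-style "fast" alert line (an assumption;
# adapt to your IDS), e.g.:
# 02/05-14:03:22.1 [**] [1:100:1] SCAN [**] [Priority: 2] {TCP} 10.1.2.3:44123 -> 10.1.2.7:445
ALERT_RE = re.compile(r"\{(?:TCP|UDP|ICMP)\}\s+(\d+\.\d+\.\d+\.\d+)")

def alerts_per_source(lines):
    """Count alerts by originating IP so sudden spikes stand out."""
    counts = Counter()
    for line in lines:
        m = ALERT_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "02/05-14:03:22.1 [**] [1:100:1] SCAN [**] [Priority: 2] {TCP} 10.1.2.3:44123 -> 10.1.2.7:445",
    "02/05-14:03:23.2 [**] [1:100:1] SCAN [**] [Priority: 2] {TCP} 10.1.2.3:44124 -> 10.1.2.8:445",
]
print(alerts_per_source(sample).most_common())
```

Run something like this daily against the previous day’s alert file and a server that goes from zero alerts to thousands, like the one in Bob’s story, might well have surfaced on day one instead of months later.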

Ideas For Defending Against Watering Hole Attacks For Home Users

In episode 106, we discussed a report detailing an attack that leveraged the Forbes website to direct visitors to an exploit kit and subsequently infect certain designated targets in the defense and financial industries using two zero-day vulnerabilities.  A number of people have asked me for ideas on how to defend against this threat from the perspective of a home user, so I thought it best to write a blog post about it.  Just a heads up: this is aimed at Windows users.

One of the go-to mitigations for defending against drive-by web browser attacks is an ad blocker, like AdBlock Plus.  In the Forbes instance, it isn’t clear whether an ad blocker would have helped, since the malicious content may not have originated from an ad network, and instead was added through a manipulation of the Forbes site itself to include content from the exploit-hosting site.  Targeted watering hole attacks commonly alter the web site itself.  Regardless, recent reports indicate AdBlock Plus accepts payment from ad networks in return for allowing ads through.  I would not consider ad blocking a reasonable protection in any instance.

A more effective, though more painful, avenue is NoScript.  NoScript is a Firefox plugin, however, and I’ve not found plugins that work as well for Chrome, IE or Opera.  With some fiddling, NoScript can provide a reasonable level of protection from general web content threats while mostly keeping your sanity intact.  Mostly.  You will probably not want to install NoScript on your grandparents’ computer.  NoScript can be a blunt instrument, and a user who is not diligent will likely opt to simply turn it off, at which point we are back where we started.

Running Flash and Java is like playing with matches in a bed of dry hay.  NoScript certainly helps, but it’s not a panacea.  For most people, the Java browser plugin should be disabled.  Don’t worry, you can still play Minecraft without the plugin.  By the way, every time you update Java, the plugin is re-installed and re-enabled.  Flash… Well, use NoScript to limit Flash content to the sites where you really need it.

Browsing using a Windows account that does not have administrator rights also mitigates a lot of known browser exploits.  To do this, create a wholly separate user account which does not have administrator rights and use that unprivileged account for general use, logging out or using UAC (entering the username and password of the ID that has administrator rights) to perform tasks that require administrator rights.  It’s important that you use a separate account, even though UAC gives the illusion that administrative operations will always prompt for permission to elevate authority when you are using an account with administrator rights.  UAC was not designed to be a security control point, though.  This might be a hassle that home users will not find palatable or be disciplined enough to stick with, however it is effective at blocking many common attacks.

Next, using Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) will block many exploit attempts, and it is definitely worth installing.  The default policy is pretty effective in the latest versions of EMET.  The configuration can be tweaked to protect other applications not in the default policy, but doing so will require some testing, since some of these protections can cause applications to crash if they were not built with those settings in mind.

Finally, a web filter such as Blue Coat K9 can help prevent surreptitious connections to malicious web servers hosting exploit kits, so long as the site is known to be malicious.

Remarkably, anti-virus didn’t make the list.  Yes, it needs to be installed and kept up to date, but don’t count on it to save you.

One additional thought for those who are really adventurous: install VirtualBox or use HyperV to install Windows or Linux in a virtual machine and use the browser in the virtual machine.  I’ll write a post on the advantages of doing this sometime in the future.

Do you have other recommendations?  Leave a comment!

Day 2: Awareness of Common Attack Patterns When Designing IT Systems

One of the most common traits underlying the worst breaches I’ve seen, and indeed many that are publicly disclosed, is related to external attackers connecting to a server on the organization’s Active Directory domain.

It seems that many an IT architect or Windows administrator is blind to the threat this poses. An application vulnerability, misconfiguration and so on can provide a foothold from which an attacker can essentially take over the entire network.

This is just an example, but it’s a commonly exploited tactic. Staff members performing architecture-type roles really need to have some awareness and understanding of common attacker tactics in order to intelligently weigh design points in an IT system or network.

Shellshock Highlights Difficulty In Determining Exploitability

We are all familiar with the shellshock issue in bash.  We know that it’s exploitable via CGI on web servers, through the DHCP client on Linux systems, and can bypass restrictions in SSH.  Yesterday, a new spate of techniques was discussed pretty widely: through OpenVPN, SIP, and more.

This isn’t necessarily a problem with OpenVPN and SIP.  This is still a problem with bash.  These discoveries should highlight the importance of patching a problem like shellshock quickly, rather than assuming our systems are safe just because we are not running a web server, use static IP addresses and don’t rely on SSH restrictions.  If a system is running OpenVPN, it’s exploitable (if username/password authentication is used).  The broader point, however, is that we could be finding innovative new ways to exploit shellshock through different services for months.

Just patch bash.  And get on with life.

Day 1: The Importance Of Workstation Integrity

We are pretty well aware of the malware risks that our users and family members face from spear phishing, watering holes, exploit kits, tainted downloads and so on.

As IT and security people, most of us like to think of ourselves as immune to these threats – we can spot a phish from a mile away. We would never download anything that would get us compromised. But, the reality is that it does happen. To us.  We don’t even realize that copy of WinRAR was trojaned. And now we are off doing our jobs. With uninvited visitors watching.  It happens.  I’ve been there to clean up the mess afterward and it’s not pretty.

The computers that we use to manage IT systems and applications are some of the most sensitive in the average business.  We ought to consider treating them appropriately.

Here are my recommendations:

  • Perform administrative functions on a PC that is dedicated to the task, not used to browse the Internet, check email or edit documents.
  • Isolate computers used for these administrative functions onto separate networks that have the minimum inbound and outbound access needed.
  • Monitor these computers closely for signs of command and control activity.
  • Consider how to implement similar controls for performing such work from home.

What do you do to protect your IT users?

Post Traumatic Vulnerability Disorder

I’ve talked pretty extensively on the Defensive Security Podcast about the differences between patch management and vulnerability management.  We’ve now had two notable situations within 6 months where a significant portion of our infrastructure estate was vulnerable to a significant threat.  And no patches.  At least for a while.

In both the HeartBleed and ShellShock cases, the vulnerabilities were disclosed suddenly and exploit code was readily available, trivial to use and nearly undetectable (prior to implementing strategies to detect it, at least).

And in both cases we have been stuck wringing our hands waiting for patches for the systems and applications we use.  In the case of Heartbleed, we sometimes waited for weeks.  Or even months.

Circumstances like Heartbleed and ShellShock highlight the disparity between patch management and vulnerability management.  However, being aware of the difference isn’t informative of the actions we can or should take.  I’ve been thinking about our options in this situation.  I see three broad options to choose from:

1. Accept the risk of continuing to operate without patches until patches are available

2. Disconnect or isolate systems that are vulnerable

3. Something in the middle

…and we may choose a combination of these depending on risks and circumstances.  #1 and #2 are self-explanatory and may be preferable in some cases.  I believe that #1 is somewhat perilous, because we may not fully understand the actual likelihood or impact.  However, organizations choose to take risks all the time.

#3 is where many people will want to be.  This is where it helps to have smart people who can help to quickly figure out how the vulnerability works and how your existing security infrastructure can be mobilized to help mitigate the threat.

In the current case of ShellShock, some of the immediate options are to:

1. Implement mod_security rules to block the strings used in the attacks

2. Implement IPS or WAF rules to prevent shell command injection

3. Implement iptables rules to filter out connections containing the strings used in the attacks

4. Increase proactive monitoring, backups and exorcisms on vulnerable systems

…and so on.  So, there ARE indeed things that can be done while waiting for a patch.  But they are most often going to be highly specific to your environment and what defensive capabilities are in place.
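To give a sense of what the string-matching options are looking for: the telltale marker in shellshock exploit attempts is a bash function definition at the start of a header or environment-variable value.  The sketch below expresses that check in Python rather than actual mod_security or iptables syntax; real rules would also need to handle encoding tricks, and attackers can vary the whitespace, so this matches loosely:

```python
import re

# The classic shellshock marker is a bash function definition, "() {",
# at the start of an environment-variable value (e.g. an HTTP header).
# Attackers vary the whitespace, so match loosely. Detection sketch only,
# not a complete filter.
SHELLSHOCK_RE = re.compile(r"\(\s*\)\s*\{")

def looks_like_shellshock(header_value: str) -> bool:
    return bool(SHELLSHOCK_RE.search(header_value))

# A typical malicious User-Agent of the kind seen in the wild:
ua = "() { :;}; /bin/bash -c 'ping attacker.example.com'"
print(looks_like_shellshock(ua))
print(looks_like_shellshock("Mozilla/5.0 (Windows NT 6.1)"))
```

The same pattern is what an iptables string match or a mod_security rule would key on while waiting for the bash patch.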

HeartBleed and ShellShock should show us that we need to ensure we have the talent and capabilities to intelligently and effectively respond to emerging threats such as these, without having to resort to hand wringing while waiting for patches.

Think about what you can do better next time.  Learn from these experiences.  The next one is probably not that far away.

H/T to my partner in crime, Andy Kalat (@lerg) for the title.

Perspective on the Microsoft Weak Password Report

Researchers at Microsoft and Carleton University released a report that has gotten a lot of attention, with media headlines like “Why 123456 is a great password”.

The report is indeed interesting: mathematically modelling the difficulty of remembering complex passwords and optimizing the relationship between expected loss resulting from a breached account and the complexity of passwords.

The net finding is that humans have limitations on how much they can remember, and that is at odds with the current guidance of using a strong, unique password for each account.  The suggestion is that accounts should be grouped by loss characteristics, with those accounts that have the highest loss potential getting the strongest password, and the least important having something like “123456”.

The findings of the report are certainly interesting, however there seem to be a number of practical elements not considered, such as:

  • The paper seems focused on the realm of “personal use” passwords, however many people have to worry about both passwords for personal use and for “work” use.
  • Passwords used for one’s job usually have to be changed every 90 days, and are expected to be among the most secure passwords a person would use.
  • People generally do not invest much intellectual energy into segmenting accounts into high risk/low risk when creating passwords.  Often, password creation is done on the fly and stands in the way of some larger, short term objective, such as ordering flowers or getting logged in to send an urgent email to the boss.
  • The loss potential of a given account is not always obvious.
  • The loss potential of a given account likely does not remain constant over time.
  • There are many different minimum password requirements across different services that probably work against the idea of using simple passwords on less important sites.  For example, I have a financial account that does not permit letters in the password, and I have created accounts on trivial web forums that require at least 8 character passwords, with complexity requirements.

It’s disappointing that password managers were dismissed by the report authors as too risky because they represent a concentration of passwords which, when hosted “in the cloud”, could itself fall victim to password guessing attacks, leading to the loss of all passwords.  Password managers seem to me to be the only viable way of managing the proliferation of passwords many of us need to contend with.  A password manager removes the need to consider the relative importance of a new service and can create random, arbitrarily long and complex passwords on the fly, without any need to worry about trying to remember them – for either important or unimportant accounts.
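The “random, arbitrarily long and complex” part is exactly what password managers automate; as a sketch of the idea, the generation step amounts to a few lines using a cryptographically secure random source (shown here with Python’s secrets module):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a random password from letters, digits and punctuation
    using a cryptographically secure source (the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

A password this long and random never needs to be graded as “important” or “unimportant” – the manager remembers it either way.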

Now, not all password managers are created equal.  We recently saw a flurry of serious issues with online password managers.  Diligence is certainly required when picking a password manager, and that is not a simple task for most people.  However, I would prefer to see a discussion on how to educate people on rating password managers rather than encouraging them to use trivial passwords in certain circumstances.

I don’t mean to be overly critical of the report.  I see some practical use for this research by organizations when considering their password strategies.  Specifically, it’s not reasonable to expect employees to pick strong passwords for a number of business-related accounts and then not write them down, record them somewhere, or create a predictable system of passwords.  It gets worse when those employees are also expected to change their passwords every 90 days and to use different passwords on different systems.  Finally, those same employees also have to remember “strong” passwords for some number of personal accounts, which adds yet more strong passwords to remember.

In short, I think that this report highlights the importance of using password managers, both for business and for personal purposes.  And yes, I am ignoring multi-factor authentication schemes which, if implemented properly, would be a superior solution.

Threat Modeling, Overconfidence and Ignorance

Attackers continue to refine their tools and techniques and barely a day goes by without news of some significant breach.  I’ve noticed a common thread through many breaches in my experience of handling dozens of incidents and researching many more for my podcast: the organization has a fundamental misunderstanding of the risks associated with the technology they have deployed and, more specifically, the way in which they deployed it.

When I think about this problem, I’m reminded of Gene Kranz’s line from Apollo 13: “I don’t care what anything was designed to do.  I care about what it can do.”

My observation is that there is little thought given to how things can go wrong with a particular implementation design.  Standard controls, such as user ID rights, file permissions, and so on, are trusted to keep things secure.  Anti-virus and IPS are layered on as supplementary controls, and systems are segregated onto functional networks with access restrictions, all intended to create defense in depth.  Anyone familiar with the technology at hand and who is moderately bright can cobble together what they believe is a robust design.  And the myriad of security standards will tend to back them up, by checking the boxes:

  • Firewalls in place? Check!
  • Software up-to-date? Check!
  • Anti-Virus installed and kept up-to-date? Check!
  • User ID’s properly managed? Check!
  • Systems segregated onto separate networks as needed? Check!

And so on.  Until one fateful day, someone notices, by accident, a Domain Admin account that shouldn’t be there.  Or a call comes in from the FBI about “suspicious activity” happening on the organization’s network.  Or the Secret Service calls to say that credit cards the organization processed were found on a carder forum.  And it turns out that many of the organization’s servers and applications have been compromised.

In nearly every case, there was a mix of operational and architectural problems that contributed to the breach.  However, the operational issues tend to vary from case to case: maybe it’s poorly written code that allows file uploads, or maybe someone used Password1 as her administrator password, and so on.  But the really serious contributor to the extent of a breach is architectural problems.  These involve things like:

  • A web server on an Internet DMZ making calls to a database server located on an internal network.
  • A domain controller on an Internet DMZ with two-way access to other DCs on other parts of the internal network.
  • Having a mixed Internet/internal server DMZ, where firewall rules govern what is accessible from the Internet.

…And so it goes.  The number of permutations of how technology can be assembled seems nearly infinite.  Without an understanding of how the particular architecture proposed or in place can be leveraged by an attacker, organizations are ignorant of the actual risk they face.

For this reason, I believe it is important that traditional IT architects responsible for developing such environments have at least a conceptual understanding of how technology can be abused by attackers.  Threat modeling is also a valuable activity to uncover potential weaknesses, however doing so still requires people who are knowledgeable about the risks.

I also see some value in establishing common “design patterns”, similar to those seen in programming, but at a much higher level, involving networked systems and applications, where well thought out designs could be a starting point for tweaking, rather than starting from nothing and trying to figure out the pitfalls of the new design along the way.  I suspect that would be difficult at best, given the extreme variability in business needs, technology choices and other constraints.

One Weird Trick To Secure Your PCs

Avecto released a report which analyzed recent Microsoft vulnerabilities and found that 92% of all critical vulnerabilities reported by Microsoft were mitigated when the exploit attempt happened on an account without local administrator permissions. Subsequently, there has been a lot of renewed discussion about removing admin rights as a mitigation against these kinds of vulnerabilities.

Generally, I think it’s a good idea to remove admin rights if possible, but there are a number of items to think about which I discuss below.

First, when a user does not have local administrator rights, a help desk person will generally need to remotely perform software installs or other administrative activities on the user’s behalf. This typically involves a support person logging on to the PC using some manner of privileged domain account which was configured to have local administrator rights on the PCs. Once this happens, a cached copy of the login credentials used by the support staff is saved to the PC, albeit in a hashed form. Should an attacker be able to obtain access to a PC using some form of malware, she may be able to either brute force the password from the hash or use a pass-the-hash attack, either of which would grant the attacker far broader permissions on the victim organization’s network than a standard user ID would. Additionally, an attacker who already has a presence on a PC may use a tool such as mimikatz to directly obtain the plain text password of the administrative account.

You might be thinking: “But if I remove administrator rights, attackers would be very unlikely to gain access to the PC in a manner that allows stealing hashes or running mimikatz, both of which require at least administrator-level access. What gives?”

That is a good question, which dovetails into my second point. The Avecto report covers vulnerabilities which Microsoft deems critical in severity. However, most local privilege escalation vulnerabilities I could find are rated only Important by Microsoft. This means that even if you don’t have administrator rights, I can trick you into running a piece of code of my choosing – delivered through an email attachment, or through a vulnerability in another piece of software like Java, Flash Player or a PDF reader. My code would initially run with your restricted permissions, but it could then leverage a privilege escalation flaw to obtain administrator or SYSTEM privileges. From there, I can steal hashes or run mimikatz. Chaining exploits in attacks is not all that uncommon any longer, and we shouldn’t consider this scenario so unlikely that it isn’t worth our attention.

I’ll also point out that many organizations don’t quickly patch local privilege escalation flaws, because they tend to carry a lower severity rating and they intuitively seem less important to focus on, as compared to other vulnerabilities which are rated critical.

Lastly, many of the recent high profile, pervasive breaches in recent history heavily leveraged Active Directory by means of credential theft and subsequent lateral movement using those stolen credentials. This means that the know-how for navigating Active Directory environments through credential stealing is out there.

Removing administrator rights is generally a prudent thing to do from a security standpoint. A spirited debate has raged for years over whether removing administrator rights costs money, in the form of additional help desk staff who now have to perform activities users used to do themselves, and the related productivity loss for users who now have to call the help desk; or whether it is a net savings because there are fewer malware infections, fewer misconfigurations by users, lower incident response costs, and correspondingly higher user productivity; or whether those factors simply cancel each other out. I can’t add a lot to that debate, as the economics are going to be very specific to each organization considering removing administrator rights.

My recommendations for security precautions to take when implementing a program to remove admin rights are:
1. Prevent Domain Administrator or other accounts with high privileges from logging into PCs. Help desk technicians should be using a purpose-created account which only has local admin rights on PCs, and systems administrators should not be logging in to their own PCs with domain admin rights.
2. Do not disable UAC.
3. Patch local privilege escalation bugs promptly.
4. Use tools like EMET to prevent exploitation of some 0-day privilege escalation vulnerabilities.
5. Disable cached passwords if possible, noting that this isn’t practical in many environments.
6. Use application whitelisting to block tools like mimikatz from running.
7. Follow a security configuration standard like the USGCB.
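Recommendation #1 also deserves a detective control: watch for privileged accounts logging on where they shouldn’t.  The sketch below assumes a simple CSV export of logon events; the column names, the account names and the “WKS-” host naming convention are all placeholders for whatever your environment actually uses:

```python
import csv
import io

# Scan an exported logon log for privileged accounts appearing on ordinary
# workstations. The CSV layout and the account/host names are hypothetical;
# adapt them to however your environment exports Windows logon events.
PRIVILEGED = {"DOMAIN\\Administrator", "DOMAIN\\da-jsmith"}

def privileged_workstation_logons(csv_text):
    findings = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["account"] in PRIVILEGED and row["host"].startswith("WKS-"):
            findings.append((row["account"], row["host"]))
    return findings

sample = """account,host
DOMAIN\\da-jsmith,WKS-0042
DOMAIN\\jsmith,WKS-0042
DOMAIN\\da-jsmith,DC-01
"""
print(privileged_workstation_logons(sample))
```

Any hit from a check like this means either a policy violation or an attacker moving laterally, and both are worth a phone call.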

Please leave a comment below if you disagree or have any other thoughts on what can be done.

H/T @lerg for sending me the story and giving me the idea for this post.