Human Nature And Selling Passwords

A new report by SailPoint indicating that one in seven employees would sell company passwords for $150 has garnered a lot of news coverage over the past few days.  The report also finds that 20% of employees share passwords with coworkers.  The report is based on a survey of 1,000 employees from organizations with over 3,000 employees.  It isn’t clear whether the survey was conducted using statistically valid methods, so we must keep in mind the possibility of significant error when evaluating the results.

While one in seven seems like an alarming number, what isn’t stated in the report is how many would sell a password for $500 or $1,000.  Not to mention $10,000,000.  The issue here is one of human nature.  Effectively, the report finds that one in seven employees is willing to trade $150 for a spin of a roulette wheel where some spaces result in termination of employment or the end of a career.
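To make the gamble concrete, here is a back-of-the-envelope expected-value sketch.  The detection probability and cost figures are illustrative assumptions of mine, not data from the report:

```python
# Back-of-the-envelope expected value of selling a password for $150.
# The probability and cost figures below are illustrative assumptions.
payment = 150.0
p_caught = 0.30            # assumed chance the sale is traced to the employee
cost_if_caught = 50_000.0  # assumed cost of being fired / career damage

expected_value = payment - p_caught * cost_if_caught
print(f"Expected value of selling the password: ${expected_value:,.0f}")
# → Expected value of selling the password: $-14,850
```

Even with generous assumptions, the trade has a steeply negative expected value; the point is that short-term gratification, not arithmetic, drives the decision.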

Way back in 2004, an unscientific survey found that 70% of those surveyed would trade passwords for a chocolate bar, so this is by no means a new development.

This is the control environment we work in as security practitioners.  The problem here is not one of improper training, but rather the limitations of human judgement.

Incentives matter greatly.  Unfortunately for us, the potential negative consequences associated with violating security policy, risking company information and even being fired are offset by more immediate gratification: $150, or helping a coworker by sharing a password.  We shouldn’t be surprised by this: humans sacrifice long term well being for short term gain all the time, whether by smoking, drinking, eating poorly or not exercising.  Humans know the long term consequences of these actions, but generally act against their own long term best interest for short term gain.

We, in the information security world, need to be aware of the limitations of human judgement.  Our goal should not be to give employees “enough rope to hang themselves”, but rather to develop control schemes that accommodate the limitations of human judgement.  For this reason, I encourage those in the information security field to become familiar with the emerging studies under the banner of cognitive psychology and behavioral economics.  By better understanding the “irrationalities” in human judgement, we can design better incentive systems and security control schemes.

Named Vulnerabilities and Dread Risk

In the middle of my 200 mile drive home today, it occurred to me that the reason Heartbleed, Shellshock and Poodle received so much focus and attention, both within the IT community and in the general media, is the same reason that most people fear flying: something that Gerd Gigerenzer calls “dread risk” in his book “Risk Savvy: How to Make Good Decisions”.  The concept is simple: most of us dread the thought of dying in a spectacular terrorist attack or a plane crash, which are actually HIGHLY unlikely to kill us, while we have implicitly accepted the risks of the far more common yet mundane things that almost certainly will kill us: car crashes, heart disease, diabetes and so on.  (At least for those of us in the USA.)

These named “superbugs” seem to have a similar impact on many of us: they are probably not the thing that will get our network compromised or data stolen, yet we talk and fret endlessly about them, while we implicitly accept the things that almost certainly WILL get us compromised: phishing, poorly designed networks, poorly secured systems and data, drive-by downloads, completely off-the-radar and unpatched systems hanging out on our network, and so on.  I know this is a bit of a tortured analogy, but like car crashes, heart disease and diabetes, these vulnerabilities are much harder to fix, because addressing them requires far more fundamental changes to our routines and operations.  Changes that are painful and probably expensive.  So we latch on to these rare, high-profile named-and-logo’d vulnerabilities that show up on the 11 PM news and systematically drive them out of our organizations, feeling a sense of accomplishment once that last system is patched.  The systems that we know about, anyhow.

“But Jerry”, you might be thinking, “all that media focus and attention is the reason that everything was patched so fast and no real damage was done!”  There may be some truth to that, but I am skeptical…

Proof of concept code was available for Heartbleed nearly simultaneously with its disclosure.  Twitter was alight with people posting the contents of memory they had captured in the hours and days that followed.  There was plenty of time for this vulnerability to be weaponized before most vendors even had patches available, let alone before organizations had implemented them.

Similarly, proof of concept code for Shellshock was also available right away.  Shellshock, in my opinion and in the opinion of many others, was FAR more significant than Heartbleed, since it allowed execution of arbitrary commands on the system being attacked, and yet there has been only one reported case of an organization being compromised using Shellshock: BrowserStack.  That attack, by the way, happened against an old dev server that remained unpatched for quite some time after Shellshock was announced.  We anecdotally know that there are other servers out on the Internet that have been impacted by Shellshock, but as far as anyone can tell, these are nearly all abandoned web servers.  These servers appear to have been conscripted into botnets for DDoS purposes.  Not great, but hardly the end of the world.
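For readers who never saw it, the widely circulated benign probe for Shellshock (CVE-2014-6271) can be wrapped in a short script.  This is only the harmless test string that made the rounds at the time, not an exploit, and the helper name is my own:

```python
import subprocess

def bash_is_shellshock_vulnerable() -> bool:
    """Run the well-known CVE-2014-6271 probe against the local bash.

    A vulnerable bash parses the crafted environment variable as a function
    definition and executes the trailing `echo vulnerable`; a patched bash
    ignores it and prints only the probe message.
    """
    result = subprocess.run(
        ["bash", "-c", "echo probe"],
        env={"x": "() { :;}; echo vulnerable", "PATH": "/usr/bin:/bin"},
        capture_output=True,
        text=True,
    )
    return "vulnerable" in result.stdout

print(bash_is_shellshock_vulnerable())  # False on any patched bash
```

The severity is easy to see from the probe itself: merely having an attacker-controlled string land in an environment variable was enough to run arbitrary commands.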

And then there’s Poodle.  I don’t even want to talk about Poodle.  Anyone with the capability to pull off a Poodle attack can certainly achieve the same ends far more easily using more traditional methods, such as pushing client-side malware or phishing pages.

Thoughts On The Benefits Of Cyber Insurance

This post was inspired by a Twitter discussion with Rob Lewis (@Infosec_Tourist).

I recently saw an IT services contract that had a stipulation requiring the service provider to carry a multi-million dollar cyber insurance policy.  I think that’s pretty smart.  More broadly, I see that cyber insurance has the potential to be a maturing and transformational force for information security.  Here’s why…

At its core, information security, for most firms, is a lot like insurance: a necessary expense to address a set of risks.  We read story after story about boards of directors, CEOs and other executives not understanding or not taking information security threats seriously.  There are likely many reasons for this, some of which are no doubt due to the non-deterministic nature of information security: it’s hard to tell when you’ve done enough, and there is always something new to buy or someone new to hire.

Security is generally not a competitive differentiator for most firms, and therefore management naturally desires to minimize those costs in order to spend money on more profitable areas of the business.  Insurance policies are not competitive differentiators either, and few companies like to buy more coverage than necessary.

Insurance companies are in business to make money.  Companies wanting cyber insurance coverage will need to meet certain requirements set by insurers, and different levels of maturity and control will likely command different premiums.

In addition to the obvious benefit of having coverage should something bad happen, with cyber insurance, companies now have a direct, tangible financial linkage between the maturity of their security program and both their eligibility for coverage and the rates they will pay for it.  Additionally, CFOs and other executives responsible for obtaining insurance coverage will receive advice and counsel related to the organization’s information security posture from a “trusted” partner, rather than only from internal security staff or auditors.

The coverage itself is also important for many firms.  As an example, most companies purchase insurance that covers loss due to fire.  A firm’s building may have a fire sprinkler system, but there is still some remaining risk of loss due to a fire and so it is most economical to cover that risk with insurance, rather than hiring teams of firefighters to patrol the halls full time.  The same is going to be true in the cyber world.  Sensible controls should be in place, but we have to be cognizant that it’s not economical, indeed maybe not even possible, to eliminate the risk of loss due to electronic attacks, and so cyber insurance coverage seems like a sensible thing.
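The sprinkler-plus-insurance economics can be sketched with a toy calculation.  Every figure below is an illustrative assumption of mine, not actuarial data:

```python
# Toy comparison of ways to handle residual cyber risk.
# All dollar figures and probabilities are illustrative assumptions.
annual_breach_probability = 0.02     # assumed chance of a major incident per year
loss_per_incident = 4_000_000.0      # assumed cost of such an incident
annual_premium = 100_000.0           # assumed cyber insurance premium
full_mitigation_cost = 500_000.0     # assumed annual cost of near-total mitigation

expected_annual_loss = annual_breach_probability * loss_per_incident
print(f"Expected annual loss (self-insured): ${expected_annual_loss:,.0f}")
print(f"Insurance premium:                   ${annual_premium:,.0f}")
print(f"Near-total in-house mitigation:      ${full_mitigation_cost:,.0f}")
# The premium exceeds the actuarial expected loss (that margin is the
# insurer's business), yet it is far cheaper than trying to eliminate the
# residual risk in-house: the cyber analogue of sprinklers plus a fire policy.
```

Under these assumptions, transferring the residual risk costs a fifth of what driving it to near zero would, which is exactly the firefighters-in-the-halls argument in numbers.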

Organizations certainly should not see cyber insurance as an “easy button” that allows them to pass all or almost all of the risk on to insurers and the costs of the coverage on to customers.  I would assume that insurers will take reasonable steps to ensure that they are not facilitating bad behavior, since that would impact their own business.

Back to the contractual requirement for a cyber insurance policy.  I think it is pretty smart to include such a requirement, since it not only ensures that you, the customer, have some deep pockets to collect from if things go south, but also that the insurance company will essentially be working for you to ensure your vendor is acting responsibly with respect to information security.