Behavioral Economics Sightings in Information Security

Below is a list of resources I am aware of that explore the intersection of behavioral economics and information security.  If you know of others, please leave a comment.

Website: Applying Behavioral Economics to Harden Cyberspace

Paper: Information Security: Lessons from Behavioural Economics

Paper: Using Behavioural Insights To Improve the Public’s Use of Cyber Security Best Practices

Links: Psychology and Security Resource Page

Book: The Psychology of Information Security

Conference Talks:

Human Nature And Selling Passwords

A new report by SailPoint indicating that one in seven employees would sell company passwords for $150 has garnered a lot of news coverage over the past few days.  The report also finds that 20% of employees share passwords with coworkers.  The report is based on a survey of 1,000 employees from organizations with over 3,000 employees.  It isn’t clear whether the survey was conducted using statistically valid methods, so we must keep in mind the possibility of significant error when evaluating the results.
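
For a rough sense of scale: even if we assume (purely hypothetically) that the survey were a simple random sample, the one-in-seven figure would carry a sampling margin of error of roughly two percentage points, as the quick sketch below shows.  A non-random sample could, of course, be off by considerably more.

    import math

    # Rough margin-of-error sketch, assuming (hypothetically) a simple random
    # sample -- the report does not confirm its methodology.
    n = 1000     # respondents, per the report
    p = 1 / 7    # proportion who said they would sell a password for $150

    # Standard error of a sample proportion and a 95% confidence interval
    se = math.sqrt(p * (1 - p) / n)
    margin = 1.96 * se

    print(f"Estimate: {p:.1%} +/- {margin:.1%}")   # roughly 14.3% +/- 2.2%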

While one in seven seems like an alarming number, what isn’t stated in the report is how many would sell a password for $500 or $1,000.  Not to mention $10,000,000.  The issue here is one of human nature.  Effectively, the report finds that one in seven employees is willing to trade $150 for a spin of a roulette wheel where some spaces result in termination of employment or the end of a career.

Way back in 2004, an unscientific survey found that 70% of those surveyed would trade passwords for a chocolate bar, so this is by no means a new development.

This is the control environment we work in as security practitioners.  The problem here is not one of improper training, but rather one of the limitations of human judgement.

Incentives matter greatly.  Unfortunately for us, the potential negative consequences of violating security policy, putting company information at risk and even being fired are offset by more immediate gratification: $150, or helping a coworker by sharing a password.  We shouldn’t be surprised by this: humans sacrifice long-term well-being for short-term gain all the time, whether by smoking, drinking, eating poorly or not exercising.  Humans know the long-term consequences of these actions, yet generally act against their own long-term best interests for short-term gain.

We, in the information security world, need to be aware of the limitations of human judgement.  Our goal should not be to give employees “enough rope to hang themselves”, but rather to develop control schemes that accommodate the limitations of human judgement.  For this reason, I encourage those in the information security field to become familiar with the emerging studies under the banner of cognitive psychology/behavioral economics.  By better understanding the “irrationalities” in human judgement, we can design better incentive systems and security control schemes.

Human Nature and Cyber Security

This has been a particularly active year for large-scale, public breaches in the news. Next year’s Data Breach Investigations Report from Verizon should provide some context on whether we are experiencing a “shark attack” phenomenon of continued media coverage of each new breach, or whether this really is an exceptional year.

Regardless of whether we are trending above average or not, it’s pretty clear that a lot of companies are experiencing data breaches.

Information security is a series of trade-offs: investment vs. security, ease of use vs. security, operational costs vs. security and so on.  This isn’t a new or revolutionary concept.  Groups like SIRA focus on higher-order efforts to quantify information risk in order to inform security strategy, justify investment in security programs and so on.

At a lower level, making intelligent decisions about the trade-offs in IT systems projects requires a well-informed assessment of the risks involved.  However, experiments in cognitive psychology and behavioral economics consistently demonstrate that humans have a raft of cognitive biases that impact decision making.  For instance, we are generally overconfident in our knowledge and abilities, and we tend to think about likelihood in the context of what we have personally experienced.  Uncertainty, inexperience or ignorance about exactly how IT system security can fail may lead to an improper assessment of risk.  If risks are not clearly understood, decisions made using those assessments will not be as accurate as expected.

Douglas Hubbard writes extensively on the topic of “expert calibration” in his book “How To Measure Anything”.  In that book, calibration involves training experts to more clearly understand and articulate their level of uncertainty when estimating the likelihood or impact of events.  While it doesn’t eliminate error from subjective assessments, Mr. Hubbard claims that it demonstrably improves the estimates provided by calibrated experts.  The calibration process likely makes these experts more aware of their cognitive biases.  Regardless of the exact mechanism, measurably improving the estimates used in decision making is a good thing.
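
To illustrate just the scoring side of that idea (a minimal sketch with hypothetical numbers, not Mr. Hubbard’s training method itself): a well-calibrated expert’s 90% confidence intervals should contain the true value roughly 90% of the time, while an overconfident expert’s intervals will capture it far less often.

    # Minimal calibration-scoring sketch: what fraction of an expert's 90%
    # confidence intervals actually contain the true value?  The estimates
    # below are hypothetical; this shows the scoring idea, not Hubbard's
    # training exercises.
    def calibration_hit_rate(intervals, actuals):
        """intervals: list of (low, high) 90% confidence intervals.
        actuals: the true values observed later."""
        hits = sum(low <= actual <= high
                   for (low, high), actual in zip(intervals, actuals))
        return hits / len(actuals)

    # An overconfident expert gives intervals that are too narrow, so far
    # fewer than 90% of them capture the truth.
    intervals = [(10, 20), (5, 8), (100, 150), (40, 45), (0, 2)]
    actuals   = [25, 6, 130, 60, 1]

    print(f"Hit rate: {calibration_hit_rate(intervals, actuals):.0%}")  # 60%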

Information security could benefit from a similar calibration concept.  Understanding the mechanisms through which IT systems can be breached underpins our ability to make reasonable assessments about the risks and likelihood of a breach in a given environment.

To pick on Target for a minute:

Would a clear understanding of the ways the external vendor application could be exploited have changed the decision to have that server authenticate against the company’s Active Directory system?  An application to coordinate the activities of the myriad vendors a company the size of Target works with is almost certainly a necessity, but would a better understanding of the ways a vendor management server could be exploited have made the case for isolating the application from the rest of the Target network, with the trade-off of higher operational costs?  Clearly, that question can only be answered by those present when the decision was made.

Daniel Kahneman, in his book “Thinking, Fast and Slow”, describes a cognitive bias he calls the availability heuristic. Essentially, this idea posits that people judge concepts and likelihoods based on their ability to recall examples from memory; if something can’t be readily recalled, it is treated as unimportant. Similarly, Thomas Schelling, a Nobel Prize-winning economist, wrote:

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nate Silver’s book “The Signal and the Noise” has an excellent chapter on this concept (Chapter 13).

To become calibrated experts who can clearly assess the security risks arising from systems, the IT industry would seemingly benefit from a broader understanding of the methods used to penetrate systems and networks.  Certainly this will not “solve” the problem of breaches, but it should help us make better-informed decisions regarding IT security trade-offs.

Nor does this mean that organizations will or should always choose the least risky or most secure path.  Businesses deal with risk all the time and often must accept risk in order to move forward.  The point here is that organizations are often not fully cognizant of the risks they accept when making IT decisions, due to human biases, conflicts and ignorance.

A popular blog post by Wendy Nather recently pushed back on the offensive security effort, pointing out that things will not get better if we only keep pointing out what is wrong; rather, the way forward is to start fixing things.  My view is that both the offensive and defensive sides are important to the security ecosystem.  Certainly things will NOT get better until we start fixing them.  However, “we” is a limited population.  To tackle the fundamental problems with security, we need to engage the IT industry – not just the people with “security” in their titles.  And we need those who do have “security” in their titles to be more consistently aware of threats.  Focusing solely on defense, as the blog post urges, will yield some short-term improvements in some organizations.  However, building consistent awareness of IT security risks, particularly in the people responsible for assessing those risks, should help all organizations avoid being surprised when Brian Krebs calls them up with unfortunate news.

It’s All About The Benjamins… Or Why Details Matter

A team at Carnegie Mellon released a report a few weeks back detailing the results of an experiment to determine how many people would run a suspicious application at different levels of compensation.  The report paints a pretty cynical picture of the “average” Internet user, which generally meshes with our intuition.  Basically, the vast majority of participants ran suspicious code for $1, even though they knew it was dangerous to do so.  Worse, a significant number of participants ran the code for the low, low price of one cent.  This seems to paint a pretty dire picture, in the same vein as previous research in which subjects gave up passwords for a candy bar.

However, I noticed a potential problem with the report.  The researchers relied on Amazon’s Mechanical Turk service to find participants for the study.  When performing studies like this, it’s important that the population sampled is representative of the population the study intends to draw conclusions about.  If it is not, the results will be unreliable for estimating against the broader population.

Consider this scenario: I want to estimate the amount of physical activity the average adult in my city gets per day.  So I set up a stand at the entrance to a shopping center that contains a gym and survey those who enter the parking lot.  With this methodology, I will not end up with an average amount of physical activity for the city, because I have skewed the numbers by setting up shop near a gym.  I will only be able to estimate the amount of physical activity for the people who frequent this particular shopping center.
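
To make the skew concrete, here is a toy simulation of that parking-lot survey, using entirely made-up numbers:

    import random

    random.seed(0)

    # Toy sampling-bias simulation with made-up numbers: most residents
    # exercise a little, gym-goers exercise a lot, and the parking-lot
    # survey over-samples gym-goers.
    city = ([random.gauss(0.5, 0.2) for _ in range(9000)] +   # typical residents (hours/day)
            [random.gauss(1.5, 0.3) for _ in range(1000)])    # gym-goers (hours/day)

    true_mean = sum(city) / len(city)

    # The stand near the gym mostly catches gym-goers.
    survey = random.sample(city[9000:], 400) + random.sample(city[:9000], 100)
    survey_mean = sum(survey) / len(survey)

    print(f"City average:   {true_mean:.2f} hours/day")
    print(f"Survey average: {survey_mean:.2f} hours/day")   # noticeably higher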

The researchers cite a previous study which determined that the “workers” of Mechanical Turk are more or less representative of average Internet users, based on a number of demographic dimensions such as age, income and gender.

I contend that this is akin to finding that the kinds of stores in my hypothetical shopping center draw a representative sample of the city as a whole, based on the same demographic dimensions, and in fact seeing that result in my parking lot survey.  However, my results are still unreliable, even though the visitors are, in fact, representative of the city.  Why is that?  Hours of physical activity are (mostly) orthogonal to the demographic dimensions I checked: income, age and gender.

In the same fashion, I contend that while the demographics of Mechanical Turk “workers” match those of the average Internet user, the results are similarly unreliable for estimating against all Internet users.  Mechanical Turk is a concentration of people who are willing to perform small tasks for small amounts of money.  I propose that the findings of the report are only representative of the population of Mechanical Turk users, not of the general population of Internet users.

It seems obvious that the average Internet user would indeed fall victim to this at some price, but we don’t know for sure what percentage and at what price points.

I still find the report fascinating, and it’s clear that someone with malicious intent can go to a marketplace like Mechanical Turk and make some money by issuing “jobs” that run pay-per-install malware.

Game Theory, Behavioral Economics and Anti-Virus

The information security community has lately been lamenting the ineffectiveness of anti-virus.  Report after report indicates that AV catches only between 5% and 55% of malware.  Can any organization justify the cost of such a generally ineffective control?  Even Symantec has stated that the usefulness of AV is waning.

However, when the bill comes for next year’s maintenance on your chosen AV platform, you’re going to pay it, aren’t you?  And so will nearly everyone else.

Why is that?  Behavioral economists have catalogued a number of cognitive biases in human psychology, such as “herd mentality”.  I suspect that we are inclined to “do what everyone else is doing”, which is, indeed, to keep AV around.  Another bias is the “sunk cost fallacy”.  We spent a lot of money deploying AV and have spent a lot of money each year since to keep it fed and cared for.  Abandoning AV would mean turning our back on that investment, even if doing so would save us money now.

I think that there may be an even stronger game-theoretic force at play here.  If I am responsible for security at my organization, I have many factors to consider when prioritizing my spending.  I may fully believe that AV will not provide additional malware protection beyond the other controls in place, and that I could therefore reallocate the savings from dropping AV to some more productive purpose.  However, if there IS a malware incident at my organization and I made the choice not to use AV, even if AV wouldn’t have stopped it, or if the damages suffered were much less than the savings from not using AV, I am probably going to be working on my resume.  Or at least I assume that I will.
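
To illustrate that asymmetry with purely hypothetical numbers, here is a rough expected-value sketch of the same decision viewed from the organization’s perspective and from the decision maker’s personal perspective:

    # All figures below are hypothetical, chosen only to illustrate the
    # incentive asymmetry described above.
    av_cost             = 100_000    # annual AV spend
    p_incident          = 0.05       # chance of a malware incident either way
    incident_cost       = 500_000    # cost of an incident to the organization
    p_blamed_if_dropped = 0.9        # chance the decision maker is blamed if AV was dropped
    career_cost         = 1_000_000  # personal value the decision maker places on the job

    # Organization's expected annual cost, assuming AV adds no protection
    # beyond the other controls in place (the belief described above).
    org_keep = av_cost + p_incident * incident_cost
    org_drop = p_incident * incident_cost

    # Decision maker's personal expected cost.
    me_keep = 0
    me_drop = p_incident * p_blamed_if_dropped * career_cost

    print(f"Organization:   keep={org_keep:,.0f}  drop={org_drop:,.0f}")  # dropping saves money
    print(f"Decision maker: keep={me_keep:,.0f}  drop={me_drop:,.0f}")    # dropping is personally costly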

I suspect a similar dynamic is why we will not see the requirements for AV relaxed in the various security standards and frameworks any time soon.  From the perspective of a standards body, there is only downside in removing the requirement:

  • The AV industry, and probably others with a financial incentive to keep AV in place, may ridicule the standard for not prescribing this mainstay of security controls
  • Organizations following the standard that suffer malware-related losses may point back to the standard and call it ineffective
  • The standards body generally does not bear the costs of including a given control, so removing the AV requirement gains it nothing, since AV does catch some amount of malware, however small


You might be asking: “What exactly are you getting at here?”  I’m not proposing that you, or anyone else, dump AV.  I am proposing that we question why things are being done the way they are. As defenders, we have a limited amount of money and time to spend, and we ought to ensure we are prioritizing our security controls based on their effectiveness at mitigating risk to our systems and data, not just because it’s what everyone else is doing.

I’ll also say that, if we’re not willing to dump AV, we ought to (at least from time to time) steer the discussions and criticisms of AV toward something productive.  For example, if AV is mandatory and not all that effective, we ought to be purchasing the most economical product and saving the difference for other endeavors.  Rather than simply comparing effectiveness rates, we could consider the cost of those effectiveness rates per user.  If I am paying $50/year/user for an AV platform that is 35% effective, it would be good to know that I could pay $25/year/user for one that is 30% effective.  This assumes, of course, that we settle on a standard methodology for rating the effectiveness of AV, which seems like a challenge on its own.
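
To make that comparison concrete, here is a quick sketch using the figures above (the product names and prices are simply the hypothetical ones from the example):

    # Cost per percentage point of effectiveness, per user, using the
    # hypothetical figures from the example above.
    products = {
        "Product A": {"cost_per_user": 50, "effectiveness": 0.35},
        "Product B": {"cost_per_user": 25, "effectiveness": 0.30},
    }

    for name, p in products.items():
        cost_per_point = p["cost_per_user"] / (p["effectiveness"] * 100)
        print(f"{name}: ${cost_per_point:.2f} per user per point of effectiveness")

    # Product A: $1.43 per point; Product B: $0.83 per point.  The cheaper
    # product catches slightly less malware, but at a much lower cost per
    # unit of protection -- assuming, as noted above, an agreed-upon way of
    # measuring effectiveness in the first place.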


Behavioral Economics and Information Security

I recently finished reading Dan Ariely’s “Predictably Irrational” series of books on behavioral economics and the impact of cognitive biases on behavior and decision making.  The lessons from behavioral economics seem, to me at least, to have significant implications for information security, and I was a bit surprised at the apparent lack of study of this linkage.  Maybe it shouldn’t be all that surprising.  One paper I did find, “Information Security: Lessons from Behavioural Economics” by Michelle Baddeley, focuses on the impact of cognitive biases on decisions involving privacy, social media security, and so on.  The point of the paper is to illustrate the need to factor lessons from behavioral economics into the design of security policies and regulations: policies and regulations should recognize the influence of cognitive biases, emotions, limited information, and so on, rather than assuming that people have equal access to the facts and can make economically rational decisions.

There seems to be another important angle to consider: the impact of limited information, cognitive biases and related psychological factors on the decisions made by those of us working to defend organizations.  This is an uncomfortable area to tread.  As security people, we are apt to talk about the foibles of the common user, debating whether we can train users to avoid security pitfalls, or whether that is a lost cause and our only real hope is building systems that don’t rely on people recognizing and avoiding threats.

I spend a lot of time thinking about the causes of breaches, both those I’m involved in investigating and those documented in the media.  I can see indications that the causes of at least some breaches likely stem from the kinds of cognitive problems described by behavioral economics.

For instance, a common error that has contributed to a number of significant breaches is a very basic network architecture mistake: not recognizing that a particular configuration enables a relatively straightforward and quite common method for moving about a network.

The reasons why this happens are fascinating to me.  Clearly, I don’t know with certainty why these errors occurred in most cases, but all of the possible reasons are interesting unto themselves.

At the end of the day, we need to be efficient and effective with our information security programs.  I can look at strategic information security decisions I have made and see the influence of some of the biases plainly described in Mr. Ariely’s research.  I expect this will be the beginning of a series of posts as I delve more deeply into the topic.  In the meantime, I am very curious to hear whether others have already thought about this and what conclusions they might have drawn.

Some recommended reading:

Dan Ariely’s Irrational bundle

Douglas Hubbard’s How To Measure Anything and The Failure of Risk Management