Cyber Security Lessons From Behavioral Economics

In this series, I am exploring the intersection of information security and behavioral economics.  As a long-time information security person who recently began studying behavioral economics, I’ve come to realize that much of traditional information security practice is built on standard economic models.

For example, the Simple Model of Rational Crime (SMORC) has implicitly influenced the creation of security policies and conduct guidelines, as well as much of criminal law.  Simply put, SMORC applies the traditional economic view of human decisions, utility maximization, to crime: a person weighs the benefit of a misdeed against its expected cost, the probability of being caught multiplied by the severity of the punishment, and commits the crime only when the benefit wins out.
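To make the model concrete, here is a minimal sketch of the SMORC calculation in Python.  The function name and the dollar figures are my own illustrative assumptions, not part of the model itself:

```python
# A minimal sketch of the SMORC cost-benefit calculation.
# The function name and the numbers are illustrative assumptions only.

def smorc_expected_value(benefit, p_detection, punishment_cost):
    """Expected utility of a misdeed: the gain from committing it,
    minus the probability of detection times the cost of punishment."""
    return benefit - p_detection * punishment_cost

# An employee weighing a $500 opportunistic theft against a 5% chance
# of being caught and fired from a job worth $80,000 in future earnings:
ev = smorc_expected_value(benefit=500, p_detection=0.05, punishment_cost=80_000)
print(ev)  # -3500.0 -- SMORC predicts this employee stays honest
```

By this logic, raising either the odds of detection or the severity of punishment should be enough to keep everyone honest.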

Within this model, we have four levers to push and pull when it comes to managing employee threats:

  1. The explicitness of the requirements to ensure employees understand their obligations and can’t hide behind ignorance
  2. The severity of punishment, such as getting fired or even sued by the company
  3. Controls that increase the likelihood that misdeeds are detected
  4. Controls that prevent misdeeds from occurring

Many corporate security programs rely heavily on the first three levers, assuming that if people clearly understand what is expected of them, clearly understand the consequences, and have some expectation that they’ll be caught, they will make economically rational choices after weighing the cost-benefit of whatever opportunistic misdeed lies in front of them.  It’s hard to consider the possibility that a sane person would choose to risk a well-paying job for a few hundred dollars, or to cut a corner that saves them a few minutes.  Anyone who would do such a thing must, by definition, not be of sound mind and therefore isn’t really good for the company.  Right?

But this scenario happens all the time.  Our policies and expectations are built on the assumption that people are indeed rational and make rational cost-benefit assessments before taking an action.  A growing body of research shows that people are influenced by a great many things, from their mood to project deadlines to how tired they are.  We don’t like lever number four, because it’s expensive, inconvenient, and, if the first three levers worked as the model predicts, unnecessary.  But we should reconsider.

Dan Ariely’s book “The (Honest) Truth About Dishonesty” details many experiments illustrating that SMORC doesn’t represent the actual behavior of people, and it is well worth a read for anyone responsible for designing security programs or security awareness training.

The takeaway for this post is that relying on employees “to do the right thing” as an integral part of a security program doesn’t make sense given what we know about the human mind.  As mentioned in the previous post, reminders about honesty can help in some cases, but not in all.  The integrity of key processes should not rest solely on policy and employment agreements; the processes themselves should be designed to prevent, or at least quickly detect, employee misdeeds.  Such controls clearly won’t work for all organizations or in all circumstances due to cost constraints, politics, technological limitations and so on, but we need to be clear about what to expect when those controls are absent.  Too many organizations are surprised when an employee violates a policy that was explicit about expectations and explicit about the ramifications of violating it, despite an elaborate security awareness campaign.
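To make that concrete, here is a hypothetical sketch of what “prevent, or at least quickly detect” can look like at the code level: a dual-authorization check (preventive, lever four) paired with audit logging (detective, lever three) on a sensitive payment operation.  The function name, threshold, and logger are my own illustrative assumptions, not a specific product or any particular organization’s process:

```python
# Hypothetical sketch: a preventive control (dual authorization) paired with
# a detective control (audit logging) on a sensitive payment operation.
# All names and thresholds here are illustrative assumptions.
import logging

audit_log = logging.getLogger("audit")
DUAL_APPROVAL_THRESHOLD = 10_000  # payments above this need a second approver

def approve_payment(amount, requester, approver):
    # Preventive: policy alone can't move the money; the code refuses to.
    if requester == approver:
        raise PermissionError("Requester cannot approve their own payment")
    if amount > DUAL_APPROVAL_THRESHOLD and approver is None:
        raise PermissionError("Payments over threshold require a second approver")
    # Detective: every approval is logged for later review, so misuse that
    # slips past the preventive checks is at least quickly visible.
    audit_log.info("payment approved: amount=%s requester=%s approver=%s",
                   amount, requester, approver)
```

The point isn’t the specific check; it’s that the control binds at the moment of action, rather than in a policy document the employee may or may not be weighing rationally.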