I recently finished reading Dan Ariely’s “Predictably Irrational” and its companion books on behavioral economics and the impact of cognitive biases on behavior and decision making. The lessons from behavioral economics seem, to me at least, to have significant implications for information security, and I was a bit surprised by the apparent lack of study of this linkage. Maybe that shouldn’t be surprising. One paper I did find, “Information Security: Lessons from Behavioural Economics” by Michelle Baddeley, focuses on how cognitive biases affect decisions involving privacy, social media security, and so on. The paper argues that security policies and regulations should be designed with the lessons of behavioral economics in mind, recognizing the influence of cognitive biases, emotions, and limited information, rather than assuming that people have equal access to the facts and make economically rational decisions.
There is another important angle to consider: how limited information, cognitive biases, and related psychological factors affect the decisions of those of us working to defend organizations. This is uncomfortable ground to tread. As security people, we are quick to talk about the foibles of the common user, debating whether we can train users to avoid security pitfalls or whether that is a lost cause and our only real hope is building systems that don’t rely on people recognizing and avoiding threats.
I spend a lot of time thinking about the causes of breaches, both those I am involved in investigating and those documented in the media. In at least some of them, I see indicators that the causes likely stem from the same cognition problems described by behavioral economics.
For instance, a common error behind a number of significant breaches is very basic network architecture: failing to recognize that a particular configuration enables a relatively straightforward and quite common method of moving laterally through a network.
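To make that architectural error concrete, here is a minimal, hypothetical sketch (all host and segment names are invented, and real lateral movement involves credentials and exploits, not just connectivity). It models reachability only: hosts that share a network segment can talk to each other, so a single compromised host on a flat network can reach everything, while segmentation contains it.

```python
from collections import deque

def reachable_hosts(segments, start):
    """Return every host an attacker starting at `start` can reach,
    treating hosts that share a segment as directly reachable."""
    # Build an adjacency map: hosts sharing any segment are neighbors.
    neighbors = {}
    for hosts in segments.values():
        for h in hosts:
            neighbors.setdefault(h, set()).update(x for x in hosts if x != h)
    # Breadth-first search from the initially compromised host.
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbors.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A flat network: one segment containing everything.
flat = {"corp": ["laptop", "db", "domain-controller", "hr-share"]}
# A segmented network: workstations are isolated from the server segment.
segmented = {"workstations": ["laptop"],
             "servers": ["db", "domain-controller", "hr-share"]}

print(sorted(reachable_hosts(flat, "laptop")))       # every host is exposed
print(sorted(reachable_hosts(segmented, "laptop")))  # only the laptop itself
```

The point of the toy model is that the architectural decision, not any single control, determines the blast radius of one compromised endpoint.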
The reasons why this happens fascinate me. Clearly, I can’t know with certainty why it happened in most cases, but the possible reasons are interesting in themselves.
At the end of the day, we need our information security programs to be efficient and effective. Looking back at strategic information security decisions I have made, I can see the influence of biases plainly described in Ariely’s research. I expect this to be the first in a series of posts as I delve more deeply into the topic. In the meantime, I am very curious to hear whether others have already thought about this and what conclusions they have drawn.
Some recommended reading:
Dan Ariely’s Irrational bundle
Douglas Hubbard’s “How to Measure Anything” and “The Failure of Risk Management”