Game Theory, Behavioral Economics and Anti-Virus

The information security community has lately been lamenting the ineffectiveness of anti-virus.  Report after report indicates that AV catches only between 5% and 55% of malware.  Can any organization justify the cost of such a generally ineffective control?  Even Symantec has stated that the usefulness of AV is waning.

However, when the bill comes for next year’s maintenance on your chosen AV platform, you’re going to pay it, aren’t you?  And so will nearly everyone else.

Why is that?  Behavioral economists have catalogued a number of cognitive biases in human psychology, such as “herd mentality”.  I suspect that we are inclined to “do what everyone else is doing”, which is indeed to keep AV around.  Another bias is the “sunk cost fallacy”.  We spent a lot of money deploying AV and have spent a lot of money each year since to keep it fed and cared for.  Abandoning AV would mean turning our back on the investment we’ve made, even if it would save us money now.

I think that there may be an even stronger game-theoretic force at play here.  If I am responsible for security at my organization, I have many factors to consider when prioritizing my spending.  I may fully believe that AV will not provide malware protection beyond the other controls in place, and that I could therefore reallocate the savings from dropping AV to some more productive purpose.  However, if there IS an incident involving malware at my organization and I made the choice not to use AV, I am probably going to be working on my resume, even if AV wouldn’t have stopped the incident, or if the damages suffered were much less than the savings.  Or at least I assume that I will.
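
To make that asymmetry concrete, here is a toy payoff model.  Every number in it is a hypothetical assumption chosen for illustration, not a measurement:

```python
# Toy payoff model for the "keep AV vs. drop AV" decision. All figures are
# hypothetical assumptions for illustration only.

p_incident = 0.10        # assumed annual probability of a malware incident
av_cost = 50_000         # assumed annual cost of the AV platform
incident_loss = 100_000  # assumed loss per incident (the same either way,
                         # since we assume AV wouldn't have stopped it)
career_cost = 500_000    # assumed personal cost of being blamed and let go

# Expected annual position of the organization:
org_keep = -av_cost - p_incident * incident_loss  # pay for AV, incidents still occur
org_drop = -p_incident * incident_loss            # save the AV spend
# Dropping AV is better for the organization by exactly av_cost.

# Expected annual position of the security manager:
me_keep = 0.0                        # incidents happen, but "everyone runs AV"
me_drop = -p_incident * career_cost  # any incident gets pinned on my decision

print(f"organization: keep={org_keep:+,.0f}  drop={org_drop:+,.0f}")
print(f"manager:      keep={me_keep:+,.0f}  drop={me_drop:+,.0f}")
# Under these made-up numbers, dropping AV gains the organization $50,000 in
# expectation but costs the manager $50,000 in expectation, so keeping AV is
# the individually rational choice even when it is the wrong one for the firm.
```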

I suspect a similar dynamic explains why we will not see requirements for AV relaxed in various security standards and frameworks any time soon.  From the perspective of a standards body, there is only downside in removing that requirement:

  • The AV industry, and probably others with a financial incentive to keep AV in place, may ridicule the standard for dropping what has long been a mainstay of security controls
  • Organizations following the standard that suffer malware-related losses may point back to the standard and call it ineffective
  • The standards body generally does not itself bear the cost of the controls it prescribes, so from its perspective removing AV makes no sense: AV does catch some amount of malware, however small


You might be asking: “What exactly are you getting at here?”  I’m not proposing that you, or anyone else, dump AV.  I am proposing that we question why things are being done the way they are.  As defenders, we have a limited amount of money and time to spend, and we ought to prioritize our security controls based on their effectiveness at mitigating risk to our systems and data, not just because it’s what everyone else is doing.

I’ll also say that, if we’re not willing to dump AV, we ought to (at least from time to time) steer the discussions and criticisms of AV toward something productive.  For example, if AV is mandatory and not all that effective, we ought to be purchasing the most economical product and saving the difference for other endeavors.  Rather than simply comparing effectiveness rates, we could compare the cost per user of those effectiveness rates.  If I am paying $50/year/user for an AV platform that is 35% effective, it would be good to know that I could instead pay $25/year/user for one that is 30% effective.  This assumes, of course, that we settle on a standard methodology for rating the effectiveness of AV, which seems like a challenge on its own.
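
Here is a minimal sketch of that comparison, using the hypothetical prices and detection rates from the paragraph above (and assuming, per the caveat, that a single effectiveness number even exists):

```python
# Cost per point of effectiveness for two hypothetical AV products, using
# the illustrative figures from the text above.

products = {
    "Product A": {"cost_per_user_year": 50.0, "effectiveness": 0.35},
    "Product B": {"cost_per_user_year": 25.0, "effectiveness": 0.30},
}

for name, p in products.items():
    # Dollars per user per year for each percentage point of malware caught.
    per_point = p["cost_per_user_year"] / (p["effectiveness"] * 100)
    print(f"{name}: ${per_point:.2f}/user/year per point of effectiveness")

# Product A: $1.43 per point; Product B: $0.83 per point. If AV is mandatory
# regardless, the cheaper product delivers nearly the same protection at
# roughly half the cost per point caught.
```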


Thoughts On The Benefits Of Cyber Insurance

This post was inspired by a Twitter discussion with Rob Lewis (@Infosec_Tourist).

I recently saw an IT services contract with a stipulation requiring the service provider to carry a multi-million dollar cyber insurance policy.  I think that’s pretty smart.  More broadly, I see that cyber insurance has the potential to be a maturing and transformational force for information security.  Here’s why…

At its core, information security, for most firms, is a lot like insurance: a necessary expense to address a set of risks.  We read story after story about boards of directors, CEOs, and other executives not understanding or not taking information security threats seriously.  There are likely many reasons for this, some no doubt due to the non-deterministic nature of information security: it’s hard to tell when you’ve done enough, and there is always something new to buy or someone new to hire.

Security is generally not a competitive differentiator, and management therefore naturally wants to minimize those costs in order to spend money on more profitable areas of the business.  Insurance policies are likewise not competitive differentiators, and few companies like to buy more coverage than necessary.

Insurance companies are in business to make money.  Companies wanting cyber insurance coverage will need to meet certain requirements set by insurers, and different levels of maturity and control will likely command different premiums.

In addition to the obvious benefit of having coverage should something bad happen, with cyber insurance companies now have a direct, tangible financial link between the maturity of their security program and both their eligibility for coverage and the premiums they pay.  Additionally, CFOs and other executives responsible for obtaining insurance coverage will receive advice and counsel on the organization’s information security posture from a “trusted” partner, rather than only from internal security staff or auditors.

The coverage itself is also important for many firms.  As an example, most companies purchase insurance that covers loss due to fire.  A firm’s building may have a fire sprinkler system, but some risk of loss due to fire remains, and it is more economical to cover that residual risk with insurance than to hire teams of firefighters to patrol the halls full time.  The same is going to be true in the cyber world.  Sensible controls should be in place, but we have to be cognizant that it’s not economical, and maybe not even possible, to eliminate the risk of loss due to electronic attacks, so cyber insurance coverage seems like a sensible thing.
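
As a rough sketch of that economics, with every figure below a hypothetical assumption, consider three ways of handling the risk left over after sensible controls are in place:

```python
# Handling residual cyber risk: accept it, insure it, or try to engineer it
# away. All numbers are hypothetical assumptions for illustration only.

p_breach = 0.05          # assumed annual probability of a damaging attack
breach_loss = 2_000_000  # assumed loss if one occurs
expected_loss = p_breach * breach_loss  # $100,000/year of residual risk

premium = 120_000        # assumed annual premium; above the expected loss,
                         # since insurers add loading for costs and profit
extra_controls = 400_000 # assumed annual cost of trying to drive the residual
                         # risk to zero (the "full-time firefighters")

print(f"accept the risk:    ${expected_loss:,.0f}/year expected loss")
print(f"buy the policy:     ${premium:,.0f}/year, tail risk transferred")
print(f"eliminate the risk: ${extra_controls:,.0f}/year, if it is even possible")
# The premium exceeds the expected loss, but a risk-averse firm will prefer a
# certain $120,000 to a 5% chance of a $2,000,000 hit, and both options are
# far cheaper than the firefighter-patrol approach.
```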

Organizations certainly should not see cyber insurance as an “easy button” that lets them pass all or almost all of the risk on to insurers and the cost of coverage on to customers.  I would assume that insurers will take reasonable steps to ensure they are not facilitating bad behavior, since that would impact their own business.

Back to the contractual requirement for a cyber insurance policy.  I think it is pretty smart to include such a requirement, since it not only ensures that you, the customer, have some deep pockets to collect from if things go south, but also that the insurance company will essentially be working for you to ensure your vendor is acting responsibly with respect to information security.


Behavioral Economics and Information Security

I recently finished reading Dan Ariely’s “Predictably Irrational” book series about behavioral economics and the impacts of cognitive biases on behavior and decision making.  The lessons of behavioral economics seem, to me at least, to have significant implications for information security, and I was a bit surprised at the apparent lack of study of this linkage.  Maybe it shouldn’t be all that surprising.  One paper I did find, “Information Security: Lessons from Behavioural Economics” by Michelle Baddeley, focuses on the impact of cognitive biases on decisions involving privacy, social media security, and so on.  Its point is that security policies and regulations need to be designed with the lessons of behavioral economics in mind, recognizing the influence of cognitive biases, emotions, and limited information rather than assuming that people have equal access to facts and can make economically rational decisions.

There seems to be another important angle to consider: the impact of limited information, cognitive biases, and associated psychological factors on the decision making of those of us working to defend organizations.  This is an uncomfortable area to tread.  As security people, we are apt to talk about the foibles of the common user, debating whether we can train them to avoid security pitfalls or whether it’s a lost cause and our only real hope is building systems that don’t rely on people recognizing and avoiding threats.

I spend a lot of time thinking about the causes of breaches: both those I’m involved in investigating and those documented in the media.  I can see indications that at least some breaches likely stem from the same kinds of cognitive problems described by behavioral economics.

For instance, a common error that has resulted in a number of significant breaches is very basic network architecture: specifically, not recognizing that a particular configuration enables a relatively straightforward and quite common method of moving about a network.

The reasons why this happens are fascinating to me.  Clearly, I don’t know with certainty why these errors were made in most cases, but all of the possible reasons are interesting in themselves.

At the end of the day, we need to be efficient and effective with our information security programs.  I can look at strategic information security decisions I have made and see the influence of some of the biases plainly described in Ariely’s research.  I expect this will be the beginning of a series of posts as I delve more deeply into the topic.  In the meantime, I am very curious to hear whether others have already thought about this and what conclusions they might have drawn.

Some recommended reading:

Dan Ariely’s Irrational bundle

Douglas Hubbard’s How To Measure Anything and The Failure of Risk Management