Business Economics Of Data Protection

I recently started listening to “The Portable MBA”, which got me reflecting on the business implications of information security. None of what I write below is particularly new or enlightening, but I thought it might spark some interesting discussion and also help sharpen my own thoughts.

Business managers need to take risks. Indeed, the fundamental tenets of being in business require risk taking. Generally, these are financial risks that impact investors and other directly related parties. For instance, hiring or not hiring another worker is a risk, as is buying a new piece of equipment.

Think for a moment about a piece of manufacturing equipment: once purchased and installed, a business manager generally needs to pay to maintain the equipment to keep it functioning.

The manager can, however, cut back on the time and money spent maintaining the equipment. For a while, this decision will improve profits. Eventually, however, the equipment will stop operating as it should, causing reduced production. The attempt to save money through inadequate maintenance ends up hurting both the firm and the manager financially: lower sales, and possibly repair costs greater than the maintenance savings.

This sort of tradeoff is very common in business, and managers are constantly seeking the optimal level of operational overhead: too much wastes money that could be put to more profitable use; too little eventually creates productivity and production problems.
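To make the tradeoff concrete, here is a minimal sketch with entirely made-up numbers (the maintenance budget, the breakdown probabilities and the cost of a breakdown are all assumptions for illustration): as maintenance spending drops, the expected cost of a breakdown eventually outweighs the savings.

```python
# A minimal sketch of the maintenance tradeoff, with made-up numbers:
# cutting maintenance spend raises the chance of a breakdown, and the
# expected cost of downtime eventually swamps the savings.

def expected_annual_cost(maintenance_spend, full_spend=50_000,
                         breakdown_cost=400_000):
    """Expected yearly cost = maintenance + P(breakdown) * cost of breakdown."""
    # Assume breakdown probability climbs from 2% (fully maintained)
    # toward 60% as maintenance spending drops to zero.
    shortfall = 1 - min(maintenance_spend / full_spend, 1.0)
    p_breakdown = 0.02 + 0.58 * shortfall
    return maintenance_spend + p_breakdown * breakdown_cost

for spend in (50_000, 35_000, 20_000, 0):
    print(f"maintenance ${spend:>6,}: expected annual cost "
          f"${expected_annual_cost(spend):>9,.0f}")
```

With these particular (invented) numbers, the fully funded maintenance budget is the cheapest option overall, even though each cut looks like a saving in the short term.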

These decisions by the business manager impact those with a stake in the company, such as investors, bankers, owners, shareholders and even employees. To some extent, customers are affected as well, since such decisions may influence prices or availability.

Data security is an odd case against this background. When it comes to securing customer data, the investment decisions made by a business manager directly impact the customers whose data may be stolen, but only indirectly impact the firm itself. The data may not even belong to the firm’s own customers, but to parties several layers removed.

This seems to present a conflict of interest: what incentive does a manager have to protect customer data? There appear to be a few likely reasons:

  1. Government regulatory actions
  2. Lawsuits from customers or other impacted parties
  3. Reduced revenues due to customer rejection
  4. Sense of responsibility

One might argue that the free market will reward firms that act responsibly and punish those that act irresponsibly. On a sufficiently long timeline, that may happen. Recent events appear to indicate that losing customer data does not cause companies to go out of business, and may not even significantly impact customer demand or loyalty.

An interesting wrinkle is that in the context of information security, the firm that loses the data isn’t the bad actor; the firm is itself a victim.

All of this makes me wonder: is the responsibility for storing sensitive data simply incongruent with the objectives of a profit-driven company?

Is it reasonable to expect such companies to invest in security, including accepting potentially reduced employee productivity, to avoid the possibility of losing sensitive data? Clearly some companies take the responsibility incredibly seriously, but many others do not, and market forces, to date, don’t seem to be punishing the irresponsible parties (much).

Human Nature And Selling Passwords

A new report by Sailpoint indicating that one in seven employees would sell company passwords for $150 has garnered a lot of news coverage over the past few days.  The report also finds that 20% of employees share passwords with coworkers.  The report is based on a survey of 1,000 employees from organizations with over 3,000 employees.  It isn’t clear whether the survey was conducted using statistically valid methods, so we must keep in mind the possibility of significant error when evaluating the results.

While one in seven seems like an alarming number, what isn’t stated in the report is how many would sell a password for $500 or $1,000, not to mention $10,000,000.  The issue here is one of human nature.  Effectively, the report finds that one in seven employees is willing to trade $150 for a spin of a roulette wheel on which some spaces result in termination of employment or the end of a career.
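To put rough numbers on that roulette wheel, here is a small expected-value sketch; the probability of getting caught and the cost of losing a job are purely hypothetical and only meant to illustrate how lopsided the trade is.

```python
# A rough expected-value sketch of the password-for-$150 trade, using
# entirely hypothetical numbers: the bribe is certain, while the downside
# (getting caught and fired) is uncertain but large.

def expected_value(bribe, p_caught, cost_of_losing_job):
    """Expected payoff of selling a password."""
    return bribe - p_caught * cost_of_losing_job

# Assume losing your job costs roughly six months of a $60,000 salary.
for p_caught in (0.01, 0.05, 0.25):
    print(f"P(caught) = {p_caught:.2f}: expected payoff = "
          f"${expected_value(150, p_caught, 30_000):,.0f}")
```

Even at a 1% chance of being caught, the expected payoff is negative, yet one in seven respondents would apparently take the spin anyway.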

Way back in 2004, an unscientific survey found that 70% of those surveyed would trade passwords for a chocolate bar, so this is by no means a new development.

This is the control environment we, as security practitioners, work in.  The problem here is not one of improper training, but rather of the limitations of human judgement.

Incentives matter greatly.  Unfortunately for us, the potential negative consequences of violating security policy, putting company information at risk and even being fired are offset by more immediate gratification: $150, or helping a coworker by sharing a password.  We shouldn’t be surprised by this: humans sacrifice long-term well-being for short-term gain all the time, whether by smoking, drinking, eating poorly, not exercising and so on.  Humans know the long-term consequences of these actions, but generally act against their own long-term best interest for short-term gain.

We, in the information security world, need to be aware of the limitations of human judgement.  Our goal should not be to give employees “enough rope to hang themselves”, but rather to develop control schemes that accommodate the limitations of human judgement.  For this reason, I encourage those in the information security field to become familiar with the emerging studies under the banner of cognitive psychology and behavioral economics.  By better understanding the “irrationalities” in human judgement, we can design better incentive systems and security control schemes.

Human Nature and Cyber Security

This has been a particularly active year for large scale, public breaches in the news. Next year’s Data Breach Investigations Report from Verizon should provide some context on whether we are experiencing a “shark attack” phenomenon of continued media coverage of each new breach, or if this is really an exceptional year.

Regardless of whether we are trending above average or not, it’s pretty clear that a lot of companies are experiencing data breaches.

Information security is a series of trade-offs: investment vs. security, ease of use vs. security, operational costs vs. security and so on.  This isn’t a new or revolutionary concept.  Groups like SIRA focus on higher order efforts to quantify information risk to inform security strategy, justify investment in security programs and so on.

At a lower level, making intelligent decisions on the trade-offs involved in IT systems projects requires a well-informed assessment of the risks involved.  However, experiments in cognitive psychology and behavioral economics consistently demonstrate that humans have a raft of cognitive biases which impact decision making.  For instance, we are generally overconfident in our knowledge and abilities, and we tend to think about likelihood in the context of what we have personally experienced.  Uncertainty, inexperience or ignorance about exactly how IT system security can fail may lead to an improper assessment of risk.  If risks are not clearly understood, decisions made using those assessments will not be as sound as expected.

Douglas Hubbard writes extensively on the topic of “expert calibration” in his book “How To Measure Anything”.  In this book, calibration involves training experts to more clearly understand and articulate their level of uncertainty when making assessments of likelihoods or impacts of events.  While it doesn’t eliminate error from subjective assessments, Mr. Hubbard claims that it demonstrably improves estimates provided by calibrated experts.  This calibration process likely makes these “experts” more aware of their cognitive biases.  Regardless of the exact mechanism, measurably improving estimates used in decision making is a good thing.
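As a rough illustration of the idea (this is not Hubbard’s actual procedure, just a sketch of the scoring concept), a calibration exercise can ask an expert for 90% confidence intervals on a set of questions and then check how often the true values actually land inside those intervals; the questions and answers below are invented.

```python
# Score a set of (invented) 90% confidence intervals: a well-calibrated
# expert should capture the true value about 90% of the time; overconfident
# experts give intervals that are too narrow and score far lower.

intervals = [  # (low estimate, high estimate, true value)
    (10, 50, 42),
    (100, 300, 450),    # miss: interval too narrow
    (1, 5, 3),
    (0, 20, 18),
    (500, 900, 1_200),  # miss
]

hits = sum(low <= truth <= high for low, high, truth in intervals)
print(f"Stated confidence: 90%, actual hit rate: {hits / len(intervals):.0%}")
```

Repeating this kind of exercise, with feedback, is the sort of training that appears to nudge estimators toward intervals that honestly reflect their uncertainty.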

Information security could benefit from a similar calibration concept.  Understanding the mechanisms through which IT systems can be breached underpins our ability to make reasonable assessments about the risks and likelihood of a breach in a given environment.

To pick on Target for a minute:

Would a clear understanding of the mechanisms by which the external vendor application could be compromised have changed the decision to have the server authenticate against the company’s Active Directory system?  An application to coordinate the activities of the myriad vendors a company the size of Target has is almost certainly a necessity, but would a better understanding of the ways that a vendor management server could be exploited have made the case for isolating the application from the rest of the Target network, with the tradeoff of higher operational costs?  Clearly, that question can only be answered by those present when the decision was made.

Daniel Kahneman, in his book “Thinking, Fast and Slow”, describes a cognitive bias he calls the availability heuristic. Essentially, this idea posits that people judge concepts and likelihoods based on their ability to recall something from memory; if something cannot readily be recalled, it is judged to be unimportant. Similarly, Thomas Schelling, a Nobel Prize-winning economist, wrote:

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nate Silver’s book “The Signal and the Noise” has an excellent chapter on this concept (Chapter 13).

To become calibrated experts who can clearly assess security risks arising from systems, the IT industry would seemingly benefit from a broader understanding of the methods used to penetrate systems and networks.  Certainly this will not “solve” the problem of breaches; however, it should help us make better-informed decisions regarding IT security tradeoffs.

Nor does this mean that organizations will or should always choose the least risky or most secure path.  Businesses have to deal with risk all the time and often have to accept risk in order to move forward.  The point here is that organizations are often seemingly not fully cognizant of risks they accept when making IT decisions, due to human biases, conflicts and ignorance.

A popular blog post by Wendy Nather recently pushed back on the offensive security effort, pointing out that things will not get better if we merely continue to point out what is wrong; rather, the way forward is to start fixing things.  My view is that both the offensive and defensive sides are important to the security ecosystem.  Certainly things will NOT get better until we start fixing them.  However, “we” is a limited population.  To tackle the fundamental problems with security, we need to engage the IT industry, not just those people with “security” in their titles.  And we need those who do have “security” in their titles to be more consistently aware of threats.  Focusing solely on defense, as this blog post urges, will yield some short-term improvements in some organizations.  However, building consistent awareness of IT security risks, particularly in those people responsible for assessing such risks, should help all organizations not be surprised when Brian Krebs calls them up with unfortunate news.

It’s All About The Benjamins… Or Why Details Matter

A team at Carnegie Mellon released a report a few weeks back, detailing the results of an experiment to determine how many people would run a suspicious application at different levels of compensation. The report paints a pretty cynical picture of the “average” Internet user, which generally meshes with our intuition.  Basically, the vast majority of participants ran suspicious code for $1, even though they knew it was dangerous to do so.  Worse, a significant number of participants ran the code for the low, low price of one cent.  This seems to paint a pretty dire picture, in the same vein as previous research where subjects gave up passwords for a candy bar.

However, I noticed a potential problem with the report.  The researchers relied on Amazon’s Mechanical Turk service to find participants for this study.  When performing studies like this, it’s important that the population sampled is representative of the population the study intends to characterize.  If the population sampled is not representative of the broader population, the results will be unreliable for drawing conclusions about that broader population.

Consider this scenario: I want to estimate the amount of physical activity the average adult in my city gets per day.  So, I set up a stand at the entrance to a shopping center where there is a gym and survey those who enter the parking lot.  With this methodology, I will not end up with an average amount of physical activity for the city, because I have skewed the numbers by setting up shop near a gym.  I will only be able to estimate the amount of physical activity for those people who frequent this particular shopping center.

The researchers cite a previous study which determined that the “workers” of Mechanical Turk are more or less representative of the average users of the Internet at large based on a number of demographic dimensions, like age, income and gender.

I contend that this is akin to finding that the kinds of stores in my hypothetical shopping center draw a representative sample of the city as a whole, based on the same demographic dimensions, and in fact I see that result in my parking-lot survey.  However, my results are still unreliable, even though the visitors are, in fact, demographically representative of the city.  Why is that?  Hours of physical activity are (mostly) orthogonal to the demographic dimensions I checked: income, age and gender.
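A toy simulation makes the point: the sample below matches the city on a demographic dimension (age) yet still wildly overestimates average daily activity, because setting up near the gym selects on the very variable being measured. All of the distributions are invented purely for illustration.

```python
# Simulate a city and a parking-lot sample taken next to a gym: ages are
# drawn from the same distribution (demographically "representative"),
# but gym-goers exercise far more, so the sample's activity estimate is biased.
import random

random.seed(1)

def person(near_gym):
    age = random.gauss(40, 12)                   # same age mix either way
    activity = max(random.gauss(20, 10) + (40 if near_gym else 0), 0)
    return age, activity                         # (years, minutes per day)

city = [person(near_gym=False) for _ in range(10_000)]
parking_lot = [person(near_gym=True) for _ in range(500)]

def avg(values):
    return sum(values) / len(values)

print(f"avg age:      city {avg([a for a, _ in city]):.1f}, "
      f"sample {avg([a for a, _ in parking_lot]):.1f}")
print(f"avg activity: city {avg([m for _, m in city]):.1f}, "
      f"sample {avg([m for _, m in parking_lot]):.1f}")
```

The demographics check passes while the estimate of interest is way off, which is exactly the objection to leaning on Mechanical Turk demographics here.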

In the same fashion, I contend that while the demographics of Mechanical Turk “workers” match those of the average Internet user, the results are similarly unreliable for estimating all Internet users.  Mechanical Turk is a concentration of people who are willing to perform small tasks for small amounts of money.  I propose that the findings of the report are only representative of the population of Mechanical Turk users, not of the general population of Internet users.

It seems obvious that the average Internet user would indeed fall victim to this at some price, but we don’t know for sure what percentage and at what price points.

I still find the report fascinating, and it’s clear that someone with malicious intent can go to a marketplace like Mechanical Turk and make some money by issuing “jobs” to run pay-per-install malware.

Game Theory, Behavioral Economics and Anti-Virus

The information security community has been lamenting the ineffectiveness of anti-virus lately.  Report after report indicates that AV catches only between 5% and 55% of malware.  Can any organization justify the cost of such a generally ineffective control?  Even Symantec has stated that the usefulness of AV is waning.

However, when the bill comes for next year’s maintenance on your chosen AV platform, you’re going to pay it, aren’t you?  And so will nearly everyone else.

Why is that?  Behavioral economists have catalogued a number of cognitive biases in human psychology, such as “herd mentality”.  I suspect that we are inclined to “do what everyone else is doing”, which is indeed to keep AV around.  Another bias is the “sunk cost fallacy”: we spent a lot of money deploying AV and have spent a lot of money each year since to keep it fed and cared for.  Abandoning AV would mean turning our back on that investment, even if doing so would save us money now.

I think there may be an even stronger game-theoretic force at play here.  If I am responsible for security at my organization, I have many factors to consider when prioritizing my spending.  I may fully believe that AV will not provide additional malware protection beyond the other controls in place, and that I could therefore reallocate the savings from not using AV to some more productive purpose.  However, if there IS an incident involving malware at my organization and I made the choice not to use AV, I am probably going to be working on my resume, even if AV wouldn’t have stopped the incident or the damages suffered were much less than the savings from not using AV.  Or at least I assume that I will.

I suspect this is a similar reason why we will not see requirements for AV relaxed in various security standards and frameworks any time soon.  From the perspective of a standards body, there is only downside in removing that requirement:

  • The AV industry, and probably others, may ridicule the standard for not prescribing a mainstay security control, one that the industry has a financial incentive to keep in place
  • Organizations that follow the standard and suffer malware-related losses may point back to the standard and call it ineffective
  • The standards body generally incurs no cost from including a given control, so from its perspective removing the AV requirement makes little sense: AV does catch some amount of malware, however small

You might be asking: “what exactly are you getting at here?”  I’m not proposing that you, or anyone else, dump AV.  I am proposing that we question why things are being done the way they are.  As defenders, we have a limited amount of money and time to spend, and we ought to ensure we are prioritizing our security controls based on their effectiveness at mitigating risk to our systems and data, not just because it’s what everyone else is doing.

I’ll also say that, if we’re not willing to dump AV, we ought to (at least from time to time) change the nature of the discussions and criticisms of AV into something productive.  For example, if AV is mandatory and it’s not all that effective, we ought to be purchasing the most economical product to save money for other endeavors.  Rather than simply comparing effectiveness rates, we could consider cost per user relative to effectiveness.  If I am paying $50/year/user for an AV platform that is 35% effective, it would be good to know that I could pay $25/year/user for one that is 30% effective.  This assumes, of course, that we settle on a standard methodology for rating the effectiveness of AV, which seems like a challenge on its own.
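As a quick sketch of that comparison, using only the hypothetical prices and detection rates above, even a crude cost-per-point-of-effectiveness figure is enough to frame the purchasing question:

```python
# Compare two hypothetical AV products by annual cost per user per
# percentage point of malware caught (a crude but useful first-pass metric).

products = {
    "Product A": {"cost_per_user": 50, "effectiveness": 0.35},
    "Product B": {"cost_per_user": 25, "effectiveness": 0.30},
}

for name, p in products.items():
    cost_per_point = p["cost_per_user"] / (p["effectiveness"] * 100)
    print(f"{name}: ${p['cost_per_user']}/user/year at "
          f"{p['effectiveness']:.0%} effectiveness -> "
          f"${cost_per_point:.2f} per point")
```

Of course, as noted above, this only works if the effectiveness numbers come from some agreed-upon rating methodology, which is itself an unsolved problem.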